Is it possible to change user name? 2020-06-26T11:09:59.780Z
I find this forum increasingly difficult to navigate 2019-07-05T10:27:32.975Z
Founders Pledge is seeking a Community Manager 2018-03-07T17:23:56.094Z
Founders Pledge hiring a Personal & Development Assistant in the US 2018-02-06T19:14:13.419Z
The almighty Hive will 2018-01-28T17:59:07.040Z
Against neglectedness 2017-11-01T23:09:04.526Z
Job: Country Manager needed for Germany at Founders Pledge 2017-04-26T14:26:12.764Z


Comment by arepo on Big List of Cause Candidates · 2020-12-31T15:27:49.208Z · EA · GW

Can you spell both of these points out for me? Maybe I'm looking in the wrong place, but I don't see anything in that tag description that recommends criteria for cause candidates.

As for Scott's post, I don't see anything more than a superficial analogy. His argument is something like 'the weight by which we improve our estimation of someone for their having a great idea should be much greater than the weight by which we downgrade our estimation of them for having a stupid idea'. Whether or not one agrees with this, what does it have to do with including on this list an expensive luxury that seemingly no-one has argued for on (effective) altruistic grounds?

Comment by arepo on Big List of Cause Candidates · 2020-12-30T10:10:56.027Z · EA · GW

Write a post on which aspect? You mean basically fleshing out the whole comment?

Comment by arepo on Big List of Cause Candidates · 2020-12-30T10:10:05.033Z · EA · GW

One other cause-enabler I'd love to see more research on is donating to (presumably early-stage) for-profits. For all that for-profits have better incentives, it's still a very noisy space with plenty of remaining perverse incentives, so supporting those doing worse than they merit seems like it could be high value.

It might be possible to team up with some VCs on this, to see if any of them have a category of companies they like but won't invest in: perhaps because of a surprising lack of traction, perhaps because of predatory pricing by companies with worse products/ethics, perhaps some other unmerited headwind.

Comment by arepo on Big List of Cause Candidates · 2020-12-30T09:59:23.776Z · EA · GW

Then I would suggest being clearer about what it's comprehensive of, ie by having clear criteria for inclusion.

Comment by arepo on Big List of Cause Candidates · 2020-12-30T00:05:45.156Z · EA · GW

I would like to see more about 'minor' GCRs and our chance of actually becoming an interstellar civilisation given various forms of backslide. In practice, the EA movement seems to treat the probability as 1 - an attitude visible in this very post.

I don't think this is remotely justified. The arguments I've seen are generally of the form 'we'll still be able to salvage enough resources to theoretically recreate any given technology', which doesn't mean we can get anywhere near the economies of scale needed to create global industry on today's scale, let alone that we actually will given realistic political development. And the industry would need to reach the point where we're a reliably spacefaring civilisation, well beyond today's technology, in order to avoid the usual definition of an existential catastrophe (drastic curtailment of life's potential).

If the chance of recovery from any given backslide is 99%, then that's only two orders of magnitude between its expected badness and the badness of outright extinction, even ignoring other negative effects. And given the uncertainty around various GCRs, a couple of orders of magnitude isn't that big a deal (Toby Ord's The Precipice puts an order of magnitude or two between the probability of many of the existential risks we're typically concerned with).
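The 'two orders of magnitude' claim is just arithmetic, but to make it explicit (a minimal sketch with illustrative numbers, normalising extinction's badness to 1):

```python
import math

EXTINCTION_BADNESS = 1.0  # normalise outright extinction to badness 1
p_recovery = 0.99         # illustrative 99% chance of recovery from a backslide

# Counting only the non-recovery branch, a backslide's expected badness
# is the extinction badness scaled by the probability of never recovering
expected_badness = (1 - p_recovery) * EXTINCTION_BADNESS

# Gap between backslide and extinction, in orders of magnitude
gap = math.log10(EXTINCTION_BADNESS / expected_badness)
print(round(gap, 3))  # 2.0
```

A 99.9% recovery chance would widen the gap to three orders of magnitude, which is why the recovery probability matters so much here.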

Things I would like to see more discussion of in this area:

  • General principles for assessing the probability of reaching interstellar travel given specific backslide parameters and then, with reference to this:
  • Kessler syndrome
  • Solar storm disruption
  • CO2 emissions from fossil fuels and other climate change rendering the atmosphere unbreathable (this would be a good old-fashioned X-risk, but seems like one that no-one has discussed - in Toby's book he details some extreme scenarios where a lot of CO2 could be released that wouldn't necessarily cause human extinction via global warming, but some of my back-of-the-envelope maths based on his figures seemed consistent with this scenario)
  • CO2 emissions from fossil fuels and other climate change substantially reducing IQs
  • Various 'normal' concerns: antibiotic resistant bacteria; peak oil; peak phosphorus; substantial agricultural collapse; moderate climate change; major wars; reverse Flynn effect; supporting interplanetary colonisation; zombie apocalypse
  • Other concerns that I don't know of, or that no-one has yet thought of, that might otherwise be dismissed by zealous X-riskers as 'not a big deal'
Comment by arepo on Big List of Cause Candidates · 2020-12-29T23:39:28.703Z · EA · GW

I wish we could finally strike off cryonics from the list. The most popular answers in the linked 'Is there a hedonistic utilitarian case for Cryonics? (Discuss)' essay seem to be essentially 'no'.

The claim that 'it might also divert money from wealthy people who would otherwise spend it on more selfish things' gives no reason to suppose that spending money on yourself in this context is somehow unselfish. 

As for 'Further, cryonics might help people take long-term risks more seriously': sure. So might giving people better health, or, say, funding long-term risk outreach. At least as plausibly to me, constantly telling people that they don't fear death enough and should sign up for cryonics seems likely to make people fear death more, which seems like a pretty miserable thing to inflict on them.

I just don't see any positive case for this to be on the list. It seems to be a vestige of a cultural habit among Less Wrongers that has no place in the EA world.

Comment by arepo on Is it possible to change user name? · 2020-06-27T15:18:34.277Z · EA · GW

Is that an intentional policy, or just a feature that hasn't been implemented yet?

If intentional, could you say why? Obviously it could be confusing, but there are some substantial downsides to preventing it.

Comment by arepo on 80,000 Hours: Anonymous contributors on flaws of the EA community · 2020-06-26T11:15:06.846Z · EA · GW

I'm not sure how public the hiring methodology is, but if it's fully public then I'd expect the candidates to be 'lost' before the point of sending in a CV.

If it's less public that would be less likely, though perhaps the best candidates (assuming they consider applying for jobs at all, and aren't always just headhunted) would only apply to jobs that had a transparent methodology that revealed a short hiring process.

Comment by arepo on Forum update: Tags are live! Go use them! · 2020-06-02T15:21:43.013Z · EA · GW

I think this will make the forum far more useful. Could you add some kind of taglist (or prominent link to one) to the home page?

Comment by arepo on Tips for overcoming low back pain · 2020-03-26T09:33:29.474Z · EA · GW

I wonder if there's a case for carrying heavier loads on your front if you can't easily use hands only. It seems counterintuitive, since that would pull you forward into a hunch, but maybe what matters would be working your posterior chain rather than the actual posture it temporarily puts you in.

Comment by arepo on What are the best arguments for an exclusively hedonistic view of value? · 2020-03-10T10:56:25.522Z · EA · GW

I've got a very slowly in-progress multipart essay attempting to definitively answer this question without resort to (what we normally mean by) intuition:

Comment by arepo on 80,000 Hours: Anonymous contributors on flaws of the EA community · 2020-03-07T10:52:20.491Z · EA · GW

Kudos to 80K for both asking and publishing this. I think I literally agree with every single one of these (quite strongly with most). In particular, the hiring practices criticism - I think there was a tendency, especially with early EA orgs, to hire for EAness first and competence/experience second, and that this has led to a sort of hiring-practice lock-in where they still value those characteristics, if not to the same degree then with a greater bias than a lean, efficiency-minded org should have.

A related concern is overinterviewing - I read somewhere (unfortunately I can't remember the source) the claim that the longer and more thorough your interview process, the more you select for people with the willingness and lack of competition for their time to go through all those steps.

This (if I'm right) would have the quadruple effect of wasting EAs' time (which you'd hope would be counterfactually valuable), wasting the organisations' time (ditto), potentially reducing the fidelity of the hiring process, and increasing the aforementioned bias towards willingness.

Comment by arepo on I find this forum increasingly difficult to navigate · 2019-07-05T23:11:45.024Z · EA · GW

> Re: searching for great posts, there is also an archive page where you can order by top and other things in the gear menu.

Ok, that's quite a lot more helpful than I'd realised - why not make it more prominent though? I didn't see these options even when actively looking for them, and even knowing they're there, unless I deep link to the page as someone above suggested, it's several clicks to reach where I want to be. Though (more on this below), the 'top' option is the only one I can see myself ever using.

> Can you say more about how you used the old forum? I’m hearing something like “A couple of times per year I’d look at the top-posts list and read new things there”. (I infer a couple of times per year because once you’ve done it once or twice I’d guess you’ve read all the top posts.) I think that’s still very doable using the archive feature.

I mainly used the 'top posts in <various time periods>' option (typically the 1 or 3 month options, IIRC); median time between visits was probably something like 1-3 months, so that fit pretty well. That said, even on the old forum I strongly wished for a way to filter by subject. Honestly, my favourite forums for UX were probably the old phpBB-style ones, where you'd have forums devoted to arbitrarily many subtopics. I don't think they're anywhere near the pinnacle of forum design, but 'subtopic' is such an important divider that I feel much less clear on how I can get value from a forum without it (which is part of why I've never spent a huge amount of time on the EA forums - though a bigger part is just not having much time to spare).

To a lesser degree, I also found the metadata on who'd been active recently useful: it let me pseudo-follow certain users (though I suspect an actual follow function would be more helpful).

> Am also surprised that you lose posts. My sense is that for a post to leave the frontpage takes a couple of days to a week. Do you keep tabs open that long? Or are you finding the posts somewhere else?

Often a friend would link me to a post that had already been around for a week or two when I read it.

Comment by arepo on I find this forum increasingly difficult to navigate · 2019-07-05T22:54:55.653Z · EA · GW

> My impression, incidentally, is that the search functionality is decidedly better than it was on the old forum: the search results seem to be more related to what I'm looking for, and be easier to sort through (eg separating 'comments' and 'posts')

For what it's worth, my main concerns are the visual navigation (esp filtering and sorting) rather than a search feature - the latter I find Google invariably better for, as long as you can persuade the bots to index frequently.

Comment by arepo on I find this forum increasingly difficult to navigate · 2019-07-05T11:23:58.747Z · EA · GW

(also worth noting that for me it'd be really helpful to have a user-categorisation or tagging system, so we could easily filter by subject matter. Even just old-school subforums would be swell, but the ideal might be allowing non-authors to tag posts as well)

Comment by arepo on What's the best structure for optimal allocation of EA capital? · 2019-06-05T18:58:24.893Z · EA · GW

A less drastic option would be for OpenPhil to just hire more research staff. I think there's some argument for this given that they're apparently struggling to find ways to distribute their money:

1) a new researcher doesn't need to be as valuable as Holden to have positive EV against the counterfactual of the money sitting around waiting for Holden to find somewhere to donate it to in 5 years

2) the more researchers are hired, even (/especially) when they're ones Holden doesn't agree with, the more they guard against the risk of any blind spots/particular passions etc of Holden's coming to dominate and causing missed opportunities, since ultimately, as far as I can tell, there aren't really any feedback mechanisms on the grants he ends up making stronger than internal peer review.

(I wouldn't argue strongly for this, but I haven't seen a counterpoint to these arguments that I find compelling)

Comment by arepo on Aging research and population ethics · 2019-04-28T16:01:53.159Z · EA · GW

> The PA view doesn't need to assign disvalue to death to make increasing lifespans valuable. It just needs to assign to death a smaller value than being alive.

It depends how you interpret PA. I don't think there is a standard view - it could be 'maximise the aggregate lifetime utility of everyone currently existing', in which case what you say would be true, or 'maximise the happiness of everyone currently existing while they continue to do so', which I think would turn out to be a form of averaging utilitarianism, and on which what you say would be false.

> If we make LEV nearer we don't increase the distress anti-aging therapies will cause to people at first. We just anticipate the distress.

Yes, but this was a comment about the desirability of public advocacy of longevity therapies rather than the desirability of longevity therapies themselves. It's quite plausible that the latter is desirable and the former undesirable - perhaps enough so to outweigh the latter.

> This doesn't matter though, since, as I wrote, impact under the neutral view is actually bigger.

Your argument was that it's bigger subject to its not reducing the birthrate and to adding net population in the near future being good in the long run. Both are claims for which I think there's a reasonable case; neither is a claim that seems to have .75 probability (I would go lower for at least the second one, but YMMV). With a .44+ probability that at least one assumption is false, I think it matters a lot.
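The .44+ figure follows from treating the two claims as independent, each at probability .75 (a sketch; the .75 values are the ones discussed above, and independence is my simplifying assumption):

```python
p_birthrate = 0.75    # P(therapies don't reduce the birthrate) -- illustrative
p_population = 0.75   # P(adding near-term population is good long-run) -- illustrative

# Assuming independence, the chance that both assumptions hold...
p_both_hold = p_birthrate * p_population   # 0.5625

# ...so the chance that at least one is false:
p_some_false = 1 - p_both_hold
print(p_some_false)  # 0.4375
```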

> Financing aging research has only the effect of hastening it, so moving the date of LEV closer. The ripple effect that defeating aging would cause on the far future would remain the same. People living 5000 years from now wouldn't care if we hit LEV now or in 2040. So this isn't even a measure of impact.

Again this is totally wrong. Technologies don't just come along and make some predetermined set of changes then leave the world otherwise unchanged - they have hugely divergent effects based on the culture of the time and countless other factors. You might as well argue that if humanity hadn't developed the atomic bomb until last year, the world would look identical to today's except that Japan would have two fewer cities (and that in a few years, after they'd been rebuilt, it would look identical again).

> Also, my next post is exactly on the shorter term impact. I think it'll be published in a couple of weeks. It will cover DALYs averted at the end of life, impact on life satisfaction, the economic and societal benefits, impact on non-human animals.

Looking forward to it :)

Comment by arepo on Aging research and population ethics · 2019-04-28T09:32:31.793Z · EA · GW

I think it's an interesting cause area (upvoted for investigating something new), though I have three important quibbles with this analysis (in ascending order of importance):

1) The person-affecting (PA) view doesn't make this a slam-dunk. PAness doesn't signify that death in itself has negative value, so given your assumption 'that there isn't suffering at the end of life and people get replaced immediately', on the base PA view, increasing lifespans wouldn't in itself generate value. No doubt there are flavours of PA that would claim death *does* have disvalue, but those would need to be argued for separately.

Obviously there often *is* profound suffering at the end of life, which IMO is a much stronger argument for longevity research - on both PA and totalising views. Though I would also be very wary of writing articles arguing on those grounds, since most people very sensibly try to come to terms with the process of ageing to reduce its subjective harm to them, and undoing that for the sake of moving LEV forward a few years might cause more psychological harm than it prevented.

2) My impression is that the PA view is held by a fairly small minority of EAs and consequentialist moral philosophers (for advocates of nonconsequentialist moral views, I'm not sure the question would even make sense - and it would make a lot less sense to argue for longevity research based on its consequences), and if so, treating it as having equal evidential weight as totalising views is misleading.

It's obviously too large a topic to give much of an inside view on here, but if your view of ethics is basically monist (as opposed to dualist - ie queer-sort-of-moral-fact-ist) I don't think there's any convincing way you could map real-world processes onto a PA view, such that the PA view would make any sense. There's too much vagueness about what would qualify as the 'same' or a 'different' person, and no scientific basis for drawing lines in one place rather than another (and hence, none for drawing any lines at all).

3) 'Reminder: most of the impact of aging research comes from making the date of LEV come closer and saving the people who wouldn't otherwise have hit LEV.'

This is almost entirely wrong. Unless we a) wipe ourselves out shortly after hitting it (which would be an odd notion of longevity), or b) reach it within the lifespans of most existing people *and* take a death-averse PA view, the vast majority of LEV's impact will come from its ripple effect on the far future, and the vast majority of its expected impact will be our best guess as to that.

EAs tend to give near-term poverty/animal welfare causes a pass on that estimation, perhaps due to some PA intuitions, perhaps because they're doing good and (almost) immediate work, which if nothing else gives them a good baseline for comparison, perhaps because the immediate measurable value might be as good a proxy as any for far-future expectation in the absence of good alternative ways to think about the latter (and plenty of people would argue that these are all wrong, and hence that we should focus more directly on the far future. But I doubt many of the people who disagree with *them* would claim on reflection that 'most of the impact of poverty reduction comes from the individuals you've pulled out of poverty').

Longevity research doesn't really share these properties, though, and certainly doesn't have them to the same degree, so it's unlikely to have the same intuitive appeal, in which case it's hard to argue that it *should*. Figuring out the short-term effects is probably the best first step towards doing this, but we shouldn't confuse it with the end goal.

Comment by arepo on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-04T23:38:25.863Z · EA · GW

> the focus on low rent, which seems like a popular meme among average and below average EAs in the bay area, yet the EAs whose judgment I most respect act as if rent is a relatively small issue.

This seems very wrong to me. I work at Founders Pledge in London, and I doubt a single one of the staff there would disagree with a proposition like 'the magnitude of London rents has a profound effect on my lifestyle'.

They also pay substantially closer to market rate salaries now than they did for the first 2-3 years of existence, during which people no doubt would have been far more sympathetic to the claim.

Comment by arepo on Why is the EA Hotel having trouble fundraising? · 2019-03-27T23:02:34.617Z · EA · GW

A couple of thoughts I'd add (as another trustee):

3. Demand for the hotel has been increasing more or less linearly (until we hit current funding difficulties). As long as that continues, the projects will tend to get better.

This seems like a standard trajectory for meta-charities: for eg I doubt 80k's early career shifts looked anywhere near as high value as the average one does now. I should know - I *was* one of them, back when their 'career consultation' was 'speculating in a pub about earning to give' (and I was a far worse prospect than any 80k advisee or hotel resident today!)

Meanwhile it's easy to scorn such projects as novel-writing, but have we forgotten this? For better or worse, if Eliezer hadn't written that book the rationality and EA communities would look very different now.

6. This might be true as a psychological explanation, but, ceteris paribus, it's actually a reason *to* donate, since it (by definition) makes the hotel a more neglected cause.

Comment by arepo on EA Hotel with free accommodation and board for two years · 2019-03-19T23:06:08.398Z · EA · GW

I would be wary of equivocating different forms of 'inconvenience'. There are at least three being alluded to here:

1) Fighting the akrasia of craving animal products

2) The hassle of finding vegan premade food (else of having to prepare meals for yourself)

3) Reduced productivity gains from missing certain nutrients (else of having to carefully supplement constantly)

Of these, the first is basically irrelevant in the hotel - you can remove it as a factor just by not giving people the easy option to ingest animal products. The second is completely irrelevant, since the hotel is serving or supplying 90% of the food people will be eating.

So that only leaves the third, which is much talked about but, so far as I know, little studied, so this 'inconvenience' could even have the wrong sign: the only study on the subject I found from a very quick search showed increased productivity from veganism for health reasons; also, on certain models of willpower that treat it as analogous to a muscle, it could turn out that by depriving yourself (even by default, from the absence of offered foods) you improve your willpower and thus become more productive.

I've spoken to a number of people who eat meat/animal products for the third reason, but so far as I know they rarely seem to have reviewed any data on the question, and almost never to have actually done any controlled experiments on themselves. Honestly I suspect many of them are using the first two to justify a suspicion of the third (for eg, I know several EAs who eat meat with productivity justifications, but for whom it's usually *processed* meat in the context of other dubious dietary choices, so they demonstrably aren't optimising their diet for maximal productivity).

Also, if the third does turn out to be a real factor, it seems very unlikely that more than a tiny bit of meat every few days would be necessary to fix the problem for most people, and going to the shops to buy that for themselves seems unlikely to cause them any serious inconvenience.

Comment by arepo on EA is vetting-constrained · 2019-03-08T22:53:03.401Z · EA · GW

I can't help but appreciate the irony that 5 hours after having been posted this is still awaiting moderator approval.

Comment by arepo on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T23:44:43.543Z · EA · GW

> Given that other organizations can raise large funds, an alternative explanation is that donors think that the expected impact of the organizations that cannot get funding is low.

It's not entirely obvious how that looks different from EA being funding-constrained. No donors are perfectly rational, and they surely tend to be irrational in relatively consistent ways, which means that some orgs having surplus funds is totally consistent with there not being enough money to fund all worthwhile orgs. (This essentially seems like a microcosm of the world having enough money to fix all its problems with ease, and yet there ever having been a niche for EA funding.)

Also, if we take the estimates of the value of EA marginal hires on the survey from a couple of years back literally, EA orgs tend to massively underpay their staff compared to their value, and presumably suffer from a lower quality hiring pool as a result.

Comment by arepo on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T23:07:37.316Z · EA · GW

I agree with all of this, though I'd add that I think part of the problem is the recent denigration of earning to give, which is often all that someone realistically *can* do, at least in the short term.

Comment by arepo on A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity · 2019-01-13T22:10:53.060Z · EA · GW

Can I suggest providing a key for the maths in the post, so that those of us who want to try to parse it but lack a mathematical background can feasibly do so?

Comment by arepo on Why we have over-rated Cool Earth · 2018-12-04T21:47:56.725Z · EA · GW

> I think nobody delved into the Cool Earth numbers because it wasn't worth their time, because climate change charities generally aren't competitive with the standard EA donation opportunities

This claim seems like exactly what people felt was too hubristic - how could anyone be so confident, on the basis of a quick survey of such a complex area, that climate change didn't match up to other donation opportunities?

Comment by arepo on EA Hotel with free accommodation and board for two years · 2018-06-07T12:20:09.290Z · EA · GW

Is there any particular reason why the role needs to be filled by an EA? I think we as a community are too focused on hiring internally in general, and in this case almost no engagement with the ideas of EA seems like it would be necessary - they just need to be good at running a hotel (and ok with working around a bunch of oddballs).

Comment by arepo on EA Hotel with free accommodation and board for two years · 2018-06-04T22:15:51.631Z · EA · GW

Hey Greg, this is a super interesting project - I really hope it takes off. Some thoughts on your essay:

1) Re the hotel name, I feel like this should primarily be made with the possibility of paying non-EAs in mind. EAs will - I hope - hear of the project by reputation rather than name, so the other guests are the ones you're most likely to need to make a strong first impression on. 'Effective Altruism Hotel' definitely seems poor in that regard - 'Athena' seems ok (though maybe there are some benefits to renaming for the sake of renaming if the hotel was failing when you bought it)

2) > Another idea for empty rooms is offering outsiders the chance to purchase a kind of “catastrophic risk insurance”; paying, say, £1/day to reserve the right to live at the hotel in the event of a global (or regional) catastrophe.

This seems dubious to me (it's the only point of your essay I particularly disagreed with). It's a fairly small revenue stream for you, but means you're attracting people who're that little bit more willing to spend on their own self-interest (ie that little bit less altruistic), and penalises people who just hadn't heard of the project. Meanwhile, in the actual event, what practical effect would it have? Would you turn away people who showed up early when the sponsors arrived for their room?

If you want an explicit policy on using it as a GCR shelter, it seems like 'first come first served' would be at least as meritocratic, require less bureaucracy and offer a much more enforceable Schelling point.

3) As you say, I think this will be more appealing the more people it has involved from the beginning, so I would suggest aggressively marketing the idea in all EA circles which seem vaguely relevant, subject to the agreement of the relevant moderators - not that high a proportion of EAs read this forum, and of those who do, not that many will see this post. It's a really cool idea that I hope people will talk about, but again they'll do so a lot more if it's already seen as a success.

4) You describe it in the link, but maybe worth describing the Trustee role where you first mention it - or at least linking to it at that point.

Comment by arepo on Founders Pledge is seeking a Community Manager · 2018-03-16T14:54:35.645Z · EA · GW

K. I'll consider my wrist duly slapped!

Comment by arepo on Cognitive and emotional barriers to EA's growth · 2018-03-12T01:35:31.881Z · EA · GW

Great stuff! A few quibbles:

  • It feels odd to specify an exact year EA (or any movement) was 'founded'. GiveWell (surprisingly not mentioned other than a logo on slide 6) has been around since 2007; MIRI since 2000; FHI since 2005; Giving What We Can since 2009. Some or all of these (eg GWWC) didn't exactly have a clear founding date, though, rather becoming more like their modern organisations over years. One might not consider some of them more strictly 'EA orgs' than others - but that's kind of the point.

  • I'd be wary of including 'moral offsetting' as an EA idea. It's fairly controversial, and sounds like the sort of thing that could turn people off the other ideas

  • Agree with others that overusing the word 'utilitarianism' seems unnecessary and not strictly accurate (any moral view that included an idea of aggregation is probably sufficient, which is probably all of them to some degree).

  • Slide 12 talks about suffering exclusively; without getting into whether happiness can counterweigh it, it seems like it could mention positive experiences as well

  • I'd be wary of criticising intuitive morality for not updating on moral uncertainty. The latter seems like a fringe idea that's received a lot of publicity in the EA community, but that's far from universally accepted even by eg utilitarians and EAs

  • On slide 18 it seems odd to have an 'other' category on the right, but omit it on the left with a tiny 'clothing' category. Presumably animals are used and killed in other contexts than those four, so why not just replace clothing with 'other' - which I think would make the graph clearer

  • I also find the colours on the same graph a bit too similar - my brain keeps telling me that 'farm' is the second biggest categorical recipient when I glance at it, for eg

  • I haven't read the Marino paper and now want to, 'cause it looks like it might update me against this, but provisionally: it still seems quite defensible to believe that chickens experience substantially less total valence per individual than larger animals, esp mammals, even if it's becoming rapidly less defensible to believe that they don't experience something qualitatively similar to our own phenomenal experiences. [ETA] Having now read-skimmed it, I didn't update much on the quantitative issue (though it seems fairly clear chickens have some phenomenal experience, or at least there's no defensible reason to assume they don't)

  • Slide 20 'human' should be pluralised

  • Slide 22 'important' and 'unimportant' seem like loaded terms. I would replace with something more factual like (ideally a much less clunkily phrased) 'causes large magnitude of suffering', 'causes comparatively small magnitude of suffering'

  • I don't understand the phrase 'aestivatable future light-cone'. What's aestivation got to do with the scale of the future? (I know there are proposals to shepherd matter and energy to the later stages of the universe for more efficient computing, but that seems way beyond the scope of this presentation, and presumably not what you're getting at)

  • I would change 'the species would survive' on slide 25 to 'would probably survive', and maybe caveat it further, since the relevant question for expected utility is whether we could reach interstellar technology after being set back by a global catastrophe, not whether it would immediately kill us (cf eg - similarly I'd be less emphatic on slide 27 about the comparative magnitude of climate change vs the other events as an 'X-risk', esp where X-risk is defined as here:

  • Where did the 10^35 number for future sentient lives come from for slide 26? These numbers seem to vary wildly among futurists, but that one actually seems quite small to me. Bostrom estimates 10^38 lost just for a century's delayed colonization. Getting more wildly speculative, Isaac Arthur, my favourite futurist, estimates a galaxy of Matrioshka brains could emulate 10^44 minds - it's slightly unclear, but I think he means running them at normal human subjective speed, which would give them about 10^12 times the length of a human life between now and the end of the stelliferous era. The number of galaxies in the Laniakea supercluster is approx 10^5, so that would be 10^61 total, which we can shade by a few orders of magnitude to account for inefficiencies etc and still end up with a vastly higher number than yours. And if Arthur's claims about farming Hawking radiation and gravitational energy in the post-stellar eras are remotely plausible, then the number of sentient beings in the Black Hole era would dwarf that number again! (ok, this maybe turned into an excuse to talk about my favourite v/podcast)

  • Re slide 29, I think EA has long stopped being 'mostly moral philosophers & computer scientists' if it ever strictly was, although they're obviously (very) overrepresented. To what end do you note this, though? It maybe makes more sense in the talk, but in the context of the slide, it's not clear whether it's a boast of a great status quo or a call to arms of a need for change

  • I would say EA needs more money and talent - there are still tonnes of underfunded projects!
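The slide-26 arithmetic a few bullets up is easy to check with a quick sketch. All three inputs are the speculative estimates quoted there (Arthur's ~10^44 emulated minds per galaxy, ~10^12 sequential human-length lifetimes before the stelliferous era ends, ~10^5 galaxies in Laniakea) - none of these are established figures, just the numbers as cited:

```python
# Back-of-envelope check of the future-lives estimate discussed above.
# All inputs are the speculative figures quoted in the comment, not established facts.

minds_per_galaxy = 1e44  # Arthur's estimate for one galaxy of Matrioshka brains
lifetimes_each = 1e12    # sequential human-length lives per mind before stars burn out
galaxies = 1e5           # approximate galaxy count of the Laniakea supercluster

total_lives = minds_per_galaxy * lifetimes_each * galaxies
print(f"{total_lives:.0e}")  # on the order of 1e61, before shading down for inefficiencies
```

Even shaded down by several orders of magnitude for inefficiency, the product stays far above both the 10^35 on the slide and Bostrom's 10^38-per-century figure.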

Comment by arepo on Founders Pledge is seeking a Community Manager · 2018-03-09T00:34:36.680Z · EA · GW

I'm agnostic on the issue. FB groups have their own drawbacks, but I appreciate the clutter concern. In the interest of balance, perhaps anyone who agrees with you can upvote your comment, anyone who disagrees can upvote this comment (and hopefully people won't upvote them for any other reason) and if there's a decent discrepancy we can consider the question answered?

Comment by arepo on Why not to rush to translate effective altruism into other languages · 2018-03-08T00:10:13.309Z · EA · GW

Seconding Evan - it's great to have this laid out as a clear argument.

Re this:

In this way, any kind of broad based outreach is risky because it’s hard to reverse. Once your message is out there, it tends to stick around for years, so if you get the message wrong, you’ve harmed years of future efforts. We call this the risk of “lock in”.

I think there are some ways that this could still pan out as net positive, in reverse order of importance:

1) It relies on the arguments against E2G as a premium EA cause, which I'm still sceptical of given numerous very large funding gaps in EA causes and orgs. Admittedly, in the case of China (and other semideveloped countries) the case against E2G seems stronger, since the potential earnings are substantially lower, and the potential for direct work might be as strong or higher.

2) Depending on how you discount over time, and (relatedly) how seriously you take the haste consideration, getting a bunch of people involved sooner might be worth slower takeup later.

3) You mentioned somewhere in the discussion that you've rarely known anyone to be more amenable to EA because they'd encountered the ideas, but this seems like underestimating the nudge effects on which 99% of marketing is based. Almost no-one ever consciously thinks 'given that advert, I'm going to buy that product' - but when you see the product on the shelf, it just feels marginally more trustworthy because you already 'know' it. It seems like mass media EA outreach could function similarly. If so, lock-in might be a price worth paying.

This isn't to say that I think your argument is wrong, just that I don't yet think it's clear-cut.

It also seems like the risks/reward ratio might vary substantially from country to country, so it's perhaps worth thinking about at least each major economy separately?

To the degree that the argument does vary from country to country, I wonder whether there's any mileage in running some experiments with outreach in less economically significant countries, esp when they have historically similar cultures? Eg perhaps for China, it would be worth trialling a comparatively short termist strategy in Taiwan.

Comment by arepo on Why not to rush to translate effective altruism into other languages · 2018-03-07T23:34:39.762Z · EA · GW

This seems no more (to me, less) of a concern than that having a diversity of languages and cultures would help avoid it becoming tribalised.

Also, re the idea of coordination, cf my comment above about 'thought leaders'. I know it's something Will's been pushing for, but I'm concerned about the overconcentration of influence in eg EA funds (although that's a slightly different issue from an overemphasis on the ideas of certain people)

Comment by arepo on Why not to rush to translate effective altruism into other languages · 2018-03-07T18:03:21.720Z · EA · GW

Somewhat tangentially, am I unusual in finding the idea of 'thought leaders' for a movement about careful and conscientious consideration of ideas profoundly uncomfortable?

Comment by arepo on EA Funds hands out money very infrequently - should we be worried? · 2018-02-11T21:38:21.850Z · EA · GW

Huh, that seems like a missed opportunity. I know very little about investing, but aren't there short-term investments with modest returns that would have a one-off setup cost for the fund, such that all future money could go into them fairly easily?

Comment by arepo on The almighty Hive will · 2018-02-02T23:01:45.197Z · EA · GW

Keep in mind such insurance can happen at pretty much any scale - per Joey's description (above) of richer EAs just providing some support for poorer friends (even if the poorer friends are actually quite wealthy and E2G), for organisations supporting their employees, donors supporting their organisations (in the sense of giving them licence to take risks of financial loss that have positive EV), or EA collectives (such as the EA funds) backing any type of smaller entity.

Comment by arepo on EA Funds hands out money very infrequently - should we be worried? · 2018-02-01T00:39:46.781Z · EA · GW

I also feel that, perhaps not now but if they grow much more, it would be worth sharing the responsibility among more than just one person per fund. They don't have to disagree vociferously on many subjects, just provide a basic sanity check on controversial decisions (and spreading the work might speed things up if research time is a limiting factor)

Comment by arepo on The almighty Hive will · 2018-01-30T20:37:05.122Z · EA · GW

I pretty much agree with this - though I would add that you could also spend the money on just attracting existing talent. I doubt the Venn diagram of 'people who would plausibly be the best employee for any given EA job' and 'people who would seriously be interested in it given a relatively low EA wage' always forms a perfect circle.

Comment by arepo on How scale is often misused as a metric and how to fix it · 2018-01-30T20:27:33.177Z · EA · GW

I read this the same way as Max. The issue of cost to solve (eg) all cases of malaria is really tractability, not scale. Scale is how many people would be helped (and to what degree) by doing so. Divide the latter by the former, and you have a sensible-looking cost-benefit analysis (one that is sensitive to the 'size and intensity of the problem', ie the former).

I do think there are scale-related issues with drawing lines between 'problems', though - if a marginal contribution to malaria nets now achieves twice as much good as the same marginal contribution would in 5 years, are combatting malaria now and combatting malaria in five years 'different problems', or do you just try to average out the cost-benefit ratio between somewhat arbitrary points (eg now and when the last case of malaria is prevented/cured)? But I also think the models Max and Owen have written about on the CEA blog do a decent job of dealing with this kind of question.

Comment by arepo on The almighty Hive will · 2018-01-30T20:11:38.331Z · EA · GW

I had a feeling that might be the case. That page still leaves some possible alternatives, though, eg this exemption:

an employer is usually expected to provide accommodation for people doing that type of work (for example a manager living above a pub, or a vicar looking after a parish)

It seems unlikely, but worth looking at whether developing a sufficient culture of EA orgs offering accommodation might satisfy the 'usually expected' criterion.

It also seems a bit vague about what would happen if the EA org actually owned the accommodation rather than reimbursing rent as an expense, or if a wealthy EA would-be donor did, and let employees (potentially of multiple EA orgs) stay in it for little or no money (and if so, in the latter case, whether 'wealthy would-be donor' could potentially be a conglomerate a la EA funds)

There seems at least some precedent for this in the UK in that some schools and universities offer free accommodation to their staff, which don't seem to come under any of the exemptions listed on the page.

Obviously other countries with an EA presence might have more/less flexibility around this sort of thing. But if you have an organisation giving accommodation to 10 employees in a major developed world city, it seems like you'd be saving (in the UK) 20% tax on something in the order of £800 per month per employee, ie about £7600 per year, which seems like a far better return than investing the money would get (not to mention, if it's offered as a benefit for the job, being essentially doubly invested - once on the tax savings, once on the normal value of owning a property).

So while I'm far from confident that it would be ultimately workable, it seems like there would be high EV in an EA with tax law experience looking into it in each country with an EA org.

Comment by arepo on Against neglectedness · 2017-11-06T20:16:15.804Z · EA · GW

Ta Holly - done.

Comment by arepo on Against neglectedness · 2017-11-06T20:04:13.850Z · EA · GW

Just read this. Nice point about future people.

It sounds like we agree on most of this, though perhaps with differing emphasis - my feeling is that neglectedness is such a weak heuristic that we should abandon it completely, and at the very least avoid making it a core part of the idea of effective altruism. Are there cases where you would still advocate using it?

Comment by arepo on Against neglectedness · 2017-11-06T19:36:07.942Z · EA · GW

To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus.

On reflection I don't think I even believe this. The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on, in which case more people working on a field would indicate that it was more worth working on.

Comment by arepo on Against neglectedness · 2017-11-03T16:54:57.266Z · EA · GW

then surely lots of the problems actually go away? (i.e. thinking about diminishing marginal returns is important and valid, but that's also consistent with the elasticity view of neglectedness, isn't it?)

Can you expand on this? I only know of elasticity from reading around it after Rob's response to the first draft of this essay, so if there's some significance to it that isn't captured in the equations given, I maybe don't know it. If it's just a case of relabelling, I don't see how it would solve the problems with the equations, though - unused variables and divisions by zero seem fundamentally problematic.

But because lots of other people work on climate change, if you hadn't done your awesome high-impact neglected climate change thing, someone else probably would have since there are so many people working in something adjacent (bad)

But:

this only holds to the extent that the field is proportionally less neglected - a priori you're less replaceable in an area that's 1/3 filled than one which is half filled, even if the former has a far higher absolute number of people working in it.

which is just point 6 from the 'Diminishing returns due to problem prioritisation' section applied. I think all the preceding points from that section could apply as well - eg the more rational people tend to work on (eg) AI-related fields, the better comparative chance you have of finding something importantly neglected within climate change (5), your awesome high-impact neglected climate change thing might turn out to be something which actually increases the value of subsequent work in the field (4), and so on.

To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus. I just think it's one of a huge number of variables that do so, and a comparatively low-weighted one. As such, I can't see a good reason for EAs having chosen to focus on it over several others, let alone over trusting the estimates from even a shallow dive into what options there are for contributing to an area.

Comment by arepo on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T09:58:46.425Z · EA · GW

Please let's not give people any more incentives to game the karma system than they already have.

Comment by arepo on If tech progress might be bad, what should we tell people about it? · 2016-04-03T00:00:38.129Z · EA · GW

If it's not a force for good, and if you believe investment banking and similar roles damage the economy, that makes earning to give via them look more attractive.

Comment by arepo on Why don't many effective altruists work on natural resource scarcity? · 2016-02-22T18:30:16.980Z · EA · GW

But the question I am concerned with is whether it's the most valuable problem to work on. The considerations above, and current prices for such goods make me think the answer is no.

Sure. I mean, we basically agree, except that I feel much lower confidence (and anxiety at the confidence with which non-specialists make these pronouncements). Going into research in general is something that I've mostly felt more pessimistic about as an EA approach than 80K are, but if someone already partway down the path to a career based on resource depletion showed promise and passion in it, I'd think it plausible it was optimal for them to continue.

Certainly there are many natural scientists who have that attitude. I used to place more stock in their pronouncements. However, three things reduced my trust:

  • Noticing that market prices - a collective judgement of millions of informed people in these industries - seemed to contradict their concerns. Of course anyone could be wrong, but I place more weight on market prices than individual natural scientists who lack a lot of relevant knowledge.

I would probably trust the market over a single scientist, but I would trust the collective judgement of a field of scientists over the market. I don't see what mechanism is supposed to make the market a reliable predictor of anything if not a reflection of the scientific understanding of the field with individual randomness mostly drowned out.

  • Many of these natural scientists show an astonishing lack of understanding of economics when they comment on these things. This made me think that while they may be good at identifying potential problems, they cannot be trusted to judge our processes for solving them, because academic specialisation means they are barely even aware of them.

I've seen the same, but my own sense is that the reverse problem - economists having an astonishing lack of understanding of science - is much more acute. Also, I find scientists more scrupulous about the limits of their predictive ability. To give specific examples, two of which are by figures close to the EA movement: Steven Landsburg informing Stephen Hawking that his understanding of physics is '90% of the way there', Robin Hanson arguing without a number in sight that 'Most farm animals prefer living to dying; they do not want to commit suicide' and therefore that vegetarianism is harmful, and Bjorn Lomborg's head-on collision with apparently the entire field of climate science in The Skeptical Environmentalist.

  • Looking into specific cases and trends (e.g. food yields or predictions of peak oil) and coming away unconvinced the data supports pessimism.

I can't opine on this, except that I still feel greater epistemic humility is worthwhile. If your conclusions are right, it seems worth trying to get them published in a prominent scientific journal (or if not by you then by an academic who shares your views - and perhaps hasn't already alienated the journal in question) - even if you don't manage, one would hope you'd get decent feedback on what they perceived as the flaws in your argument.

It's true that the fruit we will switch to are higher now. But technological progress is constantly lowering the metaphorical tree. In some cases the fruit will be higher at the future time, in other cases it will be lower. My claim is that I don't see a reason for it to be higher overall, in expectation.

Perhaps, but I don't feel like you've acknowledged the problem that technological progress relies on technological progress, such that this could turn out to be a house of cards. As such, it needn't necessarily be resource depletion that brings it crashing down - any GCR could have the same effect. So work on resource depletion provides some insurance against such a multiply-catastrophic scenario.

Comment by arepo on Why don't many effective altruists work on natural resource scarcity? · 2016-02-22T17:53:38.221Z · EA · GW

(reposted from slightly divergent Facebook discussion)

I sometimes wonder if the 'neglectedness criterion' isn't overstated in current EA thought. Is there any solid evidence that it makes marginal contributions to a cause massively worse?

Marginal impact is a product of a number of factors of which the (log of the?) number of people working on it is one, but the bigger the area the thinner that number will be stretched in any subfield - and resource depletion is an enormous category, so it seems unlikely that the number of people working on any specific area of it will exceed the number of people working on core EA issues by more than a couple of orders of magnitude. Even if that equated to a marginal effectiveness multiplier of 0.01 (which seems far too pessimistic to me), we're used to seeing such multipliers become virtually irrelevant when comparing between causes. I doubt if many X-riskers would feel deterred if you told them their chances of reducing X-risk was comparably nerfed.

Michael Wiebe commented on my first reply:

No altruism needed here; profit-seeking firms will solve this problem.

That seems like begging the question. So long as the gap between a depleting resource and its replacement is sufficiently small, they probably will do so, but if for some reason it widens sufficiently, profit-seeking firms will have little incentive or even ability to bridge it.

I'm thinking of the current example of in vitro meat as a possible analogue - once the technology for that's cracked, the companies that produce it will be able to make a killing undercutting naturally grown meat. But even now, with prototypes appearing, it seems too distant to entice more than a couple of companies to actively pursue it. Five years ago, virtually none were - all the research on it was being done by a small number of academics. And that is a relatively tractable technology that we've (I think) always had a pretty clear road map to developing.

Comment by arepo on Why don't many effective altruists work on natural resource scarcity? · 2016-02-21T10:59:21.758Z · EA · GW

Julian Simon, the incorrigible optimist, won the bet - with all five becoming cheaper in inflation adjusted terms.

I hope he paid Stanislav Petrov off for that.

Less glibly, I lean towards agreeing with the argument, but very weakly - it seems far too superficial to justify turning people away from working on the subject if that's where their skills and interests lie.

In particular it seems unclear that economic-philosophical research into GCR and X-risk has a greater chance of actually lowering such outcomes than scientific and technological research into technologies that will reliably do so once/if they're available.

Yes, people can switch from one resource to another as each runs low, but it would be very surprising if in almost all cases the switch wasn't to a higher-hanging fruit. People naturally tend to grab the most accessible/valuable resources first.

Perhaps the global economy is advancing fast enough or faster than enough to keep pace with the increasing difficulty of switching resource-bases, but that feels like a potential house of cards - if something badly damages the global economy (say, a resource irreplaceably running out, or a project to replace one unexpectedly failing), the gulf between several other depleting resources and their possible replacements could effectively widen. The possible cascade from this is a GCR in itself, and one that numerous people seem to consider a serious one. I feel like we'd be foolish to dismiss the large number of scientifically literate doomsayers based on non-expert speculation.

Comment by arepo on Being a tobacco CEO is not quite as bad as it might seem · 2016-02-06T13:45:20.678Z · EA · GW

Slight quibble:

This introduces another factor we need to control for. Yes, if you really are better than the alternative CEO you might sell more cigarettes, and yes the board clearly thought you were the best choice for CEO - but what if they're wrong? We need to adjust by the probability that you are indeed the best choice for CEO, conditional on the board thinking you were.

This seems like a pretty hard probability to estimate. My guess is it is quite low - I would expect many potential applicants, and a relatively poor ability to discriminate between them - but in lieu of actual analysis lets just say 50%.

You seem to shift here between p(Best among applicants) and p(Better than the guy who would have been hired in lieu of you). Guesstimating 50% for the former sounds reasonable-ish to me, but I would guess it's substantially higher for the latter.

Maybe this comes out in the wash, since the difference between you and your actual replacement is smaller in expectation than the difference between you and the best among all the applicants.