Posts

Is it possible to change user name? 2020-06-26T11:09:59.780Z
I find this forum increasingly difficult to navigate 2019-07-05T10:27:32.975Z
Founders Pledge is seeking a Community Manager 2018-03-07T17:23:56.094Z
Founders Pledge hiring a Personal & Development Assistant in the US 2018-02-06T19:14:13.419Z
The almighty Hive will 2018-01-28T17:59:07.040Z
Against neglectedness 2017-11-01T23:09:04.526Z
Job: Country Manager needed for Germany at Founders Pledge 2017-04-26T14:26:12.764Z

Comments

Comment by Arepo on Concerns with ACE's Recent Behavior · 2021-04-19T05:55:30.807Z · EA · GW

Maybe your sense of what you're claiming and my sense of it rest on different meanings of 'cancel culture'. In your previous comment, you wrote:

'On the other hand, we've had quite a bit of anti-cancel-culture stuff on the Forum lately. There's been much more of that than of pro-SJ/pro-DEI content, and it's generally got much higher karma. I think the message that the subset of EA that is highly active on the Forum generally disapproves of cancel culture has been made pretty clearly'

So I've been assuming that you were referring to 'pro-SJ/DEI' and 'anti-cancel-culture' more or less antonymously. Yes, the group is against deplatforming (at least, without extreme epistemic/moral caution); no, it's not against SJ/DEI.

Inasmuch as they're different concepts, I don't see how you could think that opposing cancel culture - which is basically 'pro-segregation' culture - wouldn't help prevent a split! The point is then not to exclude any cultural group, but to discourage segregation, hostility, and poor epistemics when discussing this stuff.

Comment by Arepo on Concerns with ACE's Recent Behavior · 2021-04-18T18:40:27.471Z · EA · GW

I checked in with the other two admins about our approx political positions, and the answers were:

  • radical centrist
  • centre left-ish
  • centre left-ish

We're trying to find both a social justice and conservative admin to add some balance, but so far no-one's come forward for either.

Comment by Arepo on Concerns with ACE's Recent Behavior · 2021-04-18T18:36:48.387Z · EA · GW

> I'm honestly a bit flummoxed here. Why would contributing to a Facebook group explicitly aligned with one side of this dispute help avoid a split?

I set up the group, and while I have my own views on which groups are less tolerant/tolerated, I'm very keen for the group to do what it suggests in the title: bring people together/encourage cooperation/tolerance in all directions etc. It is absolutely not 'explicitly aligned with one side'.

(I have strong downvoted your comment for making this claim without giving any basis for it. I'll retract the downvote if you edit/moderate this remark, since otherwise I'm fairly agnostic about the comment content)

Comment by Arepo on Concerns with ACE's Recent Behavior · 2021-04-18T18:35:04.234Z · EA · GW

> I'm honestly a bit flummoxed here. Why would contributing to a Facebook group explicitly aligned with one side of this dispute help avoid a split?

I set up the group, and while I have my own views on which groups are less tolerant/tolerated, I'm very keen for the group to do what it says in the title: bring people together/encourage cooperation etc. It is absolutely not 'explicitly aligned with one side'.

Comment by Arepo on What Makes Outreach to Progressives Hard · 2021-03-18T09:09:45.817Z · EA · GW

Helpful post!

What makes you say that rejecting person-affecting views has uncomfortable implications (for progressives) for environmental ethics, out of curiosity? I would have thought the opposite: person-affecting views struggle not to treat environmental collapse as morally neutral if it leads to a different set of people existing than would have otherwise.

Comment by Arepo on Deference for Bayesians · 2021-02-17T14:40:45.698Z · EA · GW

I've strong upvoted Ben's points, and would add a couple of concerns:
* I don't know how in any particular situation one would usefully separate the object-level from the general principle. What heuristic would I follow to judge how far to defer to experts on banana growers in Honduras on the subject of banana-related politics?
* The less pure a science gets (using https://xkcd.com/435/ as a guide), the less we should be inclined to trust its authorities, but also the less we should be inclined to trust our own judgement - the number of relevant factors grows at a huge rate

So sticking to the object level and the example of the minimum wage, I would not update on a study that much, but strongly agree with Ben that 98% is far too confident, since when you say 'the only theoretical reason', you presumably mean 'as determined by other social science theory'.

(In this particular case, it seems like you're conflating the (simple and intuitive to me as well, fwiw) individual effect of having to pay a higher wage reducing the desirability of hiring someone with the much more complex and much less intuitive claim that higher wages in general would reduce the number of jobs in general - which is the sort of distinction that an expert in the field seems more likely to be able to draw.)

So my instinct is that Bayesians should only strongly disagree with experts in particular cases where they can link their disagreement to particular claims the experts have made that seem demonstrably wrong by Bayesian lights.

Comment by Arepo on Making decisions under moral uncertainty · 2021-02-09T22:57:57.470Z · EA · GW

There are some fundamental problems facing moral uncertainty that I haven't seen its proponents even refer to, let alone refute:
 

  • The xkcd.com/927 problem - whatever moral uncertainty theory one expounds to deal with theories T1...Tn seems likely to constitute Tn+1.  I've just been reading through Will's new book, and though it addresses this one, it does so very vaguely, basically by claiming that 'one ought under moral uncertainty theory X to do X1' is a qualitatively different claim than 'one ought under moral theory Y to do Y1'. This might be true, depending on some very murky questions about what norms look like, but it also seems that the latter is qualitatively different from the claim that 'one ought under moral theory Z to do Z1'. We use the same word 'ought' in all three cases, but it may well be a homonym.
  • If one of many of the subtypes of moral anti-realism is true, moral uncertainty is devoid of content - words like 'should', 'ought' etc are either necessarily wrong or not even meaningful.

Comment by Arepo on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-03T15:27:26.346Z · EA · GW

One issue I feel the EA community has badly neglected is the probability, given various (including modest) civilizational backslide scenarios, of us still being able to (and *actually*) develop the economies of scale needed to become an interstellar species.

To give a single example, a runaway Kessler effect could make putting anything in orbit basically impossible unless governments overcome the global tragedy of the commons and mount an extremely expensive mission to remove enough debris to regain effective orbital access - in a world where we've lost satellite technology and everything that depends on it. 

EAs so far seem to have treated 'humanity doesn't go extinct' in scenarios like this as equivalent to 'humanity reaches its interstellar potential', which seems very dangerous to me - intuitively, it feels like there's at least a 1% chance that we wouldn't ever solve such a problem in practice, even if civilisation lasted for millennia afterwards. If so, then we should be treating it as (at least) 1/100th of an existential catastrophe - and a couple of orders of magnitude doesn't seem like that big a deal, especially if there are many more such scenarios than there are extinction-causing ones.

Do you have any thoughts on how to model this question in a generalisable way, such that it could give a heuristic for non-literal-extinction GCRs? Or do you think one would need to research specific GCRs to answer it for each of them?

Comment by Arepo on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-03T14:24:31.398Z · EA · GW

What do you make of Ben Garfinkel's work on scepticism towards AI's capacity being separable from its goals, and his broader scepticism of brain-in-a-box scenarios?

Comment by Arepo on Big List of Cause Candidates · 2020-12-31T15:27:49.208Z · EA · GW

Can you spell both of these points out for me? Maybe I'm looking in the wrong place, but I don't see anything in that tag description that recommends criteria for cause candidates.

As for Scott's post, I don't see anything more than a superficial analogy. His argument is something like 'the weight by which we improve our estimation of someone for their having a great idea should be much greater than the weight by which we downgrade our estimation of them for having a stupid idea'. Whether or not one agrees with this, what does it have to do with including on this list an expensive luxury that seemingly no-one has argued for on (effective) altruistic grounds?

Comment by Arepo on Big List of Cause Candidates · 2020-12-30T10:10:56.027Z · EA · GW

Write a post on which aspect? You mean basically fleshing out the whole comment?

Comment by Arepo on Big List of Cause Candidates · 2020-12-30T10:10:05.033Z · EA · GW

One other cause-enabler I'd love to see more research on is donating to (presumably early-stage) for-profits. For all that they have better incentives, it's still a very noisy space with plenty of remaining perverse incentives, so supporting those doing worse than they merit seems like it could be high value.

It might be possible to team up with some VCs on this, to see if any of them have a category of companies they like but won't invest in - perhaps because of a surprising lack of traction, perhaps because of predatory pricing by companies with worse products/ethics, or perhaps because of some other unmerited headwind.

Comment by Arepo on Big List of Cause Candidates · 2020-12-30T09:59:23.776Z · EA · GW

Then I would suggest being clearer about what it's comprehensive of, ie by having clear criteria for inclusion.

Comment by Arepo on Big List of Cause Candidates · 2020-12-30T00:05:45.156Z · EA · GW

I would like to see more about 'minor' GCRs and our chance of actually becoming an interstellar civilisation given various forms of backslide. In practice, the EA movement seems to treat the probability as 1. We can see this attitude in this very post.

I don't think this is remotely justified. The arguments I've seen are generally of the form 'we'll still be able to salvage enough resources to theoretically recreate any given technology', which doesn't mean we could get anywhere near the economies of scale needed to create global industry on today's scale, let alone that we actually would, given realistic political development. And industry would need to reach the point where we're a reliably spacefaring civilisation - well beyond today's technology - for the scenario to avoid meeting the usual definition of an existential catastrophe (drastic curtailment of life's potential).

If the chance of recovery from any given backslide is 99%, then that's only two orders of magnitude between its expected badness and the badness of outright extinction, even ignoring other negative effects. And given the uncertainty around various GCRs, a couple of orders of magnitude isn't that big a deal (Toby Ord's The Precipice puts an order of magnitude or two between the probabilities of many of the existential risks we're typically concerned with).
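
A minimal sketch of that arithmetic, assuming for illustration that the only long-run loss from a backslide is the chance of never recovering from it:

$E[\text{loss} \mid \text{backslide}] = P(\text{no recovery}) \times L_{\text{extinction}} = (1 - 0.99) \times L_{\text{extinction}} = 10^{-2}\,L_{\text{extinction}}$

ie two orders of magnitude below the loss from outright extinction.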

Things I would like to see more discussion of in this area:

  • General principles for assessing the probability of reaching interstellar travel given specific backslide parameters and then, with reference to this:
  • Kessler syndrome
  • Solar storm disruption
  • CO2 emissions from fossil fuels and other climate change rendering the atmosphere unbreathable (this would be a good old-fashioned X-risk, but seems like one that no-one has discussed - in Toby's book he details some extreme scenarios in which a lot of CO2 could be released without necessarily causing human extinction through global warming, but some back-of-the-envelope maths I did based on his figures seemed consistent with this scenario)
  • CO2 emissions from fossil fuels and other climate change substantially reducing IQs
  • Various 'normal' concerns: antibiotic resistant bacteria; peak oil; peak phosphorus; substantial agricultural collapse; moderate climate change; major wars; reverse Flynn effect; supporting interplanetary colonisation; zombie apocalypse
  • Other concerns that I don't know of, or that no-one has yet thought of, that might otherwise be dismissed by zealous X-riskers as 'not a big deal'

Comment by Arepo on Big List of Cause Candidates · 2020-12-29T23:39:28.703Z · EA · GW

I wish we could finally strike cryonics off the list. The most popular answers in the linked 'Is there a hedonistic utilitarian case for Cryonics? (Discuss)' essay seem to be essentially 'no'.

The claim that 'it might also divert money from wealthy people who would otherwise spend it on more selfish things' gives no reason to suppose that spending money on yourself in this context is somehow unselfish. 

As for 'Further, cryonics might help people take long-term risks more seriously': sure. So might giving people better health, or, say, funding long-term risk outreach. At least as plausibly, to me, constantly telling people that they don't fear death enough and should sign up for cryonics seems likely to make people fear death more, which seems like a pretty miserable thing to inflict on them.

I just don't see any positive case for this to be on the list. It seems to be a vestige of a cultural habit among Less Wrongers that has no place in the EA world.

Comment by Arepo on Is it possible to change user name? · 2020-06-27T15:18:34.277Z · EA · GW

Is that an intentional policy, or just a feature that hasn't been implemented yet?

If intentional, could you say why? Obviously it could be confusing, but there are some substantial downsides to preventing it.

Comment by Arepo on 80,000 Hours: Anonymous contributors on flaws of the EA community · 2020-06-26T11:15:06.846Z · EA · GW

I'm not sure how public the hiring methodology is, but if it's fully public then I'd expect the candidates to be 'lost' before the point of sending in a CV.

If it's less public that would be less likely, though perhaps the best candidates (assuming they consider applying for jobs at all, and aren't always just headhunted) would only apply to jobs that had a transparent methodology that revealed a short hiring process.

Comment by Arepo on Forum update: Tags are live! Go use them! · 2020-06-02T15:21:43.013Z · EA · GW

I think this will make the forum far more useful. Could you add some kind of taglist (or prominent link to one) to the home page?

Comment by Arepo on Tips for overcoming low back pain · 2020-03-26T09:33:29.474Z · EA · GW

I wonder if there's a case for carrying heavier loads on your front if you can't easily use hands only. It seems counterintuitive, since that would pull you forward into a hunch, but maybe what matters would be working your posterior chain rather than the actual posture it temporarily puts you in.

Comment by Arepo on What are the best arguments for an exclusively hedonistic view of value? · 2020-03-10T10:56:25.522Z · EA · GW

I've got a very slowly in-progress multipart essay attempting to definitively answer this question without resort to (what we normally mean by) intuition: http://www.valence-utilitarianism.com/posts/choose-your-preference-utilitarianism-carefully-part-1

Comment by Arepo on 80,000 Hours: Anonymous contributors on flaws of the EA community · 2020-03-07T10:52:20.491Z · EA · GW

Kudos to 80K for both asking and publishing this. I think I literally agree with every single one of these (quite strongly with most). In particular, the hiring-practices criticism - I think there was a tendency, especially with early EA orgs, to hire for EAness first and competence/experience second, and that this has led to a sort of hiring-practice lock-in, where they still weight those characteristics, if not to the same degree, then with a greater bias than a lean, efficiency-minded org should have.

A related concern is overinterviewing - I read somewhere (unfortunately I can't remember the source) the claim that the longer and more thorough your interview process, the more you select for people with the willingness - and the lack of competition for their time - to go through all those steps.

This (if I'm right) would have the quadruple effect of wasting EAs' time (which you'd hope would be counterfactually valuable), wasting the organisations' time (ditto), potentially reducing the fidelity of the hiring process, and increasing the aforementioned bias towards willingness.

Comment by Arepo on I find this forum increasingly difficult to navigate · 2019-07-05T23:11:45.024Z · EA · GW

> Re: searching for great posts, there is also an archive page where you can order by top and other things in the gear menu.

Ok, that's quite a lot more helpful than I'd realised - why not make it more prominent though? I didn't see these options even when actively looking for them, and even knowing they're there, unless I deep link to the page as someone above suggested, it's several clicks to reach where I want to be. Though (more on this below), the 'top' option is the only one I can see myself ever using.

> Can you say more about how you used the old forum? I’m hearing something like “A couple of times per year I’d look at the top-posts list and read new things there”. (I infer a couple of times per year because once you’ve done it once or twice I’d guess you’ve read all the top posts.) I think that’s still very doable using the archive feature.

I mainly used the 'top posts in <various time periods>' option (typically the 1- or 3-month options, IIRC); median time between visits was probably something like 1-3 months, so that fit pretty well. That said, even on the old forum I strongly wished for a way to filter by subject. Honestly, my favourite forums for UX were probably the old phpBB-style ones, where you'd have forums devoted to arbitrarily many subtopics. I don't think they're anywhere near the pinnacle of forum design, but 'subtopic' is such an important divider that I feel much less clear on how I can get value from a forum without it (which is part of why I've never spent a huge amount of time on the EA forums - though a bigger part is just not having much time to spare).

To a lesser degree, I found the metadata on who'd been active recently useful. It let me pseudo-follow certain users (though I suspect an actual follow function would be more helpful).

> Am also surprised that you lose posts. My sense is that for a post to leave the frontpage takes a couple of days to a week. Do you keep tabs open that long? Or are you finding the posts somewhere else?

Often a friend would link me to a post that had already been around for a week or two when I read it.

Comment by Arepo on I find this forum increasingly difficult to navigate · 2019-07-05T22:54:55.653Z · EA · GW

My impression, incidentally, is that the search functionality is decidedly better than it was on the old forum: the search results seem to be more related to what I'm looking for, and to be easier to sort through (eg separating 'comments' and 'posts').

For what it's worth, my main concerns are the visual navigation (esp filtering and sorting) rather than a search feature - the latter I find Google invariably better for, as long as you can persuade the bots to index frequently.


Comment by Arepo on I find this forum increasingly difficult to navigate · 2019-07-05T11:23:58.747Z · EA · GW

(also worth noting that for me it'd be really helpful to have a user-categorisation or tagging system, so we could easily filter by subject matter. Even just old-school subforums would be swell, but the ideal might be allowing non-authors to tag posts as well)

Comment by Arepo on What's the best structure for optimal allocation of EA capital? · 2019-06-05T18:58:24.893Z · EA · GW

A less drastic option would be for OpenPhil to just hire more research staff. I think there's some argument for this given that they're apparently struggling to find ways to distribute their money:

1) a new researcher doesn't need to be as valuable as Holden to have positive EV against the counterfactual of the money sitting around waiting for Holden to find somewhere to donate it to in 5 years

2) the more researchers are hired, even (/especially) when they're ones Holden doesn't agree with, the more they guard against the risk of any blind spots/particular passions etc of Holden's coming to dominate and causing missed opportunities, since ultimately, as far as I can tell, there aren't any strong feedback mechanisms on the grants he ends up making other than internal peer review.

(I wouldn't argue strongly for this, but I haven't seen a counterpoint to these arguments that I find compelling)

Comment by Arepo on Aging research and population ethics · 2019-04-28T16:01:53.159Z · EA · GW

> The PA view doesn't need to assign disvalue to death to make increasing lifespans valuable. It just needs to assign to death a smaller value than being alive.

It depends how you interpret PA. I don't think there is a standard view - it could be 'maximise the aggregate lifetime utility of everyone currently existing', in which case what you say would be true, or 'maximise the happiness of everyone currently existing while they continue to do so', which I think would turn out to be a form of averaging utilitarianism, and on which what you say would be false.

> If we make LEV nearer we don't increase the distress anti-aging therapies will cause to people at first. We just anticipate the distress.

Yes, but this was a comment about the desirability of public advocacy of longevity therapies rather than the desirability of the therapies themselves. It's quite plausible that the therapies are desirable and the advocacy undesirable - perhaps undesirable enough to outweigh the therapies' benefits.

> This doesn't matter though, since, as I wrote, impact under the neutral view is actually bigger.

Your argument was that it's bigger subject to two assumptions: that it doesn't reduce the birthrate, and that adding net population in the near future is good in the long run. Both are claims for which I think there's a reasonable case, but neither seems to have a 0.75 probability (I would go lower for at least the second one, but YMMV). With a 0.44+ probability that at least one assumption is false (1 - 0.75^2 ≈ 0.44), I think it matters a lot.

> Financing aging research has only the effect of hastening it, so moving the date of LEV closer. The ripple effect that defeating aging would cause on the far future would remain the same. People living 5000 years from now wouldn't care if we hit LEV now or in 2040. So this isn't even a measure of impact.

Again this is totally wrong. Technologies don't just come along and make some predetermined set of changes then leave the world otherwise unchanged - they have hugely divergent effects based on the culture of the time and countless other factors. You might as well argue that if humanity hadn't developed the atomic bomb until last year, the world would look identical to today's except that Japan would have two fewer cities (and that in a few years, after they'd been rebuilt, it would look identical again).

> Also, my next post is exactly on the shorter term impact. I think it'll be published in a couple of weeks. It will cover DALYs averted at the end of life, impact on life satisfaction, the economic and societal benefits, impact on non-human animals.

Looking forward to it :)

Comment by Arepo on Aging research and population ethics · 2019-04-28T09:32:31.793Z · EA · GW

I think it's an interesting cause area (upvoted for investigating something new), though I have three important quibbles with this analysis (in ascending order of importance):

1) The person-affecting (PA) view doesn't make this a slam-dunk. PAness doesn't signify that death in itself has negative value, so given your assumption 'that there isn't suffering at the end of life and people get replaced immediately', on the base PA view, increasing lifespans wouldn't in itself generate value. No doubt there are flavours of PA that would claim death *does* have disvalue, but those would need to be argued for separately.

Obviously there often *is* profound suffering at the end of life, which IMO is a much stronger argument for longevity research - on both PA and totalising views. Though I would also be very wary of writing articles arguing on those grounds, since most people very sensibly try to come to terms with the process of ageing to reduce its subjective harm to them, and undoing that for the sake of moving LEV forward a few years might cause more psychological harm than it prevented.

2) My impression is that the PA view is held by a fairly small minority of EAs and consequentialist moral philosophers (for advocates of nonconsequentialist moral views, I'm not sure the question would even make sense - and it would make a lot less sense to argue for longevity research based on its consequences), and if so, treating it as carrying the same evidential weight as totalising views is misleading.

It's obviously too large a topic to give much of an inside view on here, but if your view of ethics is basically monist (as opposed to dualist - ie queer-sort-of-moral-fact-ist) I don't think there's any convincing way you could map real-world processes onto a PA view, such that the PA view would make any sense. There's too much vagueness about what would qualify as the 'same' or a 'different' person, and no scientific basis for drawing lines in one place rather than another (and hence, none for drawing any lines at all).

3) 'Reminder: most of the impact of aging research comes from making the date of LEV come closer and saving the people who wouldn't otherwise have hit LEV.'

This is almost entirely wrong. Unless we a) wipe ourselves out shortly after hitting it (which would be an odd notion of longevity), or b) reach it within the lifespans of most existing people *and* take a death-averse PA view, the vast majority of LEV's impact will come from its ripple effect on the far future, and the vast majority of its expected impact will be our best guess as to that.

EAs tend to give near-term poverty/animal welfare causes a pass on that estimation, perhaps due to some PA intuitions, perhaps because they're doing good and (almost) immediate work, which if nothing else gives them a good baseline for comparison, perhaps because the immediate measurable value might be as good a proxy as any for far-future expectation in the absence of good alternative ways to think about the latter (and plenty of people would argue that these are all wrong, and hence that we should focus more directly on the far future. But I doubt many of the people who disagree with *them* would claim on reflection that 'most of the impact of poverty reduction comes from the individuals you've pulled out of poverty').

Longevity research doesn't really share these properties, though, and certainly doesn't have them to the same degree, so it's unlikely to have the same intuitive appeal, in which case it's hard to argue that it *should*. Figuring out the short-term effects is probably the best first step towards doing this, but we shouldn't confuse it with the end goal.

Comment by Arepo on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-04T23:38:25.863Z · EA · GW

> the focus on low rent, which seems like a popular meme among average and below average EAs in the bay area, yet the EAs whose judgment I most respect act as if rent is a relatively small issue.

This seems very wrong to me. I work at Founders Pledge in London, and I doubt a single one of the staff there would disagree with a proposition like 'the magnitude of London rents has a profound effect on my lifestyle'.

They also pay substantially closer to market rate salaries now than they did for the first 2-3 years of existence, during which people no doubt would have been far more sympathetic to the claim.

Comment by Arepo on Why is the EA Hotel having trouble fundraising? · 2019-03-27T23:02:34.617Z · EA · GW

A couple of thoughts I'd add (as another trustee):

3. Demand for the hotel has been increasing more or less linearly (until we hit current funding difficulties). As long as that continues, the projects will tend to get better.

This seems like a standard trajectory for meta-charities: for eg I doubt 80k's early career shifts looked anywhere near as high value as the average one does now. I should know - I *was* one of them, back when their 'career consultation' was 'speculating in a pub about earning to give' (and I was a far worse prospect than any 80k advisee or hotel resident today!)

Meanwhile it's easy to scorn such projects as novel-writing, but have we forgotten this? For better or worse, if Eliezer hadn't written that book the rationality and EA communities would look very different now.

6. This might be true as a psychological explanation, but, ceteris paribus, it's actually a reason *to* donate, since it (by definition) makes the hotel a more neglected cause.

Comment by Arepo on EA Hotel with free accommodation and board for two years · 2019-03-19T23:06:08.398Z · EA · GW

I would be wary of equivocating between different forms of 'inconvenience'. There are at least three being alluded to here:

1) Fighting the akrasia of craving animal products

2) The hassle of finding vegan premade food (else of having to prepare meals for yourself)

3) Reduced productivity gains from missing certain nutrients (else of having to carefully supplement constantly)

Of these, the first is basically irrelevant in the hotel - you can remove it as a factor by just not giving people the easy option to ingest animal products. The second is completely irrelevant, since the hotel is serving or supplying 90% of the food people will be eating.

So that only leaves the third, which is much talked about but, so far as I know, little studied, so this 'inconvenience' could even have the wrong sign: the only study on the subject I found from a very quick search showed increased productivity from veganism for health reasons; also, on certain models of willpower that treat it as analogous to a muscle, it could turn out that by depriving yourself (even by default, through the absence of offered foods) you improve your willpower and thus become more productive.

I've spoken to a number of people who eat meat/animal products for the third reason, but so far as I know they rarely seem to have reviewed any data on the question, and almost never to have actually done any controlled experiments on themselves. Honestly I suspect many of them are using the first two to justify a suspicion of the third (for example, I know several EAs who eat meat with productivity justifications, but for whom it's usually *processed* meat in the context of other dubious dietary choices, so they demonstrably aren't optimising their diet for maximal productivity).

Also, if the third does turn out to be a real factor, it seems very unlikely that more than a tiny bit of meat every few days would be necessary to fix the problem for most people, and going to the shops to buy that for themselves seems unlikely to cause them any serious inconvenience.

Comment by Arepo on EA is vetting-constrained · 2019-03-08T22:53:03.401Z · EA · GW

I can't help but appreciate the irony that 5 hours after having been posted this is still awaiting moderator approval.

Comment by Arepo on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T23:44:43.543Z · EA · GW

> Given that other organizations can raise large funds, an alternative explanation is that donors think that the expected impact of the organizations that cannot get funding is low.

It's not entirely obvious how that looks different from EA being funding-constrained. No donors are perfectly rational, and they surely tend to be irrational in relatively consistent ways, which means that some orgs having surplus funds is totally consistent with there not being enough money to fund all worthwhile orgs. (This essentially seems like a microcosm of the world having enough money to fix all its problems with ease, and yet there ever having been a niche for EA funding.)

Also, if we take the estimates of the value of EA marginal hires on the survey from a couple of years back literally, EA orgs tend to massively underpay their staff compared to their value, and presumably suffer from a lower quality hiring pool as a result.

Comment by Arepo on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T23:07:37.316Z · EA · GW

I agree with all of this, though I'd add that I think part of the problem is the recent denigration of earning to give, which is often all that someone realistically *can* do, at least in the short term.

Comment by Arepo on A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity · 2019-01-13T22:10:53.060Z · EA · GW

Can I suggest keying the maths in the post, so that those of us wanting to try and parse it but without a mathematical background can feasibly do so?

Comment by Arepo on Why we have over-rated Cool Earth · 2018-12-04T21:47:56.725Z · EA · GW

> I think nobody delved into the Cool Earth numbers because it wasn't worth their time, because climate change charities generally aren't competitive with the standard EA donation opportunities

This claim seems like exactly the sort of thing people felt was too hubristic - how could anyone be so confident, on the basis of a quick survey of such a complex area, that climate didn't match up to other donation opportunities?

Comment by Arepo on EA Hotel with free accommodation and board for two years · 2018-06-07T12:20:09.290Z · EA · GW

Is there any particular reason why the role needs to be filled by an EA? I think we as a community are too focused on hiring internally in general, and in this case almost no engagement with the ideas of EA seems like it would be necessary - they just need to be good at running a hotel (and ok with working around a bunch of oddballs).

Comment by Arepo on EA Hotel with free accommodation and board for two years · 2018-06-04T22:15:51.631Z · EA · GW

Hey Greg, this is a super interesting project - I really hope it takes off. Some thoughts on your essay:

1) Re the hotel name, I feel like this decision should primarily be made with the possibility of paying non-EA guests in mind. EAs will - I hope - hear of the project by reputation rather than name, so the other guests are the ones you're most likely to need to make a strong first impression on. 'Effective Altruism Hotel' definitely seems poor in that regard - 'Athena' seems ok (though maybe there are some benefits to renaming for the sake of renaming if the hotel was failing when you bought it).

2) > Another idea for empty rooms is offering outsiders the chance to purchase a kind of “catastrophic risk insurance”; paying, say, £1/day to reserve the right to live at the hotel in the event of a global (or regional) catastrophe.

This seems dubious to me (it's the only point of your essay I particularly disagreed with). It's a fairly small revenue stream for you, but means you're attracting people who're that little bit more willing to spend on their own self-interest (ie that little bit less altruistic), and penalises people who just hadn't heard of the project. Meanwhile, in the actual event, what practical effect would it have? Would you turn away people who showed up early when the sponsors arrived for their room?

If you want an explicit policy on using it as a GCR shelter, it seems like 'first come first served' would be at least as meritocratic, require less bureaucracy and offer a much more enforceable Schelling point.

3) As you say, I think this will be more appealing the more people it has involved from the beginning, so I would suggest aggressively marketing the idea in all EA circles which seem vaguely relevant, subject to the agreement of the relevant moderators - not that high a proportion of EAs read this forum, and of those who do, not that many will see this post. It's a really cool idea that I hope people will talk about, but again they'll do so a lot more if it's already seen as a success.

4) You describe it in the link, but maybe worth describing the Trustee role where you first mention it - or at least linking to it at that point.

Comment by Arepo on Founders Pledge is seeking a Community Manager · 2018-03-16T14:54:35.645Z · EA · GW

K. I'll consider my wrist duly slapped!

Comment by Arepo on Cognitive and emotional barriers to EA's growth · 2018-03-12T01:35:31.881Z · EA · GW

Great stuff! A few quibbles:

  • It feels odd to specify an exact year EA (or any movement) was 'founded'. GiveWell (surprisingly not mentioned other than as a logo on slide 6) has been around since 2007; MIRI since 2000; FHI since 2005; Giving What We Can since 2009. Some or all of these (eg GWWC) didn't exactly have a clear founding date, though, rather becoming more like their modern organisations over years. One might not consider some of them to be as strictly 'EA orgs' as others - but that's kind of the point.

  • I'd be wary of including 'moral offsetting' as an EA idea. It's fairly controversial, and sounds like the sort of thing that could turn people off the other ideas

  • Agree with others that overusing the word 'utilitarianism' seems unnecessary and not strictly accurate (any moral view that included an idea of aggregation is probably sufficient, which is probably all of them to some degree).

  • Slide 12 talks about suffering exclusively; without getting into whether happiness can counterweigh it, it seems like it could mention positive experiences as well

  • I'd be wary of criticising intuitive morality for not updating on moral uncertainty. The latter seems like a fringe idea that's received a lot of publicity in the EA community, but that's far from universally accepted even by eg utilitarians and EAs

  • On slide 18 it seems odd to have an 'other' category on the right, but omit it on the left with a tiny 'clothing' category. Presumably animals are used and killed in other contexts than those four, so why not just replace clothing with 'other' - which I think would make the graph clearer

  • I also find the colours on the same graph a bit too similar - my brain keeps telling me that 'farm' is the second biggest categorical recipient when I glance at it, for example

  • I haven't read the Marino paper and now want to, 'cause it looks like it might update me against this, but provisionally: it still seems quite defensible to believe that chickens experience substantially less total valence per individual than larger animals, esp mammals, even if it's becoming rapidly less defensible to believe that they don't experience something qualitatively similar to our own phenomenal experiences. [ETA] Having now read-skimmed it, I didn't update much on the quantitative issue (though it seems fairly clear chickens have some phenomenal experience, or at least there's no defensible reason to assume they don't)

  • Slide 20 'human' should be pluralised

  • Slide 22 'important' and 'unimportant' seem like loaded terms. I would replace with something more factual like (ideally a much less clunkily phrased) 'causes large magnitude of suffering', 'causes comparatively small magnitude of suffering'

  • I don't understand the phrase 'aestivatable future light-cone'. What's aestivation got to do with the scale of the future? (I know there are proposals to shepherd matter and energy to the later stages of the universe for more efficient computing, but that seems way beyond the scope of this presentation, and presumably not what you're getting at)

  • I would change 'the species would survive' on slide 25 to 'would probably survive', and maybe caveat it further, since the relevant question for expected utility is whether we could reach interstellar technology after being set back by a global catastrophe, not whether it would immediately kill us (cf eg https://www.openphilanthropy.org/blog/long-term-significance-reducing-global-catastrophic-risks) - similarly I'd be less emphatic on slide 27 about the comparative magnitude of climate change vs the other events as an 'X-risk', esp where X-risk is defined as here: https://nickbostrom.com/existential/risks.html

  • Where did the 10^35 number for future sentient lives come from for slide 26? These numbers seem to vary wildly among futurists, but that one actually seems quite small to me. Bostrom estimates 10^38 lives lost just for a century's delayed colonization. Getting more wildly speculative, Isaac Arthur, my favourite futurist, estimates a galaxy of Matrioshka brains could emulate 10^44 minds - it's slightly unclear, but I think he means running them at normal human subjective speed, which would give them about 10^12 times the length of a human life between now and the end of the stelliferous era. The number of galaxies in the Laniakea supercluster is approx 10^5, so that would be 10^61 total (the arithmetic is sketched after this list), which we can shade by a few orders of magnitude to account for inefficiencies etc and still end up with a vastly higher number than yours. And if Arthur's claims about farming Hawking radiation and gravitational energy in the post-stellar eras are remotely plausible, then the number of sentient beings in the Black Hole era would dwarf that number again! (ok, this maybe turned into an excuse to talk about my favourite video/podcast series)

  • Re slide 29, I think EA has long stopped being 'mostly moral philosophers & computer scientists', if it ever strictly was, although they're obviously (very) overrepresented. To what end do you note this, though? It maybe makes more sense in the talk, but in the context of the slide, it's not clear whether it's a boast about a great status quo or a call to arms about a need for change

  • I would say EA needs more money and talent - there are still tonnes of underfunded projects!
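
A rough sketch of the arithmetic behind the 10^61 figure above, taking the cited estimates at face value:

$10^{44} \ \text{minds per galaxy} \times 10^{12} \ \text{lifetimes per mind over the stelliferous era} \times 10^{5} \ \text{galaxies} \approx 10^{61} \ \text{lives}$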

Comment by Arepo on Founders Pledge is seeking a Community Manager · 2018-03-09T00:34:36.680Z · EA · GW

I'm agnostic on the issue. FB groups have their own drawbacks, but I appreciate the clutter concern. In the interest of balance, perhaps anyone who agrees with you can upvote your comment, anyone who disagrees can upvote this comment (and hopefully people won't upvote them for any other reason) and if there's a decent discrepancy we can consider the question answered?

Comment by Arepo on Why not to rush to translate effective altruism into other languages · 2018-03-08T00:10:13.309Z · EA · GW

Seconding Evan - it's great to have this laid out as a clear argument.

Re this:

> In this way, any kind of broad based outreach is risky because it’s hard to reverse. Once your message is out there, it tends to stick around for years, so if you get the message wrong, you’ve harmed years of future efforts. We call this the risk of “lock in”.

I think there are some ways that this could still pan out as net positive, in reverse order of importance:

1) It relies on the arguments against E2G as a premium EA cause, which I'm still sceptical of given numerous very large funding gaps in EA causes and orgs. Admittedly, in the case of China (and other semi-developed countries) the case against E2G seems stronger, since the potential earnings are substantially lower, and the potential for direct work might be as strong or stronger.

2) Depending on how you discount over time, and (relatedly) how seriously you take the haste consideration, getting a bunch of people involved sooner might be worth slower takeup later.

3) You mentioned somewhere in the discussion that you've rarely known anyone to be more amenable to EA because they'd encountered the ideas before, but this seems like underestimating the nudge effects on which 99% of marketing is based. Almost no-one ever consciously thinks 'given that advert, I'm going to buy that product' - but when you see the product on the shelf, it just feels marginally more trustworthy because you already 'know' it. It seems like mass-media EA outreach could function similarly. If so, lock-in might be a price worth paying.


This isn't to say that I think your argument is wrong, just that I don't yet think it's clear-cut.

It also seems like the risk/reward ratio might vary substantially from country to country, so it's perhaps worth thinking about each major economy separately, at least?

To the degree that the argument does vary from country to country, I wonder whether there's any mileage in running some experiments with outreach in less economically significant countries, esp when they have historically similar cultures? Eg perhaps for China, it would be worth trialling a comparatively short termist strategy in Taiwan.

Comment by Arepo on Why not to rush to translate effective altruism into other languages · 2018-03-07T23:34:39.762Z · EA · GW

This seems no more of a concern (to me, less of one) than the countervailing consideration that having a diversity of languages and cultures would help avoid it becoming tribalised.

Also, re the idea of coordination, cf my comment above about 'thought leaders'. I know it's something Will's been pushing for, but I'm concerned about the overconcentration of influence in eg EA funds (although that's a slightly different issue from an overemphasis on the ideas of certain people)

Comment by Arepo on Why not to rush to translate effective altruism into other languages · 2018-03-07T18:03:21.720Z · EA · GW

Somewhat tangentially, am I unusual in finding the idea of 'thought leaders' for a movement about careful and conscientious consideration of ideas profoundly uncomfortable?

Comment by Arepo on EA Funds hands out money very infrequently - should we be worried? · 2018-02-11T21:38:21.850Z · EA · GW

Huh, that seems like a missed opportunity. I know very little about investing, but aren't there short-term investments with modest returns that would have a one-off setup cost for the fund, such that all future money could go into them fairly easily?

Comment by Arepo on The almighty Hive will · 2018-02-02T23:01:45.197Z · EA · GW

Keep in mind such insurance can happen at pretty much any scale - per Joey's description (above) of richer EAs just providing some support for poorer friends (even if the poorer friends are actually quite wealthy and E2G), for organisations supporting their employees, donors supporting their organisations (in the sense of giving them licence to take risks of financial loss that have positive EV), or EA collectives (such as the EA funds) backing any type of smaller entity.

Comment by Arepo on EA Funds hands out money very infrequently - should we be worried? · 2018-02-01T00:39:46.781Z · EA · GW

I also feel that, perhaps not now but if they grow much more, it would be worth sharing the responsibility among more than just one person per fund. They don't have to disagree vociferously on many subjects, just provide a basic sanity check on controversial decisions (and spreading the work might speed things up if research time is a limiting factor)

Comment by Arepo on The almighty Hive will · 2018-01-30T20:37:05.122Z · EA · GW

I pretty much agree with this - though I would add that you could also spend the money on just attracting existing talent. I doubt the Venn diagram of 'people who would plausibly be the best employee for any given EA job' and 'people who would seriously be interested in it given a relatively low EA wage' always forms a perfect circle.

Comment by Arepo on How scale is often misused as a metric and how to fix it · 2018-01-30T20:27:33.177Z · EA · GW

I read this the same way as Max. The issue of the cost to solve (eg) all cases of malaria is really tractability, not scale. Scale is how many people would be helped (and to what degree) by doing so. Divide the latter by the former and you have a sensible-looking cost-benefit analysis (one that is sensitive to the 'size and intensity of the problem', ie the former).
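
One way to write the relationship described above (my formulation, not the original post's):

$\text{cost-effectiveness} \approx \frac{\text{number of people helped} \times \text{degree of help}}{\text{cost to solve}} = \frac{\text{scale}}{\text{tractability, expressed as cost}}$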

I do think there are scale-related issues with drawing lines between 'problems', though - if a marginal contribution to malaria nets now achieves twice as much good as the same marginal contribution would in 5 years, are combatting malaria now and combatting malaria in five years 'different problems', or do you just try to average out the cost-benefit ratio between somewhat arbitrary points (eg now and when the last case of malaria is prevented/cured)? But I also think the models Max and Owen have written about on the CEA blog do a decent job of dealing with this kind of question.

Comment by Arepo on The almighty Hive will · 2018-01-30T20:11:38.331Z · EA · GW

I had a feeling that might be the case. That page still leaves some possible alternatives, though, eg this exemption:

> an employer is usually expected to provide accommodation for people doing that type of work (for example a manager living above a pub, or a vicar looking after a parish)

It seems unlikely, but worth looking at whether developing a sufficient culture of EA orgs offering accommodation might satisfy the 'usually expected' criterion.

It also seems a bit vague about what would happen if the EA org actually owned the accommodation rather than reimbursing rent as an expense, or if a wealthy EA would-be donor did, and let employees (potentially of multiple EA orgs) stay in it for little or no money (and if so, in the latter case, whether 'wealthy would-be donor' could potentially be a conglomerate a la EA funds)

There seems to be at least some precedent for this in the UK, in that some schools and universities offer free accommodation to their staff, which doesn't seem to come under any of the exemptions listed on the page.

Obviously other countries with an EA presence might have more/less flexibility around this sort of thing. But if you have an organisation giving accommodation to 10 employees in a major developed-world city, it seems like you'd be saving (in the UK) 20% tax on something in the order of £800 per month per employee, ie roughly £1,900 per employee per year (around £19,000 across the ten), which seems like a far better return than investing the money would get (not to mention, if it's offered as a benefit of the job, being essentially doubly invested - once on the tax savings, once on the normal value of owning a property).

So while I'm far from confident that it would be ultimately workable, it seems like there would be high EV in an EA with tax law experience looking into it in each country with an EA org.

Comment by Arepo on Against neglectedness · 2017-11-06T20:16:15.804Z · EA · GW

Ta Holly - done.