Comment by gregory_lewis on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2019-02-16T18:44:01.262Z · score: 6 (5 votes) · EA · GW

Excellent. This series of interviews with superforecasters is also interesting. [H/T Ozzie]

Comment by gregory_lewis on EA Survey 2018 Series: Donation Data · 2018-12-10T23:45:59.445Z · score: 5 (3 votes) · EA · GW

Thanks. I should say that I didn't mean to endorse stepwise when I mentioned it (for reasons Gelman and commenters note here), but rather that it might be something one had tried, given it is the variable selection technique available 'out of the box' in programs like STATA or SPSS (it is something I used to use when I started doing work like this, for example).

Although not important here (but maybe helpful for next time), I'd caution against leaning too heavily on goodness-of-fit measures (e.g. AIC going down, R2 going up) in assessing the model, as one tends to end up over-fitting. I think the standard recommendations are something like:

  • Specify a model before looking at the data, and caveat any further explanations as post hoc (which sounds like essentially what you did).
  • Split your data into an exploration and confirmation set, where you play with whatever you like on the former, then use the model you think is best on the latter and report these findings (better, although slightly trickier, are things like k-fold cross validation rather than a single holdout).
  • LASSO, ridge regression (or related regularisation methods) if you are going to select predictors 'hypothesis free' on your whole data (a rough sketch of this sort of approach is below).

(Further aside: Multiple imputation methods for missing data might also be worth contemplating in the future, although it is a tricky judgement call).
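
For concreteness, here is a minimal sketch of the kind of approach I mean (a held-out confirmation set plus cross-validated LASSO). The dataframe and column names are purely illustrative assumptions, not the survey's actual variables:

```python
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("survey.csv")  # hypothetical file
X = df[["income", "age", "years_in_ea", "gwwc_member"]]  # assumed candidate predictors
y = df["donation_total"]  # assumed outcome

# Hold out a confirmation set; any exploration happens on the training portion only.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# LASSO with 5-fold cross-validation picks the regularisation strength,
# shrinking uninformative coefficients towards (or exactly to) zero.
model = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0))
model.fit(X_train, y_train)

# Judge the model on the held-out set, not on in-sample R2/AIC.
print("Held-out R2:", model.score(X_test, y_test))
```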

Comment by gregory_lewis on Giving more won't make you happier · 2018-12-10T23:13:58.822Z · score: 27 (13 votes) · EA · GW

Neither of your examples backs up your point.

The 80,000 Hours article you cite notes in its summary only that:

Giving some money to charity is unlikely to make you less happy, and may well make you happier. (My emphasis)

The GWWC piece reads thus:

Giving 10% of your income to effective charities can make an incredible difference to some of the most deprived people in the world. But what effect will giving have on you? You may be concerned that it will damage your quality of life and make you less happy. This is a perfectly reasonable concern, and there is no shame in wanting to live a full and happy life.

The good news is that giving can often make you happier.... (My emphasis)

As I noted in prior discussion, not only do these sources not claim 'giving effectively will increase your happiness', but I'm not aware of this being claimed by any major EA source. Thus the objection "This line of argument confuses the effect of donating at all with the effect of donating effectively" targets a straw man.

Comment by gregory_lewis on Open Thread #43 · 2018-12-09T19:17:24.830Z · score: 12 (5 votes) · EA · GW

My impression FWIW is that the 'giving makes you happier' point wasn't/isn't advanced to claim that the optimal portfolio for one's personal happiness would include (e.g.) 10% of charitable donations (to effective causes), but that doing so isn't such a 'hit' to one's personal fulfilment as it appears at first glance. This is usually advanced in conjunction with the evidence on diminishing returns to money (i.e. even if you just lost - say - 10% of your income, if you're a middle class person in a rich country, this isn't a huge loss to your welfare - and given this evidence on the wellbeing benefits to giving, the impact is likely to be reduced further).

E.g. (and with apologies to the reader for inflicting my juvenilia upon them):

[Still being in a high global wealth percentile post-giving] partly explains why I don’t feel poorly off or destitute. There are other parts. One is that giving generally makes you happier, and often more so than buying things for yourself. Another is that I am fortunate in non-monetary respects: my biggest medical problem is dandruff, I have a loving family, a wide and interesting circle of friends, a fulfilling job, an e-reader which I can use to store (and occasionally read) the finest works of western literature, an internet connection I should use for better things than loitering on social media, and so on, and so on, and so on. I am blessed beyond all measure of desert.
So I don’t think that my giving has made me ‘worse off’. If you put a gun to my head and said, “Here’s the money you gave away back. You must spend it solely to further your own happiness”, I probably wouldn’t give it away: I guess a mix of holidays, savings, books, music and trips to the theatre might make me even happier (but who knows? people are bad at affective forecasting). But I’m pretty confident giving has made me happier compared to the case where I never had the money in the first place. So the downside looks like, “By giving, I have made myself even happier from an already very happy baseline, but foregone opportunities to give myself a larger happiness increment still”. This seems a trivial downside at worst, and not worth mentioning across the scales from the upside, which might be several lives saved, or a larger number of lives improved and horrible diseases prevented.

Comment by gregory_lewis on EA Survey 2018 Series: Donation Data · 2018-12-09T13:16:07.099Z · score: 3 (2 votes) · EA · GW

Thanks for these interesting results. I have a minor technical question (which I don't think was covered in the methodology post, nor in the Github repository from a quick review):

How did you select the variables (and interaction term) for the regression model? A priori? Stepwise? Something else?

Comment by gregory_lewis on EA Survey 2018 Series: Community Demographics & Characteristics · 2018-11-27T19:44:30.141Z · score: 2 (1 votes) · EA · GW

Minor: I'd say the travel times in 'Loxbridge' are somewhat longer than an hour.

Time from (e.g.) Oxford train station to London train station is an hour, but adding on the travel time from 'somewhere in Oxford/London to the train station' would push this up to ~2 hours. Oxford to Cambridge takes 3-4 hours by public transport.

The general topic looks tricky. I'd guess if you did a kernel density map over the bay, you'd get a (reasonably) even gradient over the 3k square miles. If you did the same over 'Loxbridge' you'd get very strong foci over the areas that correspond to London/Oxford/Cambridge. I'd also guess you'd get reasonable traffic between subareas in the bay area, but in Loxbridge you'd have some Oxford/London and Cambridge/London (a lot of professionals make this sort of commute daily) but very little Oxford/Cambridge traffic.

What criteria one uses to chunk large conurbations into natural-language groupings looks necessarily imprecise. I'd guess if you had the ground truth and ran typical clustering algos on it, you'd probably get a 'bay area' cluster though. What might be more satisfying is establishing whether the bay acts like a single community: if instead there is a distinguishable (e.g.) East Bay and South Bay community, where people in one or the other group tend to go to (e.g.) events in one or the other and visit the other occasionally (akin to how an Oxford EA like me may mostly attend Oxford events but occasionally visit London ones), this would justify splitting it up.

Comment by gregory_lewis on Cross-post: Think twice before talking about ‘talent gaps’ – clarifying nine misconceptions, by 80,000 Hours. · 2018-11-20T08:29:15.217Z · score: 6 (3 votes) · EA · GW

Although orgs tacitly colluding with one another to pay their staff less than they otherwise would may also have an adverse effect on recruitment and retention...

Comment by gregory_lewis on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-17T20:55:52.930Z · score: 27 (13 votes) · EA · GW

Sure.

I don't take "[DGB] misrepresents sources structurally, and this is a convincing sign it is written in bad faith" to be any of:

  • True. The OP strikes me as tendentiously uncharitable and 'out for blood' (given earlier versions were calling for Will to be disavowed by EA per Gleb Tsipursky, trust in Will down to 0, etc.), and the very worst that should be inferred, even if we grant all the matters under dispute in its favour - which we shouldn't - would be something like "sloppy, and perhaps with a subconscious finger on the scale tilting the errors to be favourable to the thesis of the book" rather than deceit, malice, or other 'bad faith'.
  • Helpful. False accusations of bad faith are obviously toxic. But even true ones should be made with care. I was one of the co-authors on the Intentional Insights document, and in that case (with much stronger evidence suggestive of 'structural misrepresentation' or 'writing things in bad faith') we refrained as far as practicable from making these adverse inferences. We were criticised for this at the time (perhaps rightly), but I think this is the better direction to err in.
  • Kind. Self explanatory.

I'm sure Siebe makes their comment in good faith, and I agree some parts of the comment are worthwhile (e.g. I agree it is important that folks in EA can be criticised). But not overall.

Comment by gregory_lewis on Crohn's disease · 2018-11-16T14:15:46.358Z · score: 9 (2 votes) · EA · GW

In hope but little expectation:

You could cast about for various relevant base-rates ("What is the chance of any given proposed conjecture in medical science being true?" "What is the chance of a given medical trial giving a positive result?"). Crisp data on these questions are hard to find, but the proportion for either is comfortably less than even. (Maybe ~5% for the first, ~20% for the second).

From something like this one can make further adjustments based on the particular circumstances, which are generally in the adverse direction:

  • Typical trials have more behind them than an n=6 non-consecutive case series, and so this should be less likely to replicate than the typical member of this class.
  • (In particular, heterodox theories of pathogenesis tend to do worse, and on cursory search I can find alternative theories of Crohn's which seem about as facially plausible as this one.)
  • The wild theory also imposes a penalty: even if the minimal prediction doesn't demand the wider 'Malassezia causes it', etc., that the hypothesis was generated through these means is a further cost.
  • There's also information I have from medical training which speaks against this (i.e. if antifungals had such dramatic effects as proposed, they probably would have come to attention somewhat sooner).
  • All the second order things I noted in my first comment.

As Ryan has explained, standard significance testing puts a floor of a 2.5% chance of a (false) positive result in any trial, even if the true effect is zero. There is some chance the ground truth really is that itraconazole cures Crohn's (given some evidence of TNF-a downstream effects, background knowledge of fungal microbiota dysregulation, and the very slender case series), which gives it a small boost above this, although this in itself is somewhat discounted by the limited power of the proposed study (i.e. even if itraconazole works, the study might miss it).
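
To make that arithmetic concrete, here is a minimal sketch; the prior and power figures are my own illustrative assumptions rather than estimates anyone should lean on:

```python
p_works = 0.01            # assumed prior that itraconazole really cures Crohn's
power = 0.5               # assumed chance an n=40 trial detects a real effect
alpha_one_sided = 0.025   # false-positive floor favouring treatment under the null

p_positive = p_works * power + (1 - p_works) * alpha_one_sided
print(f"P(positive trial result) ~ {p_positive:.3f}")  # ~0.03, i.e. roughly 3%
```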

Comment by gregory_lewis on Crohn's disease · 2018-11-15T22:54:38.027Z · score: 10 (3 votes) · EA · GW

~3% (Standard significance testing means there's a 2.5% chance of a false positive result favouring the treatment group under the null).

Comment by gregory_lewis on Crohn's disease · 2018-11-15T19:59:49.140Z · score: 9 (2 votes) · EA · GW

The idea of doing an intermediate piece of work is so one can abandon the project if it is negative whilst having spent less than 500k. Even independent of the adverse indicators I note above, the prior on a case series finding replicating in an RCT is very low.

Another cheap option would be talking to the original investigators. They may have reasons why they haven't followed this finding up themselves.

Comment by gregory_lewis on Crohn's disease · 2018-11-15T15:45:05.007Z · score: 7 (4 votes) · EA · GW

A cheaper alternative (also by about an order of magnitude) is to do a hospital record study where you look at subsequent Crohn's admissions or similar proxies of disease activity in those recently prescribed antifungals versus those who weren't.

I also imagine it would get better data than a poorly powered RCT.

Comment by gregory_lewis on Crohn's disease · 2018-11-14T08:27:43.642Z · score: 29 (11 votes) · EA · GW

I strong-downvoted this post. I had hoped the reasons why would be obvious. Alas not.

Scientific (in)credibility

The comments so far have mainly focused on the cost-effectiveness calculation. Yet it is the science itself that is replete with red flags: from grandiose free-wheeling, to misreporting cited results, to gross medical and scientific misunderstanding. [As background: I am a doctor who has published on the genetics of inflammatory bowel disease]

Several examples before I succumbed:

  • Samuel et al. 2010 is a retrospective database review of 6 patients treated with itraconazole for histoplasmosis in Crohn's Disease (CD) (N.B. Observational, not controlled, and as a letter, editor- rather than peer-reviewed). It did not "report it cured patients with CD by clearing fungus from the gut": the authors' own (appropriately tentative - unlike the OP) conjecture was that any therapeutic effect was mediated by immunomodulatory effects of azole drugs downstream of TNF-a. It emphatically didn't "suggest oral itraconazole may be effective against Malassezia in the gut" (as claimed in the linked website's FAQ), as the presence or subsequent elimination of Malassezia was never assessed - nor was Malassezia mentioned.
  • Crohn's disease is not a spondyloarthritis! (and neither is psoriasis, ulcerative colitis, or acute anterior uveitis). As the name suggests, spondyloarthritides are arthritides (i.e. diseases principally of joints - the 'spondylo' prefix points to joints between vertebrae); Crohn's is a disease of the GI tract. Crohn's can be associated with a spondyloarthritis (enteropathic spondyloarthritis). As the word 'associated' suggests, these are not one and the same: only a minority of those with Crohn's develop joint sequelae. (cf. standard lists of spondyloarthritides - note Crohn's simpliciter isn't on them).
  • Chronic inflammation isn't a symptom ('spondyloarthritide' or otherwise), and symptoms (rather than diseases) are only cured in the colloquial use of the term.
  • However one parses "[P]roving beyond all doubt that Crohn's disease is caused by this fungus will very likely lead to a cure for all spondyloarthritide symptoms using antifungal drugs." ('Merely' relieving all back pain from spondyloarthritides? Relieving all symptoms that arise from the set of (correctly defined) spondyloarthritides? Curing all spondyloarthritides? Curing (and/or relieving all symptoms of) the author's grab bag of symptoms/diseases which includes CD, ulcerative colitis, ankylosing spondylitis, psoriasis and chronic back pain?) The antecedent (one n=40 therapeutic study won't prove Malassezia causes Crohn's, especially with a competing immunomodulatory mechanism already proposed); the consequent (antifungal drugs as some autoimmune disease panacea of uncertain scope); and the implication (even if Malassezia is proven to cause Crohn's, the likelihood of this result (and therapy) generalising is basically nil) are all absurd.
  • The 'I love details!' page notes at the end "These findings satisfy Koch’s postulates for disease causation, albeit scattered across several related diseases." Which demonstrates the author doesn't understand Koch's postulates: you can't 'mix and match' across diseases, and the postulates need to be satisfied in sequence (i.e. you find the microorganism only present in cases of the disease (1), culture it (2), induce the disease in a healthy individual with such a culture (3), and extract the organism again from such individuals (4)).
  • The work reported on that page, here, and elsewhere also directly contradicts Koch's first postulate. Malassezia is not found in abundance in cases of the disease (pick any of them) yet absent in healthy individuals (postulate 1): the author himself states Malassezia is ubiquitous across individuals, diseased or not (and this ubiquity is cited as why this genus is being proposed in the first place).

Intermezzo

I'd also rather not know how much has been spent on this so far. Whatever it is, investing another half a million dollars is profoundly ill-advised (putting the money in a pile and burning it is mildly preferable, even when one factors in climate change impacts). At least an order of magnitude cheaper is buying the time of someone who works in Crohn's to offer their assessment. I doubt it would be less scathing than mine.

Meta moaning

Most EAs had the good judgement to avoid the terrible mistake of a medical degree. One of the few downsides of so doing is (usually) not possessing the background knowledge to appraise something like this. As a community, we might worry about our collective understanding being led astray without the happy accident of someone with specialised knowledge (yet atrocious time-management and prioritisation skills among manifold other relevant personal failings) happening onto the right threads.

Have no fear: I have some handy advice/despairing pleas:

  • Medical science isn't completely civilizationally inadequate, and thus projects that resort to being pitched directly to inexpert funders have a pretty poor base rate (cf. DRACO)
  • Although these are imperfect, if the person behind the project doesn't have credentials in a relevant field (bioinformatics rather than gastroenterology, say), has only a fairly slender relevant publication record, and has attracted scant or no interest from recognised experts, these are also adverse indicators. (Remember the Nobel laureate who endorsed Vit C megadosing?)
  • It can be hard to set the right incredulity prior: we all want to take advantage of our risk neutrality to chase hits, but not all upsides that vindicate a low likelihood are credible. A rule of thumb I commend is 10^-(3+n(miracles)) (worked through briefly after this list). So when someone suggests they have discovered the key mechanism of action (and consequent fix) for Crohn's disease, and ulcerative colitis, and ankylosing spondylitis, and reactive arthritis, and psoriasis, and psoriatic arthritis, and acute anterior uveitis, and oligoarthritis, and multiple sclerosis, and rheumatoid arthritis, and systemic lupus erythematosus, and prostate cancer, and benign prostatic hyperplasia, and chronic back pain (n~14), there may be some cause for concern.
  • Spot-checking bits of the write-up can be a great 'sniff test', especially in key areas where one isn't sure of one's ground ("Well, the Fermi estimate seems reasonable, but I wonder what this extra-sensory perception thing is all about").
  • Post value tends to be multiplicative (e.g. the antecedent of "If we have a cure for Crohn's, how good would it be?" may be the crucial consideration), and so it's key to develop an understanding across the key topics. Otherwise one risks conversational bikeshedding. Worse, there could be Sokal-hoax-esque effects where nonsense can end up well-received (say, moderately upvoted) provided it sends the right signals on non-substantive metrics like style, approach, sentiment, etc.
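
Working the rule of thumb through for this case, purely as an illustration (taking n(miracles) ~ 14 as above):

```latex
P(\text{all claims true}) \sim 10^{-(3 + n_{\text{miracles}})} = 10^{-(3+14)} = 10^{-17}
```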

I see these aspects of epistemic culture as an important team sport, with 'amateur' participation encouraged (for my part, implored). I had hoped that when I clicked the 'downvote' arrow for a few seconds I could leave this to fade into obscurity thereafter. When instead I find it being both upvoted and discussed like it has been, I become worried that it might actually attract resources from other EAs who might mistakenly take the conversation thus far to represent the balance of reason, and detract from EA's reputation with those who recognise it does not (cf. "The scientific revolution for altruism" aspiration). So I feel I have to write something more comprehensive. This took a lot longer than a few seconds, although fortunately my time is essentially worthless. Next time we may not be so lucky.

Comment by gregory_lewis on Even non-theists should act as if theism is true · 2018-11-09T22:07:37.681Z · score: 5 (3 votes) · EA · GW

The meat of this post seems to be a version of Plantinga's EAAN.

Comment by gregory_lewis on Mind Ease: a promising new mental health intervention · 2018-10-23T22:17:21.312Z · score: 19 (16 votes) · EA · GW

[based on an internally run study of 250 uses] Mind Ease reduces anxiety by 51% on average, and helps people feel better 80% of the time.

Extraordinary claims like this (and it's not the only one - e.g. "very likely" to help myself or people I know who suffer from anxiety elsewhere in the post, "And for anxiety [discovering which interventions work best] is what we've done", '45% reduction in negative feelings' in the app itself) demand much fuller and more rigorous description and justification. E.g. (and cf. PICO):

  • (Population): How are you recruiting the users? Mturk? Positly? Convenience sample from sharing the link? Are they paid for participation? Are they 'people validated (somehow) as having an anxiety disorder' or (as I guess) 'people interested in reducing their anxiety/having something to help when they are particularly anxious?'
  • (Population): Are the "250 uses" 250 individuals each using Mindease once? If not, what's the distribution of duplicates?
  • (Intervention): Does "250 uses" include everyone who fired up the app, or only those who 'finished' the exercise (and presumably filled out the post-exposure assessment)?
  • (Comparator): Is this a pre-post result? Or is this vs. the sham control mentioned later? (If so, what is the effect size on the sham control?)
  • (Outcome): If pre-post, is the post-exposure assessment immediately subsequent to the intervention?
  • (Outcome): "reduces anxiety by 51%" on what metric? (Playing with the app suggests 5-level Likert scales?)
  • (Outcome): Ditto 'feels better' (measured how?)
  • (Outcome): Effect size (51% from what to what?) Inferential stats on the same (SE/CI, etc.)

There are also natural external validity worries. If (as I think it is) the objective is 'immediate symptomatic relief', results are inevitably confounded by anxiety being a symptom that is often transient (or at least fluctuating in intensity), and one with high rates of placebo response. An app which does literally nothing but wait a couple of days before assessing (symptomatic) anxiety again will probably show great reductions in self-reported anxiety on pre-post, as people will be preferentially selected to use the app when feeling particularly anxious, and severity will tend to regress. This effect could apply over much shorter intervals (e.g. those required to perform a recommended exercise).
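
As a toy illustration of that last point, here is a minimal simulation (all numbers are made-up assumptions) of a 'do nothing' app used only at moments of high anxiety:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
baseline = rng.normal(50, 5, n)        # each person's typical anxiety level (arbitrary units)
pre = baseline + rng.normal(0, 15, n)  # anxiety at the moment they open the app
post = baseline + rng.normal(0, 15, n) # anxiety a little later, with no intervention at all

users = pre > 70                       # people only reach for the app when unusually anxious
reduction = (pre[users] - post[users]).mean() / pre[users].mean()
print(f"Apparent anxiety 'reduction' from a null intervention: {reduction:.0%}")  # ~30% here
```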

(Aside: An interesting validity test would be using GAD-7 for pre-post assessment. As all the items on GAD-7 are 'how often do you get X over the last 2 weeks', significant reduction in this metric immediately after the intervention should raise alarm).

In candour (and with regret), this write-up raises a lot of red flags for me. There is a large relevant literature which this post does not demonstrate command of. For example, there's a small hill of descriptive epidemiology papers on the prevalence of anxiety as a symptom or of anxiety disorders - including large population samples for GAD-7 - which look like better routes to prevalence estimates than conducting a 300-person survey (and if you do run such a survey, finding that 73% of your sample score >5 on GAD-7, when population studies (e.g.) give means and medians of ~2-3 and proportions >5 of ~25%, prompts obvious questions).

Likewise there are well-understood pitfalls in conducting research (some of them particularly acute for intervention studies, and even more so for intervention studies on mental health), and the 'marketing copy' style of presentation (heavy on exuberant confidence, light on how this is substantiated) gives little reassurance they were in fact avoided. I appreciate "writing for an interested lay audience" (i.e. this one) demands a different style than writing to cater to academic scepticism. Yet the latter should be satisfied (either here or in a linked write-up), especially when attempting pioneering work in this area and claiming "extraordinarily good" results. We'd be cautious in accepting this from outside sources - we should mete out similar measure to projects developed 'in house'.

I hope subsequent work proves my worries unfounded.

Comment by gregory_lewis on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-14T08:47:21.969Z · score: 27 (26 votes) · EA · GW

My hunch is (as implied elsewhere) 'talent-constraint' with 'talent' not further specified is apt to mislead. My impression for longtermist orgs (I understand from Peter and others this may apply less to orgs without this as the predominant focus) is there are two broad classes, which imperfectly line up with 'senior' versus 'junior'.

The 'senior' class probably does fit (commonsensically understood) 'talent-constraint', in that orgs or the wider ecosystem want to take everyone who clears a given bar. Yet these bars are high even when conditioned on the already able cohort of (longtermist/)EAs. It might be things like 'ready to run a research group', 'can manage operations for an org' (cf. Tara's and Tanya's podcasts), 'subject matter expertise/ability/track record'.

One common feature is that these people add little further load on current (limited) management capacity, either because they are managing others or are already 'up to speed' to contribute themselves without extensive training or supervision. (Aside: I suspect this is an under-emphasised bonus of 'value-aligned operations staff' - their tacit knowledge of the community/mission/wider ecosystem may permit looser management than bringing on able professionals 'from outside'.) From the perspective of the archetypal 'pluripotent EA' a few years out from undergrad, these are skills which are hard to develop and harder to demonstrate.

More 'junior' roles are those where the criteria are broader (at least in terms of legible ones: 'what it takes' to be a good generalist researcher may be similarly rare to 'what it takes' to be a good technical AI safety researcher, but more can easily 'rule themselves out' of the latter than the former), where 'upskilling' is a major objective, or where there's expectation of extensive 'hands-on' management.

There might be similarly convex returns to getting a slightly better top candidate (e.g. 'excellent versus very good' might be 3x rather than 1.3x). Regardless, there will not be enough positions for all the talented candidates available: even if someone at an org decided to spend their time only managing and training junior staff (and haste considerations might lead them to spending more of their time doing work themselves than investing in the 'next generation'), they can't manage dozens at a time.

I think confusing these two broad classes is an easy way of burning a lot of good people (cf. Denise's remarks). Alice, the 23-year-old management consultant, might reason on current messaging, "EA jobs are much better for the world than management consultancy, and they're after good people - I seem to fit the bill, so I should switch career into this". She might then forsake her promising early career for an unedifying and unsuccessful period as an 'EA perennial applicant', ending up worse off than she was at the start. EA has a vocational quality to it - it's key that it does not become a siren song.

There seem to be a few ways to do this better, as alluded to in prior discussions here and elsewhere:

0) If I'm right, it'd be worth communicating the 'person spec' for cases where (common-sense) talent constraint applies, and where we really would absorb basically as many as we could get (e.g. "We want philosophers to contribute to GPR, and we're after people who either already have a publication record in this area, or have signals of 'superstar' ability even conditioned on philosophy academia. If this is you, please get in touch.").

1) Concurrently, it'd be worth publicising typical applicants-per-place ratios or similar measures of competition for hiring rounds in more junior roles, to allow applicants to be better calibrated and to emphasise the importance of a plan B. (e.g. "We have early-career roles for people thinking of working as GPR researchers, which serves the purpose of talent identification and development. We generally look for XYZ. Applications for this are extremely competitive (~12:1). Other good first steps for people who want to work in this field are these"). {MIRI's research fellows page does a lot of this well}.

2) It would be good for there to be further work addressed to avoiding 'EA underemployment', as I would guess growth in strong candidates for EA roles will outstrip intra-EA opportunities. Some possibilities:

2.1) There are some areas I'd want to add to the longtermist portfolio which might be broadened into useful niches for people with comparative advantage in them (macrohistory, productivity coaching and nearby versions, EA-relevant bits of psychology, etc.) I don't think these are 'easier' than the existing 'hot' areas, but they are hard in different ways, and so broaden opportunities.

2.2) Another option would be 'pre-caching human capital' in areas which are plausible candidates for becoming important as time goes on. I can imagine something like international relations turning out to be crucial (or, contrariwise, relatively unimportant), but rather than waiting for this to be figured out, it seems better for people to coordinate and invest themselves across the portfolio of plausible candidates. (Easier said than done from the first-person perspective, as such a strategy potentially involves making an uncertain bet with many years of one's career, and if it turns out to be a bust ex post, the good ex ante EV may not be complete consolation).

2.3) There seem to be a lot of stakeholder organisations it would be good for EAs to enter for the second-order benefits, even if their direct work is of limited direct relevance (e.g. having more EAs in tech companies looks promising to me, even if they aren't doing AI safety). (Again, not easy from the first-person perspective).

2.4) A lot of skills for more 'senior' roles can and have been attained outside of the EA community. Grad school is often a good idea for researchers, and professional/management aptitude is often a transferable skill. So some of the options above can be seen as a holding-pattern/bet hedging approach: they hopefully make one a stronger applicant for such roles, but in the meanwhile one is doing useful things (and also potentially earning to give, although I think this should be a minor consideration for longtermist EAs given the field is increasingly flush with cash).

If the framing is changed to something like, "These positions are very valuable, but very competitive - it is definitely worth you applying (as you in expectation increase the quality of the appointed candidate, and the returns of a slightly better candidate are very high), but don't bet the farm (or quit the day job) on your application - and if you don't get in, here are things you could do to slant your career to have a bigger impact", I'd hope the burn risk falls dramatically: in many fields there are lots of competitive, oversubscribed positions which don't impose huge costs on unsuccessful applicants.

Comment by gregory_lewis on 2018 list of half-baked volunteer research ideas · 2018-09-20T08:55:13.801Z · score: 4 (3 votes) · EA · GW

Something similar perhaps worth exploring is putting up awards/bounties for doing particular research projects. A central clearing-house of this could be interesting (I know myself and a couple of others have done this on an ad-hoc basis - that said, efforts to produce central repositories for self-contained projects etc. in EA have not been wildly successful).

A couple of related questions/topics I'd be excited for someone to have a look at:

1. Is rationality a skill, or a trait? Stanovich's RQ correlates with IQ fairly strongly, but I imagine going through the literature could uncover how much of a positive manifold there is between 'features' of rationality that is orthogonal to intelligence, and then investigate how/whether this can be trained (with sub-questions around transfer, what seems particularly promising, etc.).

2. I think a lot of people have looked into the superforecasting literature for themselves, but a general write-up for public consumption (e.g. How 'traity' is superforecasting? What exactly does GJP do to get a reported 10% boost from pre-selected superforecasters? Are there useful heuristics people can borrow to improve their own performance beyond practice/logging predictions? (And what is the returns curve to practice, anyway?)) could spare lots of private duplication.

3. More generally, I imagine lots of relevant books (e.g. Deep Work, Superforecasting, Better Angels) could be concisely summarised. That said, I think there are already services that do this, so it's less clear whether it is worth EA time to repeat this 'in house'.

Comment by gregory_lewis on Current Estimates for Likelihood of X-Risk? · 2018-08-06T20:46:04.825Z · score: 6 (10 votes) · EA · GW

Thanks for posting this.

I don't think there are any other sources you're missing - at least, if you're missing them, I'm missing them too (and I work at FHI). I guess my overall feeling is these estimates are hard to make and necessarily imprecise: long-run, large-scale estimates (e.g. what was the likelihood of a nuclear exchange between the US and the USSR between 1960 and 1970?) are still very hard to make ex post, let alone ex ante.

One question might be how important further VoI is for particular questions. I guess the overall 'x-risk chance' may have surprisingly small action relevance. The considerations about the relative importance of x-risk reduction seem fairly insensitive to whether the risk is 10^-1 or 10^-5 (at more extreme values, you might start having Pascalian worries), and instead the discussion hinges on issues like tractability, pop ethics, etc.

Risk share seems more important (e.g. how much more worrying is AI than nuclear war?), yet these comparative judgements can be generally made in relative terms, without having to cash out the absolute values.

Comment by gregory_lewis on Leverage Research: reviewing the basic facts · 2018-08-05T17:47:05.220Z · score: 22 (24 votes) · EA · GW

[My views only]

Although few materials remain from the early days of Leverage (I am confident they acted to remove themselves from the Wayback Machine, as other sites link to Wayback versions of their old documents which now 404), there are some interesting remnants:

  • A (non-wayback) website snapshot from 2013
  • A version of Leverage's plan
  • An early Connection Theory paper

I think this material (and the surprising absence of material since) speaks for itself - although I might write more later anyway.

Per other comments, I'm also excited by the plan of greater transparency from Leverage. I'm particularly eager to find out whether they still work on Connection Theory (and what the current theory is), whether they addressed any of the criticism (e.g. 1, 2) levelled at CT years ago, whether the further evidence and argument mentioned as forthcoming in early documents and comment threads will materialise, and generally what research (on CT or anything else) have they done in the last several years, and when this will be made public.

Comment by gregory_lewis on EA Forum 2.0 Initial Announcement · 2018-07-23T15:45:00.525Z · score: 3 (3 votes) · EA · GW

Relatedly, some comments could be marked as "only readable by the author", because it's a remark about sensitive information. For example, feedback on someone's writing style or a warning about information hazards when the warning itself is also an information hazard. A risk of this feature is that it will be overused, which reduces how much information is spread to all the readers.

Forgive me if I'm being slow, but wouldn't private messages (already in the LW2 codebase) accomplish this?

Comment by gregory_lewis on EA Forum 2.0 Initial Announcement · 2018-07-21T11:52:31.509Z · score: 6 (6 votes) · EA · GW

One solution would be to demand that every down-vote comes with a reason, to which the original poster can reply.

This has been proposed a couple of times before (as has removing downvotes entirely), and I get the sentiment that writing something and having someone 'drive-by downvote' it is disheartening/frustrating (it doesn't keep me up at night, but a lot of my posts and comments have 1-2 downvotes on them even if they end up net-positive, and I don't really have a steer as to what problem the downvoters wanted to highlight).

That said, I think this is a better cost to bear than erecting a large barrier to expressions of 'less of this'. I might be inclined to downvote some extremely long and tendentious line-by-line 'fisking' criticism without wanting to become the target of a similar reply myself by having to explain why I downvoted it. I also expect a norm of 'explaining your reasoning' would lead to lots of unedifying 'rowing with the ref' meta-discussions ("I downvoted your post because of X" / "How dare you, that's completely unreasonable! So I have in turn downvoted your reply!").

Comment by gregory_lewis on Impact Investing - A Viable Option for EAs? · 2018-07-12T01:09:06.133Z · score: 3 (3 votes) · EA · GW

I'd also guess the social impact estimate would regress quite a long way to the mean if it was investigated to a similar level of depth as something like Cool Earth.

Comment by gregory_lewis on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T15:05:26.289Z · score: 4 (4 votes) · EA · GW

One key challenge I see is something like 'grant-making talent constraint'. The skills needed to make good grants (e.g. good judgement, domain knowledge, maybe tacit knowledge, maybe relevant network, possibly commissioning/governance/operations skill) are not commonplace, and hard to explicitly 'train' outside i) having a lot of money of your own to practise with, or ii) working in a relevant field (so people might approach you for advice). (Open Philanthropy's recent hiring round might provide another route, but places were limited and extraordinarily competitive).

Yet the talents needed to end up at (i) or (ii) are somewhat different, as are the skills to acquire: neither (e.g.) having a lot of money and being interested in AI safety, nor being an AI safety researcher oneself, guarantees making good AI safety grants; time one spends doing either of these things is time one cannot dedicate to gaining grant-making experience.

Dividing this labour (as the suggestions in the OP point towards) seems the way to go. Yet this can only get you so far if 'grantmaking talent' is not only limited among people with the opportunity to make grants, but limited across the EA population in general. Further, good grant-makers will gravitate to the largest pools of funding (reasonably enough, as this is where their contribution has the greatest leverage). This predictably leads to gaps in the funding ecosystem where 'good projects from the point of view of the universe' and 'good projects from the point of view of the big funders' subtly differ: I'm not sure I agree with the suggestions in the OP (i.e. upskilling people, new orgs), but I find Carl Shulman's remarks here persuasive.

Comment by gregory_lewis on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-11T06:14:39.291Z · score: 2 (2 votes) · EA · GW

I agree history generally augurs poorly for those who claim to know (and shape) the future. Although there are contrasting positive examples one can give (e.g. the moral judgements of the early Utilitarians were often ahead of their time re. the moral status of women, sexual minorities, and animals), I'm not aware of a good macrohistorical dataset that could answer this question - reality in any case may prove underpowered.

Yet whether or not things would in fact change with more democratised decision-making/intelligence-gathering/etc., it remains an open question whether this would be a better approach. Intellectual progress in many areas is no longer an amateur sport (see academia, cf. the ongoing professionalisation of many 'bits' of EA, and see generally that many important intellectual breakthroughs have historically been made by lone figures or small groups rather than more swarm-intelligence-esque methods), and there's a 'clownside' risk of a lot of enthusiastic, well-meaning, but inexperienced people making attempts that add epistemic heat rather than light (inter alia). The bar for appreciating 'X is an important issue' may be much lower than for 'can contribute usefully to X'.

A lot seems to turn on whether the relevant problems are high-serial-depth (favouring intensive effort), high-threshold (favouring potentially rare ability), or broader and relatively shallower (favouring parallelisation). I'd guess the relevant 'EA open problems' are a mix, but this makes me hesitant about a general shove in this direction.

I have mixed impressions about the items you give below (which I appreciate were meant more as quick illustration than as some 'research agenda for the most important open problems in EA'). For some, I hold resilient confidence that the underlying claim is false; for more of them I am uncertain, yet suspect progress on answering these questions can wait (/feel we could punt on these for our descendants to figure out in the long reflection). In essence, my forecast is that this work would expectedly tilt the portfolios, but not so much as to yield (what I would call) a 'cause X' (e.g. I can imagine getting evidence which suggests we should push more of a global health portfolio to mental health - or non-communicable disease - but not something so decisive that we think we should sink the entire portfolio there and withdraw from AMF/SCI/etc.).

Comment by gregory_lewis on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T10:14:18.300Z · score: 1 (1 votes) · EA · GW

Sorry for being unclear. I've changed the sentence to (hopefully) make it clearer. The idea was that there could be other explanations for why people tend to gravitate to future stuff (groupthink, information cascades, selection effects) besides the balance of reason weighing in its favour.

I do mean considerations like population ethics etc. for the second thing. :)

Comment by gregory_lewis on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-03T23:38:20.350Z · score: 3 (5 votes) · EA · GW

Excellent work. I hope you'll forgive me taking issue with a smaller point:

Given the uncertainty they are facing, most of OpenPhil's charity recommendations and CEA's community-building policies should be overturned or radically altered in the next few decades. That is, if they actually discover their mistakes. This means it's crucial for them to encourage more people to do local, contained experiments and then integrate their results into more accurate models. (my emphasis)

I'm not so sure that this is true, although it depends on how big an area you imagine will / should be 'overturned'. This also somewhat ties into the discussion about how likely we should expect to be missing a 'cause X'.

If cause X is another entire cause area, I'd be pretty surprised to see a new one in (say) 10 years which is similar to animals or global health, and even more surprised to see one that supplants the long-term future. My rationale is that I see a broad funnel where EAs tend to move into the long-term future/x-risk/AI, and once there they tend not to leave (I can think of a fair number of people who made the move from (e.g.) global health --> far future, but I'm not aware of anyone who moved from far future --> anything else). There are also people who have been toiling in the long-term future vineyard for a long time (e.g. MIRI), and the fact we do not see many people moving elsewhere suggests this is a pretty stable attractor.

There are other reasons for a cause area being a stable attractor besides all reasonable roads leading to it. That said, I'd suggest one can point to general principles which would somewhat favour this (e.g. the scope of the long-term future, and that the light-cone commons, stewarded well, permits mature moral action in the universe towards whatever in fact has most value, etc.). I'd say similar points apply, to a lesser degree, to the broad landscape of 'on reflection moral commitments', and so the existing cause areas mostly exhaust this moral landscape.

Naturally, I wouldn't want to bet the farm on what might prove to be overconfidence, but insofar as it goes it supplies less impetus for lots of exploratory work of this type. At a finer level of granularity (and so a bit further down your diagram), I see less resilience (e.g. maybe we should tilt the existing global poverty portfolio more one way or the other depending on how the cash transfer literature turns out, maybe we should add more 'avoid great power conflict' to the long-term future cause area, etc.). Yet I still struggle to see this adding up to radical alteration.

Comment by gregory_lewis on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-07-03T22:47:53.130Z · score: 4 (4 votes) · EA · GW

0: We agree potentially hazardous information should only be disclosed (or potentially discovered) when the benefits of disclosure (or discovery) outweigh the downsides. Heuristics can make principles concrete, and a rule of thumb I try to follow is to have a clear objective in mind for gathering or disclosing such information (and being wary of vague justifications like ‘improving background knowledge’ or ‘better epistemic commons’) and incur the least possible information hazard in achieving this.

A further heuristic which seems right to me is one should disclose information in the way that maximally disadvantages bad actors versus good ones. There are a wide spectrum of approaches that could be taken that lie between ‘try to forget about it’, and ‘broadcast publicly’, and I think one of the intermediate options is often best.

1: I disagree with many of the considerations which push towards more open disclosure and discussion.

1.1: I don’t think we should be confident there is little downside in disclosing dangers a sophisticated bad actor would likely rediscover themselves. Not all plausible bad actors are sophisticated: a typical criminal or terrorist is no mastermind, and so may not make (to us) relatively straightforward insights, but could still ‘pick them up’ from elsewhere.

1.2: Although a big fan of epistemic modesty (and generally a detractor of ‘EA exceptionalism’), EAs do have an impressive track record in coming up with novel and important ideas. So there is some chance of coming up with something novel and dangerous even without exceptional effort.

1.3: I emphatically disagree we are at ‘infohazard saturation’ where the situation re. infohazards ‘can’t get any worse’. I also find it unfathomable to ever be confident enough in this claim to base strategy upon its assumption (cf. eukaryote’s comment).

1.4: There are some benefits to getting out ‘in front’ of more reckless disclosure by someone else. Yet in cases where one wouldn’t want to disclose it oneself, delaying the downsides of wide disclosure as long as possible usually seems more important, and so counts against bringing this to an end by disclosing yourself, save in (rare) cases where one knows disclosure is imminent rather than merely possible.

2: I don’t think there’s a neat distinction between ‘technical dangerous information’ and ‘broader ideas about possible risks’, with the latter being generally safe to publicise and discuss.

2.1: It seems easy to imagine cases where the general idea comprises most of the danger. The conceptual step to a ‘key insight’ of how something could be dangerously misused ‘in principle’ might be much harder to make than subsequent steps from this insight to realising this danger ‘in practice’. In such cases the insight is the key bottleneck for bad actors traversing the risk pipeline, and so comprises a major information hazard.

2.2: For similar reasons, highlighting a neglected-by-public-discussion part of the risk landscape where one suspects information hazards lie has a considerable downside, as increased attention could prompt investigation which brings these currently dormant hazards to light.

3: Even if I take the downside risks as weightier than you do, one still needs to weigh these against the benefits. I take ‘general (or public) disclosure’ to have little marginal benefit over more limited disclosure targeted to key stakeholders. As the latter approach greatly reduces the downside risks, it is usually the better strategy by the lights of cost/benefit. At least trying targeted disclosure first seems a robustly better strategy than skipping straight to public discussion (cf.).

3.1: In bio (and I think elsewhere) the set of people who are relevant to setting strategy and otherwise contributing to reducing a given risk is usually small and known (e.g. particular academics, parts of the government, civil society, and so on). A particular scientist unwittingly performing research with misuse potential might need to know the risks of their work (likewise some relevant policy and security stakeholders), but the added upside to illustrating these risks in the scientific literature is limited (and the added downsides much greater). The upside of discussing them in the popular/generalist literature (including EA literature not narrowly targeted at those working on biorisk) is limited still further.

3.2: Information also informs decisions around how to weigh causes relative to one another. Yet less-hazardous information (e.g. the basic motivation given here or here, and you could throw in social epistemic steers from the prevailing views of EA ‘cognoscenti’) is sufficient for most decisions and decision-makers. The cases where this nonetheless might be ‘worth it’ (e.g. you are a decision maker allocating a large pool of human or monetary capital between cause areas) are few and so targeted disclosure (similar to 3.1 above) looks better.

3.3: Beyond the direct cost of potentially giving bad actors good ideas, the benefits of more public discussion may not be very high. There are many ways public discussion could be counter-productive (e.g. alarmism, ill-advised remarks poisoning our relationship with scientific groups, etc.). I’d suggest cryonics, AI safety, GMOs and other lowlights of public communication of policy and science provide relevant cautionary examples.

4: I also want to supply other, more general considerations which point towards a very high degree of caution:

4.1: In addition to the considerations around the unilateralist’s curse offered by Brian Wang (I have written a bit about this in the context of biotechnology here) there is also an asymmetry in the sense that it is much easier to disclose previously-secret information than make previously-disclosed information secret. The irreversibility of disclosure warrants further caution in cases of uncertainty like this.

4.2: I take the examples of analogous fields to also support great caution. As you note, there is a norm in computer security of ‘don’t publicise a vulnerability until there’s a fix in place’, and of initially informing a responsible party to give them the opportunity to do this pre-publication. Applied to bio, this suggests targeted disclosure to those best placed to mitigate the information hazard, rather than public discussion in the hopes of prompting a fix to be produced. (Not to mention a ‘fix’ in this area might prove much more challenging than pushing a software update.)

4.3: More distantly, adversarial work (e.g. red-teaming exercises) is usually done by professionals, with a concrete decision-relevant objective in mind, with exceptional care paid to operational security, and its results are seldom made publicly available. This is for exercises which generate information hazards for a particular group or organisation - similar or greater caution should apply to exercises that one anticipates could generate information hazards for everyone.

4.4: Even more distantly, norms of intellectual openness are used more in some areas, and much less in others (compare the research performed in academia to security services). In areas like bio, the fact that a significant proportion of the risk arises from deliberate misuse by malicious actors means security services seem to provide the closer analogy, and ‘public/open discussion’ is seldom found desirable in these contexts.

5: In my work, I try to approach potentially hazardous areas as obliquely as possible, more along the lines of general considerations of the risk landscape or from the perspective of safety-enhancing technologies and countermeasures. I do basically no ‘red-teamy’ types of research (e.g. brainstorm the nastiest things I can think of, figure out the ‘best’ ways of defeating existing protections, etc.)

(Concretely, this would comprise asking questions like, “How are disease surveillance systems forecast to improve over the medium term, and are there any robustly beneficial characteristics for preventing high-consequence events that can be pushed for?” or “Are there relevant limits which give insight to whether surveillance will be a key plank of the ‘next-gen biosecurity’ portfolio?”, and not things like, “What are the most effective approaches to make pathogen X maximally damaging yet minimally detectable?”)

I expect a non-professional doing more red-teamy work would generate less upside (e.g. being less well networked to people who may be in a position to mitigate vulnerabilities they discover, being more likely to unwittingly duplicate work) and more downside (e.g. less experience with trying to manage info-hazards well) than I would. Given I think this work is usually a bad idea for me to do, I think it’s definitely a bad idea for non-professionals to try.

I therefore hope people working independently on this topic approach ‘object level’ work here with similar aversion to more ‘red-teamy’ stuff, or instead focus on improving their capital by gaining credentials/experience/etc. (this has other benefits: a lot of the best levers in biorisk are working with/alongside existing stakeholders rather than striking out on one’s own, and it’s hard to get a role without (e.g.) graduate training in a relevant field). I hope to produce a list of self-contained projects to help direct laudable ‘EA energy’ to the best ends.

Comment by gregory_lewis on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-07-03T22:45:21.218Z · score: 13 (8 votes) · EA · GW

Thanks for writing this. How best to manage hazardous information is fraught, and although I have some work in draft and under review, much remains unclear - as you say, almost anything could have some downside risk, and never discussing anything seems a poor approach.

Yet I strongly disagree with the conclusion that the default should be to discuss potentially hazardous (but non-technical) information publicly, and I think your proposals of how to manage these dangers (e.g. talk to one scientist first) generally err too lax. I provide the substance of this disagreement in a child comment.

I’d strongly endorse a heuristic along the lines of, “Try to avoid coming up with (and don’t publish) things which are novel and potentially dangerous”, with the standard of novelty being a relatively uninformed bad actor rather than an expert (e.g. highlighting/elaborating something dangerous which can be found buried in the scientific literature should be avoided).

This expressly includes more general information as well as particular technical points (e.g. “No one seems to be talking about technology X, but here’s why it has really dangerous misuse potential” would ‘count’, even if a particular ‘worked example’ wasn’t included).

I agree it would be good to have direct channels of communication for people considering things like this to get advice on whether projects they have in mind are wise to pursue, and to communicate concerns they have without feeling they need to resort to internet broadcast (cf. Jan Kulveit’s remark).

To these ends, people with concerns/questions of this nature are warmly welcomed and encouraged to contact me to arrange further discussion.

Comment by gregory_lewis on EA Hotel with free accommodation and board for two years · 2018-06-21T08:59:28.368Z · score: 10 (13 votes) · EA · GW

I'm getting tired of the 'veganism is only a minor inconvenience' point being made:

  • V*ganism shows very high 'recidivism' rates in the general population. Most people who try to stop eating meat/animal products end up returning to them before long.
  • The general public health literature on behaviour/lifestyle change seldom says these things are easy/straightforward to effect.
  • When this point is made by EAAs, there are almost always lots of EAs who say, 'No, actually, I found going v*gan really hard', or, 'I tried it but I struggled so much I felt I had to switch back'.
  • (The selection effect that could explain why 'ongoing v*gans' found the change only a minor inconvenience is left as an exercise to the reader).

I don't know how many times we need to rehearse this such that people stop saying 'V*ganism is a minor inconvenience'. But I do know it has happened enough times that other people in previous discussions have also wondered how many times this needs to be rehearsed such that people stop saying this.

Of course, even if it is a major inconvenience (FWIW, I'm a vegetarian, and I'd find the relatively small 'step further' to be exclusively vegan a major inconvenience), this could still be outweighed by other factors across the scales (there's discussion to be had about 'relative aversion', some second-order stuff about appropriate cooperative norms, etc. etc.). Yet discussions of the cost-benefit proceed better if the costs are not wrongly dismissed.

Comment by gregory_lewis on EA Hotel with free accommodation and board for two years · 2018-06-05T22:42:02.990Z · score: 11 (13 votes) · EA · GW

Bravo!

I'm not so sure whether this is targeting the narrowest constraint for developing human capital in EA, but I'm glad this is being thrashed out in reality rather than by the medium of internet commentary.

A more proximal worry is this. The project seems to rely on finding a good hotel manager. On the face of it, this looks like a pretty unattractive role for an EA to take on: it seems the sort of thing that demands quite a lot of operations skill, already in short supply - further, 20k is just over half the pay of similar roles in the private sector (and below many unis' typical grad starting salary), I imagine trying to run a hotel (even an atypical one) is hard and uninspiring work with less of the upsides the guests will enjoy, and you're in a depressed seaside town.

Obviously, if there's already good applicants, good for them (and us!), and best of luck going forward.

Comment by gregory_lewis on Why Groups Should Consider Direct Work · 2018-05-28T19:41:42.549Z · score: 7 (11 votes) · EA · GW

I'd be hesitant to recommend direct efforts for the purpose of membership retention, and I don't think considerations on these lines should play a role in whether a group should 'do' direct work projects. My understanding is many charities use unskilled volunteering opportunities principally as a means to secure subsequent donations, rather than the object level value of the work being done. If so, this strikes me as unpleasantly disingenuous.

I think similar sentiments would apply if groups offered 'direct work opportunities' to their membership in the knowledge they are ineffective but for their impact on recruitment and retention (or at least, if they are going to do so, they should be transparent about the motivation). If (say) it just is the case the prototypical EA undergraduate is better served reallocating their time from (e.g.) birthday fundraisers to 'inward looking' efforts to improve their human capital, we should be candid about this. I don't think we should regret cases where able and morally laudable people are 'put off' EA because they resiliently disagree with things we think are actually true - if anything, this seems better for both parties.

Whether the 'standard view' expressed in the introduction is true (i.e. "undergrads generally are cash- and expertise- poor compared to professionals, and so their main focus should be on self-development rather than direct work") is open to question. There are definitely exceptions for individuals: I can think of a few undergraduates in my 'field' who are making extremely helpful contributions.

Yet this depends on a particular background or skill set which would not be common among members of a local group. Perhaps the forthcoming post will persuade me otherwise, but it seems to me that the 'bar' for making useful direct contributions is almost always higher than the 'bar' for joining an EA student group, and thus opportunities for corporate direct work which are better than the standard view's 'indirect' (e.g. recruitment) and 'bide your time' (e.g. train up in particular skills important to your comparative advantage) options will necessarily be rare.

Directly: if a group like EA Oxford could fund-raise together to produce $100,000 for effective charities (double the donations reported across all groups in the LEAN survey), or they could work independently on their own development such that one of their members becomes a research analyst at a place like Open Phil in the future, I'd emphatically prefer they take the latter approach.

Comment by gregory_lewis on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-06T12:43:37.206Z · score: 1 (1 votes) · EA · GW

I picked the 'updates' purely in the interests of time (easier to skim), that it gives some sense of what orgs are considered 'EA orgs' rather than 'orgs doing EA work' (a distinction which I accept is imprecise: would a GW top charity 'count'?), and I (forlornly) hoped pointing to a method, however brief, would forestall suspicion about cherry-picking.

I meant the quick-and-dirty data gathering to be more an indicative sample than a census. I'd therefore expect a significant margin of error (but not so significant as to change the bottom line). Other relevant candidate groups are also left out: BERI, Charity Science, Founder's Pledge, ?ALLFED. I'd expect there are more.

Comment by gregory_lewis on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-05T01:06:42.068Z · score: 13 (9 votes) · EA · GW

It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was only the top cause of 16% of the EA Survey. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.

As noted in the fb discussion, it seems unlikely full-time non-profit employment is a good proxy for 'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

For this group, one could stipulate that every group which posts updates to the EA newsletter (I looked at the last half-dozen or so, so any group which didn't have an update is excluded, but likely minor) is an EA org; toting up a headcount of staff (I didn't correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns - all of these decisions could be challenged) and recording the prevailing focus of each org gives something like this:

  • 80000 hours (7 people) - Far future
  • ACE (17 people) - Animals
  • CEA (15 people) - Far future
  • CSER (11 people) - Far future
  • CFI (10 people) - Far future (I only included their researchers)
  • FHI (17 people) - Far future
  • FRI (5 people) - Far future
  • Givewell (20 people) - Global poverty
  • Open Phil (21 people) - Far future (mostly)
  • SI (3 people) - Animals
  • CFAR (11 people) - Far future
  • Rethink Charity (11 people) - Global poverty
  • WASR (3 people) - Animals
  • REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
  • FLI (6 people) - Far future
  • MIRI (17 people) - Far future
  • TYLCS (11 people) - Global poverty

Totting this up, I get ~ two thirds of people work at orgs which focus on the far future (66%), 22% global poverty, and 12% animals. Although it is hard to work out the AI | far future proportion, I'm pretty sure it is the majority, so 45% AI wouldn't be wildly off-kilter if we thought the EA handbook should represent the balance of 'full time' attention.
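
(For anyone wanting to check the arithmetic, here is a minimal sketch in Python that just re-tallies the headcounts and focus labels exactly as listed above; nothing in it has been re-verified against the orgs themselves.)

```python
# Re-tally the head-counts listed in the comment above (figures as given, not re-checked).
from collections import defaultdict

orgs = [
    ("80000 hours", 7, "Far future"),
    ("ACE", 17, "Animals"),
    ("CEA", 15, "Far future"),
    ("CSER", 11, "Far future"),
    ("CFI", 10, "Far future"),       # researchers only, per the list
    ("FHI", 17, "Far future"),
    ("FRI", 5, "Far future"),
    ("Givewell", 20, "Global poverty"),
    ("Open Phil", 21, "Far future"),  # listed as "Far future (mostly)"
    ("SI", 3, "Animals"),
    ("CFAR", 11, "Far future"),
    ("Rethink Charity", 11, "Global poverty"),
    ("WASR", 3, "Animals"),
    ("REG", 4, "Far future"),
    ("FLI", 6, "Far future"),
    ("MIRI", 17, "Far future"),
    ("TYLCS", 11, "Global poverty"),
]

totals = defaultdict(int)
for _, headcount, focus in orgs:
    totals[focus] += headcount

grand_total = sum(totals.values())  # 189
for focus, n in totals.items():
    print(f"{focus}: {n} staff ({n / grand_total:.0%})")
# -> Far future: 124 staff (66%), Animals: 23 staff (12%), Global poverty: 42 staff (22%)
```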

I doubt this should be the relevant metric of how to divvy up space in the EA handbook. It also seems unclear what role considerations of representation should play in selecting content, or, if they do play one, which community is the key one to proportionately represent.

Yet I think I'd be surprised if it wasn't the case that among those working 'in' EA, the majority work on the far future, and a plurality work on AI. It also agrees with my impression that the most involved in the EA community strongly skew towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.

Comment by gregory_lewis on Giving Later in Life: Giving More · 2018-05-05T00:12:16.589Z · score: 2 (2 votes) · EA · GW

I think there is fair consensus that providing oneself with financial security is desirable before making altruistic efforts (charitable or otherwise) (cf. 80k).

I think the question of whether it is good to give later is more controversial. There is some existing discussion on this topic, usually under the heading of 'giving now versus giving later' (I particularly like Christiano's treatment). As Nelson says, there are social rates of return/haste considerations that favour giving earlier. I think received (albeit non-resilient) EA wisdom here is that the best opportunities available now give 'returns' that outstrip typical market returns, and thus holding money to give later is only a competitive strategy if one has opportunities to greatly 'beat the market'.

Comment by gregory_lewis on Should there be an EA crowdfunding platform? · 2018-05-02T18:10:23.360Z · score: 4 (4 votes) · EA · GW

Thanks for the even-handed explication of an interesting idea.

I appreciate the example you gave was more meant as illustration than proposal. I nonetheless wonder whether further examination of the underlying problem might lead to ideas drawn tighter to the proposed limitations.

You note this set of challenges:

  1. Open Phil targets larger grantees
  2. EA funds/grants have limited evaluation capacity
  3. Peripheral EAs tend to channel funding to more central groups
  4. Core groups may have trouble evaluating people, which is often an important factor in whether to fund projects.

The result is a good person (but not known to the right people) with a good small idea is nonetheless left out in the cold.

I'm less sure about #2 - or rather, whether this is the key limitation. Max Dalton wrote on one of the FB threads linked:

In the first round of EA Grants, we were somewhat limited by staff time and funding, but we were also limited by the number of projects we were excited about funding. For instance, time constraints were not the main limiting factor on the percentage of people we interviewed. We are currently hiring for a part-time grants evaluator to help us to run EA Grants this year[...]

FWIW (and non-resiliently), I don't look around and see lots of promising but funding starved projects. More relevantly, I don't review recent history and find lots of cases of stuff rejected by major funders then supported by more peripheral funders which are doing really exciting things.

If I'm wrong about that, then the idea here (in essence, of crowd-sourcing evaluation to respected people in the community) could help. Yet it doesn't seem to address #3 or #4.

If most of the money (even from the community) ends up going through the 'core' funnel, then a competitive approach would be advocacy to these groups to change their strategy, instead of providing a parallel route and hoping funders will come.

More importantly, if funders generally want to 'find good people', the crowd-sourced project evaluation only helps so much. For people more on the periphery of the community, this uncertainty from funders will remain even if the anonymised feedback on the project is very positive.

Per Michael, I'm not sure what this idea has over (say) posting a 'pitch' on this forum, doing a kickstarter, etc.

Comment by gregory_lewis on Empirical data on value drift · 2018-04-24T01:31:05.676Z · score: 9 (11 votes) · EA · GW

Very interesting. As you say, this data is naturally rough, but it also roughly agrees with my own available anecdata (my impression is somewhat more optimistic, although attenuated by likely biases). A note of caution:

The framing in the post generally implies value drift is essentially value decay (e.g. it is called a 'risk', the comparison of value drift to unwanted weight gain/poor diet/ etc.). If so, then value drift/decay should be something to guard against, and maybe precommitment strategies/'lashing oneself to the mast' seems a good idea, like how we might block social media, don't have sweets in the house, and so on.

I'd be slightly surprised if the account someone who 'drifted' would give often fits well with the sort of thing you'd expect someone to say if (e.g.) they failed to give up smoking or lose weight. Taking the strongest example, I'd guess someone who dropped from 50% to 10ish% after marrying and starting a family would say something like, "I still think these EA things are important, but now I have other things I consider more morally important still (i.e. my spouse and my kids). So I need to allocate more of my efforts to these, thus I can provide proportionately less to EA matters".

It is much less clear whether this person would think they've made a mistake in allocating more of themselves away from EA, either at t2-now (they don't regret they now have a family which takes their attention away from EA things), or at t1-past (if their previous EA-self could forecast them being in this situation, they would not be disappointed in themselves). If so, these would not be options that their t1-self should be trying to shut off, as (all things considered) the option might be on balance good.

I am sure there are cases where 'life gets in the way' in a manner it is reasonable to regret. But I would be chary if the only story we can tell for why someone would be 'less EA' is essentially one of greater or lesser degrees of moral failure, disappointed if suspicion attaches to EAs starting a family or enjoying (conventional) professional success, and I would caution against pre-commitment strategies which involve closing off or greatly hobbling aspects of one's future which would be seen as desirable by common-sense morality.

Comment by gregory_lewis on The person-affecting value of existential risk reduction · 2018-04-20T21:26:54.288Z · score: 4 (4 votes) · EA · GW

1) Happiness levels seem to trend strongly positive, given things like the World Values Survey (in the most recent wave - 2014 - only Egypt had <50% of people reporting being either 'happy' or 'very happy', although in fairness there were a lot of poorer countries with missing data). The association between wealth and happiness is there, but pretty weak (e.g. Zimbabwe gets 80+%, Bulgaria 55%). Given this (and when you throw in implied preferences, and commonsensical intuitions whereby we don't wonder whether we should jump in the pond to save the child because we're genuinely uncertain it is good for them to extend their life), it seems the average human takes themselves to have a life worth living. (q.v.)

2) My understanding from essays by Shulman and Tomasik is that even intensive factory farming plausibly leads to a net reduction in animal populations, given a greater reduction in wild animals due to habitat reduction. So if human extinction leads to another ~100M years of wildlife, this looks pretty bad by asymmetric views.

Of course, these estimates are highly non-resilient even with respect to sign. Yet the objective of the essay wasn't to show the result was robust to all reasonable moral considerations, but that the value of x-risk reduction isn't wholly ablated on a popular view of population ethics - somewhat akin to how Givewell analyses of cash transfers don't try and factor in poor meat eater considerations.

3) I neither 'tout' - nor even state - this is a finding that 'xrisk reduction is highly effective for person-affecting views'. Indeed, I say the opposite:

Although it seems unlikely x-risk reduction is the best buy from the lights of the [ed: typo - as context suggests, meant 'person-affecting'] total view (we should be suspicious if it were), given $13000 per life year compares unfavourably to best global health interventions, it is still a good buy: it compares favourably to marginal cost effectiveness for rich country healthcare spending, for example.

Comment by gregory_lewis on Comparative advantage in the talent market · 2018-04-20T12:17:35.270Z · score: 0 (0 votes) · EA · GW

I'd hesitate to extrapolate my experience across to operational roles for the reasons you say. That said, my impression was operations folks place a similar emphasis on these things as I. Tanya Singh (one of my colleagues) gave a talk on 'x risk/EA ops'. From the Q&A (with apologies to Roxanne and Tanya for my poor transcription):

One common retort we get about people who are interested in operations is maybe they don't need to be value-aligned. Surely we can just hire someone who has operations skills but doesn't also buy into the cause. How true do you think this claim is?

I am by no means an expert, but I have a very strong opinion. I think it is extremely important to be values aligned to the cause, because in my narrow slice of personal experience that has led to me being happy, being content, and that's made a big difference as to how I approach work. I'm not sure you can be a crucial piece of a big puzzle or a tightly knit group if you don't buy into the values that everyone is trying to push towards. So I think it's very very important.

Comment by gregory_lewis on The person-affecting value of existential risk reduction · 2018-04-14T00:07:15.325Z · score: 0 (0 votes) · EA · GW

The EV in question is the reduction in x-risk for a single year, not across the century. I'll change the wording to make this clearer.

Comment by gregory_lewis on The person-affecting value of existential risk reduction · 2018-04-13T22:19:18.725Z · score: 4 (4 votes) · EA · GW

As you've noted in the comments, you model this as $1bn total, rather than $1bn a year. Ignoring the fact that the person affecting advocate (PAA) only cares about present people (at time of initial decision to spend), if the cost-effectiveness is even 10x lower then it probably no longer counts as a good buy.

No, in my comments I note precisely the opposite. The model assumes 1B per year. If the cost is 1B total to reduce risk for the subsequent century, the numbers get more optimistic (100x more optimistic if you buy counterpart-y views, but still somewhat better if you discount the benefit in future years by how many from the initial cohort remain alive).

Further, the model is time-uniform, so it can collapse into 'I can spend 1B in 2018 to reduce xrisk in this year by 1% from a 0.01% baseline', and the same number gets spit out. So if a PAA buys these numbers (as Alex says, I think the figures I offer skew conservative relative to the xrisk consensus if we take them as amortized across-century risk; they might be about right/'optimistic' if they are taken as an estimate for this year alone), this looks like an approximately good buy.
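
(To make the arithmetic explicit, here is a rough sketch of the single-year calculation under the figures quoted above; the population figure and the number of life-years credited per death averted are illustrative assumptions on my part, not parameters taken from the original post.)

```python
# Rough single-year sketch using the figures quoted above: $1bn to cut this year's
# x-risk by 1% (relative) from a 0.01% baseline. Population and life-years credited
# per death averted are my own illustrative assumptions.
cost = 1e9                    # $1bn spent this year
baseline_annual_risk = 1e-4   # 0.01% chance of extinction this year
relative_reduction = 0.01     # project cuts that risk by 1% (relative)
population = 7.6e9            # people alive now (assumption)

absolute_risk_reduction = baseline_annual_risk * relative_reduction   # 1e-6
expected_deaths_averted = population * absolute_risk_reduction        # ~7,600
cost_per_expected_life = cost / expected_deaths_averted               # ~$130,000

print(f"Expected deaths averted this year: {expected_deaths_averted:,.0f}")
print(f"Cost per expected life saved: ${cost_per_expected_life:,.0f}")
# Crediting on the order of ten life-years per death averted brings this to roughly
# the ~$13,000 per life-year figure quoted elsewhere in these comments.
```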

Population ethics generally, and PA views within them, are far from my expertise. I guess I'd be surprised if pricing by TRIA gives a huge discount, as I take it most people consider themselves pretty psychologically continuous from the ages of ~15 onwards. If this isn't true, or the consensus view amongst PAAs is "TRIA, and we're mistaken about our degree of psychological continuity", then this plausibly shaves off an order of magnitude-ish and plonks it more in the 'probably not a good buy' category.

Comment by gregory_lewis on The person-affecting value of existential risk reduction · 2018-04-13T21:31:26.153Z · score: 1 (1 votes) · EA · GW

+1. Navigating this is easier said than done, and one might worry about some sort of temporal parochialism being self-defeating (persons at t1-tn are all better off if they cooperate across cohort with future-regarding efforts instead of all concerning themselves with those who are morally salient at their corresponding t).

My impression is those with person-affecting sympathies prefer trying to meet these challenges rather than accepting that the moral character of destructive acts changes with a (long enough) delay, or trying to reconcile this with the commonsensical moral importance of more normal future-regarding acts (e.g. climate change, town planning, etc.).

Comment by gregory_lewis on The person-affecting value of existential risk reduction · 2018-04-13T11:58:51.159Z · score: 2 (2 votes) · EA · GW

The mistake might be on my part, but I think where this may be going wrong is I assume the cost needs to be repeated each year (i.e. you spent 1B to reduce risk by 1% in 2018, then have to spend another 1B to reduce risk by 1% in 2019). So if you assume a single 1B pulse reduces x risk across the century by 1%, then you do get 100 fold better results.
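
(As a trivial illustration of where the 100-fold factor comes from, under my reading of the two cost structures:)

```python
# Illustrative only: comparing the two cost structures discussed above.
annual_cost = 1e9   # $1bn spent each year to buy that year's 1% (relative) risk reduction
years = 100

repeated_total = annual_cost * years  # repeating the spend every year: $100bn over the century
single_pulse_total = annual_cost      # one $1bn pulse buying the same reduction across the century

print(repeated_total / single_pulse_total)  # 100.0 -> the "100 fold better results"
```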

I mainly chose the device of some costly 'project X' as it is hard to get a handle on (e.g.) whether 10^-10 reduction in xrisk/$ is a plausible figure or not. Given this, I might see if I can tweak the wording to make it clearer - or at least make any mistake I am making easier to diagnose.

The person-affecting value of existential risk reduction

2018-04-13T01:44:54.244Z · score: 36 (29 votes)
Comment by gregory_lewis on Comparative advantage in the talent market · 2018-04-12T07:19:30.317Z · score: 8 (8 votes) · EA · GW

Bravo!

FWIW I am one of the people doing something similar to what you advocate: I work in biorisk for comparative advantage reasons, although I think AI risk is a bigger deal.

That said, this sort of trading might be easier within broad cause areas than between them. My impression is received wisdom among far future EAs is that AI and bio are both 'big deals': AI might be (even more) important, yet bio (even more) neglected. For this reason, even though I suspect most (myself included) would recommend a 'pluripotent far future EA' to look into AI first, it wouldn't take much to tilt the scales the other way (e.g. disposition, comparative advantage, and other things you cite). It also means individuals may not suffer a motivation hit if they are merely doing a very good thing rather than the very best thing by their lights. I think a similar thing applies to means that further a particular cause (whether to strike out on one's own versus looking for a role in an existing group, operations versus research, etc.)

When the issue is between cause areas, one needs to grapple with decisive considerations, which open chasms that are hard to cross with talent arbitrage. In the far future case, the usual story around astronomical waste etc. implies (pace Tomasik) that work on the far future is hugely more valuable than work in another cause area like animal welfare. Thus even if one is comparatively advantaged in animal welfare, one may still think their marginal effect is much greater in the far future cause area.

As you say, this could still be fertile ground for moral trade, and I also worry about more cynical reasons that explain why this hasn't happened (cf. fairly limited donation trading so far). Nonetheless, I'd like to offer a few less cynical reasons that draw the balance of my credence.

As you say, although Allison and Bettina should think, "This is great, by doing this I get to have a better version of me do work on the cause I think is most important!", they might mutually recognise that their cognitive foibles will mean they struggle with their commitment to a cause they both consider objectively less important, and this term might outweigh their comparative advantage.

It also may be the case that developing considerable sympathy to a cause area may not be enough. Both within and outside EA, I generally salute well-intentioned efforts to make the world better: I wish folks working on animal welfare, global poverty, or (developed world) public health every success. Yet when I was doing the latter, despite finding it intrinsically valuable, I struggled considerably with motivation. I imagine the same would apply if I traded places with an 'animal-EA' for comparative advantage reasons.

It would have been (prudentially) better if I could 'hack' my beliefs to find this work more intrinsically valuable. Yet people are (rightly) chary to try and hack prudentially useful beliefs (cf. Pascal's wager, where Pascal anticipated the 'I can't just change my belief in God' point, and recommended atheists go to church and do other things which would encourage religious faith to take root), given it may have spillover into other domains where they take epistemic accuracy to be very important. If cause area decisions mostly rely on these (which I hope they do), there may not be much opportunity to hack away this motivational bracken to provide fertile ground for moral trade. 'Attitude hacking' (e.g. I really like research, but I'd be better at ops, so I try to make myself more motivated by operations work) lacks this downside, and so looks much more promising.

Further, a better ex ante strategy across the EA community might be not to settle for moral trade, but instead discuss the merits of the different cause areas. Both Allison and Bettina take the balance of reason to be on their side, and so might hope either a) they get their counterpart to join them, or b) they realise they are mistaken and so migrate to something more important. Perhaps this implies an idealistic view of how likely people are to change their minds about these matters. Yet the track record of quite a lot of people changing their minds about which cause areas are the most important (I am one example) gives some cause for hope.

Comment by gregory_lewis on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-03-04T12:20:40.016Z · score: 0 (0 votes) · EA · GW

I regret I don't have much insight to offer on the general point. When I was looking into the bibliometrics myself, very broad comparison to (e.g.) Norwegian computer scientists gave figures like '~0.5 to 1 paper per person-year', with which MIRI's track record seemed about on par if we look at peer-reviewed technical work. I wouldn't be surprised to find better-performing research groups (in terms of papers/highly cited papers), but would be slightly more surprised if these groups were doing AI safety work.

Comment by gregory_lewis on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-03-01T10:18:30.543Z · score: 1 (1 votes) · EA · GW

This paper was in 2016, and is included in the proceedings of the UAI conference that year. Does this not count?

Comment by gregory_lewis on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-03-01T05:54:42.925Z · score: 3 (3 votes) · EA · GW

I must say though that I don't agree with you that conference presentations are significantly more important than journal publications in the field of AI (or humanities for that matter). We could discuss this in terms of personal experiences, but I'd go for a more objective criterion: effectiveness in terms of citations.

Technical research on AI generally (although not exclusively) falls under the heading of computer science. In this field, it is not only the prevailing (but not universal) view of practitioners that conference presentations are academically 'better' (here, here, etc.), but it is also the case that they tend to have similar citation counts.

Comment by gregory_lewis on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-02-28T06:06:00.831Z · score: 3 (5 votes) · EA · GW

Disclosure: I'm both a direct and indirect beneficiary of Open Phil funding. I am also a donor to MIRI, albeit an unorthodox one.

[I]f you check MIRI's publications you find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter).

I have a 2-year-out-of-date rough draft on bibliometrics re. MIRI, which likely won't get updated due to being superseded by Lark's excellent work and other constraints on my time. That said:

My impression of computer science academia was that (unlike most other fields) conference presentations are significantly more important than journal publications. Further, when I look at work on MIRI's page from 2016-2018, I see 2 papers at Uncertainty in AI, which this site suggests is a 'top-tier' conference. (Granted, for one of these neither of the authors has a MIRI institutional affiliation, although 'many people at MIRI' are acknowledged.)

Comment by gregory_lewis on In defence of epistemic modesty · 2018-02-27T02:16:14.649Z · score: 0 (0 votes) · EA · GW

Apropos of which, SEP published an article on disagreement last week, which provides an (even more) up to date survey of philosophical discussion in this area.

Comment by gregory_lewis on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T22:24:11.366Z · score: 14 (20 votes) · EA · GW

Thank you for writing this post. An evergreen difficulty that applies to discussing topics of such a broad scope is the large number of matters that are relevant, difficult to judge, and where one's judgement (whatever it may be) can be reasonably challenged. I hope to offer a crisper summary of why I am not persuaded.

I understand from this the primary motivation of MCE is avoiding AI-based dystopias, with the implied causal chain being along the lines of, “If we ensure the humans generating the AI have a broader circle of moral concern, the resulting post-human civilization is less likely to include dystopic scenarios involving great multitudes of suffering sentiences.”

There are two considerations that speak against this being a greater priority than AI alignment research: 1) Back-chaining from AI dystopias leaves relatively few occasions where MCE would make a crucial difference. 2) The current portfolio of ‘EA-based’ MCE is poorly addressed to averting AI-based dystopias.

Re. 1): MCE may prove neither necessary nor sufficient for ensuring AI goes well. On one hand, AI designers, even if speciesist themselves, might nonetheless provide the right apparatus for value learning such that resulting AI will not propagate the moral mistakes of its creators. On the other, even if the AI-designers have the desired broad moral circle, they may have other crucial moral faults (maybe parochial in other respects, maybe selfish, maybe insufficiently reflective, maybe some mistaken particular moral judgements, maybe naive approaches to cooperation or population ethics, and so on) - even if they do not, there are manifold ways in the wider environment (e.g. arms races), or in terms of technical implementation, that may incur disaster.

It seems clear to me that, pro tanto, the less speciesist the AI-designer, the better the AI. Yet for this issue to be of such fundamental importance to be comparable to AI safety research generally, the implication is of an implausible doctrine of ‘AI immaculate conception’: only by ensuring we ourselves are free from sin can we conceive an AI which will not err in a morally important way.

Re 2): As Plant notes, MCE does not arise from animal causes alone: global poverty and climate change work also act to extend moral circles, as well as propagating other valuable moral norms. Looking at things the other way, one should expect the animal causes found most valuable from the perspective of avoiding AI-based dystopia to diverge considerably from those picked on face-value animal welfare. Companion animal causes are far inferior from the latter perspective, but on the former it is unclear whether they are a good way of fostering concern for animals; if the crucial thing is for AI-creators, rather than the general population, not to be speciesist, targeted interventions like ‘Start a petting zoo at Deepmind’ look better than broader ones, like the abolition of factory farming.

The upshot is that, even if there are some particularly high yield interventions in animal welfare from the far future perspective, this should be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute to directions pretty orthogonal to the principal component of effective animal advocacy, this would be welcome indeed.

Notwithstanding the above, the approach outlined has a role to play in some ideal ‘far future portfolio’, and it may be reasonable for some people to prioritise work on this area, if only for reasons of comparative advantage. Yet I aver it should remain a fairly junior member of this portfolio compared to AI-safety work.

Comment by gregory_lewis on How fragile was history? · 2018-02-02T18:48:12.968Z · score: 2 (2 votes) · EA · GW

That seems surprising to me, given the natural model for the counterpart in the case you describe would be a sibling, and observed behaviour between sibs is pretty divergent. I grant your counterfactual sibling would be more likely than a random member of the population to be writing something similar to the parent comment, but the absolute likelihood remains very low.

The fairly intermediate heritabilities of things like intelligence, personality traits, etc. also mean these traits would look pretty variable between you and your counterpart. Not least, there's about a 0.5 chance your counterpart would be the opposite sex to you.

I agree that even if history is chaotic in some respects, it is not chaotic with respect to everything, and there can be forcing interventions (one can grab a double pendulum, etc.), yet less overwhelming interventions may be pretty hard to fathom in the chaotic case ('It's too early to say whether the French Revolution was good or bad', etc.).

How fragile was history?

2018-02-02T06:23:54.282Z · score: 11 (13 votes)

In defence of epistemic modesty

2017-10-29T19:15:10.455Z · score: 47 (43 votes)

Beware surprising and suspicious convergence

2016-01-24T19:11:12.437Z · score: 33 (39 votes)

At what cost, carnivory?

2015-10-29T23:37:13.619Z · score: 5 (5 votes)

Don't sweat diet?

2015-10-22T20:15:20.773Z · score: 11 (13 votes)

Log-normal lamentations

2015-05-19T21:07:28.986Z · score: 11 (13 votes)

How best to aggregate judgements about donations?

2015-04-12T04:19:33.582Z · score: 4 (4 votes)

Saving the World, and Healing the Sick

2015-02-12T19:03:05.269Z · score: 12 (12 votes)

Expected value estimates you can take (somewhat) literally

2014-11-24T15:55:29.144Z · score: 4 (4 votes)