Posts

Complex cluelessness as credal fragility 2021-02-08T16:59:16.639Z
Take care with notation for uncertain quantities 2020-09-16T19:26:43.873Z
Challenges in evaluating forecaster performance 2020-09-08T20:37:17.318Z
Use resilience, instead of imprecision, to communicate uncertainty 2020-07-18T12:09:36.901Z
Reality is often underpowered 2019-10-10T13:14:08.605Z
Risk Communication Strategies for the Very Worst of Cases 2019-03-09T06:56:12.480Z
The person-affecting value of existential risk reduction 2018-04-13T01:44:54.244Z
How fragile was history? 2018-02-02T06:23:54.282Z
In defence of epistemic modesty 2017-10-29T19:15:10.455Z
Beware surprising and suspicious convergence 2016-01-24T19:11:12.437Z
At what cost, carnivory? 2015-10-29T23:37:13.619Z
Don't sweat diet? 2015-10-22T20:15:20.773Z
Log-normal lamentations 2015-05-19T21:07:28.986Z
How best to aggregate judgements about donations? 2015-04-12T04:19:33.582Z
Saving the World, and Healing the Sick 2015-02-12T19:03:05.269Z
Expected value estimates you can take (somewhat) literally 2014-11-24T15:55:29.144Z

Comments

Comment by Gregory_Lewis on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-18T05:17:59.613Z · EA · GW

Although I understand the nationalism example isn't meant to be analogous, my impression is this structural objection only really applies when our situation is analogous.

If historically EA paid a lot of attention to nationalism (or trans-humanism, the scepticism community, or whatever else) but had by-and-large collectively 'moved on' from these, contemporary introductions to the field shouldn't feel obliged to cover them extensively, nor treat the relative merits of what they focus on now versus then as an open question.

Yet, however you slice it, EA as it stands now hasn't by-and-large 'moved on' to be 'basically longtermism', where its interest in (e.g.) global health is clearly atavistic. I'd be willing to go to bat for substantial slants to longtermism, as (I aver) its over-representation amongst the more highly engaged, and the disproportionate migration of folks to longtermism from other areas, warrant claims that an epistocratic weighting of consensus would favour longtermism over anything else. But even this has limits, which 'greatly favouring longtermism over everything else' exceeds.

How you choose to frame an introduction is up for grabs, and I don't think 'the big three' is the only appropriate game in town. Yet if your alternative way of framing an introduction to X ends up strongly favouring one aspect (further, the one you are sympathetic to) disproportionate to any reasonable account of its prominence within X, something has gone wrong.

Comment by Gregory_Lewis on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-18T05:17:40.824Z · EA · GW

Per others: this selection doesn't so much 'lean towards a focus on longtermism' as almost exclusively focus on longtermism: roughly, any 'object level' cause which isn't longtermism gets a passing mention, whilst longtermism is the subject of 3/10 of the selection. Even some not-explicitly-longtermist inclusions (e.g. Tetlock, MacAskill, Greaves) 'lean towards' longtermism either in terms of subject matter or affinity.

Despite being a longtermist myself, I think this is dubious for a purported 'introduction to EA as a whole': EA isn't all-but-exclusively longtermist in either corporate thought or deed.

Were I a more suspicious sort, I'd also find the 'impartial' rationales offered for why non-longtermist things keep getting the short (if not pointy) end of the stick scarcely credible:

i) we decided to focus on our overall worldview and way of thinking rather than specific cause areas (we also didn’t include a dedicated episode on biosecurity, one of our 'top problems'), and ii) both are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our 'episode 0', as well as the outro to Holden's episode.

The first episode with Karnofsky also covers longtermism and AI - at least as much as global health and animals. Yet this didn't stop episodes on the specific cause areas of longtermism (Ord) and AI (Christiano) being included. Ditto the instance of "entrepreneurship, independent thinking, and general creativity" one wanted to highlight just-so-happens to be a longtermist intervention (versus, e.g. this).

Comment by Gregory_Lewis on Proposed Longtermist Flag · 2021-03-24T15:46:33.929Z · EA · GW

I also thought along similar lines, although (lacking subtlety) I thought you could shove in a light cone from the dot, which can serve double duty as the expanding future. Another thing you could do is play with a gradient so this curve/the future gets brighter as well as bigger, but perhaps someone who can at least successfully colour in has a comparative advantage here.


Comment by Gregory_Lewis on Progress Open Thread: March 2021 · 2021-03-24T13:30:03.505Z · EA · GW

A less important motivation/mechanism is that probabilities (unlike odds) are bounded above by one. For rare events, 'doubling the probability' and 'doubling the odds' give basically the same answer, but not so for more common events. Loosely, flipping a coin three times 'trebles' my risk of observing it land tails at least once, but the probability isn't 1.5. (cf).

E.g.

Sibling abuse rates are something like 20% (or 80% depending on your definition). And is the most frequent form of household abuse. This means by adopting a child you are adding something like an additional 60% chance of your other child going through at least some level of abuse (and I would estimate something like a 15% chance of serious abuse). [my emphasis]

If you used the 80% definition instead of 20%, then the '4x' risk factor implied by 60% additional chance (with 20% base rate) would give instead an additional 240% chance.
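To make the arithmetic concrete, here is a minimal sketch of the two scaling rules (my own illustration; the 20%/80% base rates and the 4x factor are just the figures quoted above, not anything from the original post):

```python
def scale_probability(p, factor):
    """Naively multiply the probability itself by a risk factor."""
    return p * factor

def scale_odds(p, factor):
    """Multiply the odds by the risk factor, then convert back to a probability."""
    odds = p / (1 - p)
    scaled = odds * factor
    return scaled / (1 + scaled)

for base_rate in (0.20, 0.80):
    naive = scale_probability(base_rate, 4)
    via_odds = scale_odds(base_rate, 4)
    print(f"base {base_rate:.0%}: 4x probability -> {naive:.0%} "
          f"({naive - base_rate:+.0%} additional), 4x odds -> {via_odds:.0%}")
```

With a 20% base rate the naive version gives the quoted 'additional 60%', but with an 80% base rate it gives an impossible 320%; the odds version stays bounded (50% and ~94% respectively).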

[Of interest, 20% to 38% absolute likelihood would correspond to an odds ratio of ~2.5, in the ballpark of the 3-4x risk factors discussed before. So maybe extrapolating extreme event ratios to less-extreme event ratios can do okay if you keep them in odds form. The underlying story might have something to do with logistic distributions closely resembling normal distributions (save at the tails), so thinking about shifting a normal distribution across the x axis so that (non-linearly) more or less of it lies over a threshold loosely resembles adding increments to log-odds (equivalent to multiplying odds by a constant multiple), giving (non-linear) changes when traversing a logistic CDF.

But it still breaks down when extrapolating very large ORs from very rare events. Perhaps the underlying story here has something to do with higher kurtosis: '>2SD events' are only (I think) ~5X more likely than >3SD events for logistic distributions, versus ~20X in normal distribution land. So large shifts in the likelihood of rare(r) events would imply logistic-land shifts which dramatically change the whole distribution (e.g. an OR of 10 makes evens --> >90%), but much more modest shifts in normal-land (e.g. moving up an SD gives OR>10 for previously 3SD events, but ~2 for previously 'above average' ones).]
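A quick numerical check of the two claims in this aside (my own sketch; I rescale a zero-mean logistic to unit SD so the tail comparison with the standard normal is like-for-like):

```python
import math

def normal_sf(z):
    """P(Z > z) for a standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def logistic_sf(x, sd=1.0):
    """P(X > x) for a zero-mean logistic distribution rescaled to the given SD."""
    scale = sd * math.sqrt(3) / math.pi
    return 1.0 / (1.0 + math.exp(x / scale))

# A 20% baseline pushed through an odds ratio of ~2.5
odds = 0.2 / 0.8
print(f"20% with OR 2.5 -> {odds * 2.5 / (1 + odds * 2.5):.0%}")

# How much rarer are >3SD events than >2SD events under each distribution?
print(f"normal:   P(>2SD)/P(>3SD) ~ {normal_sf(2) / normal_sf(3):.0f}")
print(f"logistic: P(>2SD)/P(>3SD) ~ {logistic_sf(2) / logistic_sf(3):.0f}")
```

This reproduces the ~38% figure, and gives tail ratios in the rough ballpark of those quoted (I get ~17 for the normal and ~6 for the logistic).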

Comment by Gregory_Lewis on Tristan Cook's Shortform · 2021-03-12T20:44:03.329Z · EA · GW

Most views in population ethics can entail weird/intuitively toxic conclusions (cf. the large number of 'X conclusions' out there). Trying to weigh these up comparatively is fraught.

In your comparison, it seems there's a straightforward dominance argument if the 'OC' and 'RC' are the things we should be paying attention to. Your archetypal classical utilitarian is also committed to the OC as 'large increase in suffering for one individual' can be outweighed by a large enough number of smaller decreases in suffering for others - aggregation still applies to negative numbers for classical utilitarians. So the negative view fares better as the classical one has to bite one extra bullet.

There's also the worry that in a pairwise comparison one might inadvertently pick a counterexample for one 'side' that turns the screws less than the counterexample for the other one. Most people find the 'very repugnant conclusion' (where not only Z > A, but 'large enough Z plus some arbitrary number of people with awful lives > A') even more costly than the 'standard' RC. So using the more or less costly variant on one side of the scales may alter intuitive responses.

By my lights, it seems better to have some procedure for picking and comparing cases which isolates the principle being evaluated. Ideally, the putative counterexamples share counterintuitive features both theories endorse, but differ in that one explores the worst case that can be constructed which the principle would avoid, whilst the other explores the worst case that can be constructed with its inclusion.

It seems the main engine of RC-like examples is the aggregation - it feels like one is being nickel-and-dimed taking a lot of very small things to outweigh one very large thing, even though the aggregate is much higher. The typical worry a negative view avoids is trading major suffering for sufficient amounts of minor happiness - most typically think this is priced too cheaply, particularly at extremes. The typical worry of the (absolute) negative view itself is it fails to price happiness at all - yet often we're inclined to say enduring some suffering (or accepting some risk of suffering) is a good deal at least at some extreme of 'upside'.

So with this procedure the putative counter-example to the classical view would be the vRC. Although negative views may not give crisp recommendations against the RC (e.g. if we stipulate no one ever suffers in any of the worlds, but are more or less happy), its addition clearly recommends against the vRC: the great suffering isn't outweighed by the large amounts of relatively trivial happiness (but it would be on the classical view).

Yet with this procedure, we can construct a much worse counterexample to the negative view than the OC - by my lights, far more intuitively toxic than the already costly vRC. (Owed to Carl Shulman.) Suppose A is a vast but trivially-imperfect utopia - trillions (or googolplexes, or TREE(TREE(3))) of people live lives of all-but-perfect bliss, but for each enduring an episode of trivial discomfort or suffering (e.g. a pin-prick, waiting in a queue for an hour). Suppose Z is a world with a (relatively) much smaller number of people (e.g. a billion) living like the child in Omelas. The negative view ranks Z > A: it only considers the pinpricks in this utopia, and sufficiently huge magnitudes of these can be worse than awful lives (the classical view, which wouldn't discount all the upside in A, would not). In general, this negative view can countenance any amount of awful suffering if this is the price to pay to abolish a near-utopia of sufficient size.

(This axiology is also anti-egalitarian (consider replacing half the people in A with half the people in Z) and - depending how you litigate - susceptible to a sadistic conclusion. If the axiology claims welfare is capped above by 0, then there's never an option of adding positive welfare lives so nothing can be sadistic. If instead it discounts positive welfare, then it prefers (given half of A) adding half of Z (very negative welfare lives) to adding the other half of A (very positive lives)).

I take this to make absolute negative utilitarianism (similar to average utilitarianism) a non-starter. In the same way folks look for a better articulation of the egalitarian-esque commitments that make one (at least initially) sympathetic to average utilitarianism, so folks with negative-esque sympathies may look for better articulations of this commitment. One candidate could be that what one is really interested in is cases of severe rather than trivial suffering, so this, rather than suffering in general, should be the object of sole/lexically prior concern. (Obviously there are many other lines, and corresponding objections to each.)

But note this is an anti-aggregation move. Analogous ones are available for classical utilitarians to avoid the (v/)RC (e.g. a critical-level view which discounts positive welfare below some threshold). So if one is trying to evaluate a particular principle out of a set, it would be wise to aim for 'like-for-like': e.g. perhaps a 'negative plus a lexical threshold' view is more palatable than classical util, yet CLU would fare even better than either.

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-12T01:32:59.048Z · EA · GW

[Mea culpa re. messing up the formatting again]

1) I don't closely follow the current state of play in terms of 'shorttermist' evaluation. The reply I hope (e.g.) a GiveWell analyst would make to (e.g.) "Why aren't you factoring in impacts on climate change for these interventions?" would be some mix of:

a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc."

b) "We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn't vary appreciably between interventions) so we get higher yield investigating other things."

c) "We are explicit our analysis is predicated on moral (e.g. "human lives are so much more important than animals lives any impact on the latter is ~moot") or epistemic (e.g. some 'common sense anti-cluelessness' position) claims which either we corporately endorse and/or our audience typically endorses." 

Perhaps such hopes would be generally disappointed.

2) Similar to above, I don't object to (re. animals) positions like "Our view is this consideration isn't a concern as X" or "Given this consideration, we target Y rather than Z", or "Although we aim for A, B is a very good proxy indicator for A which we use in comparative evaluation."

But I at least used to see folks appeal to motivations which obviate (inverse/) logic of the larder issues, particularly re. diet change ("Sure, it's actually really unclear whether becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we're aiming for"). Yet this overriding motivation typically only 'came up' in the context of this discussion, and corollary questions like:

*  "Is maximizing short term farmed animal welfare the best way of furthering this crucial goal of attitude change?"

* "Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?"

* "Shouldn't we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?" 

seemed seldom asked. 

Naturally I hope this is a relic of my perhaps jaundiced memory.

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-12T00:15:44.538Z · EA · GW

FWIW, I don't think 'risks' is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "Give Directly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.

(Apologies in advance I'm rehashing unhelpfully)

The usual cluelessness scenarios are more about the possibility that there is a powerful lever for impacting the future, and that your intended intervention may be pulling it in the wrong direction (rather than a 'confirmed discovery'). Say your expectation for the EV of GiveDirectly on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there's a natural response of 'shouldn't we try something which targets this on purpose?'; if it were 0, we wouldn't attend to it further; if it were -10, you wouldn't give to GiveDirectly (now net EV = -9).

The right response where all three scenarios are credible (plus all the intermediates) but you're unsure which one you're in isn't intuitively obvious (at least to me). Even if (like me) you're sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty + all the others and just 'run the numbers' and take the best EV) this approach seems to ignore this wide variance, which seems to be worthy of further attention.

The OP tries to reconcile this with the standard approach by saying this indeed often should be attended to, but under the guise of value of information rather than something 'extra' to orthodoxy. Even though we should still go with our best guess if we had to decide now (so expectation-neutral but high-variance terms 'cancel out'), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly and conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.
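As a toy numerical version of that last point (all numbers invented for the sketch: a direct benefit of +1 unit, an incidental conflict term with mean 0 but SD 10 in the same units, and the idealised case where further investigation would resolve the incidental term entirely):

```python
import numpy as np

rng = np.random.default_rng(0)

direct_benefit = 1.0
side_effect = rng.normal(0.0, 10.0, size=1_000_000)  # mean-zero but high-variance incidental term
total = direct_benefit + side_effect

# Deciding now: the incidental term has expectation zero, so we donate and get +1 in expectation.
ev_decide_now = np.mean(total)

# Deciding after (hypothetically) resolving the uncertainty: donate only when the total is positive.
ev_after_resolving = np.mean(np.where(total > 0, total, 0.0))

print(f"EV deciding on the current best guess:     {ev_decide_now:.2f}")
print(f"EV if the uncertainty were resolved first: {ev_after_resolving:.2f}")
print(f"Value of (perfect) information:            {ev_after_resolving - ev_decide_now:.2f}")
```

With these made-up numbers the option to investigate first is worth a few units despite the incidental term being expectation-neutral; how much of that is actually recoverable depends on how resilient the uncertainty is, which is the criterion above for whether postponement is worthwhile.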

This can be put in plain(er) English (although familiar-to-EA jargon like 'EV' may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we ever really had in our head an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn't even remotely approximate to objects we manipulate in standard models of the same. Or (owed to Andreas) even if so, similar to how rule-consequentialism may be better than act-consequentialism, some other epistemic policy would get better results than applying the orthodox approach in these cases of deep uncertainty. 

Insofar as folks are more sympathetic to this, they would not want to be deflationary and perhaps urge investment in new techniques/vocab to grapple with the problem. They may also think we don't have a good 'answer' yet of what to do in these situations, so may hesitate to give 'accept there's uncertainty but don't be paralysed by it' advice that you and I would. Maybe these issues are an open problem we should try and figure out better before pressing on.

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-04T15:04:55.636Z · EA · GW

Belatedly:

I read the stakes here differently to you. I don't think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to 'everything which isn't longtermism'. At least, that isn't my interest, and I think the literature has focused on AMF etc. more as a salient example to explore the concepts, rather than an important subject to apply them to.

The AMF discussions around cluelessness in the OP are intended as a toy example - if you like, deliberating purely on "is it good or bad to give to AMF versus this particular alternative?" instead of "Out of all options, should it be AMF?" Parallel to you, although I do think (per OP) AMF donations are net good, I also think (per the contours of your reply) it should be excluded as a promising candidate for the best thing to donate to: if what really matters is how the deep future goes, and the axes of this accessible at present are things like x-risk, interventions which are only tangentially related to these are so unlikely to be best they can be ruled out ~immediately.

So if that isn't a main motivation, what is? Perhaps something like this:

1) How to manage deep uncertainty over the long-run ramifications of one's decisions is a challenge across EA-land - particularly acute for longtermists, but also elsewhere: most would care about risks that in the medium term a charitable intervention could prove counter-productive. In most cases, these mechanisms for something to 'backfire' are fairly trivial, but how seriously credible ones should be investigated is up for grabs.

Although "just be indifferent if it is hard to figure out" is a bad technique which finds little favour, I see a variety of mistakes in and around here. E.g.:

a) People not tracking when the ground of appeal for an intervention has changed. Although I don't see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS, particularly an 'inverse logic of the larder' (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced". 

Even if so, it wouldn't follow that the best thing to do would be to be as carnivorous as possible. There are also various lines of response. However, one is to say that the key objective of animal advocacy is to encourage greater concern about animal welfare, so that this can ramify through to benefits in the medium term. Yet if this is the rationale, metrics of 'animal suffering averted per $' remain prominent despite having minimal relevance. If the aim of the game is attitude change, things like shelters and companion animals, over changes in factory farmed welfare, start looking a lot more credible again in virtue of their greater salience.

b) Early (or motivated) stopping across crucial considerations. There are a host of ramifications to population growth which point in both directions (e.g. climate change, economic output, increased meat consumption, larger aggregate welfare, etc.) Although very few folks rely on these when considering interventions like AMF (but cf.) they are often being relied upon by those suggesting interventions specifically targeted to fertility: enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem), or reducing rates of abortion (e.g. less abortion --> more people with worthwhile lives --> greater total utility).

Discussions here are typically marred by proponents either completely ignoring considerations on the 'other side' of the population growth question, or giving very unequal time to them/sheltering behind uncertainty (e.g. "Considerations X, Y, and Z all tentatively support more population growth, admittedly there's A, B, C, but we do not cover those in the interests of time - yet, if we had, they probably would tentatively oppose more population growth"). 

2) Given my fairly deflationary OP, I don't think these problems are best described as cluelessness (versus attending to resilient uncertainty and VoI in fairly orthodox evaluation procedures). But although I think I'm right, I don't think I'm obviously right: if orthodox approaches struggle here, less orthodox ones with representors, incomparability, or other features may be what should be used in decision-making (including when we should make decisions versus investigate further). If so, then this reasoning looks like a fairly distinct species which could warrant its own label.

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-03T16:11:03.423Z · EA · GW

I may be missing the thread, but the 'ignoring' I'd have in mind for resilient cluelessness would be straight-ticket precision, which shouldn't be intransitive (or have issues with principle of indifference).

E.g. Say I'm sure I can make no progress on (e.g.) the moral weight of chickens versus humans in moral calculation - maybe I'm confident there's no fact of the matter, or interpretation of the empirical basis is beyond our capabilities forevermore, or whatever else.

Yet (I urge) I should still make a precise assignment (which is not obliged to be indifferent/symmetrical), and I can still be in reflective equilibrium between these assignments even if I'm resiliently uncertain. 

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-03T15:53:10.083Z · EA · GW

Mea culpa. I've belatedly 'fixed' it by putting it into text.

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-03T15:52:04.244Z · EA · GW

The issue is more the being stuck than the range: say it is (0.4, 0.6) rather than (0, 1), you'd still be inert. Vallinder (2018) discusses this extensively, including issues around infectiousness and generality.

Comment by Gregory_Lewis on Thoughts on whether we're living at the most influential time in history · 2020-11-11T07:56:14.210Z · EA · GW

For my part, I'm more partial to 'blaming the reader', but (evidently) better people mete out better measure than I in turn.

Insofar as it goes, I think the challenge (at least for me) is that qualitative terms can cover multitudes (or orders of magnitude) of precision. I'd take ~0.3% to be 'significant' credence for some values of significant. 'Strong', 'compelling', or 'good' arguments could be an LR of 2 (after all, RCT confirmation can be ~3) or 200.

I also think quantitative articulation would help the reader (or at least this reader) better benchmark the considerations here. Taking the rough posterior of 0.1% and prior of 1 in 100 million, this implies a likelihood ratio of ~100,000 - loosely, ultra-decisive evidence. If we partition out the risk-based considerations (which the discussion seems to set as 'less than decisive', so <100), the other considerations (perhaps mostly those in S5) give you an LR of >~1,000 - loosely, very decisive evidence.
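The arithmetic, spelled out (my own sketch; the 1-in-100-million prior, ~0.1% posterior, and the <100 'risk-based' LR are just the figures under discussion):

```python
def implied_bayes_factor(prior, posterior):
    """Bayes factor implied by moving from `prior` to `posterior` (both probabilities)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

total_lr = implied_bayes_factor(1e-8, 1e-3)
print(f"implied overall LR: ~{total_lr:,.0f}")                        # ~100,000
print(f"LR left for the S5 considerations: ~{total_lr / 100:,.0f}")   # ~1,000 if the risk-based LR is <100
```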

Yet the discussion of the considerations in S5 doesn't give the impression we should conclude they give us 'massive updates'. You note there are important caveats to these considerations; you say in summing up that these arguments are 'far from watertight'; and I also inferred the sort of criticisms given in S3 around our limited reasoning ability and scepticism of informal arguments would apply here too. Hence my presumption these other considerations, although more persuasive than object level arguments around risks, would still end up below the LR ~100 for 'decisive' evidence, rather than much higher.

Another way this would help would be illustrating the uncertainty. Given some indicative priors you note vary by ten orders of magnitude, the prior is not just astronomical but extremely uncertain. By my lights, the update doesn't greatly reduce our uncertainty (and could compound it, given challenges in calibrating around  very high LRs). If the posterior odds could be 'out by 100 000x either way' the central estimate being at ~0.3%  could still give you (given some naive log-uniform) 20%+ mass distributed at better than even odds of HH. 

The moaning about hiding the ball arises from the sense this numerical articulation reveals (I think) some powerful objections the more qualitative treatment obscures. E.g.

  • Typical HH proponents are including considerations around earliness/single planet/etc. in their background knowledge/prior when discussing object level risks. Noting that the prior becomes astronomically adverse when we subtract these out of background knowledge, and so the object level case for (e.g.) AI risk can't possibly be enough to carry the day alone, seems a bait-and-switch: you agree the prior becomes massively less astronomical when we include single planet etc. in background knowledge, and in fact things like 'we live on only one planet' are in our background knowledge (and were being assumed at least tacitly by HH proponents). 
  • The attempt to 'bound' object level arguments by their LR (e.g. "Well, these are informal, and it looks fishy, etc. so it is hard to see how you can get LR >100 from these") doesn't seem persuasive when your view is that the set of germane considerations (all of which seem informal, have caveats attached, etc.) in concert are giving you an LR of ~100 000 or more. If this set of informal considerations can get you more than half way from the astronomical prior to significant credence, why be so sure additional ones (e.g.) articulating a given danger can't carry you the rest of the way? 
  • I do a lot of forecasting, and I struggle to get a sense of what priors of 1/100M or decisive evidence to the tune of LR 1,000 would look like in 'real life' scenarios. Numbers this huge (where you end up virtually 'off the end of the tail' of your stipulated prior) raise worries about consilience (cf. "I guess the sub-prime mortgage crisis was a 10 sigma event"), but moreover pragmatic defeat: there seems a lot to distrust in an epistemic procedure along the lines of "With anthropics given stipulated subtracted background knowledge we end up with an astronomically minute prior (where we could be off by many orders of magnitude), but when we update on adding back in elements of our actual background knowledge this shoots up by many orders of magnitude (but we are likely still off by many orders of magnitude)". Taking it at face value would mean a minute update to our 'pre-theoretic prior' on the topic before embarking on this exercise (providing these overlapped and were not as radically uncertain, varying by no more than a couple rather than many orders of magnitude). If we suspect (which I think we should) this procedure of partitioning out background knowledge into update steps which approach log-log variance and where we have minimal calibration is less reliable than using our intuitive gestalt over our background knowledge as a whole, we should discount its deliverances still further.

Comment by Gregory_Lewis on Thoughts on whether we're living at the most influential time in history · 2020-11-10T07:23:42.701Z · EA · GW

But what is your posterior? Like Buck, I'm unclear whether your view is that the central estimate should be (e.g.) 0.1% or 1 in 1 million. I want to push on this because if your own credences are inconsistent with your argument, the reasons why seem both important to explore and to make clear to readers, who may be misled into taking this at 'face value'.

From this passage on page 13, I guess a generous estimate (/upper bound) is something like 1 in 1 million for 'among the most important million people':

[W]e can assess the quality of the arguments given in favour of the Time of Perils or Value Lock-in views, to see whether, despite the a priori implausibility and fishiness of HH, the evidence is strong enough to give us a high posterior in HH. It would take us too far afield to discuss in sufficient depth the arguments made in Superintelligence, or Pale Blue Dot, or The Precipice. But it seems hard to see how these arguments could be strong enough to move us from a very low prior all the way to significant credence in HH. As a comparison, a randomised controlled trial with a p-value of 0.05, under certain reasonable assumptions, gives a Bayes factor of around 3 in favour of the hypothesis; a Bayes factor of 100 is regarded as ‘decisive’ evidence. In order to move from a prior of 1 in 100 million to a posterior of 1 in 10, one would need a Bayes factor of 10 million — extraordinarily strong evidence.

I.e. a prior of ~1 in 100 million (which is less averse than others you moot earlier), and a Bayes factor < 100 (i.e. we should not think the balance of reason, all considered, is 'decisive' evidence), so you end up at best at ~1 in 1 million. If this argument is right, you can be 'super confident' that giving a credence of 0.1% is wrong (out by a ratio of >~1,000, the difference between ~1% and 91%), and vice-versa.
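In odds form (a sketch of the same point; the 1-in-100-million prior and the 'decisive' Bayes factor of 100 come from the quoted passage):

```python
def posterior(prior, bayes_factor):
    """Posterior probability from a prior probability and a Bayes factor, via odds."""
    post_odds = (prior / (1 - prior)) * bayes_factor
    return post_odds / (1 + post_odds)

print(posterior(1e-8, 100))      # ~1e-6: the generous upper bound read off the passage
print(posterior(1e-8, 100_000))  # ~1e-3: what a 0.1% credence would require instead
```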

Yet I don't think your credence on 'this is the most important century' is 1 in 1 million. Among other things, it seems to imply we can essentially dismiss things like short TAI timelines, Bostrom-Yudkowsky AI accounts, etc., as these are essentially upper-bounded by the 1 in 1 million credence above.*

So (presuming I'm right and you don't place negligible credence on these things) I'm not sure how these things can be in reflective equilibrium.

1: 'Among the most important million people' and 'this is the most important century' are not the same thing, and so perhaps one has a (much) higher prior on the latter than the former. But if the action really was here, then the precisification of 'hinge of history' as the former claim seems misguided: "Oh, this being the most important century could have significant credence, but this other sort-of related proposition nonetheless has an astronomically adverse prior" confuses rather than clarifies.

2: Another possibility is there are sources of evidence which give us huge updates, even if the object level arguments in (e.g.) Superintelligence, The Precipice etc. are not among them. Per the linked conversation, maybe earliness gives a huge shift up from the astronomically adverse prior, so this plus the weak object level evidence gets you to lowish but not negligible credence. 

Whether cashed out via prior or update, it seems important to make such considerations explicit, as the true case in favour of HH would include these considerations too. Yet the discussion of 'how far you should update' on p11-13ish doesn't mention these massive adjustments, instead noting reasons to be generally sceptical (e.g. fishiness) and the informal/heuristic arguments for object level risks should not be getting you Bayes factors ~100 or more. This seems to be hiding the ball if in fact your posterior is ultimately 1000x or more your astronomically adverse prior, but not for reasons which are discussed (and so a reader may neglect to include when forming their own judgement).

 

*: I think there's also a presumptuous philosopher-type objection lurking here. Folks could (e.g.) have used a similar argument to essentially rule out any x-risk from nuclear winter before any scientific analysis, as this implies significant credence in HH, which the argument above essentially rules out. Similar to 'using anthropics to hunt', something seems to be going wrong where the mental exercise of estimating potentially-vast future populations can also allow us to infer the overwhelmingly probable answers for disparate matters in climate modelling, AI development, the control problem, civilisation recovery, and so on.

Comment by Gregory_Lewis on Thoughts on whether we're living at the most influential time in history · 2020-11-05T20:17:59.146Z · EA · GW

“It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.” This seems pretty strange as an argument to me. Being one-in-a-thousand is a thousand times less likely than being one-in-a-million, so of course if you think the evidence pushes you to thinking that you’re one-in-a-million, it needn’t push you all the way to thinking that you’re one-in-a-thousand. This seems important to me. Yes, you can give me arguments for thinking that we’re (in expectation at least) at an enormously influential time - as I say in the blog post and the comments, I endorse those arguments! I think we should update massively away from our prior, in particular on the basis of the current rate of economic growth. (My emphasis)


Asserting an astronomically adverse prior, then a massive update, yet being confident you're in the right ballpark re. orders of magnitude does look pretty fishy though. For a few reasons:

First, (in the webpage version you quoted) you don't seem sure of a given prior probability, merely that it is 'astronomical': yet astronomical numbers (including variations you note about whether to multiply by how many accessible galaxies there are or not, etc.) vary by substantially more than three orders of magnitude - you note two possible prior probabilities (of being among the million most influential people) of 1 in a million trillion (10^-18) and 1 in a hundred million (10^-8) - a span of 10 orders of magnitude. 

It seems hard to see how a Bayesian update from this (seemingly) extremely wide prior would give a central estimate at a (not astronomically minute) value, yet confidently rule against values 'only' 3 orders of magnitude higher (a distance a ten millionth the width of this implicit span in prior probability). [It also suggests the highest VoI is to winnow this huge prior range, rather than spending effort evaluating considerations around the likelihood ratio]

Second, whatever (very) small value we use for our prior probability, getting to non-astronomical posteriors implies likelihood ratios/Bayes factors which are huge. From (say) 10^-8 to 10^-4 is a factor of 10,000. As you say in your piece, this is much much stronger than the benchmark for decisive evidence of ~100. It seems hard to say (e.g.) evidence from the rate of economic growth is 'decisive' in this sense, and so it is hard to see how, in concert with other heuristic considerations, you get 10-100x more confirmation (indeed, your subsequent discussion seems to supply many defeaters to exactly this). Further, similar to worries about calibration out on the tail, it seems unlikely many of us can accurately assess LRs > 100 which are not direct observations within orders of magnitude.

Third, priors should be consilient, and can be essentially refuted by posteriors. A prior that gets surprised to the tune of 1-in-millions should get hugely penalized versus any alternative (including naive intuitive gestalts) which does not. It seems particularly costly as non-negligible credences in (e.g.) nuclear winter, the industrial revolution being crucial, etc. facially represent this prior being surprised by '1 in large X' events at a rate much greater than 1/X.

To end up with not-vastly lower posteriors than your interlocutors (presuming Buck's suggestion of 0.1% is fair, and not something like 1 in 1 million), it seems one asserts a much lower prior which is mostly (but not completely) cancelled out by a much stronger update step. This prior seems to range over many orders of magnitude, yet the posterior does not - and it is hard to see where the orders of magnitude of better resolution are arising from (if we knew for sure the prior is 10^-12 versus knowing for sure it is 10^-8, shouldn't the posterior shift a lot between the two cases?)

It seems more reasonable to say 'our' prior is rather some mixed gestalt on considering the issue as a whole, and the concern about base-rates etc. should be seen as an argument for updating this downwards, rather than a bid to set the terms of the discussion.

Comment by Gregory_Lewis on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T20:39:04.637Z · EA · GW

I agree with this in the abstract, but for the specifics of this particular case, do you in fact think that online mobs / cancel culture / groups who show up to protest your event without warning should be engaged with on a good faith assumption? I struggle to imagine any of these groups accepting anything other than full concession to their demands, such that you're stuck with the BATNA regardless.
 

I think so. 

In the abstract, 'negotiating via ultimatum' (e.g. "you must cancel the talk, or I will do this") does not mean one is acting in bad faith. Alice may foresee there is no bargaining frontier, but is informing you what your BATNA looks like and gives you the opportunity to consider whether 'giving in' is nonetheless better for you (this may not be very 'nice', but it isn't 'blackmail'). A lot turns on whether her 'or else' is plausibly recommended by the lights of her interests (e.g. she would do these things if we had already held the event/she believed our pre-commitment to do so) or she is threatening spiteful actions where their primary value is her hope they alter our behaviour (e.g. she would at least privately wish she didn't have to 'follow through' if we defied her). 

The reason these are important to distinguish is 'folk game theory' gives a pro tanto reason to not give in in the latter case, even if doing so is better than suffering the consequences (as you deter future attempts to coerce you). But not in the former one, as Alice's motivation to retaliate does not rely on the chance you may acquiesce to her threats, and so she will not 'go away' after you've credibly demonstrated to her you will never do this.

On the particular case I think some of it was plausibly bad faith (i.e. if a major driver was a 'fleet in being' threat that people would antisocially disrupt the event), but a lot of it probably wasn't: "people badmouthing/thinking less of us for doing this" or (as Habryka put it) the 'very explicit threat' of an organisation removing their affiliation from EA Munich are all credibly/probably good-faith warnings, even if the only way to avoid them would have been complete concession. (There are lots of potential reasons I would threaten to stop associating with someone or something where the only way for me to relent is their complete surrender.)

(I would be cautious about labelling things as mobs or cancel culture.)


[G]iven that she's taking actions that destroy value for Bob without generating value for Alice (except via their impact on Bob's actions), I think it is fine to think of this as a threat. (I am less attached to the bully metaphor -- I meant that as an example of a threat.)

Let me take a more in-group example readers will find sympathetic.

When the NYT suggested it would run an article using Scott's legal name, many of his supporters responded by complaining to the editor, organising petitions, cancelling their subscriptions (and encouraging others to do likewise), trying to coordinate sources/public figures to refuse access to NYT journalists, and so on. These are straightforwardly actions which 'destroy value' for the NYT, are substantially motivated to try and influence its behaviour, and were an ultimatum to boot (i.e. the only way the NYT can placate this 'online mob' is to fully concede on not using Scott's legal name).

Yet presumably this strategy was not predicated on 'only we are allowed to (or smart enough to) use game theory, so we can expect the NYT to irrationally give in to our threats when they should be ostentatiously doing exactly what we don't want them to do to demonstrate they won't be bullied'. For although these actions are 'threats', they are warnings/ good faith/ non-spiteful, as these responses are not just out of hope to coerce: these people would be minded to retaliate similarly if they only found out NYT's intention after the article had been published. 

Naturally the hope is that one can resolve conflict by a meeting of the minds: we might hope we can convince Alice to see things our way; and the NYT probably hopes the same. But if the disagreement prompting conflict remains, we should be cautious about how we use the word threat, especially in equivocating between commonsense use of the term (e.g. "I threaten to castigate Charlie publicly if she holds a conference on holocaust denial") and the subspecies where folk game theory - and our own self-righteousness - strongly urges us to refute (e.g. "Life would be easier for us at the NYT if we acquiesced to those threatening to harm our reputation and livelihoods if we report things they don't want us to. But we will never surrender the integrity of our journalism to bullies and blackmailers.")

Comment by Gregory_Lewis on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T11:35:03.478Z · EA · GW

Another case where 'precommitment  to refute all threats' is an unwise strategy (and a case more relevant to the discussion, as I don't think all opponents to hosting a speaker like Hanson either see themselves or should be seen as bullies attempting coercion) is where your opponent is trying to warn you rather than trying to blackmail you. (cf. 1, 2)

Suppose Alice sincerely believes some of Bob's writing is unapologetically misogynistic. She believes it is important one does not give misogynists a platform and implicit approbation. Thus she finds hosting Bob abhorrent, and is dismayed that a group at her university is planning to do just this. She approaches this group, making clear her objections and stating her intention, if this goes ahead, to (e.g.) protest this event, stridently criticise the group in the student paper for hosting him, petition the university to withdraw affiliation, and so on.

This could be an attempt to bully (where usual game theory provides a good reason to refuse to concede anything on principle). But it also could not be: Alice may be explaining what responses she would make to protect her interests which the groups planned action would harm, and hoping to find a better negotiated agreement for her and the EA group besides "They do X and I do Y". 

It can be hard to tell the difference, but some elements in this example speak against Alice being a bully wanting to blackmail the group to get her way. First is the plausibility of her interests recommending these actions to her even if they had no deterrent effect whatsoever (i.e. she'd do the same if the event had already happened). Second, the actions she intends fall roughly within the 'fair game' of how one can retaliate against those doing something they're allowed to do which you deem to be wrong.

Alice is still not a bully even if her motivating beliefs re. Bob are both completely mistaken and unreasonable. She's also still not a bully even if Alice's implied second-order norms are wrong (e.g. maybe the public square would be better off if people didn't stridently object to hosting speakers based on their supposed views on topics they are not speaking upon, etc.) Conflict is typically easy to navigate when you can dictate to your opponent what their interests should be and what they can license themselves to do. Alas such cases are rare.

It is extremely important not to respond to Alice as if she were a bully if in fact she is not, for two reasons. First, if she is acting in good faith, pre-committing to refuse any compromise for 'do not give in to bullying' reasons means one always ends up at the respective BATNAs even if there were mutually beneficial compromises to be struck. Maybe there is no good compromise with Alice this time, but there may be the next time one finds oneself at cross-purposes.

Second, wrongly presuming bad faith for Alice seems apt to induce her to make a symmetrical mistake presuming bad faith for you. To Alice, malice explains well why you were unwilling to even contemplate compromise, why you considered yourself obliged out of principle  to persist with actions that harm her interests, and why you call her desire to combat misogyny bullying and blackmail. If Alice also thinks about these things through the lens of game theory (although perhaps not in the most sophisticated way), she may reason she is rationally obliged to retaliate against you (even spitefully) to deter you from doing harm again. 

The stage is set for continued escalation. Presumptive bad faith is pernicious, and can easily lead to martyring oneself needlessly on the wrong hill. I also note that 'leaning into righteous anger' or 'taking oneself as justified in thinking the worst of those opposed to you' are not widely recognised as promising approaches in conflict resolution, bargaining, or negotiation.

Comment by Gregory_Lewis on What actually is the argument for effective altruism? · 2020-09-27T17:09:03.912Z · EA · GW

This isn't much more than a rotation (or maybe just a rephrasing), but:

When I offer a 10-second-or-less description of Effective Altruism, it is hard to avoid making it sound platitudinous. Things like "using evidence and reason to do the most good", or "trying to find the best things to do, then doing them", are things I can imagine the typical person nodding along with, but then wondering what the fuss is about ("Sure, I'm also a fan of doing more good rather than less good - aren't we all?"). I feel I need to elaborate with a distinctive example (e.g. "I left clinical practice because I did some amateur health econ on how much good a doctor does, and thought I could make a greater contribution elsewhere") for someone to get a good sense of what I am driving at.

I think a related problem is the 'thin' version of EA can seem slippery when engaging with those who object to it. "If indeed intervention Y was the best thing to do, we would of course support intervention Y" may (hopefully!) be true, but is seldom the heart of the issue. I take it most common objections are not against the principle but the application (I also suspect this reply may inadvertently annoy an objector, given it can paint them as - bizarrely - 'preferring less good to more good').

My best try at what makes EA distinctive is a summary of what you spell out with spread, identifiability, etc.: that there are very large returns to reason for beneficence (maybe 'deliberation' instead of 'reason', or whatever). I think the typical person does "use reason and evidence to do the most good", and can be said to be doing some sort of search for the best actions. I think the core of EA (at least the 'E' bit) is the appeal that people should do a lot more of this than they would otherwise - as, if they do, their beneficence would tend to accomplish much more.

Per OP, motivating this is easier said than done. The best case is for global health, as there is a lot more (common sense) evidence one can point to about some things being a lot better than others, and these object level matters a hypothetical interlocutor is fairly likely to accept also offer support for the 'returns to reason' story. For most other cause areas, the motivating reasons are typically controversial, and the (common sense) evidence is scant-to-absent. Perhaps the best moves here would be pointing to these as salient considerations which plausibly could dramatically change one's priorities, and so exploring to uncover these is better than exploiting after more limited deliberation (but cf. cluelessness).

 

Comment by Gregory_Lewis on Challenges in evaluating forecaster performance · 2020-09-12T13:33:39.006Z · EA · GW

I'm afraid I'm also not following. Take an extreme case (which is not that extreme, given I think the average number of forecasts per forecaster per question on GJO is 1.something). Alice predicts a year out P(X) = 0.2 and never touches her forecast again, whilst Bob predicts P(X) = 0.3, but decrements proportionately as time elapses. Say X doesn't happen (and say the right ex ante probability a year out was indeed 0.2). Although Alice > Bob on the initial forecast (and so if we just scored that day she would be better), if we carry forward Bob overtakes her overall [I haven't checked the maths for this example, but we can tweak initial forecasts so he does].

As time elapses, Alice's forecast steadily diverges from the 'true' ex ante likelihood, whilst Bob's converges to it. A similar story applies if new evidence emerges which dramatically changes the probability, and Bob updates on it whilst Alice doesn't. This seems roughly consonant with things like the stock market - trading off month-old (or older) prices rather than current prices seems unlikely to go well.
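A quick check of the bracketed caveat above, under one stipulation of my own (Bob's 0.3 decays linearly to zero over the remaining year, daily forecasts are carried forward, and X does not occur; nothing here comes from GJO itself):

```python
import numpy as np

days = 365
alice = np.full(days, 0.2)                 # forecasts once and never updates
bob = 0.3 * (1 - np.arange(days) / days)   # starts higher, decays linearly towards 0

# X does not happen, so each day's Brier score is simply the forecast squared.
print(f"Alice's time-averaged Brier score: {np.mean(alice ** 2):.3f}")
print(f"Bob's time-averaged Brier score:   {np.mean(bob ** 2):.3f}")
```

Under this stipulation Bob's time-averaged score (~0.030) already beats Alice's (0.040) despite his worse initial forecast, so the starting values would not even need tweaking for him to overtake her.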

Comment by Gregory_Lewis on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-04T16:19:22.762Z · EA · GW

FWIW I agree with Owen. I agree the direction of effect supplies a pro tanto consideration which will typically lean in favour of other options, but it is not decisive (in addition to the scenarios he notes, some people have pursued higher degrees concurrently with RSP).

So I don't think you need to worry about potentially leading folks astray by suggesting this as an option for them to consider - although, naturally, they should carefully weigh their options up (including considerations around which sorts of career capital are most valuable for their longer term career planning).

Comment by Gregory_Lewis on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T12:34:13.124Z · EA · GW

As such, blackmail feels like a totally fair characterization [of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).]

As your subsequent caveat implies, whether blackmail is a fair characterisation turns on exactly how substantial this part was. If in fact the decision was driven by non-blackmail considerations, the (great-)grandparent's remarks about it being bad to submit to blackmail are inapposite.

Crucially, (q.v. Daniel's comment), not all instances where someone says (or implies), "If you do X (which I say harms my interests), I'm going to do Y (and Y harms your interests)" are fairly characterised as (essentially equivalent to) blackmail. To give a much lower resolution of Daniel's treatment, if (conditional on you doing X) it would be in my interest to respond with Y independent of any harm it may do to you (and any coercive pull it would have on you doing X in the first place), informing you of my intentions is credibly not a blackmail attempt, but a better-faith "You do X then I do Y is our BATNA here, can we negotiate something better?" (In some treatments these are termed warnings versus threats, or using terms like 'spiteful', 'malicious' or 'bad faith' to make the distinction).

The 'very explicit threat' of disassociation you mention is a prime example of 'plausibly (/prima facie) not-blackmail'. There are many credible motivations to (e.g.) renounce (or denounce) a group which invites a controversial speaker you find objectionable, independent from any hope that threatening this makes them ultimately resile from running the event after all. So too 'trenchantly criticising you for holding the event', 'no longer supporting your group', 'leaving in protest (and encouraging others to do the same)', etc. Any or all of these might be wrong for other reasons - but (again, per Daniel) 'they're trying to blackmail us!' is not necessarily one of them.

(Less-than-coincidentally, the above are also acts of protest which are typically considered 'fair game', versus disrupting events, intimidating participants, campaigns to get someone fired, etc. I presume neither of us take various responses made to the NYT when they were planning to write an article about Scott to be (morally objectionable) attempts to blackmail them, even if many of them can be called 'threats' in natural language).

Of course, even if something could plausibly not be a blackmail attempt, it may in fact be exactly this. I may posture that my own interests would drive me to Y, but I would privately regret having to 'follow through' with this after X happens; or I may pretend my threat of Y is 'only meant as a friendly warning'. Yet although our counterparty's mind is not transparent to us, we can make reasonable guesses.

It is important to get this right, as the right strategy to deal with threats is a very wrong one to deal with warnings. If you think I'm trying to blackmail you when I say "If you do X, I will do Y", then all the usual stuff around 'don't give in to the bullies' applies: by refuting my threat, you deter me (and others) from attempting to bully you in future. But if you think I am giving a good-faith warning when I say this, it is worth looking for a compromise. Being intransigent as a matter of policy - at best - means we always end up at our mutual BATNAs even when there were better-for-you negotiated agreements we could have reached.

At worst, it may induce me to make the symmetrical mistake - wrongly believing your behaviour is in bad faith: that your real reasons for doing X, and for being unwilling to entertain the idea of compromise to mitigate the harm X will do to me, are that you're actually 'out to get me'. Game theory will often recommend retaliation as a way of deterring you from doing this again. So the stage is set for escalating conflict.

Directly: Widely across the comments here you have urged for charity and good faith to be extended to evaluating Hanson's behaviour which others have taken exception to - that adverse inferences (beyond perhaps "inadvertently causes offence") are not only mistaken but often indicate a violation of discourse norms vital for EA-land to maintain. I'm a big fan of extending charity and good faith in principle (although perhaps putting this into practice remains a work in progress for me). Yet you mete out much more meagre measure to others than you demand from them in turn, endorsing fervid hyperbole that paints those who expressed opposition to Munich inviting Hanson as bullies trying to blackmail them, and those sympathetic to the decision they made as selling out. Beyond this being normatively unjust, it is also prudentially unwise - presuming bad faith in those who object to your actions is a recipe for making a lot of enemies you didn't need to, especially in already-fractious intellectual terrain.

You could still be right - despite the highlighted 'very explicit threat' which is also very plausibly not blackmail, despite the other 'threats' alluded to which seem also plausibly not blackmail and 'fair game' protests for them to make, and despite what the organisers have said (publicly) themselves, the full body of evidence should lead us to infer what really happened was bullying which was acquiesced to. But I doubt it.

Comment by Gregory_Lewis on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T13:34:31.257Z · EA · GW

I'm fairly sure the real story is much better than that, although still bad in objective terms: In culture war threads, the typical norms re karma roughly morph into 'barely restricted tribal warfare'. So people have much lower thresholds both to slavishly upvote their 'team', and to downvote the opposing one.

Comment by Gregory_Lewis on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-30T23:34:56.885Z · EA · GW

Talk of 'blackmail' (here and elsethread) is substantially missing the mark. To my understanding, there were no 'threats' being acquiesced to here.

If some party external to the Munich group pressured them into cancelling the event with Hanson (and without this, they would want to hold the event), then the standard story of 'if you give in to the bullies you encourage them to bully you more' applies.

Yet unless I'm missing something, the Munich group changed their minds of their own accord, and not in response to pressure from third parties. Whether or not that was a good decision, it does not signal they're vulnerable to 'blackmail threats'. If anything, they've signalled the opposite by not reversing course after various folks castigated them on Twitter etc.

The distinction between 'changing our minds on the merits' and 'bowing to public pressure' can get murky (e.g. public outcry could genuinely prompt someone to change their mind that what they were doing was wrong after all, but people will often say this insincerely when what really happened is they were cowed by opprobrium). But again, the apparent absence of people pressuring Munich to 'cancel Hanson' makes this moot.

(I agree with Linch that the incentives look a little weird here given if Munich had found out about work by Hanson they deemed objectionable before they invited him, they presumably would not have invited him and none of us would be any the wiser. It's not clear "Vet more carefully so you don't have to rescind invitations to controversial speakers (with attendant internet drama) rather than not inviting them in the first place" is the lesson folks would want to be learned from this episode.)

Comment by Gregory_Lewis on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-08-07T14:02:26.822Z · EA · GW

I recall Hsiung being in favour of conducting disruptive protests against EAG 2015:

I honestly think this is an opportunity. "EAs get into fight with Elon Musk over eating animals" is a great story line that would travel well on both social and possibly mainstream media.
...

Organize a group. Come forward with an initially private demand (and threaten to escalate, maybe even with a press release). Then start a big fight if they don't comply.

Even if you lose, you still win because you'll generate massive dialogue!

It is unclear whether the motivation was more 'blackmail threats to stop them serving meat' or 'as Elon Musk will be there we can co-opt this to raise our profile'. Whether Hsiung calls himself an EA or not, he evidently missed the memo on 'eschew narrow minded obnoxious defection against others in the EA community'.

For similar reasons, it seems generally wiser for a community not to help people who previously wanted to throw it under the bus.

Comment by Gregory_Lewis on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-23T23:19:03.253Z · EA · GW

My reply is a mix of the considerations you anticipate. With apologies for brevity:

  • It's not clear to me whether avoiding anchoring favours (e.g.) round numbers or not. If my listener, in virtue of being human, is going to anchor on whatever number I provide them, I might as well anchor them on a number I believe to be more accurate.
  • I expect there are better forms of words for my examples which can better avoid the downsides you note (e.g. maybe saying 'roughly 12%' instead of '12%' still helps, even if you give a later articulation).
  • I'm less fussed about precision re. resilience (e.g. 'I'd typically expect drift of several percent from this with a few more hours to think about it' doesn't seem much worse than 'the standard error of this forecast is 6% versus me with 5 hours more thinking time' or similar). I'd still insist something at least pseudo-quantitative is important, as verbal riders may not put the listener in the right ballpark (e.g. does 'roughly' 10% pretty much rule out it being 30%?)
  • Similar to the 'trip to the shops' example in the OP, there's plenty of cases where precision isn't a good way to spend time and words (e.g. I could have counter-productively littered many of the sentences above with precise yet non-resilient forecasts). I'd guess there's also cases where it is better to sacrifice precision to better communicate with your listener (e.g. despite the rider on resilience you offer, they will still think '12%' is claimed to be accurate to the nearest percent, but if you say 'roughly 10%' they will better approximate what you have in mind). I still think when the stakes are sufficiently high, it is worth taking pains on this.
Comment by Gregory_Lewis on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-23T22:50:31.876Z · EA · GW

I had in mind the information-theoretic sense (per Nix). I agree the 'first half' is more valuable than the second half, but I think this is better parsed as diminishing marginal returns to information.

Very minor, re. child thread: You don't need to calculate numerically, as $\log_2(100) = 2\log_2(10)$, and $\log_2(1000) = 3\log_2(10)$. Admittedly the numbers (or maybe the remark in the OP generally) weren't chosen well, given 'number of decimal places' seems the more salient difference than the squaring (e.g. per-thousandths does not have double the information of per-cents, but 50% more).
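For what it's worth, a quick numerical check of the same point (a minimal sketch of my own; the granularities are just the ones discussed above):

```python
import math

def bits(n_levels: int) -> float:
    """Information (in bits) in a report distinguishing n_levels equally likely values."""
    return math.log2(n_levels)

print(bits(100))                      # per-cents:       ~6.64 bits
print(bits(1000))                     # per-thousandths: ~9.97 bits, i.e. 1.5x (50% more), not double
print(2 * bits(100), bits(100 ** 2))  # squaring the number of levels is what doubles the information
```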

Comment by Gregory_Lewis on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-23T22:27:36.206Z · EA · GW

It's fairly context dependent, but I generally remain a fan.

There's a mix of ancillary issues:

  • There could be a 'why should we care what you think?' if EA estimates diverge from consensus estimates, although I imagine folks tend to gravitate to neglected topics etc.
  • There might be less value in 'relative to self-ish' accounts of resilience: major estimates in a front facing report I'd expect to be fairly resilient, and so less "might shift significantly if we spent another hour on it".
  • Relative to some quasi-ideal benchmark seems valuable though: E.g. "Our view re. X is resilient, but we have a lot of Knightian uncertainty, so we're only 60% sure we'd be within an order of magnitude of X estimated by a hypothetical expert panel/liquid prediction market/etc."
  • There might be better or worse ways to package this given people are often sceptical of any quantitative assessment of uncertainty (at least in some domains). Perhaps something like 'subjective confidence intervals' (cf.), although these aren't perfect.

But ultimately, if you want to tell someone an important number you aren't sure about, it seems worth taking pains to be precise, both on it and its uncertainty.

Comment by Gregory_Lewis on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2020-07-15T17:30:31.706Z · EA · GW

It is true that given the primary source (presumably this), the implication is that rounding supers to 0.1 hurt them, but 0.05 didn't:

To explore this relationship, we rounded forecasts to the nearest 0.05, 0.10, or 0.33 to see whether Brier scores became less accurate on the basis of rounded forecasts rather than unrounded forecasts. [...]
For superforecasters, rounding to the nearest 0.10 produced significantly worse Brier scores [by implication, rounding to the nearest 0.05 did not]. However, for the other two groups, rounding to the nearest 0.10 had no influence. It was not until rounding was done to the nearest 0.33 that accuracy declined.

Prolonged aside:

That said, despite the absent evidence I'm confident accuracy with superforecasters (and ~anyone else - more later, and elsewhere) does numerically drop with rounding to 0.05 (or anything else), even if it has not been demonstrated to be statistically significant:

From first principles, if the estimate has signal, shaving bits of information from it by rounding should make it less accurate (and it obviously shouldn't make it more accurate, which pretty reliably pins the upper bound of the effect at zero).

Further, there seems very little motivation for the idea we have n discrete 'bins' of probability across the number line (often equidistant!) inside our heads, and as we become better forecasters n increases. That we have some standard error to our guesses (which ~smoothly falls with increasing skill) seems significantly more plausible. As such the 'rounding' tests should be taken as loose proxies to assess this error.

Yet if the error process is this, rather than 'n real values + jitter no more than 0.025', undersampling and aliasing should introduce a further distortion. Even if you think there really are n bins someone can 'really' discriminate between, intermediate values are best seen as a form of anti-aliasing ("Think it is more likely 0.1 than 0.15, but not sure, maybe it's 60/40 between them so I'll say 0.12") which rounding ablates. In other words 'accurate to the nearest 0.1' does not mean the second decimal place carries no information.

Also, if you are forecasting distributions rather than point estimates (cf. Metaculus), said forecast distributions typically imply many intermediate value forecasts.

Empirically, there's much to suggest a T2 error explanation of the lack of a 'significant' drop. As you'd expect, the size of the accuracy loss grows with both how coarsely things are rounded, and the performance of the forecaster. Even if relatively finer coarsening makes things slightly worse, we may expect to miss it. This looks better to me on priors than these trends 'hitting a wall' at a given level of granularity (so I'd guess untrained forecasters are numerically worse if rounded to 0.1, even if the worse performance means there is less signal to be lost, and in turn makes this hard to 'statistically significantly' detect).
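For concreteness, here is the sort of simulation I have in mind (a rough sketch under an assumed noise model - forecasts as true log-odds plus Gaussian jitter - rather than a re-analysis of the GJP data):

```python
import numpy as np

rng = np.random.default_rng(0)

def brier(forecast, outcome):
    return np.mean((forecast - outcome) ** 2)

n = 500_000
p_true = rng.uniform(0.01, 0.99, n)
outcome = rng.binomial(1, p_true)

for noise_sd in (0.2, 1.0):  # smaller jitter on the log-odds scale ~ more skilled forecaster
    logit = np.log(p_true / (1 - p_true)) + rng.normal(0, noise_sd, n)
    forecast = 1 / (1 + np.exp(-logit))
    for grid in (None, 0.05, 0.10, 0.33):
        f = forecast if grid is None else np.clip(np.round(forecast / grid) * grid, 0, 1)
        # expect the Brier score to creep upward as the grid coarsens, even at 0.05
        # (though the increment there is small and easy to miss in noisy real data)
        print(f"noise_sd={noise_sd}, rounded_to={grid}: Brier={brier(f, outcome):.5f}")
```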

I'd adduce other facts against too. One is simply that superforecasters are prone to not give forecasts on a 5% scale, using intermediate values instead: given their good calibration, you'd expect them to iron out this Brier-score-costly jitter (also, this would be one of the few things they are doing worse than regular forecasters). You'd also expect discretization in things like their calibration curve (e.g. events they say happen 12% of the time in fact happen 10% of the time, whilst events that they say happen 13% of the time in fact happen 15% of the time), or other derived figures like ROC.

This is ironically foxy, so I wouldn't be shocked for this to be slain by the numerical data. But I'd bet at good odds (north of 3:1) on things like "Typically, for 'superforecasts' of X%, these events happened more frequently than those forecast at (X-1)%, (X-2)%, etc."

Comment by Gregory_Lewis on EA Forum feature suggestion thread · 2020-06-20T07:53:23.878Z · EA · GW

On-site image hosting for posts/comments? This is mostly a minor QoL benefit, and maybe there would be challenges with storage. Another benefit would be that images would not vanish if their original source does.

Comment by Gregory_Lewis on EA Forum feature suggestion thread · 2020-06-20T07:49:06.302Z · EA · GW

Import from HTML/gdoc/word/whatever: One feature I miss from the old forum was the ability to submit HTML directly. This allowed one to write the post in google docs or similar (with tables, footnotes, sub/superscript, special characters, etc.), export it as HTML, paste into the old editor, and it was (with some tweaks) good to go.

This is how I posted my epistemic modesty piece (which has a table which survived the migration, although the footnote links no longer work). In contrast, when cross-posting it to LW2, I needed the kind help of a moderator - and even they needed to make some adjustments (e.g. 'writing out' the table).

Given such a feature was available before, hopefully it can be done again. It would be particularly valuable for the EA forum as:

  • A fair proportion of posts here are longer documents which benefit from the features available in things like word or gdocs. (But typically less mathematics than LW, so the nifty LaTeX editor finds less value here than there).
  • The current editor has much less functionality than word/gdocs, and catching up 'most of the way' seems very labour intensive and could take a while.
  • Most users are more familiar with gdocs/word than the editor/markdown/LaTeX (i.e. although I can add special characters with the LaTeX editor and some googling, I'm more familiar with doing this in gdocs - and I guess folks who have less experience with LaTeX or using a command line would find this difference greater).
  • Most users are probably drafting longer posts on google docs anyway.
  • Clunkily re-typesetting long documents in the forum editor manually (e.g. tables as image files) poses a barrier to entry, and so encourages linking rather than posting, with (I guess?) less engagement.

A direct 'import from gdoc/word/etc.' would be even better, but an HTML import function alone (given software which has both wordprocessing and HTML export 'sorted' are prevalent) would solve a lot of these problems at a stroke.

Comment by Gregory_Lewis on EA Forum feature suggestion thread · 2020-06-20T06:54:33.518Z · EA · GW

Footnote support in the 'standard' editor: For folks who aren't fluent in markdown (like me), the current process is switching the editor back and forth to 'markdown mode' to add these footnotes, which I find pretty cumbersome.[1]

[1] So much so I lazily default to doing it with plain text.

Comment by Gregory_Lewis on Examples of people who didn't get into EA in the past but made it after a few years · 2020-05-30T18:54:55.082Z · EA · GW

I applied for a research role at GWWC a few years ago (?2015 or so), and wasn't selected. I now do research at FHI.

In the interim I worked as a public health doctor. Although I think this helped me 'improve' in a variety of respects, 'levelling up for an EA research role' wasn't the purpose in mind: I was expecting to continue as a PH doctor rather than 'switching across' to EA research in the future; if I was offered the role at GWWC, I'm not sure whether I would have taken it.

There's a couple of points I'd want to emphasise.

1. Per Khorton, I think most of the most valuable roles (certainly in my 'field' but I suspect in many others, especially the more applied/concrete) will not be at 'avowedly EA organisations'. Thus, depending on what contributions you want to make, 'EA employment' may not be the best thing to aim for.

2. Pragmatically, 'avowedly EA organisation roles' (especially in research) tend to be oversubscribed and highly competitive. Thus (notwithstanding the above), if this is one's primary target, it seems wise to have a career plan which does not rely on securing such a role (or at least to have a backup).

3. Although there's a sense of the ways one can build 'EA street cred' (or whatever), it's not clear these forms of 'EA career capital' are best even for employment at avowedly EA organisations. I'd guess my current role owes more to (e.g.) my medical and public health background than it does to my forum oeuvre (such as it is).

Comment by Gregory_Lewis on Why not give 90%? · 2020-03-26T11:42:23.593Z · EA · GW

Part of the story, on a consequentialising-virtue account, is that a desire for luxury is typically amenable to being changed in general, if not in Agape's case in particular. Thus her attitude of regret, rather than shrugging her shoulders, typically makes things go better - if not for her, then for third parties who have a shot at improving this aspect of themselves.

I think most non-consequentialist views (including ones I'm personally sympathetic to) would fuzzily circumscribe character traits where moral blameworthiness can apply even if they are incorrigible. To pick two extremes: if Agape was born blind, and this substantially impeded her from doing as much good as she would like, the commonsense view could sympathise with her regret, but insist she really has 'nothing to be sorry about'; yet if Agape couldn't help being a vicious racist, and this substantially impeded her from helping others (say, because the beneficiaries are members of racial groups she despises), this is a character-staining fault Agape should at least feel bad about even if being otherwise is beyond her - plausibly, it would recommend her make strenuous efforts to change even if both she and others knew for sure all such attempts are futile.

Comment by Gregory_Lewis on Why not give 90%? · 2020-03-25T12:15:34.912Z · EA · GW

Nice one. Apologies for once again offering my 'c-minor mood' key variation: Although I agree with the policy upshot, 'obligatory, demanding effective altruism' does have some disquieting consequences for agents following this policy in terms of their moral self-evaluation.

As you say, Agape does the right thing if she realises (similar to prof procrastinate) that although, in theory, she could give 90% (or whatever) of her income/effort to help others, in practice she knows this isn't going to work out, and so given she wants to do the most good, she should opt for doing somewhat less (10% or whatever), as she foresees being able to sustain this.

Yet the underlying reason for this is a feature of her character which should be the subject of great moral regret. Bluntly: she likes her luxuries so much that she can't abide being without them, despite being aware (inter alia) that a) many people have no choice but to go without the luxuries she licenses herself to enjoy; b) said self-provision implies grave costs to those in great need if (per impossibile) she could give more; c) her competing 'need' doesn't have great non-consequentialist defences (cf. if she was giving 10% rather than 90% due to looking after members of her family); d) there's probably not a reasonable story of desert for why she is in this fortunate position in the first place; e) she is aware of other people, similarly situated to her, who nonetheless do manage to do without similar luxuries and give more of themselves to help others.

This seems distinct from other prudential limitations a wise person should attend to. Agape, when making sure she gets enough sleep, may in some sense 'regret' she has to sleep for several hours each day. Yet it is wise for Agape to sleep enough, and needing to sleep (even if she needs to sleep more than others) is not a blameworthy trait. It is also wise for Agape to give less in the OP given her disposition of, essentially, "I know I won't keep giving to charity unless I also have a sports car". But even if Agape can't help this no more than needing to sleep, this trait is blameworthy.

Agape is not alone in having blameworthy features of her character - I, for one, have many; moral saintliness is rare, and most readers probably could do more to make the world better were they better people. 'Obligatory, demanding effective altruism' would also make recommendations against responses to this fact which are counterproductive (e.g. excessive self-flagellation, scrupulosity). I'd agree, but want to say slightly more about the appropriate attitude as well as the right action - something along the lines of non-destructive and non-aggrandising regret.[1] I often feel EAs tend to err in the direction of being estranged from their own virtue; but they should also try to avoid being too complaisant to their own vice.


[1] Cf. Kierkegaard, Sickness unto Death

Either in confused obscurity about oneself and one’s significance, or with a trace of hypocrisy, or by the help of cunning and sophistry which is present in all despair, despair over sin is not indisposed to bestow upon itself the appearance of something good. So it is supposed to be an expression for a deep nature which thus takes its sin so much to heart. I will adduce an example. When a man who has been addicted to one sin or another, but then for a long while has withstood temptation and conquered -- if he has a relapse and again succumbs to temptation, the dejection which ensues is by no means always sorrow over sin. It may be something else, for the matter of that it may be exasperation against providence, as if it were providence which had allowed him to fall into temptation, as if it ought not to have been so hard on him, since for a long while he had victoriously withstood temptation. But at any rate it is womanish [recte maudlin] without more ado to regard this sorrow as good, not to be in the least observant of the duplicity there is in all passionateness, which in turn has this ominous consequence that at times the passionate man understands afterwards, almost to the point of frenzy, that he has said exactly the opposite of that which he meant to say. Such a man asseverated with stronger and stronger expressions how much this relapse tortures and torments him, how it brings him to despair, "I can never forgive myself for it"; he says. And all this is supposed to be the expression for how much good there dwells within him, what a deep nature he is.

Comment by Gregory_Lewis on Thoughts on The Weapon of Openness · 2020-02-17T05:15:52.123Z · EA · GW
All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?

No I agree with these pro tanto costs of secrecy (and the others you mentioned before). But key to the argument is whether these problems inexorably get worse as time goes on. If so, then the benefits of secrecy inevitably have a sell-by date, and once the corrosive effects spread far enough one is better off 'cutting ones losses' - or never going down this path in the first place. If not, however, then secrecy could be a strategy worth persisting with if the (~static) costs of this are outweighed by the benefits on an ongoing basis.

The proposed trend of 'getting steadily worse' isn't apparent to me. Many organisations which typically do secret technical work have been around for decades (the NSA is one, most defence contractors another, (D)ARPA, etc.). A skim of what they were doing in (say) the 80s versus the 50s doesn't give an impression they got dramatically worse despite the 30 years of secrecy's supposed corrosive impact. Naturally, the attribution is very murky (e.g. even if their performance remained okay, maybe secrecy had gotten much more corrosive but this was outweighed by countervailing factors like much larger investment; maybe they would have fared better under a 'more open' counterfactual), but the burden of dissecting out the 'being secret * time' interaction term and showing it is negative should be borne by the affirmative case.

Comment by Gregory_Lewis on EA Survey 2019 Series: Donation Data · 2020-02-14T18:24:26.187Z · EA · GW

Minor:

Like last year, we ran a full model with all interactions, and used backwards selection to select predictors.

Presuming backwards selection is stepwise elimination, this is not a great approach to model generation. See e.g. this from Frank Harrell: in essence, stepwise tends to be a recipe for overfitting, and thus the models it generates tend to have inflated goodness of fit measures (e.g. R2), overestimated coefficient estimates, and very hard to interpret p values (given the implicit multiple testing in the prior 'steps'). These problems are compounded by generating a large number of new variables (all interaction terms) for stepwise to play with.

Some improvements would be:

1. Select the variables by your judgement, and report that model. If you do any post-hoc additions (e.g. suspecting an interaction term), report these with the rider it is a post-hoc assessment.

2. Have a hold-out dataset to test your model (however you choose to generate it) against. (Cross-validation is an imperfect substitute).

3. Ridge, lasso, elastic net, or other penalised regression approaches to shrinkage/variable selection (a rough sketch of 2. and 3. is below).
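For illustration, a minimal sketch of what (2) and (3) might look like in practice - synthetic data and scikit-learn defaults, i.e. my own assumptions rather than anything the survey team actually ran:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                        # 20 hypothetical predictors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=1000)    # only the first two actually matter

# (2) keep a hold-out set the model never sees during fitting
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# (3) penalised regression: the L1 penalty (tuned by cross-validation) shrinks
# irrelevant coefficients to zero, rather than stepwise adding/dropping terms
model = LassoCV(cv=5).fit(X_train, y_train)
print("retained predictors:", np.flatnonzero(model.coef_))
print("hold-out R^2:", round(model.score(X_test, y_test), 3))
```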

Comment by Gregory_Lewis on Thoughts on The Weapon of Openness · 2020-02-13T14:28:35.051Z · EA · GW

Thanks for this, both the original work and your commentary was an edifying read.

I'm not persuaded, although this is mainly owed to the common challenge that noting considerations 'for' or 'against' in principle does not give a lot of evidence of what balance to strike in practice. Consider something like psychiatric detention: folks are generally in favour of (e.g.) personal freedom, and we do not need to think very hard to see how overruling this norm 'for their own good' could go terribly wrong (nor look very far to see examples of just this). Yet these considerations do not tell us what the optimal policy should be relative to the status quo, still less how it should be applied to a particular case.

Although the relevant evidence can neither be fully observed nor fairly sampled, there's a fairly good prima facie case for some degree of secrecy not leading to disaster, and sometimes being beneficial. There's some wisdom-of-the-crowd account that secrecy is the default for some 'adversarial' research; it would be surprising if technological facts proved exceptions to the utility of strategic deception. Bodies that conduct 'secret by default' work have often been around decades (and the states that house them centuries), and although there's much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.

Moreover technological secrecy has had some eye-catching successes: the NSA likely discovered differential cryptanalysis years before it appeared in the open literature; discretion by early nuclear scientists (championed particularly by Szilard) on what to publish credibly gave the Manhattan project a decisive lead over rival programs. Openness can also have some downsides - the one that springs to mind from my 'field' is that Al-Qaeda started exploring bioterrorism after learning of the United States expressing concern about the same.

Given what I said above, citing some favourable examples doesn't say much (although the nuclear weapon one may have proved hugely consequential). One account I am sympathetic to would be talking about differential (or optimal) disclosure: provide information in the manner which maximally advantages good actors over bad ones. This will recommend open broadcast in many cases: e.g. where there aren't really any bad actors, where the bad actors cannot take advantage of the information (or they know it already, so letting the good actors 'catch up'), where there aren't more selective channels, and so forth. But not always: there seem instances where, if possible, it would be better to preferentially disclose to good actors versus bad ones - and this requires some degree of something like secrecy.

Judging the overall first-order calculus, let alone weighing this against second-order concerns (such as those noted above), is fraught: although, for what it's worth, I think 'security service' norms tend closer to the mark than 'academic' ones. I understand cybersecurity faces similar challenges around vulnerability disclosure, as 'don't publish the bug until the vendor can push a fix' may not perform as well as one might naively hope: for example, 'white hats' postponing their discoveries hinders collective technological progress, and risks falling behind a 'black hat' community avidly trading tips and tricks. This consideration can also point the other way: if the 'white hats' are much more able than their typically fragmented and incompetent adversaries, the greater the danger of their work 'giving bad people good ideas'. The FBI or whoever may prove much more adept at finding vulnerabilities terrorists could exploit than terrorists themselves. They would be unwise to blog their red-teaming exercises.

Comment by Gregory_Lewis on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-03T03:46:27.270Z · EA · GW

All of your examples seem much better than the index case I am arguing against. Commonsense morality attaches much less distaste to cases where those 'in peril' are not crisply identified (e.g. "how many will die in some pandemic in the future" is better than "how many will die in this particular outbreak", which is better than "will Alice, currently ill, live or die?"). It should also find that bets on historical events are (essentially) fine, as whatever good or ill is implicit in these has already occurred.

Of course, I agree that your examples would be construed as to some degree morbid. But my recommendation wasn't "refrain from betting on any question where we can show the topic is to some degree morbid" (after all, betting on GDP of a given country could be construed this way, given its large downstream impacts on welfare). It was to refrain in those cases where it appears very distasteful and for which there's no sufficient justification. As it seems I'm not expressing this balancing consideration well, I'll belabour it.

#

Say, God forbid, one of my friend's children has a life-limiting disease. On its face, it seems tasteless for me to compose predictions at all on questions like, "will they still be alive by Christmas?" Carefully scrutinising whether they will live or die seems to run counter to the service I should be providing as a supporter of my friends family and someone with the child's best interests at heart. It goes without saying opening a book on a question like this seems deplorable, and offering (and confirming bets) where I take the pessimistic side despicable.

Yet other people do have good reason for trying to compose an accurate prediction on survival or prognosis. The child's doctor may find themselves in the invidious position where they recognise that their duty to give my friend's family the best estimate they can runs at cross purposes to other moral imperatives that apply too. The commonsense/virtue-ethicsy hope would be the doctor can strike the balance that best satisfies these cross-purposes, thus otherwise callous thoughts and deeds are justified by their connection to providing important information to the family.

Yet any incremental information benefit isn't enough to justify anything of any degree of distastefulness. If the doctor opened a prediction market on a local children's hospice, I think (even if they were solely and sincerely motivated for good purposes, such as to provide families with in-expectation better prognostication now and the future) they have gravely missed the mark.

Of the options available, 'bringing money' into it generally looks more ghoulish the closer the connection is between 'something horrible happening' and 'payday!'. A mere prediction platform is better (although still probably the wrong side of the line unless we have specific evidence it will give a large benefit); paying people to make predictions on said platform (but for activity and aggregate accuracy rather than direct 'bet results') is also slightly better. "This family's loss (of their child) will be my gain (of some money)" is the sort of grotesque counterfactual good people would strenuously avoid being party to save exceptionally good reason.

#

To repeat: it is the balance of these factors - which come in degrees - which determines the final evaluation. So, for example, I'm not against people forecasting the 'nCoV' question (indeed, I do as well), but the addition of money takes it the wrong side of the line (notwithstanding the money being ridden on this for laudable motivation). Likewise I'm happy for people to prop bet on some of your questions pretty freely, as those questions are somewhat less ghoulish, but not on the 'nCoV' one (or some even more extreme versions), etc. etc. etc.

I confess some irritation, because I think whilst you and Oli are pressing arguments (sorry - "noticing confusion") that there is not a crisp quality which obtains to the objectionable ones yet not the less objectionable ones (e.g. 'You say this question is 'morbid' - but look here! here are some other questions which are qualitatively morbid too, and we shouldn't rule them all out'), you are in fact committed to some sort of balancing account.

I presume (hopefully?) you don't think 'child hospice sweepstakes' would be a good idea for someone to try (even if it may improve our calibration! and it would give useful information re. paediatric prognostication which could be of value to the wider world! and capitalism is built on accurate price signals! etc. etc.) As you're not biting the bullet on these reductios (nor bmg's, nor others) you implicitly accept all the considerations about why betting is a good thing are pro tanto and can be overcome at some extreme limit of ghoulishness etc.

How to weigh these considerations is up for grabs. Yet picking each individual feature of ghoulishness in turn and showing that it, alone, is not enough to warrant refraining from highly ghoulish bets (where the true case against would be composed of other factors alongside the one being shown to be individually insufficient) seems an exercise in the fallacy of division.

#

I also note that all the (few) prop bets I recall in EA up until now (including one I made with you) weren't morbid, which suggests you wouldn't appreciably reduce the track record of prop bets which show (as Oli sees it) admirable EA virtues of skin in the game.

Comment by Gregory_Lewis on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T00:09:24.330Z · EA · GW
Both of these are environments in which people participate in something very similar to betting. In the first case they are competing pretty directly for internet points, and in the second they are competing for monetary prices.
Those two institutions strike me as great examples of the benefit of having a culture of betting like this, and also strike me as similarly likely to create offense in others.

I'm extremely confident a lot more opprobrium attaches to bets where the payoff is in money versus those where the payoff is in internet points etc. As you note, I agree certain forecasting questions (even without cash) provoke distaste: if those same questions were on a prediction market the reaction would be worse. (There's also likely an issue of the money leading to a question of one's motivation - if epi types are trying to predict a death toll and not getting money for their efforts, it seems their efforts have a laudable purpose in mind, less so if they are riding money on it).

I agree with you that were there only the occasional one-off bet on the forum that was being critiqued here, the epistemic cost would be minor. But I am confident that a community that had a relationship to betting that was more analogous to how Chi's relationship to betting appears to be, we would have never actually built the Metaculus prediction platform.

This looks like a stretch to me. Chi can speak for themselves, but their remarks don't seem to entail a 'relationship to betting' writ large, but an uneasy relationship to morbid topics in particular. Thus the policy I take them to be recommending (which I also endorse) of refraining from making 'morbid' or 'tasteless' bets (but feel free to prop bet to your heart's desire on other topics) seems to have very minor epistemic costs, rather than threatening some transformation of epistemic culture which would mean people stop caring about predictions.

For similar reasons, this also seems relatively costless in terms of other perceptions: refraining from 'morbid' topics for betting only excludes a small minority of questions one can bet upon, leaving plenty of opportunities to signal its virtuous characteristics re. taking ideas seriously whilst avoiding those which reflect poorly upon it.

Comment by Gregory_Lewis on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-01T21:53:48.141Z · EA · GW

I emphatically object to this position (and agree with Chi's). As best as I can tell, Chi's comment is more accurate and better argued than this critique, and so the relative karma between the two dismays me.

I think it is fairly obvious that 'betting on how many people are going to die' looks ghoulish to commonsense morality. I think the articulation of why this would be objectionable is only slightly less obvious: the party on the 'worse side' of the bet seems to be deliberately situating themselves to be rewarded as a consequence of the misery others suffer; there would also be suspicion about whether the person might try and contribute to the bad situation seeking a pay-off; and perhaps a sense one belittles the moral gravity of the situation by using it for prop betting.

Thus I'm confident if we ran some survey confronting the 'person on the street' with the idea of people making this sort of bet, they would not think "wow, isn't it great they're willing to put their own money behind their convictions", but something much more adverse around "holding a sweepstake on how many die".

(I can't find an easy instrument for this beyond asking people/anecdata: the couple of non-EA people I've run this by have reacted either negatively or very negatively, and I know comments on forecasting questions which boil down to "will public figure X die before date Y" register distaste. If there is a more objective assessment accessible, I'd offer odds at around 4:1 on the ratio of positive:negative sentiment being <1).

Of course, I think such an initial 'commonsense' impression would be very unfair to Sean or Justin: I'm confident they engaged in this exercise only out of a sincere (and laudable) desire to try and better understand an important topic. Nonetheless (and to hold them to much higher standards than my own behaviour) one may suggest it is a lapse of practical wisdom if, whilst acting to fulfil one laudable motivation, one does not temper this with other moral concerns one should also be mindful of.

One needs to weigh the 'epistemic' benefits of betting (including higher order terms) against the 'tasteless' complaint (both the moral-pluralism case of it possibly being bad, and the more prudential case of it looking bad to third parties). If the epistemic benefits were great enough, we should reconcile ourselves to the costs of sometimes acting tastelessly (triage is distasteful too) or third parties (reasonably, if mistakenly) thinking less of us.

Yet the epistemic benefits on the table here (especially on the margin of 'feel free to bet, save on commonsense ghoulish topics') are extremely slim. The rate of betting in EA/rationalist land on any question is very low, so the signal you get from small-n bets is trivial. There are other options, especially for this question, which give you much more signal per unit activity - given that, unlike the stock market, people are interested in the answer for other-than-pecuniary motivations: both Metaculus and the Johns Hopkins prediction platform have relevant questions which are much more active, and where people are offering more information.

Given the marginal benefits are so slim, they are easily outweighed by the costs Chi notes. And they are.

Comment by Gregory_Lewis on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-15T07:13:15.527Z · EA · GW

Thanks. I think it would be better, given you are recommending joining and remaining in the party, if the 'price' weren't quoted as a single month of membership.

One estimate could be the rate of leadership transitions. There have been ~17 in the last century of the Labour party (ignoring acting leaders). Rounding up, this gives an expected vote for every 5 years of membership, and so a price of ~£4.38*60 = ~£250 per leadership contest vote. This looks a much less attractive value proposition to me.
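Spelling out the rough arithmetic (the '~20 per century' below is just my rounding-up of the ~17 transitions; the final figure is only meant to be order-of-magnitude):

$$\frac{100 \text{ years}}{\sim 20 \text{ leaders}} \approx 5 \text{ years} = 60 \text{ months}, \qquad 60 \times \text{£}4.38 \approx \text{£}263 \;(\sim\text{£}250) \text{ per expected leadership vote.}$$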

Comment by Gregory_Lewis on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T21:29:56.998Z · EA · GW

Forgive me, but your post didn't exactly avoid any doubt, given:

1) The recommendation in the second paragraph is addressed to everyone regardless of political sympathy:

We believe that, if you're a UK citizen or have lived in the UK for the last year, you should pay £4.38 to register to vote in the current Labour leadership, so you can help decide 1 of the 2 next candidates for Prime Minister. (My emphasis)

2) Your OP itself gives a few reasons for why those "indifferent or hostile to Labour Party politics" would want to be part of the selection. As you say:

For £4.38, you have a reasonable chance of determining the next candidate PM, and therefore having an impact in the order of billions of pounds. (Your emphasis)

Even a committed conservative should have preferences on "conditional on Labour winning in the next GE, which Labour MP would I prefer as PM?" (/plus the more Machiavellian "who is the candidate I'd most want leading Labour, given I want them to lose to the Conservatives?").

3) Although the post doesn't advocate joining just to cancel after voting, noting that one can 'cancel any time' - alongside the main motivation on offer being to take advantage of a time-limited opportunity for impact (and the quoted cost being a single month of membership) - makes this strategy not a dazzling feat of implicature (indeed, it would be the EV-maximising option taking the OP's argument at face value).

#

Had the post merely used the upcoming selection in Labour to note there is an argument for political party participation similar to voting (i.e. getting a say in the handful of leading political figures); clearly stressed this applied across the political spectrum (and so was more a recommendation that EAs consider this reason to join the party they are politically sympathetic to, in expectation of voting in future leadership contests, rather than the one which happens to have a leadership contest on now); and strenuously disclaimed any suggestion of hit and run entryism (noting it defects from various norms held by existing members of the party, that membership mechanisms are somewhat based on trust that folks aren't going to 'game them', etc.), I would have no complaints. But it didn't (although I hope it will), so here we are.

Comment by Gregory_Lewis on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T16:44:05.746Z · EA · GW

I'm not a huge fan of schemes like this, as it seems the path to impact relies upon strategic defection of various implicit norms.

Whether or not political party membership asks one to make some sort of political declaration, the spirit of membership is surely meant for those who sincerely intend to support the politics of the party in question.

I don't think Labour members (perhaps authors of this post excluded) or leadership would want to sell a vote for their future leader at £4.38 each to anyone willing to fill out an application form - especially to those indifferent or hostile to Labour Party politics. That we can buy one anyway (i.e. sign up then leave a month later) suggests we do so by taking advantage of their good faith: that folks signing up aren't just doing it to get a vote on the leadership election, that they intend to stick around for a while, that they'll generally vote for and support Labour, etc.

If this 'hit and run entryism' became a common tactic (e.g. suppose 'tactical tories' pretended to defect from the Conservatives this month to vote for the Labour candidate the Conservatives wanted to face in the next election) we would see parties act to close this vulnerability (I think the Conservatives did something like this in terms of restricting eligible members to those joining before a certain date for their most recent leadership contest).

I'd also guess that ongoing attempts to 'game' this sort of thing are bad for the broader political climate, as (as best as I can tell) a lot of it runs on trust rather than being carefully proofed against canny selectoral tactics (e.g. although all parties state you shouldn't be a member of more than one at a time, I'm guessing it isn't that hard to 'get away with it'). Perhaps leader selection is too important to justly leave to only party members (perhaps there should be 'open primaries'), but 'hit and run entryism' seems very unlikely to drive us towards this - more likely it merely produces greater barriers to entry for party political participation, and lingering suspicion and mistrust.

Comment by Gregory_Lewis on Has pledging 10% made meeting other financial goals substantially more difficult? · 2020-01-09T14:56:32.313Z · EA · GW

FWIW I have found it more costly - I think this almost has to be true, as $X given to charity is $X I cannot put towards savings, mortgages, etc. - but, owing to fortunate circumstances, not very burdensome to deal with. I expect others will have better insight to offer.

Given your worries, an alternative to the GWWC pledge which might be worth contemplating is the one at The Life You Can Save. Their recommended proportion varies by income (i.e. a higher % with larger incomes), and is typically smaller than GWWC across most income bands (on their calculator, you only give 10% at ~$500 000 USD, and <2% up to ~$100 000).

Another suggestion I would make is it might be worth waiting for a while longer than "Once I have a job and I'm financially secure" before making a decision like this. It sounds like some of your uncertainties may become clearer with time (e.g. once you enter your career you may get a clearer sense of what your earning trajectory is going to look like, developments in your personal circumstances may steer you towards or away from buying a house). Further, 'experimenting' with giving different proportions may also give useful information.

How long to wait figuring things out doesn't have an easy answer: most decisions can be improved by waiting to gather more information, but most also shouldn't be 'put off' indefinitely. That said, commonsense advice would be to give oneself plenty of time when weighing up whether to make important lifelong commitments. Personally speaking, I'm glad I joined GWWC (when I was still a student), and I think doing so was the right decision, but - although I didn't rush in on a whim - I think a wiser version of me would have taken greater care than I in fact did.

Comment by Gregory_Lewis on In praise of unhistoric heroism · 2020-01-08T11:03:15.042Z · EA · GW

Bravo.

Forgive me playing to type and offering a minor-key variation on the OP's theme. Any EA predisposition for vainglorious grasping after heroism is not only an unedifying shape in which to draw one's life, but also implies attitudes that are themselves morally ugly.

There are some (mercifully few) healthcare professionals who are in prison: so addicted to the thrill of 'saving lives' they deliberately inflicted medical emergencies on their patients so they had the opportunity to 'rescue' them.

The error in 'EA-land' is of a similar kind (but a much lower degree): it is much better from the point of view of the universe that no one needs your help. To wish instead they are arranged in jeopardy as some potemkin vale of soul-making to demonstrate one's virtue (rightly, ego) upon is perverse.

(I dislike 'opportunity' accounts of EA for similar reasons: that (for example) millions of children are likely to die before their fifth birthday is a grotesque outrage to the human condition. Excitement that this also means one has the opportunity make this number smaller is inapt.)

Likewise, 'total lifetime impact (in expectation)' is the wrong unit of account to judge oneself. Not only because moral luck intervenes in who you happen to be (more intelligent counterparts of mine could 'do more good' than I - but this can't be helped), but also in what world one happens to inhabit.

I think most people I met in medical school (among other comparison classes) are better people than I am: across the set of relevant possible circumstances we could find ourselves, I'd typically 'do less good' than the cohort average. If it transpires I end up doing much more good than them, it will be due to the accident where particular features of mine - mainly those I cannot take moral credit for, and some of which are blameworthy - happen to match usefully to particular features of the world which themselves should only be the subject of deep regret. Said accident is scant cause for celebration.

Comment by Gregory_Lewis on EA Survey 2019 Series: Cause Prioritization · 2020-01-07T15:18:16.919Z · EA · GW

It was commendable to seek advice, but I fear in this case the recommendation you got doesn't hit the mark.

I don't see the use of 'act (as if)' as helping much. Firstly, it is not clear what it means to be 'wrong about' 'acting as if the null hypothesis is false', but I don't think however one cashes this out it avoids the problem of the absent prior. Even if we say "We will follow the policy of rejecting the null whenever p < alpha", knowing the error rate of this policy overall still demands a 'grand prior' of something like "how likely is a given (/randomly selected?) null hypothesis we are considering to be true?"

Perhaps what Lakens has in mind is as we expand the set of null hypothesis we are testing to some very large set the prior becomes maximally uninformative (and so alpha converges to the significance threshold), but this is deeply uncertain to me - and, besides, we want to know (and a reader might reasonably interpret the rider as being about) the likelihood of this policy getting the wrong result for the particular null hypothesis under discussion.

--

As I fear this thread demonstrates, p values are a subject which tends to get more opaque the more one tries to make them clear (only typically rivalled by 'confidence interval'). They're also generally much lower yield than most other bits of statistical information (i.e. we generally care a lot more about narrowing down the universe of possible hypotheses by effect size etc. rather than simply excluding one). The write-up should be welcomed for providing higher yield bits of information (e.g. effect sizes with CIs, regression coefficients, etc.) where it can.

Most statistical work never bothers to crisply explain exactly what it means by 'significantly different (P = 0.03)' or similar, and I think it is defensible to leave it at that rather than wading into the treacherous territory of trying to give a clear explanation (notwithstanding the fact the typical reader will misunderstand what this means). My attempt would be not to provide an 'in-line explanation', but offer an explanatory footnote (maybe after the first p value), something like this:

Our data suggests a trend/association between X and Y. Yet we could also explain this as a matter of luck: even though in reality X and Y are not correlated [or whatever], it may be we just happened to sample people where those high in X also tended to be high in Y, in the same way a fair coin might happen to give more heads than tails when we flip it a number of times.
A p-value tells us how surprising our results would be if they really were just a matter of luck: strictly, it is the probability of our study giving results as or more unusual than our data if the 'null hypothesis' (in this case, there is no correlation between X and Y) was true. So a p-value of 0.01 means our data is in the top 1% of unusual results, a p-value of 0.5 means our data is in the top half of unusual results, and so on.
A p-value doesn't say all that much by itself - crucially, it doesn't tell us the probability of the null hypothesis itself being true. For example, a p-value of 0.01 doesn't mean there's a 99% probability the null hypothesis is false. A coin being flipped 10 times and landing heads on all of them is in the top percentile (indeed, roughly the top 0.1%) of unusual results presuming the coin is fair (the 'null hypothesis'), but we might have reason, even after seeing only heads across those 10 flips, to believe it is probably fair anyway (maybe we made it ourselves with fastidious care, maybe it's being simulated on a computer and we've audited the code, or whatever). At the other extreme, a P value of 1.0 doesn't mean we know for sure the null hypothesis is true: although seeing 5 heads and 5 tails from 10 flips is the least unusual result given the null hypothesis (and so all possible results are 'as or more unusual' than what we've seen), it could be the coin is unfair and we just didn't see it.
What we can use a p-value for is as a rule of thumb for which apparent trends are worth considering further. If the p-value is high, the 'just a matter of luck' explanation for the trend between X and Y is credible enough that we shouldn't over-interpret it; on the other hand, a low p-value makes the apparent trend between X and Y an unusual result if it really were just a matter of luck, and so we might consider alternative explanations (e.g. our data wouldn't be such an unusual finding if there really was some factor that causes those high in X to also be high in Y).
'High' and 'low' are matters of degree, but one usually sets a 'significance threshold' to make the rule of thumb concrete: when a p-value is higher than this threshold, we dismiss an apparent trend as just a matter of luck - if it is lower, we deem it significant. The standard convention is for this threshold to be p=0.05.
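(If useful, the coin example can be made concrete in a couple of lines - a sketch of my own using scipy, not part of the proposed footnote itself:)

```python
# p-value = probability, *assuming the null (a fair coin)*, of a result as or more extreme than observed
from scipy.stats import binom

n, k = 10, 10                                  # 10 flips, 10 heads observed
p_one_sided = binom.sf(k - 1, n, 0.5)          # P(X >= 10 | fair coin) ~ 0.00098, 'roughly the top 0.1%'
p_two_sided = 2 * min(binom.cdf(k, n, 0.5), binom.sf(k - 1, n, 0.5))
print(p_one_sided, p_two_sided)
```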
Comment by Gregory_Lewis on EA Survey 2019 Series: Cause Prioritization · 2020-01-03T11:14:24.032Z · EA · GW

Good work. A minor point:

I think the riders used when discussing significant results, along the lines of "being wrong 5% of the time in the long run", sometimes don't make sense. Compare

How substantial are these (likely overestimated) associations? We highlight here only the largest detected effects in our data (odds ratio close to or above 2 times greater) that would be surprising to see, if there were no associations in reality and we accepted being wrong 5% of the time in the long run.

To:

Welch t-tests of gender against these scaled cause ratings have p-values of 0.003 or lower, so we can act as if the null hypothesis of no difference between genders is false, and we would not be wrong more than 5% of the time in the long run.

Although commonly the significance threshold is equated with the 'type 1 error rate', which in turn is equated with 'the chance of falsely rejecting the null hypothesis', this is mistaken (1). P values are not estimates of the likelihood of the null hypothesis, but of the observation (as or more extreme) conditioned on that hypothesis. P(Null|significant result) needs one to specify the prior. Likewise, T1 errors are best thought of as the 'risk' of the test giving the wrong indication, rather than the risk of you making the wrong judgement. (There are also some remarks to be made on family-wise versus false discovery rates, which can be neglected here.)

So the first quote is sort-of right (although assuming the null then talking about the probability of being wrong may confuse rather than clarify), but the second one isn't: you may (following standard statistical practice) reject the null hypothesis given P < 0.05, but this doesn't tell you there is a 5% chance of the null being true when you do so.
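To make the dependence on the prior concrete, an illustrative calculation with made-up numbers (significance threshold $\alpha = 0.05$, power $1-\beta = 0.8$, and a prior probability of 0.5 that the null is true):

$$P(\text{null} \mid p < \alpha) = \frac{\alpha \cdot P(\text{null})}{\alpha \cdot P(\text{null}) + (1-\beta)(1 - P(\text{null}))} = \frac{0.05 \times 0.5}{0.05 \times 0.5 + 0.8 \times 0.5} \approx 0.06$$

With a prior of 0.9 on the null instead, the same rejection leaves a probability of roughly 0.36 on the null - so the 'wrong no more than 5% of the time' gloss tracks neither.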

Comment by Gregory_Lewis on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-30T19:00:25.931Z · EA · GW

I think it would be worthwhile to separate these out from the text, and (especially) to generate predictions that are crisp, distinctive, and can be resolved in the near term. The QRI questions on Metaculus are admirably crisp (and fairly near term), but not distinctive (they are about whether certain drugs will be licensed for certain conditions - or whether evidence will emerge supporting drug X for condition Y - which offers very limited evidence for QRI's wider account 'either way').

This is somewhat more promising from your most recent post:

I’d expect to see substantially less energy in low-frequency CSHWs [Connectome-Specific Harmonic Waves] after trauma, and substantially more energy in low-frequency CSHWs during both therapeutic psychedelic use (e.g. MDMA therapy) and during psychological integration work.

This is crisp, plausibly distinctive, yet resolving this requires a lot of neuroimaging work which (presumably) won't be conducted anytime soon. In the interim, there isn't much to persuade a sceptical prior.

Comment by Gregory_Lewis on EA Hotel Fundraiser 5: Out of runway! · 2019-11-25T09:49:33.692Z · EA · GW

I agree it would be surprising if EA happened upon the optimal cohabitation level (although perhaps not that surprising, given individuals can act by the lights of their own best interest, which may reasonably approximate the global optimum). Yet I maintain the charitable intervention hypothetical is a poor intuition pump, as most people would be dissuaded from 'intervening' to push towards the 'optimal cohabitation level' for 'in practice' reasons - e.g. the much larger potential side-effects of trying to twiddle this dial, preserving the norm of leaving people to manage their personal lives as they see best, etc.

I'd probably want to suggest the optimal cohabitation level is below what we currently observe (e.g. besides the issue Khorton mentions, cohabitation with your employees/bosses/colleagues or funder/fundee seems to run predictable risks), yet be reluctant to 'intervene' any further up the coercion hierarchy than expressing my reasons for caution.

Comment by Gregory_Lewis on Are comment "disclaimers" necessary? · 2019-11-25T09:16:09.848Z · EA · GW

I sometimes disclaim (versus trying to always disclose relevant CoI), with a rule-of-thumb along the lines of the expected disvalue of being misconstrued as presenting a corporate view of my org.

This is a mix of likelihood (e.g. I probably wouldn't bother disclaiming an opinion on - say - SCI versus AMF, as a reasonable person is unlikely to think there's going to be an 'FHI view' on global health interventions) and impact (e.g. in those - astronomically rare - cases I write an asperous criticism of something-or-other, even if its pretty obvious I'm not speaking on behalf of my colleagues, I might want to make extra-sure).

I agree it isn't ideal (cf. Twitter, where it seems a lot of people need to expressly disclaim retweets are not endorsements, despite this norm being widely acknowledged and understood). Alas, some 'defensive' writing may be necessary if there are uncharitable or malicious members of ones audience, and on the internet this can be virtually guaranteed.

Also, boilerplate disclaimers don't magically prevent what you say reflecting upon your affiliates. I doubt EA org X, who has some association with Org Y, would be happy with a staffer saying something like, "Figuratively speaking, I hope we burn the awful edifice of Org Y - wrought out of its crooked and rotten timber from which nothing good and straight was ever made - to the ground, extirpate every wheedling tendril of its fell influence in our community, and salt the sewage-suffused earth from whence it came [speaking for myself, not my employer]". I get the impression I bite my tongue less than the typical 'EA org employee': it may be they are wiser, rather than I braver.

Comment by Gregory_Lewis on EA Hotel Fundraiser 5: Out of runway! · 2019-11-25T07:48:09.788Z · EA · GW

The reversal test doesn't mean 'if you don't think a charity for X is promising, you should be in favour of more ¬X'. I may not find homeless shelters, education, or climate change charities promising, yet not want to move in the direction of greater homelessness, illiteracy, or pollution.

If (like me) you'd prefer EA to move in the direction of 'professional association' rather than 'social movement', this attitude's general recommendation to move away from communal living (generally not a feature of the former, given the emphasis on distinguishing between personal and professional lives) does pass the reversal test, as I'd forecast having the same view even if the status quo were everyone already living in a group house (or vice versa).