Complex cluelessness as credal fragility 2021-02-08T16:59:16.639Z
Take care with notation for uncertain quantities 2020-09-16T19:26:43.873Z
Challenges in evaluating forecaster performance 2020-09-08T20:37:17.318Z
Use resilience, instead of imprecision, to communicate uncertainty 2020-07-18T12:09:36.901Z
Reducing global catastrophic biological risks 2020-03-01T08:00:00.000Z
Reality is often underpowered 2019-10-10T13:14:08.605Z
Risk Communication Strategies for the Very Worst of Cases 2019-03-09T06:56:12.480Z
The person-affecting value of existential risk reduction 2018-04-13T01:44:54.244Z
How fragile was history? 2018-02-02T06:23:54.282Z
In defence of epistemic modesty 2017-10-29T19:15:10.455Z
Beware surprising and suspicious convergence 2016-01-24T19:11:12.437Z
At what cost, carnivory? 2015-10-29T23:37:13.619Z
Don't sweat diet? 2015-10-22T20:15:20.773Z
Log-normal lamentations 2015-05-19T21:07:28.986Z
How best to aggregate judgements about donations? 2015-04-12T04:19:33.582Z
Saving the World, and Healing the Sick 2015-02-12T19:03:05.269Z
Expected value estimates you can take (somewhat) literally 2014-11-24T15:55:29.144Z


Comment by Gregory_Lewis on Clarifying the Petrov Day Exercise · 2021-09-27T10:32:46.875Z · EA · GW

I've been accused of many things in my time, but inarticulate is a new one. ;)

Comment by Gregory_Lewis on Clarifying the Petrov Day Exercise · 2021-09-27T00:05:57.763Z · EA · GW

I strongly agree with all this. Another downside I've felt from this exercise is being dragged into a community ritual I'm not really a fan of, where my options are a) tacit support (even if it is just deleting the email containing the codes with a flicker of irritation) or b) an ostentatious and disproportionate show of disapproval.

I generally think EA- and longtermist- land could benefit from more 'professional distance': that folks can contribute to these things without having to adopt an identity or community that steadily metastasises over the rest of their life - with at-best-murky EV to both themselves and the 'cause'.  I also think particular attempts at ritual often feel kitsch and prone to bathos: I imagine my feelings towards the 'big red button' at the top of the site might be similar to how many Christians react to some of their brethren 'reenacting' the crucifixion themselves.

But hey, I'm (thankfully) not the one carrying down the stone tablets of community norms from the point of view of the universe here - to each their own. Alas this restraint is not universal, as this is becoming a (capital-C) Community ritual, where 'success' or 'failure' is taken to be important (at least by some) not only for those who do or don't hit the button, but for corporate praxis generally.

As someone who is already ambivalent, it rankles that my inaction will be taken as tacit support for some after-action paean to some sticky-back-plastic icon of 'who we are as a Community'. Yet although 'protesting' by '''''nuking''''' [sic] ([sic]) LW has some benefits - a) I probably won't get opted in again, and b) it may make this less likely to be an ongoing 'thing' - it also has downsides. I'm less worried about 'losing rep' (I have more than enough of both e-clout and ego to make counter-signalling an attractive proposition; '''''nuking''''' LW in a fit of 'take this and shove it' pique is pretty on-brand for me), but more that some people take this (very) seriously and would be sad if this self-imposed risk were realised. Even though I disagree (and think this is borderline infantile), protesting in this way feels a bit like trying to refute a child's belief that their beloved toy is sapient by destroying it in front of them.

I guess we can all be thankful 'writing asperous forum comments' provides a means of de-escalation.

Comment by Gregory_Lewis on A Primer on the Symmetry Theory of Valence · 2021-09-10T09:04:02.285Z · EA · GW

Thanks, but I've already seen them. Presuming the implication here is something like "Given these developments, don't you think you should walk back what you originally said?", the answer is "Not really, no": subsequent responses may be better, but that is irrelevant to whether earlier ones were objectionable; one may be making good points, but one can still behave badly whilst making them.

(Apologies if I mistake what you are trying to say here. If it helps generally, I expect - per my parent comment - to continue to affirm what I've said before however the morass of commentary elsewhere on this post shakes out.)

Comment by Gregory_Lewis on A Primer on the Symmetry Theory of Valence · 2021-09-08T15:56:19.965Z · EA · GW

For the avoidance of doubt, I remain entirely comfortable with the position expressed in my comment: I wholeheartedly and emphatically stand behind everything I said. I am cheerfully reconciled to the prospect that some of those replying to or reading my earlier comment judge me adversely for it - I invite these folks to take my endorsement here as reinforcing whatever negative impressions they formed from what I said there.

The only thing I am uncomfortable with is that someone felt they had to be anonymous to criticise something I wrote. I hope the measure I mete out to others makes it clear I am happy for similar to be meted out to me in turn. I also hope reasonable folks like the anonymous commenter are encouraged to be forthright when they think I err - this is something I would be generally grateful to them for, regardless of whether I agree with their admonishment in a particular instance. I regret to whatever degree my behaviour has led others to doubt this is the case.

Comment by Gregory_Lewis on A Primer on the Symmetry Theory of Valence · 2021-09-07T06:15:54.082Z · EA · GW

[Own views]

I'm not sure 'enjoy' is the right word, but I also noticed the various attempts to patronize Hoskin. 

This ranges from the straightforward "I'm sure once you know more about your own subject you'll discover I am right":

I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD

'Well-meaning suggestions' alongside the implication that her criticism arises from some emotional reaction rather than her strong and adverse judgement of its merit.

I’m a little baffled by the emotional intensity here but I’d suggest approaching this as an opportunity to learn about a new neuroimaging method, literally pioneered by your alma mater. :) 

[Adding a smiley after something insulting or patronizing doesn't magically make you the 'nice guy' in the conversation, but makes you read like a passive-aggressive ass who is nonetheless too craven for candid confrontation. I'm sure once you reflect on what I said and grow up a bit you'll improve so your writing inflicts less of a tax on our collective intelligence and good taste. I know you'll make us proud! :)]

Or just straight-up belittling her knowledge and expertise with varying degrees of passive-aggressiveness.

I understand it may feel significant that you have published work using fMRI, and that you hold a master’s degree in neuroscience.


I’m glad to hear you feel good about your background and are filled with confidence in yourself and your field.

I think this sort of smug and catty talking down would be odious even if the OP really did have much more expertise than their critic: I hope I wouldn't write similarly in response to criticism (however strident) from someone more junior in my own field. 

What makes this kinda amusing, though, is although the OP is trying to set himself up as some guru trying to dismiss his critic with the textual equivalent of patting her on the head, virtually any reasonable third party would judge the balance of expertise to weigh in the other direction. Typically we'd take, "Post-graduate degree, current doctoral student, and relevant publication record" over "Basically nothing I could put on an academic CV, but I've written loads of stuff about my grand theory of neuroscience." 

In that context (plus the genders of the participants) I guess you could call it 'mansplaining'. 

Comment by Gregory_Lewis on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-20T22:31:34.889Z · EA · GW

[Predictable disclaimers, although in my defence, I've been banging this drum long before I had (or anticipated to have) a conflict of interest.]

I also find the reluctance to wholeheartedly endorse the 'econ-101' story (i.e. if you want more labour, try offering more money for people to sell labour to you) perplexing:

  • EA-land tends to be sympathetic to using 'econ-101' accounts reflexively on basically everything else in creation. I thought the received wisdom was that these approaches are reasonable at least for first-pass analysis, and we'd need persuading to depart greatly from them.
  • Considerations for why 'econ-101' won't (significantly) apply here don't seem to extend to closely analogous cases: we don't fret (and typically argue against others fretting) about other charities paying their staff too much; we don't think (cf. reversal test) that Google could improve its human capital by cutting pay - keeping the 'truly committed Googlers'; we're generally sympathetic to public servants getting paid more if they add much more social value (and don't presume these people are insensitive to compensation beyond some limit); we prefer simple market mechanisms over more elaborate tacit transfer systems (e.g. just give people money); etc.
  • The precise situation makes the 'econ-101' intervention particularly appetising: if you value labour much more than the current price, and you are sitting atop an ungodly pile of lucre so vast you earnestly worry about how you can spend big enough chunks of it at once, 'try throwing money at your long-standing labour shortages' seems all the more promising.
  • Insofar as it goes, the observed track record looks pretty supportive of the econ-101 story - besides all the points Ryan mentions, compare "price suppression results in shortages" to the years-long (and still going strong) record of orgs lamenting they can't get the staff.

Perhaps the underlying story is that, as EA-land is generally on the same team, one might hope to do better than taking one's cue from 'econ-101', given the typically adversarial/competitive dynamics it presumes between firms, and between employee and employer. I think this hope is forlorn: EA-land might be full of aspiring moral saints, but aspiring moral saints remain approximate to homo economicus. So the usual stories about the general benefits of economic efficiency prove hard to better - and (play-pumps style) attempts to try feel apt to backfire (1, 2, 3, 4 - ad nauseam).
However, although I don't think 'PR concerns' should guide behaviour (if X really is better than ¬X, bearing the costs of people reasonably - if mistakenly - thinking less of you for doing X is typically better than strategising to hide the disagreement), many things look bad because they are bad.

In the good old days, I realised I was behind on my GWWC pledge so used some of my holiday to volunteer for a week of night-shifts as a junior doctor on a cancer ward. If in the future my 'EA praxis' is tantamount to splashing billionaire largess on a lifestyle for myself of comfort and affluence scarcely conceivable to my erstwhile beneficiaries, spending my days on intangible labour in well-appointed offices located among the richest places heretofore observed in human history, an outside observer may wonder what went wrong. 

I doubt they would be persuaded that my defence is any better than obscene: "Not all heroes wear capes; some nobly spend thousands on yuppie accoutrements they deem strictly necessary for them to do the most good!". Nor would they be moved by my remorse: self-effacing acknowledgement is not expiation, nor complaisance to my own vices atonement. I still think jacking up pay may be good policy, but personally, perhaps I should doubt myself too.

Comment by Gregory_Lewis on Denise_Melchin's Shortform · 2021-08-13T22:08:18.267Z · EA · GW

If anything, income seems to be unusually heavy-tailed compared to direct work (the top two donors in EA account for the majority of the capital, but I don't think the top 2 direct workers account for the majority of the value of the labour).

Although I think this stylized fact remains interesting, I wonder if there's an ex-ante/ex-post issue lurking here. You get to see the endpoint with money a lot earlier than with direct work contributions, and there are probably a lot of lottery-esque dynamics. I'd guess at these corollaries:

First, the ex ante 'expected $ raised' from folks aiming at E2G (e.g. at a similar early career stage) is much more even than the ex post distribution. Algo-trader Alice and Entrepreneur Edward may have similar expected lifetime incomes, but Edward has much higher variance - ditto, one of entrepreneurs Edward and Edith may swamp the other if one (but not the other) hits the jackpot.
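The Alice/Edward contrast can be made concrete with a toy calculation (all the figures below are invented purely for illustration, not estimates of real careers): two careers with near-identical expected lifetime income can have wildly different spreads.

```python
# Toy numbers (assumptions): equal expected lifetime income, very different variance.
alice_ev = 200_000 * 40                     # steady $200k/yr for 40 years: $8M, ~zero variance

p_exit, exit_value, modest = 0.01, 760e6, 400_000   # rare startup jackpot vs modest outcome
edward_ev = p_exit * exit_value + (1 - p_exit) * modest
edward_sd = (p_exit * (exit_value - edward_ev) ** 2
             + (1 - p_exit) * (modest - edward_ev) ** 2) ** 0.5

print(f"Alice EV:  ${alice_ev:,.0f}")       # $8,000,000
print(f"Edward EV: ${edward_ev:,.0f}")      # $7,996,000 -- almost identical ex ante
print(f"Edward SD: ${edward_sd:,.0f}")      # tens of millions -- a lottery ex post
```

Ex ante the two look nearly interchangeable; ex post, ~99% of Edwards raise a pittance and ~1% dominate the donor tables.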

Second, part of the reason direct work contributions look more even is this is largely an ex ante estimate - a clairvoyant ex post assessment would likely be much more starkly skewed. E.g. If work on AI paradigm X alone was sufficient to avert existential catastrophe (which turned out to be the only such danger), the impact of the lead researcher(s) re. X is astronomically larger than everything else everyone else is doing. 

Third, I also wonder whether raw $ value may mislead in credit assignment for donation impact. The entrepreneur who makes a billion-$ company hasn't done all the work themselves, and it's facially plausible that some Shapley/whatever credit-sharing between the founders and (e.g.) current junior staff would not be as disproportionate as the money which ends up in their respective bank accounts.

Maybe not: perhaps the reward for 'getting things off the ground', taking lots of risk, etc. does mean the tech-founder megadonor bucks should be attributed ~wholly to them. But similar reasoning could be applied to direct work as well. Perhaps the lion's share of all contributions to global health work up to now should be accorded to (e.g.) Peter Singer, as all subsequent work is essentially 'footnotes to Famine, Affluence, and Morality'; or AI work to those who toiled in the vineyards over a decade ago, even if their work is now a much smaller proportion of the contemporary aggregate contribution.

Comment by Gregory_Lewis on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T11:29:46.743Z · EA · GW

I'd guess the story might be a) 'XR primacy' (~~ that x-risk reduction has far bigger bang for one's buck than anything else re. impact) and b) conditional on a), an equivocal view on the value of technological progress: although some elements are likely good and others likely bad, the value of generally 'buying the index' of technological development (as I take Progress Studies to be keen on) is uncertain.

"XR primacy"

Other comments have already illustrated the main points here, sparing readers from another belaboured rehearsal from me. The rough story, borrowing from the initial car analogy, is you have piles of open road/runway available if you need to use it, so velocity and acceleration are in themselves much less important than direction - you can cover much more ground in expectation if you make sure you're not headed into a crash first. 

This typically (but not necessarily, cf.) implies longtermism. 'Global catastrophic risk', as a longtermist term of art, plausibly excludes the vast majority of things common sense would call 'global catastrophes'. E.g.:

[W]e use the term “global catastrophic risks” to refer to risks that could be globally destabilizing enough to permanently worsen humanity’s future or lead to human extinction. (Open Phil)

My impression is that 'a century more of poverty' probably isn't a GCR in this sense. As the (pre-industrial) normal, the track record suggests it wasn't globally destabilising to humanity or human civilisation. Even more so if the matter is of a somewhat-greater versus somewhat-lower rate in its elimination.

This makes its continued existence no less an outrage to the human condition. But, weighed across the scales against threats to humankind's entire future, it becomes a lower priority. Insofar as these things are traded off (which seems implicit in any prioritisation, given both compete for resources, whether or not there's any direct cross-purposes in activity), the currency of XR reduction has much greater value.

Per discussion, there are a variety of ways the story sketched above could be wrong:

  • Longtermist consequentialism (the typical, if not uniquely necessary, motivation for the above) is false, so our exchange rate for common-sense global catastrophes (inter alia) versus XR should be higher.
  • XR is either very low, or intractable, so XR reduction isn't a good buy even on the exchange rate XR views endorse. 
  • Perhaps the promise of the future could be lost not with a bang but a whimper. Perhaps prolonged periods of economic or technological stagnation should be substantial subjects of XR concern in their own right, so PS-land and XR-land converge on PS-y aspirations.

I don't see Pascalian worries as looming particularly large apart from these. XR-land typically takes the disjunction of risks and envelope of mitigation to be substantial/non-pascalian values. Although costly activity that buys an absolute risk reduction of 1/trillions looks dubious to common sense, 1/thousands + (e.g.) is commonplace (and commonsensical) when stakes are high enough. 

It's not clear how much of a strike it is against a view that Pascalian counter-examples are constructable from its resources, even when the view wouldn't endorse them and lacks a crisp story of decision-theoretic arcana for why not. Facially, PS seems susceptible to the same (e.g. a PS-er's work is worth billions per year, given the yield if you compound an (in expectation) 0.0000001% marginal increase in world GDP growth for centuries).
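As a sanity check on that parenthetical arithmetic, here is a sketch with made-up baseline figures (the gross-world-product level and 3% growth rate are rough assumptions, not sourced estimates): even a 0.0000001% absolute bump to the growth rate compounds to billions of dollars of extra annual output within a couple of centuries.

```python
gwp = 100e12      # rough current gross world product in USD (assumption)
g = 0.03          # assumed baseline annual growth rate
delta = 1e-9      # a 0.0000001% (absolute) increase in the growth rate
years = 200

baseline = gwp * (1 + g) ** years
bumped = gwp * (1 + g + delta) ** years
print(f"extra output in year {years}: ${bumped - baseline:,.0f}")  # several billion dollars
```

A tiny per-year increment, compounded long enough, dwarfs almost any near-term figure - which is exactly what gives the counter-example its Pascalian flavour.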

Buying the technological progress index?

Granting the story sketched above, there's not a straightforward upshot on whether this makes technological progress generally good or bad. The ramifications of any given technological advance for XR are hard to forecast; aggregating over all of them to get a moving average is harder still. Yet there seems a lot to temper the fairly unalloyed enthusiasm around technological progress I take to be the typical attitude in PS-land.

  • There's obviously the appeal to the above sense of uncertainty: if at least significant bits of the technological progress portfolio credibly have very bad dividends for XR, you probably hope humanity is pretty selective and cautious in its corporate investments. It'd also generally surprise for what is best for XR to also be best for 'progress' (cf.)
  • The recent track record doesn't seem greatly reassuring. The dual-use worries around nuclear technology remain profound 70+ years after its initial development, and the prospect of 'derisking' these downsides remains remote. It's hard to assess the true ex ante probability of a strategic nuclear exchange during the cold war, or exactly how disastrous it would have been, but pricing in reasonable estimates of both probably takes a large chunk out of the generally sunny story of progress we observe ex post over the last century.
  • Insofar as folks consider disasters arising from emerging technologies (like AI) to represent the bulk of XR, this supplies concern against their rapid development in particular, and against exuberant technological development which may generate further dangers in general.

Some of this may just be a confusion of messaging (e.g. even though PS folks portray themselves as more enthusiastic and XR folks less so, both would actually be similarly un/enthusiastic for each particular case). I'd guess more of it is more substantive around the balance of promise and danger posed by given technologies (and the prospects/best means to mitigate the latter), which then feeds into more or less 'generalized techno-optimism'.

But I'd guess the majority of the action is around the 'modal XR account' of XR being a great moral priority, which can be significantly reduced, and is substantially composed of risks from emerging technology. "Technocircumspection" seems a fairly sound corollary from this set of controversial conjuncts.   

Comment by Gregory_Lewis on [Link] 80,000 Hours Nov 2020 annual review · 2021-05-19T13:48:00.684Z · EA · GW

[Own views etc.]

I'm unsure why this got downvoted, but I strongly agree with the sentiment in the parent. Although I understand the impulse of "We're all roughly on the same team here, so we can try and sculpt something better than the typically competitive/adversarial relationships between firms, or employers and employees", I think this is apt to mislead one into ideas which are typically economically short-sighted, often morally objectionable, and occasionally legally dubious. 

In the extreme case, it's obviously unacceptable for Org X not to hire candidate A (their best applicant) because they believe it's better that A stay at Org Y. Not only (per the parent) is A probably a better judge of where they are best placed,[1] but Org X screws over both itself (it now appoints someone it thinks is not quite as good) and A (who doesn't get the job they want), for the benefit of Org Y.

These sorts of oligopsonistic machinations are at best a breach of various fiduciary duties (e.g. of Org X to their donors to use their money to get the best staff, rather than making opaque de facto transfers of labour to another organisation), and at least colourably illegal in many jurisdictions due to labour law around anti-trust, non-discrimination, etc. (see)

Similar sentiments apply to less extreme examples, such as 'not proactively 'poaching'' (the linked case above was about alleged "no cold call" agreements). The typical story for why these practices are disliked is a mix of econ efficiency arguments (e.g. labour market liquidity, competition over conditions is a mechanism for higher performing staff to match into higher performing orgs) and worker welfare ones (e.g. the net result typically disadvantages workers by suppressing their pay, conditions, and reducing their ability to change to roles they prefer).

I think these rationales apply roughly as well to EA-land as anywhere else-land. Orgs should accept that staff may occasionally leave for other orgs for a variety of reasons. If they find that they consistently lose out for familiar reasons, they should either get better or accept the consequences of remaining worse.

[1]: Although, for the avoidance of doubt, I think it is wholly acceptable for people to switch EA jobs for wholly 'non-EA' reasons - e.g. "Yeah, I expect I'd do less good at Org X than Org Y, but Org X will pay me 20% more and I want a higher standard of living." Moral sainthood is scarce as well as precious. It is unrealistic that all candidates are saintly in this sense, and mutual pretence to the contrary unhelpful.

If anything, 'no poaching' (etc.) practices are even worse in these cases than the more saintly 'moving so I can do even more good!' rationale. In the latter case, Orgs are merely being immodest in presuming to know better than applicants what their best opportunity to contribute is; in the former, Orgs conspire to make their employees' lives worse than they could otherwise be.

Comment by Gregory_Lewis on Draft report on existential risk from power-seeking AI · 2021-05-02T09:53:41.735Z · EA · GW

Maybe not 'insight', but re. 'accuracy' this sort of decomposition is often in the tool box of better forecasters. I think the longest path I evaluated in a question had 4 steps rather than 6, and I think I've seen other forecasters do similar things on occasion. (The general practice of 'breaking down problems' to evaluate sub-issues is recommended in Superforecasting IIRC).

I guess the story for why this works in geopolitical forecasting is that folks tend to overestimate the chance 'something happens', and tend to be underdamped in increasing the likelihood of something based on suggestive antecedents (e.g. the chance of a war given an altercation, etc.). So attending to "Even if A, for it to lead to D one should attend to P(B|A), P(C|B), etc." tends to lead to downwards corrections.

Naturally, you can mess this up. Although it's not obvious you are at greater risk if you arrange your decomposed considerations conjunctively or disjunctively: "All of A-E must be true for P to be true" ~also means "if any of ¬A-¬E are true, then ¬P".  In natural language and heuristics, I can imagine "Here are several different paths to P, and each of these seem not-too-improbable, so P must be highly likely" could also lead one astray. 
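Both failure modes are easy to exhibit numerically (the step probabilities below are invented for illustration): a chain of individually likely steps multiplies down to an unlikely conjunction, while a handful of 'not-too-improbable' independent paths disjunctively compound upwards.

```python
from math import prod

# Conjunctive framing: every step must hold, so likely-looking steps still multiply down.
steps = [0.8, 0.7, 0.6, 0.5]         # P(A), P(B|A), P(C|B), P(D|C) -- made-up values
p_conjunction = prod(steps)
print(round(p_conjunction, 3))        # 0.168 -- despite no single step below 50%

# Disjunctive framing: several independent 'not-too-improbable' paths compound upwards.
paths = [0.3, 0.3, 0.3]               # P(success) of each independent path -- made-up
p_any_path = 1 - prod(1 - p for p in paths)
print(round(p_any_path, 3))           # 0.657
```

The same arithmetic cuts both ways: decomposition corrects overestimates when the question is secretly conjunctive, but can inflate estimates when one stacks up superficially plausible disjunctive routes.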

Comment by Gregory_Lewis on Thoughts on being overqualified for EA positions · 2021-05-01T23:21:29.987Z · EA · GW

Similar to Ozzie, I would guess the 'over-qualified' hesitation often has less to do with, "I fear I would be under-utilised and become disinterested if I took a more junior role, and thus do less than the most good I could", but a more straightforward, "Roles which are junior, have unclear progression and don't look amazing on my CV if I move on aren't as good for my career as other opportunities available to me." 

This opportunity cost (as the OP notes) is not always huge, and it can be outweighed by other considerations. But my guess is it is often a substantial disincentive:

  • In terms of traditional/typical kudos/cred/whatever, getting in early on something which is going up like a rocket offers a great return on invested reputational or human capital. It is a riskier return, though: by analogy I'd guess being "employee #10" for some start-ups is much better than working at google, but I'd guess for the median start-up it is worse.
  • Many EA orgs have been around for a few years now, and their track record so far might incline one against expecting rocketing success by conventional and legible metrics. (Not least, many of them are targeting a very different sort of success than a tech enterprise, consulting firm, hedge fund, etc.)
  • Junior positions at conventionally shiny high-status things have good career capital. I'd guess my stint as a junior doctor 'looks good' on my CV even when applying to roles with ~nothing to do with clinical practice. Ditto stuff like ex-googler, ex-management consultant, ?ex-military officer, etc. "Ex-junior-staffer-at-smallish-nonprofit" usually won't carry the same cachet. 
  • As careers have a lot of cumulative/progressive characteristics, 'sideways' moves earlier on may have a disproportionate impact on one's trajectory. E.g. 'longtermist careerists' might want very outsized compensation for such a 'tour' to make up for the compounded loss of earnings (in expectation) from pausing their climb up various ladders.

None of this means 'EA jobs' are only for suckers. There are a lot of upsides even from a 'pure careerism' perspective (especially for particular career plans), and obvious pluses for folks who value the mission/impact too. But insofar as folks aren't perfectly noble, and care somewhat about the former as well as the latter (ditto other things like lifestyle, pay, etc. etc.) these disincentives are likely to be stronger pushes for more 'overqualified' folks. 

And insofar as EA orgs would like to recruit more 'overqualified' folks for their positions (despite, as I understand it, their job openings being broadly oversubscribed with willing and able - but perhaps not 'overqualified' - applicants), I'd guess it's fairly heavy-going as these disincentives are hard to 'fix'.

Comment by Gregory_Lewis on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-18T05:17:59.613Z · EA · GW

Although I understand the nationalism example isn't meant to be analogous, my impression is this structural objection only really applies when our situation is analogous.

If historically EA paid a lot of attention to nationalism (or trans-humanism, the scepticism community, or whatever else) but had by-and-large collectively 'moved on' from these, contemporary introductions to the field shouldn't feel obliged to cover them extensively, nor treat the relative merits of what they focus on now versus then as an open question.

Yet, however you slice it, EA as it stands now hasn't by-and-large 'moved on' to be 'basically longtermism', where its interest in (e.g.) global health is clearly atavistic. I'd be willing to go to bat for substantial slants to longtermism, as (I aver) its over-representation amongst the more highly engaged, and the disproportionate migration of folks to longtermism from other areas, warrant the claim that an epistocratic weighting of consensus would favour longtermism over anything else. But even this has limits, which 'greatly favouring longtermism over everything else' exceeds.

How you choose to frame an introduction is up for grabs, and I don't think 'the big three' is the only appropriate game in town. Yet if your alternative way of framing an introduction to X ends up strongly favouring one aspect (further, the one you are sympathetic to) disproportionate to any reasonable account of its prominence within X, something has gone wrong.

Comment by Gregory_Lewis on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-18T05:17:40.824Z · EA · GW

Per others: This selection isn't really 'leans towards a focus on longtermism', but rather 'almost exclusively focuses on longtermism': roughly any 'object level' cause which isn't longtermism gets a passing mention, whilst longtermism is the subject of 3/10 of the selection. Even some not-explicitly-longtermist inclusions (e.g. Tetlock, MacAskill, Greaves) 'lean towards' longtermism either in terms of subject matter or affinity.

Despite being a longtermist myself, I think this is dubious for a purported 'introduction to EA as a whole': EA isn't all-but-exclusively longtermist in either corporate thought or deed.

Were I a more suspicious sort, I'd also find the 'impartial' rationales offered for why non-longtermist things keep getting the short (if not pointy) end of the stick scarcely credible:

i) we decided to focus on our overall worldview and way of thinking rather than specific cause areas (we also didn’t include a dedicated episode on biosecurity, one of our 'top problems'), and ii) both are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our 'episode 0', as well as the outro to Holden's episode.

The first episode with Karnofsky also covers longtermism and AI - at least as much as global health and animals. Yet this didn't stop episodes on the specific cause areas of longtermism (Ord) and AI (Christiano) being included. Ditto the instance of "entrepreneurship, independent thinking, and general creativity" one wanted to highlight just-so-happens to be a longtermist intervention (versus, e.g. this).

Comment by Gregory_Lewis on Proposed Longtermist Flag · 2021-03-24T15:46:33.929Z · EA · GW

I also thought along similar lines, although (lacking subtlety) I thought you could shove in a light cone from the dot, which can serve double duty as the expanding future. Another thing you could do is play with a gradient so this curve/the future gets brighter as well as bigger, but perhaps someone who can at least successfully colour in has a comparative advantage here.


Comment by Gregory_Lewis on Progress Open Thread: March 2021 · 2021-03-24T13:30:03.505Z · EA · GW

A less important motivation/mechanism is that probabilities (unlike odds) are bounded above by one. For rare events 'doubling the probability' and 'doubling the odds' give basically the same answer, but not so for more common events. Loosely, flipping a coin three times 'trebles' my risk of observing it landing tails, but the resulting probability isn't 1.5 - it's 7/8. (cf).


Sibling abuse rates are something like 20% (or 80% depending on your definition). And is the most frequent form of household abuse. This means by adopting a child you are adding something like an additional 60% chance of your other child going through at least some level of abuse (and I would estimate something like a 15% chance of serious abuse). [my emphasis]

If you used the 80% definition instead of 20%, then the '4x' risk factor implied by 60% additional chance (with 20% base rate) would give instead an additional 240% chance.

[(Of interest, 20% to 38% absolute likelihood would correspond to an odds ratio of ~2.5, in the ballpark of the 3-4x risk factors discussed before. So maybe extrapolating extreme-event ratios to less-extreme-event ratios can do okay if you keep them in odds form. The underlying story might have something to do with the fact that logistic distributions closely resemble normal distributions (save at the tails), so shifting a normal distribution across the x-axis so that (non-linearly) more or less of it lies over a threshold loosely resembles adding increments to log-odds (equivalent to multiplying the odds by a constant), giving (non-linear) changes when traversing a logistic CDF.

But it still breaks down when extrapolating very large ORs from very rare events. Perhaps the underlying story here has something to do with higher kurtosis: '>2SD events' are only (I think) ~5X more likely than '>3SD events' for logistic distributions, versus ~20X in normal-distribution land. So large shifts in the likelihood of rare(r) events would imply large logistic-land shifts (which dramatically change the whole distribution, e.g. an OR of 10 takes evens to >90%) but much more modest ones in normal-land (e.g. moving up an SD gives an OR >10 for previously-3SD events, but only ~2 for previously 'above average' ones).]
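A quick sketch checking these back-of-envelope figures (on my calculation the tail ratios come out at roughly 6x for the logistic and 17x for the normal, in the ballpark of the ~5X and ~20X above):

```python
from math import exp, sqrt, pi
from statistics import NormalDist

def apply_odds_ratio(p, odds_ratio):
    """Shift a base-rate probability by a given odds ratio."""
    o = p / (1 - p)
    return (o * odds_ratio) / (1 + o * odds_ratio)

# A naive '4x' risk factor applied to an 80% base rate is impossible:
print(0.80 * 4)                    # 3.2 -- not a probability

# Keeping the multiplier in odds form stays bounded: an OR of ~2.5
# takes a 20% base rate to ~38%, and an 80% base rate to ~91%.
print(apply_odds_ratio(0.20, 2.5)) # ~0.385
print(apply_odds_ratio(0.80, 2.5)) # ~0.909

# Tail heaviness: how much likelier are >2SD events than >3SD events?
def logistic_sf(x_in_sds):
    s = sqrt(3) / pi               # scale giving the logistic unit SD
    return 1 / (1 + exp(x_in_sds / s))

normal_sf = lambda x: 1 - NormalDist().cdf(x)
print(logistic_sf(2) / logistic_sf(3))  # ~6 (logistic)
print(normal_sf(2) / normal_sf(3))      # ~17 (normal)
```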

Comment by Gregory_Lewis on Tristan Cook's Shortform · 2021-03-12T20:44:03.329Z · EA · GW

Most views in population ethics can entail weird/intuitively toxic conclusions (cf. the large number of 'X conclusions' out there). Trying to weigh these up comparatively is fraught.

In your comparison, there seems to be a straightforward dominance argument if the 'OC' and 'RC' are the things we should be paying attention to. Your archetypal classical utilitarian is also committed to the OC, as a 'large increase in suffering for one individual' can be outweighed by a large enough number of smaller decreases in suffering for others - aggregation still applies to negative numbers for classical utilitarians. So the negative view fares better, as the classical one has to bite one extra bullet.

There's also the worry in a pairwise comparison one might inadvertently pick a counterexample for one 'side' that turns the screws less than the counterexample for the other one. Most people find the 'very repugnant conclusion' (where not only Z > A, but 'large enough Z and some arbitrary number having awful lives > A') even more costly than the 'standard' RC. So using the more or less costly variant on one side of the scales may alter intuitive responses.

By my lights, it seems better to have some procedure for picking and comparing cases which isolates the principle being evaluated. Ideally, the putative counterexamples share the counterintuitive features both theories endorse, but differ in that one explores the worst case that can be constructed which the principle would avoid, whilst the other explores the worst case that can be constructed with its inclusion.

It seems the main engine of RC-like examples is the aggregation - it feels like one is being nickel-and-dimed taking a lot of very small things to outweigh one very large thing, even though the aggregate is much higher. The typical worry a negative view avoids is trading major suffering for sufficient amounts of minor happiness - most typically think this is priced too cheaply, particularly at extremes. The typical worry of the (absolute) negative view itself is it fails to price happiness at all - yet often we're inclined to say enduring some suffering (or accepting some risk of suffering) is a good deal at least at some extreme of 'upside'.

So with this procedure the putative counter-example to the classical view would be the vRC. Although negative views may not give crisp recommendations against the RC (e.g. if we stipulate no one ever suffers in any of the worlds, but are more or less happy), its addition clearly recommends against the vRC: the great suffering isn't outweighed by the large amounts of relatively trivial happiness (but it would be on the classical view).

Yet with this procedure, we can construct a much worse counterexample to the negative view than the OC - by my lights, far more intuitively toxic than the already-costly vRC. (Owed to Carl Shulman.) Suppose A is a vast but trivially-imperfect utopia - trillions (or googolplexes, or TREE(TREE(3))) of people live lives of all-but-perfect bliss, but for each enduring an episode of trivial discomfort or suffering (e.g. a pin-prick, waiting in a queue for an hour). Suppose Z is a world with a (relatively) much smaller number of people (e.g. a billion) living like the child in Omelas. The negative view ranks Z > A: the negative view only considers the pinpricks in this utopia, and sufficiently huge magnitudes of these can be worse than awful lives (the classical view, which wouldn't discount all the upside in A, would not). In general, this negative view can countenance any amount of awful suffering if this is the price to pay to abolish a near-utopia of sufficient size.

(This axiology is also anti-egalitarian (consider replacing half the people in A with half the people in Z) and - depending how you litigate - susceptible to a sadistic conclusion. If the axiology claims welfare is capped above by 0, then there's never an option of adding positive welfare lives so nothing can be sadistic. If instead it discounts positive welfare, then it prefers (given half of A) adding half of Z (very negative welfare lives) to adding the other half of A (very positive lives)).

I take this to make absolute negative utilitarianism (like average utilitarianism) a non-starter. In the same way folks look for a better articulation of the egalitarian-esque commitments that make one (at least initially) sympathetic to average utilitarianism, so folks with negative-esque sympathies may look for better articulations of this commitment. One candidate could be that what one is really interested in is severe rather than trivial suffering, so this rather than suffering in general should be the object of sole/lexically prior concern. (Obviously there are many other lines, and corresponding objections to each.)

But note this is an anti-aggregation move. Analogous ones are available for classical utilitarians to avoid the (v/)RC (e.g. a critical-level view which discounts positive welfare below some threshold). So if one is trying to evaluate a particular principle out of a set, it would be wise to aim for 'like-for-like': e.g. perhaps a 'negative plus a lexical threshold' view is more palatable than classical util, yet CLU would fare even better than either.

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-12T01:32:59.048Z · EA · GW

[Mea culpa re. messing up the formatting again]

1) I don't closely follow the current state of play in terms of 'shorttermist' evaluation. The reply I hope (e.g.) a Givewell Analyst would make to (e.g.) "Why aren't you factoring in impacts on climate change for these interventions?" would be some mix of:

a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc."

b) "We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn't vary appreciably between interventions) so we get higher yield investigating other things."

c) "We are explicit our analysis is predicated on moral (e.g. "human lives are so much more important than animals lives any impact on the latter is ~moot") or epistemic (e.g. some 'common sense anti-cluelessness' position) claims which either we corporately endorse and/or our audience typically endorses." 

Perhaps such hopes would be generally disappointed.

2) Similar to above, I don't object to (re. animals) positions like "Our view is this consideration isn't a concern as X" or "Given this consideration, we target Y rather than Z", or "Although we aim for A, B is a very good proxy indicator for A which we use in comparative evaluation."

But I at least used to see folks appeal to motivations which obviate (inverse/) logic of the larder issues, particularly re. diet change ("Sure, it's actually really unclear becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we're aiming for"). Yet this overriding motivation typically only 'came up' in the context of this discussion, and corollary questions  like:

*  "Is maximizing short term farmed animal welfare the best way of furthering this crucial goal of attitude change?"

* "Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?"

* "Shouldn't we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?" 

seemed seldom asked. 

Naturally I hope this is a relic of my perhaps jaundiced memory.

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-12T00:15:44.538Z · EA · GW

FWIW, I don't think 'risks' is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "Give Directly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.

(Apologies in advance if I'm rehashing unhelpfully)

The usual cluelessness scenarios are more about how there may be a powerful lever for impacting the future, and your intended intervention may be pulling it in the wrong direction (rather than a 'confirmed discovery'). Say your expectation for the EV of GiveDirectly on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there's a natural response of 'shouldn't we try something which targets this on purpose?'; if it were 0, we wouldn't attend to it further; if it were -10, you wouldn't give to (now net EV = "-9") GiveDirectly.

The right response where all three scenarios are credible (plus all the intermediates) but you're unsure which one you're in isn't intuitively obvious (at least to me). Even if (like me) you're sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty plus all the others, just 'run the numbers', and take the best EV), this approach seems to ignore the wide variance, which seems worthy of further attention.

The OP tries to reconcile this with the standard approach by saying this indeed often should be attended to, but under the guise of value of information rather than something 'extra' to orthodoxy. Even though we should still go with our best guess if we had to decide (so expectation-neutral but high-variance terms 'cancel out'), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly and conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.

This can be put in plain(er) English (although familiar-to-EA jargon like 'EV' may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we ever really had in our head an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn't even remotely approximated by the objects we manipulate in standard models of it. Or (owed to Andreas), even if so, similar to how rule-consequentialism may be better than act-consequentialism, some other epistemic policy might get better results than applying the orthodox approach in these cases of deep uncertainty.

Insofar as folks are more sympathetic to this, they would not want to be deflationary and perhaps urge investment in new techniques/vocab to grapple with the problem. They may also think we don't have a good 'answer' yet of what to do in these situations, so may hesitate to give 'accept there's uncertainty but don't be paralysed by it' advice that you and I would. Maybe these issues are an open problem we should try and figure out better before pressing on.

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-04T15:04:55.636Z · EA · GW


I read the stakes here differently to you. I don't think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to 'everything which isn't longtermism'. At least, that isn't my interest, and I think the literature has focused on AMF etc. more as salient example to explore the concepts, rather than an important subject to apply them to. 

The AMF discussions around cluelessness in the OP are intended as a toy example - if you like, deliberating purely on "is it good or bad to give to AMF versus this particular alternative?" instead of "Out of all options, should it be AMF?" Parallel to you, although I do think (per the OP) AMF donations are net good, I also think (per the contours of your reply) it should be excluded as a promising candidate for the best thing to donate to: if what really matters is how the deep future goes, and the axes of this accessible at present are things like x-risk, then interventions which are only tangentially related to these are so unlikely to be best that they can be ruled out ~immediately.

So if that isn't a main motivation, what is? Perhaps something like this:

1) How to manage deep uncertainty over the long-run ramifications of one's decisions is a challenge across EA-land - particularly acute for longtermists, but also elsewhere: most would care about the risk that, in the medium term, a charitable intervention could prove counter-productive. In most cases, the mechanisms for something to 'backfire' are fairly trivial, but how seriously credible ones should be investigated is up for grabs.

Although "just be indifferent if it is hard to figure out" is a bad technique which finds little favour, I see a variety of mistakes in and around here. E.g.:

a) People not tracking when the ground of appeal for an intervention has changed. Although I don't see this with AMF, I do see it in and around animal advocacy. One crucial consideration around here is WAS (wild animal suffering), particularly an 'inverse logic of the larder' (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced". 

Even if so, it wouldn't follow that the best thing to do would be to be as carnivorous as possible. There are also various lines of response. However, one is to say that the key objective of animal advocacy is to encourage greater concern about animal welfare, so that this can ramify through to benefits in the medium term. Yet if this is the rationale, metrics of 'animal suffering averted per $' remain prominent despite having minimal relevance. If the aim of the game is attitude change, things like shelters and companion animals over changes in factory-farmed welfare start looking a lot more credible again in virtue of their greater salience.

b) Early (or motivated) stopping across crucial considerations. There are a host of ramifications to population growth which point in both directions (e.g. climate change, economic output, increased meat consumption, larger aggregate welfare, etc.) Although very few folks rely on these when considering interventions like AMF (but cf.) they are often being relied upon by those suggesting interventions specifically targeted to fertility: enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem), or reducing rates of abortion (e.g. less abortion --> more people with worthwhile lives --> greater total utility).

Discussions here are typically marred by proponents either completely ignoring considerations on the 'other side' of the population growth question, or giving very unequal time to them/sheltering behind uncertainty (e.g. "Considerations X, Y, and Z all tentatively support more population growth, admittedly there's A, B, C, but we do not cover those in the interests of time - yet, if we had, they probably would tentatively oppose more population growth"). 

2) Given my fairly deflationary OP, I don't think these problems are best described as cluelessness (versus attending to resilient uncertainty and VoI in fairly orthodox evaluation procedures). But although I think I'm right, I don't think I'm obviously right: if orthodox approaches struggle here, less orthodox ones with representors, incomparability, or other features may be what should be used in decision-making (including when we should make decisions versus investigate further). If so, then this reasoning looks like a fairly distinct species which could warrant its own label.

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-03T16:11:03.423Z · EA · GW

I may be missing the thread, but the 'ignoring' I'd have in mind for resilient cluelessness would be straight-ticket precision, which shouldn't be intransitive (or have issues with principle of indifference).

E.g. Say I'm sure I can make no progress on (e.g.) the moral weight of chickens versus humans in moral calculation - maybe I'm confident there's no fact of the matter, or interpretation of the empirical basis is beyond our capabilities forevermore, or whatever else.

Yet (I urge) I should still make a precise assignment (which is not obliged to be indifferent/symmetrical), and I can still be in reflective equilibrium between these assignments even if I'm resiliently uncertain. 

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-03T15:53:10.083Z · EA · GW

Mea culpa. I've belatedly 'fixed' it by putting it into text.

Comment by Gregory_Lewis on Complex cluelessness as credal fragility · 2021-03-03T15:52:04.244Z · EA · GW

The issue is more the being stuck than the range: say it is (0.4, 0.6) rather than (0, 1), you'd still be inert. Vallinder (2018) discusses this extensively, including issues around infectiousness and generality.

Comment by Gregory_Lewis on Thoughts on whether we're living at the most influential time in history · 2020-11-11T07:56:14.210Z · EA · GW

For my part, I'm more partial to 'blaming the reader', but (evidently) better people mete out better measure than I in turn.

Insofar as it goes, I think the challenge (at least for me) is that qualitative terms can cover multitudes (or orders of magnitudes) of precision. I'd take ~0.3% to be 'significant' credence for some values of significant. 'Strong', 'compelling', or 'good' arguments could be an LR of 2 (after all, RCT confirmation can be ~3) or 200. 

I also think quantitative articulation would help the reader (or at least this reader) better benchmark the considerations here. Taking the rough posterior of 0.1% and prior of 1 in 100 million, this implies a likelihood ratio of ~100,000 - loosely, ultra-decisive evidence. If we partition out the risk-based considerations (which the discussion seems to set as 'less than decisive', so <100), the other considerations (perhaps mostly those in S5) give you an LR of >~1,000 - loosely, very decisive evidence. 

Yet the discussion of the considerations in S5 doesn't give the impression we should conclude they give us 'massive updates'. You note there are important caveats to these considerations, you say in summing up that these arguments are 'far from watertight', and I also inferred that the sort of criticisms given in S3 around our limited reasoning ability and scepticism of informal arguments would apply here too. Hence my presumption that these other considerations, although more persuasive than object-level arguments around risks, would still end up below the LR ~100 for 'decisive' evidence, rather than much higher. 

Another way this would help would be in illustrating the uncertainty. Given some indicative priors you note vary by ten orders of magnitude, the prior is not just astronomical but extremely uncertain. By my lights, the update doesn't greatly reduce our uncertainty (and could compound it, given challenges in calibrating around very high LRs). If the posterior odds could be 'out by 100,000x either way', the central estimate being at ~0.3% could still give you (given some naive log-uniform distribution) 20%+ of the mass at better than even odds of HH. 
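A quick sketch of where the '20%+ mass' figure comes from, under the (assumed) naive log-uniform over ±5 orders of magnitude in odds:

```python
from math import log10

central = 0.003                       # central posterior of ~0.3%
c = log10(central / (1 - central))    # log10-odds, ~ -2.5

# 'Out by 100,000x either way': log-uniform over +/- 5 orders of magnitude
lo, hi = c - 5, c + 5

# Fraction of that mass with odds > 1, i.e. better than even odds of HH
mass_above_even = max(0.0, hi - 0.0) / (hi - lo)
print(mass_above_even)                # ~0.25
```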

The moaning about hiding the ball arises from the sense this numerical articulation reveals (I think) some powerful objections the more qualitative treatment obscures. E.g.

  • Typical HH proponents are including considerations around earliness/single planet/etc. in their background knowledge/prior when discussing object-level risks. Noting that the prior becomes astronomically adverse when we subtract these from background knowledge, and that the object-level case for (e.g.) AI risk therefore can't possibly be enough to carry the day alone, seems a bait-and-switch: you agree the prior becomes massively less astronomical when we include single-planet etc. considerations in background knowledge, and in fact things like 'we live on only one planet' are in our background knowledge (and were being assumed at least tacitly by HH proponents). 
  • The attempt to 'bound' object-level arguments by their LR (e.g. "Well, these are informal, and it looks fishy, etc., so it is hard to see how you can get an LR >100 from these") doesn't seem persuasive when your view is that the set of germane considerations (all of which seem informal, have caveats attached, etc.) in concert gives you an LR of ~100,000 or more. If this set of informal considerations can get you more than halfway from the astronomical prior to significant credence, why be so sure additional ones (e.g.) articulating a given danger can't carry you the rest of the way? 
  • I do a lot of forecasting, and I struggle to get a sense of what priors of 1 in 100M or decisive evidence to the tune of LR 1,000 would look like in 'real life' scenarios. Numbers this huge (where you end up virtually 'off the end of the tail' of your stipulated prior) raise worries about consilience (cf. "I guess the sub-prime mortgage crisis was a 10-sigma event"), but moreover pragmatic defeat: there seems a lot to distrust in an epistemic procedure along the lines of "With anthropics given stipulated subtracted background knowledge we end up with an astronomically minute prior (where we could be off by many orders of magnitude), but when we update on adding back in elements of our actual background knowledge this shoots up by many orders of magnitude (but we are likely still off by many orders of magnitude)". Taking it at face value would mean a minute update to our 'pre-theoretic prior' on the topic before embarking on this exercise (providing these overlapped and were not as radically uncertain, varying by no more than a couple rather than many orders of magnitude). If we suspect (as I think we should) that this procedure of partitioning out background knowledge into update steps which approach log-log variance, and where we have minimal calibration, is less reliable than using our intuitive gestalt over our background knowledge as a whole, we should discount its deliverances still further. 
Comment by Gregory_Lewis on Thoughts on whether we're living at the most influential time in history · 2020-11-10T07:23:42.701Z · EA · GW

But what is your posterior? Like Buck, I'm unclear whether your view is that the central estimate should be (e.g.) 0.1% or 1 in 1 million. I want to push on this because if your own credences are inconsistent with your argument, the reasons why seem both important to explore and to make clear to readers, who may be misled into taking this at 'face value'. 

From this passage on page 13, I guess a generous estimate (/upper bound) is something like 1 in 1 million for being 'among the most important million people':

[W]e can assess the quality of the arguments given in favour of the Time of Perils or Value Lock-in views, to see whether, despite the a priori implausibility and fishiness of HH, the evidence is strong enough to give us a high posterior in HH. It would take us too far afield to discuss in sufficient depth the arguments made in Superintelligence, or Pale Blue Dot, or The Precipice. But it seems hard to see how these arguments could be strong enough to move us from a very low prior all the way to significant credence in HH. As a comparison, a randomised controlled trial with a p-value of 0.05, under certain reasonable assumptions, gives a Bayes factor of around 3 in favour of the hypothesis; a Bayes factor of 100 is regarded as ‘decisive’ evidence. In order to move from a prior of 1 in 100 million to a posterior of 1 in 10, one would need a Bayes factor of 10 million — extraordinarily strong evidence.

I.e. a prior of ~1 in 100 million (which is less averse than others you moot earlier), and a Bayes factor <100 (i.e. we should not think the balance of reason, all considered, is 'decisive' evidence), so you end up at best at ~1 in 1 million. If this argument is right, you can be 'super confident' that giving a credence of 0.1% is wrong (out by a ratio of >~1,000, the difference between ~1% and 91%), and vice-versa. 
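The arithmetic in the quoted passage, replayed in odds form (figures as quoted: prior of 1 in 100 million, 'decisive' Bayes factor of ~100):

```python
def update_odds(prior_p, bayes_factor):
    """Posterior probability from a prior probability and a Bayes factor."""
    prior_odds = prior_p / (1 - prior_p)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

def required_bf(prior_p, post_p):
    """Bayes factor needed to move from one probability to another."""
    return (post_p / (1 - post_p)) / (prior_p / (1 - prior_p))

prior = 1e-8                     # 1 in 100 million

# 'Decisive' evidence (BF ~100) only gets you to ~1 in a million:
print(update_odds(prior, 100))   # ~1e-6

# Reaching a posterior of 1 in 10 needs a Bayes factor of ~10 million:
print(required_bf(prior, 0.1))   # ~1.1e7
```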

Yet I don't think your credence in 'this is the most important century' is 1 in 1 million. Among other things, it seems to imply we can essentially dismiss things like short TAI timelines, Bostrom-Yudkowsky AI accounts, etc., as these are essentially upper-bounded by the 1-in-1M credence above.*

So (presuming I'm right and you don't place negligible credence on these things) I'm not sure how these things can be in reflective equilibrium.

1: 'Among the most important million people' and 'this is the most important century' are not the same thing, and so perhaps one has a (much) higher prior on the latter than the former. But if the action really was here, then the precisification of 'hinge of history' as the former claim seems misguided: "Oh, this being the most important century could have significant credence, but this other sort-of related proposition nonetheless has an astronomically adverse prior" confuses rather than clarifies.

2: Another possibility is there are sources of evidence which give us huge updates, even if the object level arguments in (e.g.) Superintelligence, The Precipice etc. are not among them. Per the linked conversation, maybe earliness gives a huge shift up from the astronomically adverse prior, so this plus the weak object level evidence gets you to lowish but not negligible credence. 

Whether cashed out via prior or update, it seems important to make such considerations explicit, as the true case in favour of HH would include these considerations too. Yet the discussion of 'how far you should update' on p11-13ish doesn't mention these massive adjustments, instead noting reasons to be generally sceptical (e.g. fishiness) and that the informal/heuristic arguments for object-level risks should not be getting you Bayes factors of ~100 or more. This seems to be hiding the ball if in fact your posterior is ultimately 1000x or more your astronomically adverse prior, but not for reasons which are discussed (and so a reader may neglect to include them when forming their own judgement).


*: I think there's also a presumptuous philosopher-type objection lurking here too. Folks (e.g.) could have used a similar argument to essentially rule out any x-risk from nuclear winter before any scientific analysis, as this implies significant credence in HH, which the argument above essentially rules out. Similar to 'using anthropics to hunt', something seems to be going wrong where the mental exercise of estimating potentially-vast future populations can also allow us to infer the overwhelmingly probable answers to disparate matters in climate modelling, AI development, the control problem, civilisation recovery, and so on. 

Comment by Gregory_Lewis on Thoughts on whether we're living at the most influential time in history · 2020-11-05T20:17:59.146Z · EA · GW

“It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.” This seems pretty strange as an argument to me. Being one-in-a-thousand is a thousand times less likely than being one-in-a-million, so of course if you think the evidence pushes you to thinking that you’re one-in-a-million, it needn’t push you all the way to thinking that you’re one-in-a-thousand. This seems important to me. Yes, you can give me arguments for thinking that we’re (in expectation at least) at an enormously influential time - as I say in the blog post and the comments, I endorse those arguments! I think we should update massively away from our prior, in particular on the basis of the current rate of economic growth. (My emphasis)

Asserting an astronomically adverse prior, then a massive update, yet being confident you're in the right ballpark re. orders of magnitude does look pretty fishy though. For a few reasons:

First, (in the webpage version you quoted) you don't seem sure of a given prior probability, merely that it is 'astronomical': yet astronomical numbers (including variations you note about whether to multiply by how many accessible galaxies there are or not, etc.) vary by substantially more than three orders of magnitude - you note two possible prior probabilities (of being among the million most influential people) of 1 in a million trillion (10^-18) and 1 in a hundred million (10^-8) - a span of 10 orders of magnitude. 

It seems hard to see how a Bayesian update from this (seemingly) extremely wide prior would give a central estimate at a (not astronomically minute) value, yet confidently rule against values 'only' 3 orders of magnitude higher (a distance a ten millionth the width of this implicit span in prior probability). [It also suggests the highest VoI is to winnow this huge prior range, rather than spending effort evaluating considerations around the likelihood ratio]

Second, whatever (very) small value we use for our prior probability, getting to non-astronomical posteriors implies likelihood ratios/Bayes factors which are huge. From (say) 10^-8 to 10^-4 is a factor of 10,000. As you say in your piece, this is much, much stronger than the benchmark for decisive evidence of ~100. It seems hard to say (e.g.) the evidence from the rate of economic growth is 'decisive' in this sense, and so it is hard to see how in concert with other heuristic considerations you get 10-100x more confirmation (indeed, your subsequent discussion seems to supply many defeaters to exactly this). Further, similar to worries about calibration out on the tail, it seems unlikely many of us can accurately assess LRs >100 which are not direct observations to within orders of magnitude. 

Third, priors should be consilient, and can be essentially refuted by posteriors. A prior that gets surprised to the tune of 1-in-millions should be hugely penalized versus any alternative (including naive intuitive gestalts) which does not. It seems particularly costly that non-negligible credences in (e.g.) nuclear winter, the industrial revolution being crucial, etc. facially represent this prior being surprised by '1 in large X' events at a rate much greater than 1/X.

To end up with not-vastly lower posteriors than your interlocutors (presuming Buck's suggestion of 0.1% is fair, and not something like 1/million), it seems one must assert a much lower prior which is mostly (but not completely) cancelled out by a much stronger update step. This prior ranges over many orders of magnitude, yet the posterior does not - and it is hard to see where the orders of magnitude of better resolution arise from (if we knew for sure the prior was 10^-12, versus knowing for sure it was 10^-8, shouldn't the posterior shift a lot between the two cases?)

It seems more reasonable to say 'our' prior is rather some mixed gestalt on considering the issue as a whole, and the concern about base-rates etc. should be seen as an argument for updating this downwards, rather than a bid to set the terms of the discussion.

Comment by Gregory_Lewis on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T20:39:04.637Z · EA · GW

I agree with this in the abstract, but for the specifics of this particular case, do you in fact think that online mobs / cancel culture / groups who show up to protest your event without warning should be engaged with on a good faith assumption? I struggle to imagine any of these groups accepting anything other than full concession to their demands, such that you're stuck with the BATNA regardless.

I think so. 

In the abstract, 'negotiating via ultimatum' (e.g. "you must cancel the talk, or I will do this") does not mean one is acting in bad faith. Alice may foresee there is no bargaining frontier, but is informing you what your BATNA looks like and gives you the opportunity to consider whether 'giving in' is nonetheless better for you (this may not be very 'nice', but it isn't 'blackmail'). A lot turns on whether her 'or else' is plausibly recommended by the lights of her interests (e.g. she would do these things if we had already held the event/she believed our pre-commitment to do so) or she is threatening spiteful actions where their primary value is her hope they alter our behaviour (e.g. she would at least privately wish she didn't have to 'follow through' if we defied her). 

The reason these are important to distinguish is 'folk game theory' gives a pro tanto reason to not give in the latter case, even if doing so is better than suffering the consequences (as you deter future attempts to coerce you). But not in the former one, as Alice's motivation to retaliate does not rely on the chance you may acquiesce to her threats, and so she will not 'go away' after you've credibly demonstrated to her you will never do this. 

On the particular case I think some of it was plausibly bad faith (i.e. if a major driver was a 'fleet in being' threat that people would antisocially disrupt the event) but a lot of it probably wasn't: "People badmouthing/thinking less of us for doing this" or (as Habryka put it) the 'very explicit threat' of an organisation removing their affiliation from EA Munich are all credibly/probably good faith warnings even if the only way to avoid them would have been complete concession. (There are lots of potential reasons I would threaten to stop associating with someone or something where the only way for me to relent is their complete surrender)

(I would be cautious about labelling things as mobs or cancel culture.)

[G]iven that she's taking actions that destroy value for Bob without generating value for Alice (except via their impact on Bob's actions), I think it is fine to think of this as a threat. (I am less attached to the bully metaphor -- I meant that as an example of a threat.)

Let me take a more in-group example readers will find sympathetic.

When the NYT suggested it would run an article using Scott's legal name, many of his supporters responded by complaining to the editor, organising petitions, cancelling their subscriptions (and encouraging others to do likewise), trying to coordinate sources/public figures to refuse access to NYT journalists, and so on. These are straightforwardly actions which 'destroy value' for the NYT, are substantially motivated to try and influence its behaviour, and amounted to an ultimatum to boot (i.e. the only way the NYT could placate this 'online mob' was to fully concede on not using Scott's legal name). 

Yet presumably this strategy was not predicated on 'only we are allowed to (or smart enough to) use game theory, so we can expect the NYT to irrationally give in to our threats when they should be ostentatiously doing exactly what we don't want them to do to demonstrate they won't be bullied'. For although these actions are 'threats', they are warnings/ good faith/ non-spiteful, as these responses are not just out of hope to coerce: these people would be minded to retaliate similarly if they only found out NYT's intention after the article had been published. 

Naturally the hope is that one can resolve conflict by a meeting of the minds: we might hope we can convince Alice to see things our way; and the NYT probably hopes the same. But if the disagreement prompting conflict remains, we should be cautious about how we use the word threat, especially in equivocating between commonsense use of the term (e.g. "I threaten to castigate Charlie publicly if she holds a conference on holocaust denial") and the subspecies where folk game theory - and our own self-righteousness - strongly urges us to refute (e.g. "Life would be easier for us at the NYT if we acquiesced to those threatening to harm our reputation and livelihoods if we report things they don't want us to. But we will never surrender the integrity of our journalism to bullies and blackmailers.")

Comment by Gregory_Lewis on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T11:35:03.478Z · EA · GW

Another case where 'precommitment  to refute all threats' is an unwise strategy (and a case more relevant to the discussion, as I don't think all opponents to hosting a speaker like Hanson either see themselves or should be seen as bullies attempting coercion) is where your opponent is trying to warn you rather than trying to blackmail you. (cf. 1, 2)

Suppose Alice sincerely believes some of Bob's writing is unapologetically misogynistic. She believes it is important one does not give misogynists a platform and implicit approbation. Thus she finds hosting Bob abhorrent, and is dismayed that a group at her university is planning to do just this. She approaches this group, making clear her objections and stating her intention, if this goes ahead, to (e.g.) protest this event, stridently criticise the group in the student paper for hosting him, petition the university to withdraw affiliation, and so on. 

This could be an attempt to bully (where usual game theory provides a good reason to refuse to concede anything on principle). But it also could not be: Alice may be explaining what responses she would make to protect her interests which the groups planned action would harm, and hoping to find a better negotiated agreement for her and the EA group besides "They do X and I do Y". 

It can be hard to tell the difference, but some elements in this example speak against Alice being a bully wanting to blackmail the group to get her way: First, there is the plausibility of her interests recommending these actions to her even if they had no deterrent effect whatsoever (i.e. she'd do the same if the event had already happened). Second, the actions she intends fall roughly within the 'fair game' of how one can retaliate against those doing something they're allowed to do which you deem to be wrong. 

Alice is still not a bully even if her motivating beliefs re. Bob are both completely mistaken and unreasonable. She's also still not a bully even if Alice's implied second-order norms are wrong (e.g. maybe the public square would be better off if people didn't stridently object to hosting speakers based on their supposed views on topics they are not speaking upon, etc.) Conflict is typically easy to navigate when you can dictate to your opponent what their interests should be and what they can license themselves to do. Alas such cases are rare.

It is extremely important not to respond to Alice as if she were a bully if in fact she is not, for two reasons. First, if she is acting in good faith, pre-committing to refuse any compromise for 'do not give in to bullying' reasons means one always ends up at one's respective BATNAs even if there were mutually beneficial compromises to be struck. Maybe there is no good compromise with Alice this time, but there may be the next time one finds oneself at cross-purposes.

Second, wrongly presuming bad faith for Alice seems apt to induce her to make a symmetrical mistake presuming bad faith for you. To Alice, malice explains well why you were unwilling to even contemplate compromise, why you considered yourself obliged out of principle to persist with actions that harm her interests, and why you call her desire to combat misogyny bullying and blackmail. If Alice also thinks about these things through the lens of game theory (although perhaps not in the most sophisticated way), she may reason she is rationally obliged to retaliate against you (even spitefully) to deter you from doing harm again. 

The stage is set for continued escalation. Presumptive bad faith is pernicious, and can easily lead to martyring oneself needlessly on the wrong hill. I also note that 'leaning into righteous anger' or 'take oneself as justified in thinking the worst of those opposed to you' are not widely recognised as promising approaches in conflict resolution, bargaining, or negotiation.

Comment by Gregory_Lewis on What actually is the argument for effective altruism? · 2020-09-27T17:09:03.912Z · EA · GW

This isn't much more than a rotation (or maybe just a rephrasing), but:

When I offer a 10 second or less description of Effective Altruism, it is hard to avoid making it sound platitudinous. Things like "using evidence and reason to do the most good", or "trying to find the best things to do, then doing them" are things I can imagine the typical person nodding along with, but then wondering what the fuss is about ("Sure, I'm also a fan of doing more good rather than less good - aren't we all?"). I feel I need to elaborate with a distinctive example (e.g. "I left clinical practice because I did some amateur health econ on how much good a doctor does, and thought I could make a greater contribution elsewhere") for someone to get a good sense of what I am driving at.

I think a related problem is the 'thin' version of EA can seem slippery when engaging with those who object to it. "If indeed intervention Y was the best thing to do, we would of course support intervention Y" may (hopefully!) be true, but is seldom the heart of the issue. I take it most common objections are not against the principle but the application (I also suspect this may inadvertently annoy an objector, given this reply can paint them as - bizarrely - 'preferring less good to more good'). 

My best try at what makes EA distinctive is a summary of what you spell out with spread, identifiability, etc: that there are very large returns to reason for beneficence (maybe 'deliberation' instead of 'reason', or whatever). I think the typical person does "use reason and evidence to do the most good", and can be said to be doing some sort of search for the best actions. I think the core of EA (at least the 'E' bit) is the appeal that people should do a lot more of this than they would otherwise - as, if they do, their beneficence would tend to accomplish much more.

Per OP, motivating this is easier said than done. The best case is for global health, as there is a lot more (common sense) evidence one can point to about some things being a lot better than others, and these object level matters a hypothetical interlocutor is fairly likely to accept also offer support for the 'returns to reason' story. For most other cause areas, the motivating reasons are typically controversial, and the (common sense) evidence is scant-to-absent. Perhaps the best moves here would be pointing to these as salient considerations which plausibly could dramatically change one's priorities, and so exploring to uncover these is better than exploiting after more limited deliberation (but cf. cluelessness).


Comment by Gregory_Lewis on Challenges in evaluating forecaster performance · 2020-09-12T13:33:39.006Z · EA · GW

I'm afraid I'm also not following. Take an extreme case (which is not that extreme, given I think the average number of forecasts per forecaster per question on GJO is 1.something). Alice predicts a year out P(X) = 0.2 and never touches her forecast again, whilst Bob predicts P(X) = 0.3, but decrements proportionately as time elapses. Say X doesn't happen (and say the right ex ante probability a year out was indeed 0.2). Although Alice > Bob on the initial forecast (and so if we just scored that day she would be better), if we carry forward Bob overtakes her overall [I haven't checked the maths for this example, but we can tweak initial forecasts so he does].

As time elapses, Alice's forecast steadily diverges from the 'true' ex ante likelihood, whilst Bob's converges to it. A similar story applies if new evidence emerges which dramatically changes the probability, if Bob updates on it and Alice doesn't. This seems roughly consonant with things like the stock-market - trading off month (or more) old prices rather than current prices seems unlikely to go well.
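Checking the maths for the example above (the linear decay schedule is my own way of cashing out 'decrements proportionately'):

```python
import numpy as np

# Alice forecasts 0.2 once and never updates; Bob starts at 0.3 and
# decrements proportionately towards 0 over the year. X does not occur,
# so each day's carried-forward Brier score is just the forecast squared.
days = np.arange(365)
alice = np.full(365, 0.2)           # static forecast
bob = 0.3 * (1 - days / 365)        # proportional decay
alice_brier = np.mean(alice ** 2)   # = 0.2**2 = 0.04
bob_brier = np.mean(bob ** 2)       # ~0.030
print(alice_brier, bob_brier)
```

With these particular numbers Bob's carried-forward average (~0.030) already beats Alice's (0.04), so no tweaking of the initial forecasts is needed.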

Comment by Gregory_Lewis on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-04T16:19:22.762Z · EA · GW

FWIW I agree with Owen. I agree the direction of effect supplies a pro tanto consideration which will typically lean in favour of other options, but it is not decisive (in addition to the scenarios he notes, some people have pursued higher degrees concurrently with RSP).

So I don't think you need to worry about potentially leading folks astray by suggesting this as an option for them to consider - although, naturally, they should carefully weigh their options up (including considerations around which sorts of career capital are most valuable for their longer term career planning).

Comment by Gregory_Lewis on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T12:34:13.124Z · EA · GW
As such, blackmail feels like a totally fair characterization [of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).]

As your subsequent caveat implies, whether blackmail is a fair characterisation turns on exactly how substantial this part was. If in fact the decision was driven by non-blackmail considerations, the (great-)grandparent's remarks about it being bad to submit to blackmail are inapposite.

Crucially, (q.v. Daniel's comment), not all instances where someone says (or implies), "If you do X (which I say harms my interests), I'm going to do Y (and Y harms your interests)" are fairly characterised as (essentially equivalent to) blackmail. To give a much lower resolution of Daniel's treatment, if (conditional on you doing X) it would be in my interest to respond with Y independent of any harm it may do to you (and any coercive pull it would have on you doing X in the first place), informing you of my intentions is credibly not a blackmail attempt, but a better-faith "You do X then I do Y is our BATNA here, can we negotiate something better?" (In some treatments these are termed warnings versus threats, or using terms like 'spiteful', 'malicious' or 'bad faith' to make the distinction).

The 'very explicit threat' of disassociation you mention is a prime example of 'plausibly (/prima facie) not-blackmail'. There are many credible motivations to (e.g.) renounce (or denounce) a group which invites a controversial speaker you find objectionable independent from any hope threatening this makes them ultimately resile from running the event after all. So too 'trenchantly criticising you for holding the event', 'no longer supporting your group', 'leaving in protest (and encouraging others to do the same)' etc. etc. Any or all of these might be wrong for other reasons - but (again, per Daniels) 'they're trying to blackmail us!' is not necessarily one of them.

(Less-than-coincidentally, the above are also acts of protest which are typically considered 'fair game', versus disrupting events, intimidating participants, campaigns to get someone fired, etc. I presume neither of us take various responses made to the NYT when they were planning to write an article about Scott to be (morally objectionable) attempts to blackmail them, even if many of them can be called 'threats' in natural language).

Of course, even if something could plausibly not be a blackmail attempt, it may in fact be exactly this. I may posture that my own interests would drive me to Y, but I would privately regret having to 'follow through' with this after X happens; or I may pretend my threat of Y is 'only meant as a friendly warning'. Yet although our counterparty's mind is not transparent to us, we can make reasonable guesses.

It is important to get this right, as the right strategy to deal with threats is a very wrong one to deal with warnings. If you think I'm trying to blackmail you when I say "If you do X, I will do Y", then all the usual stuff around 'don't give in to the bullies' applies: by refuting my threat, you deter me (and others) from attempting to bully you in future. But if you think I am giving a good-faith warning when I say this, it is worth looking for a compromise. Being intransigent as a matter of policy - at best - means we always end up at our mutual BATNAs even when there were better-for-you negotiated agreements we could have reached.

At worst, it may induce me to make the symmetrical mistake - wrongly believing your behaviour is in bad faith. That your real reasons for doing X, and for being unwilling to entertain the idea of compromise to mitigate the harm X will do to me, are because you're actually 'out to get me'. Game theory will often recommend retaliation as a way of deterring you from doing this again. So the stage is set for escalating conflict.

Directly: Widely across the comments here you have urged for charity and good faith to be extended to evaluating Hanson's behaviour which others have taken exception to - that adverse inferences (beyond perhaps "inadvertently causes offence") are not only mistaken but often indicate a violation of discourse norms vital for EA-land to maintain. I'm a big fan of extending charity and good faith in principle (although perhaps putting this into practice remains a work in progress for me). Yet you mete out much more meagre measure to others than you demand from them in turn, endorsing fervid hyperbole that paints those who expressed opposition to Munich inviting Hanson as bullies trying to blackmail them, and those sympathetic to the decision they made as selling out. Beyond this being normatively unjust, it is also prudentially unwise - presuming bad faith in those who object to your actions is a recipe for making a lot of enemies you didn't need to, especially in already-fractious intellectual terrain.

You could still be right - despite the highlighted 'very explicit threat' which is also very plausibly not blackmail, despite the other 'threats' alluded to which seem also plausibly not blackmail and 'fair game' protests for them to make, and despite what the organisers have said (publicly) themselves, the full body of evidence should lead us to infer what really happened was bullying which was acquiesced to. But I doubt it.

Comment by Gregory_Lewis on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T13:34:31.257Z · EA · GW

I'm fairly sure the real story is much better than that, although still bad in objective terms: In culture war threads, the typical norms re karma roughly morph into 'barely restricted tribal warfare'. So people have much lower thresholds both to slavishly upvote their 'team', and to downvote the opposing one.

Comment by Gregory_Lewis on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-30T23:34:56.885Z · EA · GW

Talk of 'blackmail' (here and elsethread) is substantially missing the mark. To my understanding, there were no 'threats' being acquiesced to here.

If some party external to the Munich group pressured them into cancelling the event with Hanson (and without this, they would want to hold the event), then the standard story of 'if you give in to the bullies you encourage them to bully you more' applies.

Yet unless I'm missing something, the Munich group changed their minds of their own accord, and not in response to pressure from third parties. Whether or not that was a good decision, it does not signal they're vulnerable to 'blackmail threats'. If anything, they've signalled the opposite by not reversing course after various folks castigated them on Twitter etc.

The distinction between 'changing our minds on the merits' and 'bowing to public pressure' can get murky (e.g. public outcry could genuinely prompt someone to change their mind that what they were doing was wrong after all, but people will often say this insincerely when what really happened is they were cowed by opprobrium). But again, the apparent absence of people pressuring Munich to 'cancel Hanson' makes this moot.

(I agree with Linch that the incentives look a little weird here given if Munich had found out about work by Hanson they deemed objectionable before they invited him, they presumably would not have invited him and none of us would be any the wiser. It's not clear "Vet more carefully so you don't have to rescind invitations to controversial speakers (with attendant internet drama) rather than not inviting them in the first place" is the lesson folks would want to be learned from this episode.)

Comment by Gregory_Lewis on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-08-07T14:02:26.822Z · EA · GW

I recall Hsiung being in favour of conducting disruptive protests against EAG 2015:

I honestly think this is an opportunity. "EAs get into fight with Elon Musk over eating animals" is a great story line that would travel well on both social and possibly mainstream media.

Organize a group. Come forward with an initially private demand (and threaten to escalate, maybe even with a press release). Then start a big fight if they don't comply.

Even if you lose, you still win because you'll generate massive dialogue!

It is unclear whether the motivation was more 'blackmail threats to stop them serving meat' or 'as Elon Musk will be there we can co-opt this to raise our profile'. Whether Hsiung calls himself an EA or not, he evidently missed the memo on 'eschew narrow minded obnoxious defection against others in the EA community'.

For similar reasons, it seems generally wiser for a community not to help people who previously wanted to throw it under the bus.

Comment by Gregory_Lewis on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-23T23:19:03.253Z · EA · GW

My reply is a mix of the considerations you anticipate. With apologies for brevity:

  • It's not clear to me whether avoiding anchoring favours (e.g.) round numbers or not. If my listener, in virtue of being human, is going to anchor on whatever number I provide them, I might as well anchor them on a number I believe to be more accurate.
  • I expect there are better forms of words for my examples which can better avoid the downsides you note (e.g. maybe saying 'roughly 12%' instead of '12%' still helps, even if you give a later articulation).
  • I'm less fussed about precision re. resilience (e.g. 'I'd typically expect drift of several percent from this with a few more hours to think about it' doesn't seem much worse than 'the standard error of this forecast is 6% versus me with 5 hours more thinking time' or similar). I'd still insist something at least pseudo-quantitative is important, as verbal riders may not put the listener in the right ballpark (e.g. does 'roughly' 10% pretty much rule out it being 30%?)
  • Similar to the 'trip to the shops' example in the OP, there's plenty of cases where precision isn't a good way to spend time and words (e.g. I could have counter-productively littered many of the sentences above with precise yet non-resilient forecasts). I'd guess there's also cases where it is better to sacrifice precision to better communicate with your listener (e.g. despite the rider on resilience you offer, they will still think '12%' is claimed to be accurate to the nearest percent, but if you say 'roughly 10%' they will better approximate what you have in mind). I still think when the stakes are sufficiently high, it is worth taking pains on this.
Comment by Gregory_Lewis on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-23T22:50:31.876Z · EA · GW

I had in mind the information-theoretic sense (per Nix). I agree the 'first half' is more valuable than the second half, but I think this is better parsed as diminishing marginal returns to information.

Very minor, re. child thread: You don't need to calculate numerically, as log2(100^2) = 2 log2(100), and log2(1000) = (3/2) log2(100). Admittedly the numbers (or maybe the remark in the OP generally) weren't chosen well, given 'number of decimal places' seems the more salient difference than the squaring (e.g. per-thousandths does not have double the information of per-cents, but 50% more)
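In code, the bit counts in question (a trivial check):

```python
import math

# Per-mille precision distinguishes 1000 values; per-cent distinguishes 100.
bits_permille = math.log2(1000)  # ~9.97 bits
bits_percent = math.log2(100)    # ~6.64 bits
# The ratio is exactly 3/2: 50% more information, not double.
print(bits_permille / bits_percent)
```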

Comment by Gregory_Lewis on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-23T22:27:36.206Z · EA · GW

It's fairly context dependent, but I generally remain a fan.

There's a mix of ancillary issues:

  • There could be a 'why should we care what you think?' if EA estimates diverge from consensus estimates, although I imagine folks tend to gravitate to neglected topics etc.
  • There might be less value in 'relative to self-ish' accounts of resilience: major estimates in a front facing report I'd expect to be fairly resilient, and so less "might shift significantly if we spent another hour on it".
  • Relative to some quasi-ideal seems valuable though: E.g. "Our view re. X is resilient, but we have a lot of knightian uncertainty, so we're only 60% sure we'd be within an order of magnitude of X estimated by a hypothetical expert panel/liquid prediction market/etc."
  • There might be better or worse ways to package this given people are often sceptical of any quantitative assessment of uncertainty (at least in some domains). Perhaps something like 'subjective confidence intervals' (cf.), although these aren't perfect.

But ultimately, if you want to tell someone an important number you aren't sure about, it seems worth taking pains to be precise, both on it and its uncertainty.

Comment by Gregory_Lewis on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2020-07-15T17:30:31.706Z · EA · GW

It is true that given the primary source (presumably this), the implication is that rounding supers to 0.1 hurt them, but 0.05 didn't:

To explore this relationship, we rounded forecasts to the nearest 0.05, 0.10, or 0.33 to see whether Brier scores became less accurate on the basis of rounded forecasts rather than unrounded forecasts. [...]
For superforecasters, rounding to the nearest 0.10 produced significantly worse Brier scores [by implication, rounding to the nearest 0.05 did not]. However, for the other two groups, rounding to the nearest 0.10 had no influence. It was not until rounding was done to the nearest 0.33 that accuracy declined.

Prolonged aside:

That said, despite the absent evidence I'm confident accuracy with superforecasters (and ~anyone else - more later, and elsewhere) does numerically drop with rounding to 0.05 (or anything else), even if it has not been demonstrated to be statistically significant:

From first principles, if the estimate has signal, shaving bits of information from it by rounding should make it less accurate (and it obviously shouldn't make it more accurate, which pretty reliably sets the upper bound of the effect at zero).
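One way to see this without significance testing (an illustrative decomposition of my own, not the paper's analysis): the expected Brier score of forecast q on an event with true probability p is p(1-p) + (q-p)^2, so for a well-calibrated forecaster (q = p) rounding can only add a penalty, and coarser rounding adds more:

```python
import numpy as np

# Illustrative setup: a grid of true probabilities, a perfectly calibrated
# forecaster (q = p), and the same forecasts rounded to 0.05 or 0.1.
# Expected Brier score of forecast q: E[(q - Y)**2] = p*(1-p) + (q - p)**2
p = np.linspace(0.001, 0.999, 9999)
brier_exact = p * (1 - p)
brier_round_005 = p * (1 - p) + (np.round(p / 0.05) * 0.05 - p) ** 2
brier_round_01 = p * (1 - p) + (np.round(p, 1) - p) ** 2
# Rounding always hurts on average - the 0.05 penalty is just small
# (~0.0002) and so easy to miss in a finite sample.
print(brier_exact.mean(), brier_round_005.mean(), brier_round_01.mean())
```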

Further, there seems very little motivation for the idea we have n discrete 'bins' of probability across the number line (often equidistant!) inside our heads, and as we become better forecasters n increases. That we have some standard error to our guesses (which ~smoothly falls with increasing skill) seems significantly more plausible. As such the 'rounding' tests should be taken as loose proxies to assess this error.

Yet if the error process is this, rather than 'n real values + jitter no more than 0.025', undersampling and aliasing should introduce a further distortion. Even if you think there really are n bins someone can 'really' discriminate between, intermediate values are best seen as a form of anti-aliasing ("Think it is more likely 0.1 than 0.15, but not sure, maybe its 60/40 between them so I'll say 0.12") which rounding ablates. In other words 'accurate to the nearest 0.1' does not mean the second decimal place carries no information.

Also, if you are forecasting distributions rather than point estimates (cf. Metaculus), said forecast distributions typically imply many intermediate value forecasts.

Empirically, there's much to suggest a type 2 error (false negative) explanation of the lack of a 'significant' drop. As you'd expect, the size of the accuracy loss grows with both how coarsely things are rounded, and the performance of the forecaster. Even if relatively finer coarsening makes things slightly worse, we may expect to miss it. This looks better to me on priors than these trends 'hitting a wall' at a given level of granularity (so I'd guess untrained forecasters are numerically worse if rounded to 0.1, even if the worse performance means there is less signal to be lost, and in turn makes this hard to 'statistically significantly' detect).

I'd adduce other facts against too. One is simply that superforecasters are prone to not give forecasts on a 5% scale, using intermediate values instead: given their good calibration, you'd expect them to iron out this Brier-score-costly jitter (also, this would be one of the few things they are doing worse than regular forecasters). You'd also expect discretization in things like their calibration curve (e.g. events they say happen 12% of the time in fact happen 10% of the time, whilst events that they say happen 13% of the time in fact happen 15% of the time), or other derived figures like ROC.

This is ironically foxy, so I wouldn't be shocked for this to be slain by the numerical data. But I'd bet at good odds (north of 3:1) things like "Typically, for 'superforecasts' of X%, these events happened more frequently than those forecast at (X-1)%, (X-2)%, etc."

Comment by Gregory_Lewis on EA Forum feature suggestion thread · 2020-06-20T07:53:23.878Z · EA · GW

On-site image hosting for posts/comments? This is mostly a minor QoL benefit, and maybe there would be challenges with storage. Another benefit would be that images would not vanish if their original source does.

Comment by Gregory_Lewis on EA Forum feature suggestion thread · 2020-06-20T07:49:06.302Z · EA · GW

Import from HTML/gdoc/word/whatever: One feature I miss from the old forum was the ability to submit HTML directly. This allowed one to write the post in google docs or similar (with tables, footnotes, sub/superscript, special characters, etc.), export it as HTML, paste into the old editor, and it was (with some tweaks) good to go.

This is how I posted my epistemic modesty piece (which has a table which survived the migration, although the footnote links no longer work). In contrast, when cross-posting it to LW2, I needed the kind help of a moderator - and even they needed to make some adjustments (e.g. 'writing out' the table).

Given such a feature was available before, hopefully it can be done again. It would be particularly valuable for the EA forum as:

  • A fair proportion of posts here are longer documents which benefit from the features available in things like Word or gdocs. (But there is typically less mathematics than on LW, so the nifty LaTeX editor finds less value here than there.)
  • The current editor has much less functionality than word/gdocs, and catching up 'most of the way' seems very labour intensive and could take a while.
  • Most users are more familiar with gdocs/Word than with the editor/markdown/LaTeX (i.e. although I can add special characters with the LaTeX editor and some googling, I'm more familiar with doing this in gdocs - and I guess folks who have less experience with LaTeX or using a command line would find this difference greater).
  • Most users are probably drafting longer posts on google docs anyway.
  • Clunkily re-typesetting long documents in the forum editor manually (e.g. tables as image files) poses a barrier to entry, and so encourages linking rather than posting, with (I guess?) less engagement.

A direct 'import from gdoc/word/etc.' would be even better, but an HTML import function alone (given the prevalence of software which has both word processing and HTML export 'sorted') would solve a lot of these problems at a stroke.

Comment by Gregory_Lewis on EA Forum feature suggestion thread · 2020-06-20T06:54:33.518Z · EA · GW

Footnote support in the 'standard' editor: For folks who aren't fluent in markdown (like me), the current process is switching the editor back and forth to 'markdown mode' to add these footnotes, which I find pretty cumbersome.[1]

[1] So much so I lazily default to doing it with plain text.

Comment by Gregory_Lewis on Examples of people who didn't get into EA in the past but made it after a few years · 2020-05-30T18:54:55.082Z · EA · GW

I applied for a research role at GWWC a few years ago (?2015 or so), and wasn't selected. I now do research at FHI.

In the interim I worked as a public health doctor. Although I think this helped me 'improve' in a variety of respects, 'levelling up for an EA research role' wasn't the purpose in mind: I was expecting to continue as a PH doctor rather than 'switch across' to EA research in the future; had I been offered the role at GWWC, I'm not sure whether I would have taken it.

There are a couple of points I'd want to emphasise.

1. Per Khorton, I think most of the most valuable roles (certainly in my 'field' but I suspect in many others, especially the more applied/concrete) will not be at 'avowedly EA organisations'. Thus, depending on what contributions you want to make, 'EA employment' may not be the best thing to aim for.

2. Pragmatically, 'avowedly EA organisation roles' (especially in research) tend to be oversubscribed and highly competitive. Thus (notwithstanding the above), if this is one's primary target, it seems wise to have a career plan which does not rely on securing such a role (or at least to have a backup).

3. Although there's a sense of the ways one can build 'EA street cred' (or whatever), it's not clear these forms of 'EA career capital' are best even for employment at avowedly EA organisations. I'd guess my current role owes more to (e.g.) my medical and public health background than it does to my forum oeuvre (such as it is).

Comment by Gregory_Lewis on Why not give 90%? · 2020-03-26T11:42:23.593Z · EA · GW

Part of the story, on a consequentialising-virtue account, is that desire for luxury is typically amenable to being changed in general, if not in Agape's case in particular. Thus her attitude of regret rather than shrugging her shoulders typically makes things go better - if not for her, then for third parties who have a shot at improving this aspect of themselves.

I think most non-consequentialist views (including ones I'm personally sympathetic to) would fuzzily circumscribe character traits where moral blameworthiness can apply even if they are incorrigible. To pick two extremes: if Agape was born blind, and this substantially impeded her from doing as much good as she would like, the commonsense view could sympathise with her regret, but insist she really has 'nothing to be sorry about'; yet if Agape couldn't help being a vicious racist, and this substantially impeded her from helping others (say, because the beneficiaries are members of racial groups she despises), this is a character-staining fault Agape should at least feel bad about even if being otherwise is beyond her - plausibly, it would recommend her make strenuous efforts to change even if both she and others knew for sure all such attempts are futile.

Comment by Gregory_Lewis on Why not give 90%? · 2020-03-25T12:15:34.912Z · EA · GW

Nice one. Apologies for once again offering my 'c-minor mood' key variation: Although I agree with the policy upshot, 'obligatory, demanding effective altruism' does have some disquieting consequences for agents following this policy in terms of their moral self-evaluation.

As you say, Agape does the right thing if she realises (similar to prof procrastinate) that although, in theory, she could give 90% (or whatever) of her income/effort to help others, in practice she knows this isn't going to work out, and so given she wants to do the most good, she should opt for doing somewhat less (10% or whatever), as she foresees being able to sustain this.

Yet the underlying reason for this is a feature of her character which should be the subject of great moral regret. Bluntly: she likes her luxuries so much that she can't abide being without them, despite being aware (inter alia) that a) many people have no choice but to go without the luxuries she licenses herself to enjoy; b) said self-provision implies grave costs to those in great need if (per impossibile) she could give more; c) her competing 'need' doesn't have great non-consequentialist defences (cf. if she were giving 10% rather than 90% due to looking after members of her family); d) there's probably not a reasonable story of desert for why she is in this fortunate position in the first place; e) she is aware of other people, similarly situated to her, who nonetheless do manage to do without similar luxuries and give more of themselves to help others.

This seems distinct from other prudential limitations a wise person should attend to. Agape, when making sure she gets enough sleep, may in some sense 'regret' she has to sleep for several hours each day. Yet it is wise for Agape to sleep enough, and needing to sleep (even if she needs to sleep more than others) is not a blameworthy trait. It is also wise for Agape to give less in the OP given her disposition of, essentially, "I know I won't keep giving to charity unless I also have a sports car". But even if Agape can't help this no more than needing to sleep, this trait is blameworthy.

Agape is not alone in having blameworthy features of her character - I, for one, have many; moral saintliness is rare, and most readers probably could do more to make the world better were they better people. 'Obligatory, demanding effective altruism' would also make recommendations against responses to this fact which are counterproductive (e.g. excessive self-flagellation, scrupulosity). I'd agree, but want to say slightly more about the appropriate attitude as well as the right action - something along the lines of non-destructive and non-aggrandising regret.[1] I often feel EAs tend to err in the direction of being estranged from their own virtue; but they should also try to avoid being too complaisant to their own vice.

[1] Cf. Kierkegaard, Sickness unto Death

Either in confused obscurity about oneself and one’s significance, or with a trace of hypocrisy, or by the help of cunning and sophistry which is present in all despair, despair over sin is not indisposed to bestow upon itself the appearance of something good. So it is supposed to be an expression for a deep nature which thus takes its sin so much to heart. I will adduce an example. When a man who has been addicted to one sin or another, but then for a long while has withstood temptation and conquered -- if he has a relapse and again succumbs to temptation, the dejection which ensues is by no means always sorrow over sin. It may be something else, for the matter of that it may be exasperation against providence, as if it were providence which had allowed him to fall into temptation, as if it ought not to have been so hard on him, since for a long while he had victoriously withstood temptation. But at any rate it is womanish [recte maudlin] without more ado to regard this sorrow as good, not to be in the least observant of the duplicity there is in all passionateness, which in turn has this ominous consequence that at times the passionate man understands afterwards, almost to the point of frenzy, that he has said exactly the opposite of that which he meant to say. Such a man asseverated with stronger and stronger expressions how much this relapse tortures and torments him, how it brings him to despair, "I can never forgive myself for it"; he says. And all this is supposed to be the expression for how much good there dwells within him, what a deep nature he is.

Comment by Gregory_Lewis on Thoughts on The Weapon of Openness · 2020-02-17T05:15:52.123Z · EA · GW
All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?

No I agree with these pro tanto costs of secrecy (and the others you mentioned before). But key to the argument is whether these problems inexorably get worse as time goes on. If so, then the benefits of secrecy inevitably have a sell-by date, and once the corrosive effects spread far enough one is better off 'cutting ones losses' - or never going down this path in the first place. If not, however, then secrecy could be a strategy worth persisting with if the (~static) costs of this are outweighed by the benefits on an ongoing basis.

The proposed trend of 'getting steadily worse' isn't apparent to me. One can find many organisations which do secret technical work and have been around for decades (the NSA is one, most defence contractors another, (D)ARPA, etc.). A skim of what they were doing in (say) the 80s versus the 50s doesn't give the impression they got dramatically worse despite thirty years of secrecy's supposed corrosive impact. Naturally, the attribution is very murky (e.g. even if their performance remained okay, maybe secrecy had become much more corrosive but this was outweighed by countervailing factors like much larger investment; maybe they would have fared better under a 'more open' counterfactual), but the task of dissecting out the 'being secret × time' interaction term and showing it is negative should be borne by the affirmative case.

Comment by Gregory_Lewis on EA Survey 2019 Series: Donation Data · 2020-02-14T18:24:26.187Z · EA · GW


Like last year, we ran a full model with all interactions, and used backwards selection to select predictors.

Presuming backwards selection is stepwise elimination, this is not a great approach to model generation. See e.g. this from Frank Harrell: in essence, stepwise selection tends to be a recipe for overfitting, and thus the models it generates tend to have inflated goodness-of-fit measures (e.g. R2), overestimated coefficient estimates, and hard-to-interpret p-values (given the implicit multiple testing in the prior 'steps'). These problems are compounded by generating a large number of new variables (all the interaction terms) for stepwise selection to play with.

Some improvements would be:

1. Select the variables by your judgement, and report that model. If you do any post-hoc additions (e.g. suspecting an interaction term), report these with the rider it is a post-hoc assessment.

2. Have a hold-out dataset to test your model (however you choose to generate it) against. (Cross-validation is an imperfect substitute).

3. Use ridge, the Lasso, elastic net, or other principled approaches to variable selection.
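To make the overfitting worry concrete, here is a toy sketch (the sample sizes and variable counts are arbitrary choices of mine) of why selection flatters in-sample fit: among many pure-noise predictors, the one that best fits the training data looks impressive in-sample but evaporates on held-out data:

```python
import math
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

trials, n, n_vars = 200, 30, 20
in_sample, held_out = [], []
for _ in range(trials):
    # Outcome and predictors are all independent noise: the 'true' model is empty.
    y_tr = [random.gauss(0, 1) for _ in range(n)]
    y_te = [random.gauss(0, 1) for _ in range(n)]
    X_tr = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n_vars)]
    X_te = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n_vars)]
    # 'Variable selection': keep whichever noise variable best fits the training set.
    best = max(range(n_vars), key=lambda j: abs(corr(X_tr[j], y_tr)))
    in_sample.append(abs(corr(X_tr[best], y_tr)))
    held_out.append(abs(corr(X_te[best], y_te)))

mean_in = sum(in_sample) / trials
mean_out = sum(held_out) / trials
print(f"mean |r| in-sample: {mean_in:.2f}, held out: {mean_out:.2f}")
```

The same selection effect, with interaction terms multiplying the candidate pool, is why stepwise R2 and p-values overstate the final model - and why a hold-out set (point 2) exposes it.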

Comment by Gregory_Lewis on Thoughts on The Weapon of Openness · 2020-02-13T14:28:35.051Z · EA · GW

Thanks for this, both the original work and your commentary was an edifying read.

I'm not persuaded, although this is mainly owed to the common challenge that noting considerations 'for' or 'against' in principle does not give a lot of evidence of what balance to strike in practice. Consider something like psychiatric detention: folks are generally in favour of (e.g.) personal freedom, and we do not need to think very hard to see how overruling this norm 'for their own good' could go terribly wrong (nor look very far to see examples of just this). Yet these considerations do not tell us what the optimal policy should be relative to the status quo, still less how it should be applied to a particular case.

Although the relevant evidence can neither be fully observed nor fairly sampled, there's a fairly good prima facie case that some degree of secrecy does not lead to disaster, and is sometimes beneficial. There's some wisdom-of-the-crowd account that secrecy is the default for some 'adversarial' research; it would be surprising if technological facts proved exceptions to the utility of strategic deception. Bodies that conduct 'secret by default' work have often been around for decades (and the states that house them for centuries), and although there's much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay being attributable to their secrecy is much less clear cut.

Moreover, technological secrecy has had some eye-catching successes: the NSA likely discovered differential cryptanalysis years before it appeared in the open literature; discretion by early nuclear scientists (championed particularly by Szilard) about what to publish credibly gave the Manhattan project a decisive lead over rival programs. Openness can also have downsides - the one that springs to mind from my 'field' is that Al-Qaeda started exploring bioterrorism after learning of the United States expressing concern about the same.

Given what I said above, citing some favourable examples doesn't say much (although the nuclear weapon one may have proved hugely consequential). One account I am sympathetic to would be talking about differential (or optimal) disclosure: provide information in the manner which maximally advantages good actors over bad ones. This will recommend open broadcast in many cases: e.g. where there aren't really any bad actors, where the bad actors cannot take advantage of the information (or they know it already, so letting the good actors 'catch up'), where there aren't more selective channels, and so forth. But not always: there seem instances where, if possible, it would be better to preferentially disclose to good actors versus bad ones - and this requires some degree of something like secrecy.

Judging the overall first-order calculus, let alone weighing it against second-order concerns (such as those noted above), is fraught: although, for what it's worth, I think 'security service' norms tend closer to the mark than 'academic' ones. I understand cybersecurity faces similar challenges around vulnerability disclosure, as 'don't publish the bug until the vendor can push a fix' may not perform as well as one might naively hope: for example, 'white hats' postponing their discoveries hinders collective technological progress, and risks falling behind a 'black hat' community avidly trading tips and tricks. This consideration can also point the other way: if the 'white hats' are much more able than their typically fragmented and incompetent adversaries, the greater the danger of their work 'giving bad people good ideas'. The FBI or whoever may prove much more adept at finding vulnerabilities terrorists could exploit than terrorists themselves. They would be unwise to blog their red-teaming exercises.

Comment by Gregory_Lewis on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-03T03:46:27.270Z · EA · GW

All of your examples seem much better than the index case I am arguing against. Commonsense morality attaches much less distaste to cases where those 'in peril' are not crisply identified (e.g. "how many will die in some pandemic in the future" is better than "how many will die in this particular outbreak", which is better than "will Alice, currently ill, live or die?"). It should also find bets on historical events (essentially) fine, as whatever good or ill is implicit in them has already occurred.

Of course, I agree that your examples would be construed as to some degree morbid. But my recommendation wasn't "refrain from betting on any question where we can show the topic is to some degree morbid" (after all, betting on the GDP of a given country could be construed this way, given its large downstream impacts on welfare). It was to refrain in those cases where it appears very distasteful and there's no sufficient justification. As it seems I'm not expressing this balancing consideration well, I'll belabour it.


Say, God forbid, one of my friend's children has a life-limiting disease. On its face, it seems tasteless for me to compose predictions at all on questions like, "will they still be alive by Christmas?" Carefully scrutinising whether they will live or die seems to run counter to the service I should be providing as a supporter of my friend's family and someone with the child's best interests at heart. It goes without saying that opening a book on a question like this seems deplorable, and offering (and confirming) bets where I take the pessimistic side despicable.

Yet other people do have good reason to try to compose an accurate prediction on survival or prognosis. The child's doctor may find themselves in the invidious position of recognising that their duty to give my friend's family the best estimate they can runs at cross purposes to other moral imperatives that apply too. The commonsense/virtue-ethicsy hope would be that the doctor can strike the balance which best satisfies these cross purposes, so that otherwise callous thoughts and deeds are justified by their connection to providing important information to the family.

Yet an incremental information benefit isn't enough to justify any degree of distastefulness. If the doctor opened a prediction market on a local children's hospice, I think (even if they were solely and sincerely motivated by good purposes, such as providing families with in-expectation better prognostication now and in the future) they would have gravely missed the mark.

Of the options available, 'bringing money into it' generally looks more ghoulish the closer the connection between 'something horrible happening' and 'payday!'. A mere prediction platform is better (although still probably the wrong side of the line unless we have specific evidence it will give a large benefit); paying people to make predictions on said platform is slightly better again, provided one pays for activity and aggregate accuracy rather than direct 'bet results'. "This family's loss (of their child) will be my gain (of some money)" is the sort of grotesque counterfactual good people would strenuously avoid being party to save for exceptionally good reason.


To repeat: it is the balance of these factors - which come in degrees - that determines the final evaluation. So, for example, I'm not against people forecasting the 'nCoV' question (indeed, I do as well), but the addition of money takes it the wrong side of the line (notwithstanding the laudable motivation behind the money being ridden on it). Likewise I'm happy for people to prop bet pretty freely on some of your questions, but not on the 'nCoV' one (or some even more extreme versions), as the former are somewhat less ghoulish, etc. etc.

I confess some irritation. Whilst you and Oli are pressing arguments (sorry - "noticing confusion") re. there being no crisp quality that obtains for the objectionable cases yet not the less objectionable ones (e.g. 'You say this question is 'morbid' - but look here! here are some other questions which are qualitatively morbid too, and we shouldn't rule them all out'), you are in fact committed to some sort of balancing account.

I presume (hopefully?) you don't think a 'child hospice sweepstake' would be a good idea for someone to try (even if it might improve our calibration! and give useful information re. paediatric prognostication which could be of value to the wider world! and capitalism is built on accurate price signals! etc. etc.). As you're not biting the bullet on these reductios (nor bmg's, nor others'), you implicitly accept that all the considerations for betting being a good thing are pro tanto, and can be overcome at some extreme limit of ghoulishness etc.

How to weigh these considerations is up for grabs. Yet picking each individual feature of ghoulishness in turn and showing that it, alone, is not enough to warrant refraining from highly ghoulish bets (where the true case against is composed of other factors alongside the one being shown to be individually insufficient) seems an exercise in the fallacy of division.


I also note that all the (few) prop bets I recall in EA up until now (including one I made with you) weren't morbid. This suggests refraining wouldn't appreciably reduce the track record of prop bets which show (as Oli sees it) admirable EA virtues of skin in the game.

Comment by Gregory_Lewis on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T00:09:24.330Z · EA · GW
Both of these are environments in which people participate in something very similar to betting. In the first case they are competing pretty directly for internet points, and in the second they are competing for monetary prices.
Those two institutions strike me as great examples of the benefit of having a culture of betting like this, and also strike me as similarly likely to create offense in others.

I'm extremely confident a lot more opprobrium attaches to bets where the payoff is in money than to those where the payoff is in internet points etc. As you note, I agree certain forecasting questions (even without cash) provoke distaste: if those same questions were on a prediction market, the reaction would be worse. (There's also likely an issue of the money calling one's motivation into question - if epi types are trying to predict a death toll and not getting money for their efforts, their efforts seem to have a laudable purpose in mind; less so if they are riding money on it.)

I agree with you that were there only the occasional one-off bet on the forum that was being critiqued here, the epistemic cost would be minor. But I am confident that a community that had a relationship to betting that was more analogous to how Chi's relationship to betting appears to be, we would have never actually built the Metaculus prediction platform.

This looks like a stretch to me. Chi can speak for themselves, but their remarks don't seem to entail a 'relationship to betting' writ large, but an uneasy relationship to morbid topics in particular. Thus the policy I take them to be recommending (which I also endorse) of refraining from making 'morbid' or 'tasteless' bets (but feeling free to prop bet to one's heart's desire on other topics) seems to have very minor epistemic costs, rather than threatening some transformation of epistemic culture which would mean people stop caring about predictions.

For similar reasons, this also seems relatively costless in terms of other perceptions: excluding 'morbid' topics from betting rules out only a small minority of questions one can bet upon, leaving the community plenty of opportunities to signal its virtuous characteristics re. taking ideas seriously whilst avoiding those which reflect poorly upon it.

Comment by Gregory_Lewis on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-01T21:53:48.141Z · EA · GW

I emphatically object to this position (and agree with Chi's). As best as I can tell, Chi's comment is more accurate and better argued than this critique, and so the relative karma between the two dismays me.

I think it is fairly obvious that 'betting on how many people are going to die' looks ghoulish to commonsense morality. I think the articulation why this would be objectionable is only slightly less obvious: the party on the 'worse side' of the bet seems to be deliberately situating themselves to be rewarded as a consequence of the misery others suffer; there would also be suspicion about whether the person might try and contribute to the bad situation seeking a pay-off; and perhaps a sense one belittles the moral gravity of the situation by using it for prop betting.

Thus I'm confident that if we ran some survey confronting the 'person on the street' with the idea of people making this sort of bet, they would not think "wow, isn't it great they're willing to put their own money behind their convictions", but something much more adverse along the lines of "holding a sweepstake on how many die".

(I can't find an easy instrument for this beyond asking people/anecdata: the couple of non-EA people I've run this by have reacted either negatively or very negatively, and I know comments on forecasting questions which boil down to "will public figure X die before date Y" register their distaste. If there is a more objective assessment accessible, I'd offer odds at around 4:1 on the ratio of positive:negative sentiment being <1.)

Of course, I think such an initial 'commonsense' impression would be very unfair to Sean or Justin: I'm confident they engaged in this exercise only out of a sincere (and laudable) desire to try to better understand an important topic. Nonetheless (and to hold them to much higher standards than my own behaviour), one may suggest it is a lapse of practical wisdom to act on one laudable motivation without tempering it with the other moral concerns one should also be mindful of.

One needs to weigh the 'epistemic' benefits of betting (including higher-order terms) against the 'tasteless' complaint (both the moral-pluralism case that it may be bad, and the more prudential case that it looks bad to third parties). If the epistemic benefits were great enough, we should reconcile ourselves to the costs of sometimes acting tastelessly (triage is distasteful too) or of third parties (reasonably, if mistakenly) thinking less of us.

Yet the epistemic benefits on the table here (especially on the margin of 'feel free to bet, save on commonsensically ghoulish topics') are extremely slim. The rate of betting in EA/rationalist-land on any question is very low, so the signal you get from small-n bets is trivial. There are other options, especially for this question, which give you much more signal per unit activity - given that, unlike the stock market, people are interested in the answer for other-than-pecuniary motivations: both Metaculus and the Johns Hopkins prediction platform have relevant questions which are much more active, and where people are offering more information.

Given the marginal benefits are so slim, they are easily outweighed by the costs Chi notes. And they are.