Posts

Centre for the Study of Existential Risk update 2016-08-31T09:06:12.796Z · score: 16 (17 votes)
Environmental risk postdoctoral research position at CSER. 2016-04-20T12:12:10.471Z · score: 7 (9 votes)
New Leverhulme Centre on the Future of AI (developed at CSER with spokes led by Bostrom, Russell, Shanahan) 2015-12-03T10:02:10.216Z · score: 11 (11 votes)
New positions and recent hires at the Centre for the Study of Existential Risk 2015-10-05T17:55:07.679Z · score: 10 (10 votes)
Postdoctoral research positions at CSER (Cambridge, UK) 2015-03-26T18:03:27.544Z · score: 5 (5 votes)

Comments

Comment by sean_o_h on What are examples of EA work being reviewed by non-EA researchers? · 2020-03-29T15:53:53.878Z · score: 8 (4 votes) · EA · GW

Likewise for publications at CSER. I'd add that for policy work, written policy submissions often provide summaries, key takeaways, and action-relevant points based on 'primary' work done by the centre and its collaborators, where the primary work is peer-reviewed.

We've received informal/private feedback from people in policy/government roles at various points that our submissions and presentations have been particularly useful or influential. We'll also have some confidential written testimony to support this for a few examples, for University REF (Research Excellence Framework) assessment purposes; unfortunately, however, I don't have permission to share these publicly at this time. That said, this comment I wrote last year provides some info that could be used as indirect indication of the work being seen as high-quality (being among a select number invited to present orally; follow-up engagement, etc.):

https://forum.effectivealtruism.org/posts/whDMv4NjsMcPrLq2b/cser-and-fhi-advice-to-un-high-level-panel-on-digital?commentId=y7DjYFE3gjZZ9caij

Comment by sean_o_h on Coronavirus Research Ideas for EAs · 2020-03-28T16:56:30.920Z · score: 2 (1 votes) · EA · GW

Thanks Peter, that's awesome!

Comment by sean_o_h on Coronavirus Research Ideas for EAs · 2020-03-28T09:52:10.920Z · score: 14 (8 votes) · EA · GW

Thank you for writing this up; it's extremely helpful, especially in such a rapidly developing space. A very optional request: might you consider updating this e.g. once a week with significant relevant developments on these ideas/questions? With so many of us involved in many different ways, it could provide a helpful evolving roadmap. Feel free to ignore if too much hassle or redundant with summaries elsewhere.

Comment by sean_o_h on The best places to donate for COVID-19 · 2020-03-21T12:25:47.808Z · score: 14 (7 votes) · EA · GW

[Disclaimer: I am co-director of CSER, but giving an individual view.] Hi - a quick comment (apologies that I may not have time to respond to replies; very busy period).

>“We understand that CSER’s work mostly has little direct relevance to COVID-19, but some of it is relevant to pandemics and that they are looking to expand this element of their team. We believe that this may be a suitable choice for funders inspired to support pandemics as a result of the coronavirus outbreak.”

This is accurate in my view. However, I would emphasise that for EA funders keen to support (a) a *direct* response to Covid-19 and/or (b) the most time-effective use of funds relating to the current situation within the next 6 months, there are likely to be more timely interventions than supporting CSER at this immediate time.

E.g. we ourselves are working to support other initiatives by collaborators relating to the immediate situation (I have been looking for ways to support Univursa*, whose researchers we've worked with before, and which I individually consider particularly promising in the current situation). As the writeup says, our work is more focused on broader GCR and pandemic/biorisk governance and preparedness. We are in the process of making a number of hires (50% of whom are biorisk/epidemiology/biosecurity specialists). I expect we will have a lesser need for additional funding in the 0-6 month window. In the >6 month window, as the world (hopefully) moves from immediate crisis response to better preparedness/governance/biosecurity, and as our expanded bio team develops and expands its work relevant to this, we are likely to have significantly more RFMF (although I could not give a view at this time on the comparative value of funds with other orgs in future). I should also mention that some of our work is likely to be under the banner of other initiatives our researchers are a part of (e.g. the BioRISC initiative, which has gained good traction in the UK policy context: https://www.caths.cam.ac.uk/research/biorisc)

Very grateful to Sanjay, and to everyone else working hard to identify opportunities to combat Covid-19!

*Footnote on my being excited about Univursa: While the approach was initially developed with a focus on haemorrhagic epidemics (e.g. Ebola), based on my analysis of the method, and discussion with the researchers, I believe it will be very suitable for adaptation to Covid-19 diagnostics (although no guarantees can be made until database development and field testing are completed), and could play a very important role in resource-limited settings like sub-Saharan Africa, where testing and outbreak-detection capacity is extremely limited. Further, above and beyond regional benefits, it is my understanding that unless appropriate tools are provided to these regions, getting this pandemic under control globally will be a lot more challenging.

Comment by sean_o_h on Toby Ord’s ‘The Precipice’ is published! · 2020-03-11T20:23:16.827Z · score: 2 (1 votes) · EA · GW

That would be a shame. If you're fairly familiar with Xrisk literature and FHI's work in particular, then a lot of the juiciest facts and details are in the footnotes - I found them fascinating.

Comment by sean_o_h on COVID-19 brief for friends and family · 2020-03-04T15:59:02.957Z · score: 11 (8 votes) · EA · GW

Datapoint (my general considerations/thought processes around this, feeding into case-by-case decisions about my own activities rather than a blanket decision): I am (as a young, healthy male) pretty unconcerned about the risk to myself individually, but quite concerned about becoming a vector for spread (especially to older or less robust people). While I have a higher personal risk tolerance than some, I don't like the idea of imposing my risk tolerance on others. Particularly when travelling/fatigued/jetlagged, I'm not 100% sure I trust my own attention to detail enough to reliably take all the necessary steps carefully, so this makes me a little hesitant to take on long-haul travel to international events (I also work/interact with older colleagues reasonably regularly, and am concerned re: the indirect effects of my actions on them).

I would also like to see society-level actions that reduce disease spread, and I intuitively feel that EA should be a participant in such actions, given it takes such risks seriously as a community.

Comment by sean_o_h on COVID-19 brief for friends and family · 2020-03-03T15:35:31.232Z · score: 4 (2 votes) · EA · GW

The information Singapore is gathering, collating and making available is fascinating.

https://twitter.com/RyutaroUchiyama/status/1234616723615166465

Singapore is also one of the nations that appears to be dealing most effectively with its coronavirus outbreak (the rate of new cases is comparatively low). The country also had a very effective response to SARS in 2003. (Although by Western standards the extent to which it gathers information on the population might be uncomfortable.)

Comment by sean_o_h on COVID-19 brief for friends and family · 2020-03-02T19:53:51.266Z · score: 5 (3 votes) · EA · GW

The 6 deaths now reported in Washington State are also consistent with the outbreak there being substantially larger than the 14 cases currently recorded.

Comment by sean_o_h on COVID-19 brief for friends and family · 2020-03-01T10:23:28.425Z · score: 11 (5 votes) · EA · GW

FYI, sequencing from the Snohomish County, Washington cases suggests there has been cryptic transmission in Washington State for the last 3-6 weeks, and potentially a substantial outbreak (a few hundred cases) ongoing there (likely missed because of the focus on travellers returning from China).

https://twitter.com/trvrb/status/1233970271318503426

Comment by sean_o_h on COVID-19 brief for friends and family · 2020-02-29T15:25:55.538Z · score: 11 (5 votes) · EA · GW

Too early to have confidence on higher temperatures limiting spread, IMO (although some reason to hope, certainly): cases in Japan are only <2.5x higher than in Singapore (234 vs 102 last I saw, and IIRC it got to Japan slightly earlier); surveillance and testing in African nations are unlikely to be as extensive as in e.g. Japan/South Korea; and there is likely less travel volume going through African nations than through some of the Asian hubs.

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-27T18:49:06.364Z · score: 4 (3 votes) · EA · GW

I must admit, I would not make the same bet at the same odds on the 27th of February 2020.

Comment by sean_o_h on My Charitable Giving Report 2019 · 2020-02-27T17:47:46.020Z · score: 9 (4 votes) · EA · GW

Well done on your charitable giving, and thank you for sharing! For me, it's important and inspirational to hear about giving at all levels (and sometimes we hear less about giving at levels that less high-earning people can afford, so this is great).

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-27T12:35:38.863Z · score: 9 (3 votes) · EA · GW

Hi Wei,

Sorry I missed this. My strongest responses over the last while have fallen into a few categories. (1) Responding to people claiming existential (or near-existential) risk potential, or sharing papers by people like Taleb stating that we are entering a phase where this is near-certain, e.g. https://static1.squarespace.com/static/5b68a4e4a2772c2a206180a1/t/5e2efaa2ff2cf27efbe8fc91/1580137123173/Systemic_Risk_of_Pandemic_via_Novel_Path.pdf

(This was shared in one xrisk group, for example, as: "X-riskers, it would appear your time is now: 'With increasing transportation we are close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens.'" My response: "We are **not** 'close to a transition to conditions in which extinction becomes certain both because of rapid spread and because of the selective dominance of increasingly worse pathogens'.")

(2) Responding to speculation that nCoV is a deliberately developed bioweapon, or was accidentally released from a BSL-4 lab in Wuhan. There isn't evidence for either of these; I think they are unhelpful types of speculation to make without evidence, and such speculations can spread widely. Further, some people making the latter speculation didn't seem to be aware of what a common class of virus coronaviruses are (ranging from the common cold through to SARS). Whether or not a coronavirus was being studied at the Wuhan lab, I think it would not be a major coincidence to find a lab studying a coronavirus in a major city.

A third example was clarifying that the Event 201 exercise Johns Hopkins ran (which involved 65 million hypothetical deaths) was a tabletop simulation, not a prediction, and therefore could not be used to extrapolate an expectation of 65 million deaths from the current outbreak.

I made various other comments as part of discussions, but as I recall these were more about providing context or points for discussion than disagreeing per se, and I don't have time to dig them up.

The latter examples don't relate to predictions of the severity of the outbreak, but rather to what I perceived at the time to be misunderstandings, misinformation, and unhelpful/ungrounded speculation.

Comment by sean_o_h on Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI? · 2020-02-25T10:41:23.474Z · score: 14 (6 votes) · EA · GW

On (2), I would note that the 'hype' criticism is commonly made about the claims of a range of individual groups in AI. Criticisms of DeepMind's claims, and of IBM's (the usefulness/impact of IBM Watson in health), come immediately to mind, as do criticisms of claims by a range of groups re: deployment of self-driving cars. It's also a criticism made of the field as a whole (e.g. see various comments by Gary Marcus, Jack Stilgoe, etc.). This does not necessarily mean that it's untrue of OpenAI (or that OpenAI is not one of the 'hypier' groups), but I think it's worth noting that this is not unique to OpenAI.

Comment by sean_o_h on Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI? · 2020-02-21T09:51:39.182Z · score: 25 (14 votes) · EA · GW

A few comments from Xrisk/EA folks that I've seen (which I agree with):

FHI's Markus Anderljung: https://twitter.com/Manderljung/status/1229863911249391618

CSER's Haydn Belfield: https://twitter.com/HaydnBelfield/status/1230119965178630149


To me, AI heavyweight and past president of AAAI (and past critic of OpenAI) Rao Kambhampati put it well: written like, and with the tone of, a hit piece, but without an actual hit (i.e. any revelation that actually justifies it):

https://twitter.com/rao2z/status/1229599668683673600

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-03T19:55:21.002Z · score: 2 (1 votes) · EA · GW

I don't think so to any significant extent in most circumstances, and any tiny spike would be counterbalanced by the general benefits David points to. My understanding (as a former competitive runner) is that extended periods of heavily overdoing it with exercise (overtraining) can lead to an inhibited immune system, among other symptoms, but this is rare for people just keeping generally fit (other than e.g. someone jumping into marathon/triathlon training without building up). Other things to avoid/be mindful of are the usual: hanging around in damp clothes in the cold, hygiene in group sporting/exercise contexts, etc.

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T17:15:30.019Z · score: 5 (3 votes) · EA · GW

Thanks bmg. FWIW, I provide my justification (from my personal perspective) here: https://forum.effectivealtruism.org/posts/g2F5BBfhTNESR5PJJ/concerning-the-recent-wuhan-coronavirus-outbreak?commentId=mWi2L4S4sRZiSehJq

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T11:37:32.193Z · score: 5 (3 votes) · EA · GW

Thanks Khorton, nothing to apologise for. I read your comment as a concern about how the motivations of a bet might be perceived from the outside (whether in the specific case or more generally); but this led me to the conclusion that actually stating my motivations, rather than assuming everyone reading knows them, would be helpful at this stage!

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T10:45:23.204Z · score: 18 (8 votes) · EA · GW

While my read of your post is "there is the possibility that the aim could be interpreted this way", which I regard as fair, I feel I should state explicitly that 'fun and money' was not my aim (and, I strongly expect, not Justin's), as I have not yet done so.

I think it's important to be as well-calibrated as reasonably possible on events of global significance. In particular, I've been seeing a lot of what appear to me to be poorly calibrated, alarmist statements, claims and musings on nCoV on social media, including from EAs, GCR researchers, Harvard epidemiologists, etc. I think these poorly calibrated/examined claims can result in substantial material harms to people, in terms of stoking unnecessary public panic, confusing accurate assessment of the situation, and creating 'boy who cried wolf' effects for future events. I've spent a lot of time on social media trying to get people to tone down their more extreme statements re: nCoV.

(Edit: I do not mean this to refer to Justin's Fermi estimate, which was on the more severe end but had clearly reasoned and transparent thinking behind it; this is more a broad comment on concerns re: poor calibration and the practical value of being well-calibrated.)

As Habryka has said, this community in particular is one that has a set of tools it (or some part of it) uses for calibration, so I drew on them in this case. The payoff for me is small (£50, which I'm planning to give to AMF); the payoff for Justin is higher, but he accepted it as an offer rather than proposing it, so I doubt money is a factor for him either.

In the general sense, I think both the concern about motivation and the concern about how something appears to parts of the community are valid. I would hope that it is still possible to get the benefits of betting on GCR-relevant topics that I articulate above (and the broader benefits Habryka and others have articulated). I would suggest that achieving this balance may be a matter of clearly stating aims and motivations, and (as others have suggested) taking particular care with tone and framing, but I would welcome further guidance.

Lastly, I would like to note my gratitude for the careful and thoughtful analysis and considerations that Khorton, Greg, Habryka, Chi and others are bringing to the topic. There are clearly a range of important considerations to be balanced appropriately, and I'm grateful both for the time taken and the constructive nature of the discussion.

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-01T16:53:52.504Z · score: 5 (4 votes) · EA · GW

Thanks, good to know on both, appreciate the feedback.

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-01T14:32:55.597Z · score: 16 (9 votes) · EA · GW

I would similarly be curious to understand the level of downvoting of my comment offering to remove my comments, made in light of concerns raised and encouragement to consider doing so. This is by far the most downvoted comment I've ever had. It may just be an artefact of how my call for objections to removing my comments has manifested (I was anticipating posts stating an objection, like Ben's and Habryka's, and for those to be upvoted if popular, but people may have simply expressed objection by downvoting the original offer). If so, that's fine.

Another possible explanation is an objection to me even making the offer in the first place. My steelman for this is that even the offer of self-censorship of certain practices in certain situations could be seen as coming at a very heavy cost to group epistemics. From an individual-posting-to-forum perspective, however, this feels like an uncomfortable thing to be punished for. Posting possibly-controversial content to a public forum has some unilateralist's-curse elements to it: risk is distributed to the overall forum, and the person who posts the possibly-controversial thing is likely to be someone who deems the risk lower than others do (a toy simulation of this selection effect is sketched at the end of this comment). And we are not always the best at impartially judging our own actions. So when arguments are made in good faith that an action may result in group harm, it seems like a reasonable step to offer to withdraw the action, and to signal a willingness to cooperate in whatever the group (or the moderators, I guess) deemed to be in the group's interest. And I built in a time delay to allow for objections and more views to be raised before taking action. I would anticipate a more negative response if I were calling for deletion of others' comments, but this was my own comment.

I would also note that offering to delete one's comments comes at a personal cost, as does acknowledging possible fault of judgement; having an avalanche of negative karma on top of it adds to the discomfort.

If there's something else going on - e.g. a sense that I was being dishonest about following through on the offer to delete - it would be good to know. I guess there could also be a negative reaction to my expressing the view that Chi's perspective is valid. In my view, a point can be valid without being action-deciding. Here there are multiple considerations which I would all see as valid (the value of betting to calibrate beliefs; the value of doing so in public to reinforce a norm the group sees as beneficial and promote that norm to others; the value of avoiding making insensitive-seeming posts that could possibly cause reputational damage to the group). The question is one of how to weight these considerations - I have my own views, but it was very helpful to get a broader set of views in order to calibrate my actions.
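The unilateralist's-curse selection effect mentioned above - that the person who acts is likely the one who deems the risk lowest - can be made concrete with a minimal Monte Carlo sketch; the true value of the action, the noise scale, and the number of actors below are illustrative assumptions, not estimates from this discussion:

```python
import random

# Toy illustration of the unilateralist's curse: each of N actors forms a
# noisy estimate of the (shared) value of a risky action, and the action
# happens if at least one actor judges it net-positive. The actor who acts
# is systematically the one with the most optimistic estimate.

random.seed(0)
TRUE_VALUE = -0.5   # assume the action is actually mildly net-negative
NOISE_SD = 1.0      # spread of individual judgement errors
N_ACTORS = 10
TRIALS = 100_000

acted = 0
for _ in range(TRIALS):
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_ACTORS)]
    if max(estimates) > 0:   # someone deems it net-positive and acts
        acted += 1

print(f"Action taken in {acted / TRIALS:.1%} of trials "
      f"despite true value {TRUE_VALUE}")
```

Even though the action is net-negative by assumption, some actor's noisy estimate comes out positive in almost every trial, so the action is almost always taken - and always by the most optimistic estimator.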

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-01T08:47:43.775Z · score: 19 (13 votes) · EA · GW

My take is that, at this stage, this has been resolved in favour of "editing for tone but keeping the bet posts". I have done the editing for tone. I am happy with this outcome, and I hope most others are too.

My own personal view is that public betting on beliefs is good - it's why I did it (both this time and in the past) and my preference is to continue doing so. However, my take is that the discussion highlighted that in certain circumstances around betting (such as predictions on an ongoing mass-fatality event) it is worth being particularly careful about tone.

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-31T08:52:10.135Z · score: 4 (10 votes) · EA · GW

Re: Michael's & Khorton's points: (1) Michael - fully agreed; a casual figure of speech that I've now deleted. I apologise. (2) I've done some further editing for tone, but would be grateful if others had further suggestions.

I also agree re: Chi's comment - I've already remarked that I think the point was valid, but I would add that I found it to be respectful and considerate in how it made its point (as one of the people it was directed towards).

It's been useful for me to reflect on this. I think it comes down to a combination of two things. One is some inherent personal discomfort/concern about causing offence by effectively saying "I think you're wrong and I'm willing to bet you're wrong", which I think I unintentionally counteracted with (possibly excessive) levity. The second is how quickly the disconnect can happen from (initial discussion of a very serious topic) to (checking in on the forum several days later to quickly respond to some math). Both are things I will be more careful about going forward. Lastly, I may have been spending too much time around risk folk, for whom certain discussions become so standard that one forgets how they can come across.

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-31T07:51:00.946Z · score: 9 (14 votes) · EA · GW

I'm happy to remove my comments; I think Chi raises a valid point. The aim was basically calibration. I think this is quite common in EA and forecasting circles, but I agree it could look morbid from the outside, and these comments are publicly searchable. (I've also been upbeat in my tone for friendliness/politeness towards people with different views, but this could be misread as a lack of respect for the gravity of the situation.) Unless this post receives strong objections by this evening, I will delete my comments or ask the moderators to delete them.

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-30T21:24:52.725Z · score: 4 (3 votes) · EA · GW

10:1 on the original (1 order of magnitude) it is.

Comment by sean_o_h on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-30T10:25:30.749Z · score: 22 (8 votes) · EA · GW

Possibility of verbal confusion, as this is how most people vocalise 'CSER' (where EA folk also tend to go in the UK).

(We had a 'Julius' for a while, which was excellent).

Comment by sean_o_h on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-30T09:48:11.669Z · score: 4 (4 votes) · EA · GW

Too good - how could you possibly turn this down!

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-30T09:25:46.667Z · score: 3 (2 votes) · EA · GW

This seems fair. I suggested the bet quite quickly. Without having time to work through the math, I suggested something that felt on the conservative side from the point of view of my beliefs. The more I think about it, (a) the more confident I am in my beliefs, and (b) the more I feel it was not as generous as I originally thought*. I have a personal liking for binary bets rather than proportional payoffs. As a small concession in light of the points raised, I'd be happy to modify the terms retroactively to make them more favourable to Justin, offering either of the following:

(i) Doubling the odds against me to 10:1 (rather than 5:1) on the original claim (at least an order of magnitude lower than his Fermi estimate). So his £50 would get £500 of mine.

OR

(ii) 5:1 on at least ~1.7 orders of magnitude (50x) lower than his Fermi estimate (rather than 10x).

(My intuition is that (ii) is a better deal for Justin than (i), but I haven't worked it through - see the illustrative sketch below.)


(*i.e. as of the time of the bet - the likelihood of this being a severe global pandemic is now diminishing further in my mind.)
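One way to work the comparison through is a toy expected-value calculation from Justin's side of the bet; a minimal sketch, with the two win probabilities below as illustrative placeholders rather than anyone's actual credences:

```python
# Toy comparison of the two amended bet variants, from Justin's side.
# Variant (i):  stake £50 to win £500 if fatalities end up > fermi / 10
# Variant (ii): stake £50 to win £250 if fatalities end up > fermi / 50
# The probabilities are illustrative placeholders, not actual credences.

p_beyond_10x = 0.05   # placeholder P(fatalities > Fermi estimate / 10)
p_beyond_50x = 0.15   # placeholder P(fatalities > Fermi estimate / 50)

ev_i = p_beyond_10x * 500 - (1 - p_beyond_10x) * 50
ev_ii = p_beyond_50x * 250 - (1 - p_beyond_50x) * 50

print(f"Expected value of (i) for Justin:  {ev_i:+.2f} GBP")   # -22.50
print(f"Expected value of (ii) for Justin: {ev_ii:+.2f} GBP")  # -5.00
```

Under these placeholder numbers, (ii) does come out better for Justin, consistent with the intuition above; with different credences the ordering can flip.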

Comment by sean_o_h on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T19:00:45.038Z · score: 6 (4 votes) · EA · GW

I like Rosie's suggestions (inspired by Jonas's).

Comment by sean_o_h on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T18:42:41.315Z · score: 16 (10 votes) · EA · GW

HEAR - Hub for Enabling EA Research. HEALR - Hub for Enabling EA Learning and Research.

Or call it the EARL - EA Research and Learning Centre (the 'centre' bit can often easily be dropped from the acronym).

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T14:03:42.078Z · score: 3 (2 votes) · EA · GW

Re: whose mortality estimates to use, I suggest we use Metaculus's list here (WHO has the highest ranking) as the standard (with the caveat above).

https://www.metaculus.com/questions/3530/how-many-people-will-die-as-a-result-of-the-2019-novel-coronavirus-2019-ncov-before-2021/

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T13:58:46.398Z · score: 4 (3 votes) · EA · GW

MERS was pretty age-agnostic. SARS had much higher mortality rates in the over-60s. All the current reports from China claim that this virus mainly affects older people or those with pre-existing health conditions. Coronaviruses are a broad class including everything from the common cold to MERS; I'm not sure there's good ground to anchor too closely to SARS or MERS as a reference class.

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T13:47:35.314Z · score: 20 (9 votes) · EA · GW

Agreed, thank you Justin. (I also hope I win the bet, and not for the money - while it is good to consider the possibility of the most severe plausible outcomes rigorously and soberly, it would be terrible if it came about in reality.) The bet resolves 28 January 2021 (though if the outcome is within an order of magnitude of the win criterion, and there is uncertainty re: fatalities, I'm happy to reserve final decision for 2 further years until rigorous analysis is done - e.g. see the swine flu epidemiology studies, which updated fatalities upwards significantly several years after the outbreak).

To anyone else reading: I'm happy to provide up to a £250 stake against up to £50 of yours, if you want to take the same side as Justin.

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T13:16:34.026Z · score: 14 (6 votes) · EA · GW

Though it's interesting to note that Justin's Fermi estimate is not far off how one of the Johns Hopkins CHS scenarios played out (coronavirus, animal origin, 65M deaths worldwide).

http://www.centerforhealthsecurity.org/event201/scenario.html

Note: this was NOT a prediction (and had some key differences, including higher mortality associated with their hypothetical virus, and significant international containment failure beyond that seen to date with nCoV):

http://www.centerforhealthsecurity.org/newsroom/center-news/2020-01-24-Statement-of-Clarification-Event201.html

Comment by sean_o_h on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T12:32:38.177Z · score: 44 (16 votes) · EA · GW

Hmm, interesting. This goes strongly against my intuitions. In case of interest, I'd be happy to give you 5:1 odds that this Fermi estimate is at least an order of magnitude too severe (for a small stake of up to £500 on my end, £100 on yours). Resolved in your favour if 1 year from now the fatalities are >1/670 of world population (roughly 11.6M at the current population); in my favour if <1/670.

(Happy to discuss/modify/clarify the terms above.)


Edit: We have since amended the terms to 10:1 (£50 of Justin's to £500 of mine).
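For concreteness, the resolution threshold works out as follows (a back-of-envelope sketch; the world population figure is an approximation, as the exact figure used for the bet isn't stated):

```python
# Back-of-envelope check of the bet's resolution threshold.
world_pop = 7.8e9            # approx. world population, early 2020 (assumption)
threshold = world_pop / 670  # the agreed 1-in-670 fatality fraction
print(f"Resolution threshold: {threshold / 1e6:.1f} million fatalities")
# -> roughly 11.6 million, matching the figure quoted in the bet
```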

Comment by sean_o_h on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T12:27:10.641Z · score: 5 (4 votes) · EA · GW

Thanks Vaidehi, these are very good points.

I agree that SJ is more diffuse and less central - I think this is one of the reasons that thinking of it in terms of a movement one might ally with is a little unnatural to me. I also agree that EA is more centralised and purposeful.

Your point about what level of discourse suggests what kind of engagement is also a good one. I think this also links to the issue that (in my view) it's in the nature of EA that there's a 'thick' and a 'thin' version in terms of the people involved. Here 'thick' means a movement of people who self-identify as EA, see themselves as part of a strong social and intellectual community, and are influenced by movement leaders and shapers.

Then there's a 'thin' version that includes people who might do one or more of the following: (a) work in EA-endorsed cause areas with EA-compatible approaches; (b) find EA frameworks and literature useful to draw on (among other frameworks); (c) be generally supportive of, or friendly towards, some or most of the goals of EA - without necessarily making EA a core part of their identity or seeing themselves as part of a movement. With so many people who interact with EA working primarily in cause areas rather than in 'central movement' EA per se, my sense is that this 'thin' EA or EA-adjacent set of people is reasonably large.

It might make perfect sense for 'thick EA' leaders to think of EA vs SJ in terms of movements, alliances, and competition for talent, while at the same time this might be a less intuitive and more uncomfortable way for 'thin EA' folk to see the interaction being described and playing out. While I don't have answers, I think it's worth being mindful that there may be some tension there.


Comment by sean_o_h on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T10:24:28.548Z · score: 20 (11 votes) · EA · GW

I've been trying to figure out what I find a little uncomfortable about 1-3, as someone who also has links to both communities. I think it's that I personally find it more productive to think about both as frameworks/bodies of work plus associated communities, rather than as movements, whereas here it feels like they are being described as tribes (one is presented as overall better than the other; they are presented as competing for talent; there should be alliances). I acknowledge, however, that in both EA and SJ there are definitely many who see these more in the movement/tribe sense.

Through my framing, I find it easier to imagine the kinds of constructive engagements I would personally like to see - e.g. people primarily thinking through lens A adopting valuable insights and methodologies from lens B (captured nicely in your point 4). But I think this comes back to the oft-debated question (in both EA and SJ) of whether EA/SJ is (a) a movement/tribe or (b) a set of ideas/frameworks/body of knowledge. I apologise if I'm misrepresenting any views, or presenting distinctions overly strongly; I'm trying to put my finger on what might be a somewhat subtle distinction, but one which I think is important in terms of how engagement happens.

On the whole I agree with the message that engaging constructively, embracing the most valuable and relevant insights, and creating a larger, more inclusive community is very desirable.



Comment by sean_o_h on What should EAs interested in climate change do? · 2020-01-10T19:51:39.770Z · score: 12 (7 votes) · EA · GW

It can be difficult to figure out where the biggest marginal benefit will be, or even how to fully grok the landscape, when there's already quite a lot happening in different domains. A few of us at CSER have been thinking of organising a workshop or hackathon bringing together climate researchers (science, policy, related tech) and leading EA thinkers, to explore in more detail where the EA skillset - and interested individual EAs with a range of backgrounds - might best fit in and contribute most effectively. I'd be interested in sounding out how much interest/value people would see in this.

Comment by sean_o_h on Response to recent criticisms of EA "longtermist" thinking · 2020-01-06T21:07:29.573Z · score: 23 (14 votes) · EA · GW

I've spent quite a bit of time trying to discuss the matter privately with the main author of the white supremacy critique, as I felt the claim was very unfair in a variety of ways and know the writer personally. I do not believe I have succeeded in bringing them round. I think it's likely that there will be a journal article making this case at some point in the coming year.

At that point a decision will need to be made by thinkers in the longtermist community re: whether it is appropriate to respond or not. (It won't be me; I don't see myself as someone who 'speaks' for EA or longtermism - rather, someone whose work fits within a broad longtermist/EA frame.)

What makes this a little complicated, in my view, is that there are (significantly) weaker versions of these critiques - e.g. relating to the diversity, inclusiveness, founder effects, and some of the strategies within EA - that are more defensible (although I think EA has made good strides in relation to most of them), and these may get tangled up with the more extreme claim among those who consider the weaker critiques valid.

While I am unsure about the appropriate strategy for the extreme claim, if and when it is publicly presented, it seems good to me to steelman and engage with the less unreasonable claims.

Comment by sean_o_h on 2019 AI Alignment Literature Review and Charity Comparison · 2020-01-02T14:09:29.779Z · score: 8 (4 votes) · EA · GW

Worth noting the changes that are apparently going to be made to the UK civil service, likely by Cummings' design, which seem quite compatible with a lot of rationalist thinking:

  • More scientists in the civil service
  • Data science, systems thinking, and superforecasting training prioritised.

https://twitter.com/JohnRentoul/status/1212486981713899520


Comment by sean_o_h on 'Longtermism' · 2019-12-11T14:13:59.491Z · score: 6 (2 votes) · EA · GW

A quick note: from googling 'longtermism', the hyphenated version ('long-termism') is already in use, particularly in finance/investment contexts, but in a way that is in my view broadly compatible with the use here, so I personally think it's fine (Will's version being much broader/richer/more philosophical in its scope, obviously).

long-termism in British English

NOUN

the tendency to focus attention on long-term gains

https://www.collinsdictionary.com/dictionary/english/long-termism

Examples:

https://www.americanprogress.org/issues/economy/reports/2018/10/02/458891/corporate-long-termism-transparency-public-interest/

https://www.cisl.cam.ac.uk/business-action/sustainable-finance/investment-leaders-group/promoting-long-termism

https://www.institutionalinvestor.com/article/b14z9mxp09dnn5/long-termism-versus-short-termism-time-for-the-pendulum-to-shift

Comment by sean_o_h on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-22T17:58:34.992Z · score: 13 (6 votes) · EA · GW

I really enjoy the extent to which you've both taken the ball and run with it ;)

Comment by sean_o_h on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-21T18:35:04.255Z · score: 12 (6 votes) · EA · GW

+2 helpful and thoughtful answers; really appreciate the time put in.

Comment by sean_o_h on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-21T10:14:09.484Z · score: 30 (16 votes) · EA · GW

I agree this is a very helpful comment. I would add: these roles, in my view, are not *lesser* in any sense, for a range of reasons, and I would encourage people not to think of them in those terms.

  • You might have a bigger impact at the margin by being the only person - or one of the first few people - thinking in EA terms in a philanthropic foundation than by adding to the pool of excellence at OpenPhil. This goes for any role that involves influencing how resources are allocated - which covers a LOT of roles in charity, government, industry, academic foundations, etc.
  • You may not be in the presidential cabinet, or a special adviser to the UK prime minister, but those people are supported and enabled by people building up the resources, capacity, and Overton-window expansion elsewhere in government and the civil service. The 'senior person' on their own may not be able to gain purchase for key policy ideas and influence.
  • A lot of xrisk research, from biosecurity to climate change, draws on and depends on a huge body of work on biology, public policy, climate science, renewable energy, insulation in homes, and much more. Often there are gaps in research on extreme scenarios, due to a lack of incentives for this kind of work among other reasons - and this may make such research particularly impactful at times. But that specific work can't be done well without drawing on all the underlying work. E.g., biorisk mitigation needs not just the people figuring out how to defend against the extreme scenarios, but everyone from the people testing birds in Vietnam for H5N1 and seals in the North Sea for H7, to the people planning for overflow capacity in regional hospitals, to the people pushing for the value of preparedness funds in the reinsurance industry, and much more. The same goes for climate and environment, and the same will be true for AI policy, etc.
  • I think there's probably a good case to be made that in many, perhaps most, instances the most useful place for the next generally capable EA to be is *not* an EA org. And for all 80k's great work, they can't survey and review everything, nor tailor advice to personal fit for the thousands, or hundreds of thousands, of different-skillset people who can play a role in making the future better.

For EA to really make the future better to the extent that it has the potential to, it's going to need a *much* bigger global team. And that team is going to need to be interspersed everywhere - sometimes doing glamorous stuff, sometimes doing more standard stuff that is just as important in that it makes the glamorous stuff possible. To annoy everyone with a sports analogy: the defence and midfield positions are every bit as important as the glamorous striker positions, and if you've got a team made up primarily of star strikers and wannabe star strikers, that team's going to underperform.


Comment by sean_o_h on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T14:08:12.419Z · score: 4 (3 votes) · EA · GW

Thanks, I hadn't seen this.

Comment by sean_o_h on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-18T10:13:30.371Z · score: 28 (12 votes) · EA · GW

For more on this divide/these points of disagreement, see Will MacAskill's essay on the Alignment Forum (with responses from MIRI researchers and others):

https://www.alignmentforum.org/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory

and, previously, Wolfgang Schwarz's review of Functional Decision Theory:

https://www.umsu.de/wo/2018/688

(with some LessWrong discussion here: https://www.lesswrong.com/posts/BtN6My9bSvYrNw48h/open-thread-january-2019#WocbPJvTmZcA2sKR6)


I'd also be interested in Buck's perspectives on this topic.

Comment by sean_o_h on 21 Recent Publications on Existential Risk (Sep 2019 update) · 2019-11-06T11:08:54.936Z · score: 8 (4 votes) · EA · GW

Speaking as one of the people associated with the project: I'd read or skimmed 'upper bound' (Snyder-Beattie), 'vulnerable world' (Bostrom), and 'philosophical analysis' (Torres), and had been aware of 'world destruction argument' (Knutsson).

Comment by sean_o_h on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-11T07:36:42.349Z · score: 11 (5 votes) · EA · GW

Thanks Howie - a quick note, though, that this was an individual take by me rather than necessarily capturing the views of the whole group; different people within the group will have work they feel is more impactful and important.

Updates on a few of the more forward-looking items mentioned in that comment.


Comment by sean_o_h on EA Organization Updates: July 2019 · 2019-08-08T10:05:13.568Z · score: 6 (6 votes) · EA · GW

[Disclaimer: I am co-director of CSER.] Another quick organisational update from CSER is that we are also recruiting for a number of positions attached to our major research strands: AI safety; global population, sustainability and the environment; and responsible innovation and extreme technological risk. We'd be tremendously grateful for any help in spreading the word. Application deadline: August 26th. All details here:

https://www.cser.ac.uk/about-us/careers/

Comment by sean_o_h on Information security careers for GCR reduction · 2019-06-21T09:13:05.334Z · score: 11 (5 votes) · EA · GW

Agree. Great work everyone who contributed.