Posts

Longtermism ⋂ Twitter 2020-06-15T14:19:37.044Z
RyanCarey's Shortform 2020-01-27T22:18:23.751Z
Worldwide decline of the entomofauna: A review of its drivers 2019-07-04T19:06:17.041Z
SHOW: A framework for shaping your talent for direct work 2019-03-12T17:16:44.885Z
AI alignment prize winners and next round [link] 2018-01-20T12:07:16.024Z
The Threat of Nuclear Terrorism MOOC [link] 2017-10-19T12:31:12.737Z
Informatica: Special Issue on Superintelligence 2017-05-03T05:05:55.750Z
Tell us how to improve the forum 2017-01-03T06:25:32.114Z
Improving long-run civilisational robustness 2016-05-10T11:14:47.777Z
EA Open Thread: October 2015-10-10T19:27:04.119Z
September Open Thread 2015-09-13T14:22:20.627Z
Reducing Catastrophic Risks: A Practical Introduction 2015-09-09T22:33:03.230Z
Superforecasters [link] 2015-08-20T18:38:27.846Z
The long-term significance of reducing global catastrophic risks [link] 2015-08-13T22:38:23.903Z
A response to Matthews on AI Risk 2015-08-11T12:58:38.930Z
August Open Thread: EA Global! 2015-08-01T15:42:07.625Z
July Open Thread 2015-07-02T13:41:52.991Z
[Discussion] Are academic papers a terrible discussion forum for effective altruists? 2015-06-05T23:30:32.785Z
Upcoming AMA with new MIRI Executive Director, Nate Soares: June 11th 3pm PT 2015-06-02T15:05:56.021Z
June Open Thread 2015-06-01T12:04:00.027Z
Introducing Alison, our new forum moderator 2015-05-28T16:09:26.349Z
Three new offsite posts 2015-05-18T22:26:18.674Z
May Open Thread 2015-05-01T09:53:47.278Z
Effective Altruism Handbook - Now Online 2015-04-23T14:23:28.013Z
One week left for CSER researcher applications 2015-04-17T00:40:39.961Z
How Much is Enough [LINK] 2015-04-09T18:51:48.656Z
April Open Thread 2015-04-01T22:42:48.295Z
Marcus Davis will help with moderation until early May 2015-03-25T19:12:11.614Z
Rationality: From AI to Zombies was released today! 2015-03-15T01:52:54.157Z
GiveWell Updates 2015-03-11T22:43:30.967Z
Upcoming AMA: Seb Farquhar and Owen Cotton-Barratt from the Global Priorities Project: 17th March 8pm GMT 2015-03-10T21:25:39.329Z
A call for ideas - EA Ventures 2015-03-01T14:50:59.154Z
Seth Baum AMA next Tuesday on the EA Forum 2015-02-23T12:37:51.817Z
February Open Thread 2015-02-16T17:42:35.208Z
The AI Revolution [Link] 2015-02-03T19:39:58.616Z
February Meetups Thread 2015-02-03T17:57:04.323Z
January Open Thread 2015-01-19T18:12:55.433Z
[link] Importance Motivation: a double-edged sword 2015-01-11T21:01:10.451Z
I am Samwise [link] 2015-01-08T17:44:37.793Z
The Outside Critics of Effective Altruism 2015-01-05T18:37:48.862Z
January Meetups Thread 2015-01-05T16:08:38.455Z
CFAR's annual update [link] 2014-12-26T14:05:55.599Z
MIRI posts its technical research agenda [link] 2014-12-24T00:27:30.639Z
Upcoming Christmas Meetups (Upcoming Meetups 7) 2014-12-22T13:21:17.388Z
Christmas 2014 Open Thread (Open Thread 7) 2014-12-15T16:31:35.803Z
Upcoming Meetups 6 2014-12-08T17:29:00.830Z
Open Thread 6 2014-12-01T21:58:29.063Z
Upcoming Meetups 5 2014-11-24T21:02:07.631Z
Open thread 5 2014-11-17T15:57:12.988Z
Upcoming Meetups 4 2014-11-10T13:54:39.551Z

Comments

Comment by ryancarey on Promoting EA to billionaires? · 2021-01-24T14:11:54.815Z · EA · GW

There are only a limited number of billionaires, so such an org could be very effective or very detrimental, depending on whether it successfully attracts billionaires without permanently putting off others. At least three orgs are working near this space: Longview Philanthropy, Generation Pledge, and Founders Pledge.

Comment by ryancarey on Open and Welcome Thread: January 2021 · 2021-01-21T08:03:48.952Z · EA · GW

Hi! Yes, I work on AI safety, but like many others here I like to follow Dave Wasserman etc. Michael Sadowsky is one person who works with political data full-time. Whether you want to work on AI safety, political data, or just earning, studying CS or statistics is an ideal starting point. I would suggest picking AI-relevant classes at a good school, and maybe trying some research; that should set you up well whatever path you end up pursuing.

Comment by ryancarey on MichaelA's Shortform · 2021-01-14T04:46:50.897Z · EA · GW

https://thebulletin.org/biography/andrew-snyder-beattie/
https://thebulletin.org/biography/gregory-lewis/
https://thebulletin.org/biography/max-tegmark/

Comment by ryancarey on RyanCarey's Shortform · 2021-01-09T17:44:36.204Z · EA · GW

Another relevant one in the US Dept of State.

Comment by ryancarey on RyanCarey's Shortform · 2021-01-09T17:43:32.884Z · EA · GW

There's a new center in the Department of State, dedicated to the diplomacy surrounding new and emerging tech. This seems like a great place for Americans to go and work if they're interested in arms control relating to AI and emerging technology.

Confusingly, it's called the "Bureau of Cyberspace Security and Emerging Technologies (CSET)". So we now have to distinguish the State CSET from the Georgetown one - the "Center for Security and Emerging Technology".

Comment by ryancarey on EA Forum feature suggestion thread · 2021-01-08T23:06:21.357Z · EA · GW

Across the internet as a whole. I agree that a lot of discourse happens on Facebook, some of it within groups. But in terms of serious, public conversation, I think a lot of it was initially on newsgroups/mailing lists, then blogs, and now blogs (linked from Twitter) and podcasts.

Comment by ryancarey on EA Forum feature suggestion thread · 2021-01-08T20:21:41.229Z · EA · GW

I worry a bit that all the suggestions are about details, whereas the macro trend is that public discourse is moving toward Twitter, and blog content linked from Twitter. One thing that could help attract a new audience would be to revive the EA Forum Twitter account, whether automatically or manually.

Comment by ryancarey on RyanCarey's Shortform · 2020-12-30T14:48:13.227Z · EA · GW

This framing is not quite right, because it implies that there's a clean division of labour between thinkers and doers. A better claim would be: "we have a bunch of thinkers, now we need a bunch of thinker-doers".

Comment by ryancarey on Careers Questions Open Thread · 2020-12-28T06:41:47.680Z · EA · GW

I'm currently doing a statistics PhD while researching AI safety, after a bioinformatics MSc and a medical undergraduate degree. I agree with some parts of this, but would contest others.

I agree that:

  • What you do within a major can matter more than which major you choose
  • It's easier to move from math and physics to CS.

But it's still easier to move from CS to CS than from physics or pure math. And CS is where a decent majority of AI safety work is done. The second-most prevalent subject is statistics, since it contains statistical learning (aka machine learning) and causal inference, although these research areas are performed just as much in CS departments. So if impact were the only concern, starting with CS would still be my advice, followed by statistics.

Comment by ryancarey on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-12-17T14:13:02.480Z · EA · GW

Which annual filings? Presumably the investment went to the for-profit component.

Comment by ryancarey on Books / book reviews on nuclear risk, WMDs, great power war? · 2020-12-15T01:53:00.609Z · EA · GW

I liked Command and Control, The Doomsday Machine, and The Dead Hand, but didn't get many interesting ideas from The Making of the Atomic Bomb.

Only some parts are relevant to nuclear risk, but Spy Schools by Daniel Golden taught me some interesting stuff about science and espionage. 

Comment by ryancarey on RyanCarey's Shortform · 2020-12-14T01:41:07.975Z · EA · GW

Translating EA into Republican. There are dozens of EAs in US party politics, Vox, the Obama admin, Google, and Facebook. Hardly any in the Republican party, at the WSJ, among Trump appointees, or at Palantir. There are a dozen community groups in places like NYC, SF, Seattle, Berkeley, Stanford, Harvard, and Yale, but none in Dallas, Phoenix, or Miami, nor at the US Naval Research Laboratory or the West Point military academy - the libertarian-leaning GMU economics department being a sole possible exception.

This is despite the fact that people passing through military academies would be disproportionately likely to work on technological dangers in the military and public service, while admission is less competitive than at more liberal colleges.

I'm coming to the view that similarly to the serious effort to rework EA ideas to align with Chinese politics and culture, we need to translate EA into Republican, and that this should be a multi-year, multi-person project.

Comment by ryancarey on Careers Questions Open Thread · 2020-12-11T20:37:24.837Z · EA · GW

Hey Anon,

I was in a similar situation to this with job offers from MIRI (research assistant) and a top quant trading firm (trading intern, with likely transition to full-time), four years ago.

I ended up taking the RA job, and not the internship. A few years later, I'm now a researcher at FHI, concurrently studying a stats PhD at Oxford.

I'm happy with what I decided, and I'd generally recommend people do the same, basically because I think there are enough multi-millionaire EAs that talent is at a large premium relative to donations. Compared to you, I had a better background for trading relative to academic AI - I played poker and gambled successfully on political markets, but my education was in medicine and bioinformatics. So I think for someone like you, the case for a PhD is stronger than it was for me.

That said, I do think it depends a lot on personal factors - how deeply interested in AI (safety) are you? How highly-ranked exactly are the quant firm, and the PhD where you end up getting an offer? And so on...

I'd be happy to provide more detailed public or private comments.

Comment by ryancarey on Make a $10 donation into $35 · 2020-12-11T18:12:43.554Z · EA · GW

Done - just sent $50 to the Future of Life Institute.

Comment by ryancarey on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T15:57:24.910Z · EA · GW

If you had $1B, and you weren't allowed to give it to other grantmakers or fund prioritisation research, where might you allocate it?

Comment by RyanCarey on [deleted post] 2020-12-02T23:15:59.666Z

This is a good idea, but it's also a recurring one in EA: see here, here, here, and here.

Comment by ryancarey on How can I bet on short timelines? · 2020-11-07T13:56:15.297Z · EA · GW

If AGI happens soon, there's a decent chance it happens at an existing industry leader. 

So one naive answer would be to buy Google (owner of DeepMind, which may account for a significant fraction of the company's value). Maybe also Microsoft: it does AI research, accepts US government contracts, and has interacted with OpenAI, including buying some rights to GPT.

Comment by ryancarey on RyanCarey's Shortform · 2020-11-02T20:45:32.710Z · EA · GW

The Emergent Ventures Prize is an example of a prize scheme that seems good to me: giving $100k prizes to great blogs, wherever on the internet they're located.

Comment by ryancarey on Getting money out of politics and into charity · 2020-11-02T12:56:05.623Z · EA · GW

Braver Angels: a vaguely aligned group - one may want to try to speak at its group meetings, or pilot programs with them.

Comment by ryancarey on When you shouldn't use EA jargon and how to avoid it · 2020-10-27T14:33:36.712Z · EA · GW

"funging against" -> "cannibalising"

Comment by ryancarey on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T10:36:14.345Z · EA · GW

Interesting that one of the two main hypotheses advanced in that paper is that media influences public opinion - except the medium in question is not the internet, but TV!

The rise of 24-hour partisan cable news provides another potential explanation. Partisan cable networks emerged during the period we study and arguably played a much larger role in the US than elsewhere, though this may be in part a consequence rather than a cause of growing affective polarization.9 Older demographic groups also consume more partisan cable news and have polarized more quickly than younger demographic groups in the US (Boxell et al. 2017; Martin and Yurukoglu 2017). Interestingly, the five countries with a negative linear slope for affective polarization all devote more public funds per capita to public service broadcast media than three of the countries with a positive slope (Benson and Powers 2011, Table 1; see also Benson et al. 2017). A role for partisan cable news is also consistent with visual evidence (see Figure 1) of an acceleration of the growth in affective polarization in the US following the mid-1990s, which saw the launch of Fox News and MSNBC.

(The other hypothesis is "party sorting", wherein people move to parties that align more in ideology and social identity.)

Perhaps campaigning for more money for PBS, or somehow countering Fox and MSNBC, could be really important for US democracy.

Also, if TV has been so influential, that suggests that even if online media isn't yet influential at the population scale, it may be influential for smaller groups of people, and that it will be extremely influential in the future.

Comment by ryancarey on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-17T10:03:06.801Z · EA · GW
[Politicisation] will reduce EA's long-term impact: I have to confess I've never really understood this argument. I can think of numerous examples of social movements that have been both highly politicized and tremendously impactful.

Right, but none that have done so without risking a big fight. The status quo is that EA consists of a few thousand people, often trying to enter important technocratic roles and achieve change without provoking big political fights (and being many-fold more efficient by doing so). The problem is that political EA efforts can inflict effectiveness penalties on other EA efforts. If EA is associated with a side (e.g. if "caring about the long-term" comes to be seen as a partisan issue), then other EA efforts may become associated with that side: long-term security legislation gets drawn into large battles, diminishing the effectiveness of technocratic efforts many-fold.

Basically, by bringing EA into politics, you're taking a few people who normally use scalpels and arming them for a large-scale machine-gun fight. The risk is not just losing a particular fight, but inflaming a multi-front war.

There are a bunch of ways to mitigate the effectiveness penalties that one inflicts on others. The costs are lower if political efforts are taken individually, so that they're not seen as representative of EA, and if they come from less prominent people, e.g. if Will and Toby stay out of the fray. It's also less costly if it's symmetric between parties. For example, the cost of affiliating with a Rubio at this point might be less than the cost of affiliating with a Buttigieg, or could even be net positive.

Comment by ryancarey on Getting money out of politics and into charity · 2020-10-13T20:09:55.867Z · EA · GW

Basically funding connected to this.

Comment by ryancarey on RyanCarey's Shortform · 2020-10-11T09:59:08.089Z · EA · GW

Affector & Effector Roles as Task Y?

Longtermist EA seems relatively strong at thinking about how to do good, and at raising funds for doing so, but relatively weak in affector organs that tell us what's going on in the world, and effector organs that influence the world. Three examples of ways that EAs can actually influence behaviour are:

- working in & advising US nat sec

- working in UK & EU governments, in regulation

- working in & advising AI companies

But I expect this is not enough, and that our (a/e)ffector organs are bottlenecking our impact. To be clear, it's not that these roles aren't mentally stimulating - they are. It's just that their impact lies primarily in implementing ideas and uncovering practical considerations, rather than in an ivory tower's pure, deep thinking.

The world is quickly becoming polarised between the US and China, and this means that certain (a/e)ffector organs may be even more neglected than others. We may want to promote: i) work as a diplomat, ii) work at diplomat-adjacent think tanks, such as the Asia Society, iii) work at relevant UN bodies relating to disarmament and bioweapon control, and iv) work at UN-adjacent bodies that apply pressure for disarmament etc. These roles often reside in large entities that can absorb hundreds or thousands of new staff at a wide range of skill levels, so perhaps many people who are currently "earning to give" should move into these "affector" or "effector" roles (as well as those mentioned above, in other relevant parts of national governments). I'm also curious whether 80,000 Hours has considered diplomatic roles - I couldn't find much on a cursory search.

Comment by ryancarey on Getting money out of politics and into charity · 2020-10-07T12:03:37.965Z · EA · GW

fixed

Comment by ryancarey on Getting money out of politics and into charity · 2020-10-06T11:50:30.333Z · EA · GW

Consequentialists and EAs have certainly been interested in these questions. We were discussing the idea back in 2009. Toby Ord has written a relevant paper.

I'm not donating to politics, so wouldn't use it. I would say that if an election costs ~$10B and you might move 0.1% of that into charities for a cost of $0.25M, that seems like a good deal: that's ~$10M moved to charity, a ~40x return on costs. The obvious criticism, I think, is: "couldn't they benefit more from keeping the money?" I think this is surmountable, because donating it may be psychologically preferable. Another reservation would be "You should figure out what happened with Repledge before trying to repeat it", which I think is basically something you should do.

I guess the funding you initially need is probably significantly less than $250k, so it might make sense to apply for the February deadline of the EA Infrastructure Fund. If you're trying to do things before November (which seems difficult), then you might apply "off-cycle". There's also a range of other funders of varying degrees of plausibility, such as Open Phil (mostly for funding amounts >$100k), the funders behind progress studies (maybe the Collisons), the Survival and Flourishing Fund, the Long-Term Future Fund, etc.

Re choice of charities: we do think that charities vary in effectiveness by many orders of magnitude, so probably it does make sense to be selective. In particular, most people who've studied the question think that charities focused on long-term impact can be orders of magnitude more effective than those that aren't. So a lot of EAs (including me) work on catastrophic threats. This would be a good idea if you believe Haidt's ideas about common threats making common ground, which I think is nice - see also his Asteroids Club. This could support choices like the Nuclear Threat Initiative and Johns Hopkins' Center for Health Security, discussed here. To the extent that you were funding such charities, I think the case for effectiveness (and the case for EA funding) would be stronger.

The ideal choice of charities could also depend on other design choices: 1) do you want to allow trades other than $1:$1? 2) do you allow people to offer a trade specific to one particular charity?

On (1), one argument in favour is that if one party has a larger funding base than the other, then a $1:$1 trade might favour them. Another is that flexible ratios naturally balance out the problem of charities being preferred by one side more than the other. An argument against is that people might view 1:1 as fairer, and donate more.

On (2), arguments in favour are that diversity can better satisfy people's preferences, and that you might overfund certain charities if you just choose one. The argument against is that people really hate choosing between charities.

Overall, for (1) I'd guess "no". For (2), I'd guess "no" again, although I think it could be great to have a system where the charity rotates each week - it could help with promoting the app as well! But these are of course no more than guesses.
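To make design choice (1) concrete, here's a toy sketch of the matching mechanism being discussed (my own formalization for illustration - the function and its `ratio` parameter are hypothetical, not taken from the proposal):

```python
# Toy model of a donation-matching scheme: opposing political pledges
# are matched (at some ratio) and the matched money goes to charity.
def match_pledges(side_a: float, side_b: float, ratio: float = 1.0):
    """Match $ratio from side A against every $1 from side B.

    Returns (to_charity, a_unmatched, b_unmatched)."""
    matched_b = min(side_b, side_a / ratio)  # B-dollars that find a match
    matched_a = matched_b * ratio            # A-dollars used to match them
    to_charity = matched_a + matched_b
    return to_charity, side_a - matched_a, side_b - matched_b

# $6M pledged against $4M at 1:1: $8M goes to charity, and the
# unmatched $2M goes on to side A's campaign as normal.
print(match_pledges(6e6, 4e6))  # (8000000.0, 2000000.0, 0.0)
```

Whether donors would accept a non-1:1 ratio as fair is exactly the trade-off raised in (1).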

Anyway, those are all details - it seems like an exciting project!

Comment by ryancarey on Feedback Request on EA Philippines' Career Advice Research for Technical AI Safety · 2020-10-04T17:38:21.059Z · EA · GW
I noticed though that your answers for #5, 7, and 8 were for the questions for the expert interviews I planned on doing, and not on questions 5-7 in the "Questions we'd like feedback on". You basically were able to answer #5 already there, so I'd just like your thoughts on #6 and #7 (on AI policy work and what questions we should ask people at local firms).

Ah, oops!

6. I'm not sure AI policy is that important in the Philippines, given that not much AI research is happening there compared to the US/UK.

7. Relevance to AI safety is a bit tricky to gauge, and doesn't always matter that much for career capital. It might be better to just ask: do I get to do research activities, and does the team publish research papers?

On A, yeah, it could make sense to push for nuclear power, or to become a local biosecurity expert. To be clear, the US-China peace issue is not my area of expertise, just something that might be interesting to look into. I'm not thinking of something as simple as fighting for certain waters to be owned by China or the Philippines, but more of finding ways to increase understanding and reinforce peace. Roughly: (improved trade/aid/treaties) -> (decreased tensions between China and ASEAN) -> (reduced chance of US-China war) -> (reduced risk of technology arms races between US and China) -> reduced existential risk. So maybe people in the Philippines can build links of trade, aid, treaties, etc. between China/US and neutral countries. These things are probably done by foreign policy experts, diplomats, and politicians, in places including embassies, the department of foreign affairs, national security organisations, think tanks, and universities.

Comment by ryancarey on Feedback Request on EA Philippines' Career Advice Research for Technical AI Safety · 2020-10-03T18:03:07.105Z · EA · GW

I had a quick look over. I basically agree with the article. Here are some responses to some of your feedback questions:

2. Might be good to clarify that if you start a degree in US/UK, it makes it easier to get a work visa and job afterwards

3. You could argue that there are little bits in Switzerland, the Czech Republic, and Israel - not so much in Australia anymore - but the US, UK, and Canada are the main ones.

4. Yes, it's possible. But generally you want to have some collaborators and/or be a professor. For the latter, you'd want to get a degree from a top-30 university worldwide and then pursue a professorship back home, so it wouldn't necessarily be easy.

And likewise for some of the expert interview questions:

5. You could check out Ajeya's report for some work on plausible timelines

7. Maybe, but it's hard. Either you'd need to find a startup that offers remote software work, or get a long-term job at a university

8. Same as for non-Filipino undergrads. Aim for papers and strong references.


Also, here are two other big picture elements of feedback:

A. A bigger picture question is: how can Filipinos best help to reduce existential risk? Often, the answer will be the same as if they were non-Filipinos - AI safety, biosecurity, or whatever. But one idea is that EA Filipinos could help with building US-China peace. The Philippines is close to China, and in major territorial disputes over the South China Sea. It's in ASEAN, which is big, close to China and somewhat neutral. So maybe it's useful to work for the department of foreign affairs or military, and try to reduce the chances of global conflict emerging from the South China Sea, or help to ensure that countries in ASEAN trade with both China and the US.

B. A lot of considerations for Filipino EAs interested in AI safety will be similar for a lot of EAs who aren't in Anglosphere or EU countries. But only a small fraction of these people are in the Philippines (~1%). So maybe for articles like this, it would be better to write for that larger audience.

Comment by ryancarey on RyanCarey's Shortform · 2020-09-30T08:28:20.068Z · EA · GW

EAs have reason to favour Top-5 postdocs over Top-100 tenure?

Related to Hacking Academia.

A bunch of people face a choice between being a postdoc at one of the top 5 universities, and being a professor at one of the top 100 universities. For the purpose of this post, let's set aside the possibilities of working in industry, grantmaking and nonprofits. Some of the relative strengths (+) of the top-5 postdoc route are accentuated for EAs, while some of the weaknesses (-) are attenuated:

+greater access to elite talent (extra-important for EAs)

+larger university-based EA communities, many of which are at top-5 universities

-less secure research funding (less of an issue in longtermist research)

-less career security (less important for high levels of altruism)

-can't be the sole supervisor of a PhD student (less important if one works with a full professor who can supervise, e.g. at Berkeley or Oxford).

-harder to set up a centre (this one does seem bad for EAs, and hard to escape)

There are also considerations relating to EAs' ability to secure tenure. Sometimes, this is decreased a bit due to the research running against prevailing trends.

Overall, I think that some EAs should still pursue professorships, especially to set up research centres, or to establish a presence in an influential location but that we will want more postdocs than is usual.

Comment by ryancarey on Estimation of probabilities to get tenure track in academia: baseline and publications during the PhD. · 2020-09-20T23:13:10.994Z · EA · GW

Interesting. The point 2 article by van Dijk seems decent. Figure 1B says that the impact factor of journals, volume of publications, and cites/h-index are all fairly predictive. University rank gets some independent weighting (among 38 features, as shown in their supplementary Table S1), but not much.

Looks like although the web version has gone offline, the source code of their model is still online!

Comment by ryancarey on Estimation of probabilities to get tenure track in academia: baseline and publications during the PhD. · 2020-09-20T20:58:54.056Z · EA · GW

Hey Pablo - thanks for working this up. It's nice to have some baseline estimates!

As you say, Tregellas et al. show that the probability of tenure varies a lot with the number of first-author publications. It would be interesting to know whether tenure can be predicted better with other factors like one's institution or h-index - I could imagine such a model performing much better than the baseline.

Two other queries:

  • I feel like we're talking about tenure, rather than tenure track?
  • When you say things like "my personal estimate of the baseline probability of getting a permanent (tenured) position in academia should be with 90% probability between 10-30%", it might be clearer to say you're 90% sure that 10-30% of students get tenure? Otherwise I don't know how to interpret this probability of a probability.

Comment by ryancarey on EA Relationship Status · 2020-09-19T23:07:26.759Z · EA · GW

I agree that you should look at the things in order of the size of their prediction about the observation. But I think that a lot of the biggest effects would be in that direction.

Comment by ryancarey on EA Relationship Status · 2020-09-19T08:20:42.368Z · EA · GW

It's a reasonable question. I take the observation to be that 60% of EAs over 45 have married, where we'd expect 85%.

I think a good hypothesis is religion. In general, 60% of atheists have married, versus 80% of the religiously affiliated, and ~55% of that effect persists after controlling for age (see the bottom two tables). 86% of EAs are non-religious. So almost half of the reason that EAs marry less is probably just that they're atheist/agnostic, so they don't think that cohabiting is living in sin!
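As a rough back-of-envelope check on that "almost half" figure (my own arithmetic, using the numbers above):

```python
# Decompose the EA marriage gap using the figures quoted above.
expected_married = 0.85        # baseline marriage rate for over-45s
ea_married = 0.60              # observed rate among EAs over 45
gap = expected_married - ea_married          # 25 points to explain

atheist_gap = 0.80 - 0.60                    # religious vs atheist marriage rates
age_adjusted = atheist_gap * 0.55            # ~55% of the effect survives age controls
share_explained = age_adjusted / gap
print(f"{share_explained:.0%}")              # 44% -- "almost half"
```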

The other half, well, I agree with your top two points - that EAs favour work over having kids. Apart from that, two guesses would be:

  • statistical artifact: that single people are more likely to spend time online, in the kinds of places they would discover the survey.
  • that single people are more likely to sign up to join a community (to try and meet someone).

Given all the available explanations, I don't feel that surprised about the observation anymore.

Comment by ryancarey on Prabhat Soni's Shortform · 2020-09-18T21:11:32.936Z · EA · GW

I'm not sure you've understood how I'm calculating my figures, so let me show how we can set a really conservative upper bound for the number of people who would move to Greenland.

Based on current numbers, 3.5% of the world population are migrants, and 6% live in deserts. So that means less than 3.5/9.5 = 37% of desert populations have migrated. Even if half of those had migrated because of the weather, that would be less than 20% of all desert populations. Moreover, even if people migrated uniformly according to land area, only 1.4% of migrants would move to Greenland (that's the fraction of land area occupied by Greenland). So an ultra-conservative upper bound for the number of people migrating to Greenland would be 1B × 0.37 × 0.5 × 0.014 ≈ 2.6M.
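In code (my own sketch; the inputs are the rough figures above):

```python
# Ultra-conservative upper bound on climate migration to Greenland,
# reproducing the chain of estimates above.
newly_in_desert = 1e9         # hypothetical newly desertified population
migrated_bound = 3.5 / 9.5    # < 37%: upper bound on desert dwellers who migrate
climate_fraction = 0.5        # even if half of those moves were climate-driven
greenland_land = 0.014        # Greenland's share of world land area

upper_bound = newly_in_desert * migrated_bound * climate_fraction * greenland_land
print(f"{upper_bound:.2g}")   # ~2.6e+06
```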

So my initial status-quo estimate was ~1e3, and my ultra-conservative upper bound ~3e6. It seems pretty likely to me that the true figure will be 1e3-1e6, whereas 5e7 is certainly not a realistic estimate.

Comment by ryancarey on Formalizing longtermism · 2020-09-18T20:01:05.674Z · EA · GW

Not inconsistent, but I think Will's criteria are just one of many possible reasons that this might be the case.

Comment by ryancarey on Formalizing longtermism · 2020-09-18T08:11:23.776Z · EA · GW

Interesting - defining longtermism as rectifying future disprivilege. This is different from what I was trying to model. Honestly, it seems different from all the other definitions. Is this the sort of longtermism that you want to model?

If I was trying to model this, I would want to make reference to a baseline level of disparity, given inaction, and then consider how a (possibly causal) intervention could improve that.

Comment by ryancarey on Formalizing longtermism · 2020-09-17T23:16:24.855Z · EA · GW
I'm a bit confused by this setup

$d$ is a one-off action taken at $t=0$ whose effects accrue over time. (I could be wrong, but I'm proposing that the "long-term" in longtermism refers to utility obtained at different times, not actions taken at different times, so removing the latter helps bring the definition of longtermism into focus.)

This condition would also be satisfied in a world with no x-risk, where each generation becomes successively richer and happier, and there's no need for present generations to care about improving the future.

Is what you're saying that actions could vary on their short-term goodness and long-term goodness, such that short- and long-term goodness are perfectly correlated? To me, this is a world where longtermism is true - we can tell an action's value from its long-term value - and also a world where short-termism is true. Generations only need to care about the future if longtermism works but other heuristics fail. To your question, $u_t(d)$ is just the utility at time $t$ under $d$.

Comment by ryancarey on Formalizing longtermism · 2020-09-17T10:17:58.162Z · EA · GW

Sure, that definition is interesting - it seems optimised for advancing arguments about how to do practical ethical reasoning. I think a variation of it would follow from mine: an ex-ante very good decision is contained in a set of options whose ex-ante effects on the very long-run future are very good.

Still, it would be good to have a definition that generalises to suboptimal agents. Suppose that what's long-term optimal for me is to work twelve hours a day, but it's vanishingly unlikely that I'll do that. Then what can longtermism do for an agent like me? It'd also make sense for us to be able to use longtermism to evaluate the actions of politicians, even if we don't think any of the actions are long- or short-term optimal.

Comment by ryancarey on How do political scientists do good? · 2020-09-16T23:01:18.204Z · EA · GW

I guess Tyler, Will, etc are approaching governance from a general, and highly idealised perspective, in discussing hypothetical institutions.

In contrast, folks like GovAI are approaching things from a more targeted, and only moderately idealised perspective. I expect a bunch of their questions will relate to how to bring existing institutions to bear on mitigating AI risks. Do your questions also differ from theirs?

Comment by ryancarey on Prabhat Soni's Shortform · 2020-09-16T12:53:51.170Z · EA · GW

The total drylands population is 35% of the world population (~6% from desert/semi-desert). The total number of migrants, however, is 3.5% of the world population. So less than 10% of those from drylands have left. But most such migrants move because of politics, war, or employment, rather than climate. The number leaving because of climate is less (and possibly much less) than 5% of the drylands population.

So suppose a billion people newly found themselves in drylands or desert, and that 5% migrated, making 50M migrants. Probably too few of these people would go to any one country, let alone Greenland, to make it a new superpower. But let's run the numbers for Greenland anyway. Of the world's 300M migrants, Greenland currently has only ~10k. So of an extra 50M, Greenland could be expected to take ~2k - I'm coming in 5-6 orders of magnitude lower than the 1B figure.
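In code (again, my own sketch of the arithmetic above):

```python
# Status-quo estimate: scale Greenland's current share of the world's
# migrant stock up to the hypothetical 50M new climate migrants.
world_migrants = 300e6         # current global migrant stock
greenland_migrants = 10e3      # migrants currently living in Greenland
new_migrants = 1e9 * 0.05      # 5% of a billion newly affected people

to_greenland = new_migrants * greenland_migrants / world_migrants
print(round(to_greenland))     # ~1667, i.e. ~2k
```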

It does still have some military relevance, and would be good to keep it neutral, or at least out of the hands of China/Russia.

Comment by ryancarey on Formalizing longtermism · 2020-09-16T09:19:49.644Z · EA · GW

I haven't read most of GPI's stuff on defining longtermism, but here are my thoughts. I think (2) is close to what I'd want for a definition of very strong longtermism: "the view on which long-run outcomes are of overwhelming importance".

I think we should be able to model longtermism using a simpler model than yours. Suppose you're taking a one-off action $d \in D$, and then you get (discounted) reward $\sum_{t=0}^{\infty} u_t(d)$. Then I'd say very strong longtermism is true iff the impact of each decision depends overwhelmingly on its long-term impact:

$$\forall d \in D: \quad \sum_{t=T}^{\infty} u_t(d) \approx \sum_{t=0}^{\infty} u_t(d)$$

where $T$ is some large number.

You could stipulate that the discounted utility of the distant future has to be within a factor $(1-\epsilon)$ of the total, where $\epsilon$ is small. If you preferred, you could talk about the differences between utilities for all pairs of decisions, rather than the utility of each individual decision. Or small deviations from optimal. Or you could consider sequential decision-making, assuming that later decisions are made optimally. Or you could assume a distribution over $D$ (e.g. the distribution of actual human decisions), and talk about the amount of variance in total utility explained by long-term impact. But these are philosophical details - overall, we should land somewhere near your (2).

It's not super clear to me that we want to formalise longtermism - "the ethical view that is particularly concerned with ensuring long-run outcomes go well". If we did, it might say that $\sum_{t=T}^{\infty} u_t(d)$ is sometimes big, or that it can sometimes outweigh other considerations.

Your (1) is interesting, but it doesn't seem like a definition of longtermism. I'd call it something like "safety investment is optimal", because it pertains to practical concerns about how to attain long-term utility.

Rather, I think it'd be more interesting to try to prove that (1) follows from longtermism, given certain model assumptions (such as yours). To see what I have in mind, we could elaborate my setup.

Setup: let the decision space be $D = [0,1]$, where $d$ represents the fraction of resources you invest in the long term. Each long-term utility $u_t$ (for $t \geq T$) is an increasing function of $d$, and each short-term utility (for $t < T$) is a decreasing function of $d$.

Conjecture: if strong longtermism is true (for some $T$ and $\epsilon$), then the optimal action will be $d = 1$ (or some function of $\epsilon$).

Proof: since we assume that only long-term impact matters, the action with the best long-term impact is best overall.

Perhaps a weaker version could be proved in an economic model.
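Below is a minimal numeric sketch of that setup (my own illustration; the utility functions are arbitrary ones chosen only to satisfy the monotonicity assumptions, and discounting is omitted for simplicity):

```python
import numpy as np

T = 100            # the "large number" separating short term from long term
HORIZON = 1000     # truncation of the infinite sum, for illustration

def utilities(d, t):
    """Per-period utility under investment fraction d in [0, 1]:
    decreasing in d before T, increasing in d from T onward."""
    return np.where(t < T, 1 - d, d)

t = np.arange(HORIZON)
candidates = np.linspace(0, 1, 101)
# Score each action only by its long-term utility (t >= T), as strong
# longtermism prescribes; the optimum lands at d = 1, per the conjecture.
longterm_value = [utilities(d, t)[t >= T].sum() for d in candidates]
print(candidates[int(np.argmax(longterm_value))])  # 1.0
```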

Comment by ryancarey on Some thoughts on EA outreach to high schoolers · 2020-09-15T23:27:59.053Z · EA · GW

I'm not an expert, but I think "conversion" in marketing refers to getting people to take a specific action, such as buying a product or making a donation. In this case, there's no specific action, so I read "convert" in the non-technical sense, 'change one's religious faith or other belief', which is why it's awkward.

Comment by ryancarey on How do political scientists do good? · 2020-09-15T22:22:25.523Z · EA · GW

I think there's a bunch of prior thinking on fairly related questions:

I'm not saying it's bad to try to think through things yourself as a political scientist, but perhaps it would be useful to contrast your thoughts with the analyses from related fields - to talk about how your question differs, and how your answers differ, insofar as they do.

Comment by ryancarey on Some thoughts on EA outreach to high schoolers · 2020-09-15T02:54:30.126Z · EA · GW

Also, some may still resemble "students"/apprentices with "impact still to be determined". I guess ESPR may be hard to evaluate 4 years in, but shouldn't SPARC students be beyond that stage, if the program has run for 8 or so years? American data could be very useful...

Comment by ryancarey on Some thoughts on EA outreach to high schoolers · 2020-09-14T06:25:42.294Z · EA · GW

I think targeted high school outreach has always looked (incredibly) good a priori. The question is whether it works in practice. At least in the UK/EU, I can't think of anyone who came through SPARC/EuroSPARC/SHIC and is now working full-time on EA. Probably a couple of students, but their impact is still to be determined. Until a couple of years ago, people were saying the same thing in the Bay Area. This would suggest all of these programs have a <1% conversion rate, and that high school outreach might have an even lower conversion rate than university group outreach (for whatever reasons). Your suggestion that this has changed is interesting - if you can say more without getting into awkward "naming names" territory, it'd be pretty useful.

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-13T16:12:49.968Z · EA · GW

Makes sense! How people deal with the uncertainty could also be informative. If they talk about calculating the expected value (in earnings) of a tournament, or expected points won from a shot, or get excited about sport statisticians' work generally - then that would be extra-encouraging.

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-13T07:47:48.454Z · EA · GW

Good point. I think I'd rather clarify/revise my claims to: 1) pro athletes will be somewhat less interested in EA than poker players, mostly due to different thinking styles, and 2) many/most pro athletes are highly capable in general, but their comparative advantage won't usually be donating tournament winnings or doing research - something like promoting disarmament or entering politics could be. But it needs way more thought.

Comment by ryancarey on RyanCarey's Shortform · 2020-09-11T16:04:23.450Z · EA · GW

Jakarta - yep, it's also ASEAN's HQ. Worth noting, though, that Indonesia is moving its capital out of Jakarta.

Comment by ryancarey on RyanCarey's Shortform · 2020-09-11T10:24:11.609Z · EA · GW

Which longtermist hubs do we most need? (see also: Hacking Academia)

Suppose longtermism already has some presence in SF, Oxford, DC, London, Toronto, Melbourne, Boston, New York, and is already trying to boost its presence in the EU (especially Brussels, Paris, Berlin), UN (NYC, Geneva), and China (Beijing, ...). Which other cities are important?

I think there's a case for New Delhi, as the capital of India. India is the third-largest country by GDP (PPP), soon to be the most populous, high-growth, and a neighbour of China. Perhaps we're neglecting it due to founder effects, because it has lower average wealth, because its universities aren't thriving, and/or because it currently has a nationalist government.

I also see a case for Singapore: its government and universities could be a place from which to work on de-escalating US-China tensions. It's physically and culturally not far from China. As a city-state, it benefits a lot from peace and global trade. It's by far the most developed member of ASEAN, which is also large, mostly neutral, and benefits from peace. It's generally very technocratic, with high historical growth, and is also the HQ of APEC.

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-10T14:58:24.118Z · EA · GW

What you divide by just depends on what question you're trying to answer.

I don't think we really want to know about the total earnings, or the earnings of a player with a particular ranking, as these would assume that you can capture some large fraction, or some top-tier part, of the total market. On those measures, "all people" would be the best pool to recruit from.

More interesting questions [if you're trying to raise donations] are things like "what are the average earnings?" or "how well-paid is an individual with a certain level of extraordinariness?" If you need to be a one-in-a-million soccer player to earn as much as a one-in-a-thousand poker player, then the rich soccer players are sparser, more famous, and harder to recruit than equivalently rich poker players.