Posts

Longtermism ⋂ Twitter 2020-06-15T14:19:37.044Z · score: 49 (23 votes)
RyanCarey's Shortform 2020-01-27T22:18:23.751Z · score: 7 (1 votes)
Worldwide decline of the entomofauna: A review of its drivers 2019-07-04T19:06:17.041Z · score: 10 (5 votes)
SHOW: A framework for shaping your talent for direct work 2019-03-12T17:16:44.885Z · score: 144 (75 votes)
AI alignment prize winners and next round [link] 2018-01-20T12:07:16.024Z · score: 7 (7 votes)
The Threat of Nuclear Terrorism MOOC [link] 2017-10-19T12:31:12.737Z · score: 7 (7 votes)
Informatica: Special Issue on Superintelligence 2017-05-03T05:05:55.750Z · score: 7 (7 votes)
Tell us how to improve the forum 2017-01-03T06:25:32.114Z · score: 4 (4 votes)
Improving long-run civilisational robustness 2016-05-10T11:14:47.777Z · score: 9 (9 votes)
EA Open Thread: October 2015-10-10T19:27:04.119Z · score: 1 (1 votes)
September Open Thread 2015-09-13T14:22:20.627Z · score: 0 (0 votes)
Reducing Catastrophic Risks: A Practical Introduction 2015-09-09T22:33:03.230Z · score: 5 (5 votes)
Superforecasters [link] 2015-08-20T18:38:27.846Z · score: 6 (5 votes)
The long-term significance of reducing global catastrophic risks [link] 2015-08-13T22:38:23.903Z · score: 4 (4 votes)
A response to Matthews on AI Risk 2015-08-11T12:58:38.930Z · score: 11 (11 votes)
August Open Thread: EA Global! 2015-08-01T15:42:07.625Z · score: 3 (3 votes)
July Open Thread 2015-07-02T13:41:52.991Z · score: 4 (4 votes)
[Discussion] Are academic papers a terrible discussion forum for effective altruists? 2015-06-05T23:30:32.785Z · score: 3 (3 votes)
Upcoming AMA with new MIRI Executive Director, Nate Soares: June 11th 3pm PT 2015-06-02T15:05:56.021Z · score: 1 (3 votes)
June Open Thread 2015-06-01T12:04:00.027Z · score: 4 (4 votes)
Introducing Alison, our new forum moderator 2015-05-28T16:09:26.349Z · score: 9 (9 votes)
Three new offsite posts 2015-05-18T22:26:18.674Z · score: 4 (4 votes)
May Open Thread 2015-05-01T09:53:47.278Z · score: 1 (1 votes)
Effective Altruism Handbook - Now Online 2015-04-23T14:23:28.013Z · score: 28 (30 votes)
One week left for CSER researcher applications 2015-04-17T00:40:39.961Z · score: 2 (2 votes)
How Much is Enough [LINK] 2015-04-09T18:51:48.656Z · score: 3 (3 votes)
April Open Thread 2015-04-01T22:42:48.295Z · score: 2 (2 votes)
Marcus Davis will help with moderation until early May 2015-03-25T19:12:11.614Z · score: 5 (5 votes)
Rationality: From AI to Zombies was released today! 2015-03-15T01:52:54.157Z · score: 6 (8 votes)
GiveWell Updates 2015-03-11T22:43:30.967Z · score: 4 (4 votes)
Upcoming AMA: Seb Farquhar and Owen Cotton-Barratt from the Global Priorities Project: 17th March 8pm GMT 2015-03-10T21:25:39.329Z · score: 4 (4 votes)
A call for ideas - EA Ventures 2015-03-01T14:50:59.154Z · score: 3 (3 votes)
Seth Baum AMA next Tuesday on the EA Forum 2015-02-23T12:37:51.817Z · score: 7 (7 votes)
February Open Thread 2015-02-16T17:42:35.208Z · score: 0 (0 votes)
The AI Revolution [Link] 2015-02-03T19:39:58.616Z · score: 10 (10 votes)
February Meetups Thread 2015-02-03T17:57:04.323Z · score: 1 (1 votes)
January Open Thread 2015-01-19T18:12:55.433Z · score: 0 (0 votes)
[link] Importance Motivation: a double-edged sword 2015-01-11T21:01:10.451Z · score: 3 (3 votes)
I am Samwise [link] 2015-01-08T17:44:37.793Z · score: 4 (4 votes)
The Outside Critics of Effective Altruism 2015-01-05T18:37:48.862Z · score: 12 (12 votes)
January Meetups Thread 2015-01-05T16:08:38.455Z · score: 0 (0 votes)
CFAR's annual update [link] 2014-12-26T14:05:55.599Z · score: 1 (3 votes)
MIRI posts its technical research agenda [link] 2014-12-24T00:27:30.639Z · score: 4 (6 votes)
Upcoming Christmas Meetups (Upcoming Meetups 7) 2014-12-22T13:21:17.388Z · score: 0 (0 votes)
Christmas 2014 Open Thread (Open Thread 7) 2014-12-15T16:31:35.803Z · score: 1 (1 votes)
Upcoming Meetups 6 2014-12-08T17:29:00.830Z · score: 0 (0 votes)
Open Thread 6 2014-12-01T21:58:29.063Z · score: 1 (1 votes)
Upcoming Meetups 5 2014-11-24T21:02:07.631Z · score: 0 (0 votes)
Open thread 5 2014-11-17T15:57:12.988Z · score: 1 (1 votes)
Upcoming Meetups 4 2014-11-10T13:54:39.551Z · score: 0 (0 votes)

Comments

Comment by ryancarey on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T10:36:14.345Z · score: 3 (2 votes) · EA · GW

Interesting that one of the two main hypotheses advanced in that paper is that media is influencing public opinion - but the medium in question is not the internet, it's TV!

The rise of 24-hour partisan cable news provides another potential explanation. Partisan cable networks emerged during the period we study and arguably played a much larger role in the US than elsewhere, though this may be in part a consequence rather than a cause of growing affective polarization.9 Older demographic groups also consume more partisan cable news and have polarized more quickly than younger demographic groups in the US (Boxell et al. 2017; Martin and Yurukoglu 2017). Interestingly, the five countries with a negative linear slope for affective polarization all devote more public funds per capita to public service broadcast media than three of the countries with a positive slope (Benson and Powers 2011, Table 1; see also Benson et al. 2017). A role for partisan cable news is also consistent with visual evidence (see Figure 1) of an acceleration of the growth in affective polarization in the US following the mid-1990s, which saw the launch of Fox News and MSNBC.

(The other hypothesis is "party sorting", wherein people move to parties that align more with their ideology and social identity.)

Perhaps campaigning for more funding for PBS, or somehow countering Fox and MSNBC, could be really important for US democracy.

Also, if TV has been so influential, that suggests that even if online media isn't yet influential at the population scale, it may be influential for smaller groups of people, and that it will be extremely influential in the future.

Comment by ryancarey on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-17T10:03:06.801Z · score: 22 (11 votes) · EA · GW
[Politicisation] will reduce EA's long-term impact: I have to confess I've never really understood this argument. I can think of numerous examples of social movements that have been both highly politicized and tremendously impactful.

Right, but none that have done so without risking a big fight. The status quo is that EA consists of a few thousand people, often trying to enter important technocratic roles and to achieve change without provoking big political fights (and being many-fold more efficient by doing so). The problem is that political EA efforts can inflict effectiveness penalties on other EA efforts. If EA becomes associated with a political side - e.g. if "caring about the long-term" comes to be seen as a partisan issue - then other EA efforts may become associated with that side too: long-term security legislation gets drawn into large battles, diminishing the effectiveness of technocratic efforts many-fold.

Basically, by bringing EA into politics, you're taking a few people who normally use scalpels and arming them for a large-scale machine-gun fight. The risk is not just losing a particular fight, but inflaming a multi-front war.

There are a bunch of ways of mitigating the effectiveness penalties that one's political efforts impose on others. The costs are lower if political efforts are undertaken individually, so that they're not seen as representative of EA. They're also lower if they come from less prominent people - e.g. if Will and Toby stay out of the fray. And it's less costly if engagement is symmetric between parties: for example, the cost of affiliating with a Rubio at this point might be less than the cost of affiliating with a Buttigieg, or could even be net positive.

Comment by ryancarey on Getting money out of politics and into charity · 2020-10-13T20:09:55.867Z · score: 2 (1 votes) · EA · GW

Basically funding connected to this.

Comment by ryancarey on RyanCarey's Shortform · 2020-10-11T09:59:08.089Z · score: 26 (10 votes) · EA · GW

Affector & Effector Roles as Task Y?

Longtermist EA seems relatively strong at thinking about how to do good, and at raising funds for doing so, but relatively weak in affector organs, which tell us what's going on in the world, and effector organs, which influence the world. Three examples of ways that EAs can actually influence behaviour are:

- working in & advising US nat sec

- working in UK & EU governments, in regulation

- working in & advising AI companies

But I expect this is not enough, and that our (a/e)ffector organs are bottlenecking our impact. To be clear, it's not that these roles aren't mentally stimulating - they are. It's just that their impact lies primarily in implementing ideas and uncovering practical considerations, rather than in an ivory tower's pure, deep thinking.

The world is quickly becoming polarised between the US and China, which means that certain (a/e)ffector organs may be even more neglected than others. We may want to promote: i) work as a diplomat, ii) working at diplomat-adjacent think tanks, such as the Asia Society, iii) working at relevant UN bodies relating to disarmament and bioweapon control, iv) working at UN-adjacent bodies that push for disarmament, etc. These roles often reside in large entities that can absorb hundreds or thousands of new staff at a wide range of skill levels, so perhaps many people who are currently "earning to give" should move into these "affector" or "effector" roles (as well as those mentioned above, in other relevant parts of national governments). I'm also curious whether 80,000 Hours has considered diplomatic roles - I couldn't find much on a cursory search.

Comment by ryancarey on Getting money out of politics and into charity · 2020-10-07T12:03:37.965Z · score: 2 (1 votes) · EA · GW

fixed

Comment by ryancarey on Getting money out of politics and into charity · 2020-10-06T11:50:30.333Z · score: 19 (8 votes) · EA · GW

Consequentialists and EAs have certainly been interested in these questions. We were discussing the idea back in 2009. Toby Ord has written a relevant paper.

I'm not donating to politics, so I wouldn't use it. I would say that if an election costs ~$10B, and you might move 0.1% of that (~$10M) into charities for a cost of $0.25M, that seems like a good deal. The obvious criticism, I think, is: "couldn't they benefit more from keeping the money?" I think this is surmountable, because donating it may be psychologically preferable. Another reservation would be: "You should figure out what happened with Repledge before trying to repeat it", which I think is indeed something you should do.

I guess the funding that you initially need is probably significantly less than $250k, so it might make sense to apply for the February deadline of the EA Infrastructure Fund. If you're trying to do things before November (which seems difficult), then you might apply "off-cycle". There's also a range of other funders of varying degrees of plausibility, such as OpenPhil (mostly for funding amounts >$100k), the funders behind progress studies (maybe the Collisons), the Survival and Flourishing Fund, the Long-Term Future Fund, etc.

Re choice of charities, well... we do think that charities vary in effectiveness by many orders of magnitude, so it probably does make sense to be selective. In particular, most people who've studied the question think that charities focused on long-term impact can be orders of magnitude more effective than those that aren't, which is why a lot of EAs (including me) work on catastrophic threats. This would be a good idea if you believe Haidt's ideas about common threats making common ground, which I think is nice - see also his Asteroids Club. This could support choices like the Nuclear Threat Initiative and Hopkins' Centre for Health Security, discussed here. To the extent that you were funding such charities, I think the case for effectiveness (and the case for EA funding) would be stronger.

The ideal choice of charities could also depend to some extent on other design choices: 1) do you want to allow trades other than $1:$1? 2) do you allow people to offer a trade specific to one particular charity? On (1), one argument in favour would be that if one party has a larger funding base than the other, then a $1:$1 trade might favour them. Another would be that non-1:1 trades naturally balance out the problem of charities being preferred by one side more than the other. One argument against would be that people might view 1:1 as fairer, and donate more. On (2), arguments in favour would be that diversity can better satisfy people's preferences, and that you might fund certain charities too much if you just choose one. The argument against would be that people really hate choosing between charities. Overall, for (1) I'd guess "no". For (2), I'd guess "no" again, although I think it could be great to have a system where the charity rotates each week - it could help with promoting the app as well! But these are of course no more than guesses (see the sketch below for how the settlement could work).
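To make (1) concrete, here's a minimal sketch of a $1:$1 settlement - a hypothetical design for illustration, not a description of how Repledge actually operated:

```python
# Hypothetical 1:1 matching: opposing political donations cancel pairwise,
# the matched amount goes to charity, and any surplus proceeds to its campaign.
def settle(side_a_total: float, side_b_total: float):
    matched = min(side_a_total, side_b_total)
    to_charity = 2 * matched             # both sides' matched dollars
    surplus_a = side_a_total - matched   # unmatched funds go on to campaign A
    surplus_b = side_b_total - matched   # ... and campaign B
    return to_charity, surplus_a, surplus_b

print(settle(1_000_000, 400_000))  # (800000, 600000, 0)
```

Allowing ratios other than 1:1 (question 1) would amount to scaling one side's total before taking the minimum.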

Anyway, those are all details - it seems like an exciting project!

Comment by ryancarey on Feedback Request on EA Philippines' Career Advice Research for Technical AI Safety · 2020-10-04T17:38:21.059Z · score: 6 (2 votes) · EA · GW
I noticed though that your answers for #5, 7, and 8 were for the questions for the expert interviews I planned on doing, and not on questions 5-7 in the "Questions we'd like feedback on". You basically were able to answer #5 already there, so I'd just like your thoughts on #6 and #7 (on AI policy work and what questions we should ask people at local firms).

Ah, oops! 6. I'm not sure AI policy is that important in the Philippines, given that not much AI research is happening there, compared to the US/UK. 7. Relevance to AI safety is a bit tricky to gauge, and doesn't always matter that much for career capital. It might be better to just ask: do I get to do research activities, and does the team publish research papers?

On A, yeah, it could make sense to push for nuclear power, or to become a local biosecurity expert. To be clear, US-China peace is not my area of expertise, just something that might be interesting to look into. I'm not thinking of something as simple as fighting for certain waters to be owned by China or the Philippines, but more of finding ways to increase understanding and reinforce peace. Roughly: (improved trade/aid/treaties) -> (decreased tensions between China and ASEAN) -> (reduced chance of US-China war) -> (reduced risk of technology arms races between US and China) -> reduced existential risk. So maybe people in the Philippines can build links of trade, aid, treaties, etc. between China/US and neutral countries. These things are probably done by foreign policy experts, diplomats and politicians, in places including embassies, the department of foreign affairs, national security organisations, and think tanks and universities.

Comment by ryancarey on Feedback Request on EA Philippines' Career Advice Research for Technical AI Safety · 2020-10-03T18:03:07.105Z · score: 12 (3 votes) · EA · GW

I had a quick look over. I basically agree with the article. Here are some responses to some of your feedback questions:

2. Might be good to clarify that starting a degree in the US/UK makes it easier to get a work visa and job there afterwards

3. You could argue that there are little bits in Switzerland, the Czech Republic, and Israel - not so much in Australia anymore - but the US, UK, and Canada are the main ones.

4. Yes, it's possible. But generally you want to have some collaborators and/or be a professor. For the latter, you'd want to get a degree from a top-30 university worldwide and then pursue a professorship back home, so it wouldn't necessarily be easy.

And likewise for some of the expert interview questions:

5. You could check out Ajeya's report for some work on plausible timelines

7. Maybe, but it's hard. Either you'd need to find a startup that offers remote software work, or get a long-term job at a university

8. Same as for non-Filipino undergrads: aim for papers and strong references.


Also, here are two other big picture elements of feedback:

A. A bigger-picture question is: how can Filipinos best help to reduce existential risk? Often, the answer will be the same as for non-Filipinos - AI safety, biosecurity, or whatever. But one idea is that Filipino EAs could help with building US-China peace. The Philippines is close to China, and involved in major territorial disputes over the South China Sea. It's in ASEAN, which is big, close to China, and somewhat neutral. So maybe it's useful to work for the department of foreign affairs or the military, and try to reduce the chances of global conflict emerging from the South China Sea, or help to ensure that countries in ASEAN trade with both China and the US.

B. A lot of the considerations for Filipino EAs interested in AI safety will be similar for EAs in any country outside the Anglosphere or EU. But only a small fraction (~1%) of those people are in the Philippines. So for articles like this, it might be better to write for that larger audience.

Comment by ryancarey on RyanCarey's Shortform · 2020-09-30T08:28:20.068Z · score: 15 (6 votes) · EA · GW

EAs have reason to favour Top-5 postdocs over Top-100 tenure?

Related to Hacking Academia.

A bunch of people face a choice between being a postdoc at one of the top 5 universities, and being a professor at one of the top 100 universities. For the purpose of this post, let's set aside the possibilities of working in industry, grantmaking and nonprofits. Some of the relative strengths (+) of the top-5 postdoc route are accentuated for EAs, while some of the weaknesses (-) are attenuated:

+greater access to elite talent (extra-important for EAs)

+larger university-based EA communities, many of which are at top-5 universities

-less secure research funding (less of an issue in longtermist research)

-less career security (less important for high levels of altruism)

-can't be the sole supervisor of a PhD student (less important if one works with a full professor who can supervise, e.g. at Berkeley or Oxford)

-harder to set up a centre (this one does seem bad for EAs, and hard to escape)

There are also considerations relating to EAs' ability to secure tenure; sometimes this is decreased a bit because the research runs against prevailing trends.

Overall, I think that some EAs should still pursue professorships - especially to set up research centres, or to establish a presence in an influential location - but that we will want more postdocs than is usual.

Comment by ryancarey on Estimation of probabilities to get tenure track in academia: baseline and publications during the PhD. · 2020-09-20T23:13:10.994Z · score: 5 (3 votes) · EA · GW

Interesting. The point-2 article by van Dijk seems decent. Figure 1B says that the impact factor of journals, volume of publications, and citations/h-index are all fairly predictive. University rank gets some independent weight (among 38 features, as shown in their supplementary Table S1), but not much.

Looks like although the web version has gone offline, the source code of their model is still online!

Comment by ryancarey on Estimation of probabilities to get tenure track in academia: baseline and publications during the PhD. · 2020-09-20T20:58:54.056Z · score: 7 (4 votes) · EA · GW

Hey Pablo - thanks for working this up. It's nice to have some baseline estimates!

As you say, Tregellas et al. show that the probability of tenure varies a lot with the number of first-author publications. It would be interesting to know whether tenure can be predicted better with other factors like one's institution or h-index - I could imagine such a model performing much better than the baseline.

Two other queries:

  • I feel like we're talking about tenure, rather than tenure track?
  • When you say things like "my personal estimate of the baseline probability of getting a permanent (tenured) position in academia should be with 90% probability between 10-30%", it might be clearer to say you're 90% sure that 10-30% of students get tenure? Otherwise I don't know how to interpret this probability of a probability.
Comment by ryancarey on EA Relationship Status · 2020-09-19T23:07:26.759Z · score: 2 (1 votes) · EA · GW

I agree that you should consider explanations in order of how strongly they predict the observation. But I think a lot of the biggest effects would be in that direction.

Comment by ryancarey on EA Relationship Status · 2020-09-19T08:20:42.368Z · score: 27 (19 votes) · EA · GW

It's a reasonable question. I take the observation to be that 60% of EAs over 45 have married, where we'd expect 85%.

I think a good hypothesis is religion. In general, 60% of atheists have married, versus 80% of the religiously affiliated, and ~55% of that effect persists after controlling for age (see the bottom two tables). 86% of EAs are non-religious. So almost half of the shortfall (0.55 × 20pp × 0.86 ≈ 9.5pp of the 25pp gap) is probably just that EAs are atheist/agnostic, so they don't think that cohabiting is living in sin!

The other half, well, I agree with your top two points - that EAs favour work over having kids. Apart from that, two guesses would be:

  • statistical artifact: single people are more likely to spend time online, in the kinds of places where they would discover the survey.
  • single people are more likely to sign up to join a community (to try to meet someone).

Given all the available explanations, I don't feel that surprised about the observation anymore.

Comment by ryancarey on Prabhat Soni's Shortform · 2020-09-18T21:11:32.936Z · score: 2 (1 votes) · EA · GW

I'm not sure you've understood how I'm calculating my figures, so let me show how we can set a really conservative upper bound on the number of people who would move to Greenland.

Based on current numbers, 3.5% of the world population are migrants, and 6% live in deserts. So at most 3.5/9.5 = 37% of the original desert population has migrated. Even if half of those had migrated because of the weather, that would be less than 20% of all desert populations. Moreover, even if migrants spread themselves uniformly according to land area, only 1.4% of them would move to Greenland (Greenland's fraction of world land area). So an ultra-conservative upper bound for the number of people migrating to Greenland would be 1B × 0.37 × 0.2 × 0.014 ≈ 1M.
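As a quick sanity check, here's the same arithmetic spelled out (a sketch; the numbers are the rough assumptions above, not data):

```python
# Ultra-conservative upper bound on climate migration to Greenland.
newly_in_desert = 1e9         # people newly finding themselves in desert
max_migration_rate = 0.37     # < 3.5 / (3.5 + 6): migrants vs original desert pop
climate_share = 0.2           # generous share of migration attributed to climate
greenland_land_share = 0.014  # Greenland's fraction of world land area

upper_bound = (newly_in_desert * max_migration_rate
               * climate_share * greenland_land_share)
print(f"{upper_bound:.1e}")   # ~1.0e+06, i.e. about a million people
```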

So my initial status-quo estimate was 1e3, and my ultra-conservative estimate was 1e6. It seems pretty likely to me that the true figure will be 1e3-1e6, whereas 5e7 is certainly not a realistic estimate.

Comment by ryancarey on Formalizing longtermism · 2020-09-18T20:01:05.674Z · score: 2 (1 votes) · EA · GW

Not inconsistent, but I think Will's criteria are just one of many possible reasons that this might be the case.

Comment by ryancarey on Formalizing longtermism · 2020-09-18T08:11:23.776Z · score: 2 (1 votes) · EA · GW

Interesting - defining longtermism as rectifying future disprivilege. This is different from what I was trying to model; honestly, it seems different from all the other definitions. Is this the sort of longtermism that you want to model?

If I were trying to model this, I would want to make reference to a baseline level of disparity given inaction, and then consider how a (possibly causal) intervention could improve on that.

Comment by ryancarey on Formalizing longtermism · 2020-09-17T23:16:24.855Z · score: 2 (1 votes) · EA · GW
I'm a bit confused by this setup

$d$ is a one-off action taken at $t=0$ whose effects accrue over time (analogous to the action in your setup). I could be wrong, but I'm proposing that the "long-term" in longtermism refers to utility obtained at different times, not actions taken at different times, so removing the latter helps bring the definition of longtermism into focus.

This condition would also be satisfied in a world with no x-risk, where each generation becomes successively richer and happier, and there's no need for present generations to care about improving the future.

Is what you're saying that actions could vary in their short-term goodness and long-term goodness, such that short- and long-term goodness are perfectly correlated? To me, that is a world where longtermism is true - we can tell an action's value from its long-term value - and also a world where short-termism is true. Generations only need to care about the future if longtermism works but other heuristics fail. To your question, $U_t(d)$ is just the utility at time $t$ under action $d$.

Comment by ryancarey on Formalizing longtermism · 2020-09-17T10:17:58.162Z · score: 2 (1 votes) · EA · GW

Sure, that definition is interesting - it seems optimised for advancing arguments about how to do practical ethical reasoning. I think a variation of it would follow from mine: an ex-ante very good decision is contained in a set of options whose ex-ante effects on the very long-run future are very good.

Still, it would be good to have a definition that generalises to suboptimal agents. Suppose that what's long-term optimal for me is to work twelve hours a day, but it's vanishingly unlikely that I'll do that. Then what can longtermism do for an agent like me? It'd also make sense for us to be able to use longtermism to evaluate the actions of politicians, even if we don't think any of their actions are long- or short-term optimal.

Comment by ryancarey on How do political scientists do good? · 2020-09-16T23:01:18.204Z · score: 2 (1 votes) · EA · GW

I guess Tyler, Will, etc. are approaching governance from a general and highly idealised perspective, in discussing hypothetical institutions.

In contrast, folks like GovAI are approaching things from a more targeted, and only moderately idealised perspective. I expect a bunch of their questions will relate to how to bring existing institutions to bear on mitigating AI risks. Do your questions also differ from theirs?

Comment by ryancarey on Prabhat Soni's Shortform · 2020-09-16T12:53:51.170Z · score: 3 (2 votes) · EA · GW

The total drylands population is 35% of the world population (~6% in desert/semi-desert). The total number of migrants, however, is 3.5% of the world population. So less than 10% of those from drylands have left. And most such migrants move because of politics, war, or employment, rather than climate. So the number leaving because of climate is less (and possibly much less) than 5% of the drylands population.

So suppose a billion people newly found themselves in drylands or desert, and that 5% migrated, making 50M migrants. Probably too few of these people would go to any one country, let alone Greenland, to make it into a new superpower. But let's run the numbers for Greenland anyway. Of the world's 300M migrants, Greenland currently hosts only ~10k. So of an extra 50M, Greenland could be expected to take ~2k - I'm coming in 5-6 orders of magnitude lower than the 1B figure.
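For transparency, here's the status-quo estimate spelled out (a sketch using the rough figures above):

```python
# Status-quo estimate of migrants to Greenland from 1B newly-desert people.
new_migrants = 50e6                      # 5% of 1B migrate
greenland_share = 10e3 / 300e6           # Greenland's share of current migrants
print(round(new_migrants * greenland_share))  # ~1667, on the order of 2e3
```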

Greenland does still have some military relevance, though, and it would be good to keep it neutral, or at least out of the hands of China/Russia.

Comment by ryancarey on Formalizing longtermism · 2020-09-16T09:19:49.644Z · score: 5 (4 votes) · EA · GW

I haven't read most of GPI's stuff on defining longtermism, but here are my thoughts. I think (2) is close to what I'd want as a definition of very strong longtermism: "the view on which long-run outcomes are of overwhelming importance".

I think we should be able to model longtermism using a simpler model than yours. Suppose you're taking a one-off action $d \in D$, and then you get (discounted) utility $U_1(d), U_2(d), \ldots$ Then I'd say very strong longtermism is true iff the impact of each decision depends overwhelmingly on its long-term impact:

$$\forall d \in D: \quad \sum_{t > N} U_t(d) \approx \sum_{t \geq 1} U_t(d)$$

where $N$ is some large number.

You could stipulate that the discounted utility of the distant future has to be within a factor $1 - \epsilon$ of the total, where $\epsilon$ is small. If you preferred, you could talk about the differences between utilities for all pairs of decisions, rather than the utility of each individual decision. Or small deviations from optimal. Or you could consider sequential decision-making, assuming that later decisions are made optimally. Or you could assume a distribution over $D$ (e.g. the distribution of actual human decisions), and talk about the amount of variance in total utility explained by long-term impact. But these are philosophical details - overall, we should land somewhere near your (2).

It's not super clear to me that we want to formalise longtermism - "the ethical view that is particularly concerned with ensuring long-run outcomes go well". If we did, it might say that the long-term component $\sum_{t > N} U_t(d)$ is sometimes big, or that it can sometimes outweigh other considerations.

Your (1) is interesting, but it doesn't seem like a definition of longtermism. I'd call it something like "safety investment is optimal", because it pertains to practical concerns about how to attain long-term utility.

Rather, I think it'd be more interesting to try to prove that (1) follows from longtermism, given certain model assumptions (such as yours). To see what I have in mind, we could elaborate my setup. Setup: let the decision space be $D = [0, 1]$, where $d$ represents the fraction of resources you invest in the long-term. Each long-term utility $U_t(d)$ (for $t > N$) is an increasing function of $d$, and each short-term utility (for $t \leq N$) is a decreasing function of $d$. Then we could have a conjecture. Conjecture: if strong longtermism is true (for some $N$ and $\epsilon$), then the optimal action is $d = 1$ (or close to 1, as some function of $\epsilon$). Proof: since we assume that only long-term impact matters, the action with the best long-term impact is best overall.

Perhaps a weaker version could be proved in an economic model.

Comment by ryancarey on Some thoughts on EA outreach to high schoolers · 2020-09-15T23:27:59.053Z · score: 16 (10 votes) · EA · GW

I'm not an expert, but I think "conversion" in marketing refers to getting people to take a specific action, such as buying a product or making a donation. In this case, there's no specific action, so I read "convert" in the non-technical sense, 'change one's religious faith or other belief', which is why it's awkward.

Comment by ryancarey on How do political scientists do good? · 2020-09-15T22:22:25.523Z · score: 4 (2 votes) · EA · GW

I think there's a bunch of prior thinking on fairly related questions:

I'm not saying it's bad to try to think through things yourself as a political scientist, but perhaps it would be useful to contrast your thoughts with the analyses from related fields - to talk about how your question differs, and how your answers differ, insofar as they do.

Comment by ryancarey on Some thoughts on EA outreach to high schoolers · 2020-09-15T02:54:30.126Z · score: 2 (1 votes) · EA · GW

Also, some may still resemble "students"/apprentices with "impact still to be determined". I guess ESPR may be hard to evaluate 4 years in, but shouldn't SPARC students be beyond that stage, if the program has run for 8 or so years? American data could be very useful...

Comment by ryancarey on Some thoughts on EA outreach to high schoolers · 2020-09-14T06:25:42.294Z · score: 13 (10 votes) · EA · GW

I think targeted high-school outreach has always looked (incredibly) good a priori; the question is whether it works in practice. At least in the UK/EU, I can't think of anyone who came through SPARC/EuroSPARC/SHIC and is now working full-time on EA. Probably a couple of students, but their impact is still to be determined. Until a couple of years ago, people were saying the same thing in the Bay Area. This would suggest all of these programs have a <1% conversion rate, and that high-school outreach might have an even lower conversion rate than university group outreach (for whatever reason). Your suggestion that this has changed is interesting - if you can say more without getting into awkward "naming names" territory, it'd be pretty useful.

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-13T16:12:49.968Z · score: 3 (2 votes) · EA · GW

Makes sense! How people deal with uncertainty could also be informative. If they talk about calculating the expected value (in earnings) of a tournament, or the expected points won from a shot, or get excited about sport statisticians' work generally - that would be extra-encouraging.

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-13T07:47:48.454Z · score: 3 (2 votes) · EA · GW

Good point. I think I'd rather clarify/revise my claims to: 1) pro athletes will be somewhat less interested in EA than poker players, mostly due to different thinking styles, and 2) many or most pro athletes are highly capable in general, but their comparative advantage won't usually be donating tournament winnings or doing research - something like promoting disarmament, or entering politics, could be. But it needs way more thought.

Comment by ryancarey on RyanCarey's Shortform · 2020-09-11T16:04:23.450Z · score: 5 (4 votes) · EA · GW

Jakarta - yep, it's also ASEAN's HQ. Worth noting, though, that Indonesia is moving its capital out of Jakarta.

Comment by ryancarey on RyanCarey's Shortform · 2020-09-11T10:24:11.609Z · score: 20 (9 votes) · EA · GW

Which longtermist hubs do we most need? (see also: Hacking Academia)

Suppose longtermism already has some presence in SF, Oxford, DC, London, Toronto, Melbourne, Boston, and New York, and is already trying to boost its presence in the EU (especially Brussels, Paris, Berlin), the UN (NYC, Geneva), and China (Beijing, ...). Which other cities are important?

I think there's a case for New Delhi, as the capital of India: the third-largest country by GDP (PPP), soon to be the most populous country, high-growth, and a neighbour of China. Perhaps we're neglecting it due to founder effects, because it has lower average wealth, because its universities aren't thriving, and/or because it currently has a nationalist government.

I also see a case for Singapore - its government and universities could be a place from which to work on de-escalating US-China tensions. It's physically and culturally not far from China. As a city-state, it benefits a lot from peace and global trade. It's by far the most developed member of ASEAN, which is also large, mostly neutral, and benefits from peace. And it's generally very technocratic, with high historical growth, and is the HQ of APEC.

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-10T14:58:24.118Z · score: 2 (1 votes) · EA · GW

What you divide by just depends what question you're trying to answer.

I don't think we really want to know about the total earnings, or the earnings of a player with a particular ranking, as these would assume that you can capture some large fraction, or some top-tier part, of the total market. On those measures, "all people" would be the best pool to recruit from.

More interesting questions [if you're trying to raise donations] are things like "what are the average earnings?" or "how well-paid is an individual with a certain level of extraordinariness?". If you need to be a one-in-a-million soccer player to earn as much as a one-in-a-thousand poker player, then the soccer players are sparser, more famous, and harder to recruit than equivalently rich poker players.

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-10T11:02:51.699Z · score: 1 (2 votes) · EA · GW

Yes, the very richest sportspeople have ~$1B to poker players' ~$0.1B. But the top sportspeople are rarer in their talents, because ~100x more people try to play e.g. soccer than poker. Pro sport seems to pay less well than poker for any given level of talent. In order to equal the donations of poker players, you might have to recruit players who are quite elite and famous, or assemble a group across different sports. Whereas poker's top players are a tight-knit group of unknown nerds - easier to recruit!

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-10T00:01:06.657Z · score: 6 (3 votes) · EA · GW

Is the offensive part that intelligence might be useful, or that poker players might be more intelligent?

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-09T23:39:43.394Z · score: 16 (9 votes) · EA · GW
I've seen vaguely EA-related outreach to founders, poker players, and people who inherit loads of money. The thing these groups have in common are that they've got lots of money to donate that they got all at once, which athletes also have. I don't think we should get hung up on "intelligence", rationality or ability to think in bets.

Founders, poker players, heirs, and sportspeople actually have vastly different levels of wealth. Founders have wealth ranging up to >$100B; heirs up to >$30B. For sportspeople, it's <$1B, and much less on average. For poker players, it's <$0.1B. In other words, poker players are no longer among the biggest funders in EA. Rather, if they are to have a really big impact, it will be by contributing their time and influence. Liv Boeree, for example, is doing a lot of social media, and some others are switching into research roles - things where their intelligence and rationality are front and centre. The funds of a typical pro athlete may be similar to, or less than, those of these top poker players. So I would expect the intelligence and rationality of athletes to be a major factor in their impact.

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-09T17:18:21.184Z · score: 7 (6 votes) · EA · GW

OK, that's interesting, but it's not what I said: brains do help with being a good sportsperson - they're just not a predominant factor, as they are in poker!

Re intelligence: no, it's not necessary for engaging with EA (I didn't say it was). But it obviously helps - a lot of EA fans are at top universities, and smarts also help with figuring out how to do good.

Is the poker-vs-sport difference decisive? Well, poker is an extremely frustrating and difficult game/sport. A top pro can lose money for months, due to the swings involved, and it's much easier than in sport to go on tilt. Dealing with such uncertainty is exactly the sort of thing that can help with thinking dispassionately about uncertain philanthropic interventions. So maybe!

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-09T12:29:34.856Z · score: 13 (10 votes) · EA · GW

Those who downvoted this: how do you think we're supposed to make good strategic decisions without having a forthright discussion of the pros and cons of different approaches? See others' much harsher versions of this argument!

Comment by ryancarey on Are there any other pro athlete aspiring EAs? · 2020-09-08T23:34:18.615Z · score: 20 (38 votes) · EA · GW

Interesting idea! A few reactions:

  • Outreach to poker players has an advantage over outreach to athletes, in that i) intelligence is a central requirement of being a good poker player, whereas it's only a secondary requirement for being a good sportsperson in general, and ii) thinking about expected values and rationality is a central component of the way poker is played, whereas it's only a medium-sized part of how sports in general are played.
    • Edit: a lot of people seem to have been offended by this line of reasoning. But it's unavoidably true: people who calculate expected values and engage in meta-reasoning for their day job will, on average, be vastly more interested in philosophical questions related to impact evaluation, and better equipped to solve difficult societal problems, than those who don't.
  • Maybe poker players are richer, relative to how rare their skill is, because their game is played with money. I imagine a larger fraction of poker players are pro than tennis players, at least.
  • However, athletes are more often well-known. So maybe it makes sense for athletes to focus mostly on raising funds, running for office - things that draw on resources other than just money.

Still, it's a cool idea - interested to see how it develops!

Comment by ryancarey on Does Economic History Point Toward a Singularity? · 2020-09-08T00:30:53.827Z · score: 4 (2 votes) · EA · GW

One item of feedback: I'd find the summary more satisfying if it gave a bit more detail on the analytic methods used to reach the conclusions. Basically, I understand the summary to say that the early data is noisy, and the new data doesn't fit a hyperbola. But does the data look hyperbolic despite the noise? What shape is the new data? Is there a systematic approach to fitting different models? What model classes are used? How is goodness of fit compared? Even a little such information could go a long way in helping readers to decide what to think, and whether to read the report in full.

Comment by ryancarey on Open Philanthropy: Our Progress in 2019 and Plans for 2020 · 2020-09-06T15:14:49.399Z · score: 2 (1 votes) · EA · GW

We haven't done any call yet!

Comment by ryancarey on Open Philanthropy: Our Progress in 2019 and Plans for 2020 · 2020-09-06T10:41:21.700Z · score: 2 (1 votes) · EA · GW

OpenPhil has introduced early career funding for people who are interested in the long-term future, including AI safety here: https://www.openphilanthropy.org/focus/other-areas/early-career-funding-individuals-interested-improving-long-term-future

This should cause their overall portfolio of AI scholarships to place more weight on the relevance of research done, which seems like an improvement to me.

Comment by ryancarey on AI safety scholarships look worth-funding (if other funding is sane) · 2020-09-06T10:31:07.394Z · score: 4 (3 votes) · EA · GW

Rejoice! OpenPhil is now funding AI safety and other graduate studies here: https://www.openphilanthropy.org/focus/other-areas/early-career-funding-individuals-interested-improving-long-term-future

Comment by ryancarey on A curriculum for Effective Altruists · 2020-08-28T09:37:41.176Z · score: 14 (10 votes) · EA · GW

This stuff is interesting to think about. There have been EA courses before. There could one day be a textbook for effective altruism. There could be a successor to the RSP that offers a degree. Similar stories for "global prioritisation", "macrostrategy", and "AI safety".

Comment by ryancarey on How should we run the EA Forum Prize? · 2020-08-24T10:30:13.933Z · score: 2 (1 votes) · EA · GW

Yeah, I don't know whether/when grants or prizes are better, or exactly what the optimal initial scale is, although presumably you would want to go beyond $22k/yr once it has been demonstrated to work. One would also want to look at why previous prizes, such as Paul's alignment prizes, didn't work out.

I guess I would be granting to individuals similar to those whose work "I've enjoyed reading", per the post above. I also wonder if someone could get Zach Weinersmith to do something EA-related, given how much related stuff he's done previously.

Comment by ryancarey on How should we run the EA Forum Prize? · 2020-08-23T21:41:35.669Z · score: 2 (1 votes) · EA · GW

Here is the kind of initiative that would seem more useful to me than an EA forum prize (if you switched in EA for liberalism): https://mobile.twitter.com/tylercowen/status/1296152595728412675

Comment by ryancarey on RyanCarey's Shortform · 2020-08-17T20:15:28.778Z · score: 20 (7 votes) · EA · GW

Hacking Academia.

Certain opportunities are much more attractive to the impact-minded than to regular academics, and so may be attractive, relative to how competitive they are.

  • The secure nature of EA funding means that tenure is less important (although of course it's still good).
  • Some centers do research on EA-related topics, and are therefore more attractive, such as Oxford, GMU.
  • Universities in or near capital cities, such as Georgetown, UMD College Park, ANU, Ghent, Tsinghua or near other political centers such as NYC, Geneva may offer a perch from which to provide policy input.
  • Those doing interdisciplinary work may want to apply for a department that's strong in a field other than their own. For example, people working in AI ethics may benefit from centers that are great at AI, even if they're weak in philosophy.
  • Certain universities may be more attractive due to being in an EA hub, such as Berkeley, Oxford, UCL, UMD College Park, etc.

Thinking about an academic career in this way makes me think more people should pursue tenure at UMD, Georgetown, and Johns Hopkins (good for both biosecurity and causal models of AI), than I thought beforehand.

Comment by ryancarey on How should we run the EA Forum Prize? · 2020-07-30T11:15:38.870Z · score: 8 (4 votes) · EA · GW

In the last month or so, here are a bunch of things I've enjoyed reading that weren't on the forum:

Blogs:

News (opinion):

Other:

Comment by ryancarey on How should we run the EA Forum Prize? · 2020-07-29T11:52:01.648Z · score: 2 (1 votes) · EA · GW

Yeah, I think high-quality content is spread across many blogs, but it's not terribly hard to find - a lot of it is in blog posts that can be seen by following a hundred Twitter accounts.

I agree that crossposting or linkposting is one way to gather content. I guess that's kind of what subreddits/Hacker News/Twitter all do, but those platforms are more designed for that purpose. Not sure what the best solution is.

Comment by ryancarey on Max_Daniel's Shortform · 2020-07-20T14:08:30.281Z · score: 7 (3 votes) · EA · GW

To evaluate its editability, we can compare AI code to ordinary code, and to the human brain, along various dimensions: storage size, understandability, copyability, etc. (i.e. let's decompose "complexity" into "storage size" and "understandability" to ensure conceptual clarity).

For size, AI code seems more similar to the human brain: AI models are already pretty big, and may be around human-sized by the time a hypothetical AI is created.

For understandability, I would expect AI code to be more like ordinary code than like a human brain. After all, it's created with a known design and an objective that was chosen intentionally. Even if the learned model has a complex architecture, we should be able to understand its relatively simpler training procedure and incentives.

And then, AI code will, like ordinary code - and unlike the human brain - be copyable and have a digital storage medium, which are both potentially critical factors for editing.

Size (i.e. storage complexity) doesn't seem like a very significant factor here.

I'd guess the editability of AI code will resemble that of ordinary code more than that of a human brain. But even if you don't agree, I think this points at a better way to analyse the question.

Comment by ryancarey on Mike Huemer on The Case for Tyranny · 2020-07-17T11:50:24.793Z · score: 4 (4 votes) · EA · GW

It's weird that he doesn't cite https://nickbostrom.com/papers/vulnerable.pdf

Comment by ryancarey on A bill to massively expand NSF to tech domains. What's the relevance for x-risk? · 2020-07-13T13:00:22.180Z · score: 7 (4 votes) · EA · GW

A big expansion of the non-defence science budget, from $8B/yr to $30B+/yr, with ML/genomics/disaster prevention among the focus areas for the additional funding - interesting! Yet this is still less than federal defence R&D spending ($60B/yr), and much less than private R&D ($400B/yr). [1]

I guess groups that already use defence research grants (maybe AI research) or private funding would be affected to a small-to-medium extent, whereas ones that don't (e.g. disaster prevention) could feel a big difference.

1. See Fig 3 and Table 1 at https://fas.org/sgp/crs/misc/R44307.pdf

Comment by ryancarey on Longtermism ⋂ Twitter · 2020-07-10T17:55:44.077Z · score: 13 (5 votes) · EA · GW

Counterpoints: