Posts

Longtermism ⋂ Twitter 2020-06-15T14:19:37.044Z · score: 48 (22 votes)
RyanCarey's Shortform 2020-01-27T22:18:23.751Z · score: 7 (1 votes)
Worldwide decline of the entomofauna: A review of its drivers 2019-07-04T19:06:17.041Z · score: 10 (5 votes)
SHOW: A framework for shaping your talent for direct work 2019-03-12T17:16:44.885Z · score: 140 (73 votes)
AI alignment prize winners and next round [link] 2018-01-20T12:07:16.024Z · score: 7 (7 votes)
The Threat of Nuclear Terrorism MOOC [link] 2017-10-19T12:31:12.737Z · score: 7 (7 votes)
Informatica: Special Issue on Superintelligence 2017-05-03T05:05:55.750Z · score: 7 (7 votes)
Tell us how to improve the forum 2017-01-03T06:25:32.114Z · score: 4 (4 votes)
Improving long-run civilisational robustness 2016-05-10T11:14:47.777Z · score: 9 (9 votes)
EA Open Thread: October 2015-10-10T19:27:04.119Z · score: 1 (1 votes)
September Open Thread 2015-09-13T14:22:20.627Z · score: 0 (0 votes)
Reducing Catastrophic Risks: A Practical Introduction 2015-09-09T22:33:03.230Z · score: 5 (5 votes)
Superforecasters [link] 2015-08-20T18:38:27.846Z · score: 6 (5 votes)
The long-term significance of reducing global catastrophic risks [link] 2015-08-13T22:38:23.903Z · score: 4 (4 votes)
A response to Matthews on AI Risk 2015-08-11T12:58:38.930Z · score: 11 (11 votes)
August Open Thread: EA Global! 2015-08-01T15:42:07.625Z · score: 3 (3 votes)
July Open Thread 2015-07-02T13:41:52.991Z · score: 4 (4 votes)
[Discussion] Are academic papers a terrible discussion forum for effective altruists? 2015-06-05T23:30:32.785Z · score: 3 (3 votes)
Upcoming AMA with new MIRI Executive Director, Nate Soares: June 11th 3pm PT 2015-06-02T15:05:56.021Z · score: 1 (3 votes)
June Open Thread 2015-06-01T12:04:00.027Z · score: 4 (4 votes)
Introducing Alison, our new forum moderator 2015-05-28T16:09:26.349Z · score: 9 (9 votes)
Three new offsite posts 2015-05-18T22:26:18.674Z · score: 4 (4 votes)
May Open Thread 2015-05-01T09:53:47.278Z · score: 1 (1 votes)
Effective Altruism Handbook - Now Online 2015-04-23T14:23:28.013Z · score: 27 (29 votes)
One week left for CSER researcher applications 2015-04-17T00:40:39.961Z · score: 2 (2 votes)
How Much is Enough [LINK] 2015-04-09T18:51:48.656Z · score: 3 (3 votes)
April Open Thread 2015-04-01T22:42:48.295Z · score: 2 (2 votes)
Marcus Davis will help with moderation until early May 2015-03-25T19:12:11.614Z · score: 5 (5 votes)
Rationality: From AI to Zombies was released today! 2015-03-15T01:52:54.157Z · score: 6 (8 votes)
GiveWell Updates 2015-03-11T22:43:30.967Z · score: 4 (4 votes)
Upcoming AMA: Seb Farquhar and Owen Cotton-Barratt from the Global Priorities Project: 17th March 8pm GMT 2015-03-10T21:25:39.329Z · score: 4 (4 votes)
A call for ideas - EA Ventures 2015-03-01T14:50:59.154Z · score: 3 (3 votes)
Seth Baum AMA next Tuesday on the EA Forum 2015-02-23T12:37:51.817Z · score: 7 (7 votes)
February Open Thread 2015-02-16T17:42:35.208Z · score: 0 (0 votes)
The AI Revolution [Link] 2015-02-03T19:39:58.616Z · score: 10 (10 votes)
February Meetups Thread 2015-02-03T17:57:04.323Z · score: 1 (1 votes)
January Open Thread 2015-01-19T18:12:55.433Z · score: 0 (0 votes)
[link] Importance Motivation: a double-edged sword 2015-01-11T21:01:10.451Z · score: 3 (3 votes)
I am Samwise [link] 2015-01-08T17:44:37.793Z · score: 4 (4 votes)
The Outside Critics of Effective Altruism 2015-01-05T18:37:48.862Z · score: 12 (12 votes)
January Meetups Thread 2015-01-05T16:08:38.455Z · score: 0 (0 votes)
CFAR's annual update [link] 2014-12-26T14:05:55.599Z · score: 1 (3 votes)
MIRI posts its technical research agenda [link] 2014-12-24T00:27:30.639Z · score: 4 (6 votes)
Upcoming Christmas Meetups (Upcoming Meetups 7) 2014-12-22T13:21:17.388Z · score: 0 (0 votes)
Christmas 2014 Open Thread (Open Thread 7) 2014-12-15T16:31:35.803Z · score: 1 (1 votes)
Upcoming Meetups 6 2014-12-08T17:29:00.830Z · score: 0 (0 votes)
Open Thread 6 2014-12-01T21:58:29.063Z · score: 1 (1 votes)
Upcoming Meetups 5 2014-11-24T21:02:07.631Z · score: 0 (0 votes)
Open thread 5 2014-11-17T15:57:12.988Z · score: 1 (1 votes)
Upcoming Meetups 4 2014-11-10T13:54:39.551Z · score: 0 (0 votes)

Comments

Comment by ryancarey on How should we run the EA Forum Prize? · 2020-07-30T11:15:38.870Z · score: 8 (4 votes) · EA · GW

Here are a bunch of things I've enjoyed reading in the last month or so that weren't on the forum:

Blogs:

News (opinion):

Other:

Comment by ryancarey on How should we run the EA Forum Prize? · 2020-07-29T11:52:01.648Z · score: 2 (1 votes) · EA · GW

Yeah, I think high-quality content is spread across many blogs. But it's not terribly hard to find - a lot of it is in blog posts that you can surface by following a hundred Twitter accounts.

I agree that crossposting or linkposting is one way to gather content. I guess that's kind of what subreddits, Hacker News, and Twitter all do, but those platforms are better designed for that purpose. I'm not sure what the best solution is.

Comment by ryancarey on Max_Daniel's Shortform · 2020-07-20T14:08:30.281Z · score: 4 (2 votes) · EA · GW

To evaluate its editability, we can compare AI code to ordinary code, and to the human brain, along various dimensions: storage size, understandability, copyability, etc. (i.e. let's decompose "complexity" into "storage size" and "understandability" to ensure conceptual clarity).

For size, AI code seems more similar to the human brain. AI models are already pretty big, so they may be around brain-sized by the time a hypothetical AI is created.

For understandability, I would expect AI code to be more like ordinary code than like a human brain. After all, it's created intentionally, with a known design and objective. Even if the learned model has a complex architecture, we should be able to understand its relatively simpler training procedure and incentives.

And AI code will, like ordinary code - and unlike the human brain - be copyable and stored digitally, both of which are potentially critical factors for editing.

Size (i.e. storage complexity) doesn't seem like a very significant factor here.

I'd guess the editability of AI code will resemble the editability of ordinary code more than that of a human brain. But even if you don't agree, I think this points at a better way to analyse the question.

Comment by ryancarey on Mike Huemer on The Case for Tyranny · 2020-07-17T11:50:24.793Z · score: 4 (4 votes) · EA · GW

It's weird that he doesn't cite https://nickbostrom.com/papers/vulnerable.pdf

Comment by ryancarey on A bill to massively expand NSF to tech domains. What's the relevance for x-risk? · 2020-07-13T13:00:22.180Z · score: 7 (4 votes) · EA · GW

A big expansion of the non-defence science budget, from $8B/yr to $30B+/yr, with ML, genomics, and disaster prevention among the focus areas for the additional funding - interesting! Yet this is still less than federal defence research spending (~$60B/yr), and much less than private R&D (~$400B/yr). [1]

I guess groups that are already using defence research grants (maybe AI research) or private funding would be affected to a small-to-medium extent, whereas ones that are not (disaster prevention) could feel a big difference.

1. See Fig 3 and Table 1 at https://fas.org/sgp/crs/misc/R44307.pdf
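
As a rough sanity check of the relative scale (a sketch using only the round figures above, not the underlying budget lines):

    # Rough scale comparison using the round figures quoted above (all in $B/yr).
    science_before, science_after = 8, 30   # non-defence science budget, current vs proposed
    defence_research = 60                   # federal defence research spending
    private_rd = 400                        # private R&D
    print(f"Expansion factor: ~{science_after / science_before:.1f}x")               # ~3.8x
    print(f"Fraction of defence research: ~{science_after / defence_research:.0%}")  # ~50%
    print(f"Fraction of private R&D: ~{science_after / private_rd:.0%}")             # ~8%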

Comment by ryancarey on Longtermism ⋂ Twitter · 2020-07-10T17:55:44.077Z · score: 12 (4 votes) · EA · GW

Counterpoints:

Comment by ryancarey on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-06-30T20:13:24.826Z · score: 8 (4 votes) · EA · GW

For Covid-19 spread, what seems to be the relative importance of: 1) climate, 2) behaviour, and 3) seroprevalence?

Comment by ryancarey on How should we run the EA Forum Prize? · 2020-06-26T14:47:54.222Z · score: 2 (1 votes) · EA · GW

The comment was probably strong-downvoted because it is confidently wrong on two counts:

1. The EA Forum only exists to promote impactful ideas. So to say that the question "where are impactful ideas?" is a distraction from the question "when should we post on the Forum?" is to have things entirely backwards. To promote good ideas, we do need to know where they are.

2. We are trying to address what a community-builder should do, not a content-creator. It is a non-sequitur to try to replace the important meta-questions of what infrastructure and incentives there should be, with the question of when an individual should post to the forum.

Comment by ryancarey on How should we run the EA Forum Prize? · 2020-06-23T15:00:37.078Z · score: 4 (5 votes) · EA · GW

Almost all content useful to EAs is not written on the forum, and almost all authors who could write such content will not write it on the forum. So it would be a lot more valuable to reward good content whether or not it is on the forum. It is harder to evaluate all content, but one can consider nominated content. If this is outside one's job description, then can one change the job description?

Comment by ryancarey on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-18T16:52:21.002Z · score: 19 (7 votes) · EA · GW

One relevant datapoint is Stripe Press. The tech company Stripe promotes some books on startups and progress studies, with the stated goal of sharing ideas that would inspire startups (that might use their product). They outsource the printing.

Does the rate of consumption of books increase when Stripe reprints them?

Yes.

  • Of its 600 ratings, The Dream Machine has received 300 since Nov 2018 (published in 2001, re-published in Sep 2018), based on viewing the 10th page of ratings sorted by new. So it's read at ~10x the previous rate (rough rate arithmetic sketched after this list).
  • Of its 900 ratings, Stubborn Attachments has received 300 since Jun 2019 (published in Jul 2016, re-released in Oct 2018). So it seems to have roughly doubled the previous rate.
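
The ~10x figure for The Dream Machine can be sanity-checked with rough rate arithmetic (a sketch only - the dates are approximate, and the mid-2020 endpoint is my assumption about when the counts were taken):

    # Approximate before/after rating rates for The Dream Machine.
    ratings_before, years_before = 300, 2018.9 - 2001.0   # ~17.9 years: publication to Nov 2018
    ratings_after, years_after = 300, 2020.5 - 2018.9     # ~1.6 years: re-publication to mid-2020
    rate_before = ratings_before / years_before           # ~17 ratings/year
    rate_after = ratings_after / years_after              # ~190 ratings/year
    print(f"Rate increase: ~{rate_after / rate_before:.0f}x")  # ~11x, consistent with "~10x"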

But these books are unpopular relative to Superintelligence, which has 12k ratings, and TLYCS, which has 4k. We can see that reprinting can help revive less popular books, but it's far from clear that it would help already-thriving ones, especially if it cut the flow of the book into physical bookstores - it could just as easily hinder. So it'll be interesting to see more data.

Comment by ryancarey on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-18T12:48:24.545Z · score: 16 (7 votes) · EA · GW

Nice. We could check how many actually read the book by noting whether the book accumulated Goodreads ratings more quickly after the 10-year anniversary - especially once another 1-2 years have passed.

Comment by ryancarey on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T12:17:09.457Z · score: 19 (12 votes) · EA · GW

The key question here is whether (and, if so, to what degree) free download is a more effective means of distribution than regular book sales. So we should ask Peter Singer how consumption of TLYCS changed once he put the book online. Or, if there are other books that were distributed simultaneously through conventional and unconventional channels, how many people did each distribution method reach?

Comment by ryancarey on Open Philanthropy: Our Progress in 2019 and Plans for 2020 · 2020-05-14T20:22:29.621Z · score: 12 (7 votes) · EA · GW

Hey Catherio, sure, I've been puzzled by this for long enough that I'll probably reach out for a call.

Community effects could still be mediated by the relevance of participants' research interests. Anyway, I'm also pretty uncertain and interested to see the results as they come in over the coming years.

Comment by ryancarey on Open Philanthropy: Our Progress in 2019 and Plans for 2020 · 2020-05-12T18:20:22.320Z · score: 45 (19 votes) · EA · GW
  • Here's an updated ipynb with OpenPhil's annual spending, showing the breakdown with respect to EA-relevant areas.

My main impressions:

  • Having Ben Delo's participation is great.
  • OpenPhil and its staff working hard on allocating these funds is absolutely great (it's obvious, yet worth saying over and over again).
  • It would be nice to see more new kinds of grants (to longtermist causes) by EA, via OpenPhil and otherwise. The kinds of grants made have been relatively stagnant over the last few years - e.g. the typical x-risk grant is a few million dollars to an academic research group. Can we also fund more interventions, or projects in other sectors?
  • The AI OpenPhil Scholarships place substantial weight on the excellence of applicants' supervision, institutional affiliation and publication record. But there seems to be very little weight on the relevance of work done - I've only come across a few papers by any of the 2018-2020 applicants through my work on various aspects of AI x-risk. I've heard many people better-informed than me argue that this is likely to be relatively unproductive, in the sense that excellent researchers working in unrelated areas will tend to accept funding without substantially shifting their research direction. I'm as excited about academic excellence as almost anyone in AI safety, yet in the case of the OpenPhil Scholarships, this assessment sounds about right to me, and I haven't really heard anyone arguing the opposing view - it would be interesting to understand this thinking better.
Comment by ryancarey on EA Forum Prize: Winners for December 2019 · 2020-01-31T23:24:47.629Z · score: 19 (11 votes) · EA · GW

Larks' post was one of the best of the year, so it's nice of him to effectively make a hundreds-of-dollars donation to the EA Forum Prize!

Comment by ryancarey on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T20:50:07.503Z · score: 2 (1 votes) · EA · GW

Yep, that's it.

Comment by ryancarey on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T16:03:53.694Z · score: 38 (15 votes) · EA · GW

Have you heard of Neumeier's naming criteria? They're designed for businesses, but I think they're an OK heuristic. I'd agree that there are better names available, e.g.:

  • CEEALAR. Distinctiveness: 1, Brevity: 1, Appropriateness: 4, Easy spelling and pronunciation: 1, Likability: 2, Extendability: 1, Protectability: 4.
  • Athena Centre. 4,4,4,4,4,4,4
  • EA Study Centre. 3,3,4,3,3,3,3.
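
Totalling the scores is my own addition rather than part of Neumeier's framework, but it makes the comparison easy to read:

    # Totals across the seven criteria, using the scores listed above.
    scores = {
        "CEEALAR":         [1, 1, 4, 1, 2, 1, 4],
        "Athena Centre":   [4, 4, 4, 4, 4, 4, 4],
        "EA Study Centre": [3, 3, 4, 3, 3, 3, 3],
    }
    for name, s in scores.items():
        print(name, sum(s))   # CEEALAR: 14, Athena Centre: 28, EA Study Centre: 22
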
Comment by ryancarey on RyanCarey's Shortform · 2020-01-29T11:13:14.806Z · score: 3 (2 votes) · EA · GW

Tom Inglesby on nCoV response is one recent example from just the last few days. I've generally known Stefan Schubert, Eliezer Yudkowsky, Julia Galef, and others to make very insightful comments there. I'm sure there are very many other examples.

Generally speaking, though, the philosophy would be to go to the platforms that top contributors are actually using, and offer our services there, rather than trying to push them onto ours, or at least to complement the latter with the former.

Comment by ryancarey on RyanCarey's Shortform · 2020-01-27T22:18:23.891Z · score: 9 (2 votes) · EA · GW

Possible EA intervention: just like the EA Forum Prizes, but for the best Tweets (from an EA point-of-view) in a given time window.

Reasons this might be better than the EA Forum Prize:

1) Popular tweets have greater reach than popular forum posts, so this could promote EA more effectively

2) The prizes could go to EAs who are not regular forum users, which could also help to promote EA more effectively.

One would have to check the rules and regulations.

Comment by ryancarey on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-14T00:35:59.546Z · score: 2 (1 votes) · EA · GW

Hmm, but is it good or sustainable to repeatedly switch parties?

Comment by ryancarey on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T07:21:59.057Z · score: 11 (7 votes) · EA · GW

Interesting point of comparison: the Conservative Party has ~35% as many members, and has held government ~60% more often over the last 100 years, so the leverage per member is ~4.5x higher. Although for many people, their ideology means they cannot credibly be involved in one party or the other.
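
As a quick check of the ~4.5x figure, using the rough proportions in the comment:

    # Leverage per member, from the rough figures above.
    relative_members = 0.35       # Conservatives have ~35% as many members
    relative_time_in_govt = 1.6   # ~60% more time in government over the last 100 years
    print(f"~{relative_time_in_govt / relative_members:.1f}x leverage per member")  # ~4.6x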

Comment by ryancarey on Long-term investment fund at Founders Pledge · 2020-01-11T00:30:40.491Z · score: 4 (2 votes) · EA · GW

The obvious approach would be to invest in the stock market by default (or maybe a leveraged ETF?), and only move money from that into other investments when they have higher EV.

Comment by ryancarey on Pablo_Stafforini's Shortform · 2020-01-10T01:06:24.751Z · score: 15 (7 votes) · EA · GW

I think Pablo is right about points (1) and (3). Community Favorites is quite net-negative for my experience of the forum (because it repeatedly shows the same old content), and probably likewise for users on average. "Community" seems to needlessly complicate the posting experience, whose simplicity should be valued highly.

Comment by ryancarey on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-19T17:03:03.250Z · score: 17 (7 votes) · EA · GW
Of these categories, I am most excited by the Individual Research, Event and Platform projects. I am generally somewhat sceptical of paying people to ‘level up’ their skills.

If I'm understanding the categories correctly, I agree here.

While generally good, one side effect of this (perhaps combined with the fact that many low-hanging fruits of the insight tree have been plucked) is that a considerable amount of low-quality work has been produced. Furthermore, the conventional peer review system seems to be extremely bad at dealing with this issue... Perhaps you, enlightened reader, can judge that “How to solve AI Ethics: Just use RNNs” is not great. But is it really efficient to require everyone to independently work this out?

I agree. I think part of the equation is that peer review does not just filter papers "in" or "out" - it accepts them to a journal of a certain quality. Many bad papers will get into weak journals, but will usually be read much less. Researchers who read these papers cite them, also taking their quality into account, thereby boosting the readership of good papers. Finally, a core of elite researchers bats down arguments that, despite being misguided, are weirdly attractive enough to make it through the earlier filters. I think this process works okay in general, and can also work okay in AI safety.

I do have some ideas for improving our process though, basically to establish a steeper incentive gradient for research quality (in the dimensions of quality that we care about): (i) more private and public criticism of misguided work, (ii) stronger filters on papers being published in safety workshops, probably by agreeing to have fewer workshops, with fewer papers, and by largely ignoring any extra workshops from "rogue" creators, and (iii) funding undersupervised talent-pipeline projects a bit more carefully.

One thing I would like to see more of in the future is grants for PhD students who want to work in the area. Unfortunately at present I am not aware of many ways for individual donors to practically support this.

Filtering ~100 applicants down to a few accepted scholarship recipients is not that different to what CHAI and FHI already do in selecting interns. The expected outputs seem at least comparably high. So I think choosing scholarship recipients would be similarly good value in terms of evaluators' time, and also a pretty good use of funds.

--

It's an impressive effort, as in previous years! One meta-thought: if you stop providing this service at some point, it might be worth reaching out to the authors of the Alignment Newsletter to ask whether they, or anyone they know, would jump in to fill the breach.

Comment by ryancarey on Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team · 2019-12-02T09:49:03.509Z · score: 11 (6 votes) · EA · GW

Yep, I'd actually just asked to clarify this. I'm listing schools that are good for doing safety work in particular, and the list may also be biased toward places I know about. If people are trying to become professors, or are not interested in doing safety work during their PhD, then I agree they should look at a standard CS university ranking, which would look like what you describe.

That said, at Oxford there are ~10 CS PhD students interested in safety, and a few researchers, and FHI scholarships, which is why it makes it to the Amazing tier. At Imperial, there are 2 students and one professor. But happy to see this list improved.

Comment by ryancarey on Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team · 2019-12-01T02:14:24.112Z · score: 4 (3 votes) · EA · GW

On a short skim, this seems more like a research agenda? There are a few research agendas by now...

The only lit review I've seen is [1]. I probably should've said I haven't seen any great lit reviews, because I felt this one was OK - it covered a lot of ground. However, it is a couple of years old, and it didn't organize the work in a way that was satisfying for me.

1. Everitt, Tom, Gary Lea, and Marcus Hutter. "AGI safety literature review." arXiv preprint arXiv:1805.01109 (2018).

Comment by ryancarey on Update on CEA's EA Grants Program · 2019-11-14T13:45:15.331Z · score: 7 (4 votes) · EA · GW

I think the option of having a (possibly renamed) EA Grants as one option within EA Funds is interesting. It could preserve almost all of the benefits (one extra independent grantmaker picking different kinds of targets) while cutting maybe half of the overhead, and it would clarify the difference between EA Grants and EA Funds.

Comment by ryancarey on Only a few people decide about funding for community builders world-wide · 2019-10-22T22:17:02.467Z · score: 20 (8 votes) · EA · GW

Given that community groups are much more homogeneous funding targets than EA projects in general, it makes perfect sense that we allocate one CEA team to evaluating them, while allocating a few teams to evaluating other small-scale EA projects.

Comment by ryancarey on Ineffective Altruism: Are there ideologies which generally cause there adherents to have worse impacts? · 2019-10-17T09:35:49.794Z · score: 13 (9 votes) · EA · GW

Many infamous ideologies have impaired decision-making in important positions, leading to terrible consequences like wars and harmful revolutions: communism, fascism, ethno-nationalism, racism, etc.

Comment by ryancarey on What would EAs most want to see from a "rationality" or similar project in the EA space? · 2019-10-10T09:24:18.872Z · score: 17 (7 votes) · EA · GW

I've become pretty pessimistic about rationality-improvement as an intervention, especially to the extent that it involves techniques that are domain-general, with a large subjective element and placebo effect/participant cost. Basically, most interventions of this sort haven't worked, though they induce tonnes of biases that allow them to display positive testimonials: placebo effects, liking the instructors, having a break from work, getting to think about interesting stuff, branding of techniques, choice-supportive bias, biased sampling of testimonials, and so on.

The nearest things that I'd be interested in would be: 1) domain-specific training that delivers skills and information from trained experts in a particular area, such as research, 2) freely available online reviews of the literature on rationality interventions, similar to what gwern does for nootropics, 3) new controlled experiments on existing rationality programs such as Leverage and CFAR, and 4) training in risk assessment for high-risk groups like policymakers.

Comment by ryancarey on What should Founders Pledge research? · 2019-09-11T10:43:05.654Z · score: 9 (3 votes) · EA · GW

I think it's a reasonable concern, especially for AI and bio, and I guess that is part of what a grantmaker might investigate. Any such negative effect could be offset by: (1) associating scientific quality with EA / recruiting competent scientists into EA, (2) improving the quality of risk-reducing research, and (3) improving commentary/reflection on science (which could help with identifying risky research). My instinct is that (1-3) outweigh the risk-increasing effects, at least for many projects in this space, and that most relevant experts would think so, but it would be worth asking around.

Comment by ryancarey on What should Founders Pledge research? · 2019-09-11T10:36:13.563Z · score: 5 (3 votes) · EA · GW

I don't have any inside info, and perhaps "pressure" is too strong, but Holden reported receiving advice in that direction in 2016:

"Paul Christiano and Carl Shulman–a couple of individuals I place great trust in (on this topic)–have argued to me that Open Phil’s grant to MIRI should have been larger. (Note that these individuals have some connections to MIRI and are not wholly impartial.) Some other people I significantly trust on this topic are very non-enthusiastic about MIRI’s work, but having a couple of people making the argument in favor carries substantial weight with me from a “let many flowers bloom”/”cover your bases” perspective. (However, I expect that the non-enthusiastic people will be less publicly vocal, which I think is worth keeping in mind in this context.)"
Comment by ryancarey on What should Founders Pledge research? · 2019-09-10T16:08:53.398Z · score: 22 (11 votes) · EA · GW

[My views only]

Thanks for putting up with my follow-up questions.

Out of the areas you mention, I'd be very interested in:

  • Improving science. Things like academia.edu and Sci-Hub have been interesting, as are efforts to replace LaTeX and to improve publishing incentives. In general, there seems to be plenty of room for improvement!

I'd be interested in:

  • Improving political institutions and political wisdom: EA might need to escalate its involvement in many areas adjacent to this, such as policy at the intersection of great-power relations and pivotal technologies. It would be very interesting to better understand what can be done with funding alone.
  • Reducing political bias and partisanship: this seems hard, but somewhat important. Most lobbyists are not trying to do this, and Russia is actively trying to do the opposite. It would be interesting if more could be done in this space. Fact-checking websites and investigative journalism (e.g. Bellingcat) are interesting here too. Another interesting area is counteracting political corruption.
  • Sundry x-risks/GCRs

I'd be a little interested in:

  • Increasing economic growth

I think the other might be disadvantageous, based on my understanding that it's better for EA to train people up in longtermist-relevant areas, and to be perceived as being focused on the same.

Out of those you haven't mentioned, but that seem similar, I'd also be interested in:

  • Promotion of effective altruism
  • Scholarships for people working on high-impact research
  • More on AI safety - OpenPhil seems to be funding high-prestige, mostly-aligned figures (e.g. Stuart Russell, OpenAI) and high-prestige, unaligned figures (e.g. their fellows), but has mostly not funded low-to-mid-prestige, highly-aligned figures (with the notable exceptions of MIRI, Michael C and Dima K). Other small but comparably informed funders mostly favor low-to-mid-prestige, highly-aligned targets to a greater extent - e.g. Paul's funding for AI safety research, and Paul and Carl arguing to OpenPhil that they should fund MIRI more. I think there are residual opportunities to fund other low-to-mid-prestige, highly-aligned figures. [edited for clarity]
Comment by ryancarey on What should Founders Pledge research? · 2019-09-09T22:30:21.532Z · score: 2 (1 votes) · EA · GW

No problem. I've also had a skim of the x-risk report to get an idea of what research you're talking about.

Would you expect the donors to be much more interested in some of the areas you mention than others, or similarly interested in all the areas?

Comment by ryancarey on What should Founders Pledge research? · 2019-09-09T21:51:49.558Z · score: 2 (1 votes) · EA · GW

Cool! Are you able to indicate roughly what order of magnitude of donations you would expect to contribute per year, over the next few years, in the promising areas (or any of the others, if they're significantly bigger than those), such as:

Donors focused on the long-term future of sentient life.
Donors focused on GCRs and existential risk.
Improving science
Sundry x-risks/GCRs
Improving political institutions and political wisdom

?

Comment by ryancarey on What should Founders Pledge research? · 2019-09-09T21:10:29.109Z · score: 4 (5 votes) · EA · GW

I'd need a better understanding of how Founders Pledge works to be able to say anything intelligent. I'm guessing the idea is something like:

  • when founders are due to donate, you prompt them
  • you ask them what kind of advice they would like
  • you give them some research relevant to that, and perhaps also make specific recommendations (?)
  • they make donations directly

Is that how it actually happens?

Comment by ryancarey on Funding chains in the x-risk/AI safety ecosystem · 2019-09-09T08:53:40.007Z · score: 15 (15 votes) · EA · GW

This is interesting. However, the graph is also fairly misleading in putting OpenPhil on the same footing as an individual ETG funder, even though OpenPhil disburses fully 1000x more funds. Maybe you could set edge widths to correspond to funding volumes? Also, do you think that by moving the nodes around you could reduce the extent to which lines cross over each other, to increase clarity?
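
For example, edge widths could be scaled to funding volume, perhaps on a log scale given that OpenPhil is ~1000x larger (a sketch only - the funder names and dollar amounts below are placeholders, not figures from the post):

    # Sketch: draw the funding graph with edge widths scaled by (log) grant size.
    import math
    import graphviz

    grants = [  # (funder, recipient, $ per year) - placeholder values
        ("OpenPhil", "Org A", 10_000_000),
        ("OpenPhil", "Org B", 3_000_000),
        ("Individual ETG funder", "Org B", 20_000),
    ]

    dot = graphviz.Digraph("funding")
    for funder, recipient, amount in grants:
        width = 0.5 + math.log10(amount / 10_000)   # thicker edges for larger grants
        dot.edge(funder, recipient, penwidth=str(width))
    dot.render("funding_graph", format="png")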

Comment by ryancarey on Are we living at the most influential time in history? · 2019-09-05T15:36:39.882Z · score: 11 (9 votes) · EA · GW

Criticality is confusing because it describes the point when a nuclear reaction becomes self-sustaining, and relates to "critical points" in the related area of dynamical systems, which is somewhat different from what we're talking about.

I think hingeyness should have a simple name because it is not a complicated concept - it's how much actions affect long-run outcomes. In RL, in discussions of prioritized experience replay, we would just use something like "importance". I would generally use "(long-run) importance" or "(long-run) influence" here, though I guess "pivotality" (from Yudkowsky's "pivotal act") is alright in a jargon-friendly context (like academic papers).

Edit: From Carl's comment, and from rereading the post, the per-resource component seems key. So maybe per-resource importance.

Comment by ryancarey on Ask Me Anything! · 2019-08-20T22:13:38.547Z · score: 41 (18 votes) · EA · GW

I think we need to figure out how to better collectively manage the fact that political affiliation is a shortcut to power (and hence impact), yet politicisation is a great recipe for blowing up the movement. It would be a shame if avoiding politics altogether is the best we can do.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:48:53.492Z · score: 3 (2 votes) · EA · GW

A lot of EAs I know consider Dennett their favourite author - he was my favourite around that age. He's an unconventional philosopher who covers a wide range of topics, from evolution to consciousness, and whose later books (like this one) are more accessible than his early stuff.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:47:45.158Z · score: 2 (1 votes) · EA · GW

The most famous historical utilitarian, Mill, grew up as a child prodigy, intensively tutored in university-level subjects by his father, James Mill. I found it a moving story, and gifted teenagers might be able to relate to some of the troubles that Mill experienced some 160 years ago.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:46:06.184Z · score: 2 (1 votes) · EA · GW

Feynman is one of the great public intellectuals, and I loved this book. A gripping and hilarious read that teaches you a lot about the kind of clear thinking that is required to solve real-world problems. It could change a gifted kid's perspective for sure.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:44:48.009Z · score: 5 (4 votes) · EA · GW

Stories of Your Life and Others by Ted Chiang

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:44:30.078Z · score: 2 (3 votes) · EA · GW

From Bacteria to Bach and Back by Daniel Dennett

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:44:14.386Z · score: 6 (3 votes) · EA · GW

Permutation City by Greg Egan

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:58.811Z · score: 12 (4 votes) · EA · GW

Autobiography by John Stuart Mill


Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:42.456Z · score: 22 (10 votes) · EA · GW

Surely You're Joking, Mr. Feynman! by Richard Feynman

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:22.447Z · score: 17 (11 votes) · EA · GW

Reasons and Persons by Derek Parfit

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:06.560Z · score: 3 (3 votes) · EA · GW

(upcoming) Human Compatible by Stuart Russell

Comment by ryancarey on The EA Forum is a News Feed · 2019-07-29T13:41:44.690Z · score: 12 (5 votes) · EA · GW

I think the present EA Forum is most like Reddit, among forms of social media, so yes, kinda like a news feed. But I think the Possible Drawbacks of switching to a classic forum are probably larger than the stated Problems with the current setup. I'd rather see the problems fixed within the current framework.

On the problems:

  • I would note that it's not super-easy to improve search - Facebook and old-school forums were never particularly searchable either. My preferred way to fix this would be to have a search bar where you can type any term and see the posts on that topic sorted by upvotes, like here (a rough sketch of what I mean follows this list).
  • The forum can indeed give an underwhelming impression. But perhaps this could be addressed by (i) having posts accompanied by a snippet of their content, a la Reddit, (ii) simply placing grey horizontal lines between the listed posts to delineate them, or (iii) darkening the text to improve readability and ease of engagement.
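
A rough sketch of the search behaviour I have in mind (the Post fields here are illustrative, not the forum's actual data model):

    # Sketch: filter posts by a search term, then sort the matches by upvotes.
    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        body: str
        upvotes: int

    def search(posts, term):
        term = term.lower()
        matches = [p for p in posts if term in p.title.lower() or term in p.body.lower()]
        return sorted(matches, key=lambda p: p.upvotes, reverse=True)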

On the drawbacks:

  • Increasing overall post quality is one of the primary challenges for the forum, so that seems like a serious cost of switching to a classic forum. Although people who would produce great content are sometimes intimidated out of doing so, the reverse is also a problem, even in the current setting - people who produce low-quality content go ahead and do so. I don't have a strong feeling that at present we should be pushing hard in either direction.

Overall, the picture is that the current problems might be easier to fix than those that would arise in a switch to an old-school forum.