Posts

Worldwide decline of the entomofauna: A review of its drivers 2019-07-04T19:06:17.041Z · score: 10 (5 votes)
SHOW: A framework for shaping your talent for direct work 2019-03-12T17:16:44.885Z · score: 129 (68 votes)
AI alignment prize winners and next round [link] 2018-01-20T12:07:16.024Z · score: 7 (7 votes)
The Threat of Nuclear Terrorism MOOC [link] 2017-10-19T12:31:12.737Z · score: 7 (7 votes)
Informatica: Special Issue on Superintelligence 2017-05-03T05:05:55.750Z · score: 7 (7 votes)
Tell us how to improve the forum 2017-01-03T06:25:32.114Z · score: 4 (4 votes)
Improving long-run civilisational robustness 2016-05-10T11:14:47.777Z · score: 9 (9 votes)
EA Open Thread: October 2015-10-10T19:27:04.119Z · score: 1 (1 votes)
September Open Thread 2015-09-13T14:22:20.627Z · score: 0 (0 votes)
Reducing Catastrophic Risks: A Practical Introduction 2015-09-09T22:33:03.230Z · score: 5 (5 votes)
Superforecasters [link] 2015-08-20T18:38:27.846Z · score: 4 (4 votes)
The long-term significance of reducing global catastrophic risks [link] 2015-08-13T22:38:23.903Z · score: 4 (4 votes)
A response to Matthews on AI Risk 2015-08-11T12:58:38.930Z · score: 11 (11 votes)
August Open Thread: EA Global! 2015-08-01T15:42:07.625Z · score: 3 (3 votes)
July Open Thread 2015-07-02T13:41:52.991Z · score: 4 (4 votes)
[Discussion] Are academic papers a terrible discussion forum for effective altruists? 2015-06-05T23:30:32.785Z · score: 3 (3 votes)
Upcoming AMA with new MIRI Executive Director, Nate Soares: June 11th 3pm PT 2015-06-02T15:05:56.021Z · score: 1 (3 votes)
June Open Thread 2015-06-01T12:04:00.027Z · score: 4 (4 votes)
Introducing Alison, our new forum moderator 2015-05-28T16:09:26.349Z · score: 9 (9 votes)
Three new offsite posts 2015-05-18T22:26:18.674Z · score: 4 (4 votes)
May Open Thread 2015-05-01T09:53:47.278Z · score: 1 (1 votes)
Effective Altruism Handbook - Now Online 2015-04-23T14:23:28.013Z · score: 26 (28 votes)
One week left for CSER researcher applications 2015-04-17T00:40:39.961Z · score: 2 (2 votes)
How Much is Enough [LINK] 2015-04-09T18:51:48.656Z · score: 3 (3 votes)
April Open Thread 2015-04-01T22:42:48.295Z · score: 2 (2 votes)
Marcus Davis will help with moderation until early May 2015-03-25T19:12:11.614Z · score: 5 (5 votes)
Rationality: From AI to Zombies was released today! 2015-03-15T01:52:54.157Z · score: 6 (8 votes)
GiveWell Updates 2015-03-11T22:43:30.967Z · score: 4 (4 votes)
Upcoming AMA: Seb Farquhar and Owen Cotton-Barratt from the Global Priorities Project: 17th March 8pm GMT 2015-03-10T21:25:39.329Z · score: 4 (4 votes)
A call for ideas - EA Ventures 2015-03-01T14:50:59.154Z · score: 3 (3 votes)
Seth Baum AMA next Tuesday on the EA Forum 2015-02-23T12:37:51.817Z · score: 7 (7 votes)
February Open Thread 2015-02-16T17:42:35.208Z · score: 0 (0 votes)
The AI Revolution [Link] 2015-02-03T19:39:58.616Z · score: 10 (10 votes)
February Meetups Thread 2015-02-03T17:57:04.323Z · score: 1 (1 votes)
January Open Thread 2015-01-19T18:12:55.433Z · score: 0 (0 votes)
[link] Importance Motivation: a double-edged sword 2015-01-11T21:01:10.451Z · score: 3 (3 votes)
I am Samwise [link] 2015-01-08T17:44:37.793Z · score: 4 (4 votes)
The Outside Critics of Effective Altruism 2015-01-05T18:37:48.862Z · score: 11 (11 votes)
January Meetups Thread 2015-01-05T16:08:38.455Z · score: 0 (0 votes)
CFAR's annual update [link] 2014-12-26T14:05:55.599Z · score: 1 (3 votes)
MIRI posts its technical research agenda [link] 2014-12-24T00:27:30.639Z · score: 4 (6 votes)
Upcoming Christmas Meetups (Upcoming Meetups 7) 2014-12-22T13:21:17.388Z · score: 0 (0 votes)
Christmas 2014 Open Thread (Open Thread 7) 2014-12-15T16:31:35.803Z · score: 1 (1 votes)
Upcoming Meetups 6 2014-12-08T17:29:00.830Z · score: 0 (0 votes)
Open Thread 6 2014-12-01T21:58:29.063Z · score: 1 (1 votes)
Upcoming Meetups 5 2014-11-24T21:02:07.631Z · score: 0 (0 votes)
Open thread 5 2014-11-17T15:57:12.988Z · score: 1 (1 votes)
Upcoming Meetups 4 2014-11-10T13:54:39.551Z · score: 0 (0 votes)
Open Thread 4 2014-11-03T16:57:07.873Z · score: 1 (1 votes)
Upcoming Meetups 3 2014-10-27T22:02:04.564Z · score: 0 (0 votes)

Comments

Comment by ryancarey on What should Founders Pledge research? · 2019-09-11T10:43:05.654Z · score: 9 (3 votes) · EA · GW

I think it's a reasonable concern, especially for AI and bio, and I guess that is part of what a grantmaker might investigate. Any such negative effect could be offset by: (1) associating scientific quality with EA and recruiting competent scientists into EA, (2) improving the quality of risk-reducing research, and (3) improving commentary/reflection on science (which could help with identifying risky research). My instinct is that (1-3) outweigh the risk-increasing effects, at least for many projects in this space, and that most relevant experts would think so, but it would be worth asking around.

Comment by ryancarey on What should Founders Pledge research? · 2019-09-11T10:36:13.563Z · score: 5 (3 votes) · EA · GW

I don't have any inside info, and perhaps "pressure" is too strong, but Holden reported receiving advice in that direction in 2016:

"Paul Christiano and Carl Shulman–a couple of individuals I place great trust in (on this topic)–have argued to me that Open Phil’s grant to MIRI should have been larger. (Note that these individuals have some connections to MIRI and are not wholly impartial.) Some other people I significantly trust on this topic are very non-enthusiastic about MIRI’s work, but having a couple of people making the argument in favor carries substantial weight with me from a “let many flowers bloom”/”cover your bases” perspective. (However, I expect that the non-enthusiastic people will be less publicly vocal, which I think is worth keeping in mind in this context.)"

Comment by ryancarey on What should Founders Pledge research? · 2019-09-10T16:08:53.398Z · score: 19 (9 votes) · EA · GW

[My views only]

Thanks for putting up with my follow-up questions.

Out of the areas you mention, I'd be very interested in:

  • Improving science. Things like academia.edu and sci-hub have been interesting, as are efforts to replace LaTeX and to improve publishing incentives. In general, there seems to be plenty of room for improvement!

I'd be interested in:

  • Improving political institutions and political wisdom: EA might need to escalate its involvement in many areas adjacent to this, such as policy at the intersection of great-power relations or pivotal technologies. It would be very interesting to better understand what can be done with funding alone.
  • Reducing political bias and partisanship: this seems hard, but somewhat important. Most lobbyists are not trying to do this, and Russia is actively trying to do the opposite. It would be interesting to see whether more can be done in this space. Fact-checking websites and investigative journalism (e.g. Bellingcat) are interesting here too, as is counteracting political corruption.
  • Sundry x-risks/GCRs

I'd be a little interested in:

  • Increasing economic growth

I think the other might be disadvantageous, based on my understanding that it's better for EA to train people up in longtermist-relevant areas, and to be perceived as being focused on the same.

Out of those you haven't mentioned, but that seem similar, I'd also be interested in:

  • Promotion of effective altruism
  • Scholarships for people working on high-impact research
  • More on AI safety: OpenPhil seems to be funding high-prestige mostly-aligned figures (e.g. Stuart Russell, OpenAI) and high-prestige unaligned figures (e.g. their fellows), but has mostly not funded low-to-mid-prestige highly-aligned figures (with the notable exceptions of MIRI, Michael C and Dima K). Other small but comparably informed funders mostly favor low-to-mid-prestige highly-aligned targets to a greater extent, e.g. Paul's funding for AI safety research, and Paul and Carl argued to OpenPhil that they should fund MIRI more. I think there are residual opportunities to fund other low-to-mid-prestige highly-aligned figures. [edited for clarity]

Comment by ryancarey on What should Founders Pledge research? · 2019-09-09T22:30:21.532Z · score: 2 (1 votes) · EA · GW

No problem. I've also had a skim of the x-risk report to get an idea of what research you're talking about.

Would you expect the donors to be much more interested in some of the areas you mention than others, or similarly interested in all the areas?

Comment by ryancarey on What should Founders Pledge research? · 2019-09-09T21:51:49.558Z · score: 2 (1 votes) · EA · GW

Cool! Are you able to indicate roughly what order of magnitude of donations you would expect to contribute per year, over the next few years, in the promising areas (or any of the others, if they're significantly bigger than those), such as:

  • Donors focused on the long-term future of sentient life
  • Donors focused on GCRs and existential risk
  • Improving science
  • Sundry x-risks/GCRs
  • Improving political institutions and political wisdom

?

Comment by ryancarey on What should Founders Pledge research? · 2019-09-09T21:10:29.109Z · score: 4 (5 votes) · EA · GW

I'd need a better understanding of how Founders Pledge works to be able to say anything intelligent. I'm guessing the idea is something like:

  • when founders are due to donate, you prompt them
  • you ask them what kind of advice they would like
  • you give them some research relevant to that, and do/don't make specific recommendations ???
  • they make donations directly

Is that how it actually happens?

Comment by ryancarey on Funding chains in the x-risk/AI safety ecosystem · 2019-09-09T08:53:40.007Z · score: 16 (15 votes) · EA · GW

This is interesting. However, the graph is also fairly misleading in putting OpenPhil on the same footing as an individual ETG funder, even though OpenPhil is disbursing fully 1000x more funds. Maybe you could set edge widths to correspond to funding volumes? Also, do you think that by moving the nodes around you could reduce the extent to which lines cross over each other, to increase clarity?
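
A minimal sketch of how the edge-width idea could look (the funder names, recipients, and dollar amounts below are made up for illustration, and networkx/matplotlib is just one possible toolchain):

```python
# Hypothetical sketch: scale edge widths by funding volume so that a large
# grantmaker and an individual ETG donor aren't drawn identically.
# All names and amounts are illustrative, not real grant data.
import math
import matplotlib.pyplot as plt
import networkx as nx

grants = [
    ("Big grantmaker", "Org A", 10_000_000),
    ("Big grantmaker", "Org B", 2_000_000),
    ("Individual ETG donor", "Org A", 10_000),
]

G = nx.DiGraph()
for funder, recipient, amount in grants:
    G.add_edge(funder, recipient, amount=amount)

# Log-scale the widths so a ~1000x difference in funding stays readable.
widths = [0.5 * math.log10(G[u][v]["amount"]) for u, v in G.edges()]

pos = nx.spring_layout(G, seed=0)  # fixed seed for reproducible node placement
nx.draw(G, pos, with_labels=True, width=widths, node_size=1500, font_size=8)
plt.show()
```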

Comment by ryancarey on Are we living at the most influential time in history? · 2019-09-05T15:36:39.882Z · score: 9 (7 votes) · EA · GW

"Criticality" is confusing because it describes the point at which a nuclear reaction becomes self-sustaining, and relates to "critical points" in the related area of dynamical systems, which is somewhat different from what we're talking about.

I think hingeyness should have a simple name because it is not a complicated concept: it's how much actions affect long-run outcomes. In RL, in discussions of prioritized experience replay, we would just use something like "importance". I would generally use "(long-run) importance" or "(long-run) influence" here, though I guess "pivotality" (from Yudkowsky's "pivotal act") is alright in a jargon-friendly context (like academic papers).

Edit: From Carl's comment, and from rereading the post, the per-resource component seems key. So maybe per-resource importance.

Comment by ryancarey on Ask Me Anything! · 2019-08-20T22:13:38.547Z · score: 41 (18 votes) · EA · GW

I think we need to figure out how to better collectively manage the fact that political affiliation is a shortcut to power (and hence impact), yet politicisation is a great recipe for blowing up the movement. It would be a shame if avoiding politics altogether is the best we can do.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:48:53.492Z · score: 3 (2 votes) · EA · GW

A lot of EAs I know consider Dennett their favourite author; he was my favourite around that age. He is an unconventional philosopher who covers a wide range of topics, from evolution to consciousness, and whose later books (like this one) are more accessible than his early stuff.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:47:45.158Z · score: 2 (1 votes) · EA · GW

The most famous historical utilitarian, Mill, grew up as a child prodigy, intensively tutored in university-level subjects by his father James Mill. I found it to be a moving story, and gifted teenagers might be able to relate to some of the troubles that Mill experienced some 160 years ago.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:46:06.184Z · score: 2 (1 votes) · EA · GW

Feynman is one of the great public intellectuals, and I loved this book. A gripping and hilarious read that teaches you a lot about the kind of clear thinking that is required to solve real-world problems. It could change a gifted kid's perspective for sure.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:44:48.009Z · score: 4 (3 votes) · EA · GW

Stories of Your Life and Others by Ted Chiang

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:44:30.078Z · score: 2 (3 votes) · EA · GW

From Bacteria to Bach and Back by Daniel Dennett

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:44:14.386Z · score: 6 (3 votes) · EA · GW

Permutation City by Greg Egan

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:58.811Z · score: 7 (4 votes) · EA · GW

Autobiography by John Stuart Mill


Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:42.456Z · score: 16 (9 votes) · EA · GW

Surely You're Joking, Mr. Feynman! by Richard Feynman

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:22.447Z · score: 11 (9 votes) · EA · GW

Reasons and Persons by Derek Parfit

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:06.560Z · score: 1 (2 votes) · EA · GW

(upcoming) Human Compatible by Stuart Russell

Comment by ryancarey on The EA Forum is a News Feed · 2019-07-29T13:41:44.690Z · score: 12 (5 votes) · EA · GW

I think the present EA Forum is most like Reddit, among forms of social media, so yes, kinda like a news feed. But I think the Possible Drawbacks of switching to a classic forum are probably larger than the stated Problems with the current setup. I'd rather see the problems fixed within the current framework.

On the problems,

  • I would note that it's not super-easy to improve search, as Facebook and old-school forums were never particularly searchable either. My preferred way to fix this would be to have a search bar where you can type any term and see the posts on that topic sorted by upvotes, like here (see the sketch after this list).
  • The forum can indeed give an underwhelming impression. But perhaps this could be addressed by (i) having posts accompanied by some of their content, a la Reddit, (ii) simply placing grey horizontal lines between the commented posts, in order to delineate them, or (iii) darkening the text to improve readability and ease of engagement.
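
As a rough illustration of the search-bar suggestion in the first bullet (the data model and field names here are hypothetical, not the Forum's actual API; the post titles and scores are taken from the listing above):

```python
# Hypothetical sketch of "type a term, see matching posts sorted by upvotes".
posts = [
    {"title": "AI alignment prize winners and next round", "score": 7},
    {"title": "SHOW: A framework for shaping your talent for direct work", "score": 129},
    {"title": "A response to Matthews on AI Risk", "score": 11},
]

def search(term: str, posts: list[dict]) -> list[dict]:
    """Return posts whose titles contain the term, highest-voted first."""
    matches = [p for p in posts if term.lower() in p["title"].lower()]
    return sorted(matches, key=lambda p: p["score"], reverse=True)

print(search("AI", posts))  # the Matthews post (score 11) first, then the prize post (score 7)
```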

On the drawbacks:

  • Increasing overall post quality is one of the primary challenges for the forum, so that seems like a serious cost of switching to a classic forum. Although people who would produce great content are sometimes intimidated out of doing so, the reverse is also a problem, even in the current setting: people who produce low-quality content will proceed to do so. I don't have a strong feeling that at present we should be pushing hard in one direction or the other.

Overall, the picture is that the current problems might be easier to fix than those that would arise in a switch to an old-school forum.

Comment by ryancarey on Defining Effective Altruism · 2019-07-22T06:02:04.133Z · score: 23 (7 votes) · EA · GW

I think this is an excellent definition.

Broadly, I understand it as:

(1) science of applying ethics given limited resources (tentatively impartial welfarist ethics)

(2) deployment of (1).

The omission of normativity has a fifth benefit: it clarifies the difference from consequentialism.

Comment by ryancarey on Announcing the launch of the Happier Lives Institute · 2019-07-07T12:18:16.717Z · score: 8 (4 votes) · EA · GW

In the UK, "Institute" is a protected term, for which you need approval from the Secretary of State to use in a business name, per https://web.archive.org/web/20080913085135/http://www.companieshouse.gov.uk/about/gbhtml/gbf3.shtml. I'm not sure how this changes if you're being a part of the university, but otherwise this could present some problems.

Comment by ryancarey on Worldwide decline of the entomofauna: A review of its drivers · 2019-07-06T00:19:11.234Z · score: 11 (4 votes) · EA · GW

Perhaps you missed the key quotation from my post (emphasized below):

"...From our compilation of published scientific reports, we estimate the current proportion of insect species in decline (41%) to be twice as high as that of vertebrates, and the pace of local species extinction (10%) eight times higher, confirming previous findings (Dirzo et al., 2014). At present, about a third of all insect species are threatened with extinction in the countries studied (Table 1). Moreover, every year about 1% of all insect species are added to the list, with such biodiversity declines resulting in an annual 2.5% loss of biomass worldwide (Fig. 2)..."

Is the 2.5% estimate inaccurate?

Comment by ryancarey on I find this forum increasingly difficult to navigate · 2019-07-05T16:08:45.728Z · score: 13 (6 votes) · EA · GW

For explicit sorting and a simpler interface, you might like ea.greaterwrong?

Comment by ryancarey on Worldwide decline of the entomofauna: A review of its drivers · 2019-07-05T13:38:05.672Z · score: 3 (2 votes) · EA · GW

What I mean is that working on wild animal welfare is less important if there are few animals, for any axiology.

Other theoretical arguments for expecting small insect populations: (i) in the long-run future most life would be on other planets or, in extreme cases, in simulations, where there would be little reason to bring insects; and (ii) in the very long run, there's little reason to think creating insects is the optimal way for people to use limited resources to fulfill their own preferences.

Comment by ryancarey on Long Term Future Fund and EA Meta Fund applications open until June 28th · 2019-06-11T00:23:05.329Z · score: 22 (9 votes) · EA · GW

I think there should be an Oxford group whose audience is the people in EA orgs, with activities to improve happiness, productivity, and the attractiveness of these workplaces; this is quite different from the goal of trying to grow a community of students. On this front, I've been spending time finding group housing near the new office. It would also be good to have short-term housing for visitors, as well as dinners and fun activities on a Friday night. In principle, the range of activities that could be helped by proximity to the Oxford orgs is extremely large, but things that interact more closely with the orgs, like grant recommendations or recruitment (to pick a couple of arbitrary examples), would have to be worked out beforehand.

Comment by ryancarey on Long Term Future Fund and EA Meta Fund applications open until June 28th · 2019-06-10T23:16:25.302Z · score: 32 (16 votes) · EA · GW

I'm not involved with either of these funds, but here are three projects I really want to see happen:

  • More recruiting for EA orgs: FHI wants to grow a bunch and could benefit from having more great researchers referred. The same is probably true for other orgs.
  • Targeted outreach using social media advertisements: EA is currently doing little outreach for fear of dilution, and is thereby forgoing many of the benefits of our surplus of funds and ideas. Maybe we could do more outreach in a way that doesn't bring about dilution, such as by advertising intellectual content in a way that's filtered to just intellectual audiences.
  • An EA Oxford community: there are ~45 employees at FHI/GPI/Forethought/CEA-UK, but almost all of the community activities are run by and directed at students.

Comment by ryancarey on [Question] 20,000/40,000 Hours- MidCareer Options · 2019-05-30T16:43:39.155Z · score: 15 (5 votes) · EA · GW

I think the main answer is that advice for mid- and late-career people is harder to provide. But we can improvise by leveraging the existing research:

  • Could you land any of the positions on the 80,000 Hours jobs board?
  • Could you switch to working on a high-priority area in general?
  • What are the main skills gained from your career? Are these needed by any of the organizations on the jobs board? Are they needed for starting any new organizations?


Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T22:54:28.137Z · score: 4 (2 votes) · EA · GW

Isn't Matt in HK?

Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T20:34:25.852Z · score: 32 (17 votes) · EA · GW

It would be really useful if this was split up into separate comments that could be upvoted/downvoted separately.

Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T17:17:53.418Z · score: 18 (12 votes) · EA · GW

It's a bit surprising to me that you'd want to send all four volumes.

Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T13:45:27.053Z · score: 43 (22 votes) · EA · GW

This is a strong set of grants, much stronger than the EA community would've been able to assemble a couple of years ago, which is great to see.

When will you be accepting further applications and making more grants?

Comment by ryancarey on Announcing EA Hub 2.0 · 2019-04-08T13:12:03.542Z · score: 23 (11 votes) · EA · GW

"In keeping with our ethos, we want to collaborate with other EA projects as much as possible. The Hub presently connects with the EA Forum, EA Work Club, PriorityWiki, EA Donation Swap and Effective Thesis."

I'm not sure much integration would be required, but did you consider linking the 80k jobs board? This seems like a really useful recent EA tool that could fit in quite well.

Comment by ryancarey on Should EA Groups Run Organ Donor Registration Drives? · 2019-03-27T18:43:34.461Z · score: 18 (6 votes) · EA · GW

I agree that registering for organ donation after death helps and does no direct harm. But I think we need to have a high bar for including an activity in the typical cache of activities that EAs promote to others. We want the act to be similar to other acts that have near-maximal impact. Donation fits that bill because once you start donating anywhere, you can switch to other donation targets that have a big long-term impact.

For organ donation, though, I don't think it really gives you ideas about anything that has real long-term significance. If you go down the organ-donation vertical, you might end up with kidney donation, or with extreme ideas about self-sacrifice. This kind of ideology is really catchy: it brought Zell Kravinsky mild fame, and was the main subject of the book Strangers Drowning. But I don't think that's the main way that long-run good is done. I think doing long-run good mostly requires a more analytical or startup mindset. If you do things like live kidney donation, I actually think you might do less good than by working through the week of your operation and donating some of that income to a top longtermist charity.

I get that my claim is that the second-order effects outweigh the first-order ones here, but I don't think that should be so surprising in the context of EA outreach: we need to carve out an overall package that gets people to do some good in the short run but, most importantly, builds up a productivity mindset and gets people to do a lot of good over the longer term.

Comment by ryancarey on SHOW: A framework for shaping your talent for direct work · 2019-03-22T11:08:57.991Z · score: 17 (6 votes) · EA · GW

I hear more people do cold outreach about being a researcher than about being an RA, and my guess is that 3-10x more people apply for researcher jobs than RA jobs, even when the latter are advertised. I think it's a combination of those two factors.

My recommendation would be that people apply more to RA jobs that are advertised, and also reach out to make opportunities for themselves when they are not.

I think about half of researchers can use research assistants, whether or not they are currently hiring for one. A major reason researchers don't make research assistant positions available is that they don't expect to find someone worth hiring, and so don't want to incur the administrative burden. Or maybe they don't feel comfortable asking their bosses for this. But if you are a strong candidate, coldly reaching out may result in you being hired, or may trigger a hiring round for that position. That said, strong candidates are often people I have met at an EA conference, who got far in an internship application, or who have been referred to me.

I don't think the salaries would be any lower than competitive rates.

Comment by ryancarey on Request for comments: EA Projects evaluation platform · 2019-03-21T19:00:03.741Z · score: 12 (5 votes) · EA · GW

This is an uncharitable reading of my comment in many ways.

First, you suggest that I am worried that you want to recruit people not currently doing direct work. All things being equal, of course I would prefer to recruit people with fewer alternatives. But all things are not equal. If you use people you know for the initial assessments, you will much more quickly be able to iron out bugs in the process. In the testing stages, it's best to have high-quality workers that can perceive and rectify problems, so this is a good use of time for smart, trusted friends, especially since it can help you postpone the recruitment step.

Second, you suggest that I am in the dark about the importance of consensus-building. But this assumes that I believe the only use for consultation is to reach agreement. Rather, by talking to the groups working in related spaces, like BERI, Brendon, EA Grants, EA Funds, and donors, you will of course learn some things, and your beliefs will probably converge somewhat. In aggregate, your process will improve. You will also build relationships that will help you to share proposals (and, in my opinion, funders).

Third, you raise the issue of connecting funding with evaluation. Of course, the distortionary effect is significant. I happen to think the effect of creating an incentive for applicants to apply is larger and more important, and that funders should be highly engaged. But there are also many ways that you could have funders be moderately engaged. You could check what would make a report useful to them, i.e. what would help them decide to fund something. You could check what projects they are more likely to fund.

The more strategic issue is as follows. Consensus is hard to reach. But a funding platform is a good that scales with the size of the network of applicants (and, imo, funders), making it somewhat of a natural monopoly (although we want there to be at least a few funders). You eventually want widespread community support of some form. As you suggest, that means we need some compromise, but I think it also weighs in favour of more consultation, and in favour of a more experimental approach in which projects are started in a simple form.

Comment by ryancarey on Request for comments: EA Projects evaluation platform · 2019-03-21T12:11:55.128Z · score: 25 (12 votes) · EA · GW

I'm a big fan of the idea of having a new EA projects evaluation pipeline. Since I view this as an important idea, I think it's important to get the plan to the strongest point that it can be. From my perspective, there are only a smallish number of essential elements for this sort of plan: it needs a submissions form, a detailed RFP, some funders, and some evaluators. As it stands, we don't yet have these (e.g. detail re desired projects, consultation with funders). And I'm confused about some of the other things that are emphasised: large initial scale, a process for recruiting volunteer evaluators, and fairly rigid evaluation procedures. I think the fundamentals of the idea are strong enough that this still has a chance of working, but I'd much prefer to see the idea advanced in its strongest possible form. My previous comments on this draft are pretty similar to Oliver's; here are some of the main ones:

This makes sense to me as an overall idea. I think this is the sort of project where, if you do it badly, it might dissuade others from trying the same. So I think it is worth getting some feedback on this from other evaluators (BERI/Brendon Wong). It would also probably be useful to get feedback from 1-2 funders (maybe Matt Wage? Maybe someone from OpenPhil?), so that you can learn whether they think your evaluation process would be of interest to them, or what might make it so. It could also be useful to have unofficial advisors.

I predict the process could be refined significantly with ~3 projects.

You only need a couple of volunteers, and you know perhaps half of the best candidates, so for the purpose of a pilot, did you consider just asking a couple of people you know to do it?

I think you should provide an ~800-word request for proposals. Then you can give a much more detailed description of who you want to apply. E.g. just longtermist projects? How does this differ from the scope of EA Grants, BERI, OpenPhil, etc.? Is it sufficient to apply with just an idea? Do you need a team? A proof of concept? And so on.

This would be strengthened somewhat by already having obtained the evaluators, but this may not be important.

Comment by ryancarey on SHOW: A framework for shaping your talent for direct work · 2019-03-19T18:41:49.049Z · score: 4 (2 votes) · EA · GW

I was influenced at that time by people like Matt Fallshaw and Ben Toner, who thought that for sufficiently good intellectual work, funding would be forthcoming. It seemed like insights were mostly what was needed to reduce existential risks...

Comment by ryancarey on SHOW: A framework for shaping your talent for direct work · 2019-03-19T15:51:26.758Z · score: 5 (3 votes) · EA · GW

I thought that more technical skills were rarer, were neglected in some parts of academia (e.g. in history), and were the main thing holding me back from being able to understand papers about emerging technologies... Also, I asked Carl S, and he thought that if I were to go into research, these would be the best skills to get. Nowadays, one could ask many more people.

Comment by ryancarey on The career coordination problem · 2019-03-17T20:30:35.207Z · score: 12 (8 votes) · EA · GW

I don't think this idea was mine originally, but it would go a long way just to have two pie charts: the current distribution of careers in EA, and the optimal distribution.

Comment by ryancarey on SHOW: A framework for shaping your talent for direct work · 2019-03-13T02:09:47.346Z · score: 5 (3 votes) · EA · GW

Ryan/Tegan: Did you get your "something like thirty times lower" estimate from any particular research organization(s)?

This is an order-of-magnitude estimate based on experience at various orgs. I've asked to be a research assistant for various top researchers, and generally I'm the only person asking at that time. I've rarely heard from researchers that someone has asked to research-assist with them. Some of this is because RA job descriptions are less common, but I would guess that there is still an effect even when there are RA job descriptions.

Comment by ryancarey on Unsolicited Career Advice · 2019-03-09T11:46:50.538Z · score: 4 (2 votes) · EA · GW

Cover letters to core EA orgs from EAs generally indicate interest in EA. It's sometimes also indicated by involvement in EA groups, through a CV, by referral sources, and by interviews. You can pretty reliably tell.

Comment by ryancarey on Unsolicited Career Advice · 2019-03-05T14:27:53.505Z · score: 20 (10 votes) · EA · GW

Hundreds of EA applicants? Most EA org roles don't have that... I've been in/around MIRI, Ought, FHI, and many other EA orgs. It's common to have about a hundred applicants for a role (research or ops), and the number of EA applicants is usually in the tens.

Comment by ryancarey on Pre-announcement and call for feedback: Operations Camp 2019 · 2019-02-20T17:47:57.424Z · score: 12 (6 votes) · EA · GW

Hey Jorgen,

That would honestly be my guess. Some people would call this cynical, but I think the skills you're going to impart in 4 days, or even in a very long ~5-week camp, are pretty limited compared to the variation in people's innate dispositions and the experience gained over their whole lifetime beforehand.

Comment by ryancarey on Pre-announcement and call for feedback: Operations Camp 2019 · 2019-02-20T01:29:01.249Z · score: 4 (2 votes) · EA · GW

"A potential failure mode is that applicants believe the camp is a guaranteed way of being hired. Participants should not expect that this camp is guaranteeing, or making any promises whatsoever, about increasing the chances of getting a relevant position."

Yep! Although I'd emphasise that this issue can also be solved by being more selective. If you pick some combo of 1) reasonably strong candidates straight out of university who are happy to work in entry-level admin jobs, and 2) candidates with some PM experience who are prepared to work as a PM at an EA org, including a community org, then that cohort is reasonably likely to leave happy (versus, I don't know, if you pick a bunch of people with lower levels of employment, who are strongly location-restricted or otherwise particular about the kinds of jobs they would accept). I think the impact from recruiting, identifying, filtering, and referring the already-semi-strong candidates is itself something to get excited about!

Comment by ryancarey on Requesting community input on the upcoming EA Projects Platform · 2018-12-11T14:10:39.189Z · score: 20 (8 votes) · EA · GW

Yeah, I agree with Jan that you should take things slowly. Also, my advice is that the following two bottlenecks are important but relatively easy to relieve: buy-in from community leaders, and support from EA institutions. So you should invest in these by having meetings and getting some people in relevant organizations to take on advising roles.

Ultimately, though, I think you have the right general idea. Current community-based orgs are capacity-limited, and so some major projects like this should stand alone.

Comment by ryancarey on Crohn's disease · 2018-11-16T10:27:32.175Z · score: 5 (3 votes) · EA · GW

He's just saying he thinks there's a 0.005 chance of detecting a real effect.

Comment by ryancarey on Crohn's disease · 2018-11-16T00:19:54.738Z · score: 5 (3 votes) · EA · GW

If you use a two-tailed test and find a positive effect with p<0.05, it's <0.025 likely that you'd get a positive effect that big by chance. If you don't understand that, then you should look up two-tailed tests.
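
To make the arithmetic concrete, here is a minimal numerical check, assuming a simple z-test purely for illustration (not the test used in any particular study):

```python
# With a two-tailed test at p < 0.05, the null distribution puts less than
# 0.025 of its mass in *each* tail, so a positive effect at least that large
# arises by chance with probability < 0.025.
from scipy.stats import norm

z_crit = norm.ppf(1 - 0.05 / 2)              # two-sided 5% critical value, ~1.96
p_positive_by_chance = 1 - norm.cdf(z_crit)  # upper-tail probability
print(round(z_crit, 2), p_positive_by_chance)  # ~1.96 and 0.025
```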

Comment by ryancarey on Crohn's disease · 2018-11-14T21:12:33.413Z · score: 4 (2 votes) · EA · GW

"A cheap cure for Crohn's could save some large fraction of the $33B spent on Crohn's per year, and these funds could save thousands of lives per year if spent on other diseases."

Comment by ryancarey on Crohn's disease · 2018-11-14T10:43:48.020Z · score: 18 (8 votes) · EA · GW

I mostly agree with this, but I think it's also wrong in a couple of places.

"Crohn's disease is not a spondyloarthritis! (and neither is psoriasis, ulcerative colitis, or acute anterior uveitis). As the name suggests, spondyloarthritides are arthritides (i.e. diseases principally of joints - the 'spondylo' prefix points to joints between vertebrae); Crohn's a disease of the GI tract."

I think this is just restating the hypothesis that Crohn's shares (most of) its pathophysiology with the spondyloarthritides... which is a well-known open possibility. The incidence of Crohn's is >10% in people with AS, and vice versa. They share heredity (HLA-B27). Apparently 2/3 of those with AS also have silent gut signs [1].

Also, I think the following is off the mark:

"Although these are imperfect, if the person behind the project doesn't have credentials in a relevant field (bioinformatics rather than gastroenterology, say), and/or a fairly slender relevant publication record, and scant/no interest from recognised experts, these are also adverse indicators. (Remember the nobel-prize winner endorsed Vit C megadosing?)"

Note that the author did manage to co-author his latest piece with an ophthalmologist/rheumatologist who has a professorship in inflammation research and 20k citations [2].

Overall, the parts of the objection that I agree with most are i) that it seems very unlikely that one or two fungi would be implicated in all 14 of these various diseases, and that treating the fungus would cure the inflammatory disease (rather than the fungus just acting as an initial trigger), and ii) that there are mistakes, especially semantic ones, and especially on malassezia.org (as opposed to in the papers), in some of the medical science.

The interesting question seems to me to be whether an overconfident-seeming author could nonetheless be correct about the minimal prediction that some antifungals would work well in at least Crohn's disease. I don't yet see why this is <1% likely.

1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2996322/

2. https://www.ncbi.nlm.nih.gov/pubmed/29675414, https://en.wikipedia.org/wiki/James_T._Rosenbaum