Posts

RyanCarey's Shortform 2020-01-27T22:18:23.751Z · score: 7 (1 votes)
Worldwide decline of the entomofauna: A review of its drivers 2019-07-04T19:06:17.041Z · score: 10 (5 votes)
SHOW: A framework for shaping your talent for direct work 2019-03-12T17:16:44.885Z · score: 129 (68 votes)
AI alignment prize winners and next round [link] 2018-01-20T12:07:16.024Z · score: 7 (7 votes)
The Threat of Nuclear Terrorism MOOC [link] 2017-10-19T12:31:12.737Z · score: 7 (7 votes)
Informatica: Special Issue on Superintelligence 2017-05-03T05:05:55.750Z · score: 7 (7 votes)
Tell us how to improve the forum 2017-01-03T06:25:32.114Z · score: 4 (4 votes)
Improving long-run civilisational robustness 2016-05-10T11:14:47.777Z · score: 9 (9 votes)
EA Open Thread: October 2015-10-10T19:27:04.119Z · score: 1 (1 votes)
September Open Thread 2015-09-13T14:22:20.627Z · score: 0 (0 votes)
Reducing Catastrophic Risks: A Practical Introduction 2015-09-09T22:33:03.230Z · score: 5 (5 votes)
Superforecasters [link] 2015-08-20T18:38:27.846Z · score: 6 (5 votes)
The long-term significance of reducing global catastrophic risks [link] 2015-08-13T22:38:23.903Z · score: 4 (4 votes)
A response to Matthews on AI Risk 2015-08-11T12:58:38.930Z · score: 11 (11 votes)
August Open Thread: EA Global! 2015-08-01T15:42:07.625Z · score: 3 (3 votes)
July Open Thread 2015-07-02T13:41:52.991Z · score: 4 (4 votes)
[Discussion] Are academic papers a terrible discussion forum for effective altruists? 2015-06-05T23:30:32.785Z · score: 3 (3 votes)
Upcoming AMA with new MIRI Executive Director, Nate Soares: June 11th 3pm PT 2015-06-02T15:05:56.021Z · score: 1 (3 votes)
June Open Thread 2015-06-01T12:04:00.027Z · score: 4 (4 votes)
Introducing Alison, our new forum moderator 2015-05-28T16:09:26.349Z · score: 9 (9 votes)
Three new offsite posts 2015-05-18T22:26:18.674Z · score: 4 (4 votes)
May Open Thread 2015-05-01T09:53:47.278Z · score: 1 (1 votes)
Effective Altruism Handbook - Now Online 2015-04-23T14:23:28.013Z · score: 27 (29 votes)
One week left for CSER researcher applications 2015-04-17T00:40:39.961Z · score: 2 (2 votes)
How Much is Enough [LINK] 2015-04-09T18:51:48.656Z · score: 3 (3 votes)
April Open Thread 2015-04-01T22:42:48.295Z · score: 2 (2 votes)
Marcus Davis will help with moderation until early May 2015-03-25T19:12:11.614Z · score: 5 (5 votes)
Rationality: From AI to Zombies was released today! 2015-03-15T01:52:54.157Z · score: 6 (8 votes)
GiveWell Updates 2015-03-11T22:43:30.967Z · score: 4 (4 votes)
Upcoming AMA: Seb Farquhar and Owen Cotton-Barratt from the Global Priorities Project: 17th March 8pm GMT 2015-03-10T21:25:39.329Z · score: 4 (4 votes)
A call for ideas - EA Ventures 2015-03-01T14:50:59.154Z · score: 3 (3 votes)
Seth Baum AMA next Tuesday on the EA Forum 2015-02-23T12:37:51.817Z · score: 7 (7 votes)
February Open Thread 2015-02-16T17:42:35.208Z · score: 0 (0 votes)
The AI Revolution [Link] 2015-02-03T19:39:58.616Z · score: 10 (10 votes)
February Meetups Thread 2015-02-03T17:57:04.323Z · score: 1 (1 votes)
January Open Thread 2015-01-19T18:12:55.433Z · score: 0 (0 votes)
[link] Importance Motivation: a double-edged sword 2015-01-11T21:01:10.451Z · score: 3 (3 votes)
I am Samwise [link] 2015-01-08T17:44:37.793Z · score: 4 (4 votes)
The Outside Critics of Effective Altruism 2015-01-05T18:37:48.862Z · score: 11 (11 votes)
January Meetups Thread 2015-01-05T16:08:38.455Z · score: 0 (0 votes)
CFAR's annual update [link] 2014-12-26T14:05:55.599Z · score: 1 (3 votes)
MIRI posts its technical research agenda [link] 2014-12-24T00:27:30.639Z · score: 4 (6 votes)
Upcoming Christmas Meetups (Upcoming Meetups 7) 2014-12-22T13:21:17.388Z · score: 0 (0 votes)
Christmas 2014 Open Thread (Open Thread 7) 2014-12-15T16:31:35.803Z · score: 1 (1 votes)
Upcoming Meetups 6 2014-12-08T17:29:00.830Z · score: 0 (0 votes)
Open Thread 6 2014-12-01T21:58:29.063Z · score: 1 (1 votes)
Upcoming Meetups 5 2014-11-24T21:02:07.631Z · score: 0 (0 votes)
Open thread 5 2014-11-17T15:57:12.988Z · score: 1 (1 votes)
Upcoming Meetups 4 2014-11-10T13:54:39.551Z · score: 0 (0 votes)
Open Thread 4 2014-11-03T16:57:07.873Z · score: 1 (1 votes)

Comments

Comment by ryancarey on EA Forum Prize: Winners for December 2019 · 2020-01-31T23:24:47.629Z · score: 18 (10 votes) · EA · GW

Larks' post was one of the best of the year, so it's nice of him to effectively make a donation of hundreds of dollars to the EA Forum Prize!

Comment by ryancarey on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T20:50:07.503Z · score: 2 (1 votes) · EA · GW

Yep, that's it.

Comment by ryancarey on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T16:03:53.694Z · score: 36 (14 votes) · EA · GW

Have you heard of Neumeier's naming criteria? They're designed for businesses, but I think they're an OK heuristic. I'd agree that there are better available names, e.g.:

  • CEEALAR. Distinctiveness: 1, Brevity: 1, Appropriateness: 4, Easy spelling and pronunciation: 1, Likability: 2, Extendability: 1, Protectability: 4.
  • Athena Centre. 4, 4, 4, 4, 4, 4, 4.
  • EA Study Centre. 3, 3, 4, 3, 3, 3, 3.
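
To make the comparison concrete, here is a toy tally of the scores above. This is a sketch only: it assumes equal weights and a 1-4 scale, neither of which Neumeier's test mandates.

```python
# Toy tally of the naming scorecard above (equal weights, 1-4 scale assumed).
criteria = ["distinctiveness", "brevity", "appropriateness",
            "spelling/pronunciation", "likability", "extendability",
            "protectability"]

scores = {
    "CEEALAR":         [1, 1, 4, 1, 2, 1, 4],
    "Athena Centre":   [4, 4, 4, 4, 4, 4, 4],
    "EA Study Centre": [3, 3, 4, 3, 3, 3, 3],
}

for name, s in sorted(scores.items(), key=lambda kv: -sum(kv[1])):
    print(f"{name}: {sum(s)}/{4 * len(criteria)}")  # e.g. "Athena Centre: 28/28"
```
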
Comment by ryancarey on RyanCarey's Shortform · 2020-01-29T11:13:14.806Z · score: 3 (2 votes) · EA · GW

Tom Inglesby on the nCoV response is one recent example from just the last few days. I've generally known Stefan Schubert, Eliezer Yudkowsky, Julia Galef, and others to make very insightful comments there. I'm sure there are very many other examples.

Generally speaking, though, the philosophy would be to go to the platforms that top contributors are actually using, and offer our services there, rather than trying to push them onto ours, or at least to complement the latter with the former.

Comment by ryancarey on RyanCarey's Shortform · 2020-01-27T22:18:23.891Z · score: 9 (2 votes) · EA · GW

Possible EA intervention: just like the EA Forum Prizes, but for the best Tweets (from an EA point-of-view) in a given time window.

Reasons this might be better than the EA Forum Prize:

1) Popular tweets have greater reach than popular forum posts, so this could promote EA more effectively.

2) The prizes could go to EAs who are not regular forum users, which could also help to promote EA more effectively.

One would have to check the rules and regulations.

Comment by ryancarey on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-14T00:35:59.546Z · score: 2 (1 votes) · EA · GW

Hmm, but is it good or sustainable to repeatedly switch parties?

Comment by ryancarey on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-13T07:21:59.057Z · score: 11 (7 votes) · EA · GW

Interesting point of comparison: the Conservative Party has ~35% as many members, and has held government ~60% more often over the last 100 years, so the leverage per member is ~4.5x higher. For many people, though, their ideology means they cannot credibly be involved in one party or the other.
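
A quick check of that arithmetic, using the rough figures quoted above (approximations, not precise membership or electoral data):

```python
# Rough figures from the comparison above (approximations only).
membership_ratio = 0.35  # Conservative members as a fraction of Labour's
government_ratio = 1.60  # relative share of time in government, last 100 years

leverage_per_member = government_ratio / membership_ratio
print(f"~{leverage_per_member:.1f}x")  # ~4.6x, i.e. roughly the ~4.5x quoted
```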

Comment by ryancarey on Long-term investment fund at Founders Pledge · 2020-01-11T00:30:40.491Z · score: 4 (2 votes) · EA · GW

The obvious approach would be to invest in the stock market by default (or maybe a leveraged ETF?), and only move money from that into other investments when they have higher EV.
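
A minimal sketch of that default rule; all names and return figures below are hypothetical, not anyone's actual numbers:

```python
# Minimal sketch: hold a stock-market index by default, and reallocate only
# when an alternative investment has higher expected value (EV).
INDEX_EV = 0.07  # assumed long-run annual return of a broad index fund

def allocate(alternatives):
    """Pick the best alternative if it beats the index default; else the index."""
    best = max(alternatives, key=lambda a: a["ev"], default=None)
    if best and best["ev"] > INDEX_EV:
        return best["name"]
    return "index fund"

print(allocate([{"name": "project A", "ev": 0.05},
                {"name": "project B", "ev": 0.12}]))  # -> project B
```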

Comment by ryancarey on Pablo_Stafforini's Shortform · 2020-01-10T01:06:24.751Z · score: 15 (7 votes) · EA · GW

I think Pablo is right about points (1) and (3). Community Favorites is quite net-negative for my experience of the forum (because it repeatedly shows the same old content), and probably likewise for users on average. "Community" seems to needlessly complicate the posting experience, whose simplicity should be valued highly.

Comment by ryancarey on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-19T17:03:03.250Z · score: 17 (7 votes) · EA · GW

Of these categories, I am most excited by the Individual Research, Event and Platform projects. I am generally somewhat sceptical of paying people to ‘level up’ their skills.

If I'm understanding the categories correctly, I agree here.

While generally good, one side effect of this (perhaps combined with the fact that many low-hanging fruits of the insight tree have been plucked) is that a considerable amount of low-quality work has been produced. Furthermore, the conventional peer review system seems to be extremely bad at dealing with this issue... Perhaps you, enlightened reader, can judge that “How to solve AI Ethics: Just use RNNs” is not great. But is it really efficient to require everyone to independently work this out?

I agree. I think part of the equation is that peer review does not just filter papers "in" or "out": it accepts them into a journal of a certain quality. Many bad papers will get into weak journals, but will usually be read much less. Researchers who read these papers cite them, taking their quality into account, thereby boosting the readership of good papers. Finally, a core of elite researchers bats down arguments that, being weirdly attractive yet misguided, manage to make it through the earlier filters. I think this process works okay in general, and can also work okay in AI safety.

I do have some ideas for improving our process, though, all aimed at establishing a steeper incentive gradient for research quality (in the dimensions of quality that we care about): (i) more private and public criticism of misguided work; (ii) stronger filters on papers published in safety workshops, probably by agreeing to have fewer workshops with fewer papers, and by largely ignoring any extra workshops from "rogue" creators; and (iii) funding undersupervised talent-pipeline projects a bit more carefully.

One thing I would like to see more of in the future is grants for PhD students who want to work in the area. Unfortunately, at present I am not aware of many ways for individual donors to practically support this.

Filtering ~100 applicants down to a few accepted scholarship recipients is not that different to what CHAI and FHI already do in selecting interns. The expected outputs seem at least comparably high. So I think choosing scholarship recipients would be similarly good value in terms of evaluators' time, and also a pretty good use of funds.

--

It's an impressive effort, as in previous years! One meta-thought: if you stop providing this service at some point, it might be worth reaching out to the authors of the Alignment Newsletter to ask whether they, or anyone they know, would jump in to fill the breach.

Comment by ryancarey on Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team · 2019-12-02T09:49:03.509Z · score: 11 (6 votes) · EA · GW

Yep, I'd actually just asked to clarify this. I'm listing schools that are good for doing safety work in particular; the list may also be biased toward places I know about. If people are trying to become professors, or are not interested in doing safety work during their PhD, then I agree they should look at a standard CS university ranking, which would look like what you describe.

That said, at Oxford there are ~10 CS PhD students interested in safety, a few researchers, and FHI scholarships, which is why it makes it to the Amazing tier. At Imperial, there are two students and one professor. But I'm happy to see this list improved.

Comment by ryancarey on Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team · 2019-12-01T02:14:24.112Z · score: 4 (3 votes) · EA · GW

On a short skim, this seems more like a research agenda? There are a few research agendas by now...

The only lit review I've seen is [1]. I probably should've said I haven't seen any great lit reviews, because I felt this one was OK - it covered a lot of ground. However, it is a couple of years old, and it didn't organize the work in a way that I found satisfying.

1. Everitt, Tom, Gary Lea, and Marcus Hutter. "AGI safety literature review." arXiv preprint arXiv:1805.01109 (2018).

Comment by ryancarey on Update on CEA's EA Grants Program · 2019-11-14T13:45:15.331Z · score: 5 (3 votes) · EA · GW

I think the option of having a (possibly renamed) EA Grants as one option in EA Funds is interesting. It could preserve almost all of the benefits (one extra independent grantmaker picking different kinds of targets) while cutting maybe half the overhead, and it would clarify the difference between EA Grants and EA Funds.

Comment by ryancarey on Only a few people decide about funding for community builders world-wide · 2019-10-22T22:17:02.467Z · score: 20 (8 votes) · EA · GW

Given that community groups are much more homogeneous funding targets than EA projects in general, it makes perfect sense that we allocate one CEA team to evaluating them, while allocating a few teams to evaluating other small-scale EA projects.

Comment by ryancarey on Ineffective Altruism: Are there ideologies which generally cause there adherents to have worse impacts? · 2019-10-17T09:35:49.794Z · score: 13 (9 votes) · EA · GW

Many infamous ideologies have impaired decision-making in important positions, leading to terrible consequences like wars and harmful revolutions: communism, fascism, ethno-nationalism, racism, etc.

Comment by ryancarey on What would EAs most want to see from a "rationality" or similar project in the EA space? · 2019-10-10T09:24:18.872Z · score: 17 (7 votes) · EA · GW

I've become pretty pessimistic about rationality improvement as an intervention, especially to the extent that it involves domain-general techniques with a large subjective element and a placebo effect/participant cost. Basically, most interventions of this sort haven't worked, though they induce tonnes of biases that allow them to display positive testimonials: placebo effects, liking instructors, having a break from work, getting to think about interesting stuff, branding of techniques, choice-supportive bias, biased sampling of testimonials, and so on.

The nearest things I'd be interested in are: 1) domain-specific training that delivers skills and information from trained experts in a particular area, such as research; 2) freely available online reviews of the literature on rationality interventions, similar to what gwern does for nootropics; 3) new controlled experiments on existing rationality programs such as Leverage and CFAR; and 4) training in risk assessment for high-risk groups like policymakers.

Comment by ryancarey on What should Founders Pledge research? · 2019-09-11T10:43:05.654Z · score: 9 (3 votes) · EA · GW

I think it's a reasonable concern, especially for AI and bio, and I guess that is part of what a grantmaker might investigate. Any such negative effect could be offset by: (1) associating scientific quality with EA / recruiting competent scientists into EA, (2) improving the quality of risk-reducing research, and (3) improving commentary/reflection on science (which could help with identifying risky research). My instinct is that (1-3) outweigh the risk-increasing effects, at least for many projects in this space, and that most relevant experts would think so, but it would be worth asking around.

Comment by ryancarey on What should Founders Pledge research? · 2019-09-11T10:36:13.563Z · score: 5 (3 votes) · EA · GW

I don't have any inside info, and perhaps "pressure" is too strong, but Holden reported receiving advice in that direction in 2016:

"Paul Christiano and Carl Shulman–a couple of individuals I place great trust in (on this topic)–have argued to me that Open Phil’s grant to MIRI should have been larger. (Note that these individuals have some connections to MIRI and are not wholly impartial.) Some other people I significantly trust on this topic are very non-enthusiastic about MIRI’s work, but having a couple of people making the argument in favor carries substantial weight with me from a “let many flowers bloom”/”cover your bases” perspective. (However, I expect that the non-enthusiastic people will be less publicly vocal, which I think is worth keeping in mind in this context.)"
Comment by ryancarey on What should Founders Pledge research? · 2019-09-10T16:08:53.398Z · score: 22 (11 votes) · EA · GW

[My views only]

Thanks for putting up with my follow-up questions.

Out of the areas you mention, I'd be very interested in:

  • Improving science. Things like academia.edu and sci-hub have been interesting, as are efforts to replace LaTeX and to work on publishing incentives. In general, there seems to be plenty of room for improvement!

I'd be interested in:

  • Improving political institutions and political wisdom: EA might need to escalate its involvement in many areas adjacent to this, such as policy intersecting with great-power relations or pivotal technologies. It would be very interesting to better understand what can be done with funding alone.
  • Reducing political bias and partisanship: this seems hard, but somewhat important. Most lobbyists are not trying to do this, and Russia is actively trying to do the opposite. It would be interesting if more could be done in this space. Fact-checking websites and investigative journalism (Bellingcat) are also relevant here, as is counteracting political corruption.
  • Sundry x-risks/GCRs

I'd be a little interested in:

  • Increasing economic growth

I think the others might be disadvantageous, based on my understanding that it's better for EA to train people up in longtermist-relevant areas, and to be perceived as being focused on the same.

Out of those you haven't mentioned, but that seem similar, I'd also be interested in:

  • Promotion of effective altruism
  • Scholarships for people working on high-impact research
  • More on AI safety: OpenPhil seems to be funding high-prestige mostly-aligned figures (e.g. Stuart Russell, OpenAI) and high-prestige unaligned figures (e.g. their fellows), but has mostly not funded low-to-mid-prestige highly-aligned figures (with the notable exceptions of MIRI, Michael C and Dima K). Other small but comparably informed funders mostly favor low-to-mid-prestige highly-aligned targets to a greater extent, e.g. Paul's funding for AI safety research, and Paul and Carl arguing to OpenPhil that they should fund MIRI more. I think there are residual opportunities to fund other low-to-mid-prestige highly-aligned figures. [edited for clarity]

Comment by ryancarey on What should Founders Pledge research? · 2019-09-09T22:30:21.532Z · score: 2 (1 votes) · EA · GW

No problem. I've also had a skim of the x-risk report to get an idea of what research you're talking about.

Would you expect the donors to be much more interested in some of the areas you mention than others, or similarly interested in all the areas?

Comment by ryancarey on What should Founders Pledge research? · 2019-09-09T21:51:49.558Z · score: 2 (1 votes) · EA · GW

Cool! Are you able to indicate roughly what order of magnitude of donations you would expect to contribute per year, over the next few years, in the promising areas (or any of the others, if they're significantly bigger than those), such as the following?

  • Donors focused on the long-term future of sentient life
  • Donors focused on GCRs and existential risk
  • Improving science
  • Sundry x-risks/GCRs
  • Improving political institutions and political wisdom

Comment by ryancarey on What should Founders Pledge research? · 2019-09-09T21:10:29.109Z · score: 4 (5 votes) · EA · GW

I'd need a better understanding of how Founders Pledge works to be able to say anything intelligent. I'm guessing the idea is something like:

  • when founders are due to donate, you prompt them
  • you ask them what kind of advice they would like
  • you give them some research relevant to that, and may or may not make specific recommendations (?)
  • they make donations directly

Is that how it actually happens?

Comment by ryancarey on Funding chains in the x-risk/AI safety ecosystem · 2019-09-09T08:53:40.007Z · score: 16 (15 votes) · EA · GW

This is interesting. However, the graph is fairly misleading in that it puts OpenPhil on the same footing as an individual ETG funder, even though OpenPhil disburses fully ~1000x more funds. Maybe you could set edge widths to correspond to funding volumes? Also, do you think that by moving the nodes around you could reduce how often the lines cross over each other, to increase clarity?
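
For instance, something like the following sketch with networkx, where edge width scales with grant size; all funders, recipients, and figures here are made up for illustration:

```python
# Sketch: scale edge widths by funding volume so that OpenPhil-sized grants
# don't render the same as individual ETG donations. Figures are invented.
import networkx as nx
import matplotlib.pyplot as plt

grants = [  # (funder, recipient, $ millions) -- illustrative only
    ("OpenPhil", "Org A", 8.0),
    ("OpenPhil", "Org B", 5.0),
    ("ETG donor", "Org A", 0.05),
]

G = nx.DiGraph()
G.add_weighted_edges_from(grants)

pos = nx.spring_layout(G, seed=0)  # layout choice can also reduce edge crossings
widths = [0.5 + G[u][v]["weight"] for u, v in G.edges()]
nx.draw(G, pos, with_labels=True, width=widths, node_size=1500, font_size=8)
plt.show()
```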

Comment by ryancarey on Are we living at the most influential time in history? · 2019-09-05T15:36:39.882Z · score: 11 (9 votes) · EA · GW

Criticality is confusing because it describes the point when a nuclear reaction becomes self-sustaining, and relates to "critical points" in the related area of dynamical systems, which is somewhat different from what we're talking about.

I think Hingeyness should have a simple name because it is not a complicated concept: it's how much actions affect long-run outcomes. In RL, in discussions of prioritized experience replay, we would just use something like "importance". I would generally use "(long-run) importance" or "(long-run) influence" here, though I guess "pivotality" (from Yudkowsky's "pivotal act") is alright in a jargon-friendly context (like academic papers).

Edit: From Carl's comment, and from rereading the post, the per-resource component seems key. So maybe per-resource importance.

Comment by ryancarey on Ask Me Anything! · 2019-08-20T22:13:38.547Z · score: 41 (18 votes) · EA · GW

I think we need to figure out how to better collectively manage the fact that political affiliation is a shortcut to power (and hence impact), yet politicisation is a great recipe for blowing up the movement. It would be a shame if avoiding politics altogether is the best we can do.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:48:53.492Z · score: 3 (2 votes) · EA · GW

A lot of EAs I know consider Dennett their favourite author; he was mine around that age. He's an unconventional philosopher who covers a wide range of topics, from evolution to consciousness, and whose later books (like this one) are more accessible than his early stuff.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:47:45.158Z · score: 2 (1 votes) · EA · GW

The most famous historical utilitarian, Mill, grew up as a child prodigy, intensely tutored in university-level subjects by his father, James Mill. I found it a moving story, and gifted teenagers might be able to relate to some of the troubles that Mill experienced some 160 years ago.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:46:06.184Z · score: 2 (1 votes) · EA · GW

Feynman is one of the great public intellectuals, and I loved this book: a gripping and hilarious read that teaches you a lot about the kind of clear thinking required to solve real-world problems. It could change a gifted kid's perspective for sure.

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:44:48.009Z · score: 5 (4 votes) · EA · GW

Stories of Your Life and Others by Ted Chiang

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:44:30.078Z · score: 2 (3 votes) · EA · GW

From Bacteria to Bach and Back by Daniel Dennett

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:44:14.386Z · score: 6 (3 votes) · EA · GW

Permutation City by Greg Egan

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:58.811Z · score: 12 (4 votes) · EA · GW

Autobiography by John Stuart Mill

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:42.456Z · score: 21 (9 votes) · EA · GW

Surely You're Joking, Mr. Feynman! by Richard Feynman

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:22.447Z · score: 16 (10 votes) · EA · GW

Reasons and Persons by Derek Parfit

Comment by ryancarey on What book(s) would you want a gifted teenager to come across? · 2019-08-05T18:43:06.560Z · score: 3 (3 votes) · EA · GW

(upcoming) Human Compatible by Stuart Russell

Comment by ryancarey on The EA Forum is a News Feed · 2019-07-29T13:41:44.690Z · score: 12 (5 votes) · EA · GW

I think the present EA Forum is most like Reddit among forms of social media, so yes, it's kinda like a news feed. But I think the "Possible Drawbacks" of switching to a classic forum are probably larger than the stated "Problems" with the current setup. I'd rather see the problems fixed within the current framework.

On the problems:

  • I would note that it's not super-easy to improve search; Facebook and old-school forums were never particularly searchable either. My preferred fix would be a search bar where you can type any term and see the posts on that topic sorted by upvotes, like here (a toy sketch follows after this list).
  • The forum can indeed give an underwhelming impression. But perhaps this could be addressed by (i) having posts accompanied by some of their content, a la Reddit, (ii) simply placing grey horizontal lines between the commented posts to delineate them, or (iii) darkening the text to improve readability and ease of engagement.
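
As a toy sketch of that search behaviour (the data here is a hypothetical stand-in for the forum's real store):

```python
# Toy sketch of "type a term, see matching posts sorted by upvotes".
posts = [
    {"title": "SHOW: A framework for shaping your talent", "upvotes": 129},
    {"title": "A response to Matthews on AI Risk", "upvotes": 11},
    {"title": "The AI Revolution [Link]", "upvotes": 10},
]

def search(term, posts):
    """Return posts whose titles contain `term`, highest-voted first."""
    hits = [p for p in posts if term.lower() in p["title"].lower()]
    return sorted(hits, key=lambda p: p["upvotes"], reverse=True)

for p in search("AI", posts):
    print(p["upvotes"], p["title"])
```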

On the drawbacks:

  • Increasing overall post quality is one of the primary challenges for the forum, so that seems like a serious cost of switching to a forum. Although people who would produce great content are sometimes intimidated out of doing so, the reverse is also a problem, even in the current setup: people who produce low-quality content proceed to do so. I don't have a strong feeling that at present we should be pushing hard in either direction.

Overall, my picture is that the current problems are probably easier to fix than those that would arise from a switch to an old-school forum.

Comment by ryancarey on Defining Effective Altruism · 2019-07-22T06:02:04.133Z · score: 23 (7 votes) · EA · GW

I think this is an excellent definition.

Broadly, I understand it as:

(1) the science of applying ethics given limited resources (tentatively, impartial welfarist ethics), and

(2) the deployment of (1).

The omission of normativity has a fifth benefit: it clarifies the difference from consequentialism.

Comment by ryancarey on Announcing the launch of the Happier Lives Institute · 2019-07-07T12:18:16.717Z · score: 8 (4 votes) · EA · GW

In the UK, "Institute" is a protected term, for which you need approval from the Secretary of State to use in a business name, per https://web.archive.org/web/20080913085135/http://www.companieshouse.gov.uk/about/gbhtml/gbf3.shtml. I'm not sure how this changes if you're being a part of the university, but otherwise this could present some problems.

Comment by ryancarey on Worldwide decline of the entomofauna: A review of its drivers · 2019-07-06T00:19:11.234Z · score: 11 (4 votes) · EA · GW

Perhaps you missed the key quotation from my post (emphasized below):

...From our compilation of published scientific reports, we estimate the current proportion of insect species in decline (41%) to be twice as high as that of vertebrates, and the pace of local species extinction (10%) eight times higher, confirming previous findings (Dirzo et al., 2014). At present, about a third of all insect species are threatened with extinction in the countries studied (Table 1). Moreover, every year about 1% of all insect species are added to the list, with such biodiversity declines resulting in an annual 2.5% loss of biomass worldwide (Fig. 2)..."

Is the 2.5% estimate inaccurate?

Comment by ryancarey on I find this forum increasingly difficult to navigate · 2019-07-05T16:08:45.728Z · score: 13 (6 votes) · EA · GW

For explicit sorting and a simple interface, you might like ea.greaterwrong?

Comment by ryancarey on Worldwide decline of the entomofauna: A review of its drivers · 2019-07-05T13:38:05.672Z · score: 3 (2 votes) · EA · GW

What I mean is that working on wild-animal welfare is less important if there are few animals, on any axiology.

Other theoretical arguments for expecting small insect populations: (i) in the long-run future, most life would be on other planets or, in extreme cases, in simulations, where there would be little reason to bring insects; (ii) in the very long run, there's little reason to think creating insects is the optimal way for people to use limited resources to fulfill their own preferences.

Comment by ryancarey on Long Term Future Fund and EA Meta Fund applications open until June 28th · 2019-06-11T00:23:05.329Z · score: 22 (9 votes) · EA · GW

I think there should be an Oxford group that has as its audience the people in EA orgs, with activities to improve happiness, productivity, and the attractiveness of these workplaces; this is quite different from the goal of trying to grow a community of students. On this front, I've been spending time finding group housing near the new office. It would also be good to have short-term housing for visitors, and to have dinners and fun activities on a Friday night. In principle, the range of activities that could be helped by proximity to the Oxford orgs is extremely large, but things that interact more closely with the orgs, like grant recommendations or recruitment, to pick a couple of arbitrary examples, would have to be worked out beforehand.

Comment by ryancarey on Long Term Future Fund and EA Meta Fund applications open until June 28th · 2019-06-10T23:16:25.302Z · score: 32 (16 votes) · EA · GW

I'm not involved with either of these funds, but here are three projects I really want to see happen:

  • More recruiting for EA orgs: FHI wants to grow a bunch and could benefit from having more great researchers referred. The same is probably true of other orgs.
  • Targeted outreach using social-media advertisements: EA is currently doing little outreach for fear of dilution, and is thereby forgoing many of the benefits of our surplus of funds and ideas. Maybe we could do more outreach in a way that doesn't cause dilution, such as by advertising intellectual content in a way that's filtered to just intellectual audiences.
  • An EA Oxford community: there are ~45 employees at FHI/GPI/Forethought/CEA-UK, but almost all of the community activities are run by and directed at students.

Comment by ryancarey on [Question] 20,000/40,000 Hours- MidCareer Options · 2019-05-30T16:43:39.155Z · score: 15 (5 votes) · EA · GW

I think the main answer is that advice for mid- and late-career people is harder to provide. But we can improvise by leveraging the existing research:

  • Could you land any of the positions on 80,000 Hours' job board?
  • Could you switch to working on a high-priority area in general?
  • What are the main skills you've gained from your career? Are they needed by any of the organizations on the job board? Are they needed for starting any new organizations?

Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T22:54:28.137Z · score: 4 (2 votes) · EA · GW

Isn't Matt in HK?

Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T20:34:25.852Z · score: 32 (17 votes) · EA · GW

It would be really useful if this was split up into separate comments that could be upvoted/downvoted separately.

Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T17:17:53.418Z · score: 18 (12 votes) · EA · GW

It's a bit surprising to me that you'd want to send all four volumes.

Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T13:45:27.053Z · score: 43 (22 votes) · EA · GW

This is a strong set of grants, much stronger than the EA community would've been able to assemble a couple of years ago, which is great to see.

When will you be accepting further applications and making more grants?

Comment by ryancarey on Announcing EA Hub 2.0 · 2019-04-08T13:12:03.542Z · score: 23 (11 votes) · EA · GW

In keeping with our ethos, we want to collaborate with other EA projects as much as possible. The Hub presently connects with the EA Forum, EA Work Club, PriorityWiki, EA Donation Swap and Effective Thesis.

I'm not sure much integration would be required, but did you consider linking to the 80k job board? It seems like a really useful recent EA tool that could fit in quite well.

Comment by ryancarey on Should EA Groups Run Organ Donor Registration Drives? · 2019-03-27T18:43:34.461Z · score: 18 (6 votes) · EA · GW

I agree that registering to donate organs after death helps and does no direct harm. But I think we need to have a high bar for including an activity in the typical cache of activities that EAs promote to others: we want the act to be similar to other acts that have near-maximal impact. Donation fits that bill because once you start donating anywhere, you can switch to other donation targets that have a big long-term impact.

For organ donation, though, I don't think it really points you toward anything of real long-term significance. If you go down the organ-donation vertical, you might end up with kidney donation, or with extreme ideas about self-sacrifice. This kind of ideology is really catchy: it brought Zell Kravinsky mild fame, and was the main subject of the book Strangers Drowning. But I don't think that's the main way that long-run good is done; doing long-run good mostly requires an analytical or startup mindset. If you do something like live kidney donation, I actually think you might do less good than by working through the week of your operation and donating some of that week's earnings to a top longtermist charity.

I get that my claim is that the second-order effects outweigh the first-order ones here, but I don't think that should be so surprising in the context of EA outreach: we need to craft an overall package that gets people to do some good in the short run but, most importantly, builds up a productivity mindset and gets people to do a lot of good over the longer term.