Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T22:54:28.137Z · score: 4 (2 votes) · EA · GW

Isn't Matt in HK?

Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T20:34:25.852Z · score: 32 (17 votes) · EA · GW

It would be really useful if this was split up into separate comments that could be upvoted/downvoted separately.

Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T17:17:53.418Z · score: 18 (12 votes) · EA · GW

It's a bit surprising to me that you'd want to send all four volumes.

Comment by ryancarey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T13:45:27.053Z · score: 43 (22 votes) · EA · GW

This is a strong set of grants, much stronger than the EA community would've been able to assemble a couple of years ago, which is great to see.

When will you be accepting further applications and making more grants?

Comment by ryancarey on Announcing EA Hub 2.0 · 2019-04-08T13:12:03.542Z · score: 23 (11 votes) · EA · GW
In keeping with our ethos, we want to collaborate with other EA projects as much as possible. The Hub presently connects with the EA Forum, EA Work Club, PriorityWiki, EA Donation Swap and Effective Thesis.

I'm not sure much integration would be required, but did you consider linking the 80k jobs board? This seems like a really useful recent EA tool that could fit in quite well.

Comment by ryancarey on Should EA Groups Run Organ Donor Registration Drives? · 2019-03-27T18:43:34.461Z · score: 18 (6 votes) · EA · GW

I agree that registering for organ donation after death helps, and does no direct harm. But I think we need to have a high bar for including an activity in the typical cache of activities that EAs promote to others. We want the act to be similar to other acts that have near-maximal impact. Donation fits that bill because once you start donating anywhere, you can switch to other donation targets that have a big long-term impact.

For organ donation, though, I don't think it really gives you ideas about anything that can be done with any real long-term significance. If you go down the organ-donation vertical, you might end up with kidney donation, or with extreme ideas about self-sacrifice. This kind of ideology is really catchy: it brought Zell Kravinsky mild fame, and was the main subject of the book Strangers Drowning. But I don't think that's the main way that long-run good is done. I think doing long-run good mostly requires a more analytical or startup mindset. If you do things like live kidney donation, I actually think you might do less good than by working the week of your operation and donating some of those earnings to a top longtermist charity.

I get that my claim is that the second-order effects outweigh the first-order ones here, but I don't think that should be so surprising in the context of EA outreach: we need to craft an overall package that gets people to do some good in the short run but, most importantly, that builds up a productivity mindset and gets people to do a lot of good over the longer term.

Comment by ryancarey on SHOW: A framework for shaping your talent for direct work · 2019-03-22T11:08:57.991Z · score: 17 (6 votes) · EA · GW

I hear more people do cold outreach about being a researcher than about being an RA, and my guess is that 3-10x more people apply for researcher jobs than for RA jobs even when the latter are advertised. I think it's a combination of those two factors.

My recommendation would be that people apply more to RA jobs that are advertised, and also reach out to make opportunities for themselves when they are not.

I think about half of researchers could use a research assistant, whether or not they are currently hiring for one. A major reason researchers don't make research assistant positions available is that they don't expect to find a candidate worth hiring, and so don't want to incur the administrative burden. Or maybe they don't feel comfortable asking their bosses for this. But if you are a strong candidate, reaching out cold may result in you being hired, or may trigger a hiring round for that position. That said, the strong candidates are often people I have met at an EA conference, who got far in an internship application, or who have been referred to me.

I don't think the salaries would be any lower than competitive rates.

Comment by ryancarey on Request for comments: EA Projects evaluation platform · 2019-03-21T19:00:03.741Z · score: 12 (5 votes) · EA · GW

This is an uncharitable reading of my comment in many ways.

First, you suggest that I am worried that you want to recruit people not currently doing direct work. All things being equal, of course I would prefer to recruit people with fewer alternatives. But all things are not equal. If you use people you know for the initial assessments, you will much more quickly be able to iron out bugs in the process. In the testing stages, it's best to have high-quality workers that can perceive and rectify problems, so this is a good use of time for smart, trusted friends, especially since it can help you postpone the recruitment step.

Second, you suggest that I am in the dark about the importance of consensus-building. But this assumes that I believe the only use for consultation is to reach agreement. Rather, by talking to the groups working in related spaces (BERI, Brendon, EA Grants, EA Funds, and donors), you will of course learn some things, and your beliefs will probably move closer together. On aggregate, your process will improve. But you will also build relationships that will help you to share proposals (and, in my opinion, funders).

Third, you raise the issue of connecting funding with evaluation. Of course, the distortionary effect is significant. I happen to think the effect from creating an incentive for applicants to apply is larger and more important, and funders should be highly engaged. But there are also many ways that you could have funders be moderately engaged. You could check what would be a useful report for them, that would help them to decide to fund something. You could check what projects they are more likely to fund.

The more strategic issue is as follows. Consensus is hard to reach, but a funding platform is a good that scales with the size of its network of applicants (and, imo, funders); it is somewhat of a natural monopoly (although we want there to be at least a few funders). You eventually want widespread community support of some form. As you suggest, I think that means we need some compromise, but I think it also weighs in favour of more consultation, and in favour of a more experimental approach, in which projects are started in a simple form.

Comment by ryancarey on Request for comments: EA Projects evaluation platform · 2019-03-21T12:11:55.128Z · score: 25 (12 votes) · EA · GW

I'm a big fan of the idea of having a new EA projects evaluation pipeline. Since I view this as an important idea, I think it's important to get the plan to the strongest point that it can be. From my perspective, there are only a smallish number of essential elements for this sort of plan. It needs a submissions form, a detailed RFP, some funders, and some evaluators. But we don't yet have all of these (e.g. detail on desired projects, or consultation with funders), and I'm confused about some of the other things that are emphasised instead: large initial scale, a process for recruiting volunteer-evaluators, and fairly rigid evaluation procedures. I think the fundamentals of the idea are strong enough that this still has a chance of working, but I'd much prefer to see the idea advanced in its strongest possible form. My previous comments on this draft are pretty similar to Oliver's, and here are some of the main ones:

This makes sense to me as an overall idea. I think this is the sort of project where if you do it badly, it might dissuade others from trying the same. So I think it is worth getting some feedback on this from other evaluators (BERI/Brendon Wong). It would also probably be useful to get feedback from 1-2 funders (maybe Matt Wage? Maybe someone from OpenPhil?), so that you can get some information about whether they think your evaluation process would be of interest to them, or what might make it so. It could also be useful to have unofficial advisors.

I predict the process could be refined significantly with ~3 projects.

You only need a couple of volunteers and you know perhaps half of the best candidates, so for the purpose of a pilot, did you consider just asking a couple of people you know to do it?

I think you should provide a ~800-word request for proposals. Then you can give a much more detailed description of who you want to apply. E.g. just longtermist projects? How does this differ from the scope of EA Grants, BERI, OpenPhil, etc.? Is it sufficient to apply with just an idea? Do you need a team? A proof of concept?

This would be strengthened somewhat by already having obtained the evaluators, but this may not be important.

Comment by ryancarey on SHOW: A framework for shaping your talent for direct work · 2019-03-19T18:41:49.049Z · score: 4 (2 votes) · EA · GW

I was influenced at that time by people like Matt Fallshaw and Ben Toner, who thought that for sufficiently good intellectual work, funding would be forthcoming. It seemed like insights were mostly what was needed to reduce existential risks...

Comment by ryancarey on SHOW: A framework for shaping your talent for direct work · 2019-03-19T15:51:26.758Z · score: 5 (3 votes) · EA · GW

I thought that more technical skills were rarer, were neglected in some parts of academia (e.g. in history), and were the main thing holding me back from being able to understand papers about emerging technologies... Also, I asked Carl S, and he thought that if I was to go into research, these would be the best skills to get. Nowadays, one could ask a lot more different people.

Comment by ryancarey on The career coordination problem · 2019-03-17T20:30:35.207Z · score: 12 (8 votes) · EA · GW

I don't think this idea was mine originally, but it would go a long way just to have two pie charts: the current distribution of careers in EA, and the optimal distribution.

Comment by ryancarey on SHOW: A framework for shaping your talent for direct work · 2019-03-13T02:09:47.346Z · score: 5 (3 votes) · EA · GW

Ryan/Tegan: Did you get your "something like thirty times lower" estimate from any particular research organization(s)?

This is an order-of-magnitude estimate based on experience at various orgs. I've asked to be a research assistant for various top researchers, and generally I'm the only person asking at that time. I've also rarely heard from researchers that someone has asked to research-assist for them. Some of this is because RA job descriptions are less common, but I would guess that there is still an effect even when there are RA job descriptions.

SHOW: A framework for shaping your talent for direct work

2019-03-12T17:16:44.885Z · score: 124 (64 votes)
Comment by ryancarey on Unsolicited Career Advice · 2019-03-09T11:46:50.538Z · score: 4 (2 votes) · EA · GW

Cover letters to core EA orgs from EAs generally indicate interest in EA. It's sometimes also indicated by involvement in EA groups, through a CV, by referral sources, and by interviews. You can pretty reliably tell.

Comment by ryancarey on Unsolicited Career Advice · 2019-03-05T14:27:53.505Z · score: 20 (10 votes) · EA · GW

Hundreds of EA applicants? Most EA org roles don't have that... I've been in/around MIRI, Ought, FHI and many other EA orgs. It's common to have about a hundred applicants for a role (research or ops) and the number of EA applicants is usually in the tens.

Comment by ryancarey on Pre-announcement and call for feedback: Operations Camp 2019 · 2019-02-20T17:47:57.424Z · score: 12 (6 votes) · EA · GW

Hey Jorgen,

That would honestly be my guess. Some people would call this cynical, but I think the amount of skill you're going to impart in 4 days, or even in a very long ~5-week camp, is pretty limited compared to the variation in people's innate dispositions and the experience gained over their whole lifetime beforehand.

Comment by ryancarey on Pre-announcement and call for feedback: Operations Camp 2019 · 2019-02-20T01:29:01.249Z · score: 4 (2 votes) · EA · GW
A potential failure mode is that applicants believe the camp is a guaranteed way of being hired. Participants should not expect that this camp is guaranteeing, or making any promises whatsoever, about increasing the chances of getting a relevant position.

Yep! Although I'd emphasise that this issue can also be solved by being more selective. If you pick some combo of 1) reasonably strong candidates straight out of university who are happy to work entry-level admin jobs, and 2) candidates with some PM experience who are prepared to work as a PM at an EA org, including a community org, then that cohort is reasonably likely to leave happy (versus, say, picking a bunch of people with lower levels of employment, who are strongly location-restricted, or who are otherwise particular about the kinds of jobs they would accept). I think the impact from recruiting, identifying, filtering, and referring the already semi-strong candidates is already something to get excited about!

Comment by ryancarey on Requesting community input on the upcoming EA Projects Platform · 2018-12-11T14:10:39.189Z · score: 20 (8 votes) · EA · GW

Yeah, I agree with Jan that you should take things slowly. Also, my advice is that the following two bottlenecks are important but relatively easy to relieve: buy-in from community leaders, and support from EA institutions. So you should invest in these by having meetings and by getting some people in relevant organizations to take on advising roles.

Ultimately, I think you have the right general idea, though. Current community-based orgs are capacity-limited, and so some major projects like this should stand alone.

Comment by ryancarey on Crohn's disease · 2018-11-16T10:27:32.175Z · score: 5 (3 votes) · EA · GW

He's just saying he thinks there's a 0.005 chance of detecting a real effect.

Comment by ryancarey on Crohn's disease · 2018-11-16T00:19:54.738Z · score: 5 (3 votes) · EA · GW

If you use a two-tailed test and find a positive effect with p<0.05, it's <0.025 likely that you'd get a positive effect that big by chance. If you don't understand that, then you should look up two-tailed tests.
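To make the two-tailed point concrete, here's a quick simulation sketch (my own illustration, not from the thread): under the null hypothesis, a result that is both positive and significant at the two-tailed 0.05 level occurs only about 2.5% of the time, because the 5% rejection region is split between the two tails.

```python
import random
from statistics import NormalDist

# Under the null (no real effect), the test statistic is standard normal.
# A two-tailed test at alpha = 0.05 rejects when |z| > ~1.96, and only
# the upper half of those rejections look like a *positive* effect.
z_crit = NormalDist().inv_cdf(1 - 0.05 / 2)  # ~1.96

random.seed(0)
n_trials = 200_000
positive_significant = sum(
    random.gauss(0, 1) > z_crit for _ in range(n_trials)
) / n_trials

print(round(positive_significant, 3))  # ~0.025: half of the 5% rejection rate
```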

Comment by ryancarey on Crohn's disease · 2018-11-14T21:12:33.413Z · score: 4 (2 votes) · EA · GW

"A cheap cure for Crohn's could save some large fraction of the $33B spent on Crohn's per year, and these funds could save thousands of lives per year if spent on other diseases."

Comment by ryancarey on Crohn's disease · 2018-11-14T10:43:48.020Z · score: 18 (8 votes) · EA · GW

I mostly agree with this, but I think it's also wrong in a couple of places.

Crohn's disease is not a spondyloarthritis! (And neither is psoriasis, ulcerative colitis, or acute anterior uveitis.) As the name suggests, spondyloarthritides are arthritides (i.e. diseases principally of joints; the 'spondylo' prefix points to joints between vertebrae), whereas Crohn's is a disease of the GI tract.

I think this is just restating the hypothesis that Crohn's shares (most of) its pathophysiology with the spondyloarthritides, which is a well-known open possibility. The incidence of Crohn's is >10% in people with AS, and vice versa. They share heredity, including HLA-B27. Apparently 2/3 of those with AS also have silent gut signs [1].

Also, I think the following is off the mark:

Although these are imperfect, if the person behind the project doesn't have credentials in a relevant field (bioinformatics rather than gastroenterology, say), and/or a fairly slender relevant publication record, and scant/no interest from recognised experts, these are also adverse indicators. (Remember the nobel-prize winner endorsed Vit C megadosing?)

Note that the author did manage to co-author his latest piece with an ophthalmologist/rheumatologist with a professorship in inflammation research and 20k cites.

Overall, the parts of the objection that I agree most with are i) that it seems very unlikely that one or two fungi would be implicated with all of these 14 various diseases, and that treating the fungus would cure the inflammatory disease (rather than the fungus just acting as an initial trigger), and ii) that there are mistakes, especially semantic ones, and especially on malassezia.org (as opposed to in the papers), with some of the medical science.

The interesting question seems to me to be whether an overconfident-seeming author could nonetheless be correct about the minimal prediction that some antifungals would work well in at least Crohn's disease. I don't yet see why this is <1% likely.

1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2996322/

2. https://www.ncbi.nlm.nih.gov/pubmed/29675414, https://en.wikipedia.org/wiki/James_T._Rosenbaum

Comment by ryancarey on Crohn's disease · 2018-11-14T00:36:36.129Z · score: 3 (2 votes) · EA · GW

33000/7 * 25 is ~120k, not 1.2M.
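For anyone checking the correction (treating the inputs 33000, 7, and 25 as given from the exchange above):

```python
# 33000/7 * 25 comes out near 120k, an order of magnitude below 1.2M
result = 33_000 / 7 * 25
print(round(result))  # 117857, i.e. ~120k
```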

Comment by ryancarey on Crohn's disease · 2018-11-13T21:33:54.523Z · score: 2 (1 votes) · EA · GW

Hm, I don't have enough expertise to efficiently evaluate this. But I think someone should.

Comment by ryancarey on Crohn's disease · 2018-11-13T20:47:38.947Z · score: 4 (3 votes) · EA · GW

Hm, RA + Prostate Ca seems to be 0.3% of all DALYs and 1.2% of those in rich North America, based on Table A2 here. So the important matter seems to be evaluating the plausibility of the fungal hypothesis.

Comment by ryancarey on Crohn's disease · 2018-11-13T19:52:43.619Z · score: 13 (5 votes) · EA · GW

Hmm, this post seems overconfident, and it's a seriously way-out-there hypothesis. But between RA + AS + IBD + prostate Ca, we're probably talking about a few percent of first-world morbidity, or at least health expenditure. The authors include some folks with great credentials, and the written argument in Laurence 2018 seems reasonable (on a skim). It's very plausible that I've missed glaring mistakes in their analysis, but if not, this seems like a high-EV experiment.

Comment by ryancarey on Crohn's disease · 2018-11-13T19:18:32.972Z · score: -7 (1 votes) · EA · GW

[deleted]

Comment by ryancarey on Which World Gets Saved · 2018-11-11T16:33:07.752Z · score: 4 (5 votes) · EA · GW

Even if you do think possible doom is near, you might want an intermediate simplification like "some people think about consequentialist philosophy while most mitigate catastrophes that would put this thinking process at risk".

Comment by ryancarey on Which World Gets Saved · 2018-11-10T00:42:12.777Z · score: 9 (13 votes) · EA · GW

I'd guess that we don't have to think much about which world we're saving. My reasoning would be that the expected value of the world in the long run is mostly predictable from macrostrategic -- even philosophical -- considerations. E.g. agents more often seek things that make them happier. The overall level of preference fulfilment that is physically possible might be very large. There's not much reason to think that pain is easier to create than pleasure (see [1] for an exploration of the question), and we'd expect the sum of recreation and positive incentivization to exceed the amount of disincentivization (e.g. retribution or torture).

I think a proper analysis of this would vindicate the existential risk view as a simplification of maximize utility (modulo problems with infinite ethics (!)). But I agree that all of this needs to be argued for.

1. http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html

Comment by ryancarey on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-14T08:59:42.598Z · score: 1 (1 votes) · EA · GW

we just need to bear in mind that these roles require a very unusual skill-set, so people should always have a good back-up plan

Hmm, if EA work is valuable but the selection bar excludes most EAs, that could actually mean some/many of the following:

  • many people should just have a different plan A.
  • we need to get much better at selecting the best talent
  • we need to recruit much more selectively
  • EAs should have stronger backup plans
Comment by ryancarey on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-13T15:49:54.111Z · score: 4 (4 votes) · EA · GW

Is it fair to summarize the thesis as: there is heaps of super-valuable talent out there, but the main reason we can't cash it in is that it can't be absorbed into existing managerial structures?

If so, then shouldn't we be advocating aggressively for absorbing the talent through greater funding of and more infrastructure for new EA orgs and EA contractor roles?

Comment by ryancarey on EA needs a cause prioritization journal · 2018-09-16T07:25:04.263Z · score: 6 (6 votes) · EA · GW

I agree that special issues are the sensible intermediate step.

Even for GCR, we ought to check that the special issues are not just using up the backlog of decent content (a first-album-syndrome where the first few issues are great). So I'd like to see us sticking with special issues for a little longer, and to see an ongoing improvement in quality and volume of content before we commit to a standalone journal of 2-4 issues per year. But opinions can reasonably differ, of course!

Comment by ryancarey on 500 Million, But Not A Single One More · 2018-09-15T04:01:07.602Z · score: 3 (3 votes) · EA · GW

Thanks Jai! I thought this piece was outstanding. I also loved What Almost Was.

Comment by ryancarey on EA Funds - An update from CEA · 2018-08-08T06:30:30.006Z · score: 3 (3 votes) · EA · GW

Agreed. You could get a higher effective ROI by mission-hedging -- investing AI-risk funds in things like Google. But even then, the returns seem like a pretty second-order issue.

Comment by ryancarey on Update on Envision: progress thus far and next steps · 2018-07-20T10:48:45.367Z · score: 0 (0 votes) · EA · GW

Is this likely to occur again in 2018?

Comment by ryancarey on EA Forum 2.0 Initial Announcement · 2018-07-19T23:40:40.263Z · score: 10 (12 votes) · EA · GW

This seems like a strong plan, and I'm glad you've thought things through thoroughly. I'll just outline points of agreement, and slight differences.

I certainly agree with the approach of building EA Forum 2 from LessWrong 2. After all, the current EA Forum was built from LessWrong 1 for similar reasons. We had a designer sketch the restyled site, and this was quite a positive experience, so I'd recommend doing the same with the successor. Basically, the EA Forum turned out quite a bit more beautiful than LessWrong, and the same should be possible again. I think there are some easy wins to be had here, like making LW2's front-page text a bit darker, but I also think it's possible to go beyond that and make things really pretty all-around.

I agree with keeping LW2's new Karma system, and method of ordering posts, and I think that this is a major perk of the codebase. I'm also happy to see that you seem to have the downsides well-covered.

One small difference is where you say: "Although CEA has a view on which causes we should prioritize, we recognize that the EA Forum is a community space that should reflect the community." Personally, I think that forum administrators should be able to shape the content of the forum a little bit: not by carrying out biased moderation, but by various measures that are considered "fair", like producing content, promoting content, voting, and so on.

I think the possible features are kind-of interesting. My thoughts are as follows:

  • different landing pages: may be good
  • local group pages: may be good, but maybe events are best left on Facebook. Would be amazing if you can automatically include Facebook events, but I've no idea whether that's feasible.
  • additional subforums: probably bad, because I think the community is currently only large enough to support ~2 active fora, and having multiple fora adds confusion and reduces ease-of-use.
  • Single sign-on: likely to be good, since things are being consolidated to one domain.

Thanks again to Trike Apps for running the forum over all these years, and thanks to CEA for taking over. With my limited time, it would never have been possible to transition the forum over to new software, and so we would have been in a much worse position. So thanks all!

Comment by ryancarey on Open Thread #40 · 2018-07-16T06:53:29.938Z · score: 1 (1 votes) · EA · GW

Every 2-3 months seems good.

Comment by ryancarey on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T11:07:25.747Z · score: 0 (0 votes) · EA · GW

The concept starts with a website that has a fully digital grant application process. Applicants create user accounts that let them edit applications, and applicants can choose from a variety of options like having the grant be hidden or publicly displayed on the website, and posting under their real names or a pseudonym. Grants have discussions sections for the public to give feedback. Anonymous project submission help people get feedback without reputation risk and judge project funding potential before committing significant time and resources to a project. If the applicant opts to make an application public, it is displayed for everyone to see and comment on. Anyone can contact the project creator, have a public or private discussion on the grant website, and even fund a project directly.

What does this achieve that Google Docs linked from the EA Forum can't achieve? I think it should start with a more modest MVP that works within existing institutions and more extensively leverages existing software products.

The website is backed by a centralized organization that decides which proposals to fund via distributed grantmaking. Several part-time or full-time team members run the organization and assess the quality and performance of grantmakers. EAs in different cause areas can apply to be grantmakers. After an initial evaluation process, beginner grantmakers are given a role like “grant advisor” and given a small grantmaking budget. As grantmakers prove themselves effective, they are given higher roles and a larger grantmaking budget.

This sounds good.

While powered by dencentralized grantmakers, the organization has centralized funding options for donors that do not want to evaluate grants themselves.

I'm not sure what you mean by "centralized funding options".

Donations can be tax-deductible, non-tax-deductible, or even structured as impact investments into EA initiatives. Donors can choose cause areas to fund, and can perhaps even fund individual grantmakers.

This sounds good.

Comment by ryancarey on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-10T11:29:56.533Z · score: 9 (5 votes) · EA · GW

Nice post, Brendon!

I've been of the view for the last couple of years that it'd be useful to have more dedicated effort put toward funding EA projects.

I have a few factual contributions that should help flesh out your strategic picture here:

  1. BERI, in addition to EA Grants, is funding some small-scale projects. In the first instance, one might want to bootstrap a project like this through BERI, given that they already have some funding available and are a major innovator in the EA support space right now.
  2. OpenPhil does already do some regranting.
  3. EA Ventures attempted, over the course of some months, to do this a few years ago, which you can read at least a bit about here: http://effective-altruism.com/ea/fo/announcing_effective_altruism_ventures/. I think it failed for a range of reasons including inadequate projects, but it would be worth looking into this further.

Notwithstanding these factors, I still think this idea is worth exploring. As you suggest, I might start off by creating a grant application system. But I think the most important aspects are probably not the system itself so much as the quality of the evaluators and the volume of the funders. So it might be best to try to bootstrap it from an existing organization or funder, and to initially accept applications via a low-tech system, such as Google Doc proposals. I'd also emphasise that one good aspect of the status quo is that bad ideas mostly go unfunded at present, especially ones whose low quality could damage the reputation of EA and its associated research fields, or ones that could inspire harmful activity. There are more potentially harmful projects in the EA world than in entrepreneurship in general, and so such projects might be overlooked by people taking an entrepreneurial or open-source stance; this is worth guarding against.

One meta-remark is that I generally like the conversations that are prompted by shared Google Docs, and I think that this generates, on average, more extensive and fine-grained feedback than a Forum Post would typically receive. So if you put out a "nonprofit business plan" for this idea, then I figure a Google Doc (+/- links from the Forum and relevant Facebook groups) would be a great format. Moreover, I'd be happy to provide further feedback on this idea in the future.

Comment by ryancarey on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-26T08:11:43.586Z · score: 7 (4 votes) · EA · GW

Do you have a plan for scanning over posted materials (analogously to moderation on the EA Forum), a code of conduct for posts, or a procedure for discreetly flagging hazardous content?

Comment by ryancarey on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-19T00:20:36.654Z · score: 6 (8 votes) · EA · GW

Do you have a plan for managing information hazards?

Comment by ryancarey on Enlightened Concerns of Tomorrow · 2018-05-29T17:32:25.999Z · score: 1 (1 votes) · EA · GW

I agree with the characterization of the discussion, but regardless, you can find it here: https://www.youtube.com/watch?v=H_5N0N-61Tg&t=86m12s

Comment by ryancarey on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-02T23:32:16.506Z · score: 1 (11 votes) · EA · GW

I think this has turned out really well, Max. I like that this project looks set to aid movement growth while improving the movement's intellectual quality, because the content is high-quality and representative of current EA priorities. Maybe the latter is the larger benefit; probably it will help everyone to feel more confident in accelerating movement growth over time, and so I hope we can find more ways to have a similar effect!

Comment by ryancarey on Why not to rush to translate effective altruism into other languages · 2018-03-05T12:08:49.627Z · score: 6 (6 votes) · EA · GW

Is it still alien and uncool if you look at an article as a whole and just rewrite it from scratch in French, rather than translating each line? (Kind-of like if I lose my copy of an essay and then rewrite the same ideas in new prose.)

Comment by ryancarey on Where Some People Donated in 2017 · 2018-02-13T09:44:10.327Z · score: 3 (3 votes) · EA · GW

I donated to MIRI and GCRI.

Also, the link to Zvi's writeup seems to be missing?

Comment by ryancarey on 69 things that might be pretty effective to fund · 2018-01-22T00:28:22.332Z · score: 6 (6 votes) · EA · GW

Is there some reason not to have them sorted by cause?

AI alignment prize winners and next round [link]

2018-01-20T12:07:16.024Z · score: 7 (7 votes)
Comment by ryancarey on Two critical Mega-trends that Effective Altruism has missed so far [Edited] · 2018-01-20T09:56:31.662Z · score: 6 (10 votes) · EA · GW

obviously the universe is finite

We can only go as far as saying that the accessible universe is finite, according to current prevailing theories.

Comment by ryancarey on [Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale · 2018-01-14T12:01:42.745Z · score: 2 (2 votes) · EA · GW

I haven't read the whole paper yet, so forgive me if I miss some of the major points by just commenting on this post.

The image seems to imply that non-aligned AI would only extinguish human life on Earth. How do you figure that? It seems that an AI could extinguish all the rest of life on Earth too, even including itself in the process. [edit: this has since been corrected in the blog post]

For example, you could have an AI system that has the objective of performing some task X, before time Y, without leaving Earth, and then harvests all locally available resources in order to perform that task, before eventually running out of energy and switching off. This would seem to extinguish all life on Earth by any definition.

We could also discuss whether AI might extinguish all civilizations in the visible universe. This also seems possible. One reason for this is that humans might be the only civilization in the universe.

Comment by ryancarey on Blood Donation: (Generally) Not That Effective on the Margin · 2018-01-10T20:57:39.487Z · score: 0 (0 votes) · EA · GW

I revisited this question earlier today. Here's my analysis with rough made-up numbers.

I think each extra time you donate blood, it saves less than 0.02 expected lives.

Suppose half the benefits come from the red blood cells.

Each blood donation gives half a unit of red blood cells (since a unit of blood is ~300ml).

Each red blood cell transfusion uses 2-3 units on average, and saves a life <5% of the time.

So on average every ~5 donations would save 0.1 lives (supposing the red blood cells are half of the impact).

But each marginal unit of blood is worth much less than the average because blood is kept in reserve for when it's most needed.

So it should be less than 0.02 lives saved per donation, and possibly much less. If saving a life via AMF costs a few thousand dollars, and most EAs should value their time at tens of dollars an hour or more, then pretty much all EAs should not donate their blood, at least as far as altruistic reasons go.

I could be way wrong here, especially if the components other than red blood cells are providing a large fraction of the value.
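The rough numbers above can be sketched as a short back-of-envelope calculation. All figures here are the made-up estimates from this comment, not real data, and the result is an upper bound since marginal units are worth less than average ones:

```python
# Back-of-envelope estimate: expected lives saved per blood donation.
# Every input is a rough guess from the comment above, not measured data.

rbc_share_of_benefit = 0.5    # suppose half the benefit comes from red blood cells
rbc_units_per_donation = 0.5  # each donation yields ~half a unit of red cells
units_per_transfusion = 2.5   # a red-cell transfusion uses 2-3 units on average
p_life_saved = 0.05           # a transfusion saves a life <5% of the time

lives_per_unit = p_life_saved / units_per_transfusion        # 0.02 lives per unit
lives_from_rbc = rbc_units_per_donation * lives_per_unit     # 0.01 lives per donation
lives_per_donation = lives_from_rbc / rbc_share_of_benefit   # 0.02 lives per donation

print(lives_per_donation)  # prints 0.02
```

At ~0.02 lives per donation and a few thousand dollars per life saved via AMF, a donation's altruistic value works out to well under a hundred dollars before the (large) marginal discount, which is what drives the conclusion above.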

Comment by ryancarey on Cosmic EA: How Cost Effective Is Informing ET? · 2017-12-31T14:37:10.172Z · score: 1 (3 votes) · EA · GW

Perhaps you could re-evaluate this question in light of Bostrom's findings in Astronomical Waste? The overriding impacts relate to the risk of extinction of all life (which alien contact could bring about, or perhaps avert), rather than to the opportunity costs of technological development.

The Threat of Nuclear Terrorism MOOC [link]

2017-10-19T12:31:12.737Z · score: 7 (7 votes)

Informatica: Special Issue on Superintelligence

2017-05-03T05:05:55.750Z · score: 7 (7 votes)

Tell us how to improve the forum

2017-01-03T06:25:32.114Z · score: 4 (4 votes)

Improving long-run civilisational robustness

2016-05-10T11:14:47.777Z · score: 9 (9 votes)

EA Open Thread: October

2015-10-10T19:27:04.119Z · score: 1 (1 votes)

September Open Thread

2015-09-13T14:22:20.627Z · score: 0 (0 votes)

Reducing Catastrophic Risks: A Practical Introduction

2015-09-09T22:33:03.230Z · score: 5 (5 votes)

Superforecasters [link]

2015-08-20T18:38:27.846Z · score: 4 (4 votes)

The long-term significance of reducing global catastrophic risks [link]

2015-08-13T22:38:23.903Z · score: 4 (4 votes)

A response to Matthews on AI Risk

2015-08-11T12:58:38.930Z · score: 11 (11 votes)

August Open Thread: EA Global!

2015-08-01T15:42:07.625Z · score: 3 (3 votes)

July Open Thread

2015-07-02T13:41:52.991Z · score: 4 (4 votes)

[Discussion] Are academic papers a terrible discussion forum for effective altruists?

2015-06-05T23:30:32.785Z · score: 3 (3 votes)

Upcoming AMA with new MIRI Executive Director, Nate Soares: June 11th 3pm PT

2015-06-02T15:05:56.021Z · score: 1 (3 votes)

June Open Thread

2015-06-01T12:04:00.027Z · score: 4 (4 votes)

Introducing Alison, our new forum moderator

2015-05-28T16:09:26.349Z · score: 9 (9 votes)

Three new offsite posts

2015-05-18T22:26:18.674Z · score: 4 (4 votes)

May Open Thread

2015-05-01T09:53:47.278Z · score: 1 (1 votes)

Effective Altruism Handbook - Now Online

2015-04-23T14:23:28.013Z · score: 26 (28 votes)

One week left for CSER researcher applications

2015-04-17T00:40:39.961Z · score: 2 (2 votes)

How Much is Enough [LINK]

2015-04-09T18:51:48.656Z · score: 3 (3 votes)

April Open Thread

2015-04-01T22:42:48.295Z · score: 2 (2 votes)

Marcus Davis will help with moderation until early May

2015-03-25T19:12:11.614Z · score: 5 (5 votes)

Rationality: From AI to Zombies was released today!

2015-03-15T01:52:54.157Z · score: 6 (8 votes)

GiveWell Updates

2015-03-11T22:43:30.967Z · score: 4 (4 votes)

Upcoming AMA: Seb Farquhar and Owen Cotton-Barratt from the Global Priorities Project: 17th March 8pm GMT

2015-03-10T21:25:39.329Z · score: 4 (4 votes)

A call for ideas - EA Ventures

2015-03-01T14:50:59.154Z · score: 3 (3 votes)

Seth Baum AMA next Tuesday on the EA Forum

2015-02-23T12:37:51.817Z · score: 7 (7 votes)

February Open Thread

2015-02-16T17:42:35.208Z · score: 0 (0 votes)

The AI Revolution [Link]

2015-02-03T19:39:58.616Z · score: 10 (10 votes)

February Meetups Thread

2015-02-03T17:57:04.323Z · score: 1 (1 votes)

January Open Thread

2015-01-19T18:12:55.433Z · score: 0 (0 votes)

[link] Importance Motivation: a double-edged sword

2015-01-11T21:01:10.451Z · score: 3 (3 votes)

I am Samwise [link]

2015-01-08T17:44:37.793Z · score: 4 (4 votes)

The Outside Critics of Effective Altruism

2015-01-05T18:37:48.862Z · score: 11 (11 votes)

January Meetups Thread

2015-01-05T16:08:38.455Z · score: 0 (0 votes)

CFAR's annual update [link]

2014-12-26T14:05:55.599Z · score: 1 (3 votes)

MIRI posts its technical research agenda [link]

2014-12-24T00:27:30.639Z · score: 4 (6 votes)

Upcoming Christmas Meetups (Upcoming Meetups 7)

2014-12-22T13:21:17.388Z · score: 0 (0 votes)

Christmas 2014 Open Thread (Open Thread 7)

2014-12-15T16:31:35.803Z · score: 1 (1 votes)

Upcoming Meetups 6

2014-12-08T17:29:00.830Z · score: 0 (0 votes)

Open Thread 6

2014-12-01T21:58:29.063Z · score: 1 (1 votes)

Upcoming Meetups 5

2014-11-24T21:02:07.631Z · score: 0 (0 votes)

Open thread 5

2014-11-17T15:57:12.988Z · score: 1 (1 votes)

Upcoming Meetups 4

2014-11-10T13:54:39.551Z · score: 0 (0 votes)

Open Thread 4

2014-11-03T16:57:07.873Z · score: 1 (1 votes)

Upcoming Meetups 3

2014-10-27T22:02:04.564Z · score: 0 (0 votes)

One month in - it's time for more introductions

2014-10-10T22:51:51.504Z · score: 6 (6 votes)