Comments

Comment by rjmk on 2017 AI Safety Literature Review and Charity Comparison · 2018-10-13T15:26:02.874Z · EA · GW

Thank you for this excellent post: I began by pulling out quotes that I wanted to reflect on further, but ended up copying most of the post paragraph by paragraph.

I'm still not sure how to get the most value out of the information that has been shared here.

Three ideas:

  1. Sample some summarised papers to (a) get a better idea of what AI safety work looks like and/or (b) build a model of where I might disagree with the post's evaluation of impact

  2. Generate alternatives to the model discussed in the introduction (for example, general public outreach being positive EV), see how that changes the outcome, and then consider which models are most likely

  3. Use as a reading list for preparing for technical work in AI safety

Comment by rjmk on Against prediction markets · 2018-05-25T18:22:45.717Z · EA · GW

On the thin markets problem, there's been some prior work (after some googling I found https://mason.gmu.edu/~rhanson/mktscore.pdf, but I recall reading a paper with a less scary title).
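
For context, the linked Hanson paper appears (judging by the filename) to be about market scoring rules, which address thin markets by having an automated market maker always quote a price. Below is a minimal sketch of the logarithmic market scoring rule; the parameter values are illustrative and not taken from the paper's notation.

```python
import math

# Minimal sketch of a logarithmic market scoring rule (LMSR).
# An automated market maker always quotes a price, so trades are
# possible even when the market is thin. Figures are illustrative.

def lmsr_cost(q, b=100.0):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b)) over outstanding shares q."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i: exp(q_i / b) / sum_j exp(q_j / b)."""
    denom = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / denom

def cost_to_buy(q, i, shares, b=100.0):
    """What a trader pays the market maker to buy `shares` of outcome i."""
    new_q = list(q)
    new_q[i] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(q, b)

q = [0.0, 0.0]                      # two-outcome market, no trades yet
print(lmsr_price(q, 0))             # 0.5
print(cost_to_buy(q, 0, 10.0))      # ~5.1; the quoted price for outcome 0 then rises
```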

In the office case, an obvious downside of incentivising the market is that it may divert labour away from normal work, so non-market solutions may still be superior.

Comment by rjmk on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-05-02T20:44:12.051Z · EA · GW

Thanks for the response. I understand OPP doesn't control the visa process, but do you have a rough sense of how likely a successful applicant would be to get a visa after being sponsored, or is it a complete unknown?

Comment by rjmk on How to improve EA Funds · 2018-04-04T17:56:32.529Z · EA · GW

Thanks for the work on this. It seems very valuable: I agree that the funds seem to be an awesome idea, and that an individual donor should be able to improve their impact easily with one. Unless, that is, issues like the ones you highlight eat up all the gain.

I imagine the data wasn't available, but I thought I'd check: was there any more granular information on the funding history than just the percentage of total donations that remains unallocated? That would seem to make a big difference: the more donations are skewed towards the recent past, the less of a problem discount rates would seem to be.
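
To make the worry concrete, here is a toy sketch (hypothetical figures and a made-up discount rate, not drawn from the post) of why the timing of donations matters and not just the unallocated percentage:

```python
# Toy illustration: the loss from holding funds unallocated under a discount rate r
# depends on how long each donation has sat, not just on the unallocated percentage.

def holding_loss(donations, r):
    """donations: list of (amount, years_unallocated).
    Treats a grant delayed by t years as worth amount / (1 + r)**t."""
    return sum(amount - amount / (1 + r) ** t for amount, t in donations)

r = 0.05                                 # hypothetical annual discount rate
recent_skew = [(100, 0.5), (100, 0.5)]   # same total, mostly recent donations
older_skew  = [(100, 3.0), (100, 3.0)]   # same total, donations held for years

print(holding_loss(recent_skew, r))      # ~4.8: small penalty
print(holding_loss(older_skew, r))       # ~27.2: much larger penalty, same % unallocated
```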

Comment by rjmk on Opportunities for individual donors in AI safety · 2018-04-01T16:25:58.227Z · EA · GW

Thanks Carl, this looks great. By

just get in touch with CEA if you need a chance at a larger pot

do you mean (a) get in touch with CEA if you need a chance at a larger pot than the current lotteries offer or (b) get in touch with CEA if you need a chance at a larger pot by entering a lottery (as there currently aren't any)?

Comment by rjmk on Opportunities for individual donors in AI safety · 2018-03-31T23:21:55.538Z · EA · GW

Thanks Alex! Those sound like useful heuristics, though I'd love to see some experience reports (perhaps I ought to generate them).

I would be interested! I'll reach out via private message.

Comment by rjmk on Would an EA world with limited money fund costly treatments? · 2018-03-31T21:04:55.613Z · EA · GW

That link's broken for me (404)

Comment by rjmk on Opportunities for individual donors in AI safety · 2018-03-30T12:58:58.488Z · EA · GW

This post is excellent. I find the historical work particularly useful, both as a collation of timelines and for the conclusions you tease out of it.

Considering the high quality and usefulness of this post, it is churlish to ask for more, but I'll do so anyway.

Have you given any thought to how donors might identify funding opportunities in the AI safety space? OpenPhil have written about how they found many more giving opportunities after committing to give, but it may be difficult to shop around a more modest personal giving budget.

A fallback here could be the far future EA fund, but I would be keen to hear other ideas.

Comment by rjmk on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2018-03-30T00:28:31.484Z · EA · GW

This seems like a really powerful tool to have in one's cognitive toolbox when considering allocating EA resources. I have two questions on evaluating concrete opportunities.

First, if I can state what I take to be the idea (if I have this wrong, then probably both of my questions are based on a misunderstanding): we can move resources from lower-need situations (i.e. the problem continues as default or improves) to higher-need situations (i.e. the problem gets worse) by investing in instruments that will be doing well if the problem is getting worse (which, because of efficient markets, is balanced by the expectation that they will be doing poorly if the problem is improving).
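
To check my own understanding, here is a toy sketch of that restatement. All numbers are hypothetical, and the crude "impact per dollar doubles if the problem worsens" assumption is mine, not the post's:

```python
import random

def expected_impact(hedged, n=100_000, seed=0):
    """Monte Carlo estimate of donation impact with and without mission hedging."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        worse = rng.random() < 0.5                 # does the problem get worse?
        # The hedged portfolio pays off in "worse" states; by the efficient-markets
        # assumption its expected return matches the unhedged portfolio's.
        wealth = (120 if worse else 80) if hedged else 100
        impact_per_dollar = 2.0 if worse else 1.0  # marginal donations matter more when worse
        total += wealth * impact_per_dollar
    return total / n

print(expected_impact(hedged=False))   # ~150
print(expected_impact(hedged=True))    # ~160: same expected wealth, more expected impact
```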

You mention the possibility that for some causes the dynamics of the cause's progression might mean hedging fails (like fast takeoff AI). Is another possible issue that some problems might unlock more funding as they get worse? For example, dramatic results of climate change might increase funding to fight it sufficiently early. While the possibility of this happening could be taken to undermine the seriousness of the cause ("we will sort it out when it gets bad enough"), if different worsenings unlock different amounts of funding for the same badness, the cause could still be important. So should we focus on instruments that get more valuable when the problem gets worse AND the funding doesn't get better?

My other question was on retirement saving. When pursuing earning-to-give, doesn't it make more sense just to pursue straight expected value? If you think situations in which you don't have a job will be particularly bad, you should just be hedging those situations anyway. Couldn't you just try and make the most expected money, possibly storing some for later high-value interventions that become available?

Thank you for sharing this research! I will consider it when making investment decisions.

Comment by rjmk on When to focus and when to re-evaluate · 2018-03-28T15:32:21.150Z · EA · GW

Not falling prey to sunk cost fallacy, I would switch to the higher impact project and start afresh.

I have often fallen prey to over-negating the sunk cost fallacy. That is, if the sunk cost fallacy is acting as if pursuing the purchased option pays you back its sunk cost, I might end up acting as if I had to pay that cost again to pursue the option.

For example, if you've already bought theatre tickets, but now realise you're not much more excited about going to the play than to the pub, you should still go to the play, because the small increase in expected value is available for free now!
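
Spelling out the over-correction with made-up numbers (my own illustration, not from the post):

```python
# Made-up numbers for the theatre-vs-pub example above.
ticket_cost = 30        # already spent and unrecoverable
value_play  = 50        # how much I'd enjoy the play
value_pub   = 45        # how much I'd enjoy the pub

# Correct reasoning: the ticket is sunk, so compare the remaining values directly.
print(value_play > value_pub)                   # True  -> go to the play

# Over-negating the fallacy: acting as if the ticket must be bought again.
print(value_play - ticket_cost > value_pub)     # False -> wrongly skip the play
```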

I don't think this post is only pointing at problems of the sort above, but it's useful to double-check when re-evaluating projects.

It would also be useful to build an intuition for how projects are distributed by their return on one's own effort. That way you can also estimate the value of information to weigh against search costs.

Comment by rjmk on Talking About Effective Altruism At Parties · 2017-11-17T09:54:32.125Z · EA · GW

Thanks for this post! I think it will make me more comfortable discussing EA in my extended friendship circle.

Which frames work best probably depends on who you're talking to*, but I think the two on global inequality are likely to be useful for me (and are most similar to how I currently approach it).

I particularly like how they begin by explicitly granting the virtue and importance of the more local action. Firstly, it's true, and secondly, when I've seen people change cause focus it's normally been because of arguments that go "helping this person is good, but the reasons for helping this person apply EVEN MORE here".

Remembering to explicitly say that I think the local cause is important and moral is the behaviour change I'll take away from this.

* For example, with most people I meet, I can normally take moral cosmopolitanism for granted.