Comments

Comment by Paul_Crowley on AMA: Tom Chivers, science writer, science editor at UnHerd · 2021-03-20T15:49:18.540Z · EA · GW

*Loads* of people saw the title and thought "oh, this is a book about how AI is Good, Actually". For anyone who doesn't know, the full quote is Eliezer's: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." I much preferred the old title, but I guess I shouldn't be surprised people didn't get it!

Comment by Paul_Crowley on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-04T02:51:48.650Z · EA · GW

"ultimately I made offers to two candidates both of which I had had strong gut feelings about very early, which was rewarding but also highly frustrating." - I hope this comment doesn't come across as incredibly mean, but, are you getting that from notes made at the time? When I find myself thinking "this is what I thought we'd do all along", I start to suspect I've conveniently rewritten my memories of what I thought. Do you have a sense of how many candidates you had similar strong positive gut feelings about?

Thank you for a very helpful comment!

Comment by Paul_Crowley on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-04T02:24:28.129Z · EA · GW

When I applied to Google I did a phone interview and a full day of in-person interviews, plus a one-hour conference call about how to do well in the second round. Lots of people devote significant time to brushing up their coding interview skills as well; I only didn't because things like Project Euler had already brushed up those skills for me.

Comment by Paul_Crowley on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-04T02:19:35.110Z · EA · GW

Of course, the one who writes the post about it is likely to be the outlier rather than the median.

Comment by Paul_Crowley on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-03T18:19:27.079Z · EA · GW

If you can't afford it, doesn't that suggest that earning to give might not be such a bad choice after all?

Comment by Paul_Crowley on Leverage Research: reviewing the basic facts · 2018-08-04T22:35:43.052Z · EA · GW

Could you comment specifically on the Wayback Machine exclusion? Thanks!

Comment by Paul_Crowley on What Should the Average EA Do About AI Alignment? · 2017-02-25T21:05:30.435Z · EA · GW

Nitpick: "England" here probably wants to be something like "the south-east of England". There's not a lot you could do from Newcastle that you couldn't do from Stockholm; you need to be within travel distance of Oxford, Cambridge, or London.

Comment by Paul_Crowley on Contra the Giving What We Can pledge · 2016-12-05T00:09:50.169Z · EA · GW

You have a philosopher's instinct to reach for the most extreme example, but in general I recommend against that.

There's a pretty simple counterfactual: don't take or promote the pledge.

Comment by Paul_Crowley on Why I'm donating to MIRI this year · 2016-12-02T01:29:47.172Z · EA · GW

I went to a MIRI workshop on decision theory last year. I came away with an understanding of a lot of points of how MIRI approaches these things that I'd have a very hard time writing up. In particular, at the end of the workshop I promised to write up the "Pi-maximising agent" idea and how it plays into MIRI's thinking. I can describe this at a party fairly easily, but I get completely lost trying to turn it into a writeup. I don't remember other things quite as well (e.g. "playing chicken with the Universe"), but they have the same feel. An awful lot of what MIRI knows seems to me to be folklore like this.

Comment by Paul_Crowley on Concerns with Intentional Insights · 2016-10-24T15:24:04.810Z · EA · GW

I think being too nice is a failure mode worth worrying about, and your points are well taken. On the other hand, it seems plausible to me that it does a more effective job of convincing the reader that Gleb is bad news precisely by demonstrating that this is the picture you get when all reasonable charity is extended.

Comment by Paul_Crowley on Ideas for Future Effective Altruism Conferences: Open Thread · 2016-08-13T03:54:42.251Z · EA · GW

I strongly suspect that the group photo is of very high value in getting people to go, making them feel good about having gone, and making others feel good about the conference. However, it sounds like optimizing the process to shave a few minutes off would be pretty high value.

Comment by Paul_Crowley on Beware surprising and suspicious convergence · 2016-01-25T11:02:36.217Z · EA · GW

What is remarkable about this, of course, is the recognition of the need to address it.

Comment by Paul_Crowley on Saying 'AI safety research is a Pascal's Mugging' isn't a strong response · 2015-12-16T08:42:02.937Z · EA · GW

I agree with your second point but not your first. Also it's possible you mean "optimistic" in your second point: if x-risks themselves are very small, that's one way for the change in probability as a result of our actions to be very small.

Comment by Paul_Crowley on 2015 EA Survey: please take! · 2015-11-25T13:56:57.936Z · EA · GW

Where the survey says 2014, do you mean 2015?

Comment by Paul_Crowley on Charities I Would Like to See · 2015-09-23T11:11:07.412Z · EA · GW

Yes, I'd treat the ratio of brain masses as a lower bound on the ratio of moral patient-ness.

Comment by Paul_Crowley on Moral Economics in Practice: Musing on Acausal Payments through Donations · 2015-08-15T07:12:58.074Z · EA · GW

Tax complicates this. If I'm in a higher tax band than you, I can make a donation to charity more cheaply than you can, so you will "receive" more than I "give"; the reverse holds if you're in the higher band.
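To make the arithmetic concrete, here is a minimal illustrative sketch, not a real tax calculation: it assumes a simple deduction-style system in which a donation is deductible at the donor's marginal tax rate (real schemes such as UK Gift Aid differ in detail), and the function name and rates are made up for the example.

```python
def out_of_pocket_cost(donation: float, marginal_rate: float) -> float:
    """Net cost to the donor of a donation the charity receives in full,
    assuming the donation is deductible at the donor's marginal rate."""
    return donation * (1 - marginal_rate)

# The same 100-unit donation costs a 40% taxpayer about 60 but a 20%
# taxpayer about 80, so the higher-rate donor "gives" less out of pocket
# for the same amount "received".
print(out_of_pocket_cost(100, 0.40))  # ~60
print(out_of_pocket_cost(100, 0.20))  # ~80
```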

Comment by Paul_Crowley on I am Nate Soares, AMA! · 2015-06-13T13:14:10.606Z · EA · GW

It seems a bit like the question behind the question might be "I'd like to help, but I don't know formal logic; when will that stop being a barrier?" In which case it's worth saying that I'm attending a MIRI decision theory workshop at the moment, and I don't really know formal logic, but it isn't proving too much of a barrier; I can think about the assertion "Suppose PA proves that A implies B" without really understanding exactly what PA is.

Comment by Paul_Crowley on Christmas 2014 Open Thread (Open Thread 7) · 2014-12-22T13:12:43.237Z · EA · GW

Thanks for the encouragement!

I wonder if you can do something with a different kind of disaster? Maybe make it a coach that can get people out of the danger zone? Or is that cheating because people don't want seats to be "wasted"?

Comment by Paul_Crowley on Christmas 2014 Open Thread (Open Thread 7) · 2014-12-17T17:35:52.674Z · EA · GW

I've been trying to work out how to sell EA in the form of a parable; let me illustrate with my current best candidate.

In a post-apocalyptic world, you're helping get the medicine that cures the disease out to the people. You know that there's a truck with the medicine on the way, and it will soon reach a T-junction. The truck doesn't know who is where and its radio is broken; you're powerless to affect what it does, watching with binoculars from far away. If it turns left, it'll be flagged down by a family of four and their lives will be saved. If it turns right, it'll be flagged down by a school where dozens of families with the disease have taken refuge.

Don't you find yourself fervently wishing the truck will turn right? It's not because the family's lives aren't worth saving; they are, and they all deserve to live. But it's clear that the better outcome is that it turn right.

So here are some things I like about this: it's not totally unfair. It's not just a choice between "save A" and "save A and B"; if you make the most effective choice, then some people die whom you could have chosen to save. And, weirdly, I think the reframing in which you can't choose who gets saved but can only will the truck to make the right decision might help people see more clearly; you're not weighed down by guilt over not saving the family or anger at someone making the wrong moral choice, you're just looking at a flip of a coin and discovering how you want it to land.

What I'd like to improve is to somehow make it feel more like an everyday situation rather than such a contrived one.

Any improvements? Does this seem like a useful exercise?

Comment by Paul_Crowley on Open Thread 6 · 2014-12-17T17:18:59.010Z · EA · GW

Yes, please do do a proper post on this with cites etc.; I think this is really valuable!

Comment by Paul_Crowley on Where are you giving and why? · 2014-12-13T07:46:46.056Z · EA · GW

In 2015 I'll be donating 10% of my salary to the Centre for the Study of Existential Risk. CSER is a particularly good giving opportunity right now: with such superb academic bona fides, it has the potential to hugely raise the profile of the study of existential risk and boost the whole field of future-oriented work, so if you are at all moved by the idea of the overwhelming importance of shaping the far future then CSER is well worth considering as a recipient. It's a particularly good cause for me to give to because I'm in the UK, so there are substantial tax advantages.

FHI are also a very appealing cause for UK taxpayers; I've been a donor to them in the past and may shift back to them in the future, depending on how each organisation is looking in terms of room for more funding.

Comment by Paul_Crowley on Tell us about your recent EA activities: thread 2 · 2014-11-29T12:53:10.153Z · EA · GW

Wow, this is amazing! Go you!

Comment by Paul_Crowley on Tell us about your recent EA activities: thread 2 · 2014-11-29T12:52:40.032Z · EA · GW

I finally got around to asking my partner if she would be OK with me sending 10% of my salary to charity (we have shared finances, so my money is hers too) and she said yes right away. I'll start doing that from my December paycheck on. I'm finally an EA! In other news, I made a payment to CSER on Thursday which, after tax and employer gift matching are sorted out, should be worth £6,500; I've been setting the money aside all year while slowly kicking my employer and the University of Cambridge along the gift matching process.

EDIT: I wrote CFAR, but I meant CSER! Fixed.