Posts

12 Awesome Things You Should Do After EA Global 2015-08-24T10:14:04.287Z · score: 15 (19 votes)
Meetup : TrivEA Night by Effective Altruism UNSW 2015-05-06T01:10:22.473Z · score: 0 (0 votes)
Should You Visit an EA Hub? 2015-04-20T11:15:34.931Z · score: 12 (12 votes)
Meetup : Effective Altruism UNSW Social Night 2015-03-09T08:07:15.578Z · score: 0 (0 votes)
4 Common (Hedonic) Prediction Failures and How to Fix Them 2015-02-11T23:36:13.668Z · score: 3 (3 votes)

Comments

Comment by petermcintyre on Bottlenecks and Solutions for the X-Risk Ecosystem · 2018-10-10T23:21:12.538Z · score: 11 (7 votes) · EA · GW

Thanks for writing this!

Just wanted to let everyone know that at 80,000 Hours we’ve started headhunting for EA orgs and I’m working full-time leading that project. We’re advised by a headhunter from another industry and, as suggested, are attempting to implement executive search best practices.

I've reached out to the email addresses listed above - looking forward to speaking.

Peter

Comment by petermcintyre on Personal thoughts on careers in AI policy and strategy · 2017-09-28T00:56:01.998Z · score: 9 (9 votes) · EA · GW

Great article, thanks Carrick!

If you're an EA who wants to work on AI policy/strategy (including in support roles), you should absolutely get in touch with 80,000 Hours about coaching. We've often been able to help people interested in the area clarify how they can contribute, make introductions, etc.

Apply for coaching here.

Comment by petermcintyre on Cognitive Science/Psychology As a Neglected Approach to AI Safety · 2017-06-22T16:10:14.938Z · score: 1 (1 votes) · EA · GW

We agree these are technical problems, but for most people, all else being equal, it seems more useful to learn ML than cog sci/psych. Caveats:

  1. Personal fit could dominate this equation though, so I'd be excited about people tackling AI safety from a variety of fields.
  2. It's an equilibrium: the more people are already attacking a problem with one toolkit, the more we should send people to learn other toolkits to attack it.

Comment by petermcintyre on Cognitive Science/Psychology As a Neglected Approach to AI Safety · 2017-06-20T18:58:24.979Z · score: 1 (1 votes) · EA · GW

Hi Kaj,

Thanks for writing this. Since you mention some 80,000 Hours content, I thought I’d respond briefly with our perspective.

We had intended the career review and AI safety syllabus to be about what you’d need to do from a technical AI research perspective. I’ve added a note to clarify this.

We agree that there are a lot of approaches you could take to tackle AI risk, but we currently expect that technical AI research will be where a large share of the effort is required. However, we've also advised many people on non-technical routes to impacting AI safety, so we don't think it's the only valid path by any means.

We’re planning to release other guides and paths for non-technical approaches, such as the AI safety policy career guide, which also recommends studying political science and public policy, law, and ethics, among other fields.

Comment by petermcintyre on EA Facebook New Member Report · 2015-07-26T23:21:11.316Z · score: 5 (5 votes) · EA · GW

Thanks for writing this up! It's very useful to be able to compare this to census data. Did you use the same or a similar message for everyone? If so, I'd be interested to see what it was. It would also be useful to A/B test this sort of message to refine it. There is also the option to add people manually, bypassing the need for admin approval; did you contact these people too?

Comment by petermcintyre on You Could be the Warren Buffett of Social Investing · 2015-06-03T01:42:51.922Z · score: 1 (1 votes) · EA · GW

Hi Eric, thanks for writing these and pointing us to them. I think this is a great idea. I just posted these on our business society and law society Facebook page to test the waters and see what response we'd get from a similar input. Out of interest, what response have you gotten so far?

Comment by petermcintyre on Request for Feedback: Researching global poverty interventions with the intention of founding a charity. · 2015-05-06T23:19:46.569Z · score: 3 (3 votes) · EA · GW

Thanks for posting this. I think explicitly asking for critical feedback is very useful.

"If the intervention is not currently supported by a large body of research then we want to fund/carry out a randomized controlled trial to test whether it’s worth pursuing this intervention."

RCTs are seriously expensive, would take years to produce meaningful data, and would need to be replicated before you could put much faith in the results. Running one also wouldn't align with the core skillset I'd imagine you'd need to start an organisation (so you'd need to outsource it, which would increase the costs even more). As Ryan said, it might be more useful to aim to be recommended by OPP, or to search for another kind of EA market inefficiency. Your other idea of finding supportable but neglected interventions and carrying them out sounds pretty useful, though.

Comment by petermcintyre on Best way to invest with leverage? · 2015-04-02T13:40:00.600Z · score: 1 (1 votes) · EA · GW

If I remember correctly, CEA et al. decided against pursuing this strategy due to risk aversion. Given the large downsides, which may be unique to EA, it's not clear - to me at least - that our personal strategy should differ from theirs. I'd be interested in seeing some more thoughts on this.

Comment by petermcintyre on April Open Thread · 2015-04-02T13:34:00.507Z · score: 0 (0 votes) · EA · GW

You've probably considered it, but it's not on your list: to hedge against changes in meat consumption, you could invest in in vitro meat and other meat substitutes.

Comment by petermcintyre on The Outside Critics of Effective Altruism · 2015-01-10T12:27:05.667Z · score: 1 (1 votes) · EA · GW

I think one of my concerns with this would be the consistency and commitment effect created by incentivising criticism, which could lead to someone seeing herself as an EA critic, or as opposed to these ideas. This is similar to companies offering rewards to customers for writing about why it's their favourite company or product in the world. See also the American prisoners of war held by China in the Korean War (I think), who were given small incentives to write criticisms of America or capitalism. If this were being seriously considered, it'd be good to see more work done to figure out whether it would be a real consequence.

Source: Influence, Cialdini.