What ever happened to PETRL (People for the Ethical Treatment of Reinforcement Learners)?

post by matthew.vandermerwe · 2019-12-30T17:28:32.962Z · score: 29 (14 votes) · EA · GW · 1 comment

This is a question post.

AFAICT this little org was briefly active in ~2014-15 but has since been dormant. It seems to have primarily been a website [petrl.org], but it is named as having supported at least one research project [https://arxiv.org/abs/1505.04497].

Answers

answer by JimmyJ · 2019-12-31T19:38:49.806Z · score: 10 (8 votes) · EA(p) · GW(p)

The founders of PETRL include Daniel Filan, Buck Shlegeris, Jan Leike, and Mayank Daswani, all of whom were students of Marcus Hutter. Brian Tomasik coined the name.

Of these five people, four are busy doing AI safety-related research. (Filan is a PhD student involved with CHAI, Shlegeris works for MIRI, Leike works for DeepMind, and Tomasik works for FRI. OTOH, Daswani works for a cybersecurity company in Australia.)

So, my guess is that they became too busy to work on PETRL, and lost interest. It's kind of a shame, because PETRL was (to my knowledge) the only organization focused on the ethics of AI-qua-moral patient. However, it seems pretty plausible to me that the AI safety work the PETRL founders are doing now is more effective.

In July 2017, I emailed PETRL asking them if they were still active:

Dear PETRL team,
Is PETRL still active? The last blog post on your site is from December 2015, and there is no indication of ongoing research or academic outreach projects. Have you considered continuing your interview series? I'm sure you could find interesting people to talk to.

The response I received was:

Thanks for reaching out. We're less active than we'd like to be, but have an interview in the works. We hope to have it out in the next few weeks!

That interview was never published.

comment by Brian_Tomasik · 2020-01-01T18:14:15.224Z · score: 3 (3 votes) · EA(p) · GW(p)

PETRL was (to my knowledge) the only organization focused on the ethics of AI-qua-moral patient

There seems to be a lot of academic and popular discussion about robot rights and machine consciousness, but yeah, I can't name offhand another organization explicitly focused on this topic. (To some degree, Sentience Institute has this as a long-run goal, and many organizations care about it as part of what they work on.)

There's a spoof organization called People for Ethical Treatment of Robots.

Update: I see there's another organization: American Society for the Prevention of Cruelty to Robots. On the FAQ page they say:

Q: Are you serious?

A: The ASPCR is, and will continue to be, exactly as serious as robots are sentient.

comment by matthew.vandermerwe · 2019-12-31T19:54:38.608Z · score: 2 (2 votes) · EA(p) · GW(p)

Thanks! Exactly the information I wanted.

1 comment

comment by Davidmanheim · 2019-12-31T19:38:41.691Z · score: 1 (1 votes) · EA(p) · GW(p)

I'm not sure exactly who was running things, but I assumed the work was related to, or continued by, FRI, given the overlap in the people involved.