EA Hotel fundraiser 4: concrete outputs after 10 months

post by EA Hotel · 2019-03-30T19:54:48.237Z · score: 49 (30 votes) · EA · GW · 11 comments

Contents

  Total expenses as of March 2019
  Outputs as of March 2019
      Summary
    AI Safety related
    Rationality or community building related
    Global health related
    Animal welfare related
  Our Ask
11 comments

Due to popular demand [EA · GW], we're publishing a list of concrete outputs that hotel residents have produced during their stay.

Note that interpreting this list comes with some caveats:

Most of the data is in. We will keep an up-to-date version of this post live at eahotel.org/outputs.

Total expenses as of March 2019

Money: So far ~£66,500* has been spent on hosting our residents, of which ~£7,600 was contributed by residents.

Time: ~4,000 person-days spent at the hotel.

Outputs as of March 2019

Summary

Anonymous 1:
Earned a 3-month work trial at a prominent X-risk organisation

RAISE:
(context)
Nearly the entirety of this online course was created at the hotel

Linda Linsefors:
Posts on the alignment forum:
Optimization Regularization through Time Penalty (12)
The Game Theory of Blackmail (24)

Chris Leong:
“I’ve still got a few more posts on infinity to write up, but here’s the posts I’ve made on LessWrong since arriving [with estimates of how likely they were to be written had I not been at the hotel]:
Summary: Surreal Decisions [50%] (27)
An Extensive Categorisation of Infinite Paradoxes [80%] (-4)
On Disingenuity [50%] (34)
On Abstract Systems [50%] (14)
Deconfusing Logical Counterfactuals [75%] (18)
Debate AI and the Decision to Release an AI [90%] (8)

John Maxwell:
Courses taken:
Improving Your Statistical Inferences (21 hours)
MITx Probability
Statistical Learning
Formal Software Verification
ARIMA Modeling with R
Introduction to Recommender Systems (20-48 hours)
Text Mining and Analytics
Introduction to Time Series Analysis
Regression Models

Anonymous 2:
Courses:
Probabilistic Graphical Models
Model Thinking
MITx Probability
LessWrong posts:
Annihilating aliens & Rare Earth suggest early filter (8)
Believing others’ priors (9)
AI development incentive gradients are not uniformly terrible (23)
EA Forum post:
Should donor lottery winners write reports? [EA · GW] (29)

Retreats hosted:

Denisa Pop:
Helped organise the EA Values-to-Actions Retreat
Helped organise the EA Community Health Unconference

Toon Alfrink:
EA forum posts:
EA is vetting-constrained [EA · GW] (96)
The Home Base of EA [EA · GW] (12)
Task Y: representing EA in your field [EA · GW] (11)
LessWrong posts:
We can all be high status (61)
The housekeeper (26)
What makes a good culture? (30)

Matt Goldenberg:
The entirety of Project Metis
Posts on LessWrong:
The 3 Books Technique for Learning a New Skill (125)
A Framework for Internal Debugging (20)
S-Curves for Trend Forecasting (87)
What Vibing Feels Like (9)
How to Understand and Mitigate Risk (47)

Derek Foster: Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness and Well-Being Policy Report published by the Global Happiness Council.
Hired as a research analyst for Rethink Priorities.

Max Carpendale:
Posts on the EA Forum:
The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [EA · GW] (28)
Sharks probably do feel pain: a reply to Michael Tye and others [EA · GW] (19)
Why I’m focusing on invertebrate sentience [EA · GW] (48)

Frederik Bechtold:
Received an (unpaid) internship at Animal Ethics.

Saulius Šimčikas:
Posts on the EA Forum:
Rodents farmed for pet snake food [EA · GW] (64)
Will companies meet their animal welfare commitments? [EA · GW] (109; winner of 3rd place EA Forum Prize for Feb 2019 [EA · GW])

Magnus Vinding:
Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique (Probability it would have been written otherwise: 99 percent).
Revising journal paper for Between the Species. (Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel.)
I got the idea to write the book I’m currently writing (“Suffering-Focused Ethics”) (50 percent)

Our Ask

Do you like this initiative, and want to see it continue for longer, on a more stable footing? Do you want to cheaply buy full-time work on EA-related projects, while facilitating a thriving EA community hub? Do you want to see more work in the same vein as the above? Then we would like to ask for your support.

We are very low on runway. Our current shortfall is ~£4k/month from July onward.

To donate, please see our GoFundMe or PayPal Money Pool, or get in touch if you'd like to make a direct bank transfer (which will save ~3% on fees and up to 3% on currency conversion if using a service like Revolut or Transferwise).

If you’d like to give regular support, we also have a Patreon.

Previous posts in this series:

Written by others:

*This is the total cost of the project to date (30 March 2019), not including the purchase of the building (£132,276.95, including building survey and conveyancing).

11 comments

Comments sorted by top scores.

comment by Peter_Hurford · 2019-03-31T03:14:47.200Z · score: 19 (11 votes) · EA · GW

(BTW, note that Saulius, like Derek, is an employee of Rethink Charity, and wrote those two posts while working with us.)

comment by RomeoStevens · 2019-04-17T21:34:17.146Z · score: 5 (3 votes) · EA · GW

This would seem a solid enough record to be in the ballpark of other exploratory grants being made. Is the EA Hotel applying to the various funds for a grant?

comment by Milan_Griffes · 2019-04-17T21:48:59.847Z · score: 5 (3 votes) · EA · GW

They applied to the Long Term Future Fund [EA · GW] April 2019 grant round, and were rejected.

comment by Habryka · 2019-04-17T22:39:34.826Z · score: 20 (6 votes) · EA · GW

(This is just my personal perspective and does not aim to reflect the opinions of anyone else on the LTF-Fund)

I am planning to send more feedback on this to the EA Hotel people.

I have actually broadly come around to the EA Hotel being a good idea, but at the time we made the grant decision there was a lot less evidence and there were fewer writeups around; it was those writeups by a variety of people that convinced me it is likely a good idea, with some caveats.

comment by Milan_Griffes · 2019-04-17T22:42:02.830Z · score: 2 (1 votes) · EA · GW

What's the probability of out-of-cycle grant consideration for the EA Hotel, given that they're in a funding crunch and you've broadly come around to thinking that it's a good idea?

comment by Habryka · 2019-04-18T03:41:55.038Z · score: 6 (3 votes) · EA · GW

I think sadly pretty low, based on my current model of the time constraints of everyone, and also CEA logistical constraints.

comment by Milan_Griffes · 2019-04-18T05:54:56.685Z · score: 2 (1 votes) · EA · GW

Maybe EA Grants will be up-and-running in time to make a difference? (I'll check with Nicole.)

comment by Milan_Griffes · 2019-04-18T05:47:07.548Z · score: 2 (1 votes) · EA · GW

Ah, sad.

comment by toonalfrink · 2019-04-21T15:07:51.839Z · score: 3 (2 votes) · EA · GW

FWIW, I personally give it >75% probability that we will be able to survive at least until the next round.

comment by Khorton · 2019-03-31T00:13:45.279Z · score: 1 (6 votes) · EA · GW

It would be helpful if you could also list how long each person stayed at the hotel. Some of these people seem to have produced very little for 10 months, but maybe they were only there for one month!

comment by toonalfrink · 2019-03-31T01:55:43.840Z · score: 18 (8 votes) · EA · GW

I would rather not. This would pressure people into goodharting their projects for legibility, which is one of the things our setup is supposed to prevent.

(tl;dr: an agent is legible if a principal can easily monitor them, but legibility limits the agent's options to what is easy for the principal to measure, which might reduce performance)

Quite a few of our guests are not even on this list, but this doesn't mean they're sitting around doing nothing all day. They're doing illegible work that is hard or even impossible to evaluate at a distance. I put a few examples in the second caveat of the post.

(I realise this is at odds with the EA maxim of measuring outcomes. That's why we published this post: so the hotel could at least be evaluated in aggregate. I think it's neat that people doing illegible work can hide behind those doing legible work.)