EA Hotel Fundraiser 6: Concrete outputs after 17 months
post by CEEALAR (EA Hotel) · 2019-10-31T21:39:13.256Z · EA · GW · 12 comments
Contents:
- Total expenses as of October 2019
- Summary of concrete outputs, since the Hotel’s inception in May 2018
- New outputs as of October 2019: AI Safety related (Linda Linsefors; Luminita Bogatean; Samuel Knoche), X-Risks related (Markus Salmela), Rationality & Community Building related (Denisa Pop; Matt Goldenberg), Global Health & Development related (Anders Huitfeldt), Animal Welfare related (Max Carpendale; Rhys Southan)
- Events made possible by the EA Hotel
- Our Ask
We have compiled a list of concrete outputs that EA Hotel residents have produced during their stays.
Like last time [EA · GW], we note that interpreting this list comes with some caveats:
- While the output per dollar is high, this might not be the best metric. A better metric might be the marginal output of a donation to the Hotel divided by the marginal output of a counterfactual donation elsewhere [EA · GW], but this is hard to estimate. As a proxy, we suggest looking at output per person-day. Does this seem high compared to that of an average EA?
- This list doesn't cover everything of value that has been supported by the Hotel. Some residents have spent months working on things they haven't published yet. Some have spent most of their time doing self-therapy, reading books or having a lot of research conversations. Some have been developing presentations and workshops. The Hotel is in a unique position to support this kind of hard-to-verify work. To increase transparency, guests are encouraged to share informal progress updates (presented in embedded Google Docs on our website), to give outsiders an idea of what they are doing with their time.
- Outputs are presented along with their counterfactual likelihood of happening without the Hotel. This gives an idea of the value the Hotel is adding, especially considering most residents would have been doing less in the way of EA-focused work without the Hotel. However, it’s worth considering the possibility that residents may have done things of equal or higher value otherwise. In the future, we will go further and ask residents to estimate the counterfactual impact of the work they have done at the Hotel relative to the work they expect they would have done otherwise. We will touch on this in the next post in this series, where we take a look at resident case studies.
An up-to-date, live version of the list of outputs can be found at eahotel.org/outputs.
Total expenses as of October 2019
Money: So far ~£110,400* has been spent on hosting our residents, of which ~£17,900 was contributed by residents.
Time: ~7,600 person-days spent at the Hotel.
Summary of concrete outputs, since the Hotel’s inception in May 2018
- The incubation of 3 EA projects with potential for scaling
- 18 online course modules followed
- 2.5 online course modules produced
- 46 posts on LessWrong and the EA Forum (with a total of ~1500 karma)
- 2 papers published
- 3 AI Safety events, 1 rationality workshop, 2 EA retreats hosted; and 2 EA retreats organised
- 2 internships and 2 jobs earned at EA organisations
For the full list see the dedicated Hotel page here.
New outputs as of October 2019
Here we present new outputs since our last summary [EA · GW]; those from April-October 2019.
Expenses for this period:
Money: ~£43,900 has been spent on hosting our residents, of which ~£10,300 was contributed by residents.
Time: ~3,600 person-days spent at the Hotel.
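As a rough sense of scale, the expense and person-day figures above imply a cost per person-day, a minimal sketch of the proxy metric suggested earlier (figures are the gross totals from this post, before netting out resident contributions):

```python
# Rough cost per person-day implied by the expense figures in this post.
# All figures (GBP totals, person-days) are taken directly from the post.

def cost_per_person_day(total_gbp: float, person_days: float) -> float:
    """Gross hosting cost divided by person-days spent at the Hotel."""
    return round(total_gbp / person_days, 2)

# Since inception (May 2018 to October 2019)
print(cost_per_person_day(110_400, 7_600))  # ~14.53 GBP per person-day

# April to October 2019 only
print(cost_per_person_day(43_900, 3_600))   # ~12.19 GBP per person-day
```

On these numbers, a fully supported resident costs on the order of £15/day, which is the figure to weigh against the per-person output listed below.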
Format: Title with link [C] (K)
C = counterfactual likelihood of happening without the EA Hotel.
K = karma on the EA Forum, LessWrong, or (LessWrong; Alignment Forum) [correct as of 8th Nov 2019].
AI Safety related
Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
Organized the AI Safety Learning By Doing Workshop (August and October 2019)
Organized the AI Safety Technical Unconference (August 2019)
Distance Functions are Hard [AF · GW] [50%] (40; 14)
What are concrete examples of potential “lock-in” in AI research? [AF · GW] [1%] (17; 9)
Non-anthropically, what makes us think human-level intelligence is possible? [LW · GW] (10)
The Moral Circle is not a Circle [EA · GW] [1%] (25)
Cognitive Dissonance and Veg*nism [EA · GW] (7)
What are we assuming about utility functions? [AF · GW] [1%] (17; 8)
8 AIS ideas [AF(p) · GW(p)] [1%] (39)
Courses: Python Programming: A Concise Introduction [20%]
Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
Code for lightweight Python deep learning library [5%]
Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
Defined a recruitment plan for a researcher-writer role and publicized a job ad [90%]
Organizing AI Strategy and X-Risk Unconference (AIXSU [EA · GW]) [1%]
Rationality & Community Building related
Researching and developing presentations and workshops in Rational Compassion: see How we might save the world by becoming super-dogs [0%]
Becoming Interim Community Manager at the Hotel and offering residents counseling/coaching sessions (productivity & well-being) [0%]
Organizer and instructor for the Athena Rationality Workshop (June 2019) [LW · GW]
Global Health & Development related
Scientific Article: Huitfeldt, A., Swanson, S. A., Stensrud, M. J., & Suzuki, E. (2019). Effect heterogeneity and variable selection for standardizing causal effects to a target population. European Journal of Epidemiology. https://doi.org/10.1007/s10654-019-00571-w
Post on EA Forum: Effect heterogeneity and external validity [EA · GW] (6)
Post on LessWrong: Effect heterogeneity and external validity in medicine [LW · GW] (43)
Distinction in MU123 and MST124 (Mathematics Modules) via the Open University.
Completed ‘Justice’ (Harvard MOOC; Verified Certificate).
Completed Units 1 (Introduction) and 2 (Mathematical Typesetting) of MST125 (Pure Maths module), The Open University.
Completed Unit 1, M140 (Statistics), The Open University.
Completed Week 1, GV100 (Intro to Political Theory), London School of Economics [Auditing module].
Animal Welfare related
Posts on the EA Forum:
Interview with Jon Mallatt about invertebrate consciousness [EA · GW] [50%] (81; winner of 1st place EA Forum Prize for Apr 2019 [EA · GW])
My recommendations for RSI treatment [EA · GW] [25%] (60)
Thoughts on the welfare of farmed insects [EA · GW] [50%] (31)
Interview with Shelley Adamo about invertebrate consciousness [EA · GW] [50%] (37)
My recommendations for gratitude exercises [LW · GW] [50%] (39)
Interview with Michael Tye about invertebrate consciousness [EA · GW] [50%] (32)
Got a research position (part-time) at Animal Ethics [25%]
EA Forum Comment Prize ($50), July 2019 [EA · GW], for “comments on the impact of corporate cage-free campaigns [EA(p) · GW(p)]” (11)
Edited and partially re-wrote a book on meat, treatment of farmed animals, and alternatives to factory farming, earning enough money to start paying rent at the Hotel. Neither the book nor its author can be named yet, as a non-disclosure agreement was signed, but it will be verifiable. [70%]
Wrote an academic philosophy essay about a problem for David Benatar’s pessimism about life and death, and submitted it to an academic journal. It is currently awaiting scores from reviewers. [10%]
Recently secured a paid job writing an index for a book about death and dying by moral philosopher Frances Kamm, and will use the money to continue paying rent to the hotel. [20%]
Events made possible by the EA Hotel
(Note that these already appear in the above list under the main organizer/lecturer's name)
Athena Rationality Workshop (June 2019) (retrospective [LW · GW])
AI Safety Learning By Doing Workshop (August and October 2019)
AI Safety Technical Unconference (August 2019) (retrospective [LW · GW] written by a participant)
Do you like this initiative, and want to see it continue for longer, on a more stable footing? Do you want to cheaply buy time spent working full-time on work relating to EA, whilst simultaneously facilitating a thriving EA community hub? Do you want to see more work in the same vein as the above? Then we would like to ask for your support.
We are very low on runway. Our current shortfall is ~£5k/month from January onward (thanks to the generous donors who donated in the last week!). See the Hotel’s fundraiser page for more details.
To donate, please use our PayPal MoneyPool (0% fees, ~2% currency conversion losses) or GoFundMe (2.9% fees). If you’d like to give regular support, we also have a Patreon (10%+ fees/losses for non-USD donors!). Contact us to donate directly via bank transfer and save on fees/currency conversion (Revolut and TransferWise are good options with 0-1% losses in fees and currency exchange).
For an added bonus equivalent to tax-deductibility, we are on Effective Altruist Donation Swap (in the Meta section). It should be possible to get your donation swapped with someone from a country that doesn’t offer tax deductibility to any charity.
Previous posts in this series:
- EA Hotel Fundraiser 1: The story [EA · GW]
- EA Hotel Fundraiser 2: Current guests and their projects [EA · GW]
- EA Hotel Fundraiser 3: Estimating the relative Expected Value of the EA Hotel (Part 1) [EA · GW]
- EA Hotel Fundraiser 4: Concrete outputs after 10 months [EA · GW]
- EA Hotel Fundraiser 5: Out of runway! [EA · GW]
See also the pitch document we’ve sent to potential funders.
Written by others:
- The Case for the EA Hotel [EA · GW] (winner of 3rd place EA Forum Prize for Mar 2019 [EA · GW])
- My Q1 2019 EA Hotel donation [EA · GW]
*This is the total cost of the project to date (31st October 2019), not including the purchase of the building (£132,276.95 including building survey and conveyancing).
Comments (sorted by top scores)
comment by Jan_Kulveit · 2019-11-05T12:26:57.608Z · EA(p) · GW(p)
meta: I considered commenting, but instead I'm just flagging that I find it somewhat hard to have an open discussion about the EA hotel on the EA forum in the fundraising context. The feeling part is
- there is a lot of emotional investment in EA hotel,
- it seems that if the hotel runs out of runway, for some people it could mean basically losing their home.
Overall my impression is that posting critical comments would be somewhat antisocial, while posting just positives or endorsements is against good epistemics, so the personally safest thing for many is to not say anything.
At the same time it is blatantly obvious there must be some scepticism about both the project and the outputs: the situation where the hotel seems to be almost out of runway keeps repeating. While e.g. EA Funds collects donations basically in the millions of $ per year, EA hotel struggles to collect low tens of $.
I think this equilibrium where
- people are mostly silent but also mostly not supporting the hotel, at least financially
- the financial situation of the project is somewhat dire
- talks with EA Grants and the EA Long Term Future Fund are in progress but the funders are not funding the project yet
is not good for anyone, and has some bad effects on the broader community. I'd be interested in ideas on how to move out of this state.
↑ comment by Stefan_Schubert · 2019-11-05T12:41:17.226Z · EA(p) · GW(p)
I agree that the epistemic dynamics of discussions about the EA Hotel aren't optimal. I would guess that there are selection effects; that critics aren't heard to the same extent as supporters.
Relatedly, the amount of discussion about the EA Hotel relative to other projects may be a bit disproportionate. It's a relatively small project, but there are lots of posts about it (see OP). By contrast, there is far less discussion about larger EA orgs, large OpenPhil grants, etc. That seems a bit askew to my mind. One might wonder about the cost-effectiveness of relatively long discussions about small donations, given opportunity costs.
↑ comment by Gregory Lewis (Gregory_Lewis) · 2019-11-06T16:22:28.185Z · EA(p) · GW(p)
In fairness, a lot of the larger grants/projects are not seeking funding from smaller donors, so discussing (e.g.) OpenPhil's latest grants may not be hugely action relevant.
I'd also guess that some critics may not be saying much not because they're put off by sounding mean, but rather their critical view arises from their impression of the existing evidence/considerations rather than from something novel to the existing discussion. If (e.g.) one believes the hotel has performed poorly in terms of outputs given inputs it seems unnecessary to offer that as commentary: folks (and potential donors) can read the OP and related documents themselves and come to their own conclusion.
↑ comment by Greg_Colbourn · 2019-11-06T13:24:01.022Z · EA(p) · GW(p)
Flagging that there has been a post specifically soliciting reasons against donating to the EA Hotel:
$100 Prize to Best Argument Against Donating to the EA Hotel [EA · GW]
And also a Question which solicited critical responses:
Why is the EA Hotel having trouble fundraising? [EA · GW]
I agree that the "equilibrium" you describe is not great, except I don't think it is an equilibrium; more that, due to various factors, things have been moving slower than they ideally should have.
EA hotel struggles to collect low tens of $
I'm guessing you meant tens-of-thousands. It's actually mid-tens-of-thousands of $: £44.2k (~$57k), from 69 unique donors, as of writing (not counting the money I've put in).
↑ comment by Greg_Colbourn · 2019-11-06T13:35:44.961Z · EA(p) · GW(p)
Regarding emotional investment, I agree that there is a substantial amount of it in the EA Hotel. But I don't think there is significantly more than there is for any new EA project that several people put a lot of time and effort into. And for many people, not being able to do the work they want to do (i.e. not getting funded/paid to do it) is at least as significant as not being able to live where they want to live.
Still, you're right that critical comments can (often) be perceived as antisocial. I think this partly explains why EA is considered by new people/outsiders to not be so welcoming.
↑ comment by RomeoStevens · 2019-11-06T02:27:32.422Z · EA(p) · GW(p)
Thanks for fleshing this out.
comment by Milan_Griffes · 2019-12-07T01:41:36.297Z · EA(p) · GW(p)
From the recent MIRI fundraising post [EA · GW]:
Rafe Kennedy, who joins MIRI after working as an independent existential risk researcher at the Effective Altruism Hotel. Rafe previously worked at the data science startup NStack, and he holds an MPhysPhil from the University of Oxford in Physics & Philosophy.
Seems like a promising output of EA Hotel!
comment by Aaron Gertler (aarongertler) · 2019-11-06T01:54:11.201Z · EA(p) · GW(p)
I appreciate the amount of organization that went into this post, as well as the counterfactual confidence figures and karma scores (there are a few missing scores, but given that you can see details about internally linked Forum posts by hovering, that's not too important).
I'm not a donor and I don't expect to donate in the future, so my opinion doesn't much matter here, but I'd be interested to learn more about what Hotel occupants think they'd be doing if they weren't living there. PhD research on semi-related topics? Working random jobs to pay rent and getting nothing "productive" done at all? Working on the same projects, but going into debt or drawing down savings to fund them?
↑ comment by CEEALAR (EA Hotel) · 2019-11-06T13:44:30.983Z · EA(p) · GW(p)
Our next post will include some case studies with counterfactuals.
↑ comment by CEEALAR (EA Hotel) · 2019-11-08T14:03:42.850Z · EA(p) · GW(p)
Missing karma scores added (and the rest updated for consistency - scores correct as of 8th Nov 2019).