Posts

EA Hotel Fundraiser 2: Current guests and their projects 2019-02-04T20:41:18.823Z
EA Hotel with free accommodation and board for two years 2018-06-04T18:09:09.845Z

Comments

Comment by Greg_Colbourn on A ranked list of all EA-relevant documentaries, movies, and TV series I've watched · 2021-03-28T22:40:01.253Z · EA · GW

Started watching Next. Think it's great and will recommend people watch it if they want to understand what the big deal is with AI safety/alignment. However, it's frustrating for UK viewers - Episodes 1-3 are available on Disney+, and Episodes 6-10 are available elsewhere, but where are Episodes 4 & 5!? Will try YouTube TV with a VPN...

Comment by Greg_Colbourn on A ranked list of all EA-relevant documentaries, movies, and TV series I've watched · 2021-03-28T22:33:45.374Z · EA · GW

I thought Seaspiracy was great - I started watching it without realising what it was, and it started with the filmmaker wanting to make a documentary about the oceans, then getting concerned about plastic waste (e.g. straws and bottles), and then it just kept going as he went down the rabbit hole. Seemed like a very EA kind of progression :)

Comment by Greg_Colbourn on Why do so few EAs and Rationalists have children? · 2021-03-17T19:51:49.356Z · EA · GW

I realise that. But I wouldn't be surprised if the median household in the developed world had at least one spare room (this was one of the reasons why the "bedroom tax" was so unpopular in the UK).

Comment by Greg_Colbourn on Opportunity for EA orgs: $5k/year in ETH (tech setup required) · 2021-03-17T11:46:04.646Z · EA · GW

Great, thanks. Do they accept UK charities? We at CEEALAR are potentially interested.

Comment by Greg_Colbourn on Why do so few EAs and Rationalists have children? · 2021-03-16T15:59:30.747Z · EA · GW

Great to see the write-up of expenditure!

Housing: this is kind of pretend, because we actually built two extra rooms onto our house, which had a high up-front cost but will be useful for years and will eventually make the house sell for more. I’m instead substituting the cost at which we currently rent our spare bedroom ($900/month times 2 bedrooms)

I think it's unusual for people to rent out their spare rooms, and it's good that you have done so and provided yourselves with more income/reduced your living costs. By that metric I imagine that many people (especially home owners) have higher housing costs than they think. Maybe EAs are more likely to think about this and maximise the efficiency of their housing. But at the limit, every loft, basement and garage not converted is counterfactual lost earnings. Or, indeed, you could say that real estate investment in general is profitable, and people should do more of it. But then so are other things. So any profits "left on the table" through suboptimally investing money are also potential "costs"... (and here things get tricky in determining what the optimal investments are; at that point we're pretty much back to the foundation of EA: the optimal allocation of resources).
 

[As I've said elsewhere in this thread, I don't think children are a special case of expensive. They are one of several things that can be expensive (see also: location, career choice, suboptimal investment, tastes, hobbies), and for most people, who aren't already maximising their financial efficiency (frugality; investments), it's a matter of prioritisation as to the relative expense of having them.]

Comment by Greg_Colbourn on Why do so few EAs and Rationalists have children? · 2021-03-16T11:26:53.255Z · EA · GW

why not do so without kids and get roommates to save costs instead? Or rent a smaller place in Manchester?

Indeed. I would recommend that for anyone trying to be frugal so they can save/donate more (especially if they can work remotely). My point is, however, that unless you are already living a maximally frugal lifestyle, it's possible to reduce your living costs in other areas such that having children needn't be financially expensive. Children aren't necessarily a special case of "expensive living costs". It's ultimately a matter of prioritisation.

Comment by Greg_Colbourn on Opportunity for EA orgs: $5k/year in ETH (tech setup required) · 2021-03-15T12:32:18.243Z · EA · GW

Who is behind it? I can't see any names attached, and a charity would usually need names for due diligence before being able to accept donations of that size. I guess it's OK if they want public anonymity, provided they will reveal names privately.

Comment by Greg_Colbourn on Why do so few EAs and Rationalists have children? · 2021-03-15T10:24:54.103Z · EA · GW

Yes, it's ultimately a matter of prioritisation. My point is that it doesn't necessarily have to be expensive, so cost needn't be the overriding factor in deciding whether to have children or not.

Comment by Greg_Colbourn on Why do so few EAs and Rationalists have children? · 2021-03-15T08:49:44.693Z · EA · GW

Childcare and babysitting are ~1/4 of the total cost. This could be much reduced with a parent working from home (so no before- or after-school clubs/childminding needed), and/or living with extended family and friends on hand.

Comment by Greg_Colbourn on Why do so few EAs and Rationalists have children? · 2021-03-15T08:45:51.147Z · EA · GW

~1/3 of that cost is education, at £74k, which I think is mostly unreasonable to include. It counts university (where the cost is mostly borne by loans taken out by the student that are effectively a graduate tax; and which, arguably, given all the free material available online now, isn't strictly necessary for a lot of careers apart from its signalling value) and school lunches, which they would eat regardless (although it would be fair to include these if they were deducted from the food budget, which seems quite reasonable).

Comment by Greg_Colbourn on Why do so few EAs and Rationalists have children? · 2021-03-15T08:36:57.938Z · EA · GW

Re: location, rooms and vehicle - I guess it depends on how high your standards are (and the extra cost could be net zero in % terms if you are willing and able to move to a cheaper location*). The median family in the developed world isn't especially rich (£30k/yr household income in the UK), yet has a decent standard of living by most measures, and most children in such families are not impoverished.

Re: time - I think raising kids can help with preventing burnout from intellectually demanding work (and also, in this vein, with guarding against value drift via burnout, i.e. keeping EA from taking over your life entirely, to detrimental effect). Although yes, sleep/relaxation can take a hit in the early years.

*You can rent a 3-bedroom house with a garden in Manchester (the UK's 2nd largest/most productive city) for the cost of a double room in a shared house in London.

Comment by Greg_Colbourn on Why do so few EAs and Rationalists have children? · 2021-03-14T21:40:15.735Z · EA · GW

I don't think having kids needs to be expensive. At least in the UK, schools and healthcare are free. Food and clothing are cheap. Their addition to bills isn't much in percentage terms. Toys are cheap.

The biggest expense is probably the time investment, but parenting is a different kind of "work" to the professional work that most EAs do, so I don't think it necessarily comes from the same time budget (unless maybe the counterfactual is working 60+ hour weeks on EA).

Comment by Greg_Colbourn on Why SENS makes sense · 2021-02-26T20:25:50.990Z · EA · GW

Looking for the transaction on the Ethereum blockchain associated with Vitalik's $2.4M SENS donation (if there is one). Can anyone link to it?

Comment by Greg_Colbourn on A case against strong longtermism · 2020-12-21T15:01:57.950Z · EA · GW

"while it might be pretty hard to predict whether AI risk is going to be a big deal by whatever measure, I can still be fairly certain that the sun will exist in a 1000 years"

These two things are correlated.

Comment by Greg_Colbourn on A case against strong longtermism · 2020-12-18T13:51:07.502Z · EA · GW

This [The ergodicity problem in economics] seems like it could be important, and might fit in somewhere with the discussions of expected utility. I haven't really got my head around it though.

Starting with $100, your bankroll increases 50% every time you flip heads. But if the coin lands on tails, you lose 40% of your total. Since you’re just as likely to flip heads as tails, it would appear that you should, on average, come out ahead if you played enough times because your potential payoff each time is greater than your potential loss. In economics jargon, the expected utility is positive, so one might assume that taking the bet is a no-brainer.

Yet in real life, people routinely decline the bet. Paradoxes like these are often used to highlight irrationality or human bias in decision making. But to Peters, it’s simply because people understand it’s a bad deal.

Here’s why. Suppose in the same game, heads came up half the time. Instead of getting fatter, your $100 bankroll would actually be down to $59 after 10 coin flips. It doesn’t matter whether you land on heads the first five times, the last five times or any other combination in between.
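A minimal simulation sketch of my own (not from the article) to make the multiplicative dynamics concrete: the ensemble average grows at 5% per flip, but the time-average growth factor per flip is sqrt(1.5 × 0.6) ≈ 0.95 < 1, so a typical trajectory shrinks:

```python
import random

def play(flips=10, bankroll=100.0):
    """Multiply the bankroll by 1.5 on heads, 0.6 on tails."""
    for _ in range(flips):
        bankroll *= 1.5 if random.random() < 0.5 else 0.6
    return bankroll

trials = sorted(play() for _ in range(100_000))
mean = sum(trials) / len(trials)   # ensemble average
median = trials[len(trials) // 2]  # typical player
# Mean growth per flip is 0.5*1.5 + 0.5*0.6 = 1.05, so the mean bankroll
# is ~100 * 1.05**10 ≈ $163. But the median outcome is the 5-heads,
# 5-tails path: 100 * 1.5**5 * 0.6**5 ≈ $59. Most players lose.
print(f"mean: ${mean:.0f}, median: ${median:.0f}")
```

The mean is dragged up by a tiny fraction of very lucky runs; the median player ends up near the $59 from the article's example.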

Comment by Greg_Colbourn on Leopold Aschenbrenner returns to X-risk and growth · 2020-10-24T09:38:20.611Z · EA · GW

The idea is that by speeding through you increase risk initially, but the total risk is lower, i.e. a smaller area under the grey curve in the figure he gives.
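A minimal formalisation of this (my notation, not Aschenbrenner's): if h(t) is the existential hazard rate at time t, then

```latex
% Probability of surviving the whole trajectory, given hazard rate h(t):
P(\mathrm{survival}) = \exp\left(-\int_0^\infty h(t)\,\mathrm{d}t\right)
% exp is monotone, so total risk is lower exactly when the area under
% h(t) is smaller, even if h(t) is temporarily higher early on.
```

So "speeding through" can reduce total risk despite raising near-term risk, provided it shrinks the integral.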

I think this probably breaks down if the peak is high enough though (here I'm thinking of AGI x-risk). Aschenbrenner gives the example of:

On the other extreme, humanity is extremely fragile. No matter how high a fraction of our resources we dedicate to safety, we cannot prevent an unrecoverable catastrophe. ...there is nothing we can do regardless. An existential catastrophe is inevitable, and it is impossible for us to survive to reach a grand future.

And argues that

even if there is some probability we do live in this world, to maximize the moral value of the future, we should act as if we live in the other scenarios where a long and flourishing future is possible.

I'm not sure if this applies if there is some possibility of "pulling the curve sideways" to flatten it - i.e. increase the fraction of resources spent on safety whilst keeping consumption (or growth) constant. This seems to be what those concerned with x-risk are doing for the most part (rather than trying to slow down growth).

Comment by Greg_Colbourn on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-08-28T08:16:40.597Z · EA · GW

Here is an argument for how GPT-X might lead to proto-AGI in a more concrete, human-aided way:

...language modelling has one crucial difference from Chess or Go or image classification. Natural language essentially encodes information about the world—the entire world, not just the world of the Goban, in a much more expressive way than any other modality ever could.[1] By harnessing the world model embedded in the language model, it may be possible to build a proto-AGI.

...

This is more a thought experiment than something that’s actually going to happen tomorrow; GPT-3 today just isn’t good enough at world modelling. Also, this method depends heavily on at least one major assumption—that bigger future models will have much better world modelling capabilities—and a bunch of other smaller implicit assumptions. However, this might be the closest thing we ever get to a chance to sound the fire alarm for AGI: there’s now a concrete path to proto-AGI that has a non-negligible chance of working.

Comment by Greg_Colbourn on Super-exponential growth implies that accelerating growth is unimportant in the long run · 2020-08-13T13:13:47.304Z · EA · GW

Nice post! Meta: footnote links are broken, and references to [1] and [2] aren't in the main body.

Also could [8] be referring to this post? It only touches on your point though:

Defensive considerations also suggest that they'd need to maintain substantial activity to watch for and be ready to respond to attacks.

Comment by Greg_Colbourn on EA Hotel Fundraiser 5: Out of runway! · 2020-08-10T08:51:15.613Z · EA · GW

We now have general funding for the next few months and are hiring for both a Community & Projects Manager and an Operations Manager, with input from Nicole and others at CEA. Unfortunately with the winding down of EA Grants the possibility of funding for the Community & Projects Manager salary has gone. If anyone would like to top up the salaries for either the Community & Projects Manager or Operations Manager (currently ~£21.5k/yr pro rata including free accommodation and food), please get in touch!

Comment by Greg_Colbourn on Donor Lottery Debrief · 2020-08-05T12:41:27.122Z · EA · GW

Looking for more projects like these

CEEALAR (formerly the EA Hotel) is looking for funding to cover operations from Jan 2021 onward.

Comment by Greg_Colbourn on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-16T14:55:59.030Z · EA · GW

Sorry if this isn’t as polished as I’d hoped. Still a lot to read and think about, but posting as I won’t have time now to elaborate further before the weekend. Thanks for doing the AMA!

It seems like a crux you have identified is how “sudden emergence” happens. How would a recursive self-improvement feedback loop start? Increasing optimisation capacity is a convergent instrumental goal. But how exactly is that goal reached? To give the most pertinent example - what would the nuts and bolts of it be in an ML system? It’s possible to imagine a sufficiently large pile of linear algebra enabling recursive chain reactions of improvement in both algorithmic efficiency and size (e.g. capturing all global compute -> nanotech -> converting Earth to Computronium). Even more so since GPT-3. But what would the trigger be for setting it off?

Does the above summary of my take on this chime with yours? Do you (or anyone else reading) know of any attempts at articulating such a “nuts-and-bolts” explanation of the “sudden emergence” of AGI in an ML system?

Or maybe there would be no trigger? Maybe a great many arbitrary goals would lead to sufficiently large ML systems brute-force stumbling upon recursive self-improvement as an instrumental goal (or mesa-optimisation)?

Responding to some quotes from the 80,000 Hours podcast:

“It’s not really that surprising I don’t have this wild destructive preference about how they’re arranged, let’s say the atoms in this room. The general principle here is that if you want to try and predict what some future technology will look like, maybe there is some predictive power you get from thinking about X percent of the ways of doing this involve property P. But it’s important to think about where there’s a process by which this technology or artifact will emerge. Is that the sort of process that will be differentially attracted to things which are, let’s say, benign? If so, then maybe that outweighs the fact that most possible designs are not benign.”

What mechanism makes AI attracted to benign things? Surely only human direction? But to my mind the whole Bostrom/Yudkowsky argument is that it FOOMs out of control of humans (and e.g. converts everything into Computronium as a convergent instrumental goal).

“There’s some intuition of just the gap between something that’s going around and let’s say murdering people and using their atoms for engineering projects and something that’s doing whatever it is you want it to be doing seems relatively large.”

This reads like a bit of a strawman. My intuition for the problem of instrumental convergence is that in many take-off scenarios the AI will perform (a lot) more compute, and the way it will do this is by converting all available matter to Computronium (with human-existential collateral damage). From what I’ve read, you don’t directly touch on such scenarios. Would be interested to hear your thoughts on them.

“my impression is that you typically won’t get behaviours which are radically different or that seem like the system’s going for something completely different.”

Whilst you might not typically get radically different behaviours, in the cases where ML systems do fail, they tend to fail catastrophically (in ways that a human never would)! This also fits in with the notion of hidden proxy goals from “mesa optimisers” being a major concern (as well as accurate and sufficient specification of human goals).

Comment by Greg_Colbourn on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-16T11:34:02.834Z · EA · GW

Have you had any responses from Bostrom or Yudkowsky to your critiques?

Comment by Greg_Colbourn on Why I'm Not Vegan · 2020-04-11T15:43:03.800Z · EA · GW

I'm thinking that for me it would be something like 1/100 of a year! Maybe 1/10 tops. And for those such as the OP who think that "there's just no one inside to suffer" - would you risk making such a swap (with a high multiple) if it was somehow magically offered to you?

Comment by Greg_Colbourn on Why I'm Not Vegan · 2020-04-11T15:38:38.110Z · EA · GW

Pretty grim thought experiment - but I wonder: what amount of living as a chicken, or pig, on a factory farm would people trade for a year of extra healthy (human) life?

Assume that you would have the consciousness of the chicken or pig during the experience (memories of your previous life would be limited to what a chicken or pig could comprehend), and that you would have some kind of memory of the experience after (although these would be zero if chickens and pigs aren't sentient). Also assume that you wouldn't lose any time in your real life (say it was run as a very fast simulation, but you subjectively still experienced the time you specified).

Edit: there's another thought experiment along the same lines in MichaelStJules' comment here.

Comment by Greg_Colbourn on Halffull's Shortform · 2020-03-25T15:35:39.881Z · EA · GW

Maybe also that the talk of preventing a depression is an information hazard at this stage of the pandemic, when all-out lockdown is the biggest priority for most of the richest countries. In a few weeks, when the epidemics in the US and Western Europe are under control and lockdown can be eased with massive testing, tracing and isolating of cases, it will make more sense to freely talk about boosting the economy again (in the meantime, we should be calling for governments to take up the slack with stimulus packages, which they seem to be doing already).

Comment by Greg_Colbourn on Illegible impact is still impact · 2020-02-18T15:32:46.134Z · EA · GW
I don't know that this is still or ever really was part of the mission of the EA Hotel (now CEEALAR), but one of the things I really appreciated about it from my fortnight stay there was that it provided a space for EA-aligned folks to work on things without the pressure to produce legible results. This to me seems extremely valuable because I believe many types of impact are quantized such that no impact is legible until a lot of things fall into place and you get a "windfall" of impact all at once

Yes, this was a significant consideration in my founding of the project. We also acknowledge it where we have collated outputs. And whilst we have had a good amount of support (see histogram here), I feel that many potential supporters have been holding back, waiting for the windfall (we have struggled with a short runway over the last year).

Comment by Greg_Colbourn on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-31T14:37:28.439Z · EA · GW

Makes sense from the point of view of killing germs: temperatures that are tolerable for us are also tolerable for germs. My intuition is that it's easier to get dirt (which contains germs) off hands with warmer water (similar to how it's easier to wash dishes with warmer water).

Comment by Greg_Colbourn on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-31T11:02:42.789Z · EA · GW

Alcohol-based hand sanitiser is also good. Often better than hand washing in practice as very few people actually wait for the water to get warm, or spend 20 seconds lathering the soap.

Comment by Greg_Colbourn on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T17:50:06.157Z · EA · GW
Props for putting in the work to keep this organization alive and well. It's a wonderful asset to the EA community. :)

Thanks!

CEALAR

I agree that the extra E is a bit jarring at first (someone else has pointed this out too). I worry that without it it's too similar to CEA though; and the "Enabling" also seems useful in helping to describe what we do.

Comment by Greg_Colbourn on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T17:35:21.673Z · EA · GW

Interesting - you mean this? Worth considering as an alternative name/brand [see Jonas' comment above] a bit down the line.

Comment by Greg_Colbourn on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T16:50:29.020Z · EA · GW
Congratulations for putting in all the time and effort required to get the (former) EA Hotel registered as a proper charity!

Thanks!

poll on an EA Facebook group

We did do this for the initial naming. It seems like a very lengthy process though, looking at the example of FRI. I'll also note that the names that got to the top of the latest poll I've seen (from 14 Dec) don't seem that great (but then my judgement perhaps isn't the best in this area, given the reception so far to "CEEALAR"!)

Comment by Greg_Colbourn on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T14:35:22.778Z · EA · GW

Note we are still offering the same as before, with the caveat that we aren't subsidising people earning-to-give (as was originally envisaged with the EA Hotel). We are still open to short-term visitors.

Comment by Greg_Colbourn on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T11:55:44.315Z · EA · GW

The estimated swine flu fatality rate was ~0.5% in July 2009, with 100,000 cases reported. It ended up dropping by over an order of magnitude.

Comment by Greg_Colbourn on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T09:39:14.638Z · EA · GW

~1/6 of the world population were infected by the 2009 swine flu (mortality rate was much lower though, at ~1/3000 of those infected).
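(A rough scale check of my own, assuming a 2009 world population of ~6.8 billion, a figure not in the original comment:

```latex
% Implied global death toll from the two estimates above:
6.8 \times 10^{9} \times \tfrac{1}{6} \times \tfrac{1}{3000} \approx 3.8 \times 10^{5}\ \text{deaths}
```

which is broadly consistent with published modelled estimates of the 2009 H1N1 death toll.)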

Comment by Greg_Colbourn on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-27T19:34:53.944Z · EA · GW

A better comparison would be to look at the death rate for those infected: ~0.1% for seasonal flu.

Comment by Greg_Colbourn on 8 things I believe about climate change · 2019-12-29T09:19:37.402Z · EA · GW

Just flagging that I posted this comment (the parent) from the wrong account (EA Hotel), should've been from this one! [mods, I don't suppose there is any way of correcting this?]

Comment by Greg_Colbourn on EA Meta Fund November 2019 Payout Report · 2019-12-11T19:00:55.254Z · EA · GW

We asked for feedback on the first rejection.

Comment by Greg_Colbourn on EA Meta Fund November 2019 Payout Report · 2019-12-11T17:28:16.022Z · EA · GW

We didn't apply (although did tick the "Forward my application to the EA Meta fund" box on our application for the October 2019 round of the Long Term Future Fund).

We got rejected in the March 2019 round of the Meta Fund, and didn't receive any feedback.

In July I made an application to the Meta Fund for an "EA Events Hotel", also to be based in Blackpool, UK (a hotel dedicated to workshops/events/retreats/bootcamps, given it's difficult to host many people for events at the EA Hotel in addition to the longer-term residents). This also got rejected without feedback.

Given this situation, we haven't further engaged with the Meta Fund (we've had more engagement with the Long Term Future Fund, despite the Meta Fund being the more natural fit for the EA Hotel).


Comment by Greg_Colbourn on ALLFED 2019 Annual Report and Fundraising Appeal · 2019-11-26T13:02:49.104Z · EA · GW

Great report, and exciting times for ALLFED!

I notice that you don't include peanuts, which are one of the cheapest sources of calories currently widely available. Is this because they require a warm environment?

Comment by Greg_Colbourn on EA Hotel Fundraiser 5: Out of runway! · 2019-11-07T21:51:44.642Z · EA · GW

Fair point, but Nicole refers to:

Some concern about the handling of past PR situations

Also it's worth mentioning our actual subsequent track record over the past year (i.e. zero further PR situations).

Comment by Greg_Colbourn on EA Hotel Fundraiser 5: Out of runway! · 2019-11-07T21:30:55.858Z · EA · GW

Regarding explicit direction vs advice - for me it was the fact that something I thought had been dealt with acceptably seems to have - unbeknownst to me - remained a live issue in terms of it affecting funding decisions. More explicit direction at the time, in terms of "if you want to get funding from CEA you need to do this", seems like it would've been better in hindsight.

Comment by Greg_Colbourn on EA Hotel Fundraiser 5: Out of runway! · 2019-11-07T21:15:45.216Z · EA · GW

Hi Julia, OK, but to me the point you raised about media was tangential, i.e. it was not directly related to the PR situations themselves. For those curious - I missed a meeting with a professional communications advisor at EAG London last year, on account of missing an email (in which the meeting was arranged for me) sent the day before whilst I was driving to London. I was overwhelmed at the time with interest in the hotel, and that wasn't the only email (or meeting) I missed.

Comment by Greg_Colbourn on EA Hotel Fundraiser 5: Out of runway! · 2019-11-07T19:01:07.976Z · EA · GW
Bill Gates does not (to my knowledge) support the EA Hotel

We are far too small to be on Bill Gates' radar. It's not worth his time looking at grants of less than millions of $ (who knows though, maybe we'll get there eventually?)

Comment by Greg_Colbourn on EA Hotel Fundraiser 5: Out of runway! · 2019-11-07T18:55:40.781Z · EA · GW
which is less likely to have much of an effective impact, compared to other organizations such as DeepMind and OpenAI...

Do you think this is true even in terms of impact/$ (given they are spending ~1,000-10,000x what we are)?

however, the EA Hotel also has the funding it needs from other sources now

We now have ~3 months' worth of runway. It's a good start to this fundraiser, but is hardly conducive to sustainability (as mentioned below, we would like to get to 6 months' runway to be able to start a formal hiring process for our Community & Projects Manager; the industry standard for non-profits is 18 months' runway).

Comment by Greg_Colbourn on EA Hotel Fundraiser 5: Out of runway! · 2019-11-07T18:25:26.809Z · EA · GW

See Nicole's comment in the parent thread.

Comment by Greg_Colbourn on EA Hotel Fundraiser 5: Out of runway! · 2019-11-07T18:20:53.340Z · EA · GW
Some concern about the handling of past PR situations; I think these were very difficult situations, but I think an excellent version of the hotel would have handled these better

I think this is a little unfair. It would be good to know exactly what we (or an excellent version of the hotel) could've (would've) done better regarding the PR situations (I assume this is referring to the Economist and Times articles). Oliver Habryka says here "I still think something in this space went wrong", but doesn't say what (see my reply to Habryka for detail on what happened with the media). Jonas Vollmer says in reply to Habryka's comment:

... "better than many did in the early stages (including myself in the early stages of EAF) but (due to lack of experience or training) considerably worse than most EA orgs would do these days." There are many counterintuitive lessons to be learnt, many of which I still don't fully understand, either.

but doesn't elaborate. I have also talked to someone at CEA at length about media, including what happened with the hotel, and they didn't suggest anything that we could've done better given the situation (of the media outlets publishing whether we liked it or not). So I'm genuinely curious here. Although, ok, I guess maybe we could’ve removed the flipboard sheet from the wall before the journalist came in, even though it was a surprise visit.

Comment by Greg_Colbourn on EA Hotel Fundraiser 5: Out of runway! · 2019-11-07T18:10:34.839Z · EA · GW
Potential for community health issues, and concern about handling of a staffing issue

It's true that there is potential for community health issues whenever you have a group of people living together. I think we have generally fared well in this regard so far though. It has been suggested that there is a significant reputational risk involved in funding a project such as the EA Hotel, given the interpersonal dynamics of a large group of people living together, and that it might therefore be better for it to be funded by individuals instead of grant-making organisations. However, as a counter-point: most universities provide massively communal student accommodation.

Regarding the staffing issue, I'm afraid there's not much I can say publicly. Although it was my understanding at the time that we dealt with it appropriately, after taking advice from prominent community members.

Comment by Greg_Colbourn on EA Hotel Fundraiser 5: Out of runway! · 2019-11-07T17:30:20.882Z · EA · GW

Thanks for commenting Nicole. To address your points (will post a separate comment for each):

Hotel management generally (including selection of guests/projects)

In terms of general management, I agree that there is always room for improvement, but I don't think things have been too bad so far.

Regarding the selection of guests/projects, I have a lot to say about this, which I hope to cover in EA Hotel Fundraiser 10: Estimating the relative Expected Value of the EA Hotel (Part 2), and possibly also a separate post focusing more on my personal opinions. For now I will say that I think there might be some philosophical disagreement between us, although I can't be certain as I don't know the specifics of which guests/projects you are referring to in particular.

Comment by Greg_Colbourn on EA Hotel Fundraiser 6: Concrete outputs after 17 months · 2019-11-06T13:35:44.961Z · EA · GW

Regarding emotional investment, I agree that there is a substantial amount of it in the EA Hotel. But I don't think there is significantly more than there is for any new EA project that several people put a lot of time and effort into. And for many people, not being able to do the work they want to do (i.e. not getting funded/paid to do it) is at least as significant as not being able to live where they want to live.

Still, you're right that critical comments can (often) be perceived as antisocial. I think this explains part of the reason that EA is considered by new people/outsiders to be not so welcoming.

Comment by Greg_Colbourn on EA Hotel Fundraiser 6: Concrete outputs after 17 months · 2019-11-06T13:24:01.022Z · EA · GW

Flagging that there has been a post specifically soliciting reasons against donating to the EA Hotel:

$100 Prize to Best Argument Against Donating to the EA Hotel

And also a Question which solicited critical responses:

Why is the EA Hotel having trouble fundraising?

I agree that the "equilibrium" you describe is not great, except I don't think it is an equilibrium; more that, due to various factors, things have been moving slower than they ideally should have.

EA hotel struggles to collect low tens of $

I'm guessing you meant tens of thousands. It's actually mid tens of thousands of $: £44.2k (~$57k) from 69 unique donors as of writing (not counting the money I've put in).