Posts

How to evaluate the impact of influencing governments vs direct work in a given cause area? 2019-06-26T08:14:30.928Z · score: 7 (2 votes)

Comments

Comment by liam_donovan on Coronavirus Research Ideas for EAs · 2020-03-29T05:52:53.262Z · score: 1 (1 votes) · EA · GW

sent, thank you

Comment by liam_donovan on Coronavirus Research Ideas for EAs · 2020-03-28T06:46:02.078Z · score: 0 (3 votes) · EA · GW

I'd be interested in joining the Slack group

Comment by liam_donovan on What are the key ongoing debates in EA? · 2020-03-11T09:32:03.042Z · score: 2 (2 votes) · EA · GW

I'd like to take Buck's side of the bet as well if you're willing to bet more

Comment by liam_donovan on COVID-19 brief for friends and family · 2020-03-05T12:57:21.273Z · score: 1 (1 votes) · EA · GW

What was her rationale for prioritizing hand soap over food?

Comment by liam_donovan on Is vegetarianism/veganism growing more partisan over time? · 2020-01-24T15:51:03.003Z · score: 3 (3 votes) · EA · GW

It's probably the lizardman constant showing up again -- if ~5% of people answer randomly and <5% of the population are actually veg*ns, then many of the self-reported veg*ns will have been people who answered randomly.
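
A rough numerical sketch of that effect, with all rates being hypothetical assumptions rather than survey figures:

```python
# Sketch: how much of the self-reported veg*n signal could be random noise?
# All numbers below are hypothetical illustrations, not real survey estimates.

def random_responder_share(true_rate, lizardman=0.05, options=2):
    """Fraction of self-reported veg*ns who are actually random responders.

    true_rate: assumed true fraction of the population that is veg*n
    lizardman: assumed fraction of respondents answering at random
    options:   answer choices a random responder picks between uniformly
    """
    honest_yes = (1 - lizardman) * true_rate   # genuine veg*ns answering honestly
    random_yes = lizardman * (1 / options)     # random responders who happen to say "yes"
    return random_yes / (honest_yes + random_yes)

for true_rate in (0.02, 0.05, 0.10):
    share = random_responder_share(true_rate)
    print(f"true veg*n rate {true_rate:.0%}: "
          f"~{share:.0%} of self-reported veg*ns answered randomly")
```

With a true rate at or below the lizardman constant, roughly a third to over half of the self-reported veg*ns can be noise, which could easily distort an apparent partisan trend.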

Comment by liam_donovan on Love seems like a high priority · 2020-01-21T12:56:43.404Z · score: 9 (4 votes) · EA · GW

I think it's misleading to call that evidence that marriage causes shorter lifespans (not sure if that's your intention)

Comment by liam_donovan on Love seems like a high priority · 2020-01-20T12:05:06.740Z · score: 7 (2 votes) · EA · GW

Do you have a link and/or a brief explanation of how they convincingly established causality for the "married women have shorter lives" claim?

Comment by liam_donovan on Love seems like a high priority · 2020-01-19T22:15:02.589Z · score: 5 (4 votes) · EA · GW

The next logical step is to evaluate the novel ideas, though, where a "cadre of uber-rational people" would be quite useful IMHO. In particular, a small group of very good evaluators seems much better than a large group of less epistemically rational evaluators who could be collectively swayed by bad reasoning.

Comment by liam_donovan on [AN #80]: Why AI risk might be solved without additional intervention from longtermists · 2020-01-19T13:55:30.215Z · score: 2 (2 votes) · EA · GW

I think the argument is that we don't know how much expected value is left, but our decisions will have a much higher expected impact if the future is high-EV, so we should make decisions that would be very good conditional on the future being high-EV.
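
A minimal expected-value sketch of that argument; the probabilities and payoffs below are made up purely for illustration:

```python
# Sketch: why expected value can be dominated by the "high-EV future" branch.
# All probabilities and values below are assumptions chosen for illustration.

p_high = 0.3          # assumed chance the future holds a lot of value
value_high = 1000.0   # value at stake if the future is high-EV
value_low = 1.0       # value at stake otherwise

# Action A is tailored to the high-EV branch; action B to the low-EV branch.
ev_a = p_high * 0.9 * value_high + (1 - p_high) * 0.1 * value_low
ev_b = p_high * 0.1 * value_high + (1 - p_high) * 0.9 * value_low

print(f"EV of acting as if the future is high-EV: {ev_a:.2f}")
print(f"EV of acting as if the future is low-EV:  {ev_b:.2f}")
```

Even with only a 30% chance that the future is high-EV, the action optimized for that branch dominates, because nearly all of the expected value sits in that branch.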

Comment by liam_donovan on 8 things I believe about climate change · 2019-12-28T22:21:54.752Z · score: 5 (4 votes) · EA · GW

Have you read this paper suggesting that there is no good evidence of a connection between climate change and the Syrian war? I found it quite persuasive.

Comment by liam_donovan on Are we living at the most influential time in history? · 2019-12-12T09:48:54.727Z · score: 7 (4 votes) · EA · GW

What is a Copernican prior? I can't find any Google results

Comment by liam_donovan on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-13T18:06:35.246Z · score: 12 (5 votes) · EA · GW

You're estimating there are ~1000 people doing direct EA work? I would have guessed around an order of magnitude less (~100-200 people).

Comment by liam_donovan on EA Hotel Fundraiser 5: Out of runway! · 2019-10-26T18:01:00.203Z · score: 11 (7 votes) · EA · GW

What if rooms at the EA Hotel were cost-price by default, and you allocated "scholarships" based on a combination of need and merit, as many US universities do? This might avoid a negative feedback cycle (because you can retain the most exceptional people) while reducing costs and making the EA Hotel a less attractive target for unaligned people to take resources from.

Comment by liam_donovan on EA Hotel Fundraiser 5: Out of runway! · 2019-10-26T17:55:32.115Z · score: 6 (5 votes) · EA · GW

What does this mean in the context of the EA Hotel? In particular, would your point apply to university scholarships as well, and if not, what breaks the analogy between scholarships and the Hotel?

Comment by liam_donovan on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-14T17:10:51.984Z · score: 3 (2 votes) · EA · GW

Maybe the most successful recruitment books directly target people 1-2 stages away in the recruitment funnel? In the case of HPMOR/Crystal Society, that would be quantitatively minded people who enjoy LW-style rationality rather than those who are already interested in AI alignment specifically.

Comment by liam_donovan on What opinions that you hold would you be reluctant to express publicly to other EAs? · 2019-09-13T06:10:53.936Z · score: 2 (2 votes) · EA · GW

Doesn't that assume EAs should value the lives of fetuses and e.g. adult humans equally?

Comment by liam_donovan on What opinions that you hold would you be reluctant to express publicly to other EAs? · 2019-09-10T20:45:24.988Z · score: 7 (6 votes) · EA · GW

Due to politicization, I'd expect reducing farm animal suffering/death to be much cheaper/more tractable per animal than reducing abortion is per fetus; choosing abortion as a cause area would also imperil EA's ability to recruit smart people across the political spectrum. I'd guess that saving a fetus would need to be ~100x more important in expectation than saving a farm animal for reducing abortions to be a potential cause area; in an EA framework, what grounds are there for believing that to be true?

Note: It would also be quite costly for EA as a movement to generate a better-researched estimate of the parameters due to the risk of politicizing the movement.

Comment by liam_donovan on Extinguishing or preventing coal seam fires is a potential cause area · 2019-08-01T23:07:10.395Z · score: 1 (1 votes) · EA · GW

Reducing global poverty, and improving farming practices, lack philosophically attractive problems (for a consequentialist, at least) - yet EAs work heavily on them all the same.

I think this comes from an initial emphasis towards short-term, easily measured interventions (promoted by the $x saves a life meme, drowning child argument, etc.) among the early cluster of EA advocates. Obviously, the movement has since branched out into cause areas that trade certainty and immediate benefit for the chance of higher impact, but these tend to be clustered in "philosophically attractive" fields. It seems plausible to me that climate change has fallen between two stools: not concrete enough to appeal to the instinct for quantified altruism, but not intellectually attractive enough to compete with AI risk and other long-termist interventions.

Comment by liam_donovan on Defining Effective Altruism · 2019-07-20T04:48:11.683Z · score: 15 (13 votes) · EA · GW

What does it mean to be "pro-science"? In other words, what might a potential welfarist, maximizing, impartial, and non-normative movement that doesn't meet this criterion look like?

I ask because I don't have a clear picture of a definition that would be both informative and uncontroversial. For instance, the mainstream scientific community was largely dismissive of SIAI/MIRI for many years; would "proto-EAs" who supported them at that time be considered pro-science? I assume that excluding MIRI does indeed count as controversial, but then I don't have a clear picture of what activities/causes being "pro-science" would exclude.

Edit: Why was this downvoted?

Comment by liam_donovan on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-19T05:26:20.253Z · score: 3 (2 votes) · EA · GW

An example of what I had in mind was focusing more on climate change when running events like Raemon's Question Answering hackathons. My intuition says that it would be much easier to turn up insights like the OP than insights of "equal importance to EA" (however that's defined) in e.g. technical AI safety.

Comment by liam_donovan on Want to Save the World? Enter the Priesthood · 2019-07-16T07:08:46.197Z · score: 9 (3 votes) · EA · GW

The answer to your question is basically what I phrased as a hypothetical before:

participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.

I was involved in EA at university for 2 years before coming to believe Catholicism is true, and it didn't seem like Church dogma conflicted with my pro-EA intuitions at all, so I've just stayed with it. It helped that I wasn't ever an EA for rigidly consequentialist reasons; I just wanted to help people and EA's analytical approach was a natural fit for my existing interests (e.g. LW-style rationality).

I'm not sure my case (becoming both EA and Catholic due to LW-style reasoning) is broadly applicable; I think EA would be better served sticking to traditional recruiting channels rather than trying to extend outreach to religious people qua religious people. Moreover, I feel that it's very very important for EA to defend the value of taking ideas seriously, which would rule out a lot of the proposed religious outreach strategies you see (such as this post from Ozy).

Comment by liam_donovan on Corporate Global Catastrophic Risks (C-GCRs) · 2019-07-13T22:18:25.138Z · score: 4 (3 votes) · EA · GW

I downvoted the post because I didn't learn anything from it that would be relevant to a discussion of C-GCRs (it's possible I missed something). I agree that the questions are serious ones, and I'd be interested to see a top level post that explored them in more detail. I can't speak for anyone else on this, and I admit I downvote things quite liberally.

Comment by liam_donovan on Want to Save the World? Enter the Priesthood · 2019-07-13T21:46:01.461Z · score: 14 (4 votes) · EA · GW

Tl;dr: the moral framework of most religions is different enough from EA's to make this reasoning nonsensical; it's an adversarial move to try to change religions' moral frameworks, but there's potentially scope for religions to adopt EA tools.


Like I said in my reply to khorton, this logic seems very strange to me. Surely the truth of the Christian conception of heaven/hell strongly implies the existence of an objective, non-consequentialist morality? At that point, it's not clear why "effectively doing the most good" in this manner is a more moral [edit: terminal] goal than "effectively producing the most paperclips". It's not surprising that trying to shoehorn Christian ideas into a utilitarian framework produces garbage!

I agree that this implies that EA would have to develop a distinct set of arguments in order to convince priests to hijack the resources of the Church to further the goals of the EA subculture; I also think this is an unnecessarily adversarial move that shouldn't be under serious consideration.

That doesn't mean that the ideas and tools of the EA community are inapplicable in principle to Catholic charity, as long as they are situated within a Catholic moral framework. I'm confident that e.g. Catholic Relief Services would rather spend money on interventions like malaria nets than on interventions like PlayPumps. However, even if the Catholic Church deferred every such decision to a team of top EAs, I don't think the cumulative impact (under an EA framework) would be high enough to justify the cost of outreach to the Church. I'm not confident of this though; could be an interesting Fermi estimate problem.

(I've been trying to make universally applicable arguments, but it feels dishonest at this point not to mention that I am in fact Catholic.)

Comment by liam_donovan on Want to Save the World? Enter the Priesthood · 2019-07-13T21:17:06.919Z · score: 1 (1 votes) · EA · GW

Thank you! I'm not sure, but I assume that I accidentally highlighted part of the post while trying to fix a typo, then accidentally pressed e.g. "ctrl-v" instead of "v" (I often instinctively copy half-finished posts into the clipboard). That seems like a pretty weird accident, but I'm pretty sure it was just user error rather than anything to do with the EA forum.

Comment by liam_donovan on Want to Save the World? Enter the Priesthood · 2019-07-13T07:01:41.387Z · score: 1 (1 votes) · EA · GW

This post seems to have become garbled when I tried to fix a typo; any idea how I can restore the original version?

Comment by liam_donovan on Want to Save the World? Enter the Priesthood · 2019-07-13T06:57:21.530Z · score: 19 (8 votes) · EA · GW

This doesn't seem like a great idea to me for two reasons:

1. The notion of explicitly manipulating one's beliefs about something as central as religion for non-truthseeking reasons seems very sketchy, especially when the core premise of EA relies on an accurate understanding of highly uncertain subjects.

2. Am I correct in saying the ultimate aim of this strategy is to shift religious groups' dogma from (what they believe to be) divinely revealed truth to [divinely revealed truth + random things EAs want]? I'm genuinely not sure if I interpreted the post correctly, but that seems like an unnecessarily adversarial move against a set of organized groups with largely benign goals.

Comment by liam_donovan on Want to Save the World? Enter the Priesthood · 2019-07-13T06:34:13.228Z · score: 1 (1 votes) · EA · GW

Yeah, I don't think I phrased my comment very clearly.

I was trying to say that, if the Christian conception of heaven/hell exists, then it is highly likely that an objective non-utilitarian morality exists. It shouldn't be surprising that continuing to use utilitarianism within an otherwise Christian framework yields garbage results! As you say, a Christian can still be an EA, for most relevant definitions of "be an EA".

Comment by liam_donovan on Want to Save the World? Enter the Priesthood · 2019-07-12T19:37:17.197Z · score: 5 (3 votes) · EA · GW

I'm fairly confident the Church does not endorse basing moral decisions on expected value analysis; that says absolutely nothing about the compatibility of Catholicism and EA. For example, someone with an unusually analytical mindset might see participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.

Comment by liam_donovan on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-08T23:16:13.563Z · score: 11 (6 votes) · EA · GW

This example of a potentially impactful and neglected climate change intervention seems like good evidence that EAs should put substantially more energy towards researching other such examples. In particular, I'm concerned that the neglect of climate change has more to do with the lack of philosophically attractive problems relative to e.g. AI risk, and less to do with marginal impact of working on the cause area.

Comment by liam_donovan on How to evaluate the impact of influencing governments vs direct work in a given cause area? · 2019-06-27T05:39:27.088Z · score: 3 (2 votes) · EA · GW

Great answer, thank you!

Do you know of any examples of the "direct work+" strategy working, especially for EA-recommended charities? The closest thing I can think of would be the GiveDirectly UBI trial; is that the sort of thing you had in mind?

Comment by liam_donovan on There's Lots More To Do · 2019-06-08T04:12:27.152Z · score: 3 (2 votes) · EA · GW

It seems like that question would interact weirdly with expectations of future income: as a college student I donate ~1% of expenses, but if I could only save one life, right now, I would probably try to take out a large, high-interest loan to donate a large sum. That depends on availability of loans, risk aversion, expectations of future income, etc. much more than it does on my moral values.

Comment by liam_donovan on Why we should be less productive. · 2019-05-17T10:07:27.438Z · score: 1 (1 votes) · EA · GW

-

Comment by liam_donovan on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-05-10T06:13:16.141Z · score: 1 (1 votes) · EA · GW

Isn't this essentially a reformulation of the common EA argument that the most high-impact ideas are likely to be "weird-sounding" or unintuitive? I think it's a strong point in favor of explicit modelling, but I want to avoid double-counting evidence if they are in fact similar arguments.

Comment by liam_donovan on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-05-10T06:08:29.345Z · score: 1 (1 votes) · EA · GW

A recent example of this happening might be EA LTF Fund grants to various organizations trying to improve societal epistemic rationality (e.g. by supporting prediction markets)

Comment by liam_donovan on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-05-10T06:06:25.513Z · score: 2 (2 votes) · EA · GW

Can you elaborate on which areas of EA might tend towards each extreme? Specific examples (as vague as needed) would be awesome too, but I understand if you can't give any

Comment by liam_donovan on Why is the EA Hotel having trouble fundraising? · 2019-03-30T08:41:54.935Z · score: 1 (1 votes) · EA · GW

Does he still endorse the retraction? It's just idle curiosity on my part but it wasn't clear from the comments

Comment by liam_donovan on Why is the EA Hotel having trouble fundraising? · 2019-03-27T23:08:36.346Z · score: 8 (6 votes) · EA · GW

A few thoughts this post raised for me (not directed at OP specifically):

1. Does RAISE/the Hotel have a standardized way to measure the progress of people self-studying AI? If so, especially if it's been vetted by AI risk organizations, it seems like that would go a long way towards resolving this issue.

2. Does "ea organisations are unwilling to even endorse the hotel" refer to RAISE/Rethink Charity (very surprising & important evidence!), or other EA organizations without direct ties to the Hotel?

3. I would be curious what the marginal cost of adding a new resident is: if high, this would be a good reason to leave rooms unoccupied rather than funding "tragic" projects.

4. Strongly agreed: the EV post seemed like an overly complex toy model that was unlikely to predict real-world outcomes. I think high-level heuristics for evaluating impact would be much more useful/convincing (e.g. the framework laid out here)

5. In general, donors who take a "hits-based giving" approach to funding speculative projects in their personal network are likely to become associated with failed projects regardless of personal competence, so I don't think this is evidence against the case the EA hotel makes. My relatively uninformed inside view is that the founder of the Kernel project should be associated with its failure, rather than Greg, and I think the outside view agrees.

6. I wonder how different the fundraising situation would be if it had started during the burst of initial enthusiasm/publicity surrounding the hotel?

Comment by liam_donovan on Apology · 2019-03-23T20:06:13.067Z · score: 4 (3 votes) · EA · GW

re signal boost: any particular reason why?

Comment by liam_donovan on PAF: Opioid Epidemic · 2019-03-15T05:49:51.189Z · score: 1 (1 votes) · EA · GW

Did the report consider increasing access to medical marijuana as an alternative to opioids? If so, what was the finding? (I didn't see any mention while skimming it) My impression was that many leaders in communities affected by opioid abuse see access to medical marijuana as the most effective intervention. One (not particularly good) example

Comment by liam_donovan on The Importance of Truth-Oriented Discussions in EA · 2019-03-15T04:43:45.269Z · score: 4 (4 votes) · EA · GW

Are you saying there are groups who go around inflicting PR damage on generic communities they perceive as vulnerable, or that there are groups who are inclined to attack EA in particular, but will only do so if we are perceived as vulnerable (or something else I'm missing)? I'm having a hard time understanding the mechanism through which this occurs.

Comment by liam_donovan on Making discussions in EA groups inclusive · 2019-03-05T00:23:48.050Z · score: -3 (11 votes) · EA · GW

The law school example seems like weak evidence to me, since the topics mentioned are essential to practicing law, whereas most of the suggested "topics to avoid" are absolutely irrelevant to EA. Women who want to practice law are presumably willing to engage these topics as a necessary step towards achieving their goal. However, I don't see why women who want to effectively do good would be willing to (or expected to) engage with irrelevant arguments they find uncomfortable or toxic.

Comment by liam_donovan on Profiting-to-Give: harnessing EA talent with a new funding model · 2019-03-04T23:22:49.554Z · score: 3 (2 votes) · EA · GW

I like the idea of profiting-to-give as a way to strengthen the community and engage people outside of the limited number of direct work EA jobs; however, I don't see how an "EA certification" effectively accomplishes this goal.

I do think there would be a place for small EA-run businesses in fields with:

  • a lot of EAs
  • low barriers to entry
  • sharply diminishing returns to scale

Such a business might plausibly be able to donate at least as much money as its employees were previously donating individually by virtue of its competitive success in the marketplace (i.e. without relying on EA branding or an EA customer base). By allowing EAs to work together for a common cause, it would also reduce value drift and improve morale.

More speculatively, it might improve recruitment of new EAs and reduce hiring costs for EA organizations by making it easier to find and evaluate committed candidates. If the business collectively decided how to donate its profits, it could also efficiently fulfill a function similar to donor lotteries, freeing up more money for medium-size grants. Lastly, by focusing solely on maximizing profit, "profiting-to-give" would avoid the pitfalls of social benefit companies Peter_Hurford mentions while providing fulfilling work to EtG EAs.

Comment by liam_donovan on Climate Change Is, In General, Not An Existential Risk · 2019-02-01T07:22:43.424Z · score: 1 (1 votes) · EA · GW

That's a good point, but I don't think my argument was brittle in this sense (perhaps it was poorly phrased). In general, my point is that climate change amplifies the probabilities of each step in many potential chains of catastrophic events. Crucially, these chains have promoted war/political instability in the past and are likely to in the future. That's not the same as saying that each link in a single untested causal chain is likely to happen, leading to a certain conclusion, which is my understanding of a "brittle argument"

On the other hand, I think it's fair to say that e.g. "Climate change was for sure the primary cause of the Syrian civil war" is a brittle argument

Comment by liam_donovan on Climate Change Is, In General, Not An Existential Risk · 2019-02-01T07:06:31.915Z · score: 3 (2 votes) · EA · GW

I'd previously read that there was substantial evidence linking climate change --> extreme weather --> famine --> Syrian civil war (a major source of refugees). One example: https://journals.ametsoc.org/doi/10.1175/WCAS-D-13-00059.1

This paper, however, claims the opposite: https://www.sciencedirect.com/science/article/pii/S0962629816301822

"The Syria case, the article finds, does not support ‘threat multiplier’ views of the impacts of climate change; to the contrary, we conclude, policymakers, commentators and scholars alike should exercise far greater caution when drawing such linkages or when securitising climate change."

I'll have to investigate more since I was highly confident of such a 'threat multiplier' view.

On your other two points, I expect the idea of anthropogenic global warming to continue to be associated with the elite; direct evidence of the climate changing is likely to convince people that climate change is real, but not necessarily that humans caused it. Concern over AGW is currently tied with various beliefs (including openness to immigration) and cultural markers predominantly shared by a subsection of the educated and affluent. I expect increasing inequality to calcify tribal barriers, which would make it very difficult to create widespread support for commonly proposed solutions to AGW.

PS: how do I create hyperlinks?

Comment by liam_donovan on Climate Change Is, In General, Not An Existential Risk · 2019-01-12T07:02:10.794Z · score: 1 (3 votes) · EA · GW

I don't think this is indirect and unlikely at all; in fact, I think we are seeing this effect already. In particular, some of the 2nd-order effects of climate change (such as natural catastrophe --> famine --> war/refugees) are already warping politics in the developed world in ways that will make it more difficult to fight climate change (e.g. strengthening politicians who believe climate change is a myth). As the effects of climate change intensify, so will the dangers they pose to efforts against other x-risks.

In particular, a plausible path is: climate change immiserates the poor/working class + elite attempts to stop climate change hurt the working class (e.g. the war on coal) --> even higher inequality --> broad-based resentment against elite initiatives. X-risk reduction is likely to be one of those elite initiatives simply because most x-risks are unintuitive and require time/energy/specialized knowledge to evaluate, which few non-elites have.

Comment by liam_donovan on The case for taking AI seriously as a threat to humanity · 2018-12-28T20:39:40.680Z · score: -2 (5 votes) · EA · GW

1. A system that will imprison a black person but not an otherwise-identical white person can be accurately described as "a racist system"

2. One example of such a system is employing a ML algorithm that uses race as a predictive factor to determine bond amounts and sentencing

3. White people will tend to be biased towards more positive evaluations of a racist system because they have not experienced racism, so their evaluations should be given lower weight

4. Non-white people tend to evaluate racist systems very negatively, even when they improve predictive accuracy

To me, the rational conclusion is to not support racist systems, such as the use of this predictive algorithm.

It seems like many EAs disagree, which is why I've tried to break down my thinking to identify specific points of disagreement. Maybe people believe that #4 is false? I'm not sure where to find hard data to prove it (custom Google survey maybe?). I'm ~90% sure it's true, and would be willing to bet money on it, but if others' credences are lower that might explain the disagreement.

Edit: Maybe an implicit difference is epistemic modesty regarding moral theories -- you could frame my argument in terms of "white people misestimating the negative utility of racial discrimination", but I think it's also possible for demographic characteristics to bias one's beliefs about morality. There's no a priori reason to expect your demographic group to have more moral insight than others; one obvious example is the correlation between gender and support for utilitarianism. I don't see any reason why men would have more moral insight, so as a man I might want to reduce my credence in utilitarianism to correct for this bias.

Similarly, I expect the disagreement between a white EA who likes race-based sentencing and a random black person who doesn't to be a combination of disagreement about facts (e.g. the level of harm caused by racism) and moral beliefs (e.g. importance of fairness). However, *both* disagreements could stem from bias on the EA's part, and so I think the EA ought not discount the random guy's point of view by assigning 0 probability to the chance that fairness is morally important.

Comment by liam_donovan on Response to a Dylan Matthews article on Vox about bipartisanship · 2018-12-27T01:04:06.908Z · score: 7 (6 votes) · EA · GW

How did Dylan Matthews become associated with EA? This is a serious question -- based on the articles of his I've read, he doesn't seem to particularly care about some core EA values, such as epistemic rationality and respect for "odd-sounding" opinions.

Comment by liam_donovan on EA Hotel with free accommodation and board for two years · 2018-06-10T06:23:40.836Z · score: 4 (4 votes) · EA · GW

I suspect Greg/the manager would not be able to filter projects particularly well based on personal interviews; since the point of the hotel is basically 'hits-based giving', I think a blanket ban on irreversible projects is more useful (and would satisfy most of the concerns in the fb comment vollmer linked)

Comment by liam_donovan on EA Hotel with free accommodation and board for two years · 2018-06-10T06:13:10.111Z · score: 3 (5 votes) · EA · GW

Following on vollmer's point, it might be reasonable to have a blanket rule against policy/PR/political/etc work -- anything that is irreversible and difficult to evaluate. "Not being able to get funding from other sources" is definitely a negative signal, so it seems worthwhile to restrict guests to projects whose worst possible outcome is unproductively diverting resources.

On the other hand, I really can't imagine what harm research projects could do; I guess the worst case scenario is someone so persuasive they can convince lots of EAs of their ideas but so bad at research their ideas are all wrong, which doesn't seem very likely. (why not 'malicious & persuasive people'? the community can probably identify those more easily by the subjects they write about)

Furthermore, guests' ability to engage in negative-EV projects will be constrained by the low stipend and terrible location (if I wanted to engage in Irish republican activism, living at the EA hotel wouldn't help very much). I think the largest danger to be alert for is reputation risk, especially from bad popularizations of EA, since this is easier to do remotely (one example is Intentional Insights, the only negative-EV EA project I know of)

Comment by liam_donovan on EA Hotel with free accommodation and board for two years · 2018-06-10T05:59:29.820Z · score: 5 (5 votes) · EA · GW

From my perspective, the manager should

  1. Not (necessarily) be an EA
  2. Be paid more (even if this trades off against capacity, etc)
  3. Not also be a community mentor

One of the biggest possible failure modes for this project seems to be hiring a not-excellent manager; even a small increase in competence could make the difference between the project failing and succeeding. Thus, the #1 consideration ought to be "how to maximize the manager's expected skill". Unfortunately, the combination of undesirable location, only hiring EAs, and the low salary seems to restrict the talent pool enormously. My (perhaps totally wrong) impression is that some of these decisions are made on the basis of a vague idea of how things ought to be, rather than a conscious attempt to maximize success.

Brief arguments/responses:

  • Not only are EAs disproportionately unlikely to have operations skills (as 80K points out), but I suspect that the particular role of hotel manager requires even less of the skills we tend to have (such as a flair for optimization), and even more of the skills we tend not to have (consistency, hotel-related metis). I'm unsure of this but it's an important question to evaluate.

  • The manager will only be at the ground floor of a new organization if it doesn't fail. I think failure is more likely than expansion, but it's reasonable to be risk averse considering this is the first project of its kind in EA (diminishing marginal benefit). Consequently, optimizing for initial success seems more important than optimizing for future expansion.

  • The best feasible EA candidate is likely to have less external validation of managerial capability than a similarly qualified external candidate, who might be a hotel manager already! Thus, it'll be harder to actually identify the strong EA candidates, even if they exist.

  • The manager will get free room/board and live in low-CoL Blackpool, but I think this is outweighed by the necessity of moving to an undesirable location, and not being able to choose where you stay/eat. On net, I expect you'd need to offer a higher salary to attract the same level of talent as in, say, Oxford (though with more variance depending on how people perceive Blackpool).

  • You might be able to hire an existing hotel manager in Blackpool, which would reduce the risk of turnover and guarantee a reasonable level of competence. This would obviously require separating the hotel manager and the community mentor, but I'm almost certain that doing so would maximize the chances of success either way (division of labor!). I'm also not sure what exactly the cost is: the community mentor could just be an extroverted guest working on a particularly flexible project.

  • Presumably many committed and outgoing EAs (i.e. the people you'd want as managers) are already able to live with/near other EAs; moving to Blackpool would just take away their ability to choose who to live with.

Of course, there could already be exceptional candidates expressing interest, but I don't understand why the default isn't hiring a non-EA with direct experience.