Posts

How to evaluate the impact of influencing governments vs direct work in a given cause area? 2019-06-26T08:14:30.928Z

Comments

Comment by Liam_Donovan on What actions most effective if you care about reproductive rights in America? · 2022-06-26T21:23:51.071Z · EA · GW

Even if you think (eg) abortion access is bad on the margin

If you believe this, doesn't it flip the sign of the "very best interventions" (i.e. you would believe they are exceptionally bad interventions)?

Comment by Liam_Donovan on Some potential lessons from Carrick’s Congressional bid · 2022-05-22T20:11:07.690Z · EA · GW

What was the positive effect supposed to be? 

Comment by Liam_Donovan on Some potential lessons from Carrick’s Congressional bid · 2022-05-22T06:29:55.546Z · EA · GW

17.4% of the citizen voting-age population of OR-6 is Hispanic

https://davesredistricting.org/maps#viewmap::9b2b545f-5cd2-4e0d-a9b9-cc3915a4750f

Comment by Liam_Donovan on Some potential lessons from Carrick’s Congressional bid · 2022-05-22T06:14:35.111Z · EA · GW

So now that it's over, can someone explain what the heck was up with SBF donating $6m to HMP in exchange for a $1m donation to Flynn? From an outside perspective it seems tailor-made to look vaguely suspicious and generate bad press, without seeming to produce any tangible benefits for Flynn or EA.

Comment by Liam_Donovan on Biblical advice for people with short AI timelines · 2021-12-05T19:39:44.507Z · EA · GW

It seems like these observations could be equally explained by Paul correctly having high credence in long timelines, and giving advice that is appropriate in worlds where long timelines are true, without explicitly trying to persuade people of his views on timelines. Given that, I'm not sure there's any strong evidence that this is good advice to keep in mind when you actually do have short timelines, regardless of your views on the Bible.

Comment by Liam_Donovan on Coronavirus Research Ideas for EAs · 2020-03-29T05:52:53.262Z · EA · GW

sent, thank you

Comment by Liam_Donovan on Coronavirus Research Ideas for EAs · 2020-03-28T06:46:02.078Z · EA · GW

I'd be interested in joining the Slack group

Comment by Liam_Donovan on What are the key ongoing debates in EA? · 2020-03-11T09:32:03.042Z · EA · GW

I'd like to take Buck's side of the bet as well if you're willing to bet more

Comment by Liam_Donovan on COVID-19 brief for friends and family · 2020-03-05T12:57:21.273Z · EA · GW

What was her rationale for prioritizing hand soap over food?

Comment by Liam_Donovan on Is vegetarianism/veganism growing more partisan over time? · 2020-01-24T15:51:03.003Z · EA · GW

It's probably the lizardman constant showing up again -- if ~5% of people answer randomly and <5% of the population are actually veg*ns, then many of the self-reported veg*ns will have been people who answered randomly.
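
To make the arithmetic concrete, here's a toy calculation (the noise rate, true prevalence, and response behaviour are all made-up illustrative assumptions, not survey data):

```python
# Toy "lizardman constant" model with assumed numbers: 5% of respondents
# answer arbitrarily, an arbitrary answer is "yes, I'm veg*n" half the time,
# and the true prevalence of veg*ns is 3%.
random_answerers = 0.05
p_random_says_yes = 0.5
true_prevalence = 0.03

noise_yes = random_answerers * p_random_says_yes        # 2.5% of all respondents
honest_yes = (1 - random_answerers) * true_prevalence   # ~2.85% of all respondents

share_from_noise = noise_yes / (noise_yes + honest_yes)
print(f"Self-reported veg*ns who answered randomly: {share_from_noise:.0%}")  # ~47%
```

Under those assumptions, nearly half of the self-reported veg*ns are noise rather than actual veg*ns.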

Comment by Liam_Donovan on Love seems like a high priority · 2020-01-21T12:56:43.404Z · EA · GW

I think it's misleading to call that evidence that marriage causes shorter lifespans (not sure if that's your intention).

Comment by Liam_Donovan on Love seems like a high priority · 2020-01-20T12:05:06.740Z · EA · GW

Do you have a link and/or a brief explanation of how they convincingly established causality for the "married women have shorter lives" claim?

Comment by Liam_Donovan on Love seems like a high priority · 2020-01-19T22:15:02.589Z · EA · GW

The next logical step is to evaluate the novel ideas, though, where a "cadre of uber-rational people" would be quite useful IMHO. In particular, a small group of very good evaluators seems much better than a large group of less epistemically rational evaluators who could be collectively swayed by bad reasoning.

Comment by Liam_Donovan on [AN #80]: Why AI risk might be solved without additional intervention from longtermists · 2020-01-19T13:55:30.215Z · EA · GW

I think the argument is that we don't know how much expected value is left, but our decisions will have a much higher expected impact if the future is high-EV, so we should make decisions that would be very good conditional on the future being high-EV.
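
As a toy illustration of that argument (all numbers invented purely for illustration):

```python
# Invented figures: a targeted bet only pays off if the future is high-EV,
# while a "robust" action does modestly well regardless.
p_high_ev = 0.2               # credence that the future is high-value
impact_if_high = 1_000_000    # impact of the targeted bet in that world
impact_if_low = 0             # the same bet accomplishes ~nothing otherwise
robust_impact = 1_000         # impact of the robust action either way

ev_targeted = p_high_ev * impact_if_high + (1 - p_high_ev) * impact_if_low
ev_robust = robust_impact
print(ev_targeted, ev_robust)  # 200000.0 vs 1000
```

Even at a modest credence, expected impact is dominated by how well a decision performs conditional on the future being high-EV.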

Comment by Liam_Donovan on 8 things I believe about climate change · 2019-12-28T22:21:54.752Z · EA · GW

Have you read this paper suggesting that there is no good evidence of a connection between climate change and the Syrian war? I found it quite persuasive.

Comment by Liam_Donovan on Are we living at the most influential time in history? · 2019-12-12T09:48:54.727Z · EA · GW

What is a Copernican prior? I can't find any Google results.

Comment by Liam_Donovan on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-13T18:06:35.246Z · EA · GW

You're estimating there are ~1000 people doing direct EA work? I would have guessed around an order of magnitude less (~100-200 people).

Comment by Liam_Donovan on EA Hotel Fundraiser 5: Out of runway! · 2019-10-26T18:01:00.203Z · EA · GW

What if rooms at the EA Hotel were offered at cost price by default, and you allocated "scholarships" based on a combination of need and merit, as many US universities do? This might avoid a negative feedback cycle (because you can retain the most exceptional people) while reducing costs and making the EA Hotel a less attractive target for unaligned people to extract resources from.

Comment by Liam_Donovan on EA Hotel Fundraiser 5: Out of runway! · 2019-10-26T17:55:32.115Z · EA · GW

What does this mean in the context of the EA Hotel? In particular, would your point apply to university scholarships as well, and if not, what breaks the analogy between scholarships and the Hotel?

Comment by Liam_Donovan on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-14T17:10:51.984Z · EA · GW

Maybe the most successful recruitment books directly target people 1-2 stages away in the recruitment funnel? In the case of HPMOR/Crystal Society, that would be quantitatively minded people who enjoy LW-style rationality rather than those who are already interested in AI alignment specifically.

Comment by Liam_Donovan on What opinions that you hold would you be reluctant to express publicly to other EAs? · 2019-09-13T06:10:53.936Z · EA · GW

Doesn't that assume EAs should value the lives of fetuses and e.g. adult humans equally?

Comment by Liam_Donovan on What opinions that you hold would you be reluctant to express publicly to other EAs? · 2019-09-10T20:45:24.988Z · EA · GW

Due to politicization, I'd expect reducing farm animal suffering/death to be much cheaper/more tractable per animal than reducing abortion is per fetus; choosing abortion as a cause area would also imperil EA's ability to recruit smart people across the political spectrum. I'd guess that saving a fetus would need to be ~100x more important in expectation than saving a farm animal for reducing abortions to be a potential cause area; in an EA framework, what grounds are there for believing that to be true?

Note: It would also be quite costly for EA as a movement to generate a better-researched estimate of the parameters due to the risk of politicizing the movement.

Comment by Liam_Donovan on Extinguishing or preventing coal seam fires is a potential cause area · 2019-08-01T23:07:10.395Z · EA · GW

Reducing global poverty, and improving farming practices, lack philosophically attractive problems (for a consequentialist, at least) - yet EAs work heavily on them all the same.

I think this comes from an initial emphasis towards short-term, easily measured interventions (promoted by the $x saves a life meme, drowning child argument, etc.) among the early cluster of EA advocates. Obviously, the movement has since branched out into cause areas that trade certainty and immediate benefit for the chance of higher impact, but these tend to be clustered in "philosophically attractive" fields. It seems plausible to me that climate change has fallen between two stools: not concrete enough to appeal to the instinct for quantified altruism, but not intellectually attractive enough to compete with AI risk and other long-termist interventions.



Comment by Liam_Donovan on Defining Effective Altruism · 2019-07-20T04:48:11.683Z · EA · GW

What does it mean to be "pro-science"? In other words, what might a potential welfarist, maximizing, impartial, and non-normative movement that doesn't meet this criterion look like?

I ask because I don't have a clear picture of a definition that would be both informative and uncontroversial. For instance, the mainstream scientific community was largely dismissive of SIAI/MIRI for many years; would "proto-EAs" who supported them at that time be considered pro-science? I assume that excluding MIRI does indeed count as controversial, but then I don't have a clear picture of what activities/causes being "pro-science" would exclude.


edit: Why was this downvoted?

Comment by Liam_Donovan on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-19T05:26:20.253Z · EA · GW

An example of what I had in mind was focusing more on climate change when running events like Raemon's Question Answering hackathons. My intuition says that it would be much easier to turn up insights like the OP than insights of "equal importance to EA" (however that's defined) in e.g. technical AI safety.

Comment by Liam_Donovan on Want to Save the World? Enter the Priesthood · 2019-07-16T07:08:46.197Z · EA · GW

The answer to your question is basically what I phrased as a hypothetical before:

participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.

I was involved in EA at university for 2 years before coming to believe Catholicism is true, and it didn't seem like Church dogma conflicted with my pro-EA intuitions at all, so I've just stayed with it. It helped that I wasn't ever an EA for rigidly consequentialist reasons; I just wanted to help people and EA's analytical approach was a natural fit for my existing interests (e.g. LW-style rationality).

I'm not sure my case (becoming both EA and Catholic due to LW-style reasoning) is broadly applicable; I think EA would be better served sticking to traditional recruiting channels rather than trying to extend outreach to religious people qua religious people. Moreover, I feel that it's very very important for EA to defend the value of taking ideas seriously, which would rule out a lot of the proposed religious outreach strategies you see (such as this post from Ozy).

Comment by Liam_Donovan on Corporate Global Catastrophic Risks (C-GCRs) · 2019-07-13T22:18:25.138Z · EA · GW

I downvoted the post because I didn't learn anything from it that would be relevant to a discussion of C-GCRs (it's possible I missed something). I agree that the questions are serious ones, and I'd be interested to see a top-level post that explored them in more detail. I can't speak for anyone else on this, and I admit I downvote things quite liberally.

Comment by Liam_Donovan on Want to Save the World? Enter the Priesthood · 2019-07-13T21:46:01.461Z · EA · GW

Tl;dr: the moral framework of most religions is different enough from EA's to make this reasoning nonsensical; it's an adversarial move to try to change religions' moral frameworks, but there's potentially scope for religions to adopt EA tools.


Like I said in my reply to khorton, this logic seems very strange to me. Surely the veracity of the Christian conception of heaven/hell strongly implies the existence of an objective, non-consequentialist morality? At that point, it's not clear why "effectively doing the most good" in this manner is a more moral [edit: terminal] goal than "effectively producing the most paperclips". It's not surprising that trying to shoehorn Christian ideas into a utilitarian framework is going to produce garbage!

I agree that this implies that EA would have to develop a distinct set of arguments in order to convince priests to hijack the resources of the Church to further the goals of the EA subculture; I also think this is an unnecessarily adversarial move that shouldn't be under serious consideration.

That doesn't mean that the ideas and tools of the EA community are inapplicable in principle to Catholic charity, as long as they are situated within a Catholic moral framework. I'm confident that e.g. Catholic Relief Services would rather spend money on interventions like malaria nets than on interventions like PlayPumps. However, even if the Catholic Church deferred every such decision to a team of top EAs, I don't think the cumulative impact (under an EA framework) would be high enough to justify the cost of outreach to the Church. I'm not confident of this, though; it could be an interesting Fermi estimate problem.

(I've been trying to make universally applicable arguments, but it feels dishonest at this point not to mention that I am in fact Catholic.)

Comment by Liam_Donovan on Want to Save the World? Enter the Priesthood · 2019-07-13T21:17:06.919Z · EA · GW

Thank you! I'm not sure, but I assume that I accidentally highlighted part of the post while trying to fix a typo, then accidentally e.g. pressed "ctrl-v" instead of "v" (I often instinctively copy half-finished posts into the clipboard). That seems like a pretty weird accident, but I'm pretty sure it was just user error rather than anything to do with the EA forum.

Comment by Liam_Donovan on Want to Save the World? Enter the Priesthood · 2019-07-13T07:01:41.387Z · EA · GW

This post seems to have become garbled when I tried to fix a typo; any idea how I can restore the original version?

Comment by Liam_Donovan on Want to Save the World? Enter the Priesthood · 2019-07-13T06:57:21.530Z · EA · GW

This doesn't seem like a great idea to me for two reasons:

1. The notion of explicitly manipulating one's beliefs about something as central as religion for non-truthseeking reasons seems very sketchy, especially when the core premise of EA relies on an accurate understanding of highly uncertain subjects.

2. Am I correct in saying the ultimate aim of this strategy is to shift religious groups' dogma from (what they believe to be) divinely revealed truth to [divinely revealed truth + random things EAs want]? I'm genuinely not sure if I interpreted the post correctly, but that seems like an unnecessarily adversarial move against a set of organized groups with largely benign goals.

Comment by Liam_Donovan on Want to Save the World? Enter the Priesthood · 2019-07-13T06:34:13.228Z · EA · GW

Yeah, I don't think I phrased my comment very clearly.

I was trying to say that, if the Christian conception of heaven/hell exists, then it is highly likely that an objective non-utilitarian morality exists. It shouldn't be surprising that continuing to use utilitarianism within an otherwise Christian framework yields garbage results! As you say, a Christian can still be an EA, for most relevant definitions of "be an EA".

Comment by Liam_Donovan on Want to Save the World? Enter the Priesthood · 2019-07-12T19:37:17.197Z · EA · GW

I'm fairly confident the Church does not endorse basing moral decisions on expected value analysis; that says absolutely nothing about the compatibility of Catholicism and EA. For example, someone with an unusually analytical mindset might see participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.

Comment by Liam_Donovan on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-08T23:16:13.563Z · EA · GW

This example of a potentially impactful and neglected climate change intervention seems like good evidence that EAs should put substantially more energy towards researching other such examples. In particular, I'm concerned that the neglect of climate change has more to do with the lack of philosophically attractive problems relative to e.g. AI risk, and less to do with marginal impact of working on the cause area.

Comment by Liam_Donovan on How to evaluate the impact of influencing governments vs direct work in a given cause area? · 2019-06-27T05:39:27.088Z · EA · GW

Great answer, thank you!

Do you know of any examples of the "direct work+" strategy working, especially for EA-recommended charities? The closest thing I can think of would be the GiveDirectly UBI trial; is that the sort of thing you had in mind?

Comment by Liam_Donovan on There's Lots More To Do · 2019-06-08T04:12:27.152Z · EA · GW

It seems like that question would interact weirdly with expectations of future income: as a college student I donate ~1% of expenses, but if I could only save one life, right now, I would probably try to take out a large, high-interest loan to donate a large sum. That depends on availability of loans, risk aversion, expectations of future income, etc., much more than it does on my moral values.
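
To illustrate why the financing terms dominate (the loan size, rate, and horizon below are placeholders, not real quotes):

```python
# Placeholder figures: a one-time donation funded by a high-interest loan,
# using simple compounding and ignoring the repayment schedule.
donation_now = 4_000      # hypothetical cost of one life-saving intervention
annual_rate = 0.15        # assumed interest rate on a high-rate personal loan
years = 5                 # assumed repayment horizon

total_repaid = donation_now * (1 + annual_rate) ** years
print(f"Future-income cost of donating ${donation_now:,} today: ${total_repaid:,.0f}")
# -> about $8,045
```

Whether that trade is worth making turns on loan availability, risk aversion, and expected future income far more than on one's underlying moral values, which is why the "how much would you give right now" framing doesn't cleanly reveal values.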

Comment by Liam_Donovan on Why we should be less productive. · 2019-05-17T10:07:27.438Z · EA · GW

-

Comment by Liam_Donovan on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-05-10T06:13:16.141Z · EA · GW

Isn't this essentially a reformulation of the common EA argument that the most high-impact ideas are likely to be "weird-sounding" or unintuitive? I think it's a strong point in favor of explicit modelling, but I want to avoid double-counting evidence if they are in fact similar arguments.

Comment by Liam_Donovan on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-05-10T06:08:29.345Z · EA · GW

A recent example of this happening might be EA LTF Fund grants to various organizations trying to improve societal epistemic rationality (e.g. by supporting prediction markets).

Comment by Liam_Donovan on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-05-10T06:06:25.513Z · EA · GW

Can you elaborate on which areas of EA might tend towards each extreme? Specific examples (as vague as needed) would be awesome too, but I understand if you can't give any.

Comment by Liam_Donovan on Why is the EA Hotel having trouble fundraising? · 2019-03-30T08:41:54.935Z · EA · GW

Does he still endorse the retraction? It's just idle curiosity on my part, but it wasn't clear from the comments.

Comment by Liam_Donovan on Why is the EA Hotel having trouble fundraising? · 2019-03-27T23:08:36.346Z · EA · GW

A few thoughts this post raised for me (not directed at OP specifically):

1. Does RAISE/the Hotel have a standardized way to measure the progress of people self-studying AI? If so, especially if it's been vetted by AI risk organizations, it seems like that would go a long way towards resolving this issue.

2. Does "ea organisations are unwilling to even endorse the hotel" refer to RAISE/Rethink Charity (very surprising & important evidence!), or other EA organizations without direct ties to the Hotel?

3. I would be curious what the marginal cost of adding a new resident is: if high, this would be a good reason to leave rooms unoccupied rather than funding "tragic" projects.

4. Strongly agreed: the EV post seemed like an overly complex toy model that was unlikely to predict real-world outcomes. I think high-level heuristics for evaluating impact would be much more useful/convincing (e.g. the framework laid out here)

5. In general, donors who take a "hits-based giving" approach to funding speculative projects in their personal network are likely to become associated with failed projects regardless of personal competence, so I don't think this is evidence against the case the EA Hotel makes. My relatively uninformed inside view is that the founder of the Kernel project should be associated with its failure, rather than Greg, and I think the outside view agrees.

6. I wonder how different the fundraising situation would be if it had started during the burst of initial enthusiasm/publicity surrounding the hotel?

Comment by Liam_Donovan on Apology · 2019-03-23T20:06:13.067Z · EA · GW

re signal boost: any particular reason why?

Comment by Liam_Donovan on PAF: Opioid Epidemic · 2019-03-15T05:49:51.189Z · EA · GW

Did the report consider increasing access to medical marijuana as an alternative to opioids? If so, what was the finding? (I didn't see any mention while skimming it.) My impression was that many leaders in communities affected by opioid abuse see access to medical marijuana as the most effective intervention. One (not particularly good) example

Comment by Liam_Donovan on The Importance of Truth-Oriented Discussions in EA · 2019-03-15T04:43:45.269Z · EA · GW

Are you saying there are groups who go around inflicting PR damage on generic communities they perceive as vulnerable, or that there are groups who are inclined to attack EA in particular, but will only do so if we are perceived as vulnerable (or something else I'm missing)? I'm having a hard time understanding the mechanism through which this occurs.

Comment by Liam_Donovan on Making discussions in EA groups inclusive · 2019-03-05T00:23:48.050Z · EA · GW

The law school example seems like weak evidence to me, since the topics mentioned are essential to practicing law, whereas most of the suggested "topics to avoid" are absolutely irrelevant to EA. Women who want to practice law are presumably willing to engage these topics as a necessary step towards achieving their goal. However, I don't see why women who want to effectively do good would be willing to (or expected to) engage with irrelevant arguments they find uncomfortable or toxic.

Comment by Liam_Donovan on Profiting-to-Give: harnessing EA talent with a new funding model · 2019-03-04T23:22:49.554Z · EA · GW

I like the idea of profiting-to-give as a way to strengthen the community and engage people outside of the limited number of direct work EA jobs; however, I don't see how an "EA certification" effectively accomplishes this goal.

I do think there would be a place for small EA-run businesses in fields with:

  • a lot of EAs
  • low barriers to entry
  • sharply diminishing returns to scale

Such a business might plausibly be able to donate at least as much money as its employees were previously donating individually, by virtue of its competitive success in the marketplace (i.e. without relying on EA branding or an EA customer base). By allowing EAs to work together for a common cause, it would also reduce value drift and improve morale.

More speculatively, it might improve recruitment of new EAs and reduce hiring costs for EA organizations by making it easier to find and evaluate committed candidates. If the business collectively decided how to donate its profits, it could also efficiently fulfill a function similar to donor lotteries, freeing up more money for medium-size grants. Lastly, by focusing solely on maximizing profit, "profiting-to-give" would avoid the pitfalls of social benefit companies Peter_Hurford mentions while providing fulfilling work to EtG EAs.

Comment by Liam_Donovan on Climate Change Is, In General, Not An Existential Risk · 2019-02-01T07:22:43.424Z · EA · GW

That's a good point, but I don't think my argument was brittle in this sense (perhaps it was poorly phrased). In general, my point is that climate change amplifies the probabilities of each step in many potential chains of catastrophic events. Crucially, these chains have promoted war/political instability in the past and are likely to in the future. That's not the same as saying that each link in a single untested causal chain is likely to happen, leading to a certain conclusion, which is my understanding of a "brittle argument".

On the other hand, I think it's fair to say that e.g. "Climate change was for sure the primary cause of the Syrian civil war" is a brittle argument.

Comment by Liam_Donovan on Climate Change Is, In General, Not An Existential Risk · 2019-02-01T07:06:31.915Z · EA · GW

I'd previously read that there was substantial evidence linking climate change --> extreme weather --> famine --> Syrian civil war (a major source of refugees). One example: https://journals.ametsoc.org/doi/10.1175/WCAS-D-13-00059.1 This paper claims the opposite though: https://www.sciencedirect.com/science/article/pii/S0962629816301822.

"The Syria case, the article finds, does not support ‘threat multiplier’ views of the impacts of climate change; to the contrary, we conclude, policymakers, commentators and scholars alike should exercise far greater caution when drawing such linkages or when securitising climate change."

I'll have to investigate more since I was highly confident of such a 'threat multiplier' view.

On your other two points, I expect the idea of anthropogenic global warming to continue to be associated with the elite; direct evidence of the climate changing is likely to convince people that climate change is real, but not necessarily that humans caused it. Concern over AGW is currently tied with various beliefs (including openness to immigration) and cultural markers predominantly shared by a subsection of the educated and affluent. I expect increasing inequality to calcify tribal barriers, which would make it very difficult to create widespread support for commonly proposed solutions to AGW.

PS: how do I create hyperlinks?

Comment by Liam_Donovan on Climate Change Is, In General, Not An Existential Risk · 2019-01-12T07:02:10.794Z · EA · GW

I don't think this is indirect and unlikely at all; in fact, I think we are seeing this effect already. In particular, some of the 2nd-order effects of climate change (such as natural catastrophe --> famine --> war/refugees) are already warping politics in the developed world in ways that will make it more difficult to fight climate change (e.g. strengthening politicians who believe climate change is a myth). As the effects of climate change intensify, so will its knock-on effects on other x-risks.

In particular, a plausible path is: climate change immiserates the poor/working class + elite attempts to stop climate change hurt the working class (e.g. the war on coal) --> even higher inequality --> broad-based resentment against elite initiatives. X-risk reduction is likely to be one of those elite initiatives, simply because most X-risks are unintuitive and require time/energy/specialized knowledge to evaluate, which few non-elites have.