How to disclose a new x-risk? 2022-08-24T01:35:37.419Z
Why your charitable giving should be sustainable 2022-05-09T19:07:33.837Z
A Model of Hits-Based Giving 2022-03-25T05:09:37.725Z
Predicting for Good: Charity Prediction Markets 2022-03-22T17:44:29.507Z
The Unilateralist’s Gift 2021-10-20T20:36:17.929Z
Give Collectively 2021-09-18T17:14:30.869Z
Give Sustainably 2021-08-28T17:57:43.864Z
Charity Prediction Markets 2021-03-16T05:21:25.354Z


Comment by harsimony on How to disclose a new x-risk? · 2022-08-24T17:10:06.250Z · EA · GW

Ok, and any advice for reaching out to trusted-but-less-prestigious experts? It seems unlikely that reaching out to e.g. Kevin Esvelt will generate a response!

Comment by harsimony on Cause area: Short-sleeper genes · 2022-08-10T18:45:01.166Z · EA · GW

Great post, I really appreciate an in-depth review of research on reducing sleep need.

I wrote some arguments for why reducing sleep is important here:

I also submitted a cause exploration application:

Your post includes substantially more research than mine, and I would encourage you to reformat it and submit it to OpenPhil's Cause Exploration Prize. I'm happy to help with edits or to combine our efforts!

Comment by harsimony on Impact markets may incentivize predictably net-negative projects · 2022-06-21T20:01:55.408Z · EA · GW

This kind of thing could be made more sophisticated by making fines proportional to the harm done, requiring more collateral for riskier projects, or setting up a system to short sell different projects. But simpler seems better, at least initially.

Have you thought about whether it could work with a more free market, and not necessarily knowing all of the funders in advance?

Yeah, that's a harder case. Some ideas:

  • People undertaking projects could still post collateral on their own (or pre-commit to accepting a fine under certain conditions). This kind of behavior could be rewarded by retro-funders giving these projects more consideration and the act of posting collateral does constitute a costly signal of quality. But that still requires some pre-commitments from retro funders or a general consensus from the community.

  • If contributors undertake multiple projects, it should be possible to punish them after the fact by docking some of their rewards from other projects. For example, if someone participates in one beneficial project and one harmful project, their retroactive funding rewards from the beneficial project can be reduced because of their participation in the harmful project. Unfortunately, this still requires some sort of pre-commitment from funders.

Comment by harsimony on Impact markets may incentivize predictably net-negative projects · 2022-06-21T19:01:04.873Z · EA · GW

I proposed a simple solution to the problem:

  1. For a project to be considered for retroactive funding, participants must post a specific amount of money as collateral.
  2. If a retroactive funder determines that the project was net-negative, they can burn the collateral to punish the people who participated in it. Otherwise, the project receives its collateral back.

This eliminates the "no downside" problem of retroactive funding and makes some net-negative projects unprofitable.

The amount of collateral can be chosen adaptively. Start with a small amount and increase it slowly until the number of net-negative projects is low enough. Note that setting the collateral too high can discourage net-positive but risky projects.
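The two-step scheme and the adaptive collateral rule above can be sketched in code. This is a hypothetical illustration: the function names, the 5% target rate, and the multiplicative step are my assumptions, not part of any real system.

```python
# Hypothetical sketch of the collateral scheme described above.
# The names, target rate, and step size are illustrative assumptions.

def settle_project(collateral: float, reward: float, net_negative: bool) -> float:
    """Return the participant's net payout after retroactive evaluation.

    If the retro funder judges the project net-negative, the collateral
    is burned and no reward is paid; otherwise the collateral is
    returned alongside any retroactive funding.
    """
    if net_negative:
        return -collateral  # collateral burned: the "no downside" problem is gone
    return reward  # collateral returned, so the net payout is just the reward


def adjust_collateral(collateral: float, net_negative_rate: float,
                      target_rate: float = 0.05, step: float = 1.1) -> float:
    """Adaptively tune the required collateral.

    Start small and raise the requirement while the observed rate of
    net-negative projects exceeds the target; lower it otherwise, since
    collateral that is too high discourages net-positive but risky projects.
    """
    if net_negative_rate > target_rate:
        return collateral * step
    return collateral / step
```

For example, a participant posting 100 of collateral on a project later judged net-negative loses that 100, while a benign project simply collects its retroactive reward.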

Comment by harsimony on Against immortality? · 2022-04-28T20:43:13.646Z · EA · GW

I make a slightly different anti-immortality case here:

Summary: At a steady state of population, extended lifespan means taking resources away from other potential people. Technology for extended life may not be ethical in this case. Because we are not in steady state, this does not argue against working on life extension technology today.

Comment by harsimony on Stupid Reasons to Have More Kids · 2022-04-18T06:49:16.330Z · EA · GW

One reason people make this claim is that many models of economic growth depend on population growth. Like you noted, there are lots of other ways to grow the economy by making each individual more productive (lower poverty, more education, automating tasks, more focus on research, etc.).

But crucially, all of these measures have diminishing returns. Suppose that in the future everyone on Earth has a PhD, is highly productive, and works in an important research field. At that point, the only way to keep growing the economy is through population growth, since everything else has already been maxed out. This is why Chad Jones argues that the long-run growth rate is ultimately limited by the population growth rate:

At least, that's what the models say. Jones himself notes that AI might change these dynamics (I suppose the population growth of AIs would become what matters if they replace human labor?).
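A rough sketch of the model behind this claim (my paraphrase of Jones-style semi-endogenous growth, with illustrative notation): new ideas are produced by researchers, with diminishing returns to the existing stock of ideas, so on a balanced growth path the growth rate of ideas is proportional to population growth:

```latex
\dot{A} = \delta L^{\lambda} A^{\phi}, \qquad \phi < 1
\quad\Longrightarrow\quad
g_A \equiv \frac{\dot{A}}{A} = \frac{\lambda n}{1 - \phi}
```

Here $L$ is the (research) population, $n$ its growth rate, and $\phi < 1$ captures diminishing returns to the idea stock. If $n = 0$, idea growth, and hence long-run per-capita growth, tends to zero.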

Comment by harsimony on It's ok to leave EA · 2022-01-28T20:26:24.688Z · EA · GW

Thanks for writing this. Great to see people encouraging a sustainable approach to EA!

I want to tell you that taking care of yourself is what’s best for impact. But is it?

I claim that this is true:

  • Finding personal fulfillment is a positive result in and of itself.
  • It's important to prioritize personal needs, otherwise you will not be in a good position to help others (family, friends, charity, etc.).
  • Ensuring one's relationship with EA is sustainable can actually lead to more impact over the long run (though this shouldn't be people's primary goal; personal wellbeing comes first).
  • Encouraging a sustainable culture can make EA more welcoming to others.

Comment by harsimony on Prediction Bank: A way around current prediction market regulations? · 2022-01-26T04:04:50.158Z · EA · GW

I think another possible route around gambling restrictions to prediction markets is to ensure all proceeds go to charity, but the winners get to choose which charity to donate to. I wrote about this more here:

Comment by harsimony on 13 Very Different Stances on AGI · 2021-12-29T22:40:25.717Z · EA · GW

I have noticed that few people hold the view that we can readily reduce AI-risk. Either they are very pessimistic (they see no viable solutions so reducing risk is hard) or they are optimistic (they assume AI will be aligned by default, so trying to improve the situation is superfluous).

Either way, this would argue against alignment research, since alignment work would not produce much change.

Strategically, it's best to assume that alignment work does reduce AI risk, since the cost of doing too much alignment work is small compared to doing too little and causing a catastrophe.

Comment by harsimony on Do you know any research comparing the effectiveness of indirect and direct democracy? · 2021-11-01T19:18:13.701Z · EA · GW

Though I am not super familiar with the research, it seems that, in general, more indirect democracy functions better because individual voters have little incentive to cast informed votes, whereas representatives are incentivized to make informed decisions on voters' behalf.

I think the book 10% Less Democracy can point you to relevant research on this topic. It was discussed briefly on MR here.

You may also want to check out Caplan's The Myth of the Rational Voter for research along similar lines.

Comment by harsimony on On the assessment of volcanic eruptions as global catastrophic or existential risks · 2021-10-13T19:50:11.808Z · EA · GW

Great post!

To reiterate what AppliedDivinityStudies said, I would love to hear more about proposed solutions to this problem. For example, what do you think of this paper on preventing supervolcanic eruptions?

Interventions that may prevent or mollify supervolcanic eruptions

Comment by harsimony on Give Collectively · 2021-09-19T23:11:44.447Z · EA · GW

Of course, EA funds can do all of these things, and I appreciate the work they are doing.

I think it is important to be explicit about the structure of EA funds, meta-charities, and charitable foundations: they typically involve pooling money from many donors and putting funding decisions in the hands of a few people. This is not a criticism! It makes a lot of sense to turn these decisions over to knowledgeable, committed specialists in the EA community. This approach likely improves the impact of people's donations over the counterfactual where people give directly to charities without considering how others are donating.

While I appreciate this system, I don't see why we shouldn't at least consider other systems of collective donation. It seems worthwhile to explore other approaches before settling on one specific model of collective giving.

Also, it seems like you have more faith than me in the collective wisdom of many non-experts, compared to a team of experts whose job is to work on the questions full-time.

Under the right circumstances, many non-experts can and do outperform experts. Tetlock's Superforecasting and prediction markets are good examples of this. That said, I am highly uncertain whether these conditions hold for charitable donation, so experimentation with different funding models seems valuable.

Comment by harsimony on Give Collectively · 2021-09-19T05:53:16.543Z · EA · GW

I agree that the EA funds (and meta-charities like Givewell), are great opportunities to give and can help balance the flow of donations going to different charities. But I don't think that these funds have entirely solved the collective action problem in charitable giving. Rather, they aggregate money from many donors and turn over funding decisions to a handful of experts. These experts are doing great work, and I really respect them, but it doesn't hurt to consider how we might do things even better!

If we really did have a system for small donors to coordinate their giving like large donors, things would look quite different:

  • Collections of small donors would be able to fund specific research projects, found new charitable organizations, and exert significant control over the day-to-day activities of these organizations.

  • Collections of donors would be able to work with mega-donors, governments, and charitable organizations to pursue much larger projects.

  • Collections of small donors would be able to deliberate amongst themselves and make funding decisions based on their combined knowledge.

Charities and EA funds do this in a roundabout way by acting as representatives for many small donors, but this isn't the only way to organize giving. What about a Kickstarter for EA research projects? Or a charitable fund whose managers are elected by donors? Or a prediction market on how impactful different interventions are? I'm not claiming that these ideas are going to be better than the current instantiation of EA funds, but I want to encourage exploration and experimentation before we settle on these as the only solution to collective donation.

Comment by harsimony on AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA · 2021-08-20T05:23:55.571Z · EA · GW

Which of your writings (including things like blog posts) do you consider most important for making the world a better place? Assuming many people agreed to deeply consider your arguments on one topic, what would you have them read?

Comment by harsimony on Give in Public Beta is live · 2021-08-02T23:13:55.201Z · EA · GW

Wonderful idea, it looks great so far.

I appreciate that the list of charities one can donate to is relatively restricted since this prevents people from publicly donating to highly political charities for signalling purposes.

I also like that there is a dashboard showing how your donations are being spent.

One thing I find a little strange is the "lives saved" total (whereas the "CO2 Reduced" total seems perfectly normal to me). I don't have a good reason for this; it's just a personal feeling. Perhaps instead show the total spent, or the fraction spent on different cause areas, rather than asserting the overall impact of the donations?