Case Rates to Sequencing Reads 2022-09-21T02:00:00.158Z
Hiring Programmers in Academia 2022-07-24T20:20:00.248Z
Spending Update 2022 2022-07-19T14:10:00.259Z
Passing Up Pay 2022-07-13T14:10:00.157Z
Leaving Google, Joining the Nucleic Acid Observatory 2022-06-11T01:37:24.910Z
Revisiting "Why Global Poverty" 2022-06-01T20:20:00.193Z
Increasing Demandingness in EA 2022-04-29T01:20:00.132Z
US Taxes: Adjust Withholding When Donating? 2022-04-12T15:50:00.164Z
Responsible Transparency Consumption 2022-03-11T21:34:52.137Z
EA Dinner Covid Logistics 2021-12-11T21:50:00.669Z
Issues with Giving Multiplier 2021-09-29T21:40:00.637Z
What should "counterfactual donation" mean? 2021-09-23T12:59:09.842Z
GiveWell Donation Matching 2021-09-21T22:50:00.545Z
Limits of Giving 2021-03-04T02:20:00.618Z
When I left Google 2021-02-28T21:40:00.565Z
Giving Tuesday 2020 2020-11-30T22:30:00.575Z
EA Relationship Status 2020-09-19T01:50:00.599Z
Leaving Things For Others 2020-04-12T11:50:00.602Z
Why I'm Not Vegan 2020-04-09T13:00:00.683Z
Candy for Nets 2019-09-29T11:11:51.289Z
Long-term Donation Bunching? 2019-09-27T13:09:09.881Z
Effective Altruism and Everyday Decisions 2019-09-16T19:39:59.370Z
Answering some questions about EA 2019-09-12T17:44:47.922Z
There's Lots More To Do 2019-05-29T19:58:55.470Z
Value of Working in Ads? 2019-04-09T13:06:53.969Z
Simultaneous Shortage and Oversupply 2019-01-26T19:35:24.383Z
College and Earning to Give 2018-12-16T20:23:26.147Z
2018 ACE Recommendations 2018-11-26T18:50:57.764Z
2018 GiveWell Recommendations 2018-11-26T18:50:22.620Z
Donation Plans for 2017 2017-12-23T22:25:49.690Z
Estimating the Value of Mobile Money 2016-12-21T13:58:13.662Z
[meta] New mobile display 2016-12-05T15:21:22.121Z
Concerns with Intentional Insights 2016-10-24T12:04:22.501Z
Scientific Charity Movement 2016-07-23T14:33:38.192Z
Independent re-analysis of MFA veg ads RCT data 2016-02-20T04:48:29.296Z
The Counterfactual Validity of Donation Matching 2015-03-02T22:02:40.295Z
The Privilege of Earning To Give 2015-01-14T01:59:51.446Z
Effective Altruism at Your Work 2014-11-12T14:06:39.089Z
Lawyering to Give 2014-09-25T12:19:29.251Z
Disability Weights 2014-09-11T21:34:58.961Z
Altruism isn't about sacrifice 2013-09-06T04:00:13.000Z
Personal consumption changes as charity 2013-07-31T04:00:49.000Z
Haiti and disaster relief 2013-07-19T04:00:57.000Z
Keeping choices donation neutral 2013-06-28T04:00:07.000Z


Comment by Jeff Kaufman (Jeff_Kaufman) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-30T02:10:57.334Z · EA · GW

Expanded on this:

Comment by Jeff Kaufman (Jeff_Kaufman) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-29T11:49:54.556Z · EA · GW

I think it's "mostly avoidable" in the sense that you can avoid the majority of it, but for most parents and kids not in the sense that you can get it down to nothing?

If you are not thoughtful about it you can end up in a place where both parents have sleep that is massively disrupted, at which point it becomes pretty hard to actually address the problem because you're so tired and thinking so poorly.

Important components in avoiding sleep deprivation:

Comment by Jeff Kaufman (Jeff_Kaufman) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T19:40:04.791Z · EA · GW

Jeff Kaufman, do you have kids of your own that makes you more confident in your statement?

Three kids: 8y, 6y, and 15m

I don't know if it is possible to tag someone in a comment to notify them they have been mentioned

Happened to see it ;)

Comment by Jeff Kaufman (Jeff_Kaufman) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T00:33:44.584Z · EA · GW

do you think there is a good pronatalist argument to be made that an EA that doesn't feel like they want kids should still regardless have kids?

I don't think anyone who doesn't want to have kids should have them. It's a huge amount of work, and if you're not excited about it it seems likely to make you miserable.

Comment by Jeff Kaufman (Jeff_Kaufman) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T00:30:48.706Z · EA · GW

One confounding factor here is that the children that you might potentially adopt are pretty different from the children you might have biologically. Most adoptees have gone through some form of trauma, they are rarely newborns, they often had worse prenatal environments, their biological parents probably wouldn't enjoy the forum, etc.

I think if somehow one of my children had been swapped at birth with a child from similar parents it probably wouldn't have much of an impact on what raising them would be like, but that's not really what we're talking about?

(I do also think it's cute the various more specific ways our kids resemble us, but I agree this is not a major contribution to the experience of parenting.)

Comment by Jeff Kaufman (Jeff_Kaufman) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T00:14:36.074Z · EA · GW

On (1), another consideration you don't mention is that having kids earlier means more years of overlap with your kids and, potentially, grandkids: you'd get to see more of their lives, which is something people usually find pretty rewarding.

Comment by Jeff Kaufman (Jeff_Kaufman) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-27T21:01:12.840Z · EA · GW

Do you think never celebrating holidays such as Christmas or birthdays would have a strong effect on the psychological development of our children? My partner and I intend to avoid celebrating any conventional holidays, except for Halloween, and to celebrate the Solstices and Equinoxes instead.

Would you be up for saying more about why you don't want to celebrate the conventional holidays? Your kids are likely going to want to celebrate the things their friends and extended family are celebrating, and unless you have a strong reason not to, might as well make them happy?

For example, despite being atheists we celebrate Christmas, Easter, Hanukkah, and Passover. Not in an especially religious way, just things like dyeing eggs and looking for them are fun.

Comment by Jeff Kaufman (Jeff_Kaufman) on Ask Me Anything about parenting as an Effective Altruist · 2022-09-27T20:52:49.568Z · EA · GW

For men reading this and thinking "I want to be an equal partner in raising kids, but I know a lot of men who intellectually want this don't end up doing their share; what should I do", you might be interested in my Equal Parenting Advice for Dads

Comment by Jeff Kaufman (Jeff_Kaufman) on Case Rates to Sequencing Reads · 2022-09-23T13:25:58.290Z · EA · GW

Finally got a chance to finish reading the paper! I don't entirely understand it, though:

  • I think they're modeling prevalence as initially constant and then sharply transitioning to an increase of 5% per year. In thinking about infections in the cases I'm familiar with (people and wild populations) this sounds very unlike real spread, which is exponential initially (or the sum of an exponential and a constant if you're observing something new growing to exceed some background). Is their model more realistic for farmed animals, or is this just highly simplified?

  • It looks to me like maybe the statistical method they're using relies on this clear transition between a linear constant regime and a (very nearly) linear increasing regime, and so if the model is overly simplified (above) then their results will be too optimistic about detection. (See Figure 2 on p4 of the supplementary materials.)

  • They talk about being able to detect entirely novel antibiotic resistance genes (scenario 3), but I don't see anything in the paper about how they know to track a particular novel gene to see if it's an antibiotic resistance one? Is the idea that once you do realize you care about a gene you can go back and re-analyze the sequencing data you've been collecting to learn how quickly it has been spreading?
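To illustrate the contrast in the first two bullets, here's a toy comparison (my own made-up parameters, not the paper's) between a flat-then-linear prevalence model and an exponential outbreak on a constant background:

```python
import numpy as np

days = np.arange(365)
onset = 180          # hypothetical day the new thing starts spreading
background = 0.01    # hypothetical baseline prevalence

# Flat background followed by a linear ramp (my reading of the paper's model)
linear_model = background + np.where(days < onset, 0.0,
                                     0.05 / 365 * (days - onset))

# Constant background plus a small exponential outbreak (more like real spread)
outbreak = background + np.where(days < onset, 0.0,
                                 1e-5 * np.exp(0.05 * (days - onset)))
```

The exponential curve starts out far below the linear ramp and then overtakes it, so a detection method tuned to a sharp linear transition could be slow to notice realistic early spread.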

how you could identify a novel pathogen

I've been working on simulating exponential growth detection: count how many times each k-mer (I've been using 40-mers) occurs on each day, and then run Poisson regression to see whether this looks like an exponential increase and how good the fit is. It works, though as discussed in this post the signal for novel viruses is likely to be extremely weak and so enough sequencing gets expensive.
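A minimal sketch of the approach described above (my own toy illustration, not the NAO's actual pipeline; `kmer_counts` and `fit_poisson_growth` are hypothetical names, and the hand-rolled IRLS fit stands in for whatever regression library one would really use):

```python
import numpy as np
from collections import Counter

def kmer_counts(reads, k=40):
    """Count how many times each k-mer appears in a day's reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def fit_poisson_growth(daily_counts, iters=25):
    """Poisson regression of one k-mer's daily counts against time:
    fits log(lambda_t) = a + b*t by IRLS and returns (a, b).
    A large, well-fitting b suggests exponential growth."""
    y = np.asarray(daily_counts, dtype=float)
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    beta = np.array([np.log(y.mean() + 1e-9), 0.0])
    for _ in range(iters):
        mu = np.exp(X @ beta)          # current mean estimates
        z = X @ beta + (y - mu) / mu   # working response
        W = mu                         # Poisson IRLS weights
        beta = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (W * z))
    return beta
```

In practice you'd run this over every k-mer whose counts are rising, then rank by growth rate and goodness of fit.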

and know it's a pathogen

Yes, that's also an important part of the problem. In some cases I think it would be clear how much of a problem it was simply from looking at it and seeing how close various parts are to matching known things (which could be automated if we're getting lots of them). But yes, in others it would be pretty hard to judge how seriously to take it.

if you don't know if something is a human pathogen

That one seems manageable: once we recognize that something is spreading in particular areas we could use more targeted and cheaper methods, like random sampling of hospital arrivals and qPCR.

how it spreads

I think how it spreads is probably smaller than the other concerns. Yes, if we had more details we could make a more targeted response, but general responses like lowering thresholds for wearing PPE, ramping up PPE production, ramping up testing ability and developing cheap targeted tests, reducing some forms of non-essential activity, etc would still make sense.

wait to see if clinical cases start showing up in hospitals, it's not much of an early warning system.

As above, I think there are a bunch of things you can do aside from waiting to see if people show up in hospitals, but even then it's much cheaper to check for in hospitals if you know specifically what you're looking for.

An alternative is to look at airplane waste...

Yes, I think airplane waste is very promising, though the statistics are likely much trickier because of the small numbers (small numbers of fliers, small number of fliers using the in-flight toilets). I'd like to see exploration of both (and also sentinel populations) to see how they compare.

Plus, depending on your sampling system, you may not have plane-level data.

(Writing for myself, not the NAO)

Comment by Jeff Kaufman (Jeff_Kaufman) on The Next EA Global Should Have Safe Air · 2022-09-21T13:50:50.888Z · EA · GW

where people talked at normal volume

Why are you modeling people on the flight as, on average, talking? On flights I've been on most people are watching videos, reading, sleeping, eating, etc and only a few people traveling together talk.

Microcovid gives more details on how they're modeling airplanes:

Comment by Jeff Kaufman (Jeff_Kaufman) on The Next EA Global Should Have Safe Air · 2022-09-21T10:48:33.585Z · EA · GW

flights is one of the highest risk activity for getting sick via air-borne diseases


Comment by Jeff Kaufman (Jeff_Kaufman) on Open EA Global · 2022-09-02T15:43:06.604Z · EA · GW

I want to clarify that this isn’t how our admissions process works, and neither you nor anyone else we accept would be bumping anyone out of a spot. We simply have a specific bar for admissions and everyone above that bar gets admitted.

This doesn't seem right to me? For example:

  • In setting the bar I expect you consider, among other things, the desired conference size. For example, if you got a lot of "this conference felt too big" feedback, you'd probably respond by raising the bar for the next one.

  • If someone applies late, I would expect whether you're able to make room for them would depend on whether you have capacity.

Comment by Jeff Kaufman (Jeff_Kaufman) on Toby Ord’s The Scourge, Reviewed · 2022-08-31T00:22:10.862Z · EA · GW

I think Ord's argument is something like: "If you have an EA view of the world, and you think humans matter equally from conception, then natural embryo loss is very important. That people don't take it seriously implies they don't really think moral worth begins at conception". And maybe this is equivalent to what you're saying (it's a bit hard to tell with all the humor) but where his argument falls down is that people with this view rarely have an EA outlook.

Comment by Jeff Kaufman (Jeff_Kaufman) on Toby Ord’s The Scourge, Reviewed · 2022-08-31T00:17:13.625Z · EA · GW

> is Ord's graph real or imagined?

Real. P4 of

Comment by Jeff Kaufman (Jeff_Kaufman) on "Call off the EAs": Too Much Advertising? · 2022-08-22T13:27:47.207Z · EA · GW

YouTube does, but I don't think this campaign is using them? The EA plugs I've seen on YouTube have all been the sponsor directly.

Comment by Jeff Kaufman (Jeff_Kaufman) on "Call off the EAs": Too Much Advertising? · 2022-08-21T10:54:03.792Z · EA · GW

Good point! This is only for podcasts, not YouTube, right?

Comment by Jeff Kaufman (Jeff_Kaufman) on "Call off the EAs": Too Much Advertising? · 2022-08-20T10:44:51.426Z · EA · GW

The kind of ads we're talking about here (podcast and YouTube sponsorship) don't go through the fancy highly optimized real-time bidding process: they're embedded into the content at the time it's originally created. If someone happens to follow several different people who all accept the same sponsorship, they get a lot of the same ad. Regardless of whether annoyance is a solved problem in the more typical case (I'm not convinced) it's definitely not a solved problem for these channels.

Comment by Jeff Kaufman (Jeff_Kaufman) on Prioritizing x-risks may require caring about future people · 2022-08-15T16:03:48.627Z · EA · GW

Probably a very small share of the targeted animals, and maybe none, exist at the time a project is started or a donation is made: farmed chickens only live 40 days to 2 years, and the animals that benefit would normally be ones born and raised into different systems, not animals already alive at the time of reform. They aren't going to move live egg-laying hens out of cages into cage-free systems to keep farming them; it's the next generation that will just never be farmed in cages at all.

Many animal interventions are also about trying to reduce the number of farmed animals that will exist in the future: averted lives. If you only care about currently living beings, that has no value.

Comment by Jeff Kaufman (Jeff_Kaufman) on Doing good is a privilege. This needs to change if we want to do good long-term. · 2022-08-14T18:04:41.943Z · EA · GW

Why would you expect this?

If switching to blind hiring reduces the diversity of your hiring, that indicates you've likely been (consciously or unconsciously) counting underrepresented group membership in candidates' favor rather than against them.

Comment by Jeff Kaufman (Jeff_Kaufman) on Doing good is a privilege. This needs to change if we want to do good long-term. · 2022-08-06T19:19:05.093Z · EA · GW

I do, but none of them have been willing to talk about it publicly. Maybe because it would imply that their hiring bar for employees that would increase their overall diversity is intentionally slightly lower?

Comment by Jeff Kaufman (Jeff_Kaufman) on Doing good is a privilege. This needs to change if we want to do good long-term. · 2022-08-04T02:45:18.162Z · EA · GW

We need to make sure that job applications are assessed blindly at most stages in the application to avoid bias.

My understanding is that several places that have tried blinding found that this decreased the diversity of their hiring. Something to be cautious about!

Comment by Jeff Kaufman (Jeff_Kaufman) on Passing Up Pay · 2022-07-13T23:33:27.947Z · EA · GW

The bit that I would expect to be tricky is trying to let employees choose how much of their pay they receive through this kind of "choose where we donate money". If you're just trying to let every employee direct a certain amount of funding, that's already fine.

Comment by Jeff Kaufman (Jeff_Kaufman) on Passing Up Pay · 2022-07-13T23:31:44.792Z · EA · GW

Since in effect this frees up funding for the kind of funder that funds the NAO, and those funders tend to be diversified, there is still some amount of diversification? See the complicated bullet on funding replaceability.

Comment by Jeff Kaufman (Jeff_Kaufman) on Passing Up Pay · 2022-07-13T15:36:06.269Z · EA · GW

I suspect there are some tax issues there? In the US employer donation matching is common, and my previous employer gave all of its employees a sum they could choose where to donate, so that is presumably also fine. I'm less sure you can do what you're proposing, however, without it counting as income.

Comment by Jeff Kaufman (Jeff_Kaufman) on Meat Externalities · 2022-07-12T11:24:36.190Z · EA · GW

eliminating the production of meat required for one individual’s diet for one year confers social welfare benefits equal to the benefits of increasing annual global output by more than $100,000.


If you’re neither vegetarian nor offsetting, here’s hoping that you produce enough positive externalities to get your life (overall) back into the green!

This sounds like a really high bar, but it's much less than it appeared to me at first read. It's not $100k to the world's poorest or $100k to the marginal EA charity, but something closer to money to everyone equally, right? Back of the envelope, world GDP per capita is $10k/y and GiveDirectly recipients are living on ~$300/y. Figuring that additional money is valuable in inverse proportion to what you already have, GiveDirectly receiving money is ~33x as valuable as increasing GDP. GiveWell thinks they can find giving opportunities at 5x to 8x GiveDirectly, like AMF, bringing marginal global poverty spending to ~166x. That brings the initial $100k down to ~$600. I think almost all EAs are doing something at least as beneficial as donating $600/y to AMF!

(If the $100k is actually money in proportion to what people already have, then it's even more dramatic)
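The back-of-envelope above, written out (all figures are the rough approximations from the comment):

```python
# Back-of-envelope: what donation is equivalent to the $100k
# global-output figure, under log-utility-style assumptions?
world_gdp_per_capita = 10_000   # $/y, approximate
givedirectly_income = 300       # $/y, approximate recipient consumption

# Money is ~33x as valuable to a GiveDirectly recipient as to the
# world-average person (valuing money in inverse proportion to income)
gd_multiplier = world_gdp_per_capita / givedirectly_income

# GiveWell top charities at ~5x GiveDirectly (low end of 5x-8x)
marginal_multiplier = gd_multiplier * 5   # ~166x

equivalent_donation = 100_000 / marginal_multiplier   # ~$600
```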

Comment by Jeff Kaufman (Jeff_Kaufman) on Meat Externalities · 2022-07-12T02:22:13.713Z · EA · GW

While this particular number is highly sensitive to the baseline parameters of the model, the broader conclusion that animal welfare costs completely swamp the climate costs of eating meat turns out to be almost unavoidable once you grant that factory-farmed animal lives are net-negative.

This sort of question depends enormously on the parameters, and I'm not convinced that it gets you all the way to "completely swamp"? The paper isn't easy to read, but I think it's valuing suffering equally whether experienced by humans or animals? If you instead think animals matter much less than humans (ex), which is a common view, that would bring the cost down well below $100k.

Comment by Jeff Kaufman (Jeff_Kaufman) on The Future Might Not Be So Great · 2022-07-03T17:06:10.526Z · EA · GW

I think that's a strong reason for people other than Jacy to work on this topic.

Comment by Jeff Kaufman (Jeff_Kaufman) on The Future Might Not Be So Great · 2022-07-03T11:58:12.354Z · EA · GW

We don't have any centralized or formal way of kicking people out of EA. Instead, the closest we have, in cases where someone has done things that are especially egregious, is making sure that everyone who interacts with them is aware. Summarizing the situation in the comments here, on Jacy's first EA forum post in 3 years (Apology, 2019-03), accomplishes that much more than posting in the open thread.

This is a threaded discussion, so other aspects of the post are still open to anyone interested. Personally, I don't think Jacy should be in the EA movement and won't be engaging in any of the threads below.

Comment by Jeff Kaufman (Jeff_Kaufman) on Future Fund June 2022 Update · 2022-07-03T02:12:11.288Z · EA · GW

My guesses:

  1. Regranting is intended as a way to let people with local knowledge apply it to directing funds. This is different from just deputizing grantmakers.

  2. If you made the list public I'd expect the regranters to be overwhelmed by people seeking grants, and generally find it pretty frustrating. For example, would your next step be to send emails to each of those addresses? ;)

Comment by Jeff Kaufman (Jeff_Kaufman) on Leaving Google, Joining the Nucleic Acid Observatory · 2022-06-20T13:04:52.525Z · EA · GW

I suspect my knowledge isn't that useful here? It's about how the infrastructure works at a technical level and how to make changes to the platform, which is a pretty different set of skills from how to run good ad campaigns.

Comment by Jeff Kaufman (Jeff_Kaufman) on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-19T01:50:23.931Z · EA · GW

working in a cause area that does not meet the 1000x threshold currently set by GiveWell top charities

Does "1000x" refer to something in particular, or are you just saying that the GiveWell top charities set a high bar?

Comment by Jeff Kaufman (Jeff_Kaufman) on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-19T01:49:35.890Z · EA · GW

How about:

it seems plausible to me that this relative lack of quantitative inclination played a role in Open Philanthropy making comparatively suboptimal grants in the criminal justice space

I read this as a formal and softened way of saying "Chloe made avoidably bad grants because she wouldn't do the math". Different people will interpret the softening differently: it can come across either as "hey maybe this could have been a piece of what happened?" or "this is totally what I think happened, but if I say it bluntly that would be rude".

Comment by Jeff Kaufman (Jeff_Kaufman) on Leaving Google, Joining the Nucleic Acid Observatory · 2022-06-17T21:12:41.355Z · EA · GW

Yes: we've pledged to donate 30%, but will do 50% if we can still manage it

Comment by Jeff Kaufman (Jeff_Kaufman) on Sleep: effective ways to improve it · 2022-06-16T12:31:03.982Z · EA · GW

If you want to absorb 300g/day of CO2 you need the plants to grow by ~300g/day. Which is a lot!

(It's only rough: carbon is 12/(12+2*16) of CO2 by mass, but when absorbing carbon plants also take in water, which goes the other way)
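The molar-mass arithmetic behind that parenthetical (C = 12 g/mol, O = 16 g/mol):

```python
# What fraction of CO2's mass is carbon, and how much carbon is in
# 300 g/day of absorbed CO2?
C, O = 12.0, 16.0
co2_molar_mass = C + 2 * O             # 44 g/mol
carbon_fraction = C / co2_molar_mass   # ~0.27
carbon_in_300g_co2 = 300 * carbon_fraction   # ~82 g of carbon per day
```

So only ~82 g of the 300 g is carbon, but plant tissue is mostly carbohydrate plus water, which is why the required growth still works out to something on the order of the CO2 mass.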

Comment by Jeff Kaufman (Jeff_Kaufman) on Apply to join SHELTER Weekend this August · 2022-06-15T20:09:17.362Z · EA · GW

It should be

Comment by Jeff Kaufman (Jeff_Kaufman) on The dangers of high salaries within EA organisations · 2022-06-15T11:20:35.518Z · EA · GW

In the US, the main tax benefits of employer donation matching over giving employees more money that they can donate are:

  • If you want to deduct donations, you can't take the standard deduction ($13k in 2022 for an individual). If you don't have any other deductions that you would be itemizing, this means the first $13k you donate is effectively taxed as if you had kept it.

  • At the federal level, only income tax allows deducting donations. Payroll taxes are pretty much just proportional to pay (~15%). Half of payroll taxes are paid by the employer, which makes this effect twice what it appears when looking at your paystub.

  • At the state level, most don't allow deducting donations, and the states that do typically have very low caps.

For example, last year my family paid $89k in federal income tax, $39k in state income tax (MA), and $35k in payroll taxes (the employers paid another $35k), for a total of $197k in taxes on $780k of income ($815k cost to employer counting payroll taxes) and $400k of donations. If our employers had instead let us direct that $400k and paid us $380k only, we would have paid approximately $75k in federal income tax, $20k in state income tax, and $17k in payroll taxes (with the employers paying another $17k). This saves $65k (8% of total pay, 17% of post-donation pay, 35% of post-donation post-tax pay) which could be donated.
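A toy sketch of the first two bullets, with hypothetical round numbers (single filer, flat 24% marginal rate, 15.3% combined payroll tax; real returns are much more complicated, and the function names are mine):

```python
STD_DEDUCTION = 13_000  # approximate 2022 standard deduction, individual

def extra_income_tax_vs_employer_giving(donation, marginal_rate=0.24):
    """Income tax paid on donated dollars that an employer-directed
    donation would have avoided entirely."""
    # Itemizing means giving up the standard deduction, so only the
    # amount above it actually reduces taxable income.
    deductible = max(0.0, donation - STD_DEDUCTION)
    return (donation - deductible) * marginal_rate

def payroll_tax_on(donation, payroll_rate=0.153):
    """Payroll tax (employee + employer halves) on donated dollars,
    which charitable deductions never recover."""
    return donation * payroll_rate
```

Note that the income-tax loss caps out at the standard deduction times your marginal rate: a $50k personal donation loses the same $3,120 there as a $13k one, while the payroll-tax loss keeps scaling with the donation.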

Comment by Jeff Kaufman (Jeff_Kaufman) on Demandingness and Time/Money Tradeoffs are Orthogonal · 2022-06-15T11:00:57.806Z · EA · GW

$125k ... essentially taken a vow of poverty

Someone earning $125k, even with a family of four to support, is at the 98th income percentile globally. Within the US it's 89th for an individual or 75th for a household, and even in the Bay Area it's above the median household income.

I don't disagree with your overall point (paying more for better people can make a lot of sense), but it's still useful to be calibrated on what constitutes poverty.

Comment by Jeff Kaufman (Jeff_Kaufman) on Demandingness and Time/Money Tradeoffs are Orthogonal · 2022-06-15T10:50:16.512Z · EA · GW


  • In this instance, someone demonstrated a virtue (I just saw them go out of their way to help a coworker)

  • They generally demonstrate a virtue (they never make ad hominem attacks)

Now, technically, these are really the same: even in the latter the signal is composed of individual observations. But they differ in that with the former each instance gives lots of signal (going out of your way is rare) while in the latter each instance gives very little signal (even someone pretty disagreeable is still going to spend most of their time not making ad hominem attacks).

I'm interpreting Caroline as saying that when someone is practicing this virtue well you don't notice any individual instance of silence, and praise is generally something we do at the instance level. On the other hand, we can still notice that someone, over many opportunities, has consistently refrained from harmful speech.

I agree, though, that it isn't a very good signal because of the difficulty in reception (less legible).

Comment by Jeff Kaufman (Jeff_Kaufman) on Jobs at EA-organizations are overpaid, here is why · 2022-06-15T00:48:10.314Z · EA · GW

Many EA organizations offer entry-level salaries significantly higher than what candidates could earn elsewhere

That is definitely not my experience. Yes, offers are higher than they were, but they are generally going to highly talented people who have some very lucrative options if they choose to optimize for that. Do you mean significantly higher than what they could earn doing the same work elsewhere?

Comment by Jeff Kaufman (Jeff_Kaufman) on The dangers of high salaries within EA organisations · 2022-06-15T00:29:01.668Z · EA · GW

If people have circumstances or dependents which means they need additional income, they should obviously get it.

FYI: this is generally not legal. Organizations (at least in the US, and I'm pretty sure in Europe) are not supposed to consider these aspects of employees' personal lives when deciding how much to pay.

Comment by Jeff_Kaufman on [deleted post] 2022-06-14T17:36:05.054Z

Thanks! Something is screwy with the auto-crossposting from my blog

Comment by Jeff Kaufman (Jeff_Kaufman) on Where can I learn about how DALYs are calculated? · 2022-06-11T21:57:02.353Z · EA · GW

Here are some notes from when I looked into this a few years ago:

Comment by Jeff Kaufman (Jeff_Kaufman) on Leaving Google, Joining the Nucleic Acid Observatory · 2022-06-11T19:54:11.523Z · EA · GW

Yes, I'm also confused. Ad tech is on a path to getting weaker, with browsers making it harder and harder to connect people's behavior across sites. Privacy Sandbox, and the privacy-preserving ads APIs that other browsers are creating, are much weaker than what they're replacing.

Comment by Jeff Kaufman (Jeff_Kaufman) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-09T18:25:33.040Z · EA · GW

It's one thing to say that it's sensitive, but it's another to base your mainline argument on a really unusual view without flagging that?

Does it really seem plausible to you that we should be indifferent between six months of a happy healthy pet chicken and a year of a happy healthy human?

Comment by Jeff Kaufman (Jeff_Kaufman) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-09T15:30:30.177Z · EA · GW

This resulted in a mean moral weight of poultry of 2 QALY/pQALY[14], which implies that 1 year of fully healthy poultry life is 2 times as valuable as 1 QALY.

Unless I'm confused about what you're saying here, this is an unusual enough view that it's worth highlighting. You're saying that we're indifferent between giving a human a year of healthy life and giving a chicken six months, right?

(I expect most people would give very different numbers than this, at the very least valuing a marginal human year above that of a chicken year.)

Comment by Jeff Kaufman (Jeff_Kaufman) on Michael Nielsen's "Notes on effective altruism" · 2022-06-04T11:16:35.152Z · EA · GW

The "misery trap" section feels like it is describing a problem that EA definitely had early on, but mostly doesn't now?

In early EA, people started thinking hard about this idea of doing the most good they could. Naively, this suggests giving up things that are seriously important to you (like having children), things that illegibly make you more productive (like a good work environment), or things that provide important flexibility (like having free time), and the author quotes some early EAs struggling with this conflict, like:

> my inner voice in early 2016 would automatically convert all money I spent (eg on dinner) to a fractional “death counter” of lives in expectation I could have saved if I’d donated it to good charities. Most EAs I mentioned that to at the time were like ah yeah seems reasonable

I really don't think many EAs would say "seems reasonable" now. If someone said this to me I'd give some of my personal history with this idea and talk about how it turns out that in practice this works terribly for people: it makes you miserable, very slightly increases how much you have available to donate, and massively decreases your likely long-term impact through burnout, depression, and short-term thinking.

One piece of writing that I think was helpful in turning this around was Another was

I think it's not a coincidence that the examples the author links are 5+ years old? If people are still getting caught in this trap, though, I'd be interested to see more? (And potentially write more on why it's not a good application of EA thinking.)

In case people are curious: Julia and I now have three kids and it's been 10+ years since stress and conflict about painful trade-offs between our own happiness and making the world better were a major issue for us.

Comment by Jeff Kaufman (Jeff_Kaufman) on Longtermist EA needs more Phase 2 work · 2022-06-01T01:35:19.674Z · EA · GW

I don't think predating longtermism rules out Wave. I would count Open Phil's grants to the Johns Hopkins Center for Health Security, which was established before EA (let alone longtermism), because Open Phil chose to donate to them for longtermist reasons. Similarly, if you wanted to argue that advancing Wave was one of our current best options for improving the long term future, that would be an argument for grouping Wave in with longtermist work.

(I'm really happy that you and Wave are doing what you're doing, but not because of direct impact on the long-term future.)

Comment by Jeff Kaufman (Jeff_Kaufman) on Introducing EAecon: Community-Building Project · 2022-05-30T10:19:42.542Z · EA · GW

I don't think the splitting makes it clearer sooner what EAecon is? I think it's the same ~9 paragraphs before you get to the information that your project is aimed at economists.

I wonder whether the initial "my community-building project, EAecon" might be better as "my econ community-building project, EAecon"

Comment by Jeff Kaufman (Jeff_Kaufman) on Introducing EAecon: Community-Building Project · 2022-05-29T14:50:30.822Z · EA · GW


Nit: I would have found this post a little easier to read if you had moved the explanation of "EAEcon" earlier.

Comment by Jeff Kaufman (Jeff_Kaufman) on What are some high-EV but failed EA projects? · 2022-05-25T11:16:53.608Z · EA · GW

Wave's first attempt to build a mobile money system was in Ethiopia. I joined them to help with it, and was laid off when it failed.

(They've more recently been successful in Senegal and elsewhere; it's just the initial Ethiopia project that failed.)