What should "counterfactual donation" mean? 2021-09-23T12:59:09.842Z
GiveWell Donation Matching 2021-09-21T22:50:00.545Z
Limits of Giving 2021-03-04T02:20:00.618Z
When I left Google 2021-02-28T21:40:00.565Z
Giving Tuesday 2020 2020-11-30T22:30:00.575Z
EA Relationship Status 2020-09-19T01:50:00.599Z
Leaving Things For Others 2020-04-12T11:50:00.602Z
Why I'm Not Vegan 2020-04-09T13:00:00.683Z
Candy for Nets 2019-09-29T11:11:51.289Z
Long-term Donation Bunching? 2019-09-27T13:09:09.881Z
Effective Altruism and Everyday Decisions 2019-09-16T19:39:59.370Z
Answering some questions about EA 2019-09-12T17:44:47.922Z
There's Lots More To Do 2019-05-29T19:58:55.470Z
Value of Working in Ads? 2019-04-09T13:06:53.969Z
Simultaneous Shortage and Oversupply 2019-01-26T19:35:24.383Z
College and Earning to Give 2018-12-16T20:23:26.147Z
2018 ACE Recommendations 2018-11-26T18:50:57.764Z
2018 GiveWell Recommendations 2018-11-26T18:50:22.620Z
Donation Plans for 2017 2017-12-23T22:25:49.690Z
Estimating the Value of Mobile Money 2016-12-21T13:58:13.662Z
[meta] New mobile display 2016-12-05T15:21:22.121Z
Concerns with Intentional Insights 2016-10-24T12:04:22.501Z
Scientific Charity Movement 2016-07-23T14:33:38.192Z
Independent re-analysis of MFA veg ads RCT data 2016-02-20T04:48:29.296Z
The Counterfactual Validity of Donation Matching 2015-03-02T22:02:40.295Z
The Privilege of Earning To Give 2015-01-14T01:59:51.446Z
Effective Altruism at Your Work 2014-11-12T14:06:39.089Z
Lawyering to Give 2014-09-25T12:19:29.251Z
Disability Weights 2014-09-11T21:34:58.961Z
Altruism isn't about sacrifice 2013-09-06T04:00:13.000Z
Personal consumption changes as charity 2013-07-31T04:00:49.000Z
Haiti and disaster relief 2013-07-19T04:00:57.000Z
Keeping choices donation neutral 2013-06-28T04:00:07.000Z


Comment by Jeff_Kaufman on What should "counterfactual donation" mean? · 2021-09-26T11:39:41.938Z · EA · GW

If you spend your personal luxuries budget in full every year, this sounds like #9, and I agree it's fine to call it counterfactual.

Comment by Jeff_Kaufman on GiveWell Donation Matching · 2021-09-23T13:01:55.877Z · EA · GW

I’d love it if you crossposted that post.


I think there’s another category before 9, which is “Donate to a charity not commonly supported by EAs, such as the World Wildlife Fund or Habitat for Humanity.”

Yes, I think that's fine as long as we all agree that the impact of donating to an AA charity is very much higher than donating to one of those charities.

Comment by Jeff_Kaufman on GiveWell Donation Matching · 2021-09-23T11:07:59.448Z · EA · GW

In this case it's definitely counterfactual (it wouldn't have gone to a GiveWell charity)

I don't think that should count as counterfactual, actually. Even though the money would not have gone to a GiveWell charity, it would have done something similarly valuable, so the donor cannot reason that their impact is higher. Compare this to when an employer offers to match $X per person, and doesn't put any restrictions on what charity you donate to. In the latter case, this really is more impact, and should factor into decisions like "should I be earning to give".

(I wrote some about this a few years ago, with some discussion:

Comment by Jeff_Kaufman on GiveWell Donation Matching · 2021-09-23T00:34:07.576Z · EA · GW

This is a coherent view, but I doubt it's how GiveWell is approaching it? Specifically, I would be quite surprised if GiveWell chose to advertise a "true" match just with the goal of preventing criticism. GiveWell has historically been comfortable with a pretty high level of transparency, and if they thought illusory matching was acceptable I would expect them to say so. Instead, they say the opposite: their post introducing their donation matching starts by describing their issues with conventional matching offers.

Note that GiveWell is giving up quite a lot in potential donations by insisting on a "true" match, since it means their pool of matching funds will only support small gifts by first-time donors.

Comment by Jeff_Kaufman on Concerns with ACE's Recent Behavior · 2021-04-24T00:59:26.768Z · EA · GW

Thanks! I missed that that was disputed.

Comment by Jeff_Kaufman on College and Earning to Give · 2021-04-22T03:14:19.367Z · EA · GW

Combine this with the destitute medicare strategy, and have them adopted by grandparents:

Comment by Jeff_Kaufman on Concerns with ACE's Recent Behavior · 2021-04-21T22:50:00.504Z · EA · GW

making this claim


I'm confused: the bit you're quoting is asking a question, not making a claim.

Comment by Jeff_Kaufman on College and Earning to Give · 2021-04-17T20:02:18.623Z · EA · GW

I haven't seen other resources that talk about the cost of college this way, but I also don't spend much time looking at financial planning advice?

The approach in this post is only relevant to a pretty small fraction of people:

  • Your children need to be likely enough to be admitted to the kind of institution that commits to meeting 100% of demonstrated financial need, or that otherwise imposes a similar "100% effective tax rate", for this to be worth considering.
  • You need to not be very interested in saving money for your own future use.  The CSS Profile suggesting 5%/y for parental assets means that with three kids at 4y each you might be asked for 60% of assets.  (Note that the CSS profile does ask about parental retirement accounts, and some schools do consider those assets).
  • Your earnings need to be low enough just before and during college, either because your career has never been highly lucrative or because you are willing to change your line of work for that time period.
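
A quick sketch of that asset math (my own illustration; the 5%/y assessment and 12 total college-years are just the rough figures above, not official CSS Profile policy):

```python
# Rough sketch of the CSS Profile asset math in the bullets above.
# Assumed figures: assets assessed at ~5% per year of college, and
# three kids attending 4 years each = 12 college-years total.
rate = 0.05
college_years = 3 * 4

# Naive sum, as in the comment: 5% taken twelve times.
linear = rate * college_years

# If each year's 5% comes out of what *remains*, the total is smaller.
compounding = 1 - (1 - rate) ** college_years

print(f"linear:      {linear:.0%}")       # 60%
print(f"compounding: {compounding:.0%}")  # 46%
```

So the "60% of assets" figure is the linear upper bound; if each year's assessment is applied to the remaining balance, it's closer to half.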

I think this is likely enough that a 529 plan or similar does not make sense for our family, but I'm planning to revisit when my kids are getting close to high school (and I have a better sense of their academic standing) before considering a career change.

Comment by Jeff_Kaufman on [deleted post] 2021-03-15T13:57:22.009Z

The Plough link is broken; it should be

Comment by Jeff_Kaufman on [deleted post] 2021-02-25T22:28:40.670Z

I don't think this is actually a reasonable request to make here?

Comment by Jeff_Kaufman on EA Relationship Status · 2020-09-20T22:26:08.633Z · EA · GW

What do you think the "for life" adds to the pledge if not "for the rest of your lives"?

Comment by Jeff_Kaufman on EA Relationship Status · 2020-09-20T18:34:52.155Z · EA · GW

See the discussion here:

It doesn't account for very much of the data, unfortunately.

Comment by Jeff_Kaufman on EA Relationship Status · 2020-09-20T12:48:06.392Z · EA · GW

"for life" sounds just as permanent to me, if less morbid, than "till death do us part"

Comment by Jeff_Kaufman on Some thoughts on deference and inside-view models · 2020-06-02T00:06:20.655Z · EA · GW

Similar to what you're saying about AI alignment being preparadigmatic, a major reason why trying to prove the Riemann hypothesis head-on would be a bad idea is that people have already been trying to do that for a long time without success. I expect the first people to consider the hypothesis approached it directly, and were reasonable to do so.

Comment by Jeff_Kaufman on Some thoughts on deference and inside-view models · 2020-05-28T18:15:20.161Z · EA · GW

I asked an AI safety researcher "Suppose your research project went as well as it could possibly go; how would it make it easier to align powerful AI systems?", and they said that they hadn't really thought about that. I think that this makes your work less useful.

This seems like a deeper disagreement than you're describing. A lot of research in academia (ex: much of math) involves playing with ideas that seem poorly understood, trying to figure out what's going on. It's not really goal directed, especially not the kind of goal you can chain back to world improvement, it's more understanding directed.

It reminds me of Sarah Constantin's post about the trade-off between output and external direction:

For AI safety your view may still be right: one major way I could see the field going wrong is getting really into interesting problems that aren't useful. But on the other hand it's also possible that the best path involves highly productive interest-following understanding-building research where most individual projects don't seem promising from an end-to-end view. And maybe even where most aren't useful from an end-to-end view!

Again, I'm not sure here at all, but I don't think it's obvious you're right.

Comment by Jeff_Kaufman on How should longtermists think about eating meat? · 2020-05-21T14:12:09.829Z · EA · GW

"With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality."

Comment by Jeff_Kaufman on How should longtermists think about eating meat? · 2020-05-21T12:45:19.125Z · EA · GW

I wonder how much we can trust people's given reasons for having been veg? For example, say people sometimes go veg both for health reasons and because they also care about animals. I could imagine something where if you asked them while they were still veg they would say "mostly because I care about animals" but then if you ask them after you get more "I was doing it for health reasons" because talking about how you used to do it for the animals makes you sound selfish?

Comment by Jeff_Kaufman on How should longtermists think about eating meat? · 2020-05-20T23:40:17.052Z · EA · GW

"84% of vegetarians/vegans abandon their diet" matches my experience, and I think it's an indication that being veg is pretty far from costless?

Comment by Jeff_Kaufman on How should longtermists think about eating meat? · 2020-05-20T22:24:22.886Z · EA · GW

> a lot of the long-term vegans that I know

It sounds like you may have a sampling bias, where you're missing out on all the people who disliked being vegan enough to stop?

Comment by Jeff_Kaufman on Why I'm Not Vegan · 2020-04-20T02:27:40.762Z · EA · GW

However, even if I were to get more than $10 of enjoyment out of punching that person, I don't think it's right that I'm morally permitted to do so.

I don't think you would be morally permitted to either, because I think that's right: you can offset axiology, but not morality.

Comment by Jeff_Kaufman on Why I'm Not Vegan · 2020-04-20T02:22:50.122Z · EA · GW

I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals' moral worth in these discussions

Let's say I'm trying to convince someone that they shouldn't donate to animal charities or malaria net distribution, but instead they should be trying to prevent existential risk. I bring up how many people there could potentially be in the future ("astronomical stakes") as a reason for why they should care a lot about those people getting a chance to exist. If they have a strong intuition that people in the far future don't matter, though, this isn't going to be very persuasive. I can try to convince them that they should care, drawing on other intuitions that they do have, but it's likely that existential risk just isn't a high priority by their values. Them saying they think there's only a 0.1% chance or whatever that people 1000 years from now matter is useful for us getting on the same page about their beliefs, and I think we should have a culture of sharing this kind of thing.

On some questions you can get strong evidence, and intuitions stop mattering. If I thought we shouldn't try to convince people to go vegan because diet is strongly cultural and trying to change people's diet is hopeless, we could run a controlled trial and get a good estimate for how much power we really do have to influence people's diet. On other questions, though, it's much harder to get evidence, and that's where I would place the moral worth of animals and people in the far future. In these cases you can still make progress by your values, but people are less likely to agree with each other about what those values should be.

(I'm still very curious what you think of my demandingness objection to your argument above)

Comment by Jeff_Kaufman on Why I'm Not Vegan · 2020-04-18T01:03:32.072Z · EA · GW

While I think moral trades are interesting, I don't know why you would expect me to see $4.30 going to an existential risk charity as enough to make it worth going vegetarian for a year. I'd much rather donate $4.30 myself and not change my diet.

I think you're conflating "Jeff sees $0.43/y to a good charity as being clearly better than averting the animal suffering due to omnivorous eating" and "Jeff only selfishly values eating animal products at $0.43/y"?

Comment by Jeff_Kaufman on Leaving Things For Others · 2020-04-12T17:54:31.924Z · EA · GW
why can't I do both the individual action and the institutional part?

Both avoiding delivery and calling stores to encourage prioritization are ways of turning time into a better world. Yes, you can do your own shopping and call your own grocery store, but you have further options. Do you call other stores you go to less frequently and make similar encouragements? Do you call stores in other areas? Do you sign up as an Instacart shopper so there will be more delivery spots available? You write that you can act on both fronts, but if you start thinking of how you might do good with your time you'll quickly have so many potential things you can do that you have to prioritize. I'm arguing that you should prioritize based on how much good the action does relative to how much of a sacrifice it is to yourself.

The link at the end ( ) gives more details, but overall I see these as very similar to encouragements to use cold water for showering instead of warm. Yes, there's some benefit to both, but when you compare the benefit to others (the delivery slot has a chance of going to someone else who needs it more than you do, a cold shower means less CO2 emitted) with the cost to yourself (you would prefer grocery delivery and warm showers), most people will have other altruistic options that do more good for less sacrifice.

Comment by Jeff_Kaufman on Why I'm Not Vegan · 2020-04-11T02:39:47.662Z · EA · GW

The post doesn't depend on it, because the post is all conditional on animals mattering a nonzero amount ("to be safe I'll assume they do [matter]").

Comment by Jeff_Kaufman on Why I'm Not Vegan · 2020-04-10T17:13:45.031Z · EA · GW

My post describes a model for thinking about when it makes sense to be vegan, and how I apply it in my case. My specific numbers are much less useful to other people, and I'm not claiming that I've found the one true best estimate. Ways the post can be useful include (a) discussion over whether this is a good model to be using and (b) discussion over how people think about these sort of relative numbers.

I included the "I think there's a very large chance they don't matter at all, and that there's just no one inside to suffer" out of transparency. ( ) The post doesn't depend on it at all, and everything is conditional on animals mattering.

You're right that the post doesn't argue for my specific numbers on comparing animals and humans: they're inputs to the model. On the other hand, I do think that if we surveyed the general population on how they would make tradeoffs between human life and animal suffering these would be within the typical range, and these aren't numbers I've chosen to get a specific outcome.

I also think these moral worth statements need more clarification

I phrased these as "averting how many animal-years on a factory farm do I see as being about as good as giving a human another year of life?" As in, if you gave me a choice between the two, which do I prefer. This seems pretty carefully specified to me, and clear enough that someone else could give their own numbers and we could figure out where our largest differences are?

eating animal products requires 6.125 beings to be tortured per year per American. I personally don't think that is a worthwhile thing to cause.

This kind of argument has issues with demandingness. Here's a parallel argument: renting a 1br apartment for yourself instead of splitting a 2br with someone kills ~6 people a year because you could be donating the difference. (Figuring a 1br costs $2k/m and a 2br costs $3k/m. This gives a delta of $11k, and GiveWell gives a best guess of ~$1700 for "Cost per outcome as good as averting the death of an individual under 5 — AMF"). Is that a worthwhile thing to cause?

In general, I think the model EAs should be using for thinking about giving things up is to figure out how much sacrifice we're willing to make, and then figure out for that level of sacrifice what options do the most good. Simply saying "X has harm and so we should not do it" turns into "if there's anything that you don't absolutely need, or anything you consume where there's a slightly less harmful version, you must stop".

Comment by Jeff_Kaufman on Why I'm Not Vegan · 2020-04-10T12:52:42.447Z · EA · GW

Sorry, I forgot this would be crossposted here automatically and this version was (until just now) missing an edit I made just after publishing: "how many animals" should have been "how many continuously living animals". Since animal lives on factory farms are net negative, and their ongoing suffering is a far bigger factor than their deaths, I don't care about the number of individual animals but instead how many animal-days. So I wouldn't see breeding pigs that produced twice as much meat and lived twice as long as an improvement, though perhaps you would?

The numbers are Hurford's:

36 days of suffering via beef
8 days of suffering via dairy
44 days of suffering via pork
554 days of suffering via chicken meat
347 days of suffering via eggs
76 days of suffering via turkey
949 days of suffering via aquacultured fish

but expressed in the much more natural units of continuously living animals rather than animal-days per human-year.
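
For concreteness, here's that conversion as a sketch (my illustration, using Hurford's figures above; dividing animal-days per human-year by 365 gives continuously living animals):

```python
# Hurford's estimates quoted above: days of animal suffering caused per
# year of a typical American diet, by product.
days_per_human_year = {
    "beef": 36,
    "dairy": 8,
    "pork": 44,
    "chicken meat": 554,
    "eggs": 347,
    "turkey": 76,
    "aquacultured fish": 949,
}

# Dividing animal-days per human-year by 365 gives the number of animals
# continuously living on farms to support one person's diet.
continuously_living = {
    product: days / 365 for product, days in days_per_human_year.items()
}
total = sum(days_per_human_year.values()) / 365

print(f"total: {total:.1f} continuously living animals")  # ~5.5
```

With these seven figures the total comes out around 5.5, in the same ballpark as the 6.125 quoted earlier (presumably computed from somewhat different inputs).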

After quickly looking at the numbers you posted it doesn't look like any of them change this by more than a factor of two, and don't look like they would change the bottom line of the argument at all. Do you disagree?

Comment by Jeff_Kaufman on New research on moral weights · 2019-12-05T04:26:22.214Z · EA · GW

Really excited to see this published. This is something I've heard people speculate about a lot over the years ("are people in places with higher child mortality more accepting of it, because it's more normal, and so are we overweighting deaths?") and it's helpful to see what the people we're trying to help actually value.

(And that's on top of us not being able to survey the children!)

Comment by Jeff_Kaufman on Updates from Leverage Research: history, mistakes and new focus · 2019-11-23T01:28:33.461Z · EA · GW

Thoughts, now that I've read it:

  • This sort of thing, where you try things until you figure out what's going on starting from a place of pretty minimal knowledge, feels very familiar to me. I think a lot of my hobby projects have worked this way, partly because I often find it more fun to try things than to try to find out what people already know about them. This comment thread, trying to understand what frequencies forked brass instruments make, is an example that came to mind several times reading the post.
  • Not exactly the same, but this also feels a lot like my experience with making new musical instruments. With an established instrument in an established field the path to being really good generally looks like "figure out what the top people do, and practice a ton," while with something experimental you have much more of a tradeoff between "put effort into playing your current thing better" and "put effort into improving your current thing". If you have early batteries or telescopes or something you probably spend a lot of time with that tradeoff. Whereas in mature fields it makes much more sense for individuals to specialize in either "develop the next generation of equipment" or "use the current generation of equipment to understand the world".
  • How controversial is the idea that early stage science works pretty differently from more established explorations, and that you need pretty different approaches and skills? I don't know that much history/philosophy of science but I'm having trouble telling from the paper which of the hypotheses in section 4 are ones that you expect people to already agree with, vs ones that you think you're going to need to demonstrate?
  • One question that comes to mind is whether there is still early stage science today. Maybe the patterns that you're seeing are all about what happens if you're very early in the development of science in general, but now you only get those patterns when people are playing around (like I am above)? So I'd be interested in the most recent cases you can find that you'd consider to be early-stage.
  • And a typo: "make the same observers with different telescopes" should be "make the same observations with different telescopes".

Comment by Jeff_Kaufman on Updates from Leverage Research: history, mistakes and new focus · 2019-11-22T19:59:42.702Z · EA · GW

Looking over the website I noticed Studying Early Stage Science under "Recent Research". I haven't read it yet, but will!

Comment by Jeff_Kaufman on Updates from Leverage Research: history, mistakes and new focus · 2019-11-22T19:56:42.074Z · EA · GW

Thanks for writing this! I'm really glad Leverage has decided to start sharing more.

Comment by Jeff_Kaufman on Long-term Donation Bunching? · 2019-10-07T17:18:30.088Z · EA · GW

I wonder whether it would be worth building some standard terms for this and trying to make it a thing?

Comment by Jeff_Kaufman on Candy for Nets · 2019-09-30T10:38:51.775Z · EA · GW

Thanks! Though like all my blog posts it's already public on my website:

Comment by Jeff_Kaufman on Long-term Donation Bunching? · 2019-09-27T16:55:07.479Z · EA · GW

If ~50% of people drift away over five years it's hard to say how many do over 2-3, but it should be at least 25%-35% [1]. You need pretty large tax savings to risk a chance that large of actually donating nothing.

[1] 13%/year for five years gives you 50%, and I think I'd expect the rate of attrition to slowly decrease over time? 25% for two years and 35% for three assumes it's roughly linear.
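
A sketch of the compounding math behind that footnote (my illustration; 13%/year is just the rough figure above, chosen to give ~50% over five years):

```python
# Compounding attrition: if 13% of *remaining* donors drift away each
# year, what fraction have drifted away after n years?
def drifted(rate: float, years: int) -> float:
    return 1 - (1 - rate) ** years

for n in (2, 3, 5):
    print(f"{n} years: {drifted(0.13, n):.0%}")
# 2 years: 24%
# 3 years: 34%
# 5 years: 50%
```

So compounding at a constant 13%/year lands right in the 25%-35% range for two to three years; if attrition instead slows over time, the two- and three-year figures would be somewhat higher.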

Comment by Jeff_Kaufman on How to Make Billions of Dollars Reducing Loneliness · 2019-08-31T01:45:40.756Z · EA · GW

Facebook and Google have an incentive to track their users because they sell targeted advertising.

Even without ads they would have a very strong reason for tracking: trying to make the product better. Things you do when using Facebook are all fed into a model trying to predict what you like to interact with, so they can prioritize among the enormous number of things they could be showing you.

Comment by Jeff_Kaufman on If physics is many-worlds, does ethics matter? · 2019-07-10T18:58:53.873Z · EA · GW

For every decision I've made, there's a version where the other choice was made.

Is that actually something the many-worlds view implies? It seems like you're conflating "made a choice" with "quantum split"?

(I don't know any of the relevant physics.)

Comment by Jeff_Kaufman on EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? · 2019-06-18T02:10:46.536Z · EA · GW

One group I'm especially interested in is people who were active in EA, took the GWWC pledge, and then drifted away (eg). This is a group that likely mostly didn't take the EA Survey. I would expect that after accounting for this the actual fraction of people current on their pledges would be *much* lower.

Since we don't know the fraction of people keeping their pledge to even the nearest 10%, the survey I would find most useful would be a smallish random sample. Pick 25 GWWC members at random, and follow up with them. Write personalized handwritten letters, place a phone call, or get a friend to contact them. This should give very low non-response bias, and also good qualitative data.

Comment by Jeff_Kaufman on There's Lots More To Do · 2019-06-11T19:59:45.370Z · EA · GW

Other people being misled is how I read "Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place."

Comment by Jeff_Kaufman on There's Lots More To Do · 2019-05-31T01:12:32.570Z · EA · GW

I don't think the post is correct in concluding that the current marginal cost-per-life-saved estimates are wrong. Annual malaria deaths are around 450k, and if you gave the Against Malaria Foundation $5k * 450k ($2.3B) they would not be able to make sure no one died from malaria in 2020, but that still wouldn't give much evidence that $5k is too low an estimate for the marginal cost. It just means that AMF would have lots of difficulty scaling up so much, that some deaths can't be prevented by distributing nets, that some places are harder to work in, etc.

It does mean that big funders have seen the current cost-per-life saved numbers and decided not to give those organizations all the money they'd be able to use at that cost-effectiveness. But there are lots of reasons other than what Ben gives for why you might decide to do that, including:

  • You have multiple things you care about and are following a strategy of funding each of them some. For example, OpenPhil has also funded animal charities and existential risk reduction.
  • You don't want a dynamic where you're responsible for the vast majority of a supposedly independent organization's funding.
  • You think better giving opportunities may become available in the future and want to have funds if that happens.

Comment by Jeff_Kaufman on There's Lots More To Do · 2019-05-30T00:14:10.460Z · EA · GW

I agree the distribution would be interesting! But it depends how many such opportunities there might be, no? What about:

"Imagine that over time the low hanging fruit is picked and further opportunities for charitable giving get progressively more expensive in terms of cost per life saved equivalents (CPLSE). At what CPLSE, in dollars, would you no longer donate?"

Comment by Jeff_Kaufman on Why does EA use QALYs instead of experience sampling? · 2019-04-25T18:03:30.022Z · EA · GW

I tried experience sampling myself for about a year and a half (intro, conclusion) and it made me much more skeptical of the system. I'm just not that sure how happy I am at any given point:

When I first started rating my happiness on a 1-10 scale I didn't feel like I was very good at it. At the time I thought I might get better with practice, but I think I'm actually getting worse at it. Instead of really thinking "how do I feel right now?" it's really hard not to just think "in past situations like this I've put down '6' so I should put down '6' now".


I don't have my phone ping me during the night, because I don't want it to wake me up. Before having a kid this worked properly: I'd plug in my phone, which turns off pings, promptly fall asleep, wake up in the morning, unplug my phone. Now, though, my sleep is generally interrupted several times a night. Time spent waiting to see if the baby falls back asleep on her own, or soothing her back to sleep if she doesn't, or lying awake at 4am because it's hard to fall back asleep when you've had 7hr and just spent an hour walking around and bouncing the baby; none of these are counted. On the whole, these experiences are much less enjoyable than my average; if the baby started sleeping through the night such that none of these were needed anymore I wouldn't see that as a loss at all. Which means my data is biased upward. I'm curious how happiness sampling studies have handled this; people with insomnia would be in a similar situation.

I agree that DALY/QALY measurements aren't great either, though.

Comment by Jeff_Kaufman on Value of Working in Ads? · 2019-04-11T12:16:56.107Z · EA · GW

I think the internet shouldn't run on ads. Making people pay for content ensures that the internet is providing real value rather than just clickbaiting

Before the internet you still had tabloids with shocking claims on the cover that, after you bought the paper and read it you realized the claims were overblown. If we moved away from ads the specific case of "you pay, and afterwards you realize you were baited" would still exist.

the dependence on advertising creates controversies where corporations compel content hosts to engage in dubious censorship.

The role of middlemen like Google diminishes this substantially. Since the advertisers and publishers aren't talking directly to each other we end up with censorship only on the sort of thing that advertisers generally agree on: things like "adult or mature, copyrighted, violent, or hateful content" -- AdSense policies: a beginner's guide

Yes in theory people could always create and use paid websites, but there is too much inertia, both economically (network effects) and socially (people now feel very entitled to the Internet).

I'm not convinced this isn't just "people don't want to have to pay for things, and mostly don't mind ads that much". Newspapers, magazines, and cable TV all cost money and have ads. Analog radio sticks around on an ad-funded basis and people keep listening because it's incredibly low friction.

The government can always shift tax and welfare policy to account for the additional financial burden on low income people.

Ok, but in practice the government mostly doesn't do this. Figuring out how to get it to do this would open up a *ton* of valuable policies, but we also need to make reasonable choices in the present.

Comment by Jeff_Kaufman on Salary Negotiation for Earning to Give · 2019-04-08T19:21:49.929Z · EA · GW

I've helped a few people negotiate salaries at tech companies, and my experience has been that people always bring me in too late. You want to have multiple active offers at the same time so you can get them to bid against each other. For example, when I came back to Google it went:

  • Google made me an offer
  • Facebook beat Google's offer
  • Amazon declined to match either offer
  • Google beats Facebook's offer
  • Facebook beats Google's offer
  • Google matches Facebook's offer

The ideal for you is lots of back and forth, which is the opposite of what they want. They want to cut it short and will say things like "You're asking for a lot, but I think might be able to get it for you if I talk to my boss. If we can do $X can you confirm you'll accept it?" You want to be positive enough that they'll come back with an offer of $X, but not so positive that you have no negotiating room left if they accept it.

Comment by Jeff_Kaufman on Apology · 2019-03-23T12:05:03.121Z · EA · GW

These steps, to my knowledge, are completely unprecedented for CEA.

I think CEA may have done something similar with Gleb, though for very different reasons:

Comment by Jeff_Kaufman on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-06T03:20:50.407Z · EA · GW

(Peter has been one of several people continuing to argue "earning to give is undervalued, most orgs could still do useful things with more funding".)

Comment by Jeff_Kaufman on EA Boston 2018 Year in Review · 2019-02-06T18:09:46.990Z · EA · GW

Aaron wrote:

Jeff's fundraiser for Google...

The post has:

For the past few years, Jeff Kaufman has led Google Cambridge’s EAs in successfully lobbying to direct that money toward GiveWell-recommended charities. At between a quarter-million and a half-million dollars each year, this may be the largest fundraising event for GiveWell charities in the world.

This is worded correctly but is a bit hard to interpret: I don't organize the fundraiser, I help organize the EA participation in it. Overall it looks like:

  • Each year, for the week of Giving Tuesday, there's a company wide system of fundraising for charities.
  • I coordinate EAs across the company in finding other EAs with compatible interests in their location/business unit and send out reminders about deadlines.
  • In the Cambridge office we have a bake-off where employees bake, sponsors put in some amount per good baked, other employees donate in order to taste them, and another set of sponsors matches these donations. The more you donate the more votes you get. This is the fundraiser the post talks about.
  • The bake-off organizers are people who think highly of GiveWell, partly related to the advocacy of Boston EAs, but I think don't identify as EAs themselves. They make the decision about what charities the bake-off should feature, and have chosen GiveWell top charities for the past several years.
  • The bake-off is built around matching and sponsorship, especially that the donations people make to eat/vote are matched. That matching has been provided by Google Cambridge's EAs, and one factor in the bake-off organizers choosing GiveWell charities is that we've been able to provide a large match pool.
  • It's not clear how counterfactual any of this is. Each year when I publicize it internally part of what I talk about is that my match isn't counterfactually valid, and I'll be donating my share whether or not others also donate. I use it as a time to talk about why you shouldn't expect matches like this to be counterfactual, and present it as "please join us in funding" and not "you can unlock extra funding".

Comment by Jeff_Kaufman on Simultaneous Shortage and Oversupply · 2019-01-28T00:55:23.266Z · EA · GW

My model is that if you want to move from generic software engineering to safety work that these would be very good next steps.

Comment by Jeff_Kaufman on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-25T14:52:02.894Z · EA · GW

I got the whole $20k:

Comment by Jeff_Kaufman on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-10T22:37:20.771Z · EA · GW

FB had a limit of $20k/donor this year, and I think that's much more likely to go down than up. So depending how much you're donating there's not much reason to save more than that for Giving Tuesday.

There's also the 1% PayPal match (plus 2% cash back) that's been in December each year. At a 16%/year discount rate it's worth waiting a couple months for that 3% but not all year.
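
A sketch of the break-even math (my illustration; treating the 3% — 1% match plus 2% cash back — as a one-time bonus and 16%/year as the discount rate):

```python
import math

# How long is it worth delaying a donation for a 3% one-time bonus
# (1% PayPal match + 2% cash back), discounting at 16%/year?
# Break-even is where the discounting cost equals the bonus:
#   (1 + r)^(n/12) = 1 + bonus
bonus = 0.03
annual_discount = 0.16

breakeven_months = 12 * math.log(1 + bonus) / math.log(1 + annual_discount)
print(f"worth waiting up to ~{breakeven_months:.1f} months")  # ~2.4
```

So at these numbers the bonus justifies waiting roughly two and a half months, which matches "a couple months ... but not all year".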

Comment by Jeff_Kaufman on Response to a Dylan Matthews article on Vox about bipartisanship · 2018-12-21T19:09:53.288Z · EA · GW

"Trump signed a good law this week. Yes, really." presents conflict: here's a person who you usually expect to be doing harmful things, and here they are doing something good. It can't make that hook without assuming something about their readers, and the hook draws people's interest. It's not an "unnecessary jibe"; it's the sort of thing that draws far more interest than a headline like "Trump signed a good law about HIV this week."

It's not a tradeoff I would make in my writing, but Vox is a left-leaning outlet and it seems pretty reasonable to me for them to write for a left-leaning crowd.

Comment by Jeff_Kaufman on EA Survey 2018 Series: Donation Data · 2018-12-12T16:58:04.459Z · EA · GW

The linear trend line in looks like a poor match. Instead I'd model it as there being multiple populations, where one major population has a very steep trendline.