Posts

Leaving Things For Others 2020-04-12T11:50:00.602Z · score: 7 (1 votes)
Why I'm Not Vegan 2020-04-09T13:00:00.683Z · score: 9 (45 votes)
Candy for Nets 2019-09-29T11:11:51.289Z · score: 133 (70 votes)
Long-term Donation Bunching? 2019-09-27T13:09:09.881Z · score: 16 (12 votes)
Effective Altruism and Everyday Decisions 2019-09-16T19:39:59.370Z · score: 74 (32 votes)
Answering some questions about EA 2019-09-12T17:44:47.922Z · score: 50 (24 votes)
There's Lots More To Do 2019-05-29T19:58:55.470Z · score: 123 (64 votes)
Value of Working in Ads? 2019-04-09T13:06:53.969Z · score: 17 (13 votes)
Simultaneous Shortage and Oversupply 2019-01-26T19:35:24.383Z · score: 40 (24 votes)
College and Earning to Give 2018-12-16T20:23:26.147Z · score: 26 (19 votes)
2018 ACE Recommendations 2018-11-26T18:50:57.764Z · score: 10 (13 votes)
2018 GiveWell Recommendations 2018-11-26T18:50:22.620Z · score: 8 (9 votes)
Donation Plans for 2017 2017-12-23T22:25:49.690Z · score: 14 (15 votes)
Estimating the Value of Mobile Money 2016-12-21T13:58:13.662Z · score: 8 (10 votes)
[meta] New mobile display 2016-12-05T15:21:22.121Z · score: 5 (5 votes)
Concerns with Intentional Insights 2016-10-24T12:04:22.501Z · score: 58 (56 votes)
Scientific Charity Movement 2016-07-23T14:33:38.192Z · score: 25 (25 votes)
Independent re-analysis of MFA veg ads RCT data 2016-02-20T04:48:29.296Z · score: 11 (11 votes)
The Counterfactual Validity of Donation Matching 2015-03-02T22:02:40.295Z · score: 13 (10 votes)
The Privilege of Earning To Give 2015-01-14T01:59:51.446Z · score: 26 (29 votes)
Effective Altruism at Your Work 2014-11-12T14:06:39.089Z · score: 6 (6 votes)
Lawyering to Give 2014-09-25T12:19:29.251Z · score: 11 (11 votes)
Disability Weights 2014-09-11T21:34:58.961Z · score: 12 (12 votes)
Altruism isn't about sacrifice 2013-09-06T04:00:13.000Z · score: 1 (1 votes)
Personal consumption changes as charity 2013-07-31T04:00:49.000Z · score: 1 (1 votes)
Haiti and disaster relief 2013-07-19T04:00:57.000Z · score: 1 (1 votes)
Keeping choices donation neutral 2013-06-28T04:00:07.000Z · score: 2 (1 votes)

Comments

Comment by jeff_kaufman on Some thoughts on deference and inside-view models · 2020-06-02T00:06:20.655Z · score: 6 (4 votes) · EA · GW

Similar to what you're saying about AI alignment being preparadigmatic, a major reason why trying to prove the Riemann hypothesis head-on would be a bad idea is that people have already been trying to do that for a long time without success. I expect the first people to consider the hypothesis approached it directly, and were reasonable to do so.

Comment by jeff_kaufman on Some thoughts on deference and inside-view models · 2020-05-28T18:15:20.161Z · score: 22 (12 votes) · EA · GW
> I asked an AI safety researcher "Suppose your research project went as well as it could possibly go; how would it make it easier to align powerful AI systems?", and they said that they hadn't really thought about that. I think that this makes your work less useful.

This seems like a deeper disagreement than you're describing. A lot of research in academia (ex: much of math) involves playing with ideas that seem poorly understood, trying to figure out what's going on. It's not really goal-directed, especially not toward the kind of goal you can chain back to world improvement; it's more understanding-directed.

It reminds me of Sarah Constantin's post about the trade-off between output and external direction: https://srconstantin.wordpress.com/2019/07/20/the-costs-of-reliability/

For AI safety your view may still be right: one major way I could see the field going wrong is getting really into interesting problems that aren't useful. But on the other hand it's also possible that the best path involves highly productive, interest-following, understanding-building research where most individual projects don't seem promising from an end-to-end view. And maybe even where most aren't useful from an end-to-end view!

Again, I'm not sure here at all, but I don't think it's obvious you're right.

Comment by jeff_kaufman on How should longtermists think about eating meat? · 2020-05-21T14:12:09.829Z · score: 6 (3 votes) · EA · GW

"With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality." https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/

Comment by jeff_kaufman on How should longtermists think about eating meat? · 2020-05-21T12:45:19.125Z · score: 5 (3 votes) · EA · GW

I wonder how much we can trust people's given reasons for having been veg? For example, say people sometimes go veg both for health reasons and because they care about animals. I could imagine that if you asked them while they were still veg they would say "mostly because I care about animals", but if you asked them after they stopped you'd get more "I was doing it for health reasons", because talking about how you used to do it for the animals makes you sound selfish?

Comment by jeff_kaufman on How should longtermists think about eating meat? · 2020-05-20T23:40:17.052Z · score: 10 (4 votes) · EA · GW

https://faunalytics.org/a-summary-of-faunalytics-study-of-current-and-former-vegetarians-and-vegans/ has "84% of vegetarians/vegans abandon their diet" which matches my experience and I think is an indication that it's pretty far from costless?

Comment by jeff_kaufman on How should longtermists think about eating meat? · 2020-05-20T22:24:22.886Z · score: 10 (4 votes) · EA · GW

> a lot of the long-term vegans that I know

It sounds like you may have a sampling bias, where you're missing out on all the people who disliked being vegan enough to stop?

Comment by jeff_kaufman on Why I'm Not Vegan · 2020-04-20T02:27:40.762Z · score: 2 (1 votes) · EA · GW
> However, even if I were to get more than $10 of enjoyment out of punching that person, I don't think it's right that I'm morally permitted to do so.

I don't think you would be morally permitted to either, because I think https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/ is right and you can offset axiology, but not morality.

Comment by jeff_kaufman on Why I'm Not Vegan · 2020-04-20T02:22:50.122Z · score: 3 (2 votes) · EA · GW
> I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals' moral worth in these discussions

Let's say I'm trying to convince someone that they shouldn't donate to animal charities or malaria net distribution, but instead they should be trying to prevent existential risk. I bring up how many people there could potentially be in the future ("astronomical stakes") as a reason for why they should care a lot about those people getting a chance to exist. If they have a strong intuition that people in the far future don't matter, though, this isn't going to be very persuasive. I can try to convince them that they should care, drawing on other intuitions that they do have, but it's likely that existential risk just isn't a high priority by their values. Them saying they think there's only a 0.1% chance or whatever that people 1000 years from now matter is useful for us getting on the same page about their beliefs, and I think we should have a culture of sharing this kind of thing.

On some questions you can get strong evidence, and intuitions stop mattering. If I thought we shouldn't try to convince people to go vegan because diet is strongly cultural and trying to change people's diet is hopeless, we could run a controlled trial and get a good estimate for how much power we really do have to influence people's diet. On other questions, though, it's much harder to get evidence, and that's where I would place the moral worth of animals and people in the far future. In these cases you can still make progress by your values, but people are less likely to agree with each other about what those values should be.

(I'm still very curious what you think of my demandingness objection to your argument above)

Comment by jeff_kaufman on Why I'm Not Vegan · 2020-04-18T01:03:32.072Z · score: 6 (4 votes) · EA · GW

While I think moral trades are interesting, I don't know why you would expect me to see $4.30 going to an existential risk charity as enough to make it worth going vegetarian for a year. I'd much rather donate $4.30 myself and not change my diet.

I think you're conflating "Jeff sees $0.43/y to a good charity as being clearly better than averting the animal suffering due to omnivorous eating" and "Jeff only selfishly values eating animal products at $0.43/y"?

Comment by jeff_kaufman on Leaving Things For Others · 2020-04-12T17:54:31.924Z · score: 3 (2 votes) · EA · GW
> why can't I do both the individual action and the institutional part?

Both avoiding delivery and calling stores to encourage prioritization are ways of turning time into a better world. Yes, you can do your own shopping and call your own grocery store, but you have further options. Do you call other stores you go to less frequently and make similar encouragements? Do you call stores in other areas? Do you sign up as an Instacart shopper so there will be more delivery spots available? You write that you can act on both fronts, but if you start thinking of how you might do good with your time you'll quickly have so many potential things you can do that you have to prioritize. I'm arguing that you should prioritize based on how much good the action does relative to how much of a sacrifice it is to yourself.

The link at the end ( https://www.jefftk.com/p/effective-altruism-and-everyday-decisions ) gives more details, but overall I see these as very similar to encouragements to use cold water for showering instead of warm. Yes, there's some benefit to both, but when you compare the benefit to others (the delivery slot has a chance of going to someone else who needs it more than you do, a cold shower means less CO2 emitted) with the cost to yourself (you would prefer grocery delivery and warm showers), most people will have other altruistic options that do more good for less sacrifice.

Comment by jeff_kaufman on Why I'm Not Vegan · 2020-04-11T02:39:47.662Z · score: 6 (4 votes) · EA · GW

The post doesn't depend on it, because the post is all conditional on animals mattering a nonzero amount ("to be safe I'll assume they do [matter]").

Comment by jeff_kaufman on Why I'm Not Vegan · 2020-04-10T17:13:45.031Z · score: 10 (10 votes) · EA · GW

My post describes a model for thinking about when it makes sense to be vegan, and how I apply it in my case. My specific numbers are much less useful to other people, and I'm not claiming that I've found the one true best estimate. Ways the post can be useful include (a) discussion over whether this is a good model to be using and (b) discussion over how people think about these sorts of relative numbers.

I included the "I think there's a very large chance they don't matter at all, and that there's just no one inside to suffer" out of transparency. ( https://www.facebook.com/jefftk/posts/10100153860544072?comment_id=10100153864306532 ) The post doesn't depend on it at all, and everything is conditional on animals mattering.

You're right that the post doesn't argue for my specific numbers on comparing animals and humans: they're inputs to the model. On the other hand, I do think that if we surveyed the general population on how they would make tradeoffs between human life and animal suffering these would be within the typical range, and these aren't numbers I've chosen to get a specific outcome.

> I also think these moral worth statements need more clarification

I phrased these as "averting how many animal-years on a factory farm do I see as being about as good as giving a human another year of life?" As in, if you gave me a choice between the two, which do I prefer. This seems pretty carefully specified to me, and clear enough that someone else could give their own numbers and we could figure out where our largest differences are?

> eating animal products requires 6.125 beings to be tortured per year per American. I personally don't think that is a worthwhile thing to cause.

This kind of argument has issues with demandingness. Here's a parallel argument: renting a 1br apartment for yourself instead of splitting a 2br with someone kills ~6 people a year because you could be donating the difference. (Figuring a 1br costs $2k/m and a 2br costs $3k/m. This gives a delta of $11k, and GiveWell gives a best guess of ~$1700 for "Cost per outcome as good as averting the death of an individual under 5 — AMF"). Is that a worthwhile thing to cause?
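The division behind the parallel argument, as a quick sketch (taking the comment's rent delta and GiveWell figure as given):

```python
# Back-of-envelope: annual rent delta divided by GiveWell's best-guess
# "cost per outcome as good as averting the death of an individual under 5" (AMF).
annual_rent_delta = 11_000   # $/year, figure used above
cost_per_life_equiv = 1_700  # $, GiveWell's estimate for AMF
print(round(annual_rent_delta / cost_per_life_equiv, 1))  # ≈ 6.5
```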

In general, I think the model EAs should be using for thinking about giving things up is to figure out how much sacrifice we're willing to make, and then figure out for that level of sacrifice what options do the most good. Simply saying "X has harm and so we should not do it" turns into "if there's anything that you don't absolutely need, or anything you consume where there's a slightly less harmful version, you must stop".

Comment by jeff_kaufman on Why I'm Not Vegan · 2020-04-10T12:52:42.447Z · score: 6 (3 votes) · EA · GW

Sorry, I forgot this would be crossposted here automatically, and this version was (until just now) missing an edit I made just after publishing: "how many animals" should have been "how many continuously living animals". Since animal lives on factory farms are net negative, and their ongoing suffering is a far bigger factor than their deaths, I don't care about the number of individual animals but about the number of animal-days. So I wouldn't see breeding pigs that produced twice as much meat and lived twice as long as an improvement, though perhaps you would?

The numbers are Hurford's:

36 days of suffering via beef
8 days of suffering via dairy
44 days of suffering via pork
554 days of suffering via chicken meat
347 days of suffering via eggs
76 days of suffering via turkey
949 days of suffering via aquacultured fish

but expressed in the much more natural units of continuously living animals rather than animal-days per human-year.
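The unit conversion is just the total animal-days divided by 365; a rough sketch with the numbers above:

```python
# Convert animal-days of suffering per human-year of eating into
# "continuously living animals" by dividing by 365 days/year.
days = {"beef": 36, "dairy": 8, "pork": 44, "chicken meat": 554,
        "eggs": 347, "turkey": 76, "aquacultured fish": 949}
total_days = sum(days.values())    # 2014 animal-days per human-year
print(round(total_days / 365, 2))  # ≈ 5.52 continuously living animals
```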

After quickly looking at the numbers you posted, it doesn't look like any of them change this by more than a factor of two, and they don't look like they would change the bottom line of the argument at all. Do you disagree?

Comment by jeff_kaufman on New research on moral weights · 2019-12-05T04:26:22.214Z · score: 17 (9 votes) · EA · GW

Really excited to see this published. This is something I've heard people speculate about a lot over the years ("are people in places with higher child mortality more accepting of it, because it's more normal, and so are we overweighting deaths?") and it's helpful to see what the people we're trying to help actually value.

(And that's on top of us not being able to survey the children!)

Comment by jeff_kaufman on Updates from Leverage Research: history, mistakes and new focus · 2019-11-23T01:28:33.461Z · score: 13 (7 votes) · EA · GW

Thoughts, now that I've read it:

  • This sort of thing, where you try things until you figure out what's going on, starting from a place of pretty minimal knowledge, feels very familiar to me. I think a lot of my hobby projects have worked this way, partly because I often find it more fun to try things than to find out what people already know about them. This comment thread, trying to understand what frequencies forked brass instruments make, is an example that came to mind several times reading the post.
  • Not exactly the same, but this also feels a lot like my experience with making new musical instruments. With an established instrument in an established field the path to being really good generally looks like "figure out what the top people do, and practice a ton," while with something experimental you have much more of a tradeoff between "put effort into playing your current thing better" and "put effort into improving your current thing". If you have early batteries or telescopes or something you probably spend a lot of time with that tradeoff. Whereas in mature fields it makes much more sense for individuals to specialize in either "develop the next generation of equipment" or "use the current generation of equipment to understand the world".
  • How controversial is the idea that early stage science works pretty differently from more established explorations, and that you need pretty different approaches and skills? I don't know that much history/philosophy of science but I'm having trouble telling from the paper which of the hypotheses in section 4 are ones that you expect people to already agree with, vs ones that you think you're going to need to demonstrate?
  • One question that comes to mind is whether there is still early stage science today. Maybe the patterns that you're seeing are all about what happens if you're very early in the development of science in general, but now you only get those patterns when people are playing around (like I am above)? So I'd be interested in the most recent cases you can find that you'd consider to be early-stage.
  • And a typo: "make the same observers with different telescopes" should be "make the same observations with different telescopes".
Comment by jeff_kaufman on Updates from Leverage Research: history, mistakes and new focus · 2019-11-22T19:59:42.702Z · score: 7 (5 votes) · EA · GW

Looking over the website I noticed Studying Early Stage Science under "Recent Research". I haven't read it yet, but will!

Comment by jeff_kaufman on Updates from Leverage Research: history, mistakes and new focus · 2019-11-22T19:56:42.074Z · score: 15 (10 votes) · EA · GW

Thanks for writing this! I'm really glad Leverage has decided to start sharing more.

Comment by jeff_kaufman on Long-term Donation Bunching? · 2019-10-07T17:18:30.088Z · score: 2 (1 votes) · EA · GW

I wonder whether it would be worth building some standard terms for this and trying to make it a thing?

Comment by jeff_kaufman on Candy for Nets · 2019-09-30T10:38:51.775Z · score: 6 (5 votes) · EA · GW

Thanks! Though like all my blog posts it's already public on my website: https://www.jefftk.com/p/candy-for-nets

Comment by jeff_kaufman on Long-term Donation Bunching? · 2019-09-27T16:55:07.479Z · score: 5 (3 votes) · EA · GW

If ~50% of people drift away over five years it's hard to say how many do over 2-3, but it should be at least 25%-35% [1]. You need pretty large tax savings to risk a chance that large of actually donating nothing.


[1] 13%/year for five years gives you 50%, and I'd expect the rate of attrition to slowly decrease over time. 25% for two years and 35% for three assumes it's roughly linear.
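A quick sketch of that compounding, assuming a constant 13%/year attrition rate:

```python
# Fraction of people who have drifted away after n years at a constant
# 13%/year attrition rate (compounding, not linear).
def drifted(rate: float, years: int) -> float:
    return 1 - (1 - rate) ** years

for n in (2, 3, 5):
    print(n, round(drifted(0.13, n), 2))  # 2 → 0.24, 3 → 0.34, 5 → 0.5
```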

Comment by jeff_kaufman on How to Make Billions of Dollars Reducing Loneliness · 2019-08-31T01:45:40.756Z · score: 4 (2 votes) · EA · GW
> Facebook and Google have an incentive to track their users because they sell targeted advertising.

Even without ads they would have a very strong reason for tracking: trying to make the product better. Things you do when using Facebook are all fed into a model trying to predict what you like to interact with, so they can prioritize among the enormous number of things they could be showing you.

Comment by jeff_kaufman on If physics is many-worlds, does ethics matter? · 2019-07-10T18:58:53.873Z · score: 11 (5 votes) · EA · GW
> For every decision I've made, there's a version where the other choice was made.

Is that actually something the many-worlds view implies? It seems like you're conflating "made a choice" with "quantum split"?


(I don't know any of the relevant physics.)

Comment by jeff_kaufman on EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? · 2019-06-18T02:10:46.536Z · score: 23 (13 votes) · EA · GW

One group I'm especially interested in is people who were active in EA, took the GWWC pledge, and then drifted away (eg). This is a group that likely mostly didn't take the EA Survey. I would expect that after accounting for this the actual fraction of people current on their pledges would be *much* lower.

Since we don't know the fraction of people keeping their pledge to even the nearest 10%, the survey I would find most useful would be a smallish random sample. Pick 25 GWWC members at random, and follow up with them. Write personalized handwritten letters, place a phone call, or get a friend to contact them. This should give very low non-response bias, and also good qualitative data.

Comment by jeff_kaufman on There's Lots More To Do · 2019-06-11T19:59:45.370Z · score: 8 (2 votes) · EA · GW

Other people being misled is how I read "Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place."

Comment by jeff_kaufman on There's Lots More To Do · 2019-05-31T01:12:32.570Z · score: 12 (7 votes) · EA · GW

I don't think the post is correct in concluding that the current marginal cost-per-life-saved estimates are wrong. Annual malaria deaths are around 450k, and if you gave the Against Malaria Foundation $5k * 450k ($2.3B) they would not be able to make sure no one died from malaria in 2020, but that still wouldn't be much evidence that $5k was too low an estimate for the marginal cost. It just means that AMF would have lots of difficulty scaling up so much, that some deaths can't be prevented by distributing nets, that some places are harder to work in, etc.

It does mean that big funders have seen the current cost-per-life saved numbers and decided not to give those organizations all the money they'd be able to use at that cost-effectiveness. But there are lots of reasons other than what Ben gives for why you might decide to do that, including:

  • You have multiple things you care about and are following a strategy of funding each of them some. For example, OpenPhil has also funded animal charities and existential risk reduction.
  • You don't want a dynamic where you're responsible for the vast majority of a supposedly independent organization's funding.
  • You think better giving opportunities may become available in the future and want to have funds if that happens.
Comment by jeff_kaufman on There's Lots More To Do · 2019-05-30T00:14:10.460Z · score: 12 (4 votes) · EA · GW

I agree the distribution would be interesting! But it depends how many such opportunities there might be, no? What about:

"Imagine that over time the low hanging fruit is picked and further opportunities for charitable giving get progressively more expensive in terms of cost per life saved equivalents (CPLSE). At what CPLSE, in dollars, would you no longer donate?"

Comment by jeff_kaufman on Why does EA use QALYs instead of experience sampling? · 2019-04-25T18:03:30.022Z · score: 12 (5 votes) · EA · GW

I tried experience sampling myself for about a year and a half (intro, conclusion) and it made me much more skeptical of the system. I'm just not that sure how happy I am at any given point:

> When I first started rating my happiness on a 1-10 scale I didn't feel like I was very good at it. At the time I thought I might get better with practice, but I think I'm actually getting worse at it. Instead of really thinking "how do I feel right now?" it's really hard not to just think "in past situations like this I've put down '6' so I should put down '6' now".

And:

> I don't have my phone ping me during the night, because I don't want it to wake me up. Before having a kid this worked properly: I'd plug in my phone, which turns off pings, promptly fall asleep, wake up in the morning, unplug my phone. Now, though, my sleep is generally interrupted several times a night. Time spent waiting to see if the baby falls back asleep on her own, or soothing her back to sleep if she doesn't, or lying awake at 4am because it's hard to fall back asleep when you've had 7hr and just spent an hour walking around and bouncing the baby; none of these are counted. On the whole, these experiences are much less enjoyable than my average; if the baby started sleeping through the night such that none of these were needed anymore I wouldn't see that as a loss at all. Which means my data is biased upward. I'm curious how happiness sampling studies have handled this; people with insomnia would be in a similar situation.

I agree that DALY/QALY measurements aren't great either, though.

Comment by jeff_kaufman on Value of Working in Ads? · 2019-04-11T12:16:56.107Z · score: 9 (6 votes) · EA · GW
> I think the internet shouldn't run on ads. Making people pay for content ensures that the internet is providing real value rather than just clickbaiting

Before the internet you still had tabloids with shocking claims on the cover which, after you bought the paper and read it, you realized were overblown. If we moved away from ads, the specific case of "you pay, and afterwards you realize you were baited" would still exist.

> the dependence on advertising creates controversies where corporations compel content hosts to engage in dubious censorship.

The role of middlemen like Google diminishes this substantially. Since the advertisers and publishers aren't talking directly to each other, we end up with censorship only of the sorts of things advertisers generally agree on: "adult or mature, copyrighted, violent, or hateful content" -- AdSense policies: a beginner's guide

> Yes in theory people could always create and use paid websites, but there is too much inertia, both economically (network effects) and socially (people now feel very entitled to the Internet).

I'm not convinced this isn't just "people don't want to have to pay for things, and mostly don't mind ads that much". Newspapers, magazines, and cable TV all cost money and have ads. Analog radio sticks around on an ad-funded basis and people keep listening because it's incredibly low friction.

> The government can always shift tax and welfare policy to account for the additional financial burden on low income people.

Ok, but in practice the government mostly doesn't do this. Figuring out how to get it to do this would open up a *ton* of valuable policies, but we also need to make reasonable choices in the present.

Comment by jeff_kaufman on Salary Negotiation for Earning to Give · 2019-04-08T19:21:49.929Z · score: 26 (10 votes) · EA · GW

I've helped a few people negotiate salaries at tech companies, and my experience has been that people always bring me in too late. You want to have multiple active offers at the same time so you can get the companies to bid against each other. For example, when I came back to Google it went:

  • Google made me an offer
  • Facebook beat Google's offer
  • Amazon declined to match either offer
  • Google beat Facebook's offer
  • Facebook beat Google's offer
  • Google matched Facebook's offer

The ideal for you is lots of back and forth, which is the opposite of what they want. They want to cut it short and will say things like "You're asking for a lot, but I think I might be able to get it for you if I talk to my boss. If we can do $X can you confirm you'll accept it?" You want to be positive enough that they'll come back with an offer of $X, but not so positive that you have no negotiating room left if they accept it.

Comment by jeff_kaufman on Apology · 2019-03-23T12:05:03.121Z · score: 27 (22 votes) · EA · GW

> These steps, to my knowledge, are completely unprecedented for CEA.

I think CEA may have done something similar with Gleb, though for very different reasons: https://forum.effectivealtruism.org/posts/fn7bo8sYEHS3RPKQG/concerns-with-intentional-insights

Comment by jeff_kaufman on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-06T03:20:50.407Z · score: 14 (7 votes) · EA · GW

(Peter has been one of several people continuing to argue "earning to give is undervalued, most orgs could still do useful things with more funding".)

Comment by jeff_kaufman on EA Boston 2018 Year in Review · 2019-02-06T18:09:46.990Z · score: 9 (4 votes) · EA · GW

Aaron wrote:

> Jeff's fundraiser for Google...

The post has:

> For the past few years, Jeff Kaufman has led Google Cambridge’s EAs in successfully lobbying to direct that money toward GiveWell-recommended charities. At between a quarter-million and a half-million dollars each year, this may be the largest fundraising event for GiveWell charities in the world.

This is worded correctly but is a bit hard to interpret: I don't organize the fundraiser, I help organize the EA participation in it. Overall it looks like:

  • Each year, for the week of Giving Tuesday, there's a company wide system of fundraising for charities.
  • I coordinate EAs across the company in finding other EAs with compatible interests in their location/business unit and send out reminders about deadlines.
  • In the Cambridge office we have a bake-off where employees bake, sponsors put in some amount per good baked, other employees donate in order to taste them, and another set of sponsors matches these donations. The more you donate the more votes you get. This is the fundraiser the post talks about.
  • The bake-off organizers are people who think highly of GiveWell, partly related to the advocacy of Boston EAs, but I think don't identify as EAs themselves. They make the decision about what charities the bake-off should feature, and have chosen GiveWell top charities for the past several years.
  • The bake-off is built around matching and sponsorship, especially that the donations people make to eat/vote are matched. That matching has been provided by Google Cambridge's EAs, and one factor in the bake-off organizers choosing GiveWell charities is that we've been able to provide a large match pool.
  • It's not clear how counterfactual any of this is. Each year when I publicize it internally part of what I talk about is that my match isn't counterfactually valid, and I'll be donating my share whether or not others also donate. I use it as a time to talk about why you shouldn't expect matches like this to be counterfactual, and present it as "please join us in funding" and not "you can unlock extra funding".
Comment by jeff_kaufman on Simultaneous Shortage and Oversupply · 2019-01-28T00:55:23.266Z · score: 9 (5 votes) · EA · GW

My model is that if you want to move from generic software engineering to safety work that these would be very good next steps.

Comment by jeff_kaufman on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-25T14:52:02.894Z · score: 4 (3 votes) · EA · GW

I got the whole $20k: https://www.jefftk.com/p/facebook-donation-match

Comment by jeff_kaufman on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-10T22:37:20.771Z · score: 15 (9 votes) · EA · GW

FB had a limit of $20k/donor this year, and I think that's much more likely to go down than up. So depending on how much you're donating, there's not much reason to save more than that for Giving Tuesday.

There's also the 1% PayPal match (plus 2% cash back) that's been in December each year. At a 16%/year discount rate it's worth waiting a couple months for that 3% but not all year.
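A sketch of that trade-off, assuming the discount compounds and the bonus is a flat 3%:

```python
import math

# At a 16%/year discount rate, how long is a flat 3% bonus
# (1% PayPal match + 2% cash back) worth waiting for?
# Solve (1 + 0.16) ** t = 1 + 0.03 for t (in years), then convert to months.
t_years = math.log(1.03) / math.log(1.16)
print(round(t_years * 12, 1))  # ≈ 2.4 months: "a couple months but not all year"
```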

Comment by jeff_kaufman on Response to a Dylan Matthews article on Vox about bipartisanship · 2018-12-21T19:09:53.288Z · score: 14 (9 votes) · EA · GW

"Trump signed a good law this week. Yes, really." presents conflict: here's a person who you usually expect to be doing harmful things, and here they are doing something good. It can't make that hook without assuming something about their readers, and the hook draws people's interest. It's not an "unnecessary jibe"; it's the sort of thing that draws far more interest than a headline like "Trump signed a good law about HIV this week."

It's not a tradeoff I would make in my writing, but Vox is a left-leaning outlet and it seems pretty reasonable to me for them to write for a left-leaning crowd.

Comment by jeff_kaufman on EA Survey 2018 Series: Donation Data · 2018-12-12T16:58:04.459Z · score: 5 (5 votes) · EA · GW

The linear trend line in https://i.ibb.co/BgBkLZW/regression-graph.png looks like a poor match. Instead I'd model it as there being multiple populations, where one major population has a very steep trendline.

Comment by jeff_kaufman on 2018 GiveWell Recommendations · 2018-11-26T20:29:42.880Z · score: 7 (5 votes) · EA · GW

Fixed, thanks!

(Though the title with [Link] is only used on some views, for example not on the article-view page, so it's somewhat confusing.)

Comment by jeff_kaufman on Is The Hunger Site worth it? · 2018-11-26T15:27:07.005Z · score: 33 (15 votes) · EA · GW

A site that brings in money by showing ads generally makes under $10 per 1000 visits (CPM) so at most $0.01 per visit. Even if we make unrealistically positive assumptions (they're getting very high CPMs, they donate 100% of the money, the money goes to charities that are as valuable as the AMF) then $10 to the AMF does as much good as visiting the Hunger Site daily for three years. With the same unrealistically positive assumptions, if this takes you 10s each time then you're working for under $3.60/hr.
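The back-of-the-envelope numbers above work out as follows (all assumptions taken from the comment, and deliberately generous to the site):

```python
cpm = 10.0                    # generous: $10 per 1000 ad impressions
value_per_visit = cpm / 1000  # at most $0.01 raised per visit

# A $10 direct donation equals this many visits:
visits = 10 / value_per_visit          # 1000 visits
years_of_daily_visits = visits / 365   # ~2.7 years of daily visiting

# Implied hourly "wage" if each visit takes 10 seconds:
seconds_per_visit = 10
hourly = value_per_visit * 3600 / seconds_per_visit  # $3.60/hr
```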

So I think this is probably not worth looking into further. Volunteering to look at ads just doesn't bring in that much money so even if you got the best possible answers to your questions it wouldn't make sense.

(Similarly, I don't think trying to clone a site like this and run it targeted at GiveWell top charities would be worth it either.)

Comment by jeff_kaufman on Announcing new EA Funds management teams · 2018-10-31T15:01:32.063Z · score: 4 (4 votes) · EA · GW

Who would you have recommended for these spots?

My not-that-informed view is something like "there are a bunch of problems with ACE, but I'm not sure there's anyone better right now". But if you have people in mind who would have been better for this role that would be really helpful to know!

Comment by jeff_kaufman on Thoughts on short timelines · 2018-10-26T12:31:00.432Z · score: 4 (4 votes) · EA · GW

You can extend your argument to even smaller probabilities: how much effort should go into this if we think the chance is 0.1%? 0.01%? Or in the other direction, 50%, 90%, etc. In the extremes it's very clear that this should affect how much focus we put into averting it, and I don't think there's anything special about 1% vs 10% in this regard.

Another way of thinking about it is that AI is not the only existential risk. If your estimate for AI is 1% in the next ten years while your estimate for pandemics is 10%, versus 10% for AI and 1% for pandemics, that should also affect where you think people should focus.

Comment by jeff_kaufman on Additional plans for the new EA Forum · 2018-09-19T17:58:15.450Z · score: 2 (2 votes) · EA · GW

a few random old posts on a sidebar

In my case I just have a list of posts I thought were good and want more people to see, but in a forum with voting you could show highly upvoted older posts.

Comment by jeff_kaufman on EA Hotel with free accommodation and board for two years · 2018-08-27T17:16:46.941Z · score: 1 (1 votes) · EA · GW

I'm assuming that the counterfactual here is someone who wants to do unpaid direct work full time, has some funds available that could be used to either support themselves or could be donated to something high impact, and could either live in SF or Blackpool.

Is this the counterfactual for the hotel manager, or for a resident? I'm only trying to address the hotel manager role here, but I wouldn't expect the counterfactual for a hotel manager to be unpaid direct work.

I think the value of having a very talented full-time manager for your group house is not about reducing expenses, it's about creating a house culture that serves to multiply the impact of all the residents

This makes a lot of sense to me, but reading the Hotel Manager section the impression I get is that a hotel manager would be too busy to do much in that direction. There's no discussion of their role in setting culture, and a lot of operations work.

Comment by jeff_kaufman on EA Hotel with free accommodation and board for two years · 2018-08-24T18:14:25.934Z · score: 1 (1 votes) · EA · GW

These chores don't go away if you live in an expensive housing market or make a high income.

If you have a high income, though, you can pay other people to do them: for example, instead of cooking you could buy frozen food, buy restaurant food, or hire a cook.

I expect that these economies of scale effects will become even more valuable as the number of people in the hotel grows.

My experience with cooking is that above about 6-10 people the economies of scale drop off a lot. I really like living in a house with enough adults that I can cook about once a week, but as the number of people (and combinations of dietary restrictions) grows you get beyond what one person can cook easily.

Overall, though, it sounds like you're more arguing for "group houses are great" (which I agree on) and not "taking the hotel manager job has high counterfactual impact" (which I think is much more important?)

Comment by jeff_kaufman on Fact checking comparison between trachoma surgeries and guide dogs · 2018-08-20T12:58:04.561Z · score: 1 (1 votes) · EA · GW

It looks like GiveWell put that project on hold in January 2018: https://www.givewell.org/charities/IDinsight/partnership-with-idinsight/cataract-surgery-project

Comment by jeff_kaufman on Making EA groups more welcoming · 2018-08-09T00:44:42.017Z · score: 0 (0 votes) · EA · GW

Good point! I just measured some standard cheap new construction doors and found:

  • You lose 3/8" on each side to the jamb.

  • The door open to 90° loses you 1 5/8" on top of the jamb.

So a 30" door has a clear opening of 27 5/8" (or 29 1/4" with the door off).
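The clear-opening arithmetic checks out; a quick sketch using the measurements above:

```python
from fractions import Fraction as F

door = F(30)       # nominal 30" door
jamb = F(3, 8)     # 3/8" lost to the jamb on each side
leaf = F(13, 8)    # door open to 90° costs another 1 5/8"

clear_with_door = door - 2 * jamb - leaf  # 27 5/8"
clear_door_off = door - 2 * jamb          # 29 1/4"
print(clear_with_door, clear_door_off)    # 221/8 117/4
```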

Comment by jeff_kaufman on Leverage Research: reviewing the basic facts · 2018-08-06T12:29:29.037Z · score: 14 (10 votes) · EA · GW

Paradigm Academy was incubated by Leverage Research, as many organizations in and around EA are by others (e.g., MIRI incubated CFAR; CEA incubated ACE, etc.). As far as I can tell now, like with those other organizations, Paradigm and Leverage should be viewed as two distinct organizations.

See Geoff's reply to me above: Paradigm and Leverage will at some point be separate, but right now they're closely related (both under Geoff etc). I don't think viewing them as separate organizations, where learning something about Leverage should not much affect your view of Paradigm, makes sense, at least not yet.

Comment by jeff_kaufman on Leverage Research: reviewing the basic facts · 2018-08-06T12:27:08.319Z · score: 25 (20 votes) · EA · GW

Thanks for clarifying!

Two takeaways for me:

  • Use of both the "Paradigm" and "Leverage" names isn't a reputational dodge, contra throwaway in the original post. The two groups focus on different work and are in the process of fully dividing.

  • People using what they know about Leverage to inform their views of Paradigm is reasonable given their level of overlap in staff and culture, contra Evan here and here.

Comment by jeff_kaufman on Leverage Research: reviewing the basic facts · 2018-08-06T12:12:00.133Z · score: 16 (14 votes) · EA · GW

See Geoff's reply to me below: Paradigm and Leverage will at some point be separate, but right now they're closely related (both under Geoff etc). I think it's reasonable for people to use Leverage's history and track record in evaluating Paradigm.

Comment by jeff_kaufman on Leverage Research: reviewing the basic facts · 2018-08-04T14:34:54.126Z · score: 31 (31 votes) · EA · GW

Hi Geoff,

In reading this I'm confused about the relationship between Paradigm and Leverage. People in this thread (well, mostly Evan) seem to be talking about them as if Leverage incubated Paradigm but the two are now fully separate. My understanding, however, was that the two organizations function more like two branches of a single entity? I don't have a full picture or anything, but I thought you ran both organizations, staff of both mostly live at Leverage, people move freely between the two as needed by projects, and what happens under each organization is more a matter of strategy than separate direction?

By analogy, I had thought the relationship of Leverage to Paradigm was much more like CEA vs GWWC (two brands of the same organization) or even CEA UK vs CEA USA (two organizations acting together as one brand) than CEA vs ACE (one organization that spun off another, which now operates entirely independently with no overlap of staff, etc).

Jeff