Posts

Valuing lives instrumentally leads to uncomfortable conclusions 2022-09-04T21:32:17.455Z
[edited] Inequality is a (small) problem for EA and economic growth 2022-08-08T09:42:37.741Z
Cause area: climate adaptation in low-income countries 2022-07-12T18:11:47.140Z
General equilibrium thinking 2022-06-11T19:27:09.920Z

Comments

Comment by Karthik Tadepalli (therealslimkt) on William MacAskill - The Daily Show · 2022-09-30T16:27:15.967Z · EA · GW

I just don't think an accent is identical to other forms of presentation. Accents are deeply personal and cultural. When I debated in high school for a non-Indian audience, we were repeatedly told in euphemistic terms that our accents made us less "compelling". It was deeply demoralizing to know that not being from Eton made us worse to listen to, and I know people who consciously changed their accent because of it.

Now that my accent has become Americanized after years of living in the US, it is genuinely painful and isolating to meet Indian people who assume I grew up in the US because of how I sound. I have lost something of my connection to India because of that change. I listen to myself and I sometimes wonder who the hell is speaking.

Sidenote: since you've essentially removed the original comment, some of the context has been lost. In particular the thing that ticked me off the most was not you saying that some people might not understand Will, but that his accent "might be something to work on".

Comment by Karthik Tadepalli (therealslimkt) on William MacAskill - The Daily Show · 2022-09-29T20:48:46.456Z · EA · GW

I understand this is a question asked in good faith out of concern about comprehensibility. Nonetheless, I downvoted because I think this form of discussion is generally bad. I don't think it's okay for us to tell people - even community leaders like Will - how they should sound, any more than we should opine on how they look. The New Yorker profile has examples of how weird this can get, with Will asking his friends if he should get dental surgery to be a more appealing public figure. Discussing how to engineer a person into the perfect PR machine has limits.

I understand that comprehensibility is important. But the overlap with accent is not that large - comprehensibility is also about diction, pace, modulation, command over language, etc.

Comment by Karthik Tadepalli (therealslimkt) on Public-facing Censorship Is Safety Theater, Causing Reputational Damage · 2022-09-23T10:59:50.443Z · EA · GW

I think this makes a lot of sense for algorithmic regulation of human expression, but I still don't see the link to algorithmic expression itself. In particular I agree that we can't perfectly measure the violence of a speech act, but the consequences of incorrectly classifying something as violent seem way less severe for a language model than for a platform of humans.

Comment by Karthik Tadepalli (therealslimkt) on Why Wasting EA Money is Bad · 2022-09-23T10:29:16.160Z · EA · GW

It's hard to stop this argument from heading down the Dead Children Currency route. I think your heuristic that we should try to balance convenience with not being wasteful is right, and the optimizing heuristic that we should only spend on things that are more effective than giving that money away is wrong. It feels wrong in the same way that it would feel wrong to say "we should only spend time on an activity if that activity is highly effective or would increase our productivity in EA work". EA is a community, for better or worse, and I think it's bad for communities to create norms that are bad for community members' well-being. I think a counterfactual norm of comparing all spending decisions to the potential impact of donating that money would be terrible for the well-being of EAs, especially very scrupulous EAs. Effective altruism in the garden of ends talks beautifully about the dark side of bringing such a demanding framework into everyday decisions.

That obviously does not mean all forms of EA spending are good, or even that most of them are. It's a false dichotomy to say the only options are to spend on useless luxuries or to obsess over Dead Children Currency. But it does suggest that we should take a more heuristic approach to feeling out when spending is too much spending. Yes, we shouldn't spend on Ubers just because EA is footing the bill; take the BART in most cases. But if it's late at night and you don't want to be on the BART, don't force yourself into a scary situation out of fear of wasting money that could save lives.

Comment by Karthik Tadepalli (therealslimkt) on Public-facing Censorship Is Safety Theater, Causing Reputational Damage · 2022-09-23T07:16:02.032Z · EA · GW

The most obvious (and generic) objection is that censorship is bad.

This strikes me as a weird argument because it isn't object-level at all. There's nothing in this section about why censoring model outputs (to make them more diverse, avoid slurs, or prevent targeting individuals and generating violent speech) is actually a bad idea. There was a Twitter thread of prompt injections that got a GPT-3-based remote-work bot to make violent threats against people. That was pretty convincing evidence to me that there is too much scope for abuse without some front-end modifications.

If you have an object-level, non-generic argument for why this form of censorship is bad, I would love to hear it.

This will create the illusion of greater safety than actually exists, and (imo) is practically begging for something to go wrong.

If true, this would be the most convincing objection to me. But I don't think this is actually how public perception works. Who is really out there who thinks Stable Diffusion is safe today, but would be convinced it's a problem if they saw it generate a violent image? Most people who celebrate Stable Diffusion or GPT-3 know they could be used for bad ends; they just think the good ends are more important or that the bad ends are fixable. I just don't see how a front-end tweak really convinces people who otherwise would have been skeptical. I think it's much more realistic that people see this as transparently a band-aid solution, and they just vary in how much they care about the underlying issue.

I also think there's a distinction between a model being "not aligned" and being misaligned. Insofar as a model is spitting out objectionable outputs, it certainly doesn't meet the gold standard of aligned AI. But I also struggle to see how it is actually, concretely misaligned. In fact, one of the biggest worries in AI safety is AIs being able to circumvent restrictions placed on them by the modeller. So an AI that is easily muzzled by front-end tweaks seems unlikely to be the biggest cause for concern.

Calling content censorship "AI safety" (or even "bias reduction") severely damages the reputation of actual, existential AI safety advocates.

This is very unconvincing. The AI safety vs AI ethics conflict is long-standing, goes way beyond some particular front-end censorship and is unlikely to be affected by any of these individual issues. If your broader point is that calling AI ethics AI safety is bad, then yes. But I don't think the cited tweets are really evidence that AI safety is widely viewed as synonymous with AI ethics. Timnit Gebru has far more followers than any of these tweets will ever reach, and is quite vocal about criticizing AI safety people. The contribution of front-end censorship to this debate is probably quite overstated.

Comment by Karthik Tadepalli (therealslimkt) on Cause Exploration Prizes: Announcing our prizes · 2022-09-09T21:09:29.261Z · EA · GW

I am not super clear on the delineation between DNT pesticides and suicide-risk pesticides and their relative importance so I'll defer to you.

Comment by Karthik Tadepalli (therealslimkt) on The Domestication of Zebras · 2022-09-09T19:21:48.038Z · EA · GW

That's fair but I don't think horses pre-1900 were treated in terrible ways. In particular the incentives for treating farm animals are VERY different from the incentives for treating service animals, whose usefulness depends on their continued health and quality of life.

Comment by Karthik Tadepalli (therealslimkt) on The Domestication of Zebras · 2022-09-09T18:48:24.682Z · EA · GW

Domestication isn't the same as exploitation, as wild animal welfare advocates will attest to. Dogs and cats and horses probably live better lives than all other animals.

Comment by Karthik Tadepalli (therealslimkt) on Cause Exploration Prizes: Announcing our prizes · 2022-09-09T18:43:20.891Z · EA · GW

How do you square that with the success of the Center for Pesticide Suicide Prevention in advocating for some pesticides to be banned in dozens of countries? Even if the CPSP wasn't instrumental in all of these cases, it doesn't seem to have been destroyed by the food and farming lobbies.

Comment by Karthik Tadepalli (therealslimkt) on Announcing the Change Our Mind Contest for critiques of our cost-effectiveness analyses · 2022-09-07T01:14:30.253Z · EA · GW

Exciting contest! I'd encourage the creation of a tag for this contest, to help people collect and read through entries that are posted on the Forum.

Comment by Karthik Tadepalli (therealslimkt) on The Base Rate of Longtermism Is Bad · 2022-09-05T18:06:42.574Z · EA · GW

is caring about the future really enough to meaningfully equate movements with vastly different ideas about how to improve the world?

Given that longtermism is literally defined as a focus on improving the long-term future, I think yes? You can come up with many vastly different ways to improve the long-term future, but we should think of the category as "all movements to improve the long term future" and not "all movements to improve the long term future focusing on AI and bio risk and value lock in".

Comment by Karthik Tadepalli (therealslimkt) on Interrelatedness of x-risks and systemic fragilities · 2022-09-04T23:32:18.268Z · EA · GW

I think this comment on another post about the polycrisis is pretty good and captures why I'm skeptical of the polycrisis as a concept. But I'm very suspicious of people downvoting a post that is not actually a substantive claim about the polycrisis, but rather an invitation to a collaboration (which can't possibly be negative, and could definitely be positive).

Comment by Karthik Tadepalli (therealslimkt) on Valuing lives instrumentally leads to uncomfortable conclusions · 2022-09-04T22:57:06.973Z · EA · GW

On your triage point, I think we can and do triage based on other criteria - namely, how much it costs to save a life. That feels a lot more in the spirit of triage than this specific comparison, which is much closer to a value judgment about what kinds of lives are worth living. Are we really okay with just judging that the lives of other people are less worth living than our own?

On the GCR point, that's fair enough - it is the argument that Beckstead makes. The post is just to say that I find it uncomfortable, and plausibly an argument that a less WEIRD and more international EA would reject. But I'm afraid that's just my wild speculation.

Comment by Karthik Tadepalli (therealslimkt) on Valuing lives instrumentally leads to uncomfortable conclusions · 2022-09-04T22:54:52.385Z · EA · GW

I certainly agree it has been shared by the vast majority of people historically and today. I do not think that's a sufficient justification. The vast majority of people historically and today think that animals don't matter, but we don't accept that.

I think most EAs would say reflexively that they care about life-years equally, independent of income (though not of health). I think this conclusion would be uncomfortable to those people. There are other ways to discount the life-years you save - the pedophile vs doctor example points to using some notion of virtue as a criterion for who we should save. I think the income-based difference should be deeply uncomfortable to people because of how it connects to a history (and continuing practice!) of devaluing the lives of people far away from us.

Comment by Karthik Tadepalli (therealslimkt) on Valuing lives instrumentally leads to uncomfortable conclusions · 2022-09-04T22:39:39.418Z · EA · GW

I guess it's on me for putting "repugnant conclusion" in the title, but what does this have to do with my post? My post is not about the Repugnant Conclusion as discussed in population ethics. It is just, in very literal terms, a conclusion which is repugnant.

Edit: I've changed the title from "repugnant conclusions" to "uncomfortable conclusions"

Comment by Karthik Tadepalli (therealslimkt) on To WELLBY or not to WELLBY? Measuring non-health, non-pecuniary benefits using subjective wellbeing · 2022-09-04T20:07:06.085Z · EA · GW

SWB measures seem very useful for some types of comparisons (measuring freedom, etc) but also really inadequate for others. In particular, I worry that they over-weight the immediate effects of interventions, and underweight the long-term effects. Here are some examples:

  1. Alice is a teen targeted by an education intervention that increases her test scores dramatically but also requires her to put in more effort. Alice likes getting good grades, but it's a very small part of her subjective wellbeing as a teenager, and it's also offset by the annoyance of having to spend more time on schoolwork, so she reports essentially the same SWB on her survey. Did the education intervention have zero value?
  2. Bob is a farm laborer who gets a free bus ticket to migrate to the city and work there. He earns higher income in the city and sends much of it back to his family. But being alone in the city is lonely and difficult. He is happy that he can provide for his family, but they are far away, and the difficulty of being a migrant is much more salient to him on any given day. He reports a reduced SWB on the survey. Was migration a harmful intervention?
  3. Chris lives in a generally polluted city. He dislikes pollution, but it's usually not so bad that he notices it very saliently on a day-to-day basis. Unbeknownst to him, an air-quality intervention reduces pollution by 10%, reducing his risk of respiratory disease over twenty years. But he wasn't aware of it, or even if he was, he wasn't thinking about risks twenty years from now, so he reports the same SWB as before. Did the air-pollution intervention have zero value?

One possible solution to all of these things would be to collect SWB data for a long period after the intervention. The problem is that SWB data have to be collected at a much higher frequency than income/health data. By their nature, SWB data are reliable when reporting on current state: all the studies of SWB validity I've seen show validity when people introspect on their current state of life, not their state of life a year ago. I think it's very likely that people recalling past SWB would be heavily biased by their current SWB. In contrast, income and health are more objective for people to recall, and they can also be collected from administrative data. So I don't think WELLBYs in practice could adequately measure interventions whose benefits are primarily long-term, with little to no short-term component.

Comment by Karthik Tadepalli (therealslimkt) on [Cause Exploration Prizes] Preventing stillbirths · 2022-09-04T16:33:44.395Z · EA · GW

Really convincing argument. Thanks for writing it up!

Comment by Karthik Tadepalli (therealslimkt) on The discount rate is not zero · 2022-09-03T17:30:30.928Z · EA · GW

While in reality the long-term catastrophe rate will fluctuate, I think it fair to assume it is constant (an average of unknowable fluctuations)

I don't think this is an innocuous assumption! Long term anthropogenic risk is surely a function of technological progress, population, social values, etc - all of which are trending, not just fluctuating. So it feels like catastrophe rate should also be trending long term.

Comment by Karthik Tadepalli (therealslimkt) on EAs should recommend cost-effective interventions in more cause areas (not just the most pressing ones) · 2022-09-03T05:40:15.932Z · EA · GW

This is likely if the maximum achievable cost-effectiveness is higher in global health than in other areas. If global health is just a uniquely high-leverage area - which is plausible, since so many people in poor countries suffer from easily preventable diseases with terrible effects - then it will simply have an exceptionally high ceiling compared to areas where the suffering is less preventable or less impactful.

Comment by Karthik Tadepalli (therealslimkt) on EAs should recommend cost-effective interventions in more cause areas (not just the most pressing ones) · 2022-09-02T14:40:10.675Z · EA · GW

I once had this argument with my friend, who convinced me against this position with the problem of redirecting resources. In theory, some resources are locked into a cause area (e.g. donors committed to abortion as a cause) and some are not (donors willing to change cause areas). Finding the best giving opportunity within a cause will increase the efficiency of the resources locked into that cause, but it will also encourage some amount of redirection. IIRC, when GiveDirectly introduced cash transfers for the US, their regular programs actually lost donations even though donations overall were way up during COVID. That demonstrates the worry that people will direct their money to less important areas if you give them a winning donation opportunity within that area.

Comment by Karthik Tadepalli (therealslimkt) on What books/blogs/articles were most impactful on you re: EA? · 2022-08-30T17:50:07.608Z · EA · GW

I heard about EA from a public debate that Will MacAskill did back in 2015, and I read Doing Good Better and thought it was basically sensible. But I didn't feel any compelling connection to EA until I read Small and Vulnerable.

Comment by Karthik Tadepalli (therealslimkt) on A critical review of GiveWell's 2022 cost-effectiveness model · 2022-08-29T00:10:54.291Z · EA · GW

I don't think this works to achieve the same end, because $/DALY is specific to measuring health benefits - it doesn't provide any way to capture increased income, consumption, etc. Nonetheless, evaluations like "AMF saves a life for $4,000" must have been derived from a $/DALY estimate at some point in the pipeline so I suspect it is being calculated somewhere already.

Comment by Karthik Tadepalli (therealslimkt) on Climate Change & Longtermism: new book-length report · 2022-08-27T01:17:37.015Z · EA · GW

I did not say Takakura has a discounting module and this is not changing the subject. What I said was:

I have an issue with Takakura and other models. All models I've seen measure climate impacts either as a) a social cost of carbon, whose value depends on a discount rate with positive pure time preference, or b) impacts by the end of the 21st century, which ignores impacts in future centuries.

Takakura has the latter problem, which is my issue with it as you use it.

If we are at a hingey time due to AI and bio, and climate does not affect the hingeyness of this century, then it does not have much impact on the long-term.

This doesn't seem right as a criterion, and it's also counter to some examples of longtermist success. For example, the campaign to reduce slavery improved the long-term future by eliminating a source of recurring damage. Climate mitigation likewise reduces a recurring damage over the long term: if that recurring damage each year is large enough, it can be an important longtermist area. My point is that the impacts of climate in the 21st century are probably a substantial underestimate of its total long-term impact. It's totally possible that when you account for the total impact it is still not important, but that doesn't follow automatically from climate's effect on hingeyness.

Comment by Karthik Tadepalli (therealslimkt) on Climate Change & Longtermism: new book-length report · 2022-08-26T20:20:06.493Z · EA · GW

If you think we are in the hingiest or most important century, then the impacts of climate change this century are in fact the main thing that determine its long-term effects

This is untrue if the things that make this century hingey are orthogonal to climate change. If this century is particularly hingey only because of AI development and the risk of engineered pandemics, and climate change will not affect either of those things, then the impacts of climate change this century are not especially important relative to future centuries, even if this century is important relative to future centuries.

All the indirect effects of climate that you consider are great-power conflict, resource conflict, etc. I have not seen arguments that this century is especially hingey for any of those factors. Indeed, resource conflict and great-power conflict are the norm throughout history. So the indirect effects of climate on these risk factors are relevant not only to the 21st century but to all centuries afterwards.

Takakura does not have a discounting module, but considering impacts only up to 2100 is functionally the same as giving zero weight to all impacts after 2100. Obviously impacts up to 2100 are relevant to longtermists - my point is that they could be a substantial underestimate of climate change's total long-term effects. And you could improve on that substantially with a model that considers 500 years or something similar. It's a baffling dichotomy to say that you can either consider impacts up to 2100 or over millions of years.

Comment by Karthik Tadepalli (therealslimkt) on Climate Change & Longtermism: new book-length report · 2022-08-26T20:10:24.439Z · EA · GW

I strongly upvoted this because it was at -4 karma when I saw it and that seems way too low. That said, I understand the frustration people feel at a comment like this that would lead them to downvote. It raises far too many questions for the OP to answer all at once, and doesn't elaborate on any of them enough for the OP to respond to the substance of any claim you make. This is the kind of comment that is very hard to answer, regardless of its merit.

Comment by Karthik Tadepalli (therealslimkt) on Climate Change & Longtermism: new book-length report · 2022-08-26T16:04:22.826Z · EA · GW

Climate-economy models factor in technological and economic progress, and yet their SCC estimates are hugely sensitive to the discount rate. The only way I can see this happening is if climate damages in the future are very large.

Comment by Karthik Tadepalli (therealslimkt) on Climate Change & Longtermism: new book-length report · 2022-08-26T14:35:15.861Z · EA · GW

I have an issue with Takakura and other models. All models I've seen measure climate impacts either as a) a social cost of carbon, whose value depends on a discount rate with positive pure time preference, or b) impacts by the end of the 21st century, which ignores impacts in future centuries. Both of these methods are incompatible with a longtermist ethical view.

If we wanted a longtermist-compatible estimate of climate damages, we would have to either calculate a social cost of carbon with zero pure time preference (discounting only for economic growth), or calculate total climate damages over hundreds of years. None of the studies I've seen do this. Even worse, we know that climate models are highly sensitive to the choice of discount rate, which is only possible if a large proportion of damages occur far in the future, so we could be underestimating that future damage by a lot. How do you deal with this issue when studying climate change from a longtermist perspective?
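
To make the sensitivity point concrete, here is a minimal sketch with a made-up damage path (the numbers are purely illustrative and not taken from any climate-economy model): when damages are back-loaded, their present value collapses as the pure-time-preference rate rises.

```python
# Illustrative only: a hypothetical damage path, not output from any climate-economy model.
def present_value(damages, rate):
    """Discounted sum of an annual damage stream at a constant annual rate."""
    return sum(d / (1 + rate) ** t for t, d in enumerate(damages))

# Hypothetical damages (arbitrary units) that ramp up linearly over 300 years.
damages = [0.01 * t for t in range(300)]

for rate in (0.0, 0.01, 0.03):
    print(f"pure time preference {rate:.0%}: PV = {present_value(damages, rate):.1f}")

# At 0% the later centuries dominate the total; at 3% they barely register,
# which is why the social cost of carbon is so sensitive to this one parameter.
```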

Comment by Karthik Tadepalli (therealslimkt) on A critical review of GiveWell's 2022 cost-effectiveness model · 2022-08-25T16:26:06.682Z · EA · GW

I think exercises like this are incredibly underrated and that the quantitative updates to the cost-effectiveness of each charity are the most important contribution of this post. But I do think that this would benefit a lot from putting the spreadsheet correction details in a technical appendix and dedicating more of the body to a substantive review of uncertainty analysis. You flag it as the most important disadvantage of GiveWell's framework and then only promise to talk about it in a future post - the result is that this post does not have that much substance for people to discuss or think about, and most of the 45 minute read is spreadsheet details. So I wish I could engage in a critical discussion but I don't think there's much to engage with, except that AMF is looking pretty juicy right now.

Comment by Karthik Tadepalli (therealslimkt) on Could a 'permanent global totalitarian state' ever be permanent? · 2022-08-23T17:33:11.295Z · EA · GW

My rough sense of the argument is "AI is immune to all evolution mechanisms so it can stay the same forever, so an AI-governed totalitarian state can be permanent."

AI domination is not the only situation described in this argument, though: it also considers human domination that is aided by AI. In this scenario, your argument about drift in the elite class makes sense.

Comment by Karthik Tadepalli (therealslimkt) on EAs underestimate uncertainty in cause prioritisation · 2022-08-23T17:29:37.642Z · EA · GW

I was going to write this post, so I definitely agree :)

In general, EA recommendations produce suboptimal herding behavior. This is because individuals can't choose a whole distribution over career paths, only a single career path. Let's say our best guess at the best areas for people to work in is that there's a 30% chance it's AI, a 20% chance it's biosecurity, a 20% chance it's animal welfare, a 20% chance it's global development, and a 10% chance it's something else. Then that would also be the ideal distribution of careers (ignoring personal fit concerns for the moment). But even if every single person had this estimate, all of them would be optimizing by choosing to work in AI, which is not the optimal distribution. Every person optimizing their social impact actually leads to a suboptimal outcome!
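
As a toy version of the paragraph above (the probabilities are the hypothetical ones I just listed, and "everyone picks the single most likely best cause" is my stylized assumption about individual optimization):

```python
# Toy model: shared beliefs about which cause area is best, versus the career
# distribution that results if every individual picks the argmax of those beliefs.
beliefs = {
    "AI": 0.30,
    "biosecurity": 0.20,
    "animal welfare": 0.20,
    "global development": 0.20,
    "other": 0.10,
}

# If community uncertainty were mirrored in careers, the allocation would equal the beliefs.
ideal_allocation = beliefs

# But each individual optimizer chooses the same argmax...
top_cause = max(beliefs, key=beliefs.get)

# ...so the realized allocation is degenerate: everyone works on one cause.
realized_allocation = {cause: (1.0 if cause == top_cause else 0.0) for cause in beliefs}

print("ideal:   ", ideal_allocation)
print("realized:", realized_allocation)
```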

The main countervailing force is personal fit. People do not just optimize for the expected impact of a career path, they select into the career paths where they think they would be most impactful. Insofar as people's aptitudes are more evenly distributed, this evens out the distribution of career paths that people choose and brings it closer to the uncertainty-adjusted optimal distribution.

But this is not a guaranteed outcome. It depends on what kind of people EA attracts. If EA attracts primarily people with CS/software aptitudes, then we would see disproportionate selection into AI relative to other areas. So I think another source of irrationality in EA prioritization is the disproportionate attraction of people with some aptitudes rather than others.

Comment by Karthik Tadepalli (therealslimkt) on Is GiveWell underestimating the health value of lead eradication? · 2022-08-21T16:38:41.812Z · EA · GW

I'm curious, why apply a 4% discount rate?

Comment by Karthik Tadepalli (therealslimkt) on The Parable of the Boy Who Cried 5% Chance of Wolf · 2022-08-15T20:54:03.208Z · EA · GW

Related: A Failure, But Not of Prediction. The best case for x-risk reduction I've ever read, and it doesn't even mention x-risks once.

Comment by Karthik Tadepalli (therealslimkt) on Internationalism is a key value in EA · 2022-08-14T20:16:31.054Z · EA · GW

But the idea that a person in another country is just as worth caring about as a person in your own country is a necessary premise for believing that that's a way to do more good per dollar. Most people implicitly would rather help one homeless person in their city than 100 homeless people in another country.

Comment by Karthik Tadepalli (therealslimkt) on How to Talk to Lefties in Your Intro Fellowship · 2022-08-14T16:31:43.461Z · EA · GW

Again, it informs only how they trade off health and income. The main point of DALYs/QALYs is to measure health effects, and in that regard EA grantmakers use off-the-shelf QALY estimates rather than calculating them. Even if they were to calculate them, the IDinsight study does not contain anything that could be used to calculate QALYs; it focuses solely on income vs health tradeoffs.

Comment by Karthik Tadepalli (therealslimkt) on How to Talk to Lefties in Your Intro Fellowship · 2022-08-14T16:29:51.909Z · EA · GW

That's also simply not true because EAs use off-the-shelf DALY/QALY estimates from other organizations all the time. And this is only about health vs income tradeoffs, not health measurement, which is what QALY/DALY estimates actually do.

Edit: as a concrete example, Open Phil's South Asian air quality report takes its DALY estimates from the State of Global Air report, which is not based on any beneficiary surveys.

Comment by Karthik Tadepalli (therealslimkt) on How to Talk to Lefties in Your Intro Fellowship · 2022-08-14T02:37:02.393Z · EA · GW

That seems a bit misleading, since the IDinsight study, while excellent, is not actually the basis for QALY estimates as used in e.g. the Global Burden of Disease report. My understanding is that it informs the way GiveWell and Open Philanthropy trade off health vs income, but nothing more than that.

Comment by Karthik Tadepalli (therealslimkt) on Prioritisation should consider potential for ongoing evaluation alongside expected value and evidence quality · 2022-08-13T16:58:51.758Z · EA · GW

Neat idea. I think this is probably true.

Comment by Karthik Tadepalli (therealslimkt) on To WELLBY or not to WELLBY? Measuring non-health, non-pecuniary benefits using subjective wellbeing · 2022-08-13T07:23:54.667Z · EA · GW

I am not convinced about WELLBYs, for a few reasons that I might write up later, but my primary response to this post is admiration for HLI's persistence and thoroughness in making the case for SWB measures. I have a very strong intuition that SWB measures are invalid, but each analysis you all do chips away at that intuition little by little. It's great to see such an ambitious project to change one of the most fundamental tools of EA.

Comment by Karthik Tadepalli (therealslimkt) on Common-sense cases where "hypothetical future people" matter · 2022-08-13T07:10:34.424Z · EA · GW

Even simpler counterexample: Josh's view would rule out climate change being important at all. Josh probably does not believe climate change is irrelevant just because it will mostly harm people a few decades from now.

I suspect the unarticulated view is that Josh doesn't place weight on lives in the far future, but he hasn't explained why lives 1000 years from now are less important than lives 100 years from now. I sympathize because I have the same intuition. But it's probably wrong.

Comment by Karthik Tadepalli (therealslimkt) on Economic losers: SoGive's review of deworming, and why we're less positive than GiveWell · 2022-08-12T20:51:22.937Z · EA · GW

So I went over the additional documents and I owe you an apology for being dismissive. There is indeed more to the analysis than I thought, and it was flippant to suggest that your or GiveWell's replicability adjustment was just "this number looks too high" and thus incorporates this. Having gone through the replicability adjustment document, I think it makes a lot more sense than I gave it credit for.

What I couldn't gather from the document was where exactly you differed from GiveWell. Is it only in the economic losers weighting? Were your components from weight gain, years of schooling and cognition the same as GiveWell's? In the sheet where you calculate the replicability adjustment, there is no factoring of economic losers as far as I can tell, so in order to arrive at the new replicability adjustment you must have had to differ from GiveWell in the mechanism adjustment, right?

Comment by Karthik Tadepalli (therealslimkt) on Internationalism is a key value in EA · 2022-08-12T17:17:59.642Z · EA · GW

This seems very right.

Comment by Karthik Tadepalli (therealslimkt) on Against longtermism · 2022-08-12T17:09:42.203Z · EA · GW

Yes, if you showed that longtermism does not increase the EV of decisions for future people relative to doing things as we normally would, that would be a strong argument against longtermism.

Comment by Karthik Tadepalli (therealslimkt) on How do independent researchers get access to resources? · 2022-08-12T08:03:07.067Z · EA · GW

1: scihub, scihub, scihub. If that fails you, look for the paper on any of the authors' websites. I can count on one hand the number of papers I couldn't get through either of these methods. I'm at a university and still don't bother to use the library.

Comment by Karthik Tadepalli (therealslimkt) on Against longtermism · 2022-08-12T04:17:43.198Z · EA · GW

Rhetorically, that seems strange given your own examples. Human rights are also not a "necessary condition" by your standard, since good things have technically happened without them. But they are, practically speaking, a necessary condition for strong norms of doing good things that respect human rights, such as banning slavery. So I think this is a bait-and-switch with the idea of a "necessary condition".

Comment by Karthik Tadepalli (therealslimkt) on Animal rights initiative with far-reaching consequences and possibly high chance of success · 2022-08-11T22:22:38.310Z · EA · GW

I am excited and hope this works. I want to be a slight party pooper by noting that while the number of animals slaughtered in Switzerland might fall because of this measure, production will likely shift at least slightly to other countries without such strict regulation.

But I think it is still an amazing measure to ensure animals live in more humane conditions and I would love to know of any organizations I can support for this.

Comment by Karthik Tadepalli (therealslimkt) on Against longtermism · 2022-08-11T08:59:22.262Z · EA · GW

Yes, if the post was simply arguing that we should look beyond longtermism for opportunities to solve big problems it would have more validity. As it stands the argument is a non sequitur.

Comment by Karthik Tadepalli (therealslimkt) on Against longtermism · 2022-08-11T07:01:46.349Z · EA · GW

In the past, all events with big positive impacts on the future occurred because people wanted to solve a problem or improve their circumstances, not because of longtermism.

Here's a parallel argument.

Before effective altruism was conceived, all events that generated good consequences occurred because people wanted to solve a problem or improve their circumstances, not because of EA. Since EA was not necessary to achieve any of those good consequences, EA is irrelevant.

The problem with both arguments is that the point of an ideology like EA or longtermism is to increase the likelihood that people take actions with big positive impacts on the future. The printing press, the wheel, and all the good things of the past came about before we held values like human rights and liberalism. That is not an argument that those values don't matter.

Comment by Karthik Tadepalli (therealslimkt) on Free-Ranging Dog Welfare in India as a Cause Area · 2022-08-10T22:46:28.392Z · EA · GW

The lives vs life-years thing shouldn't change our answer much. I would also not extend the lives of 30 dogs by 1 year rather than extend a human life by 1 year, and honestly the 1/100 conversion rate I mentioned is too high for me as well; I just used it as an example of how the comparison changes with a different conversion rate.

This seems to fall under the general confusion and difficulty of evaluating wild animal suffering, and I don't envy anyone who has to do that.

Comment by Karthik Tadepalli (therealslimkt) on [edited] Inequality is a (small) problem for EA and economic growth · 2022-08-10T21:09:14.576Z · EA · GW

  1. Got it, I think I misunderstood that point the first time. Yes, I am convinced that this is an issue that is worth choosing log over isoelastic for.
  2. Yes, I agree with the first order consequence of focusing more on saving lives. The purpose of this is just to compare different approaches that only increase income, and I was just suggesting that a high set point is a sufficient way to avoid having that spill over into unappealing implications for saving lives. It is true that a very high set point is inconsistent with revealed preference VSLs, though. I don't have a good way to resolve that. I have an intuition that low VSLs are a problem and we shouldn't respect them, but it's not one I can defend, so I think you're right on this.
  3. Agreed
  4. I'm on board with the idea of averaging over scenarios à la Weitzman. My original thinking was that a normalizing constant would shrink the scale of differences between the scenarios and thus reduce the effect of outlier etas. But I was conflating two different quantities - a high normalizing constant would reduce the % difference between scenarios, but not the absolute difference, which is what matters for expected value (see the sketch below).
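
A quick numeric sketch of that distinction (the utility numbers are arbitrary and purely illustrative):

```python
# Arbitrary scenario utilities, before and after adding a large normalizing constant k.
u_a, u_b, k = 10.0, 12.0, 1000.0

print((u_b - u_a) / u_a)                    # relative gap without the constant: 20%
print(((u_b + k) - (u_a + k)) / (u_a + k))  # relative gap with the constant: ~0.2%
print(u_b - u_a, (u_b + k) - (u_a + k))     # absolute gap: 2.0 either way, and that is what expected value uses
```
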
Comment by Karthik Tadepalli (therealslimkt) on [edited] Inequality is a (small) problem for EA and economic growth · 2022-08-10T20:55:31.190Z · EA · GW

You... are absolutely right. That's a very good catch. I think your calculation is correct, as the utility translation only happens twice - utility from productivity growth, which I adjusted, and utility from cash transfers, which I did not. Everything else is unchanged from the original framework.

You're definitely right that it matters whether this is the global average/median/poverty level. I think the issue stems from using productivity as the input to the utility function rather than income. This is not an issue for log utility if income is directly proportional to productivity, since the proportionality constant cancels out, but it is probably better to redo this with income statistics/income growth and see how that changes things.
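
To spell out the cancellation (a sketch under the assumption that income y is a fixed multiple k of productivity p):

```latex
% Under log utility, with y = k p and k constant:
u(y) = \ln(kp) = \ln k + \ln p
% so the utility difference between two scenarios with productivities p_1 and p_2 is
u(y_2) - u(y_1) = \ln p_2 - \ln p_1 ,
% i.e. the constant k drops out. For isoelastic utility with \eta \neq 1,
u(y) = \frac{(kp)^{1-\eta}}{1-\eta} ,
% utility differences are scaled by k^{1-\eta}, so the constant does not cancel.
```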

I'll make a note about this at the top of the post and update it with a more substantive change to the conclusion when I've dug into it further.