Posts

Thoughts on The Weapon of Openness 2020-02-13T00:10:14.841Z · score: 25 (12 votes)
The Web of Prevention 2020-02-05T04:51:51.158Z · score: 19 (13 votes)
Concrete next steps for ageing-based welfare measures 2019-11-01T14:55:03.431Z · score: 36 (16 votes)
How worried should I be about a childless Disneyland? 2019-10-28T15:32:03.036Z · score: 24 (14 votes)
Assessing biomarkers of ageing as measures of cumulative animal welfare 2019-09-27T08:00:22.716Z · score: 73 (30 votes)

Comments

Comment by willbradshaw on Thoughts on The Weapon of Openness · 2020-02-17T16:04:31.182Z · score: 2 (2 votes) · EA · GW

But key to the argument is whether these problems inexorably get worse as time goes on.

Yeah, I was thinking about this yesterday. I agree that this ("inexorable decay" vs a static cost of secrecy) is probably the key uncertainty here.

Comment by willbradshaw on Thoughts on The Weapon of Openness · 2020-02-15T21:15:06.283Z · score: 1 (1 votes) · EA · GW

Thanks, I'll try this out next time!

Comment by willbradshaw on Thoughts on The Weapon of Openness · 2020-02-14T17:57:01.768Z · score: 2 (2 votes) · EA · GW

Thanks Greg! I think a lot of what you say here is true, and well-put. I don't yet consider myself very well-informed in this area, so I wouldn't expect to be able to convince someone with a considered view that differs from mine, but I would like to get a better handle on our disagreements.

I'm not persuaded, although this is mainly owed to the common challenge that noting considerations 'for' or 'against' in principle does not give a lot of evidence of what balance to strike in practice.

I basically agree with this, with the proviso that I'm currently trying to work out what the considerations to be weighed even are in the first place. I currently feel like I have a worse explicit handle on the considerations militating in favour of openness than on those militating in favour of secrecy. I do think these higher-level issues (around incentives, institutional quality, etc.) are likely to be important, but I don't yet know enough to put a number on that.

Given that, and given how little actual evidence Kantrowitz marshals, I don't think someone with a considered pro-secrecy view should be persuaded by this account. I do suspect that, if such a view were to turn out to be wrong, something like this account could be an important part of why.

Bodies that conduct 'secret by default' work have often been around decades (and the states that house them centuries), and although there's much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.

Do you think there is any evidence for institutional decay due to secrecy? I'm interested in whether you think this narrative is wrong, or just unimportant relative to other considerations.

My (as yet fairly uninformed) impression is that there is also evidence of plenty of hidden inefficiency and waste in secret organisations (and indeed, given that people in those orgs would be highly motivated to use their secrecy to conceal this, I'd expect there to be more than we can see). All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?

I don't know anything about the NSA, but I think Kantrowitz would claim the Manhattan project to be an example of short-term benefits of secrecy, combined with the pressures of war, producing good performance that couldn't be replicated by institutions that had been secret for decades (see footnote 7). So what is needed to counter his narrative is evidence of big wins produced by institutions with a long history of secret research.

Judging the overall first-order calculus, let alone weighing this against second order concerns (such as noted above), is fraught.

By "second order concerns", do you mean the proposed negative effect of secrecy on institutions/incentives/etc? Because if so that does seem to me to weigh more clearly in one direction (i.e. against secrecy) than the first-order considerations do. Though this probably depends a lot on what you count as first vs second order...

Comment by willbradshaw on Thoughts on The Weapon of Openness · 2020-02-13T00:12:02.128Z · score: 7 (4 votes) · EA · GW

The original version of footnote 8 (relating to how the narrative of the Weapon of Openness interacts with secrecy in private enterprise):

"There are various possible answers to this I could imagine being true. The first is that private companies are in fact just as vulnerable to the corrosive effects of secrecy as governments are, and that technological progress is much lower than it would be if companies were more open. Assuming arguendo that this is not the case, there are several factors I could imagine being at play:

  • Competition (i.e. the standard answer). Private companies are engaged in much more ferocious competition over much shorter timescales than states are. This provides much stronger incentives for good behaviour even when a project is secret.
  • Selection. Even if private companies are individually just as vulnerable to the corrosive effects of secrecy as state agencies, the intense short-term competition private firms are exposed to means that those companies with better epistemics at any given time will outcompete those without and gain market share. Hence the market as a whole can continue to produce effective technology projects in secret, even as secrecy continuously corrodes individual actors within the market.
  • Short-termism. It's plausible to me that, with rare exceptions, secret projects in firms are of much shorter duration than in state agencies. If this is the case, it might allow at least some private companies to continuously exploit the short-term benefits of secrecy while avoiding some or all of the long-term costs.
  • Differences in degrees of secrecy. If a government project is secret, it will tend to remain so even once completed, for national security reasons. Conversely, private companies may be less attached to total, indefinite secrecy, particularly given the pro-openness incentives provided by patents. It might also be easier to bring external experts into secret private projects, through NDAs and the like, than it is to get them clearance to consult on secret state ones.

I don't yet know enough economics or business studies to be confident in my guesses here, and hopefully someone who knows more can tell me which of these are plausible and which are wrong."

Comment by willbradshaw on The Intellectual and Moral Decline in Academic Research · 2020-02-12T23:34:34.562Z · score: 1 (1 votes) · EA · GW

Unrelatedly, I'm quite enjoying watching the karma on this comment go up and down. Currently at -1 karma after 7 votes. Interesting data on differing preferences over commenting norms.

Comment by willbradshaw on The Intellectual and Moral Decline in Academic Research · 2020-02-12T23:31:36.520Z · score: 3 (3 votes) · EA · GW

Yeah, I don't want to imply that I strongly support the original claims. I think there are lots of very serious problems with incentives and epistemics in science, but nevertheless that both the incentives and the epistemics of scientists are unusually good in important ways.

(As an anecdote that probably shouldn't be taken as strong evidence, but that I found striking, I once tried out the 2-4-6 test on my lab, and IIRC something like two-thirds of members got the right answer first-time, and both group leaders present did so fairly quickly.)

I'm also very worried about the effects of corporate funding on research, at least in some domains.

Comment by willbradshaw on The Intellectual and Moral Decline in Academic Research · 2020-02-12T16:00:54.068Z · score: 2 (2 votes) · EA · GW

Thanks Gavin.

I'd be interested in seeing data on the distribution of causes of retraction and how it's changed over time. I know RetractionWatch likes to say that scientists tend to underestimate the proportion of retractions that are down to fraud. I do think some (many?) retractions are due to serious technical errors with no implication of deliberate fraud or misconduct. I suspect RetractionWatch has data on this.

I'm not claiming that it's inevitably true that more retractions indicate better community epistemics, but I do think it's a big part of the story in this case. A paper retraction requires someone to notice that the paper is worthy of retraction, bring that to the editors and, very often, put a lot of pressure on the editors (who are usually extremely reluctant to do so) to retract the paper. That requires people to be on the lookout for things that might need to be retracted and willing to put in the time and effort to get them retracted.

In the past this was very rare, and only extremely flagrant fraud or misconduct (or unusually honest scientists retracting their own work) led to retractions. Now, partly as a side consequence of the replication crisis but also of more general (and incomplete) changes in norms, we have a lot more people who spend a lot of time actively searching for data manipulation and other retraction-worthy things in papers.

This is just the science version of the common claim that a recorded increase (or decrease) in the rate of a particular crime, or a particular mental disorder, or some such, is mainly due to changes in how closely we're looking for it.

Comment by willbradshaw on The Intellectual and Moral Decline in Academic Research · 2020-02-11T22:43:16.892Z · score: 8 (6 votes) · EA · GW

Sure.

Taken at face value, the claim is that taxpayer funding and number of retractions have increased over time, at rates not hugely different from one another. I think both can almost entirely be accounted for by an increase in the total number of researchers. If you have more researchers producing papers, this will result in both a big increase in funding required and in number of papers retracted without any change in the quality distribution.

I would want to see evidence for a big increase in retractions per number of researchers, researcher hours or some other aggregative measure before taking this seriously as a claim that science has got worse over time. It's well-known that if you don't control for the total number of people in a place or doing a thing, all sorts of things will correlate (homicides and priests, ice-cream sales and suicides, etc.).
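To illustrate the normalisation point with a toy example (all figures below are invented, purely for illustration): if the researcher population grows severalfold while per-researcher funding and per-researcher retraction rates stay constant, total funding and total retractions both rise sharply and track each other, even though the quantity that would actually indicate decline never changes.

```python
# Toy sketch with invented numbers: constant per-researcher funding and
# retraction rate, growing researcher population.
FUNDING_PER_RESEARCHER = 100_000      # hypothetical, in dollars
RETRACTIONS_PER_RESEARCHER = 0.0005   # hypothetical constant rate

researchers_by_year = {1970: 50_000, 1990: 200_000, 2010: 450_000}  # invented

for year, n in researchers_by_year.items():
    funding = n * FUNDING_PER_RESEARCHER
    retractions = n * RETRACTIONS_PER_RESEARCHER
    print(f"{year}: funding ${funding:,.0f}, retractions {retractions:,.0f}, "
          f"rate per researcher {RETRACTIONS_PER_RESEARCHER:.4%}")

# Both totals grow ~9x from 1970 to 2010 while the per-researcher rate is flat,
# so the aggregate correlation tells us nothing about declining quality.
```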

More substantively, I also disagree with the claim that a big increase in retractions is evidence of scientific decline. Insofar as there has been any increase in the per-capita rate of retractions, I regard this as a sign of increasing epistemic standards, and think both editors and scientists are still way too reluctant to retract papers. It's like the replication crisis: the problems have always been there, but we only started paying attention to them recently. That's a good sign, not a bad one.

Comment by willbradshaw on The Intellectual and Moral Decline in Academic Research · 2020-02-11T22:32:03.333Z · score: 4 (5 votes) · EA · GW

In short, maybe the author is burnt out or has only ever worked with poor colleagues? Or hasn't been funded in a while?

I downvoted this comment based on this paragraph. Arch speculations that a position taken is probably due to inadequacies and personal frustrations of the author are nearly always uncharitable, unwarranted and, in my experience, well-correlated with sloppy and defensive thinking.

No, the guy probably isn't just mad because he couldn't cut it in academia.

Comment by willbradshaw on The Intellectual and Moral Decline in Academic Research · 2020-02-08T17:33:29.486Z · score: -3 (6 votes) · EA · GW

From 1970 to 2010, as taxpayer funding for public health research increased 700 percent, the number of retractions of biomedical research articles increased more than 900 percent, with most due to misconduct.

https://www.tylervigen.com/spurious-correlations

Comment by willbradshaw on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-08T17:22:02.506Z · score: 1 (1 votes) · EA · GW

I'm sympathetic to this consideration, but I think it applies much more strongly to romantic/sexual relationships than friendships.

Comment by willbradshaw on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T17:30:10.854Z · score: 22 (20 votes) · EA · GW

This policy seems too lax to me. In particular, I'm fairly surprised at the very narrow range of circumstances in which individual fund members would recuse themselves. It seems fairly obvious to me that being in a close friendship or active collaboration with someone should require recusal and being personal friends with someone should require disclosure.

In general I feel CoI policies should err fairly strongly on the side of caution, whereas this one does the opposite. I'd appreciate some discussion on why this is the case.

Comment by willbradshaw on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-06T03:05:57.769Z · score: 5 (3 votes) · EA · GW

I did some research on hand hygiene and wrote a quick summary on Facebook and LessWrong if anyone is interested. Not sure it's really appropriate for a top-level post on the EA Forum but I do think it's pretty useful to know. Most people (including me a few days ago) are very bad at washing their hands.

Comment by willbradshaw on Please take the Reducing Wild-Animal Suffering Community Survey! · 2020-02-03T20:28:19.868Z · score: 12 (7 votes) · EA · GW

I didn't fill in the survey yet, but I just wanted to register my surprise that there isn't a question on terminology in the survey. It seems like it would be useful to get anonymous data on how many people involved prefer WAS vs WAW vs some other thing.

Comment by willbradshaw on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-03T20:22:18.200Z · score: 4 (3 votes) · EA · GW

I feel much more worried about being in a crowded gym than about immune effects of exercise. People are really bad at (a) cleaning gym equipment and (b) washing their hands.

To be clear, I'd guess this is less bad than many other social situations (bars, public transport, restaurants), as well as carrying a much clearer health upside. But perhaps there is an argument for switching to more solitary forms of exercise in outbreak situations?

And obviously you should not go to the gym if you yourself are sick (people apparently do this)!

Comment by willbradshaw on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T13:59:58.469Z · score: 4 (3 votes) · EA · GW

The possibility of a long incubation period (and especially a long-ish pre-symptomatic infectiousness period) is especially worrying to me, as my impression is that the lack of such a period was a key reason SARS didn't take off more than it did.

That said, I'm not sure it's clear yet that there is a long pre-symptomatic period. This article suggests we're not really sure about this yet. I'm expecting to get more information very soon, though.

Comment by willbradshaw on [Link] "Moral understanding and moral illusions" · 2020-01-27T21:01:20.004Z · score: 3 (2 votes) · EA · GW

I'm not very sympathetic to this response when you do exactly the thing I said we shouldn't do in the second paragraph of your post.

If you are sincere in this reply, I think you should delete that paragraph.

Comment by willbradshaw on [Link] "Moral understanding and moral illusions" · 2020-01-27T03:15:05.346Z · score: 1 (5 votes) · EA · GW

Regardless of one's views on scientific publishing and SciHub, I think it's a bad idea to openly encourage illegal behaviour on this forum.

Comment by willbradshaw on More info on EA Global admissions · 2020-01-15T23:19:16.584Z · score: 3 (3 votes) · EA · GW

I also think this is a valuable analogy.

Comment by willbradshaw on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-15T20:50:34.511Z · score: 4 (4 votes) · EA · GW

For what it's worth, I think it was pretty much a model EA Forum comment and am disappointed that people downvoted it so strongly. It seemed to be doing the difficult and valuable thing of "trying to work out what is actually the best thing to do" and met all the default commenting guidelines. It also didn't come across to me as at all antagonistic.

Comment by willbradshaw on Tentative Thoughts on Speech Policing · 2020-01-06T22:08:50.133Z · score: 3 (3 votes) · EA · GW

FWIW, this understates his position and the controversy. It's not just "extremely" disabled babies, but infants with basically any disability, due to a replaceability argument.

Can you cite this? I heard him talk about this in public (in Germany) and he focused strongly on the "extremely disabled" aspect. I'd be interested in how he makes the more general case, and how strongly he makes it.

Comment by willbradshaw on Assessing biomarkers of ageing as measures of cumulative animal welfare · 2020-01-06T02:17:04.936Z · score: 1 (1 votes) · EA · GW

Thanks Jane - good find! This seems consistent with Melissa Bateson's belief that telomere length is a more useful measure of welfare in juveniles than in adults.

Comment by willbradshaw on More info on EA Global admissions · 2019-12-27T14:57:33.802Z · score: 1 (1 votes) · EA · GW

I'm confused why you don't think randomisation would be faster than producing a complete ranking of candidates, but I also don't currently have reason to think the ranking is a limiting factor, so unless we get information to the contrary this isn't the main point of contention.

More importantly, I think we disagree on the last sentence. I think snap judgements between candidates that don't clearly differ dramatically in suitability are likely to be not significantly better than, and possibly worse than, chance.

Comment by willbradshaw on More info on EA Global admissions · 2019-12-27T10:55:23.198Z · score: 7 (9 votes) · EA · GW

Why wouldn't randomisation be a good fit for EAG? I suspect that the ability of the organisers to finely distinguish between similarly-promising applicants is minimal anyway*, so a strategy of, say, roughly scoring applicants into buckets and then randomising among those who fall between "obvious shoo-in" and "clearly unsuitable" could work quite well, as well as being much quicker and easier for the organisers.

(This is roughly how the proposal to randomise scientific grantmaking would work: apply a basic check for suitability/competence and then randomise among those who make that cut. I think this would be a big improvement over the current system and would apply the same reasoning in many other domains with similar features, such as university admissions.)

* Not because I have a low opinion of the organisers, just because I think this is generally true.
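To make the bucketing-plus-lottery idea above concrete, here's a minimal sketch. It's purely illustrative: the score scale, thresholds and function name are my own invention, not anything EAG (or any grantmaker) actually uses.

```python
import random

def select_applicants(scores, capacity, admit_threshold=8, reject_threshold=3, seed=None):
    """Admit clear shoo-ins outright, reject clearly unsuitable applicants,
    and fill any remaining places by lottery from the middle bucket.
    `scores` maps applicant -> rough suitability score (hypothetical 0-10 scale)."""
    rng = random.Random(seed)
    admitted = [a for a, s in scores.items() if s >= admit_threshold]
    middle = [a for a, s in scores.items() if reject_threshold <= s < admit_threshold]
    rng.shuffle(middle)
    admitted += middle[:max(0, capacity - len(admitted))]
    return admitted

# Example: 'A' is a clear admit, 'E' is clearly unsuitable, and the two
# remaining places are filled at random from B, C and D.
scores = {"A": 9, "B": 7, "C": 5, "D": 4, "E": 1}
print(select_applicants(scores, capacity=3, seed=0))
```

The same structure would carry over to grantmaking or university admissions: all the judgement goes into the thresholds, and the lottery only operates where fine-grained ranking probably isn't reliable anyway.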

Comment by willbradshaw on Concrete next steps for ageing-based welfare measures · 2019-11-18T21:23:37.542Z · score: 3 (3 votes) · EA · GW

Sadly AWMs only really give a relative measure of welfare against a comparable genetic background: you can use them to say population A is ageing faster than population B and therefore has worse cumulative welfare, but not (at least currently) to obtain an absolute welfare measure for either population. That makes it difficult to see how they could be used to compare welfare between different species, including humans.

Comment by willbradshaw on EA Forum Prize: Winners for September 2019 · 2019-11-04T20:28:56.334Z · score: 5 (4 votes) · EA · GW

The current setup (with exactly three very large prizes for posts and many more small prizes for comments) does seem a bit odd to me, in that I expect it means many of the best contributions to the forum are not eligible for prizes. I can easily imagine that there are excellent posts that are better than many or all of the awarded comments but not quite good enough to make the top three, and these posts can't currently win anything.

I feel it might be good to permit excellent runner-up posts to win comment prizes as well, or otherwise to allow these posts to win small prizes.

Comment by willbradshaw on Notes on 'Atomic Obsession' (2009) · 2019-11-01T14:58:34.640Z · score: 1 (1 votes) · EA · GW

What's the "side of caution" in this case?

Comment by willbradshaw on Who runs the Forum? · 2019-10-30T12:18:31.250Z · score: 4 (3 votes) · EA · GW

Agree a clearer feedback policy and way to provide feedback would be helpful, as would some way of objectively aggregating feedback among users. :-)

Exist.io (a quantified self service I use) uses Changemap, which seems to serve a similar purpose.

Comment by willbradshaw on How worried should I be about a childless Disneyland? · 2019-10-28T19:31:31.771Z · score: 1 (1 votes) · EA · GW

I'm not sure I understand the first question. I don't really know what a "non-conscious being" would be. Is it synonymous with an agent?

My impression is that feeling lost is a very common response to consciousness issues, which is why it seems to me like it's not that unlikely we get it wrong and either (a) fill the universe with complex but non-conscious matter, or (b) fill it with complex conscious matter that is profoundly unlike us, in such a way that high levels of positive utility are not achieved.

The main response I can imagine for this at this time is something like "don't worry, if we solve AI alignment our AIs will solve this question for us, and if we don't things are likely to go much more obviously wrong". But this seems unsatisfactory here for some reason, and I'd like to see the argument sketched out more fully.

Comment by willbradshaw on EA Hotel Fundraiser 5: Out of runway! · 2019-10-26T12:39:20.080Z · score: 8 (6 votes) · EA · GW

Just wanted to note that the room is, in fact, ten-sided.

Comment by willbradshaw on What is wild animal suffering? · 2019-10-26T12:34:26.497Z · score: 1 (1 votes) · EA · GW

On a meta-level, it seems that a huge amount of this comes down (perhaps unsurprisingly) to strategy, framing, and target groups. This being the case, it might have been better to be more explicit about this in this post: "For [these reasons], Animal Ethics prefers to define WAS [thusly] when communicating with [target groups]. Other groups may make different communication decisions in other contexts."

Comment by willbradshaw on What is wild animal suffering? · 2019-10-26T12:26:23.075Z · score: 3 (3 votes) · EA · GW

Hi Oscar, thanks for your reply. Since I don't think there's any serious disagreement about the third point (about conservationism) I'll drop it and focus on the other two. :-) I'm also not going to address the within-EA aspect of the terminological dispute since I think we've more-or-less covered everything on both sides there.

You didn't link to the Kirkwood paper so I don't know exactly which definition he uses, but of the three possible interpretations you give in the OP I don't see a severe contradiction between 1 and 3 (that's just the difference between a thing and the study of that thing, which is generally clear from context), while the potential for confusion between 1 and 2 isn't any worse for WAW than for WAS; it just seems to be the case that people can interpret "wild animals" as either "animals in the wild" or "animals from undomesticated species (possibly in captivity)", and you need to make clear which one you mean. So I'm not currently buying this as an argument against WAW compared to WAS.

It's a reasonable point that if you're targeting traditional animal rights activists who oppose more welfarist approaches to animal issues then you might want to avoid welfarist language. I don't know much about this so I won't argue the point. I do (weakly) claim that when it comes to WAW we should be more concerned about reaching out to welfare scientists and conservationists than animal rights activists, and that our dominant language should reflect that. This might just be a WAI/AE strategic difference, though.

Regarding the scope of the term "wild animal suffering", I maintain that the most natural definition of the term is "suffering experienced by wild animals", without additional restrictions regarding the source of that suffering. Of course one can also clarify that one is also (even principally) concerned about non-anthropogenic harms, or that one thinks naturogenic harms are massively more neglected (I agree with this), but I think actually trying to restrict the scope of the term to that is likely to produce unneeded confusion, as well as throwing away an important on-ramp for getting people to care about natural harms to animals. In our experience at WAI, for example, being inclusive of anthropogenic harms has been very helpful at getting academic collaborators on board.

Finally, it seems to me that if we're taking a strictly deprivationist account of the harm of death (which I'm very sympathetic to), then death is included as a (potential) harm under WAW but not WAS; ceteris paribus, killing an animal might reduce its net welfare if its future would otherwise be good, but it's not going to increase its suffering.

[NB: As of this week I no longer work at WAI.]

Comment by willbradshaw on What is wild animal suffering? · 2019-10-22T19:15:14.459Z · score: 8 (5 votes) · EA · GW

[NB: Any opinions I express here are mine alone and not intended to represent the Wild Animal Initiative.]

There are several things I like about this post, including the clarification that it is difficult to strictly distinguish between naturogenic and anthropogenic harms, the explicit inclusion of urban animals, and the emphasis on individual context rather than species membership.

Nevertheless, there are a few things I disagree with here, regarding both the terms used and the way they are defined.

Firstly, I think the "wild animal suffering" (henceforth WAS) framing is worse than "wild animal welfare" (henceforth WAW) and should be largely abandoned in its favour. I think this for two main reasons:

  • In terms of the broader populace, I claim WAS sounds much stranger than WAW. The animal-welfare movement is well-established; the animal-suffering-reduction movement is not. Framing the issue as WAW places it firmly in the context of existing concerns in a way many more people can get behind. As such, I predict that this framing will make it easier to get buy-in from more mainstream scientists (which is essential to moving forward with welfare biology), as well as the general public.
  • In terms of the EA movement, I think WAW is more inclusive than WAS. Many people involved in the cause area are not negative utilitarians, and the WAS framing seems to assume that they are in a way that I don't think is helpful.

Both of these seem to me to be reasons why a WAS framing would make it more difficult to attract broad support for improving the lives of wild animals than a WAW framing. (I'm speaking from personal experience here: I became much more interested in participating once people started switching to WAW. I think that being put off by the negativist framing of WAS was a big part of this. I'm not sure how much I endorse this, but I do think it is true.)

I don't agree with the claim that the WAW framing is likely to cause confusion, so I don't find that counterpoint very compelling.

Secondly, I disagree with some of the boundaries drawn here around the concept of wild animal suffering / welfare. I think the term most naturally applies to anything that affects the suffering/welfare of wild animals, whether naturogenic or anthropogenic or something in between, and hence disagree with the claim here that WAS/W should refer only to harms that are "completely or partly natural". I also think including death or other non-welfarist harms in the definition is odd and confusing; if you're concerned about these I think a different term, such as "wild animal rights" or somesuch, would be preferable.

Thirdly, while I agree that there are important differences between traditional conservationist values and WAW (and personally don't think that species or ecosystems have more than instrumental value), I'd gently caution against overstating the opposition between these worldviews. It's my impression that, framed correctly, a substantial fraction of people in the conservation movement are sympathetic to concerns about the welfare of individual wild animals, and willing to consider including it as something to be considered when planning conservationist interventions. This was a big update for me when I learned about it, and I don't think it should have been. Conservationists are doing what they do because they love nature and, in many cases, because they love animals. This is important to keep in mind.

Comment by willbradshaw on Reality is often underpowered · 2019-10-22T11:45:01.469Z · score: 12 (9 votes) · EA · GW

[NB: I talked briefly to Greg in person about this last weekend, but felt it might be valuable to put this up here anyway for the purpose of public discussion / testing my beliefs.]

I have mixed feelings about this.

On the one hand, I really like the framing of reality being underpowered in certain contexts, and I think this post does a good job of explaining why this is often the case. I think the observation that we often have a lot of tacit data about the world that is hard to fit into explicit models but can nevertheless make non-data-driven expert predictions perform better than chance is well-made and well-taken.

Nevertheless, I feel that in a great many cases, non-quantitative, intuitive, first-principles-heavy analyses of the world very often fail; that their rate of failure may often be poorly correlated with their apparent compellingness; that non-quantitative experts overestimate the explanatory power of their work at least as much as (and probably more than) more data-driven analysts; and that a shift towards more explicit, quantitative, data-driven approaches is often among the best ways to distinguish real knowledge about the world from the pseudo-knowledge that I think is rampant in many fields of human enquiry.

As an example: I have several friends who are academic historians, and from time to time we've talked about cliometrics/data-driven approaches to history. The general attitude seems to be "yeah, seems cool if you can do it, but just try that in [my period of study]. There's no way you could build a decent quantitative model with that little data." While I've generally been too tactful to say this to their faces, my response to these sorts of claims has historically been that if the data is too sparse to do meaningful analysis on, it's probably also too sparse to draw any other conclusions more general than a simple existence proof ("this thing happened once"). Or, more pithily, "if you don't have enough data to know things, you should just admit it".

I now suspect this is too strong a stance, but I still think there is some important truth in it. My feeling is that there may well be "good reasons why expert communities in some areas haven’t tried to use data explicitly to answer problems in their field", but there are also many bad reasons, among the most common of which is that very few people in that field have strong quantitative skills. I suspect that experts in data-poor fields often lack the epistemic modesty or statistical know-how to admit the consequences of that paucity.

Comment by willbradshaw on What are the best arguments for an exclusively hedonistic view of value? · 2019-10-21T15:48:00.314Z · score: 1 (1 votes) · EA · GW

Why are pleasure and suffering fundamentally different? I hear this a lot but it's not at all obvious to me why this is the case. They certainly seem to share a great deal in common, for example in terms of their evolutionary origins, functional purpose, and apparently inherent (dis)preferability.

Obviously pleasure and suffering are fundamentally different in that one seems good and the other seems bad, but as I understand it the essential claim here is that they are fundamentally different in other key respects. Which respects are those?

Comment by willbradshaw on Problems in effective altruism and what to do about them · 2019-10-21T12:00:27.146Z · score: 8 (6 votes) · EA · GW

I'm generally pretty in favour of public criticism of EA orgs, and of public disputatiousness in general, but this piece is (a) quite long-winded and hard to read and (b) where I did get a good idea of what it was claiming, not especially compelling. A piece on the same themes that was 1/3 as long and better researched could have been valuable.

Comment by willbradshaw on Assessing biomarkers of ageing as measures of cumulative animal welfare · 2019-10-08T12:44:20.265Z · score: 4 (2 votes) · EA · GW

Maybe it would be helpful if I try to lay out my rough model of why fasting responses to limited food availability exist in the wild, and we can see if there's actually a disagreement here.

I certainly agree that "we should doubt that animals eat (and our ancestors ate) more than is best for their fitness". If they were eating more than was good for their fitness, you'd expect them to evolve to eat less. However, wild animals exist in a state of severe food insecurity, in which food may be abundantly available one day and scarce for weeks thereafter. It probably is quite difficult to have offspring while food is scarce, and probably not very valuable anyway since those offspring will be food-deprived during crucial developmental periods. So it makes sense to use what energy you have to maintain a healthy body, and wait for better times.

The response to DR would therefore be a "making the best of a bad situation" sort of thing: from a fitness perspective it would be better to eat lots of food, have lots of offspring, and die young, but since that option is unavailable due to food scarcity it is better to activate an energy-conserving fasting response that will keep you in better shape until the good times return.

Importantly, the claim is not that DR improves fitness. It is that it increases lifespan. Natural selection doesn't care about increased lifespan, or even increased healthspan, except insofar as it increases the number of descendants you have. And food deprivation is certainly very costly: DR mice show dramatically reduced fertility relative to AL (=eat-as-much-as-you-want) mice. However, they also show less age-related decline in fertility, so if you later put them back on an AL diet they are more fertile than mice of the same age that have been on AL the whole time. I think that summarises the evolutionary point of a fasting response pretty well.

Comment by willbradshaw on Assessing biomarkers of ageing as measures of cumulative animal welfare · 2019-10-05T11:47:49.675Z · score: 1 (1 votes) · EA · GW

Okay, so from this I think you mean the metabolic response to dietary restriction, not the actual restriction of diet.

If that's dangerous in expectation, why would it have evolved?

Comment by willbradshaw on Assessing biomarkers of ageing as measures of cumulative animal welfare · 2019-10-03T15:50:20.366Z · score: 1 (1 votes) · EA · GW

I find this comment a bit confusing, I'm afraid. Do you mean dietary restriction would be dangerous or that the metabolic response to DR would be dangerous? If the latter, why? Given that it seems to be an evolved fasting response I'd be somewhat surprised if it was bad for fitness overall.

Comment by willbradshaw on Assessing biomarkers of ageing as measures of cumulative animal welfare · 2019-10-02T12:29:58.947Z · score: 2 (2 votes) · EA · GW

Given all that...

Obviously, it's highly speculative at this point, but what would you guess the correlation coefficient is between cumulative welfare and biological ageing? How large does the correlation need to be before it's useful?

We'd need to weight the different exposures by how widespread and frequent they are: some potential exceptions (e.g. food or reproduction) would be much more important than others (e.g. addictive drugs). Given some sort of weighted measure of this kind, I'd guess a moderate negative correlation, with a pretty wide uncertainty. I'd be quite surprised if it turned out to be near-zero or positive, though.

I think a moderate correlation like this should definitely be enough to be useful in many or most cases, given a sufficiently large sample. However, it also depends what exposure you're actually interested in studying; if it turns out to be one of the exceptions then it doesn't really matter how rare those exceptions are. So I think a better idea of where exactly exceptions to the rule might lie in the space of potential experiences would be more useful than estimating the overall correlation.

Comment by willbradshaw on Assessing biomarkers of ageing as measures of cumulative animal welfare · 2019-10-02T12:08:56.938Z · score: 4 (3 votes) · EA · GW

These are good questions, thanks for commenting. I'll start with your more general question, then address the specific examples raised. Please be aware that everything I'm saying here is based on theory.

Let's summarise the overall hypothesis as being that cumulative welfare and rate of biological ageing are generally negatively correlated. In general, I'd expect counterexamples to this general idea to be fairly rare among exposures whose general form has been present in the environment long enough for the species to evolve in response to it: a species should evolve to become averse to damaging (and generally ageing-causing) exposures and attracted to protective ones.

Pro-ageing exposures might be hedonically positive on-net if (a) they provide a gain to short-term reproductive fitness that is large enough to outweigh the longer-term costs of accumulating damage, or (b) the exposure is new and misaligned with the animal's evolved incentives. An important potential example of the former that is discussed in the 2019 Bateson & Poirier review is reproduction, which is quite stressful and costly but obviously vital to reproductive fitness. Potential examples of the latter might be various addictive-but-damaging superstimuli, such as addictive drugs or wireheading. I'd expect examples of the latter group to be fairly rare in nature, but they could potentially be important for some captive populations.

The same general classification would apply to exposures that are anti-ageing but hedonically negative: they might be good for the condition of the body but bad for reproductive success, or they might be new exposures that are misaligned with the animal's evolved drives. It's not as easy for me to come up with examples for these. There's also potentially the issue of hormetic effects: damaging or aversive stimuli that provoke a response that is on-net beneficial to health and longevity. The pro-lifespan effect of dietary restriction and fasting is the most well-studied and important of these, and is probably the biggest issue with this idea I've thought about that didn't make it into the report.

So there are probably some exceptions to the overall hypothesis the sorts of methods proposed here would rest on. However, I expect these exceptions to be fairly rare for animals in their natural environment, and I'm actually sceptical of most of the candidates raised here:

  • Reproduction is obviously crucial to fitness, but while sex is clearly hedonically positive in many species, it's far from clear to me that reproduction as a whole is. It certainly doesn't seem to be clear in humans that having children is good for your happiness on-net, for example.
  • I'd be at least mildly surprised to learn that smoking is on-net hedonically positive, even in the first couple of decades of regular use. My impression is that many of the health effects kick in pretty quickly, and a lot (though not all) of the perceived pleasure of smoking is actually lifting of negative feelings that wouldn't be present if you weren't addicted to nicotine. So I suspect many or most smokers significantly overestimate how much net pleasure they get from smoking, though I'm not confident about this. The same applies to many other addictive drugs.
  • Insofar as castration is negative in humans I think that largely arises from psychological/social factors I'd expect to be largely missing from most animals. There's the pain of the actual procedure, of course, and some lost pleasure from sex, but apart from that it's not obvious to me that it's bad on-net. I don't actually know if we have any data on what the lives of castrated wild animals are like, though.
  • Dietary restriction is probably the potential exception I'm most worried about in the wild (though I'd guess it's not much of a problem when applying the technique in captive populations), but even here I'm not sure how severe it is. DR typically involves mild to moderate caloric deprivation, and AFAIK more serious starvation does not generally extend lifespan. I'm not sure the level of deprivation required to extend lifespan is all that hedonically negative after you've got used to it: this seems to be the reported experience of people on intermittent fasting, for example, though it might be different if it's out of your control. So while mild food deprivation might be mildly hedonically negative, it might not be strongly so, and so might not actually be net-negative when the hedonic gains of improved health etc. are taken into account. (Also, AFAIK there's not much evidence that DR increases longevity in humans.)

My level of uncertainty is high for most of these cases, and I'm largely falling back on the classic academic cop-out of "more research is needed". But suffice it to say that there aren't any obviously true and important exceptions I'm aware of. The main potential exceptions I'm worried about that might be important in the wild are reproduction and food deprivation, which I'd say definitely need to be looked into further.