Is there a Price for a Covid-19 Vaccine? 2020-05-22T17:20:14.396Z · score: 11 (4 votes)
gavintaylor's Shortform 2020-05-03T19:44:17.547Z · score: 4 (1 votes)
The Intellectual and Moral Decline in Academic Research 2020-02-07T16:47:32.079Z · score: 22 (12 votes)
The illusion of science in comparative cognition 2019-11-02T19:17:18.322Z · score: 27 (9 votes)
IGDORE forum for discussing metascience 2019-10-23T18:28:07.141Z · score: 7 (3 votes)


Comment by gavintaylor on Helping wild animals through vaccination: could this happen for coronaviruses like SARS-CoV-2? · 2020-05-23T19:29:44.059Z · score: 8 (4 votes) · EA · GW

Good post, and this also seems to be a very opportune time to be promoting wild animal vaccination. A few thoughts:

To start with, programs of this kind would only be implemented after a vaccine is developed and distributed among human beings.

In relation to the current pandemic, the media often mentions that there are 7 coronaviruses that can affect humans and we don't have an effective vaccine for any of them. However, I was recently surprised to learn that there are several commercially available veterinary vaccines against coronaviruses - this raised my expectation that a human coronavirus vaccine could be successfully developed, and seems promising for animal vaccination as well.

I think it's worth thinking more about what level of safety testing goes into developing animal vaccines. The Hendra virus vaccine for horses might be an interesting case study for this. Hendra virus was discovered relatively recently in Australia, and can be transmitted from flying foxes (a megabat species), via horses, to humans, where it has a 60%+ case fatality rate. Fruit bat culling was very widely called for after a series of outbreaks in 2011, but the government decided to fund development of a horse vaccine instead (by unfortunate coincidence, a heat-wave killed a third of the flying fox population a few years later). A vaccine was developed within a year and widely administered soon after. However, some owners (particularly those of racing horses) reported severe side-effects (including death) and eventually started a class action against the vaccine manufacturer. I don't know if the anecdotal reports of side-effects stood up to further scrutiny (there could have been some motivated reasoning going on, similar to that used by human anti-vaxxers), but it seems plausible that veterinary vaccine development accepts, or does not even attempt to measure, much worse side-effects than would be accepted in a vaccine developed for humans. Given animals' inability to self-report, some classes of minor side-effects may only be noticed by owners of companion animals who are very familiar with their behaviour. While I don't think animal side-effects would be a consideration in developing vaccines for pandemic control or economic purposes, they seem more relevant in the context of vaccinating animals to increase their own welfare.

This may be the case especially for bats, because they have one of the highest disease burdens among wild mammals. Among other conditions, they are harmed by a number of different coronavirus-caused diseases. In fact, they harbor more than half of all known coronaviruses.

Why do bats have so many diseases (lots of which humans seem to catch)? This comment (which I found in an SSC article) frames the question in another way:

There are over 1,250 bat species in existence. This is about one fifth of all mammal species. Just to get a sense of this, let me ask a modified version of the question in the title:
"Why do human beings keep getting viruses from cows, sheep, horses, pigs, deer, bears, dogs, seals, cats, foxes, weasels, chimpanzees, monkeys, hares, and rabbits?"

This re-framing doesn't really change the problem, but it suggests that just viewing 'bats' as a single animal group comparable to 'cows' or 'deer' conceals the scope of species diversity involved.

I heard Jonathan Epstein speak at a panel discussion on biosecurity last year. He was in favour of disease monitoring and management in wild animal populations, and also seemed sympathetic to the idea of doing this from both human health and animal welfare standpoints. He might be interested in discussing this further, and is in a position where he could advocate for or implement these ideas.

Comment by gavintaylor on Interview with Aubrey de Grey, chief science officer of the SENS Research Foundation · 2020-05-23T16:54:48.907Z · score: 5 (2 votes) · EA · GW

Thanks for asking the questions I suggested. I found Aubrey's response to this question the most informative:

Has any effort been made to see if the effects of multiple treatments are additive, in terms of improved lifespan, in a pre-clinical study?
No, and indeed we would not expect them to be additive, because we would not expect any one of them to make a significant difference to lifespan. That’s because until we are fixing them all, the ones we are not yet fixing would be predicted to kill the organism more-or-less on schedule. Only more-or-less, because there is definitely cross-talk between different damage types, but still we would not expect that lifespan would be a good assay of efficacy until we’re fixing pretty much everything.

I don't have a background in anti-aging biology, and my intuition was that the treatments would have more of an additive effect. However, I agree with his view that there won't be much effect on total life-span until everything is fixed.
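To make the intuition behind his answer concrete, here is a toy simulation (my own construction, not from SRF or the interview - the numbers are arbitrary): if each unrepaired damage type independently kills 'more-or-less on schedule', lifespan is roughly the minimum of the per-type failure times, so extending one failure time while the others remain barely moves that minimum.

```python
import random

def lifespan(repair_boost, n_types=7, base=80.0, spread=5.0):
    """Lifespan = earliest failure among independent damage types.
    repair_boost[i] = extra years added to type i's failure time."""
    times = [random.gauss(base, spread) + repair_boost[i] for i in range(n_types)]
    return min(times)

random.seed(0)
N = 10_000
untreated = sum(lifespan([0.0] * 7) for _ in range(N)) / N
one_fixed = sum(lifespan([30.0] + [0.0] * 6) for _ in range(N)) / N
all_fixed = sum(lifespan([30.0] * 7) for _ in range(N)) / N
print(untreated, one_fixed, all_fixed)
```

In this sketch, fixing a single damage type adds well under a year of average lifespan (another type fails on schedule anyway), while fixing all seven captures nearly the full 30-year boost - a simple illustration of why lifespan is a poor efficacy assay until pretty much everything is being fixed.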

My feeling is that this may make the expected value of life-extension research lower (by decreasing the probability of success), given that all hallmarks need to be effectively treated in parallel to realize any benefit. If one proves much harder to treat in humans, or if all the treatments don't work together, then that reduces the benefit gained from treating the other hallmarks, at least as far as LEV is concerned. This makes SRF's approach of focusing on the most difficult problems seem quite reasonable and probably the most effective way to make a marginal contribution to life-extension research at the moment. Once all hallmarks are treatable pre-clinically in-vivo, then research into treatment interactions may become the most effective way to contribute (as noted, this will probably also be hard to get mainstream funding for).

Comment by gavintaylor on Bioinfohazards · 2020-05-22T21:39:45.354Z · score: 3 (2 votes) · EA · GW
Biosecurity researchers are often better-educated and/or more creative than most bad actors.

I generally agree with the above statement, and that the risks of openly discussing some topics outweigh the benefits of doing so. But I recently realised there are some people outside of EA who I think are generally well educated, probably more creative than many biosecurity researchers, and who often write openly about topics the EA community may consider bioinfohazards: authors of near-future science fiction.

Many of the authors in this genre have STEM backgrounds, often write about malicious-use GCR scenarios (thankfully, the risk is usually averted), and I've read several interviews where authors mention taking pains to do research so they can depict a scenario that represents a possible, if sometimes ambitious, future risk. While these novels don't provide implementation details, the 'attack strategies' are often described clearly and the accompanying narrative may well be more inspiring to a poorly educated bad actor looking for ideas than a technical discussion would be.

I haven't seen (realistic) fiction discussed in the context of infohazards before and would be interested to know what others think of this. In the spirit of the post, I'll refrain from creating an 'attention hazard' (or just advertising?) by mentioning any authors who I think describe GCRs particularly well.

Comment by gavintaylor on Why making asteroid deflection tech might be bad · 2020-05-21T14:46:49.460Z · score: 5 (4 votes) · EA · GW
Ignoring accidental deflection, which might occur when an asteroid is moved to an Earth or Lunar orbit for research or mining purposes

I haven't seen this mentioned in other discussions of asteroid risk (i.e. I don't think Ord mentions it in The Precipice), but I don't think it should be dismissed so quickly. If states/corporations develop technology to transfer asteroids to Earth orbit then this seems like it would represent an equivalent dual-use concern. Indeed, it may be even riskier than just developing tools for deflection, as activities like mining could provide 'cover' for maliciously aiming an asteroid at Earth. On the positive side, similar tools can probably be used for both orbital transfer and deflection, so the risky technology may also be its own counter-technology.

Comment by gavintaylor on gavintaylor's Shortform · 2020-05-03T19:44:17.746Z · score: 10 (8 votes) · EA · GW

At the start of Chapter 6 in The Precipice, Ord writes:

To do so, we need to quantify the risks. People are often reluctant to put numbers on catastrophic risks, preferring qualitative language, such as “improbable” or “highly unlikely.” But this brings serious problems that prevent clear communication and understanding. Most importantly, these phrases are extremely ambiguous, triggering different impressions in different readers. For instance, “highly unlikely” is interpreted by some as one in four, but by others as one in 50. So much of one’s work in accurately assessing the size of each risk is thus immediately wasted. Furthermore, the meanings of these phrases shift with the stakes: “highly unlikely” suggests “small enough that we can set it aside,” rather than neutrally referring to a level of probability. This causes problems when talking about high-stakes risks, where even small probabilities can be very important. And finally, numbers are indispensable if we are to reason clearly about the comparative sizes of different risks, or classes of risks.

This made me recall hearing about Matsés, a language spoken by an indigenous tribe in the Peruvian Amazon, that has the (apparently) unusual feature of using verb conjugations to indicate the certainty of information being provided in a sentence. From an article on Nautilus:

In Nuevo San Juan, Peru, the Matsés people speak with what seems to be great care, making sure that every single piece of information they communicate is true as far as they know at the time of speaking. Each uttered sentence follows a different verb form depending on how you know the information you are imparting, and when you last knew it to be true.
The language has a huge array of specific terms for information such as facts that have been inferred in the recent and distant past, conjectures about different points in the past, and information that is being recounted as a memory. Linguist David Fleck, at Rice University, wrote his doctoral thesis on the grammar of Matsés. He says that what distinguishes Matsés from other languages that require speakers to give evidence for what they are saying is that Matsés has one set of verb endings for the source of the knowledge and another, separate way of conveying how true, or valid the information is, and how certain they are about it. Interestingly, there is no way of denoting that a piece of information is hearsay, myth, or history. Instead, speakers impart this kind of information as a quote, or else as being information that was inferred within the recent past.

I doubt the Matsés spend much time talking about existential risk, but their language could provide an interesting example of how to more effectively convey aspects of certainty, probability and evidence in natural language.

Comment by gavintaylor on The Case for Impact Purchase | Part 1 · 2020-04-20T20:25:28.295Z · score: 2 (2 votes) · EA · GW
I think people who are using this type of work as a living should get paid a salary with benefits and severance. A project to project lifestyle doesn't seem conducive to focusing on impact.

Agreed. In my brief experience with academic consulting, one thing I've realised is that it is quite reasonable for contracted consultants to charge a 50-100% premium to account for their lack of benefits, on top of the markup for their utilisation ratio (usually 50%, so another x2).

So if somebody is expecting to earn a 'fair' salary from impact purchases compared to employment (or from any other type of short-term contract work), they should expect a funder to pay a premium for this compared to employing them (or funding another organisation to do so) - this doesn't seem like a good use of funds in the long-term if it is possible to employ that person.
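As a rough sketch of that arithmetic (the numbers are the illustrative ones above, not from any actual rate card, and the base rate is hypothetical): a 50-100% benefits premium stacked on a 50% utilisation ratio turns into a x3-x4 markup on the employee's base hourly rate.

```python
def contractor_rate(employee_hourly, benefits_premium, utilisation):
    """Hourly rate a contractor must charge to match an employee's
    effective compensation, given a benefits premium (e.g. 0.5-1.0)
    and the fraction of working hours that are billable."""
    return employee_hourly * (1 + benefits_premium) / utilisation

base = 50.0  # hypothetical employee hourly rate
low = contractor_rate(base, 0.5, 0.5)   # 50% premium -> x3 markup
high = contractor_rate(base, 1.0, 0.5)  # 100% premium -> x4 markup
print(low, high)
```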

Comment by gavintaylor on The Case for Impact Purchase | Part 1 · 2020-04-15T22:06:30.486Z · score: 13 (8 votes) · EA · GW

I'm interested in seeing a second post on impact purchases and would personally consider selling impact in the future. I have a few general comments about this:

  • Impact purchases seem similar to the value-based fees that are sometimes used in commercial consulting (instead of time- or project-based fees) and may be able to provide a complementary perspective. Although in business the 'impact' would usually be something easy to track (like additional revenue) and the return the consultant gets (like a percentage of revenue up to a capped value) would be agreed on in advance. I wonder if a similar pre-arrangement for impact purchases could work for EA projects that have quantifiable impact outcomes, such as through a funder agreeing to pay some amount per intervention distributed, student educated, etc. Of course, the tracked outcome should reflect the funder's true goals to prevent gaming the metric.
  • It seems like impact purchases would be particularly helpful for people coming into the EA community who don't yet have good EA references/prestige/track-record but are confident they can complete an impactful project, or who want to work on unorthodox ideas that the community doesn't have the expertise to evaluate. If they try something out and it works, then they can get funds to continue and preliminary results for a grant; if not, it's feedback to go more mainstream. For this dynamic to work, people should probably be advised to plan relatively short projects (say, up to a few months), otherwise they could spend a lot of time on something nobody values.
  • This could be a particularly interesting time to trial impact purchases used in conjunction with government UBI (if that ends up being fully brought in anywhere). UBI then removes the barrier of requiring a secure salary before taking on a project.
  • From my experience applying to a handful of early-career academic grants and a few EA grants, I agree that almost none provide any useful feedback (beyond accepted or declined), either for the initial application or for progress or completion reports. However, worse than having no feedback is that I once heard from a European Research Council (ERC) grant reviewer that their review committees are required to provide feedback on rejected applications, but are also instructed to make sure the feedback is vague and obfuscated so the applicant will have no grounds to ask for an appeal - which means the applicant gets feedback the reviewers know won't be useful for improving their project... Why do they bother???
  • With regards to implementation. I think one point to consider is the demand from impacters relative to funds of purchasers. At least in academia, funding is constrained and grant success rates are often <20%, and so grantees know that it is unlikely they'll get a grant to do their project (academic granters often say they turn away a lot of great projects they want to fund). If impact purchasers were similarly funding constrained relative to the number of good projects, I think the whole scheme would be less appealing as then even if I complete a great project, getting its impact bought would still involve a bit/lot of luck.
  • These posts about impact prizes and altruistic equity may also be of interest to consider.
Comment by gavintaylor on [Question] Resources for Mid-Career Updates · 2020-03-28T22:02:09.186Z · score: 6 (4 votes) · EA · GW

Have a particular strength? Already an expert in a field? Here are the socially impactful careers 80,000 Hours suggests you consider first.

Comment by gavintaylor on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-20T20:16:41.948Z · score: 4 (3 votes) · EA · GW

In the BBC today: Coronavirus: Robots use light beams to zap hospital viruses

Comment by gavintaylor on Why SENS makes sense · 2020-03-17T15:46:35.590Z · score: 2 (2 votes) · EA · GW

Sure, I think the key questions would be:

-Of the treatments currently being developed (in reference to the list on, is it likely that treatments for multiple hallmarks can be used in parallel?

--Are there currently any observed or expected interactions between different treatments?

--Has any effort been made to see if the effects of multiple treatments are additive, in terms of improved lifespan, in a pre-clinical study?

-What side effects have been observed for the treatments currently in clinical trials?

It's interesting to know that recurring and more frequent treatments are going to be needed. That point hadn't been obvious to me before, but it could be important to consider in relation to the economics of scaling up mass anti-aging treatment - it's not like a one-off vaccination against a specific type of ageing damage, but rather a 'condition' that requires ongoing, and perhaps increasing, care.

Comment by gavintaylor on COVID-19 brief for friends and family · 2020-03-15T18:59:27.724Z · score: 3 (3 votes) · EA · GW

I was happy to see that I'm apparently not the only person who touches their face a lot and the BBC noted that many people even touch their face while giving official advice not to:

The main tips for how to avoid face touching were:

-Wear glasses on your face so you touch them instead.

-Make an effort to keep your hands clasped most of the time, so that touching your face is more of a conscious act that you'll notice and can choose to stop.

Comment by gavintaylor on Why SENS makes sense · 2020-03-09T21:57:05.721Z · score: 5 (2 votes) · EA · GW

Nice piece Emanuele, I felt that I actually got what LEV is and why we should aim to get there after reading this post, more so than after reading your previous ones. A general comment is that, from what the roadmap shows, it really seems like anti-aging research has progressed quite far (i.e. quite a few ongoing and some late-stage clinical trials) relative to the field's fringe nature and apparently limited funding.

In terms of questions, there is one thing that I think is fairly critical - how well do multiple interventions combine?

What SRF claims is that solving all the seven categories will probably lead to lifespans longer than the current maximum.

As I understand this, treatments for all of the categories are being developed independently. Is anybody looking to see if they can all be used in parallel? Could there be interactions between treatments that prevent this? It seems that the expected value of anti-aging research is only realised if it will, at some point, be possible to treat all the categories in parallel. Research into a treatment for one category that wouldn't be compatible with other treatments seems like it should receive much lower priority.

It seems like there could be ways to test this already. For instance, the roadmap shows many treatments are already at the pre-clinical in-vivo stage. If we start applying multiple therapies in-vivo, we can start to test how compatible they are. Do you know if that has been done?

Starting to test multiple therapies in-vivo could also provide some fundamental evidence about how the benefits of multiple therapies combine. At the moment the assumption seems to be that if individually treating, say, mitochondrial mutations and extracellular aggregates prolongs expected life by X and Y years respectively, then treating them both in combination will prolong life by X + Y years - but either negative or positive returns on the combination could occur. To be honest, I have some general scepticism about anti-aging research because ageing is very widely conserved in the animal kingdom (there are only a few animals with negligible senescence). It could be that there is some evolutionary pathway that negligibly senescent animals went down which is hard to cross over to even if we treat all the categories, so I have a weak prior that senescent animals will get diminishing returns from multiple therapies.

Another point that I think is worth discussing is how the damage repair approach affects the metabolic processes causing the damage.

Dr. de Grey always stresses how the damage repair approach, which he also calls "the maintenance approach", has a big advantage over geriatrics and the kind of biogerontology aimed at targeting the metabolic processes that are causing this damage.

For instance, if we treat an 80-year-old's telomere attrition, are we going to need to treat them again in the future? Are consecutive treatments going to need to occur at more regular intervals? I don't know much about how treatments affect the underlying metabolic processes (as noted, metabolism is very complicated), but it could be that these continue picking up pace even as the damage they cause is repaired. Knowing about this could also be important in assessing the value of LEV as a whole, particularly if treatments have dose-dependent side-effects. For instance, it may be that we can treat ageing out to 200 or so, but by then the rate of damage is so high that the treatment dose required is too strong to tolerate. This is probably an issue for SENS 2.0, but it also seems like an area where some in-vivo testing can provide useful information. If nothing else, finding that the frequency of therapy is expected to increase suggests that treatments with more tolerable side-effects might be preferred (where there is a choice).

These are both fairly technical issues compared to the other questions you proposed in the post, but I think they point towards some fairly crucial considerations about how the additivity and repeatability of therapies will affect the goal of LEV.

Comment by gavintaylor on COVID-19 brief for friends and family · 2020-03-09T15:00:46.095Z · score: 3 (2 votes) · EA · GW

In terms of hand sanitiser - in Brazil I've also found hand sanitiser is sold out or very expensive. However, here it is common to use 70% ethanol for household cleaning, and it is possible to buy this in gel form as well, which is still well stocked and at normal prices. I expect this will work just as well for sanitisation. Would it be worth considering as an alternative if proper hand sanitiser is unavailable, or for people on a budget (maybe it would leave your hands a bit drier)?

I don't recall seeing this product while living in Australia or Sweden, so I'm not sure how widely available it is. Here is a link to the last pack I bought, although there are many brands available in Brazil.

Comment by gavintaylor on The illusion of science in comparative cognition · 2020-03-01T22:28:53.271Z · score: 6 (2 votes) · EA · GW

Further work from the authors of the original article:

Claims and statistical inference in animal physical cognition research.

Overall, our analysis provides a cautiously optimistic analysis of reliability and bias in animal physical cognition research, however it is nevertheless likely that a non-negligible proportion of results will be difficult to replicate.
Comment by gavintaylor on COVID-19 brief for friends and family · 2020-03-01T22:21:13.504Z · score: 1 (1 votes) · EA · GW
and practicing not touching your face.

How important is it to avoid touching your face if you are also washing your hands regularly?

As a practical point, I think this is somewhat hard to avoid for some people. I feel I touch my face more than I'd like, and even though this occurs in social situations where it may be mildly unacceptable, I have problems breaking the habit (I do have weak symptoms of body-focussed repetitive behaviour disorder and it's probably related to this). I don't think the somewhat abstract threat of reducing infection risk will be enough to stop me touching my face, as I mostly do this without thinking about it, although that may change when the virus spreads to my region and I feel under more personal threat.

This made me recall the Pavlok, a wrist-band that uses aversion therapy (vibrations and electric shocks) to break bad habits like nail biting. Although I can't find this described as a use case on their website, I suspect it could also be used to quickly break a face-touching habit. Alternatively, you can probably get most of the aversion effect from snapping a rubber band on your wrist whenever you notice you're touching your face.

Comment by gavintaylor on The Intellectual and Moral Decline in Academic Research · 2020-02-13T14:34:46.242Z · score: 3 (3 votes) · EA · GW

Thanks for the discussion on this Tom and Will.

I originally posted this article as, although it presents a very strong opinion on the matter and admittedly uses shock tactics by taking many values out of context (as pointed out by Romeo and Will), I thought the sentiment aligned both with the direction I personally felt science was moving in and with several other sources I'd read. I hadn't looked into any of the author's other work, and although his publication record seems reasonable, he has pushed some fairly fringe views on nutrition, and knowing this does reduce the weight I give to the views in this article (thanks for digging into it Tom).

For a more balanced critique of recent scientific practice I'd recommend the book Real Science by John Ziman (I have a PDF, PM me if you'd like a copy). It's a long but fairly interesting read on the sociology of science from a naturalistic perspective, and claims that university research has moved from an 'academic' to a 'post-academic' phase, characterised as the transition from the rigorous pursuit of knowledge to a focus on applications, which represents a convergence between academic and industrial research traditions. Although this may lead to more applications diffusing out of academia in the short-term, the 'post-academic' system is claimed to lose some important features of traditional research, like disinterestedness, organised skepticism, and universality, and tends to trade quality for quantity. Societal interests (including corporate goals) would be expected to have much more influence on the work done by 'post-academic' researchers.

Agreed with both Will and Tom that there certainly are still a lot of people doing good academic research, and how strongly you weight the balance will depend on which scientists you interact with. Personally, I ended up leaving academia without pursuing a faculty position (in part) because I felt the push to use excessive spin and hype in order to publish my work and attract funding was making it quite substanceless. Of course, this may have been specific to the field I was working in (invertebrate sensory neuroscience) and I'm glad to hear that you both have more positive outlooks.

Comment by gavintaylor on The Intellectual and Moral Decline in Academic Research · 2020-02-12T12:29:02.225Z · score: 2 (2 votes) · EA · GW

Thanks for elaborating Will.

Agreed that the increase in funding for science will generally just increase the size of science, and the base assumption should be that the retraction rate will stay the same, which would lead to a roughly proportionate increase in the number of retractions with science funding. The 700% vs. 900% roughly agrees with that assumption (although it could still be that the reasons for retraction change over time).

The idea of increasing retractions being a beneficial sign of better epistemic standards is interesting. My observation is that papers are usually only retracted if scientific fraud or misconduct was committed (e.g. falsifying or manipulating research data) - questionable research practices (e.g. P-hacking, optional stopping or HARKing), failure to replicate, or even technical errors don't usually lead to a retraction (Wikipedia also notes that plagiarism is a common cause of retractions). It is a pity there is no ground truth for scientific misconduct to reference the retraction rate against.

Aside, this summary of the influence of retractions and failure to replicate on later citations may be of interest. Thankfully, retraction usually has a strong reduction on the amount of citations the retracted paper receives.

Comment by gavintaylor on The Intellectual and Moral Decline in Academic Research · 2020-02-09T17:12:24.071Z · score: 11 (8 votes) · EA · GW

I agree that it's an extreme stance and probably overly-general (although the specificity to public health and biomedical research is noted in the article).

Still, my feeling is that this is closer to the truth than we'd want. For instance, from working in three research groups (robotics, neuroscience, basic biology), I've seen that the topic (e.g. to round out somebody's profile) and participants (e.g. re-doing experiments somebody else did so they don't have to be included as an author, instead of just using their results directly) of a paper are often selected mainly on perceived career benefits rather than scientific merit. This is particularly true when the research is driven by junior researchers rather than established professors, as the value of papers to the former is much more about whether they will help get grants and a faculty position than about their scientific merit. For example, it's very common that a group of post-docs and PhDs will collaborate to produce a paper without a professor to 'demonstrate' their independence, but these collaborations often just end up describing an orphan finding or obscure method that will never really be followed up on, and the junior researchers' time could arguably have produced more scientifically meaningful results if they had focused on their main projects. Of course, it's hard to evaluate how such practices influence academic progress in the long run, but they seem inefficient in the short-term and stem from a perverse incentive of careerism.

My impression is that questionable research practices probably vary a lot by research field, and the fields most susceptible to using poor practices are probably ones where the value of the findings won't really be known for a long time, like basic biology research. My experience in neuroscience and biology is that much more 'spin', speculation, and story telling goes into presenting the biological findings than there was in robotics (where results are usually clearer steps along a path towards a goal). While a certain amount of story telling is required to present a research finding convincingly, it has become a bit of a one-up game in biology where your work really has to be presented as a critical step towards an applied outcome (like curing a disease, or inspiring a new type of material) for anybody to take it seriously, even when it's clearly blue-sky research that hasn't yet found an application.

As for the author, it looks like he is no longer working in academia. From his publication record it looks like he was quite productive for a mid-career researcher, and although he may have an axe to grind (presumably he applied for many faculty positions but didn't get one - a common story), being outside the Ivory Tower can provide a lot more perspective about its failings than you get from inside it.

Comment by gavintaylor on The Intellectual and Moral Decline in Academic Research · 2020-02-07T20:51:41.697Z · score: 15 (8 votes) · EA · GW

Good point. Unfortunately the Economist article referenced for this number is pay-walled for me and I am not sure if it indicates the total number of clinical trial participants during that time.

Your comment got me interested, so I did some quick googling. In the US in 2009 there were 10,974 registered trials with 2.8 million participants, and in the EU the median number of patients studied for a drug to be approved was 1,708 (during the same time window). I couldn't quickly find the average length of a clinical trial.

I expect 80,000 patients would be at most 1% of the total clinical trial participants during that 10-year window, so this claim might be a bit over-emphasised (although it does seem striking at first read).
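A rough back-of-the-envelope check of that 1% figure, assuming (and this is only an assumption, since I only found one year's number) that the 2009 US figure of 2.8 million participants per year is typical of the whole decade:

```python
# Rough extrapolation: what share of all trial participants over a
# 10-year window would 80,000 patients be? Assumes the 2009 US figure
# of ~2.8M participants/year held roughly constant over the decade.
participants_per_year = 2_800_000
years = 10
total_participants = participants_per_year * years  # ~28 million

affected_patients = 80_000
share = affected_patients / total_participants
print(f"{share:.2%}")  # about 0.29%
```

So under that assumption the 80,000 patients would be well under 1% of participants, consistent with the claim being over-emphasised.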

Comment by gavintaylor on Welfare stories: How history should be written, with an example (early history of Guam) · 2020-01-05T18:06:54.477Z · score: 8 (5 votes) · EA · GW

Jason Crawford is writing about the history of many industrial advances at Roots of Progress. I think his approach is complementary to yours, and he describes it at:

Comment by gavintaylor on Comparing naturally evolved and engineered solutions · 2020-01-02T21:27:10.027Z · score: 5 (4 votes) · EA · GW

This seems like an interesting way of comparing the results of different types of design solutions.

One important thing to consider is that evolution was under a lot of additional constraints compared to engineers when 'designing' organisms. For instance, reactions occur at room temperature and with organic chemistry, many organisms are self-replicating and self-assembling, energy and materials are usually limited to what an organism can collect itself. And rather than optimising for any specific parameter, evolution is just aiming for an organism to survive and reproduce - so few solutions will be optimal in terms of performance/efficiency unless there was a strong evolutionary pressure for them to be so.

My experience with bio-inspired design is that it is usually best to look to biology for high-efficiency solutions as resource scarcity is a constant in most environments. High-performance biology is seen in microscopic structures, which probably still out-perform engineered solutions in many areas.

Comment by gavintaylor on How Fungible Are Interests? · 2019-12-19T16:28:40.417Z · score: 3 (3 votes) · EA · GW

I think something else to consider is that familiarity can also build a passionate interest that is hard to let go of.

In the case of Sue the Poet, it's not that she was unskilled and looking for something interesting; she had already found writing and, as described, practiced it a lot and found she is a skilled and (potentially) successful writer. Likewise, I assume that your friend the computer scientist has already studied computer science for a while and has become quite skilled at it, so it's less appealing to start from scratch with physics (there would be some skills in common between CS and physics, but it will probably still feel like a big step back on the learning curve).

While there is an element of sunk-cost fallacy here, I think that people who've done training and found that they are skilled at something are probably less likely to want to change their interest than somebody who has experience and found that they are unskilled, or otherwise unsuccessful, at their first interest. This seems like it could create a perverse incentive, as generally talented people who are highly successful in their first field could be disincentivized from trying to move into a more important field where they could have a larger impact. In academia there are often programs to encourage interdisciplinary research, but I wonder if the people these draw in may tend to be those that aren't particularly successful in their original field? (I consider myself an interdisciplinary scientist and can admit there is a bit of truth in that.)

In line with this, I think it's good that EA/80k posts often emphasize the value of testing out a variety of promising career paths, not just picking the subject you are either most interested in or judge as most important when entering college. Maybe it could also be good to pre-commit to testing some number of options for a certain time (say 4 fields x 6 months) before comparing your interest and ability between them, to avoid the temptation to commit to the first one that grabs your attention. I know a lot of graduate courses do something similar with lab rotations, although I don't know how common this is elsewhere in career planning/education.

Comment by gavintaylor on "Altruism-driven research" (EA meets... plant pathology?) · 2019-12-19T11:52:45.420Z · score: 3 (2 votes) · EA · GW

Thanks for posting this. I think that there is a lot more scope for the INT framework to be used by researchers outside of the top-priority EA areas. From personal experience, if you come into EA as an experienced researcher from a field outside the priority areas it's somewhat hard to connect with the existing resources unless you're willing to change fields.

But I think there would be benefits from more general outreach to scientists/academics working in other areas. For instance, nudging researchers to think about the potential impacts/consequences of their work could encourage a norm of selecting impactful, not just interesting, projects (academic research already encourages working on neglected/original and tractable problems), and some may also pass this idea on to their students, who may be better positioned to transition to work on a top-priority EA area.

Comment by gavintaylor on We're Rethink Priorities. AMA. · 2019-12-16T00:22:55.211Z · score: 5 (4 votes) · EA · GW

Also, it may be worth considering that in many cases preprints are considered much more 'citeable' in academic articles than general webpages/blog posts would be. I think having the DOI is seen as a mark of permanence, which is considered superior to just having a permalink to the accessed version.

Comment by gavintaylor on We're Rethink Priorities. AMA. · 2019-12-15T18:31:00.334Z · score: 4 (3 votes) · EA · GW

Peter, do you have any tips for being productive while doing independent research and other work in parallel? I'm also trying to do scientific research and scientific consulting at the same time. I've found my two major difficulties are slowed productivity while context switching (which I usually need to do several times a week, between projects in very different fields) and feeling obliged to prioritize time/energy on my clients' research projects ahead of my own (regardless of what I consider their relative importance/interest to be). I'd be interested to know how you deal with these or similar challenges.

Comment by gavintaylor on We're Rethink Priorities. AMA. · 2019-12-15T18:18:41.566Z · score: 7 (5 votes) · EA · GW

Biorxiv has a new initiative where they will review preprints, with the idea of the review comments then being published next to the preprint and later used directly by editors of the journal(s) the paper is submitted to. I don't know too much about this, but it could be a useful way to get reviewer comments for some of the invertebrate sentience posts, even if you don't later intend to submit them to a journal. Some further information is at:

Comment by gavintaylor on Reality is often underpowered · 2019-12-15T18:07:28.322Z · score: 1 (1 votes) · EA · GW

I recently stumbled onto this article supporting the use of both serendipitous and planned case studies.

This is related to clinical practice, but again the ideas may be relevant to development. The authors note that case studies are particularly useful to clinicians, who might be in a good position to look for patients fitting a specific population during their routine practice - I wonder if the same concept could be applied to field staff in development projects. For instance, developmental 'case studies' probably won't generate generalizable results, but they could be helpful in tailoring an RCT-validated intervention to a specific population.

Comment by gavintaylor on My recommendations for RSI treatment · 2019-11-21T16:07:42.811Z · score: 5 (3 votes) · EA · GW

People likely to develop RSI are probably also likely to develop back pain (which I had well before my RSI wrist problems). The book 8 Steps to a Pain-Free Back looks superficially like pseudo-science, but I'd actually recommend it, as I found the exercises and techniques it describes really useful. Over 10 years after reading it I still use the 'stretchsitting', 'stretchlying' and 'inner-corset' techniques and haven't had major back discomfort since.

I have a pdf of the book; message me if you'd like a copy.

Comment by gavintaylor on Is there a clear writeup summarizing the arguments for why deep ecology is wrong? · 2019-10-28T17:13:20.393Z · score: 1 (1 votes) · EA · GW

Good links Max. I've often felt there is a conflict between ecosystems/species preservation and animal welfare and these are really useful for exploring that idea more.

However, one point that I still get some cognitive dissonance from is the low importance ascribed to (species) diversity. It seems that if resources are to be used to make more happy individuals (so using resources to improve the lives of unhappy individuals is not an option - maybe we're in a utopia where the lives of all sentient individuals are already net-positive and we hold totalist population ethics), then it could, for instance, be better to produce more happy rhinos than happy humans, as there are far fewer rhinos than humans (if our utopia has the same species numbers as the world today), so we would get a greater increase in the diversity of happy experiences. A moral weighting should also be applied between humans and rhinos, but if there is a huge difference in relative population numbers then diversity would probably be the dominating factor. How do others value a world with 7,700,000,000 people and 40,000 rhinos vs. a world with 7,700,010,000 people and 30,000 rhinos (using rough current species numbers and assuming all were fairly happy)?

I think my intuition is to incorporate diminishing returns (for a given species) into multi-species population ethics, given that the experiences (phenomenology) of species differ, so they experience happiness in different ways. Does this make any sense, and is there a name for such ethical views? It works best for me from the totalist population ethics standpoint, and I probably wouldn't extend it to saying we should help unhappy rhinos over unhappy humans, even given the current populations of both species.
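To make the diminishing-returns intuition concrete, here's a toy calculation. The log value function and the equal moral weights are purely illustrative assumptions, not a claim about the right model or weighting:

```python
import math

def world_value(populations, weights):
    """Total value when each species' happy population contributes
    with logarithmically diminishing returns (illustrative only)."""
    return sum(w * math.log(n) for n, w in zip(populations, weights))

# Equal moral weights for humans and rhinos, purely for illustration.
weights = [1.0, 1.0]

world_a = world_value([7_700_000_000, 40_000], weights)  # baseline rhinos
world_b = world_value([7_700_010_000, 30_000], weights)  # +10k people, -10k rhinos

print(world_a > world_b)  # True: adding 10,000 people to 7.7 billion barely
                          # moves the log, while losing 10,000 of 40,000 rhinos does
```

Under any concave value function the comparison is dominated by the rarer species, which is exactly the dissonance described above; a steep enough moral weight in favour of humans could still reverse it.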

Comment by gavintaylor on Ramiro's Shortform · 2019-10-24T12:14:22.307Z · score: 1 (1 votes) · EA · GW

Hi Ramiro.

I think that Point 1 will be difficult to test in this way. What you want to do sounds a bit like a regression discontinuity analysis, but (as I understand it) there isn't really a sharp time point for when you started promoting EA more; the translations/meetings etc. increased steadily since Oct 2018, right? I think this will make it harder to see the effect during the first year that you are scaling up outreach (particularly if compared by month, as there is probably seasonal variation in both donation and outreach). Brazil has also had a fairly distinct set of newsworthy events (i.e. an election and major political change, the arrest of two former presidents during ongoing corruption scandals, the Amazon fires, etc.) over the same period you increased outreach. If these events influence donation behaviour, then comparisons to other countries might not be particularly relevant (and they further complicate your monthly comparison). I think a better way to try to observe a quantitative effect would be to compare the total donations for three years: pre-Oct 2018, Oct 2018-Oct 2019, and post-Oct 2019 (provided you keep your level of outreach similar for the next year, and are patient). Aggregating by year will remove the seasonal effect on donations and some of the effect of current events, and if this shows an increase for 2019-2020, then you could (cautiously) look at comparing monthly donation behaviour (three years of data will be better for compensating for monthly variation).

At this point, I think tracking your impact more subjectively by using questionnaires and interviews would produce more useful information. I'm not sure if charities would link their donors to you (maybe getting the contact details of Brazilians who report donating in the EA survey would be more likely), but you could also try adding an annual questionnaire link to your newsletter/facebook/site like 80,000 Hours does. I'd specifically try to ask people who made their first donations, or who increased their donations, this year what motivated them to do so.

Comment by gavintaylor on Reality is often underpowered · 2019-10-19T14:12:33.637Z · score: 3 (3 votes) · EA · GW

I read an article about using logic to fill in the gaps around sparse or weak data that reminded me of this post. The article is focused on health science, but I think the idea is relevant to development as well.

Comment by gavintaylor on Best EA use of $500,000AUD/$340,000 USD for basic science? · 2019-10-02T11:45:24.800Z · score: 3 (3 votes) · EA · GW

As far as I know all western universities take overheads, although the percentage varies a lot. I used to be at the Biology Department in Lund University and they took 50%!

But I think that refusing overheads is only really an option on the margin, for foundations and individual funders. Most researchers get the majority of their funding from government funding agencies (e.g. NIH, NSF), and as far as I know these all pay full overheads, which universities actually need to fund their operating expenses. I don't have first-hand knowledge of this, but my understanding is that if overheads are 50% and you get a $100 grant that doesn't pay overheads, the university actually has to source $50 from elsewhere in order to administer your grant.
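A quick sketch of that shortfall, using the 50% rate from the Lund example above (the second grant amount just matches the sum being discussed in this thread):

```python
# Money a university must source elsewhere to administer a grant
# that pays no overheads, under its budgeted overhead rate.
def shortfall(grant_amount, overhead_rate):
    return grant_amount * overhead_rate

print(shortfall(100, 0.50))      # 50.0 - the $100 example above
print(shortfall(340_000, 0.50))  # 170000.0 - for a grant this size
```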

I've never heard of a university turning down a grant without overheads, but I have heard that bringing in a majority of overhead-free money reflects poorly on an academic during a career review for promotion/tenure/a new job, etc.

Comment by gavintaylor on My recommendations for RSI treatment · 2019-09-24T12:35:19.796Z · score: 7 (3 votes) · EA · GW

That's interesting you mention the psychological aspect - I searched a lot of material on RSI, but don't recall seeing this discussed before. When I initially developed RSI it didn't bother me much, but as the physical symptoms progressed it upset me more and probably ultimately contributed to some moderate depression I developed (it didn't help that my depression was related to difficulty reaching professional goals, and the RSI was slowing me down on achieving them). I put off treatment for both when they were at the mild stage and ultimately only treated the RSI after I treated the depression - maybe that was the wrong order to take.

Comment by gavintaylor on Best EA use of $500,000AUD/$340,000 USD for basic science? · 2019-08-27T20:08:48.125Z · score: 12 (5 votes) · EA · GW

Also, if you donate to a researcher at a university, try to make sure it goes directly to them and that their institution doesn't take overheads from it.

Comment by gavintaylor on Best EA use of $500,000AUD/$340,000 USD for basic science? · 2019-08-27T12:38:18.705Z · score: 25 (13 votes) · EA · GW

OPP funds transformative basic science and might be able to make some suggestions about how to allocate the money.

Comment by gavintaylor on What are good reasons for or against working on protecting biodiversity and ecosystem services? · 2019-08-25T18:02:36.588Z · score: 2 (2 votes) · EA · GW

I have wondered if species extinction should be treated as worse than simply the welfare/suffering of the last members of a species.

For example, I take it that most EAs would view the loss of the last 100 million humans as much worse than that of the 7.6 billion who might die before them in an existential catastrophe, particularly if the survivors still had a chance at re-building human civilization. Likewise, if we lose a species, we lose any future value that was intrinsic to having that species in existence. And as most human value is likely to be in the far future, this could also be true for animals, but it can only be realized if the species remains extant (i.e. future humans may wish to create zoo simulations or worlds after WBE or space colonization).

While I agree that a lot of both near- and long-term human-related causes seem more important than protecting breeding populations of all endangered species, it could be that we are undervaluing the intrinsic benefit of biodiversity. A cheap way of safeguarding against the possibility that we are currently under-prioritizing species preservation would be to take genetic samples from those that are endangered (already being done). The opportunity then exists to recreate extinct species in the future if resources are available and we decide they should have been conserved.

Comment by gavintaylor on How to generate research proposals · 2019-08-22T12:18:59.078Z · score: 5 (4 votes) · EA · GW

Nice, I particularly like the table and bullet-point forms you used for curating your ideas - I often find myself with too many ideas to work on and this seems like a good way to take an objective overview.

During my PhD I read 'Becoming a successful scientist' - this presented a strategic approach to scientific discovery and problem selection (Section 3.1) that I haven't really seen elsewhere. It focused on science, but the ideas of looking for contradictions, paradoxes, new viewpoints or different scales may also be helpful for generating research questions in philosophy/economics.

I have a PDF of the book I'm happy to send by email, PM me.

Comment by gavintaylor on How Life Sciences Actually Work: Findings of a Year-Long Investigation · 2019-08-20T12:03:55.969Z · score: 1 (1 votes) · EA · GW

Another comment about the failings of peer-review and convoluted ways to circumvent them. It's quite common that reviewers will suggest extra experiments, and often these can improve the quality of the paper.

However, a Professor in Cognitive Psychology once told me that reviewers in his field seem to feel obliged to suggest extra experiments and almost always do. Even if the experiments in the paper are already quite complete, the reviewer will usually suggest an unnecessary control or a tangential experiment. So this Professor's strategy to speed things up was to do, but then leave out, a key control experiment when he wrote up his papers. Reviewers would then almost always pick up on this and only request this additional experiment, and so then he could easily include it and resubmit quickly.

Comment by gavintaylor on How Life Sciences Actually Work: Findings of a Year-Long Investigation · 2019-08-20T00:19:25.984Z · score: 11 (7 votes) · EA · GW

Very interesting post! I have worked in life science up to the postdoc level and think this is generally a reasonable summary of how life sciences research works (disclosure: Guzey interviewed me for this study).

One question I have is how generalizable this description is geographically and across universities. Based on the universities/funders referenced, I'd assume you're thinking about Tier 1 research universities in the US. But did your interviewee demographics suggest this could be the situation more broadly?

A few other comments on some of the points:
Role of PIs
Agreed that senior PIs with large labs tend not to do very much bench work themselves. However, they aren't solely managing and writing grants - I think one of the most important things PIs do is knowledge synthesis through writing literature reviews. I haven't really met any postdocs that have the depth and breadth of knowledge of their lab head, which allows the latter to both provide a high-level summary of their field in reviews and also propose new ways forward in their grants.
A counterpoint I've come across is mixed labs run by a PI with a computational background, who has postdocs and PhDs doing lab work while he uses their biological results for computational modelling. From my perspective, these types of labs seem to function quite well, as the PI usually relies on people coming into the lab being well trained in the biological assays they'll use, but then teaches them computational techniques that they end up using themselves by the end of their project.

Peer review
One of the big drawbacks of peer review is the hugely variable quality of the reviews provided. As an example, simply in terms of the level of detail, I have received comments of one paragraph and of three pages for the same article.
I think a key reason for this is that there isn't really any standardized format or set of expectations for reviews, nor is there much training or feedback for reviewers. One thought I've had is that paying peer-reviewers would allow journals to enforce both review consistency and quality - although publishers have such large profit margins that this could be feasible, they have no incentive to do so as scientists accept the status quo. In the absence of paid peer review, I think that disclosing reviewer names and comments helps prevent 'niche guarding' and encourages reviewers to provide a useful and honest review (eLife does this currently; I'm not sure if any other journals do).

Permanent researchers
Agreed that letting postdocs move into staff scientist/researcher positions would be helpful - this has been discussed a bit in the Nature and Science career sections over the last few years (such as here). I've usually heard from postdocs who moved into staff scientist or lab/facility manager positions that they wanted to stop relying on grants for their employment and to get some job stability. But some later regretted the move after finding the positions didn't have many options for career advancement relative to the professor track. The staff scientist role is a relatively new academic position (although it has been around for a long time in government and private research labs) that doesn't yet have a lot of consistency between universities - it would probably help to have more discussion of, and even to formalize, the role's expectations before a lot of people move into it.

Solo founders
This is an interesting observation, and I hadn't thought about the individual lab head model in this way. I'd actually take this a step further and say that academia has a habit of breaking up good pairs of biologists. How so? In a few cases, I've seen two senior postdocs, or a postdoc and a junior PI (essentially two researchers closely matched in experience and with complementary skills), work really well together and produce outstanding results over a few years, which will usually lead to one of the duo getting a permanent position. The other may be able to continue as a postdoc for a while, but as their research speciality will overlap heavily with their colleague's field, and it's unlikely that the hiring/promoting institution will open another position in a similar area for a few years, the postdoc will probably have to move elsewhere to continue their career. Although the two may continue to collaborate, the second person to be hired often starts working on different topics to show their intellectual independence (although the new topics may be less impactful than what they were working on as a pair). I only know of a few cases where duos separated in this way and I haven't really followed their outcomes, but I'd assume that the productivity of both researchers declined afterwards. Allowing one to move into a staff researcher position would help in this respect.

Big labs vs. small labs
Another option is a cluster of small labs working on a similar theme (I was in one in Lund that worked on Vision, another in the department worked on Pheromones). This seems to be more common in Northern Europe where high salaries tend to limit the group sizes that are possible (often PI, 1-2 postdocs, 1-2 PhDs). Clusters seemed to have the benefits noted for larger labs, but meant there were a lot of PIs around to mentor students, and also allowed the cost of lab facilities and support staff to be shared.

Research niches
Territorial PIs seem quite common, and as noted, the publication/grant review process allows them to be quite effective at delaying/blocking and even stealing ideas that encroach on their topic. A link was recently posted here to an economics paper that even suggested new talent entering a field after the death of a gatekeeping PI could speed up research progress. If it seems that a gatekeeping PI is holding back research in an important field, I think a confrontational grantmaking strategy could be used - whereby a grant agency offers to fund research on the topic but explicitly excludes the PI and their existing collaborators from applying and from reviewing proposals.

Differing risk-aversion between PIs and students
Although a PI may seem risk-loving, he benefits from being able to diversify his risk across all of his students and may only need one to get a great result to keep the funding coming. He's unlikely to have all of his students working together on one hard problem, just as a student can't spend all his time on a high-risk problem.
I tend to think that developing the ability to judge a project's risk is an important skill to gain during a PhD, and a good supervisor should make sure a student has at least one 'safe' project that they can write up. Realistically, it is possible to recover during a postdoc from a PhD where nothing worked well, but it is a setback (particularly when applying for ECR fellowships).
I feel that postdocs are possibly where the highest risk projects get taken on at the individual level, both because they have the experience to pick an ambitious but achievable goal, and also because they want to publish something great to have a good chance at a faculty position.

Comment by gavintaylor on Do Long-Lived Scientists Hold Back Their Disciplines? · 2019-08-14T00:06:47.065Z · score: 4 (2 votes) · EA · GW

A simple suggestion to mitigate these problems could be trialled well before life extension is available. It is probably possible to identify niche fields where star scientists are acting as gatekeepers (either from citation patterns or conversations with scientists in a variety of fields) - an agency interested in such a field could then simply offer some large, long-term grants for work in the field, provided it does not involve the star scientist or any of their collaborators. Hopefully the promise of substantial funding would be enough to encourage new entrants to the field.

Admittedly, this would be a very confrontational approach that might lead the star scientist to try to block publications or other grants from people entering the field this way, but academic rivalries already arise from other causes, so it should hopefully work itself out. If funding scientific competition like this resulted in gains similar to what this publication shows for the death of a star scientist, then it is not only a solution to the situation but also suggests that funding competitors could prove more effective than funding the incumbent gatekeepers in some cases.

Comment by gavintaylor on Do Long-Lived Scientists Hold Back Their Disciplines? · 2019-08-13T12:28:39.491Z · score: 3 (3 votes) · EA · GW

I think a lot of this comes down to social factors rather than star scientists' productivity decreasing with age.

At least in neuroscience, and probably in the life sciences more broadly, PIs who are very influential in a subfield (or who start a new one) tend to be the go-to people for a topic and often become its gatekeepers, so work on that topic is generally done in collaboration with them. Junior scientists (even ones trained by that PI) will usually try to establish a unique research focus that avoids conflict with the existing star PIs, even if that means they end up working in a less promising area.

I haven't read the linked paper, but I assume that one factor leading to the increase in productivity is simply an increase in good people working in a promising research field once the gatekeeper was removed. In principle, this doesn't require the death of a star scientist to achieve.

Comment by gavintaylor on Concrete project lists · 2019-08-10T19:21:04.195Z · score: 1 (1 votes) · EA · GW

Hi Ryan, do you know of anybody in the EA space working on BCI, either on development or on ethical considerations? BCI is mentioned surprisingly infrequently here.

Comment by gavintaylor on Extreme uncertainty in wild animal welfare requires resilient model-building · 2019-08-09T14:33:16.336Z · score: 2 (2 votes) · EA · GW

Interesting article Michael, thanks for linking to it. I haven't thought much about measuring experiential states before, but after briefly looking over Simon's essay I think happiness/suffering must, at minimum, be possible to indicate on an ordinal scale. But while many factors that lead to happiness/suffering can probably be measured on a ratio scale (pain could be measured objectively as nociceptor activity), I doubt that how they influence valenced experience is consistent interpersonally, or even intrapersonally at different times/conditions.

Nonetheless, I think the Weber-Fechner argument can still be made if suffering/happiness is measured on an ordinal scale. For instance, say one person is suffering immensely because of being in a lot of pain, vs. someone suffering mildly from minor pain. Our intuition would be to help the person in immense pain, but we will probably have to do much more to relieve their pain for them to even notice we've helped, compared to the person in minor pain.

I've also just realized that the intuitive problem with this argument is asymmetric, in that it indicates we are better off doing a nice thing for somebody who is in a neutral state vs. somebody who is already very happy, which does intuitively make sense (and is how the Weber-Fechner law is usually applied to finance - a poor person appreciates a $100 gift a lot more than a millionaire does).

Does this mean that, for a given link between a factor and intrinsic state (say pain to suffering), we are likely to get a greater change in subjective experience by working to improve that factor for individuals who are already close to neutral? This seems counterintuitive...

Comment by gavintaylor on Extreme uncertainty in wild animal welfare requires resilient model-building · 2019-08-08T18:21:27.206Z · score: 6 (4 votes) · EA · GW

I am not sure if absolute suffering/pleasure should be measured on a linear scale, but the Weber-Fechner law suggests that relative changes are likely to be perceived less than linearly.

The Weber-Fechner law indicates that the perceived change in a stimulus is inversely proportional to the initial strength. Example:

Weber found that the just noticeable difference (JND) between two weights was approximately proportional to the weights. Thus, if the weight of 105 g can (only just) be distinguished from that of 100 g, the JND (or differential threshold) is 5 g. If the mass is doubled, the differential threshold also doubles to 10 g, so that 210 g can be distinguished from 200 g.

This is true for the five main senses in humans and some animals, but I'm not sure if it's been tested for pain (which is already quite a subjective sense), or for subjective/emotional states in response to stimuli.

So while I intuitively agree that one person experiencing 10 units of suffering is worse than ten people experiencing 1 unit of suffering, the Weber-Fechner law counterintuitively suggests that a person who goes from 1 to 0 suffering will experience more subjective relief than somebody going from 10 to 9.
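The relationship described above can be sketched numerically. The 0.05 Weber fraction comes directly from the quoted weight example (5 g JND at 100 g); applying the log form to suffering levels is of course only an analogy, and I've used 2→1 vs. 10→9 rather than 1→0 since the log form isn't defined at zero:

```python
import math

WEBER_FRACTION = 0.05  # 5 g JND at 100 g, from the weight example above

def jnd(stimulus):
    """Weber's law: the just-noticeable difference grows linearly
    with stimulus strength."""
    return WEBER_FRACTION * stimulus

def perceived_intensity(stimulus):
    """Fechner's log form: perception grows with the log of the stimulus."""
    return math.log(stimulus)

print(jnd(100))  # 5.0
print(jnd(200))  # 10.0 - doubling the weight doubles the threshold

# The same one-unit improvement feels larger near the bottom of the scale:
relief_low = perceived_intensity(2.0) - perceived_intensity(1.0)   # 2 -> 1 suffering
relief_high = perceived_intensity(10.0) - perceived_intensity(9.0)  # 10 -> 9 suffering
print(relief_low > relief_high)  # True
```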

Comment by gavintaylor on How to evaluate neglectedness and tractability of aging research · 2019-08-02T18:55:46.583Z · score: 4 (2 votes) · EA · GW

Nice post! Agreed that hard problems (or at least those that are likely to take more than the usual academic funding cycle to produce results) are likely to be relatively neglected.

It would also be good to consider that interdisciplinary research tends to be hard to fund but often produces outsized results (tool development for basic biology often falls into this category). So some of the hard problems could be more tractable for an interdisciplinary group, but getting funding for one is often impractical. I don't know enough about the priority areas you identify as neglected and important to know which might benefit from such an approach, but specifically allocating some funding for interdisciplinary work might produce good results in these areas.

Comment by gavintaylor on [Link] Bolsonaro is cutting down the rainforest (nytimes) · 2019-08-02T13:19:27.931Z · score: 9 (3 votes) · EA · GW

Both Bolsonaro and the Brazilian Environment Minister Salles show strong support for loggers, even when the loggers are working illegally on (still) protected land. The Brazilian Institute for the Environment (IBAMA) does try to monitor and prevent illegal logging, but is limited in its ability to do so by the threat of violence from loggers.

Unfortunately, IBAMA seems to receive little support from politicians - for instance, after loggers burned a full IBAMA tanker used to fuel the helicopters it was using to monitor illegal logging, Salles gave a speech to the loggers that seemed to support them more than his own department:

...there is a law that must be respected while it is still a law. On the other hand, there is the need for the products provided by the loggers...

(paywalled source and pdf copy - in Portuguese, and google translate doesn't do a great job)

IBAMA looks to have a very uncertain future, and it sounds like its capabilities to monitor logging activity are quite limited at the moment (I'm also not sure what enforcement options it has).

A tractable intervention could be to provide more modern and scalable remote monitoring capabilities (UAVs/drones or even satellite imagery) and the skills to analyse data from them. I don't know if IBAMA could receive such equipment directly as donations, or if the monitoring would be better done by an NGO that could then openly publish its results.

Comment by gavintaylor on How urgent are extreme climate change risks? · 2019-08-02T12:25:24.429Z · score: 5 (5 votes) · EA · GW

From the Vox article:

I also talked to some researchers who study existential risks, like John Halstead, who studies climate change mitigation at the philanthropic advising group Founders Pledge, and who has a detailed online analysis of all the (strikingly few) climate change papers that address existential risk (his analysis has not been peer-reviewed yet).
Further, “the carbon effects don’t seem to pose an existential risk,” he told me. “People use 10 degrees as an illustrative example” — of a nightmare scenario where climate change goes much, much worse than expected in every respect — “and looking at it, even 10 degrees would not really cause the collapse of industrial civilization,” though the effects would still be pretty horrifying.

From Halstead's report (which Vox seems to represent as a reliable meta-analysis - my apologies for butchering the formatting):

The big takeaway from looking at the literature on the impact of extreme warming is that the impact of >4 degrees is dramatically understudied. King et al characterise this as “knowing the most about what matters least”
- Is extreme warming an ex risk?
  - 6 degrees: On the models: For the impacts I have looked at, 6 degrees isn't plausibly an ex risk, though it would be very bad. 6 degrees would drastically change the face of the globe, with multi-metre sea level rises, massive coastal flooding, and the uninhabitability of the tropics.
  - 10 degrees: On the models: It's hard to come up with ways that this could directly be an ex risk, though it would be extremely bad.
- Model uncertainty: The impacts of extreme warming are chronically understudied, suggesting some model uncertainty. There might be some unforeseen process which makes human civilisation difficult to sustain.
- Indirect risks: None of this considers the indirect risks, like mass migration and political conflict. These could be a pretty substantial risk over the next 150 years.

It sounds like studies on the effects and consequences of extreme warming, particularly indirect/secondary risks, are quite neglected and could benefit from some more work (although I'm not sure how tractable work on this is at this point).

Note that the Vox article also doesn't discuss existential risks arising from indirect effects.

Comment by gavintaylor on Invertebrate Sentience Table · 2019-07-25T20:41:33.254Z · score: 8 (4 votes) · EA · GW

After hearing opinions about the Cammerts from another academic who knows them, I've unfortunately become a lot less confident that this study could replicate.

Comment by gavintaylor on Invertebrate Welfare Cause Profile · 2019-07-16T13:17:08.922Z · score: 5 (4 votes) · EA · GW

All of the interventions in the 'helping now' section focus on preventing additional human caused harm to invertebrates. I agree these are important, but there may also be promising interventions that improve the welfare of invertebrates from their current baseline.

For example, a popular intervention for insect conservation is to plant wildflowers along curbsides, particularly in agricultural areas with monocultures. I'm not completely sure how insects choose nest sites, but I doubt that an evaluation of local food resources is made. So insects (bees for instance) that disperse into fields growing grasses probably suffer from food scarcity (as well as pesticides). All in all, I expect that this particular intervention is less effective at increasing insect welfare than the harm-prevention interventions proposed (and it would likely increase insect numbers in agricultural areas, which may be net negative due to pesticide exposure), but there may be other life-improving options to consider. These may be quite tractable to implement if they fit into conservation groups' existing agendas.

Comment by gavintaylor on Invertebrate Welfare Cause Profile · 2019-07-16T12:43:04.397Z · score: 6 (5 votes) · EA · GW

I would be cautious about using clock-speed as a multiplier for conscious experience, particularly for small flying animals. Insect flight is dynamically unstable (hovering hummingbirds probably are too), and their flight control systems respond on the order of one to a few wingbeat cycles, which gives them their appearance of very fast responses. But the speed of consciousness-relevant cognitive processing is probably slower; for instance, bumblebee flower discrimination can take 10+ seconds.

That said, I do intuitively expect small mammals (like rats), with faster heart beats and shorter life spans, to have a faster subjective experience than larger mammals, so I'd expect the same to be true for insects to some extent. I'd just avoid assuming that the fastest neural processing an animal is capable of (probably related to sensorimotor control of body stabilization) applies to all of its cognitive processes.