Preprint: Open Science Saves Lives: Lessons from the COVID-19 Pandemic 2020-08-24T14:01:30.503Z · score: 8 (3 votes)
Working together to examine why BAME populations in developed countries are severely affected by COVID-19 2020-08-03T16:25:32.956Z · score: 18 (6 votes)
Is there a Price for a Covid-19 Vaccine? 2020-05-22T17:20:14.396Z · score: 11 (4 votes)
gavintaylor's Shortform 2020-05-03T19:44:17.547Z · score: 4 (1 votes)
The Intellectual and Moral Decline in Academic Research 2020-02-07T16:47:32.079Z · score: 26 (17 votes)
The illusion of science in comparative cognition 2019-11-02T19:17:18.322Z · score: 27 (9 votes)
IGDORE forum for discussing metascience 2019-10-23T18:28:07.141Z · score: 7 (3 votes)


Comment by gavintaylor on Linch's Shortform · 2020-10-09T00:10:44.062Z · score: 5 (3 votes) · EA · GW

The last newsletter from Spencer Greenberg/Clearer Thinking might be helpful:

Comment by gavintaylor on Lumpyproletariat's Shortform · 2020-10-07T15:39:28.967Z · score: 2 (2 votes) · EA · GW

There is a collection of pages about the 'Kickstarter for coordinated action' idea on LessWrong.

A friend of mine started Free our knowledge, which is intended to encourage collective action from academics to support open science initiatives (open access publishing, pre-registrations, etc.). The only enforcement is deanonymizing the pledge signatories after the threshold is reached (which hasn't happened yet).

Comment by gavintaylor on Preprint: Open Science Saves Lives: Lessons from the COVID-19 Pandemic · 2020-10-03T19:54:43.693Z · score: 3 (2 votes) · EA · GW

I recently attended the UNESCO Open Talks Webinar “Open Science for Building Resilience in the Face of COVID-19”, which touched on many of the ideas from the pre-print above. The webinar recording is available on YouTube, and I've also written up a short summary which can be accessed here. The WHO representative made it clear that they were in favour of Open Science and that it has assisted them in their work.

More generally, I think that Open Science is relevant to EAs from two perspectives. Firstly, it has the potential to reduce problems with, and increase benefits from, scientific research, which could benefit society. More directly, EA research often summarizes academic research, and EAs should benefit if that is both (legally) freely accessible and done more transparently. Although a lot of EA research is effectively published open-access (e.g. forum/blog posts), it could also be interesting to consider what other open science ideas can be incorporated into EA research.

Comment by gavintaylor on Some thoughts on the effectiveness of the Fraunhofer Society · 2020-10-03T18:24:45.297Z · score: 2 (2 votes) · EA · GW

I regard Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) as having been quite successful. From Wikipedia:

Notable developments by CSIRO have included the invention of atomic absorption spectroscopy, essential components of Wi-Fi technology, development of the first commercially successful polymer banknote, the invention of the insect repellent in Aerogard and the introduction of a series of biological controls into Australia, such as the introduction of myxomatosis and rabbit calicivirus for the control of rabbit populations.

And the items listed in the Innovation section. Still, I'm sure they have had (at least) a few research projects that didn't go anywhere.

Comment by gavintaylor on Some thoughts on the effectiveness of the Fraunhofer Society · 2020-10-03T18:17:55.279Z · score: 6 (5 votes) · EA · GW

It would be an interesting case study on organisational effectiveness to compare the Fraunhofer Society to the Max Planck Society. Although they focus on different stages of research (applied innovation vs. basic science), they are both German non-profit research organizations and relatively similar in size (a quick Google search on MPS gives around 24 thousand staff and a $2.1 billion budget for 2018). Yet MPS is a world-renowned research organization and its researchers have been awarded numerous Nobel prizes. I'm not sure if MPS has specific goals, but nonetheless, it seems to be achieving much more impact than Fraunhofer. Some of this difference is probably just in appearances, as basic research tends to get more recognition and publicity than applied work, but it still seems like MPS is systematically doing better. Why is that?


Of course, it is not that the employees at Fraunhofer want to do harmful things. Many are cognitively dissonant, actually thinking that they do tremendous good. But many are aware of the problematic situation they are in. The dilemma is: Not having any goal-oriented incentive system, the Fraunhofer Society is dominated by the personal incentive of its members: Job security.

This is the same general trend I observed amongst a lot of University researchers, but it sounds like it's progressed much further where you work. Careerism seems to kill the integrity of researchers.


When I told a senior scientist about CoolEarth, she replied:
"When it comes to climate change, we have to stop thinking in numbers"
When I asked her why, she said: "Because you can't just throw a couple of dollars at the ground and ask mother nature to do it one more year"

This reminded me of The value of a life from the Minding Our Way sequence.

Comment by gavintaylor on Evaluating Life Extension Advocacy Foundation · 2020-10-03T15:25:34.516Z · score: 2 (2 votes) · EA · GW

Nice write up. I've referenced the Rejuvenation Road Map on LEAF's site several times, but never really knew much about the organisation itself.

Two extra points that I think would be interesting to ask about in the general questions on the landscape section:

-LEAF seems like they have a very good overview of the organisations already in ageing research (i.e. they raise funds for 9 other orgs). Is there any open space in the landscape that they would be excited about a new organisation being started to address?

-Do they view ageing research as primarily being talent or funding constrained? This could be separated into University and non-profit (e.g. SENS RF) based research, as I think the funding options available to each are quite different.

Comment by gavintaylor on The Intellectual and Moral Decline in Academic Research · 2020-09-25T22:01:13.347Z · score: 3 (2 votes) · EA · GW

Good question. I did a quick google and came across Lisa Bero who seems to have done a huge amount of work on research integrity. From this popular article, it sounds like corporate funding is often problematic for the research process.

The article links to several systematic reviews her group has done, and the article 'Industry sponsorship and research outcome' does conclude that corporate funding leads to a bias in the published results:

Authors' conclusions: Sponsorship of drug and device studies by the manufacturing company leads to more favorable efficacy results and conclusions than sponsorship by other sources. Our analyses suggest the existence of an industry bias that cannot be explained by standard 'Risk of bias' assessments.

I just read the abstract, so I'm not sure if they tried to identify whether this was solely due to publication bias or if corporate-funded research also tended to have other issues (e.g. less rigorous experimental designs or other questionable research practices).

Comment by gavintaylor on gavintaylor's Shortform · 2020-09-15T18:14:46.100Z · score: 3 (2 votes) · EA · GW

I was recently reading the book Subvert! by Daniel Cleather (a colleague) and thought that this quote from Karl Popper, and the author's preceding description of Popper's position, sounded very similar to EA's method of cause prioritisation and theory of change in the world. (Although I believe Popper was writing in the context of fighting against threats to democracy rather than threats to well-being, humanity, etc.) I haven't read The Open Society and Its Enemies (or any of Popper's books for that matter), but I'm now quite interested to see if he draws any other parallels to EA.

For the philosophical point of view, I again lean heavily on Popper’s The Open Society and Its Enemies.  Within the book, he is sceptical of projects that seek to reform society based upon some grand utopian vision.  Firstly, he argues that such projects tend to require the exercise of strong authority to drive them.  Secondly, he describes the difficulty in describing exactly what utopia is, and that as change occurs, the vision of utopia will shift.  Instead he advocates for “piecemeal social engineering” as the optimal approach for reforming society which he describes as follows:
“The piecemeal engineer will, accordingly, adopt the method of searching for, and fighting against, the greatest and most urgent evils of society, rather than searching for, and fighting for, its greatest ultimate good.”

I also quite enjoyed Subvert! and would recommend it as a fresh perspective on the philosophy of science. A key point from the book is:

The problem is that in practice, scientists often adopt a sceptical, not a subversive, stance. They are happy to scrutinise their opponents' results when they are presented at conferences and in papers. However, they are less likely to be actively subversive, and to perform their own studies to test their opponents' theories. Instead, they prefer to direct their efforts towards finding evidence in support of their own ideas. The ideal mode would be that the proposers and testers of hypotheses would be different people. In practice they end up being the same person.
Comment by gavintaylor on The Cost Of Wasted Motion · 2020-09-08T16:12:34.265Z · score: 3 (2 votes) · EA · GW

I think this post is a good counterpoint to common adages like 'don't sweat the small stuff' or 'direction over speed' that often come up in relation to career and productivity advice.

At the risk of making a very tenuous connection, this reminded me of an animal navigation strategy for moving towards a goal with an unstable orientation (i.e. the animal is not able to reliably face towards the goal): progress can still be made if the animal moves faster when facing towards the goal than away from it. (I don't think this is a very well-known navigation strategy; at least it didn't seem to be in 2014 when I wrote up an experiment on this in my PhD thesis [Chapter 5].) Work is obviously a lot more multi-faceted than spatial navigation, but maybe an analogy could be made to school students or junior employees who don't get much choice about what they are working on day to day: they could go all out on the important things and just scrape by on the rest.
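The navigation strategy above is easy to sanity-check with a quick simulation. This is a minimal sketch (the step count, speeds, and 90-degree "facing the goal" criterion are hypothetical parameters for illustration, not from the thesis): the agent's heading is uniformly random at every step, but it moves faster whenever it happens to face the goal direction.

```python
import math
import random

def biased_speed_walk(steps=20000, fast=2.0, slow=1.0, seed=0):
    """Random-heading walk where speed depends on whether the agent
    faces the goal (here: the +x direction). All parameters are
    illustrative assumptions."""
    rng = random.Random(seed)
    x = y = 0.0
    for _ in range(steps):
        heading = rng.uniform(-math.pi, math.pi)
        # Move faster when the heading is within 90 degrees of the goal.
        speed = fast if abs(heading) < math.pi / 2 else slow
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

x, y = biased_speed_walk()
# Mean progress per step along x works out to (fast - slow) / pi,
# so x drifts steadily toward the goal while y stays near zero.
```

Even though the agent never controls its orientation, the speed asymmetry alone produces net drift toward the goal, which is the point of the strategy.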

Comment by gavintaylor on Working together to examine why BAME populations in developed countries are severely affected by COVID-19 · 2020-08-19T13:43:31.955Z · score: 4 (3 votes) · EA · GW

Michelle's study is now searching for participants. If you are Black, Asian, from a minority ethnic group, or a person of colour, and are interested in sharing your lived experience of COVID-19, contact her at:

See more details here.

Comment by gavintaylor on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-16T19:54:55.089Z · score: 15 (4 votes) · EA · GW

Nice article Jason. I should start by saying that as a (mostly former) visual neuroscientist, I think that you've done quite a good job summarizing the science available in this series of posts, but particularly in these last two posts about time. I have a few comments that I'd like to add.

Before artificial light sources, there weren't a lot of blinking lights in nature. So although visual processing speed is often measured as CFF, most animals didn't really evolve to see flickering lights. In fact, I recall that my PhD supervisor Srinivasan did a study where he tried to behaviorally test honeybee CFF - he had a very hard time training them to go to flickering lights (study 1), but had much more success training them to go to spinning disks (study 2). In fact, the CFF of honeybees is generally accepted to be around 200 Hz, off the charts! That said, in an innate preference study on honeybees that I was peripherally involved with, we found honeybees had preferences for different frequencies of flickering stimuli, so they certainly can perceive and act on this type of visual information (study 3).

Even though CFF has been quite widely measured, if you wanted to do a comprehensive review of visual processing speed in different taxa then it would also be worth looking at other measures, such as visual integration time. This is often measured electrophysiologically (perhaps more commonly than CFF), and I expect that integration time will be tightly correlated with CFF; as they are causally related, one can probably be approximately calculated from the other (I say approximately because neural nonlinearities may add some variance; in the case of a video system it can be done exactly). For instance, this study on sweat bees carefully characterized their visual integration time at different times of day and different light conditions but doesn't mention CFF.

Finally, I think some simple behavioural experiments could shed a lot of light on how we expect metrics around sensory (in this case visual) processing speeds to be related to the subjective experience of time. For instance, the time taken to make a choice between options is often much longer than the sensory processing time (e.g. 10+ seconds for bumblebees, which I expect have CFF above 100 Hz), and probably reflects something more like the speed of a conscious process than the sensory processing speed alone does. A rough idea for an experiment is to take two closely related and putatively similar species, where one has double the CFF of the other, and measure the decision time of each on a choice-task to select flicker or motion at 25%, 50% and 100% of their CFF. So if species one has CFF at 80 Hz, test it on 20, 40 and 80 Hz, and if species two has CFF at 40 Hz, test it on 10, 20 and 40 Hz. A difference in the decision-speed curve across each animal's frequency range would be quite suggestive of a difference in the speed of decision making that was independent of the speed of stimulus perception. The experiment could also be done on the same animal in two conditions where its CFF differed, such as in a light- or dark-adapted state. For completeness, the choice-task could be compared to response times in a classical conditioning assay, which seems more reflexive; I'd expect differences in speeds here to correlate more tightly with differences in CFF. The results of such experiments seem like they could inform your credences on the possibility and magnitude of subjective time differences between species.

Comment by gavintaylor on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-16T18:43:22.944Z · score: 9 (3 votes) · EA · GW
I'd be interested in knowing if other senses (sound, especially) are processed faster at the same time. It could be that for a reaching movement, our attention is focused primarily visually, and we only process vision faster.

I agree that this would be an interesting experiment. If selective attention is involved then I think it is also possible that other senses would be processed slower. Unfortunately, my impression is that comparatively limited work has been done on multi-sensory processing in human psychology.

Comment by gavintaylor on What coronavirus policy failures are you worried about? · 2020-08-13T14:39:31.866Z · score: 8 (3 votes) · EA · GW

Articles like this make me think there is some basis to this concern:

Coronavirus: Russia calls international concern over vaccine 'groundless'

On Wednesday, Germany's health minister expressed concern that it had not been properly tested.

"It can be dangerous to start vaccinating millions... of people too early because it could pretty much kill the acceptance of vaccination if it goes wrong," Jens Spahn told local media.

"Based on everything we know... this has not been sufficiently tested," he added. "It's not about being first somehow - it's about having a safe vaccine."
Comment by gavintaylor on A New X-Risk Factor: Brain-Computer Interfaces · 2020-08-11T18:27:40.943Z · score: 17 (6 votes) · EA · GW

This seems like a thorough consideration of the interaction of BCIs with the risk of totalitarianism. I was also prompted to think a bit about BCIs as a GCR risk factor recently and had started compiling some references, but I haven't yet refined my views as much as this.

One comment I have is that the risk described here seems to rely not just on the development of any type of BCI but on a specific kind, namely, relatively cheap consumer BCIs that can nonetheless provide a high-fidelity bidirectional neural interface. It seems likely that this type of BCI would need to be invasive, but it's not obvious to me that invasive BCI technology will inevitably progress in that direction. Musk hints that Neuralink's goals are mass-market, but I expect that regulatory efforts could limit invasive BCI technology to medical use cases, and likewise, any military development of invasive BCI seems likely to lead to equipment that is too expensive for mass adoption (although it could provide the starting point for commercialization). DARPA's Next-Generation Nonsurgical Neurotechnology (N3) program does have the goal of developing high-fidelity non- or minimally-invasive BCIs; my intuition is that they will not achieve their goal of reading from one million and writing to 100,000 neurons non-invasively, but I'm not sure about the potential of the minimally-invasive path. So one theoretical consideration is what percentage of a population needs to be thought-policed to retain effective authoritarian control, which would then indicate how commercialized BCI technology would need to be before it could become a risk factor.

In my view, a reasonable way to steer BCI development away from posing a risk factor for totalitarianism would be to encourage the development of high-fidelity non-invasive and read-focused consumer BCI. While non-invasive devices are intrinsically more limited than invasive ones, if consumers can still be satisfied by their performance then it will reduce the demand to develop invasive technology. Facebook and Kernel already look like they are moving towards non-invasive technology. One company that I think is generally overlooked is CTRL-Labs (now owned by Facebook), who are developing an armband that acquires high-fidelity measurements from motor neurons - although this is a peripheral nervous system recording, users can apparently repurpose motor neurons for different tasks and even learn to control the activity of individual neurons (see this promotional video). As an aside, if anybody is interested in working on non-invasive BCI hardware, I have a project proposal for developing a device for acquiring high-fidelity and non-invasive central nervous system activity measurements that I'm no longer planning to pursue but am able to share.

The idea of BCIs that punish dissenting thoughts being used to condition people away from even thinking about dissent may have a potential loophole, in that such conditioning could lead people to avoid thinking such thoughts, or it could simply lead them to think such thoughts in ways that aren't punished. I expect that human brains have sufficient plasticity to be able to accomplish this under some circumstances, and while the punishment controller could also adapt what it punishes to try and catch such evasive thoughts, it may not always have an advantage; I don't think BCI thought policing could be assumed to be 100% effective. More broadly, differences in both intra- and inter-person thought patterns could determine how effective BCI is for thought policing. If a BCI monitoring algorithm can be developed using a small pool of subjects and then applied en masse, that seems much riskier than if the monitoring algorithm needs to be adapted to each individual and possibly updated over time (though there would be scope for automating updates). I expect that Neuralink's future work will indicate how 'portable' neural decoding and encoding algorithms are between individuals.

I have a fun anecdotal example of neural activity diversity: when I was doing my PhD at the Queensland Brain Institute, I did a pilot experiment for an fMRI study on visual navigation for a colleague's experiment. Afterwards, he said that my neural responses were quite different from those of the other pilot participant (we both did the navigation task well). He completed and published the study and asked the other pilot participant to join other fMRI experiments he ran, but never asked me to participate again. I've wondered if I was the one who ended up having the weird neural response compared to the rest of the participants in that study... (although my structural MRI scans are normal, so it's not like I have a completely wacky brain!)

The BCI risk scenario I've considered is whether BCIs could provide a disruptive improvement in a user's computer-interface speed or another cognitive domain. DARPA's Neurotechnology for Intelligence Analysts (NIA) program showed a 10x increase in image analysis speed with no loss of accuracy, just using EEG (see here for a good summary of DARPA's BCI programs until 2015). It seems reasonable that somewhat larger speed improvements could be attained using invasive BCI, and this speed improvement would probably generalize to other, more complicated tasks. When advanced BCI is limited to early adopters, could such cognitive advantages facilitate risky development of AI or bioweapons by small teams, or give operational advantages to intelligence agencies or militaries? (Happy to discuss or share my notes on this with anybody who is interested in looking into this aspect further.)

Comment by gavintaylor on Criteria for scientific choice I, II · 2020-07-30T14:21:01.476Z · score: 5 (3 votes) · EA · GW

The call for science to be done in service to society reminds me of Nicholas Maxwell's call to redirect academia to work towards wisdom rather than knowledge (see here and also here). I haven't read any of Maxwell's books on this, but it surprises me that there doesn't seem to be any interaction between him and EA philosophers at other UK institutes as Maxwell's research seems to be generally EA aligned (although limited to the broad-meta level).

Comment by gavintaylor on Is there a subfield of economics devoted to "fragility vs resilience"? · 2020-07-21T13:26:04.555Z · score: 5 (4 votes) · EA · GW

Although not really a field, Nassim Taleb's book Antifragile springs to mind - I haven't read this myself but have seen it referenced in several discussions on economic fragility, so it might at least be a starting point to work with.

Comment by gavintaylor on Prioritizing COVID-19 interventions & individual donations · 2020-07-06T13:08:01.845Z · score: 2 (2 votes) · EA · GW
We are seeking additional recommendations for charities that operate in Latin America and the Arabian Peninsula, particularly in the areas of direct aid (cash transfers) and strengthening health systems.

Doe direto was running a trial giving cash transfers to vulnerable families in Brazil. They seem to have finished the trial now, and I'm not sure if/when they will consider restarting it.

Comment by gavintaylor on gavintaylor's Shortform · 2020-07-03T13:20:56.340Z · score: 3 (2 votes) · EA · GW

Thanks Michael, I had seen that but hadn't looked at the links. Some comments:

The cause report from OPP makes the distinction between molecular nanotechnology and atomically precise manufacturing. The 2008 survey seemed to be explicitly considering weaponised molecular nanotechnology as an extinction risk (I assume the nanotechnology accident was referring to molecular nanotechnology as well). While there seems to be agreement that molecular nanotechnology could be a direct path to GCR/extinction, OPP presents atomically precise manufacturing as being more of an indirect risk, such as through facilitating weapons proliferation. The Grey goo section of the report does resolve my question about why the community isn't talking about (molecular) nanotechnology as an existential risk as much now (the footnotes are worth reading for more details):

‘Grey goo’ is a proposed scenario in which tiny self-replicating machines outcompete organic life and rapidly consume the earth’s resources in order to make more copies of themselves.40 According to Dr. Drexler, a grey goo scenario could not happen by accident; it would require deliberate design.41 Both Drexler and Phoenix have argued that such runaway replicators are, in principle, a physical possibility, and Phoenix has even argued that it’s likely that someone will eventually try to make grey goo. However, they believe that other risks from APM are (i) more likely, and (ii) very likely to be relevant before risks from grey goo, and are therefore more worthy of attention.42 Similarly, Prof. Jones and Dr. Marblestone have argued that a ‘grey goo’ catastrophe is a distant, and perhaps unlikely, possibility.43

OPP's discussion on why molecular nanotechnology (and cryonics) failed to develop as scientific fields is also interesting:

First, early advocates of cryonics and MNT focused on writings and media aimed at a broad popular audience, before they did much technical, scientific work ...
Second, early advocates of cryonics and MNT spoke and wrote in a way that was critical and dismissive toward the most relevant mainstream scientific fields ...
Third, and perhaps largely as a result of these first two issues, these “neighboring” established scientific communities (of cryobiologists and chemists) engaged in substantial “boundary work” to keep advocates of cryonics and MNT excluded ...

At least in the case of molecular nanotechnology, the simple failure of the field to develop may have been lucky (at least from a GCR-reduction perspective), as it seems that the research that was (at the time) most likely to lead to the risky outcomes was simply never pursued.

Comment by gavintaylor on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-02T16:27:57.552Z · score: 15 (8 votes) · EA · GW

Something that I think EAs may be undervaluing is scientific research done with the specific aim of identifying new technologies for mitigating global catastrophic or existential risks, particularly where these have interdisciplinary origins.

A good example of this is geoengineering (the merger of climate/environmental science and engineering), which has developed strategies that could allow for mitigating the effects of worst-case climate change scenarios. In contrast, the research being undertaken to mitigate worst-case pandemics seems to focus on developing biomedical interventions (biomedicine started as an interdisciplinary field, although it is now very well established as its own discipline). As an interdisciplinary scientist, I think there is likely to be further scope for identifying promising interventions from the existing literature, conducting initial analysis and modelling to demonstrate these could be feasible responses to GCRs, and then engaging in field-building activities to encourage further scientific research along those paths. The reason I suggest focusing on interdisciplinary areas is that merging two fields often results in unexpected breakthroughs (even to researchers from the two disciplines involved in the merger) and many 'low-hanging' discoveries that can be investigated relatively easily. However, such a workflow seems uncommon both in academia (which doesn't strongly incentivise interdisciplinary work or explicitly considering applications during early-stage research) and EA (which [with the exception of AI Safety] seems to focus on finding and promoting promising research after it has already been initiated by mainstream researchers).

Still, this isn't really a career option as much as it is a strategy for doing leveraged research, which seems like it would be better done at an impact-focused organisation than at a University. I'm personally planning to use this strategy and will attempt to identify and then model the feasibility of possible antiviral interventions at the intersection of physics and virology (although I haven't yet thought much about how to effectively promote any promising results).

Comment by gavintaylor on Consider a wider range of jobs, paths and problems if you want to improve the long-term future · 2020-06-30T18:20:37.499Z · score: 10 (5 votes) · EA · GW

It could also be the case that the impact distribution of orgs is not flat yet we've only discovered a subset of the high impact ones so far (speculatively, some of the highest impact orgs may not even exist yet). So if the distribution of applicants is flatter then they are still likely to satisfy the needs of the known high impact orgs and others might end up finding or founding orgs that we later recognise to be high impact.

Comment by gavintaylor on EA is risk-constrained · 2020-06-28T23:22:08.183Z · score: 1 (1 votes) · EA · GW

Sure, I agree that unvetted UBI for all EAs probably would not be a good use of resources. But I also think there are cases where a UBI-like scheme that funded people to do self-directed work on high-risk projects could be a good alternative to providing grants to fund projects, particularly at the early stage.

Comment by gavintaylor on EA is risk-constrained · 2020-06-28T22:05:52.081Z · score: 3 (2 votes) · EA · GW

Asking people who specialise in working on early-stage and risky projects to take care of themselves with runway may be a bit unreasonable. Even if a truly risky project (in the low-probability of a high-return sense) is well executed, we should still expect it to have an a priori success rate of 1 in 10 or lower. Assuming that it takes six months or so to test the feasibility of a project, people would need to save several years' worth of runway if they wanted to be financially comfortable while continuing to pursue projects until one worked out (of course, lots of failed projects may be an indication that they're not executing well, but let's be charitable and assume they are). This would probably limit serious self-supported EA entrepreneurship to an activity one takes on at a mid-career or later stage (also noted by OPP in relation to founding charities):

Starting a new company is generally associated with high (financial) risk and high potential reward. But without a solid source of funding, starting a nonprofit means taking high financial risk without high potential reward. Furthermore, some nonprofits (like some for-profits) are best suited to be started by people relatively late in their careers; the difference is that late-career people in the for-profit sector seem more likely to have built up significant savings that they can use as a cushion. This is another reason that funder interest can be the key factor in what nonprofits get started.
Comment by gavintaylor on EA is risk-constrained · 2020-06-28T21:05:03.767Z · score: 4 (3 votes) · EA · GW

At the moment I think there aren't obvious mechanisms to support independent early-stage and high-risk projects at the point where they aren't well defined and, more generally, to support independent projects that aren't intended to lead to careers.

As an example that addresses both points, one of the highest impact things that I'm considering working on currently is a research project that could either fail in ~3 months or, if successful, occupy several years of work to develop into a viable intervention (with several more failure points along the way).

With regards to point 1: At the moment, my only option seems to be applying for seed funding, doing some work, and if that is successful, applying to another funder to provide longer-term project funding (probably on several occasions). Each funding application is both uncertain and time consuming, and knowing this somewhat disincentivises me from even starting (although I have recently applied for seed-stage funding). Having a funding format that started at project inception and could be renewed several times would be really helpful. I don't think something like this currently exists for EA projects.

With regards to point 2: As a researcher, I would view my involvement with the project as winding down if/when it leads to a viable intervention - while I could stay involved as a technical advisor, I doubt I'd contribute much after the technology is demonstrated, nor do I imagine particularly wanting to be involved in later stage activities such as manufacturing and distribution. This essentially means that the highest impact thing I can think of working on would probably need my involvement for, at most, a decade. If it did work out then I'd at least have some credibility to get support for doing research in another area, but taking a gamble on starting something that won't even need your involvement after a few years hardly seems like sound career advice to give (although from the inside view, it is quite tempting to ignore that argument against doing the project).

I think that lack of support in these areas is most relevant to independent researchers or small research teams - researchers at larger organisations probably have more institutional support when developing or moving between projects, while applied work, such as distributing an intervention, should be somewhat easier to plan out.

Comment by gavintaylor on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T20:11:43.535Z · score: 6 (4 votes) · EA · GW

I haven't seen the talk yet, but tend to agree that industrial ideas and technology were probably exported very quickly after their development in Europe (and later the US), which probably displaced any later and independent industrial revolution.

I think it's also worth noting that the industrial revolution occurred after several centuries of European colonial expansion, during which material wealth was being sent back to Europe. For example, in the 300 years before the industrial revolution, American colonies accounted for >80% of the world's silver production. So considering the Industrial Revolution to simply have been a European phenomenon could be substantially understating the more global scope of the material contribution that may have facilitated it. However, it's hard to know if colonial wealth was required to create the right conditions for an industrial revolution or simply helped to speed it up. (Interestingly, China was going on successful voyages of discovery in the early 15th century but had abandoned its navy within a few decades. If China had instead gone on to start colonial activities around the same time as Europe, maybe Eastern industry would have started developing before the Western industrial tradition was imported.)

Comment by gavintaylor on MichaelA's Shortform · 2020-06-28T19:20:37.497Z · score: 5 (3 votes) · EA · GW

Guns, Germs, and Steel - I felt this provided a good perspective on the ultimate factors leading up to agriculture and industry.

Comment by gavintaylor on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T19:13:03.144Z · score: 3 (2 votes) · EA · GW

In Guns, Germs and Steel, Diamond comments briefly on technological stagnation and regression in small human populations (mostly in relation to Australian aborigines). I don't know if there is much theoretical basis for this, but he suggests that it is likely that the required population size to support even quite basic agricultural technology is much larger than the minimum genetically viable population.

So even if knowledge isn't explicitly destroyed in a catastrophe, if humanity is reduced to small groups of subsistence farmers then it seems probable that the technological level they can utilize will be much lower than that of the preceding society (although probably higher than the same population level without a preceding society). The lifetime of unmaintained knowledge is also a limiting factor - books and digital media may degrade before the new civilisation is ready to make use of them (unless they plan ahead to maintain them). But I agree that this is all very speculative.

Comment by gavintaylor on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T18:49:49.577Z · score: 4 (3 votes) · EA · GW

I think this needs clarifying: the probability of getting industry conditional on already having agriculture may be more likely than the probability of getting agriculture in the first place, but as agriculture seems to be necessary for industry, the total likelihood of getting industry is almost certainly lower than that of getting agriculture (i.e. most of the difficulty in developing an industrial society may be in developing that preceding agricultural society).

Comment by gavintaylor on Space governance is important, tractable and neglected · 2020-06-25T13:10:24.725Z · score: 5 (2 votes) · EA · GW

Would policies to manage orbital space debris be a good candidate for short-term work in this area, particularly if they can be directed at preventing tail risk scenarios such as run-away collision cascades (Kurzgesagt has a cute video on this)? Although larger pieces of space debris are tracked and there are some efforts currently being taken to test debris removal methods, it seems like this could suffer from the free-rider problem in the same way international climate change policy does (i.e. a lot of countries are scaling up their space programs, but most may rely on the US to take the lead on debris management).

In the event that there is a collision cascade it also seems like it could create a weak form of the future trajectory lock-in scenario that Ord describes in the Precipice, in that humanity would be 'locked-out' of spacefaring (and satellite usage) for as long as it took to clean up the junk or until enough of it naturally fell out of orbit (possibly centuries).

Comment by gavintaylor on EA is risk-constrained · 2020-06-24T17:47:53.376Z · score: 1 (1 votes) · EA · GW

The 'Here' link just points to this post; I think you meant to link somewhere else?

Comment by gavintaylor on EA is risk-constrained · 2020-06-24T16:04:30.771Z · score: 6 (4 votes) · EA · GW

Jade Leung's EAGx talk 'Fostering longtermist entrepreneurship' touched on some relevant ideas related to individual capacity for risk taking. (this isn't in the public CEA playlist, but a recording is still available via the Grip agenda)

Comment by gavintaylor on What coronavirus policy failures are you worried about? · 2020-06-23T00:27:12.793Z · score: 9 (5 votes) · EA · GW

This is more of a current issue, but I'm somewhat worried that vaccines will be rushed through safety testing, and then unidentified side-effects will end up having substantial medical consequences (possibly in a subgroup). This could (further) erode public confidence in scientific and medical authorities (and increase anti-vaxxer support) and lead to generally decreased vaccination rates for other diseases. Additionally, this could mean that when a truly devastating viral pandemic occurs, the public may be less willing to take 'a gamble' with a vaccine that's gone through accelerated development, even if the stakes are higher (i.e. something like: be careful of that new H5N1 vaccine, remember that time they rushed through the coronavirus vaccine and all the <insert demographic subgroup> had kidney failure a year later?).

Comment by gavintaylor on gavintaylor's Shortform · 2020-05-28T13:54:43.060Z · score: 7 (5 votes) · EA · GW

Participants in the 2008 FHI Global Catastrophic Risk conference estimated a probability of extinction from nano-technology at 5.5% (weapons + accident) and non-nuclear wars at 3% (all wars - nuclear wars) (the values are on the GCR wikipedia page). In the Precipice, Ord estimated the existential risk of Other anthropogenic risks (noted in the text as including but not limited to nano-technology, and I interpret this as including non-nuclear wars) as 2% (1 in 50). (Note that by definition, extinction risk is a sub-set of existential risk.)

Since starting to engage with EA in 2018 I have seen very little discussion about nano-technology or non-nuclear warfare as existential risks, yet it seems that in 2008 these were considered risks on par with top longtermist cause areas today (nanotechnology weapons and AGI extinction risks were both estimated at 5%). I realize that Ord's risk estimates are his own while the 2008 data is from a survey, but I assume that his views broadly represent those of his colleagues at FHI and others in the GCR community.

My open question is: what new information or discussion over the last decade led the GCR community to reduce its estimate of the risks posed by (primarily) nano-technology and also conventional warfare?

Comment by gavintaylor on [Stats4EA] Uncertain Probabilities · 2020-05-28T13:25:07.763Z · score: 3 (2 votes) · EA · GW

This brings to mind the assumption of normal distributions when using frequentist parametric statistical tests (t-test, ANOVA, etc.). If plots 1-3 represented random samples from three groups, an ANOVA would indicate there was no significant difference between the mean values of any group, which would usually be reported as there being no significant difference between the groups (even though there is clearly a difference between them). In practice, this can come up when comparing a treatment that has a population of non-responders and strong responders vs. a treatment where the whole population has an intermediate response. This can be easily overlooked in a paper if the data is just shown as mean and standard deviation, and although better statistical practices are starting to address this now, my experience is that even experienced biomedical researchers often don't notice this problem. I suspect that there are many studies which have failed to identify that a group is composed of multiple subgroups that respond differently by averaging them out in this way.

The usual approach for dealing with non-normal distributions is to test for normality (e.g. the Shapiro-Wilk test) in the data from each group and move to a non-parametric test if that fails for one or more groups (e.g. the Mann-Whitney, Kruskal-Wallis, or Friedman tests), but even that is just comparing medians, so I think it would probably still indicate no significant difference between (the median values of) these plots. Testing for a difference between distributions is possible (e.g. the Kolmogorov-Smirnov test), but my experience is that this seems to be over-powered and will almost always report a significant difference between two moderately sized (~50+ samples) groups, and the result is just that there is a significant difference in distributions, not what that actually represents (e.g. differing means, standard deviations, kurtosis, skewness, long tails, complete non-normality, etc.)
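The failure mode described above can be sketched with a toy example (illustrative numbers, not real data): group A mixes non-responders and strong responders, while group B responds uniformly at an intermediate level. A mean-comparison test (t-test, ANOVA, e.g. `scipy.stats.f_oneway`) sees nothing, even though the response structure is completely different:

```python
# Toy illustration of a bimodal vs. unimodal group with identical means.
import statistics

group_a = [0, 0, 0, 0, 0, 10, 10, 10, 10, 10]  # non- and strong responders
group_b = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]       # uniform intermediate response

# Any test comparing means will find no difference: both means are 5...
print(statistics.mean(group_a), statistics.mean(group_b))

# ...even though the spreads (and shapes) differ entirely: ~5.27 vs. 0.
print(statistics.stdev(group_a), statistics.stdev(group_b))
```

Plotting the raw data, or at least reporting the standard deviations, would expose the subgroups that the mean-only summary conceals.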

Comment by gavintaylor on Is there a Price for a Covid-19 Vaccine? · 2020-05-27T21:02:45.209Z · score: 2 (2 votes) · EA · GW

The author mentioned veterinary vaccines near the end of the post. I searched around this and was surprised to find there are already commercially available veterinary vaccines against coronaviruses (that link lists 5). This raised my expectation that a human coronavirus vaccine could be successfully developed.

Comment by gavintaylor on Helping wild animals through vaccination: could this happen for coronaviruses like SARS-CoV-2? · 2020-05-23T19:29:44.059Z · score: 8 (4 votes) · EA · GW

Good post, and this also seems to be a very opportune time to be promoting wild animal vaccination. A few thoughts:

To start with, programs of this kind would only be implemented after a vaccine is developed and distributed among human beings.

In relation to the current pandemic, the media often mentions that there are 7 coronaviruses that can infect humans and we don't have an effective vaccine for any of them. However, I was recently surprised to learn that there are several commercially available veterinary vaccines against coronaviruses - this raised my expectation that a human coronavirus vaccine could be successfully developed and seems promising for animal vaccination as well.

I think it's worth thinking more about what level of safety testing goes into developing animal vaccines. The Hendra virus vaccine for horses might be an interesting case study for this. Hendra virus was relatively recently discovered in Australia, and can be transmitted from flying foxes (a megabat species), via horses, to humans, where it has 60%+ case fatality. Fruit bat culling was very widely called for after a series of outbreaks in 2011, but the government decided to fund development of a horse vaccine instead (by unfortunate coincidence, a heat-wave killed 1/3rd of the flying fox population a few years later). A vaccine was developed within a year and widely administered soon after. However, some owners (particularly those of racing horses) reported severe side-effects (including death) and eventually started a class action against the vaccine manufacturer. I don't know if the anecdotal reports of side-effects stood up to further scrutiny (there could have been some motivated reasoning going on, similar to that used by human anti-vaxxers), but it seems plausible that veterinary vaccine development accepts, or does not even attempt to consider, much worse side-effects than would be approved in a vaccine developed for humans. Given animals' inability to self-report, some classes of minor side-effects may only be noticed by owners of companion animals who are very familiar with their behaviour. While I don't think animal side-effects would be a consideration in developing vaccines for pandemic control or economic purposes, it seems more relevant in the context of vaccinating animals to increase their own welfare.

This may be the case especially for bats, because they have one of the highest disease burdens among wild mammals. Among other conditions, they are harmed by a number of different coronaviruses-caused diseases. In fact, they harbor more than half of all known coronaviruses.

Why do bats have so many diseases (lots of which humans seem to catch)? This comment (which I found in an SSC article) frames the question in another way:

There are over 1,250 bat species in existence. This is about one fifth of all mammal species. Just to get a sense of this, let me ask a modified version of the question in the title:
"Why do human beings keep getting viruses from cows, sheep, horses, pigs, deer, bears, dogs, seals, cats, foxes, weasels, chimpanzees, monkeys, hares, and rabbits?"

This re-framing doesn't really change the problem, but it suggests that just viewing 'bats' as a single animal group comparable to 'cows' or 'deer' is concealing the scope of species diversity involved.

I heard Jonathan Epstein talk at a panel discussion on biosecurity last year. He was in favour of disease monitoring and management in wild animal populations, and also seemed sympathetic to the idea of doing this from both human health and animal welfare standpoints. He might be interested in discussing this further, and is in a position where he could advocate for or implement these ideas.

Comment by gavintaylor on Interview with Aubrey de Grey, chief science officer of the SENS Research Foundation · 2020-05-23T16:54:48.907Z · score: 5 (2 votes) · EA · GW

Thanks for asking the questions I suggested. I found Aubrey's response to this question the most informative:

Has any effort been made to see if the effects of multiple treatments are additive, in terms of improved lifespan, in a pre-clinical study?
No, and indeed we would not expect them to be additive, because we would not expect any one of them to make a significant difference to lifespan. That’s because until we are fixing them all, the ones we are not yet fixing would be predicted to kill the organism more-or-less on schedule. Only more-or-less, because there is definitely cross-talk between different damage types, but still we would not expect that lifespan would be a good assay of efficacy until we’re fixing pretty much everything.

I don't have a background in anti-aging biology and my intuition was that the treatments would have more of an additive effect. However, I agree with his view that there won't be much effect on total life-span until everything is fixed.

My feeling is that this may make the expected value of life-extension research lower (by decreasing the probability of success), given that all hallmarks need to be effectively treated in parallel to realize any benefit. If one proves much harder to treat in humans, or if all the treatments don't work together, then that reduces the benefit gained from treating the other hallmarks, at least as far as LEV is concerned. This makes SRF's approach of focusing on the most difficult problems seem quite reasonable and probably the most effective way to make a marginal contribution to life-extension research at the moment. Once all hallmarks are treatable pre-clinically in-vivo, then research into treatment interactions may become the most effective way to contribute (as noted, this will probably also be hard to get mainstream funding for).

Comment by gavintaylor on Bioinfohazards · 2020-05-22T21:39:45.354Z · score: 3 (2 votes) · EA · GW
Biosecurity researchers are often better-educated and/or more creative than most bad actors.

I generally agree with the above statement and that the risks of openly discussing some topics outweigh the benefits of doing so. But I recently realised there are some people outside of EA who are generally well educated, probably more creative than many biosecurity researchers, and who often write openly about topics the EA community may consider bioinfohazards: authors of near-future science fiction.

Many of the authors in this genre have STEM backgrounds, often write about malicious-use GCR scenarios (thankfully, the risk is usually averted), and I've read several interviews where authors mention taking pains to do research so they can depict a scenario that represents a possible, if sometimes ambitious, future risk. While these novels don't provide implementation details, the 'attack strategies' are often described clearly and the accompanying narrative may well be more inspiring to a poorly educated bad actor looking for ideas than a technical discussion would be.

I haven't seen (realistic) fiction discussed in the context of infohazards before and would be interested to know what others think of this. In the spirit of the post, I'll refrain from creating an 'attention hazard' (or just advertising?) by mentioning any authors who I think describe GCR's particularly well.

Comment by gavintaylor on Why making asteroid deflection tech might be bad · 2020-05-21T14:46:49.460Z · score: 5 (4 votes) · EA · GW
Ignoring accidental deflection, which might occur when an asteroid is moved to an Earth or Lunar orbit for research or mining purposes

I haven't seen this mentioned in other discussion of asteroid risk (i.e. I don't think Ord mentions it in the Precipice) but I don't think it should be ignored so quickly. If states/corporations develop technology to transfer asteroids to Earth orbit then this seems like it would represent an equivalent dual-use concern. Indeed, it may be even riskier than just developing tools for deflection, as activities like mining could provide 'cover' for maliciously aiming an asteroid at Earth. On the positive side, similar tools can probably be used for both orbital transfer and deflection, so the risky technology may also be its own counter-technology.

Comment by gavintaylor on gavintaylor's Shortform · 2020-05-03T19:44:17.746Z · score: 10 (8 votes) · EA · GW

At the start of Chapter 6 in The Precipice, Ord writes:

To do so, we need to quantify the risks. People are often reluctant to put numbers on catastrophic risks, preferring qualitative language, such as “improbable” or “highly unlikely.” But this brings serious problems that prevent clear communication and understanding. Most importantly, these phrases are extremely ambiguous, triggering different impressions in different readers. For instance, “highly unlikely” is interpreted by some as one in four, but by others as one in 50. So much of one’s work in accurately assessing the size of each risk is thus immediately wasted. Furthermore, the meanings of these phrases shift with the stakes: “highly unlikely” suggests “small enough that we can set it aside,” rather than neutrally referring to a level of probability. This causes problems when talking about high-stakes risks, where even small probabilities can be very important. And finally, numbers are indispensable if we are to reason clearly about the comparative sizes of different risks, or classes of risks.

This made me recall hearing about Matsés, a language spoken by an indigenous tribe in the Peruvian Amazon, that has the (apparently) unusual feature of using verb conjugations to indicate the certainty of information being provided in a sentence. From an article on Nautilus:

In Nuevo San Juan, Peru, the Matsés people speak with what seems to be great care, making sure that every single piece of information they communicate is true as far as they know at the time of speaking. Each uttered sentence follows a different verb form depending on how you know the information you are imparting, and when you last knew it to be true.
The language has a huge array of specific terms for information such as facts that have been inferred in the recent and distant past, conjectures about different points in the past, and information that is being recounted as a memory. Linguist David Fleck, at Rice University, wrote his doctoral thesis on the grammar of Matsés. He says that what distinguishes Matsés from other languages that require speakers to give evidence for what they are saying is that Matsés has one set of verb endings for the source of the knowledge and another, separate way of conveying how true, or valid the information is, and how certain they are about it. Interestingly, there is no way of denoting that a piece of information is hearsay, myth, or history. Instead, speakers impart this kind of information as a quote, or else as being information that was inferred within the recent past.

I doubt the Matsés spend much time talking about existential risk, but their language could provide an interesting example of how to more effectively convey aspects of certainty, probability and evidence in natural language.

Comment by gavintaylor on The Case for Impact Purchase | Part 1 · 2020-04-20T20:25:28.295Z · score: 2 (2 votes) · EA · GW
I think people who are using this type of work as a living should get paid a salary with benefits and severance. A project to project lifestyle doesn't seem conducive to focusing on impact.

Agreed. In my brief experience with academic consulting one thing I've realised is that it is really quite reasonable for contracted consultants to charge a 50-100% premium (on top of their utilisation ratio - usually 50%, so another x2 markup) to account for their lack of benefits.

So if somebody is expecting to earn a 'fair' salary from impact purchases compared to employment (or from any other type of short-term contract work, really), they should expect a funder to pay a premium for this compared to employing them (or funding another organisation to do so) - this doesn't seem like a good use of funds in the long term if it is possible to employ that person.
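The fee arithmetic above can be made concrete with a rough sketch. The figures (a 50% utilisation ratio, a 75% premium for lacking benefits, and a $60k target salary) are assumed for illustration:

```python
# Rough sketch of contractor fee arithmetic (all figures are assumptions).
target_salary = 60_000  # what the contractor wants to take home, per year
utilisation = 0.5       # fraction of working time that is actually billable
premium = 0.75          # 50-100% markup for lacking benefits and severance

# Billings needed: salary, doubled for utilisation, plus the premium
required_billings = target_salary / utilisation * (1 + premium)
print(required_billings)  # 210000.0 - i.e. 3.5x the equivalent salary
```

Under these assumptions, a funder buying impact from a contractor would pay roughly 3.5x what the same output would cost via a salaried employee, which is the point being made about long-term cost-effectiveness.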

Comment by gavintaylor on The Case for Impact Purchase | Part 1 · 2020-04-15T22:06:30.486Z · score: 13 (8 votes) · EA · GW

I'm interested in seeing a second post on impact purchases and would personally consider selling impact in the future. I have a few general comments about this:

  • Impact purchases seem similar to value-based fees that are sometimes used in commercial consulting (instead of time- or project-based fees) and may be able to provide a complementary perspective. Although in business the 'impact' would usually be something easy to track (like additional revenue) and the return the consultant gets (like percentage of revenue up to a capped value) would be agreed on in advance. I wonder if a similar pre-arrangement for impact purchase could work for EA projects that have quantifiable impact outcomes, such as through a funder agreeing to pay some amount per intervention distributed, student educated, etc. Of course, the tracked outcome should reflect the funders true goals to prevent gaming the metric.
  • It seems like impact purchases would be particularly helpful for people coming into the EA community who don't yet have good EA references/prestige/track-record but are confident they can complete an impactful project, or who want to work on unorthodox ideas that the community doesn't have the expertise to evaluate. If they try something out and it works, then they can get funds to continue and preliminary results for a grant; if not, it's feedback to go more mainstream. For this dynamic to work, people should probably be advised to plan relatively short projects (say, up to a few months), otherwise they could spend a lot of time on something nobody values.
  • This could be a particularly interesting time to trial impact purchases used in conjunction with government UBI (if that ends up being fully brought in anywhere). UBI then removes the barrier of requiring a secure salary before taking on a project.
  • From my experience applying to a handful of early-career academic grants and a few EA grants, I agree that almost none provide any/useful feedback (beyond accepted or declined), either for the initial application or for progress or completion reports. However, worse than having no feedback is that I once heard from a European Research Council (ERC) grant reviewer that their review committees are required to provide feedback on rejected applications, but are also instructed to make sure the feedback is vague and obfuscated so the applicant will have no grounds to ask for an appeal, which means the applicant gets feedback the reviewers know won't be useful for improving their project... Why do they bother???
  • With regards to implementation. I think one point to consider is the demand from impacters relative to funds of purchasers. At least in academia, funding is constrained and grant success rates are often <20%, and so grantees know that it is unlikely they'll get a grant to do their project (academic granters often say they turn away a lot of great projects they want to fund). If impact purchasers were similarly funding constrained relative to the number of good projects, I think the whole scheme would be less appealing as then even if I complete a great project, getting its impact bought would still involve a bit/lot of luck.
  • These posts about impact prizes and altruistic equity may also be of interest to consider.
Comment by gavintaylor on [Question] Resources for Mid-Career Updates · 2020-03-28T22:02:09.186Z · score: 6 (4 votes) · EA · GW

Have a particular strength? Already an expert in a field? Here are the socially impactful careers 80,000 Hours suggests you consider first.

Comment by gavintaylor on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-20T20:16:41.948Z · score: 4 (3 votes) · EA · GW

In the BBC today: Coronavirus: Robots use light beams to zap hospital viruses

Comment by gavintaylor on Why SENS makes sense · 2020-03-17T15:46:35.590Z · score: 2 (2 votes) · EA · GW

Sure, I think the key questions would be:

-Of the treatments currently being developed (in reference to the list in the roadmap), is it likely that treatments for multiple hallmarks can be used in parallel?

--Are there currently any observed or expected interactions between different treatments?

--Has any effort been made to see if the effects of multiple treatments are additive, in terms of improved lifespan, in a pre-clinical study?

-What side effects have been observed for the treatments currently in clinical trials?

It's interesting to know that recurring and more frequent treatments are going to be needed. That point hadn't been obvious to me before, but it could be important to consider in relation to the economics of scaling up mass anti-aging treatment - it's not like a one-off vaccination against a specific type of ageing damage, but rather a 'condition' that requires ongoing, and perhaps increasing, care.

Comment by gavintaylor on COVID-19 brief for friends and family · 2020-03-15T18:59:27.724Z · score: 3 (3 votes) · EA · GW

I was happy to see that I'm apparently not the only person who touches their face a lot and the BBC noted that many people even touch their face while giving official advice not to:

The main tips for how to avoid face touching were:

-Wear glasses on your face so you touch them instead.

-Make an effort to keep your hands clasped most of the time, so that touching your face is more of a conscious act that you'll notice and can choose to stop.

Comment by gavintaylor on Why SENS makes sense · 2020-03-09T21:57:05.721Z · score: 5 (2 votes) · EA · GW

Nice piece Emanuele, I felt that I actually got what LEV was and why we should aim to get there after reading this post, more so than after reading your previous ones. A general comment is that, from what the roadmap shows, it really seems like anti-aging research has progressed quite far (i.e. quite a few ongoing and some late-stage clinical trials) relative to the field's fringe nature and apparently limited funding.

In terms of questions, there is one thing that I think is fairly critical - how well do multiple interventions combine?

What SRF claims is that solving all the seven categories will probably lead to lifespans longer than the current maximum.

As I understand this, treatments for all of the categories are being developed independently. Is anybody looking to see if they can all be used in parallel? Could there be interactions between treatments that prevent this? It seems that the expected value of anti-aging research is only realised if it will, at some point, be possible to treat all the categories in parallel. Research into a treatment for one category that wouldn't be compatible with other treatments seems like it should receive much lower priority.

It seems like there could be ways to test this already. For instance, the roadmap shows many treatments are already at the pre-clinical in-vivo stage. If we start applying multiple therapies in-vivo, we can start to test how compatible they are. Do you know if that has been done?

Starting to test multiple therapies in-vivo could also provide some fundamental evidence about how the benefits of multiple therapies combine. At the moment the assumption seems to be that, say, individually treating mitochondrial mutations and extracellular aggregates prolongs expected life by X and Y years, respectively, so treating them both in combination will prolong life by X + Y years, but both negative and positive returns on the combination could occur. To be honest, I have some general scepticism about anti-aging research because ageing is very widely conserved in the animal kingdom (there are only a few animals with negligible senescence). It could be that there is some evolutionary pathway negligibly senescent animals went down that is hard to cross over to even if we treat all the categories, so I have a weak prior that senescent animals will get diminishing returns from multiple therapies.

Another point that I think is worth discussing is how the damage repair approach affects the metabolic processes causing the damage.

Dr. de Grey always stresses how the damage repair approach, which he also calls "the maintenance approach", has a big advantage over geriatrics and the kind of biogerontology aimed at targeting the metabolic processes that are causing this damage.

For instance, if we treat an 80 year old's telomere attrition, are we going to need to treat them again in the future? Are consecutive treatments going to need to occur at more regular intervals? I don't know much about how treatments affect the underlying metabolic processes (as noted, metabolism is very complicated), but it could be that these continue picking up pace even as the damage they cause is repaired. Knowing about this could also be important in assessing the value of LEV as a whole, particularly if treatments have dose-dependent side-effects. For instance, it may be that we can treat ageing out to 200 or so, but then the rate of damage is so high that the treatment dose required is too strong to tolerate. This is probably an issue for SENS 2.0, but it also seems like an area where some in-vivo testing can provide useful information. If nothing else, finding that the frequency of therapy is expected to increase suggests that treatments with more tolerable side-effects might be preferred (where there is a choice).

These are both fairly technical issues compared to the other questions you proposed in the post, but I think they point towards some fairly crucial considerations about how the additivity and repeatability of therapies will affect the goal of LEV.

Comment by gavintaylor on COVID-19 brief for friends and family · 2020-03-09T15:00:46.095Z · score: 3 (2 votes) · EA · GW

In terms of hand sanitiser: in Brazil I've also found that hand sanitiser is sold out or very expensive. However, here it is common to use 70% ethanol for household cleaning, and it is possible to buy this in gel form as well, which is still well stocked and at normal prices. I expect this will work just as well for sanitisation. Would it be worth considering as an alternative if proper hand sanitiser is unavailable, or for people on a budget (though it might leave your hands a bit drier)?

I don't recall seeing this product while living in Australia or Sweden, so I'm not sure how widely available it is. Here is a link to the last pack I bought, although there are many brands available in Brazil.

Comment by gavintaylor on The illusion of science in comparative cognition · 2020-03-01T22:28:53.271Z · score: 6 (2 votes) · EA · GW

Further work from the authors of the original article:

Claims and statistical inference in animal physical cognition research.

Overall, our analysis provides a cautiously optimistic analysis of reliability and bias in animal physical cognition research, however it is nevertheless likely that a non-negligible proportion of results will be difficult to replicate.

Comment by gavintaylor on COVID-19 brief for friends and family · 2020-03-01T22:21:13.504Z · score: 1 (1 votes) · EA · GW
and practicing not touching your face.

How important is it to avoid touching your face if you are also washing your hands regularly?

As a practical point, I think this is somewhat hard to avoid for some people. I feel I touch my face more than I'd like, and even though this occurs in social situations where it may be mildly unacceptable, I have trouble breaking the habit (I do have weak symptoms of body-focussed repetitive behaviour disorder and it's probably related to this). I don't think the somewhat abstract threat of reducing infection risk will be enough to stop me touching my face, as I mostly do this without thinking about it, although that may change when the virus spreads to my region and I feel under more personal threat.

This made me recall the Pavlok, a wrist-band that uses aversion therapy (vibrations and electric shocks) to break bad habits like nail biting. Although I can't find this described as a use case on their website, I suspect it could also be used to quickly break a face-touching habit. Alternatively, you can probably get most of the aversive effect by snapping a rubber band on your wrist whenever you notice you're touching your face.

Comment by gavintaylor on The Intellectual and Moral Decline in Academic Research · 2020-02-13T14:34:46.242Z · score: 3 (3 votes) · EA · GW

Thanks for the discussion on this Tom and Will.

I originally posted this article because, although it presents a very strong opinion on the matter and admittedly uses shock tactics by taking many values out of context (as pointed out by Romeo and Will), I thought the sentiment matched both the direction I personally felt science was moving and several other sources I'd read. I hadn't looked into any of the author's other work, and although his publication record seems reasonable, he has pushed some fairly fringe views on nutrition, and knowing this does reduce the weight I give to the views in this article (thanks for digging into it, Tom).

For a more balanced critique of recent scientific practice I'd recommend the book Real Science by John Ziman (I have a PDF, PM me if you'd like a copy). It's a long but fairly interesting read on the sociology of science from a naturalistic perspective, and claims that university research has moved from an 'academic' to a 'post-academic' phase, characterised as the transition from the rigorous pursuit of knowledge to a focus on applications, which represents a convergence between academic and industrial research traditions. Although this may lead to more applications diffusing out of academia in the short term, the 'post-academic' system is claimed to lose some important features of traditional research, like disinterestedness, organised scepticism, and universality, and tends to trade quality for quantity. Societal interests (including corporate goals) would be expected to have considerable influence on the work done by 'post-academic' researchers.

I agree with both Will and Tom that there certainly are still a lot of people doing good academic research, and how you weigh the balance will depend on which scientists you interact with. Personally, I ended up leaving academia without pursuing a faculty position (in part) because I felt the push to use excessive spin and hype in order to publish my work and attract funding was making it quite substanceless. Of course, this may have been specific to the field I was working in (invertebrate sensory neuroscience) and I'm glad to hear that you both have more positive outlooks.