Preprint: Open Science Saves Lives: Lessons from the COVID-19 Pandemic 2020-08-24T14:01:30.503Z
Working together to examine why BAME populations in developed countries are severely affected by COVID-19 2020-08-03T16:25:32.956Z
Is there a Price for a Covid-19 Vaccine? 2020-05-22T17:20:14.396Z
gavintaylor's Shortform 2020-05-03T19:44:17.547Z
The Intellectual and Moral Decline in Academic Research 2020-02-07T16:47:32.079Z
The illusion of science in comparative cognition 2019-11-02T19:17:18.322Z
IGDORE forum for discussing metascience 2019-10-23T18:28:07.141Z


Comment by gavintaylor on Long-Term Future Fund: Ask Us Anything! · 2021-03-02T18:38:25.378Z · EA · GW

If it is possible to just get a check as an individual, I imagine that that's the best option.


One other benefit of a virtual research institute is that they can act as formal employers for independent researchers, which may be desirable for things like receiving healthcare coverage or welfare benefits.


Thanks for mentioning Theiss, I didn't know of them before. Their website doesn't look very active now, but it's good to know about the history of the independent research scene.

Comment by gavintaylor on AMA: Jason Crawford, The Roots of Progress · 2020-12-30T17:59:09.863Z · EA · GW

Thanks for the perspective, this is interesting and a useful update for me.

Comment by gavintaylor on [Help please/Updated] Best EA use of $250,000AUD/$190,000 USD for metascience? · 2020-12-22T21:53:23.337Z · EA · GW

I'm glad to see interest in directing money to support impactful metascience projects - my intuition is that work on metascience could make a substantial contribution to advancing several EA cause areas, although I don't think enough work has been done yet on developing an EA perspective to confidently indicate specific aspects worth pursuing. Still, in parallel to trying to conduct impactful scientific research myself, I've grown interested in open science and metascience over the last couple of years and am on the board of the Institute for Globally Distributed Open Research and Education (IGDORE), so I'll throw out a few suggestions of donation-ready Open Science projects that seem promising. However, I should note that while I think these initiatives could contribute to expanding OS, I haven't evaluated the space comprehensively and I can't say these are the best opportunities, nor could I claim that they will substantially contribute to any EA cause area beyond the general refrain of 'making science more open and reproducible will generally be beneficial for society'.

One initiative I'm particularly excited about at the moment is Free Our Knowledge (FOK) - a platform for researchers to take collective action pledges that lead to positive changes in research culture. Although COS does have a 5-step pyramid for changing research culture, I think that FOK could go a long way towards accelerating culture change towards Open Science. For instance, in one of Björn Brembs's Open Science TV interviews (I think the 3rd or 4th) he comments that he often hears 'I don't care about these journals but everybody else does' from physicists about why they continue to publish in pay-walled journals. Using a collective action pledge could break this coordination problem rapidly. (Interestingly, LessWrong also has a discussion on coordinated action which seems to be entirely disconnected from FOK.) Anyway, FOK is currently unfunded, and I'm sure a bit of funding would go a long way. The founder (Cooper Smout) has previously applied for funding with COS as a fiscal sponsor and could probably receive money via them, but as he is based in Brisbane he might be able to form a non-profit to receive an Australian tax-deductible donation directly. I can put you in touch with Cooper to talk further if you'd like.
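To make the coordination mechanism concrete, a collective action pledge is essentially an assurance contract: nobody is committed until enough people have signed that acting together is safe. A minimal sketch in Python (the `Pledge` class and its fields are hypothetical illustrations, not FOK's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Pledge:
    """Conditional-commitment pledge: signatories remain uncommitted
    (and, in FOK's case, anonymous) until the threshold is reached."""
    action: str
    threshold: int
    signatories: list = field(default_factory=list)

    def sign(self, name: str) -> bool:
        self.signatories.append(name)
        return self.activated

    @property
    def activated(self) -> bool:
        # The pledge only binds once enough people have signed.
        return len(self.signatories) >= self.threshold

pledge = Pledge("publish only in open-access venues", threshold=3)
pledge.sign("a"); pledge.sign("b")
print(pledge.activated)   # still False: nobody is committed alone
pledge.sign("c")
print(pledge.activated)   # True: all commitments trigger together
```

The key design point is that signing is costless below the threshold, which removes the first-mover disadvantage that keeps the 'everybody else does' equilibrium in place.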

Another initiative I'm quite enthusiastic about is the Open Science MOOC (OS MOOC). They have a good reputation in the OS community and are a grass-roots effort to develop educational courses on different aspects of OS.  I'm not sure what their current funding situation is, but I do know that it's mostly a volunteer-led project so I expect they could productively use some further funding. Unfortunately, as OS MOOC is EU based, I doubt there will be a way to make any donation tax-deductible. Again, I could put you in touch with somebody on the steering committee if this is of interest. 

Lastly, while it's a bit self-serving, I should point to IGDORE as a potential funding recipient as it's another organisation I'm naturally quite excited about. We are a virtual institute committed to supporting and encouraging scientists to conduct open and replicable research, with the longer-term goal of providing services around good scientific practices and scientific education, and, less EA-relevant, to promote improved quality of life for scientists and support independent researchers. IGDORE members include both passionate advocates of open science, as well as students and researchers who wish to conduct open science but are either not supported or otherwise hindered in doing this at their primary academic institution. Like the organisations above, we are unfunded and volunteer-led, so even a modest donation could substantially develop the organisation. Our immediate goals are to develop a package of OS Support Services to offer via a research consultancy and an educational platform that will initially host OS content and then be grown into a Massively Online Open Science Training (MOOST) service that provides supervised research training that goes beyond standard MOOCs. While both of these initiatives aim to generate revenue to make IGDORE self-sustainable in the long term, we need seed funding to hire administrative and technical services to move them forward. Let me know if you'd like to talk more about this. (While IGDORE is distributed, our financial address is in Sweden, so probably not tax-deductible.) You are also more than welcome to post about this on the On Science and Academia forum, which is an open forum maintained by IGDORE and used by members of the other two organisations mentioned above, if you'd like to engage the OS community directly in discussing your donation.

I should also point out that besides being on the board of IGDORE, I know the people from FOK and OS MOOC quite well as several are also members of IGDORE. So my recommendations generally lean towards what would be considered the more 'radically progressive' branch of the OS community, which pushes for systemic reform of academia and publishing where they can't adopt open and replicable principles in their current form. A more mainstream OS perspective is represented by the organisations that presented at Metascience 2019 (which includes COS). However, as the OS community is still quite small, I think it will be hard to find completely un-conflicted recommendations.

PS. I wouldn't be so confident about COS's funding security. While they do list many funders on their site, I have heard they are now more funding constrained and last year they started monetising most of the Open Science Framework services. This might not be a problem for services used by larger institutions, and I appreciate that COS needs to make its services financially sustainable, but this has put pressure on academic communities using OSF Preprints (particularly those from developing countries), and I believe some have now moved to other platforms (see more here). 


Comment by gavintaylor on [Help please/Updated] Best EA use of $250,000AUD/$190,000 USD for metascience? · 2020-12-22T20:21:23.734Z · EA · GW

I joined a few sessions at the AIMOS (Association for Interdisciplinary Metascience and Open Science) conference a few weeks ago. It was great and I wrote up some notes about the talks I caught here. That said, beyond hosting their annual conference, I'm not really sure what other plans AIMOS has. If it's of interest I can put the OP in touch with the incoming 2021 president (Jason Chin from USyd Law School) to talk further.

Otherwise, many of the speakers were from Australia and you might find other ideas for local donation recipients on the AIMOS program. Paul Glasziou from Bond Uni mentioned something in his plenary that stood out to me - inefficient ethical reviews can be a huge source of wasted research time and money (to the tune of $160 million per annum in Australia) - if that's of interest he may be able to suggest a way to spend the money to push for ethical review reforms in Australia.

Comment by gavintaylor on The Intellectual and Moral Decline in Academic Research · 2020-12-09T22:22:04.651Z · EA · GW

I think they could help with some things. But as I wrote here, I am not sure if it would be appropriate to only fund academic research through lotteries.

Comment by gavintaylor on Long-Term Future Fund: Ask Us Anything! · 2020-12-08T19:30:31.857Z · EA · GW

I received my LTF grant while living in Brazil (I forwarded the details of the Brazilian tax lawyer I consulted to CEA staff). However, I built up my grantee expectations while doing research in Australia and Sweden, and was happy they were also valid in Brazil. 
My intuition is that most countries that allow either PhD students or postdocs to receive tax-free income for doing research at universities will probably also allow CEA grants to individuals to be declared in a tax-free manner, at least if the grant is for a research project.

Comment by gavintaylor on Long-Term Future Fund: Ask Us Anything! · 2020-12-06T19:18:55.771Z · EA · GW

Several comments have mentioned that CEA provides good infrastructure for making tax-deductible grants to individuals and also that the LTF  often does, and is well suited to, make grants to individual researchers. Would it make sense for either the LTF or CEA to develop some further guidelines about the practicalities of receiving and administering grants for individuals (or even non-charitable organisations) that are not familiar with this sort of income, to help funds get used effectively?
As a motivating example, when I recently received an LTF grant, I sought legal advice in my tax jurisdiction and found out the grant was tax-exempt. However, prior to that CEA staff said that many grantees do pay tax on grant funds and they would consider it reasonable for me to do so. I have been paid on scholarships and fellowships for nearly 10 years and had the strong expectation that such funding is typically tax-free, which led me to follow this up with a taxation lawyer; still, I wonder if other people, who haven't previously received grant income, come into this with different expectations and end up paying tax unnecessarily. While specifics vary between tax jurisdictions, having the right set of expectations for being a grantee helped me a lot. Maybe there would also be other general areas of grant receipt/administration that would be useful to provide advice on.

Comment by gavintaylor on Long-Term Future Fund: Ask Us Anything! · 2020-12-06T18:48:27.743Z · EA · GW

Just to add a comment with regards to sustainable funding for independent researchers. There haven't previously been many options available for this, however, there are a growing number of virtual research institutes through which affiliated researchers can apply to academic funding agencies. The virtual institute can then administer the grant for a researcher (usually for much lower overheads than a traditional institution), while they effectively still do independent work. The Ronin Institute administers funding from US granters, and I am a Board member at IGDORE which can receive funding from some European granters. That said, it may still be quite difficult for individuals to secure academic funding without having some traditional academic credentials (PhD, publications, etc.). 

Comment by gavintaylor on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T13:49:57.033Z · EA · GW

It seems like most progress to date has come from research in the natural/formal/applied sciences leading to technological advances (or correct me if I'm wrong?). Do you expect that trend to continue, or could you see a case for research in the social sciences/humanities (that lead to social advances) making a more prominent contribution to future progress?

Comment by gavintaylor on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T13:39:13.259Z · EA · GW

Are there any areas covered by the fund's scope where you'd like to receive more applications?

Comment by gavintaylor on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T13:36:34.171Z · EA · GW

Many areas of science currently appear to have reproducibility problems with published research (some call it a crisis). Do you think that poor reproducibility of recent (approx. the last 30 years) scientific work has been a significant contributor to the current stagnation?

On the margin, do you think that funding is better spent on improving reproducibility (or more generally, the areas covered by Metascience) or on pursuing promising scientific research directly?

Comment by gavintaylor on Lotteries for everything? · 2020-11-27T15:35:36.963Z · EA · GW

I'm generally in favour of experimenting with different granting models and am glad to hear that funders are starting to experiment with random allocation. However, I'd be a little bit cautious about moving to a system based solely on random grant assignment. Depending on the actual grant success rate per round (currently often <20%), it seems likely that one would get awarded grants quite infrequently, which would interrupt the continuity of research. For instance, if somebody gets a random grant and makes an interesting discovery, it seems silly to then expect to wait several years for another random grant assignment to follow up on it. So I feel that random assignment is probably better used for assigning funding for early-career researchers or pilot projects.

With respect to quality control,  the Nature news article linked above notes:

assessment panels spend most of their time sorting out the specific order in which to place mid-ranking ideas. Low- and high-quality applications are easy to rank, she says. “But most applications are in the midfield, which is very big.”

The current modified lottery systems just remove the low-ranking applications, but if it's also easy to pick out the high-ranking applications, surely those should be given funding priority, with the lottery reserved for the midfield?
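The tiered allocation I have in mind can be sketched in a few lines (this is my own hypothetical scheme, not any funder's actual procedure): fund the clearly-high applications outright, drop the clearly-low ones, and run the lottery only over the large midfield.

```python
import random

def tiered_lottery(applications, n_grants, seed=0):
    """Hypothetical tiered grant allocation.
    `applications` is a list of (name, tier) pairs, tier in {'high','mid','low'}:
    high-tier applications are funded directly, the midfield goes to lottery,
    and low-tier applications are excluded."""
    rng = random.Random(seed)
    high = [name for name, tier in applications if tier == "high"]
    mid = [name for name, tier in applications if tier == "mid"]
    funded = high[:n_grants]                      # priority to clear winners
    remaining = n_grants - len(funded)
    if remaining > 0:                             # lottery over the midfield
        funded += rng.sample(mid, min(remaining, len(mid)))
    return funded

apps = [("A", "high"), ("B", "mid"), ("C", "mid"), ("D", "mid"), ("E", "low")]
print(tiered_lottery(apps, n_grants=2))  # "A" plus one random midfield entry
```

This keeps the efficiency argument for lotteries (panels stop agonising over midfield rankings) without throwing away the information panels do extract reliably at the top and bottom.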

Comment by gavintaylor on Learnings about literature review strategy from research practice sessions · 2020-11-20T22:06:53.360Z · EA · GW

This article on doing systematic reviews well might also be of interest if you want to refine your process to make a publishable review. It's written by environmental researchers, but I think the ideas should be fairly general (i.e. they mention Cochrane for medical reviews).

I'd also recommend having a look at . It is a bit similar to ConnectedPapers but works off a concept map (I think) rather than a citation map, so it can discover semantic linkages between your paper of interest and others that aren't directly connected through reference links. I've just started looking at it this week and have been quite impressed with the papers it suggested.

The idea of doing deliberate practice on research skills is great. I agree that learning to do good research is difficult and poor feedback mechanisms certainly don't help. Which other skills are you aiming to practice?

Comment by gavintaylor on What has EA Brazil been up to? · 2020-11-16T19:24:44.234Z · EA · GW

Hey Fernando, wrt to your very final point.

Networking with Brazilian researchers conducting EA related research, specially x-risks and institutional decision-making improvement (we have already done some work on mapping them)

I recalled that Luis Mota and I briefly spoke about this at the EAGxV some months ago. We discussed a few points around avenues for academic EA work in Brazil and thought the following could be promising:
* Governance of AI and biotechnology. Brazil is doing a bit of research on both (more so on bio), and is likely to be a regional hub of applied work in these areas.
* Natural pandemics. Rainforest clearance could bring people into contact with all sorts of viruses.
* Conversely, rainforest preservation assists with climate change.
* Farmed animal welfare. Brazil farms a lot of animals and domestic consumption is quite high relative to population income. Several ACE recommended charities already work here.

For the young academic, Brazilian academia may also be quite attractive as it's possible to get a permanent/tenured position quite soon after your PhD via a concurso. This could then allow researchers to focus on work they view as valuable rather than having to chase high-impact publications for a decade to get a position, as is common in the US/EU. If one is mostly doing theoretical research and doesn't need grants for experimental work, then this could be a good position from which to work on the above areas or meta-topics (e.g. cause prioritisation).

Comment by gavintaylor on Research Summary: The Intensity of Valenced Experience across Species · 2020-11-15T19:13:18.779Z · EA · GW

There are practical limits on the resolution with which individual neurons can encode a stimulus (noise would be a limiting factor; maybe there are other considerations). A common 'design scheme' that gets around this is range fractionation: if the receptors are endowed with distinct transfer functions in such a way that the points of highest sensitivity are scattered along the axis of the quality being measured, the precision of the sense organ as a whole can be increased.
This example of mechanosensory neural encoding in hawkmoths is a good example of range fractionation (and where I first heard about it). 

Range fractionation is one common example where extra neurons increase resolution. There may be other ways that neural resolution can be increased without extra neurons. Also note that this has mostly been studied in peripheral sensory systems - I'm not sure if similar encoding schemes have been considered to represent the resolution of subjective experiences that are solely represented in the CNS.
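As a toy illustration of why fractionation helps (a sketch I've made up, not taken from any particular study): compare one broad-range sigmoid receptor with a bank of narrow receptors whose points of highest sensitivity are staggered along the stimulus axis. Against a fixed noise floor, discriminability scales with the response change per unit stimulus, and the fractionated bank keeps that slope high everywhere:

```python
import numpy as np

def receptor(stimulus, center, width):
    """Sigmoid transfer function: steepest (most sensitive) near `center`."""
    return 1.0 / (1.0 + np.exp(-(stimulus - center) / width))

# One broad receptor vs. four fractionated receptors covering stimuli 0..10
stimuli = np.linspace(0, 10, 201)
broad = receptor(stimuli, 5.0, 2.5)           # shallow slope everywhere
centers = [1.25, 3.75, 6.25, 8.75]            # staggered sensitivity peaks
bank = sum(receptor(stimuli, c, 0.6) for c in centers)  # population response

# Resolution proxy: response change per unit stimulus (vs. a fixed noise floor)
broad_slope = np.gradient(broad, stimuli)
bank_slope = np.gradient(bank, stimuli)
print(broad_slope.max(), bank_slope.min())
```

In this toy setup the fractionated bank's worst-case slope exceeds the single receptor's best-case slope, so every stimulus level is discriminated more finely, at the cost of four neurons instead of one.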


Comment by gavintaylor on Working together to examine why BAME populations in developed countries are severely affected by COVID-19 · 2020-11-10T22:34:20.834Z · EA · GW

A new update on this project - it has now grown into the Ethnicity and COVID-19 Research Consortium (ECRC). They have started to publish some work, which is available here, and Michelle and her colleagues are still looking for BAME people who have been affected to participate in their study here.

The consortium will also be presenting some initial results of their work in an online mini-conference on November 27th (7PM GMT). Please register here to attend.

It seems like this issue is now receiving more attention as well, as the Biden-Harris COVID-19 response plan includes a ‘COVID-19 Racial and Ethnic Disparities Task Force’. I expect the ECRC's work could be used to give that Task Force a head start, and if anybody knows somebody who will be on the Task Force, I would be happy to connect them to Michelle and the ECRC team.

Comment by gavintaylor on Nuclear war is unlikely to cause human extinction · 2020-11-08T23:48:10.411Z · EA · GW

many people assumed that this was the scientific consensus. Unfortunately, this misrepresented the scientific community’s state of uncertainty about the risks of nuclear war. There have only ever been a small numbers of papers published about this topic (<15 probably), mostly from one group of researchers, despite the topic being one of existential importance.

We’re finally beginning to see some healthy debate about some of these questions in the scientific literature. Alan Robock’s group published a paper in 2007 that found significant cooling effects even from a relatively limited regional war. A group from Los Alamos, Reisner et al, published a paper in 2018 that reexamined some of the assumptions that went into Robock et al’s model, and concluded that global cooling was unlikely in such a scenario. Robock et al. responded, and Riesner et al responded to the response. Both authors bring up good points, but I find Rieser’s position more compelling. This back and forth is worth reading for those who want to investigate deeper.

I've always found it a bit weird that so few researchers have worked on such an important question. It's good to hear that more researchers are now engaging with nuclear winter modelling. Besides genuine scientific disagreements about the modelling, I wasn't surprised to find that Wikipedia also notes there are some doubts about the emotional and political bias of the researchers involved:

As MIT meteorologist Kerry Emanuel similarly wrote a review in Nature that the winter concept is "notorious for its lack of scientific integrity" due to the unrealistic estimates selected for the quantity of fuel likely to burn, the imprecise global circulation models used, and ends by stating that the evidence of other models, point to substantial scavenging of the smoke by rain.[179] Emanuel also made an "interesting point" about questioning proponent's objectivity when it came to strong emotional or political issues that they hold.[11]

I think that funding another group of climate modellers to conduct nuclear winter simulations independently of the Robock group would provide a valuable second perspective on this. Alternatively, an adversarial collaboration between the Robock group and some nuclear winter opponents could also produce valuable results.

Comment by gavintaylor on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-04T13:51:49.024Z · EA · GW

This might be the first example I've seen of an Open Inverse Grant Proposal. Good luck!

Comment by gavintaylor on Linch's Shortform · 2020-10-09T00:10:44.062Z · EA · GW

The last newsletter from Spencer Greenberg/Clearer Thinking might be helpful:

Comment by gavintaylor on Lumpyproletariat's Shortform · 2020-10-07T15:39:28.967Z · EA · GW

There is a collection of pages about the 'Kickstarter for coordinated action' idea on LessWrong.

A friend of mine started Free our knowledge, which is intended to encourage collective action from academics to support open science initiatives (open access publishing, pre-registrations, etc.). The only enforcement is deanonymizing the pledge signatories after the threshold is reached (which hasn't happened yet).

Comment by gavintaylor on Preprint: Open Science Saves Lives: Lessons from the COVID-19 Pandemic · 2020-10-03T19:54:43.693Z · EA · GW

I recently attended the UNESCO Open Talks Webinar “Open Science for Building Resilience in the Face of COVID-19”, which touched on many of the ideas from the pre-print above. The webinar recording is available on YouTube, and I've also written up a short summary which can be accessed here. The WHO representative made it clear that they were in favour of Open Science and that it has assisted them in their work.

More generally, I think that Open Science is relevant to EAs from two perspectives. Firstly, it has the potential to reduce problems with and increase benefits from scientific research, which could have positive effects on society. More directly, EA research often summarizes academic research and EAs should benefit if that is both (legally) freely accessible and also done more transparently. Although a lot of EA research is effectively published open-access (e.g. forum/blog posts) it could be also interesting to consider what other open science ideas can be incorporated into EA research.

Comment by gavintaylor on Some thoughts on the effectiveness of the Fraunhofer Society · 2020-10-03T18:24:45.297Z · EA · GW

I regard Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) as having been quite successful. From Wikipedia:

Notable developments by CSIRO have included the invention of atomic absorption spectroscopy, essential components of Wi-Fi technology, development of the first commercially successful polymer banknote, the invention of the insect repellent in Aerogard and the introduction of a series of biological controls into Australia, such as the introduction of myxomatosis and rabbit calicivirus for the control of rabbit populations.

And the items listed in the Innovation section. Still, I'm sure they have had (at least) a few research projects that didn't go anywhere.

Comment by gavintaylor on Some thoughts on the effectiveness of the Fraunhofer Society · 2020-10-03T18:17:55.279Z · EA · GW

It would be an interesting case study on organisational effectiveness to compare the Fraunhofer Society to the Max Planck Society. Although they focus on different stages of research (applied innovation vs. basic science), they are both German non-profit research organizations and relatively similar in size (a quick google on MPS gives around 24 thousand staff and a $2.1 billion budget for 2018). Yet MPS is a world-renowned research organization and its researchers have been awarded numerous Nobel prizes. I'm not sure if MPS has specific goals, but nonetheless, it seems to be achieving much more impact than Fraunhofer. Some of this difference is probably just in appearances as basic research tends to get more recognition and publicity than applied work, but it still seems like MPS is systematically doing better. Why is that?


Of course, it is not that the employees at Fraunhofer want to do harmful things. Many are cognitively dissonant, actually thinking that they do tremendous good. But many are aware of the problematic situation they are in. The dilemma is: Not having any goal-oriented incentive system, the Fraunhofer Society is dominated by the personal incentive of its members: Job security.

This is the same general trend I observed amongst a lot of University researchers, but it sounds like it's progressed much further where you work. Careerism seems to kill the integrity of researchers.


When I told a senior scientist about CoolEarth, she replied:
"When it comes to climate change, we have to stop thinking in numbers"
When I asked her why, she said: "Because you can't just throw a couple of dollars at the ground and ask mother nature to do it one more year"

This reminded me of The value of a life from the Minding Our Way sequence.

Comment by gavintaylor on Evaluating Life Extension Advocacy Foundation · 2020-10-03T15:25:34.516Z · EA · GW

Nice write up. I've referenced the Rejuvenation Road Map on LEAF's site several times, but never really knew much about the organisation itself.

Two extra points that I think would be interesting to ask about in the general questions on the landscape section:

-LEAF seems like they have a very good overview of the organisations already in ageing research (i.e. they raise funds for 9 other orgs). Is there any open space in the landscape that they would be excited about a new organisation being started to address?

-Do they view ageing research as primarily being talent or funding constrained? This could be separated into University and non-profit (e.g. SENS RF) based research, as I think the funding options available to each are quite different.

Comment by gavintaylor on The Intellectual and Moral Decline in Academic Research · 2020-09-25T22:01:13.347Z · EA · GW

Good question. I did a quick google and came across Lisa Bero who seems to have done a huge amount of work on research integrity. From this popular article, it sounds like corporate funding is often problematic for the research process.

The article links to several systematic reviews her group has done, and the article 'Industry sponsorship and research outcome' does conclude that corporate funding leads to a bias in the published results:

Authors' conclusions: Sponsorship of drug and device studies by the manufacturing company leads to more favorable efficacy results and conclusions than sponsorship by other sources. Our analyses suggest the existence of an industry bias that cannot be explained by standard 'Risk of bias' assessments.

I just read the abstract, so I'm not sure if they tried to identify whether this was solely due to publication bias or if corporate-funded research also tended to have other issues (e.g. less rigorous experimental designs or other questionable research practices).

Comment by gavintaylor on gavintaylor's Shortform · 2020-09-15T18:14:46.100Z · EA · GW

I was recently reading the book Subvert! by Daniel Cleather (a colleague) and thought that this quote from Karl Popper and the author's preceding description of Popper's position sounded very similar to the EA method of cause prioritisation and theory of change in the world. (Although I believe Popper is writing in the context of fighting against threats to democracy rather than threats to well-being, humanity, etc.) I haven't read The Open Society and Its Enemies (or any of Popper's books for that matter), but I'm now quite interested to see if he draws any other parallels to EA.

For the philosophical point of view, I again lean heavily on Popper’s The Open Society and Its Enemies.  Within the book, he is sceptical of projects that seek to reform society based upon some grand utopian vision.  Firstly, he argues that such projects tend to require the exercise of strong authority to drive them.  Secondly, he describes the difficulty in describing exactly what utopia is, and that as change occurs, the vision of utopia will shift.  Instead he advocates for “piecemeal social engineering” as the optimal approach for reforming society which he describes as follows:
“The piecemeal engineer will, accordingly, adopt the method of searching for, and fighting against, the greatest and most urgent evils of society, rather than searching for, and fighting for, its greatest ultimate good.”

I also quite enjoyed Subvert! and would recommend it as a fresh perspective on the philosophy of science. A key point from the book is:

The problem is that in practice, scientists often adopt a sceptical, not a subversive, stance.  They are happy to scrutinise their opponents results when they are presented at conferences and in papers.  However, they are less likely to be actively subversive, and to perform their own studies to test their opponents’ theories.  Instead, they prefer to direct their efforts towards finding evidence in support of their own ideas.  The ideal mode would be that the proposers and testers of hypotheses would be different people.  In practice they end up being the same person.

Comment by gavintaylor on The Cost Of Wasted Motion · 2020-09-08T16:12:34.265Z · EA · GW

I think this post is a good counterpoint to common adages like 'don't sweat the small stuff' or 'direction over speed' that often come up in relation to career and productivity advice.

At the risk of making a very tenuous connection, this reminded me of an animal navigation strategy for moving towards a goal which has an unstable orientation (i.e. the animal is not able to reliably face towards the goal) - progress can still be made if it moves faster when facing towards the goal than away from it. (I don't think this is a very well known navigation strategy; at least it didn't seem to be in 2014 when I wrote up an experiment on this in my PhD thesis [Chapter 5].) Work is obviously a lot more multi-faceted than spatial navigation, but maybe an analogy could be made to school students or junior employees who don't get much choice about what they are working on day to day: the recommendation would be to go all out on the important things and just scrape by on the rest.
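The strategy is simple enough to simulate (a made-up toy sketch, not the experiment from the thesis): let the heading drift as a random walk the agent cannot control, and only modulate speed by whether it currently faces the goal. Net progress toward the goal emerges even though orientation is never steered:

```python
import math
import random

def biased_walk(steps=20000, fast=2.0, slow=0.5, seed=0):
    """Heading performs an uncontrolled random walk; the agent simply moves
    faster whenever it happens to face the goal (the +x direction)."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = 0.0
    for _ in range(steps):
        heading += rng.gauss(0.0, 0.5)          # orientation is unstable
        facing_goal = math.cos(heading) > 0.0   # within 90 degrees of +x
        speed = fast if facing_goal else slow
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

x, y = biased_walk()
print(x)  # net displacement along +x, despite no control over heading
```

Because steps toward the goal are longer than steps away from it, the displacements don't cancel, and the agent drifts toward the goal on average; setting `fast == slow` removes the bias and recovers an ordinary unbiased random walk.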

Comment by gavintaylor on Working together to examine why BAME populations in developed countries are severely affected by COVID-19 · 2020-08-19T13:43:31.955Z · EA · GW

Michelle's study is now searching for participants. If you are Black, Asian, from a minority ethnic group, or a person of colour, and are interested in sharing your lived experience of COVID-19, contact her at:

See more details here.

Comment by gavintaylor on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-16T19:54:55.089Z · EA · GW

Nice article, Jason. I should start by saying that as a (mostly former) visual neuroscientist, I think you've done quite a good job summarizing the available science in this series of posts, particularly in these last two posts about time. I have a few comments that I'd like to add.

Before artificial light sources, there weren't a lot of blinking lights in nature. So although visual processing speed is often measured as CFF, most animals didn't really evolve to see flickering lights. In fact, I recall that my PhD supervisor Srinivasan did a study where he tried to behaviorally test honeybee CFF - he had a very hard time training them to go to flickering lights (study 1), but had much more success training them to go to spinning disks (study 2). The CFF of honeybees is generally accepted to be around 200 Hz, off the charts! That said, in an innate preference study on honeybees that I was peripherally involved with, we found honeybees had preferences for different frequencies of flickering stimuli, so they certainly can perceive and act on this type of visual information (study 3).

Even though CFF has been quite widely measured, if you wanted to do a comprehensive review of visual processing speed in different taxa then it would also be worth looking at other measures, such as visual integration time. This is often measured electrophysiologically (perhaps more commonly than CFF), and I expect that integration time will be tightly correlated with CFF; as they are causally related, one can probably be approximately calculated from the other (I say approximately because neural nonlinearities may add some variance; in the case of a video system it can be done exactly). For instance, this study on sweat bees carefully characterized their visual integration time at different times of day and in different light conditions but doesn't mention CFF.
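To illustrate the sort of approximate conversion I have in mind, here is a sketch under a strong simplifying assumption: treat the photoreceptor as a first-order low-pass filter with time constant equal to the integration time, and take CFF as the frequency where the filter's gain falls to a detection threshold. The function name, the filter model, and the threshold value are all my own illustrative choices, not anything from the literature:

```python
import math

def cff_from_integration_time(tau_s, threshold=0.05):
    """Rough CFF estimate from integration time, assuming the
    photoreceptor behaves as a first-order low-pass filter with
    time constant tau_s (a simplification; real photoreceptors
    are higher-order and nonlinear)."""
    # First-order low-pass gain: 1 / sqrt(1 + (2*pi*f*tau)^2).
    # Solve for the frequency f where the gain equals `threshold`.
    return math.sqrt(1.0 / threshold**2 - 1.0) / (2.0 * math.pi * tau_s)

# A longer integration time always predicts a lower CFF:
print(cff_from_integration_time(0.002) > cff_from_integration_time(0.004))  # True
```

The real mapping would need the measured temporal-frequency response of the cell rather than a first-order fit, which is why any such conversion is only approximate.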

Finally, I think some simple behavioural experiments could shed a lot of light on how we should expect metrics of sensory (in this case visual) processing speed to relate to the subjective experience of time. For instance, the time taken to make a choice between options is often much longer than the sensory processing time (e.g. 10+ seconds for bumblebees, which I expect have a CFF above 100 Hz), and probably reflects something more like the speed of a conscious process than the sensory processing speed alone does. A rough idea for an experiment is to take two closely related and putatively similar species where one has double the CFF of the other, and measure the decision time of each on a choice-task to select flicker or motion at 25%, 50% and 100% of their CFF. So if species one has a CFF of 80 Hz, test it at 20, 40 and 80 Hz, and if species two has a CFF of 40 Hz, test it at 10, 20 and 40 Hz. A difference in the decision-speed curves across each animal's frequency range would be quite suggestive of a difference in the speed of decision making that was independent of the speed of stimulus perception. The experiment could also be done on the same animal in two conditions where its CFF differed, such as in a light- or dark-adapted state. For completeness, the choice-task could be compared to response times in a classical conditioning assay, which seems more reflexive, and I'd expect differences in speed there to correlate more tightly with differences in CFF. The results of such experiments seem like they could inform your credences on the possibility and magnitude of subjective time differences between species.

Comment by gavintaylor on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-16T18:43:22.944Z · EA · GW
I'd be interested in knowing if other senses (sound, especially) are processed faster at the same time. It could be that for a reaching movement, our attention is focused primarily visually, and we only process vision faster.

I agree that this would be an interesting experiment. If selective attention is involved then I think it is also possible that other senses would be processed slower. Unfortunately, my impression is that comparatively limited work has been done on multi-sensory processing in human psychology.

Comment by gavintaylor on What coronavirus policy failures are you worried about? · 2020-08-13T14:39:31.866Z · EA · GW

Articles like this make me think there is some basis to this concern:

Coronavirus: Russia calls international concern over vaccine 'groundless'

On Wednesday, Germany's health minister expressed concern that it had not been properly tested.

"It can be dangerous to start vaccinating millions... of people too early because it could pretty much kill the acceptance of vaccination if it goes wrong," Jens Spahn told local media.

"Based on everything we know... this has not been sufficiently tested," he added. "It's not about being first somehow - it's about having a safe vaccine."
Comment by gavintaylor on A New X-Risk Factor: Brain-Computer Interfaces · 2020-08-11T18:27:40.943Z · EA · GW

This seems like a thorough consideration of the interaction of BCIs with the risk of totalitarianism. I was also prompted to think a bit about BCIs as a GCR risk factor recently and had started compiling some references, but I haven't yet refined my views as much as this.

One comment I have is that the risk described here seems to rely not just on the development of any type of BCI but on a specific kind: relatively cheap consumer BCIs that can nonetheless provide a high-fidelity bidirectional neural interface. It seems likely that this type of BCI would need to be invasive, but it's not obvious to me that invasive BCI technology will inevitably progress in that direction. Musk hints that Neuralink's goals are mass-market, but I expect that regulatory efforts could limit invasive BCI technology to medical use cases, and likewise, any military development of invasive BCI seems likely to lead to equipment that is too expensive for mass adoption (although it could provide the starting point for commercialization). Although DARPA's Next-Generation Nonsurgical Neurotechnology (N3) program does have the goal of developing high-fidelity non- or minimally-invasive BCIs, my intuition is that they will not achieve their goal of reading from one million and writing to 100,000 neurons non-invasively, but I'm not sure about the potential of the minimally-invasive path. So one theoretical consideration is what percentage of a population needs to be thought-policed to retain effective authoritarian control, which would then indicate how commercialized BCI technology would need to be before it could become a risk factor.

In my view, a reasonable way to steer BCI development away from posing a risk factor for totalitarianism would be to encourage the development of high-fidelity, non-invasive, and read-focused consumer BCIs. While non-invasive devices are intrinsically more limited than invasive ones, if consumers can still be satisfied by their performance then this will reduce the demand to develop invasive technology. Facebook and Kernel already look like they are moving towards non-invasive technology. One company that I think is generally overlooked is CTRL-Labs (now owned by Facebook), who are developing an armband that acquires high-fidelity measurements from motor neurons - although this is a peripheral nervous system recording, users can apparently repurpose motor neurons for different tasks and even learn to control the activity of individual neurons (see this promotional video). As an aside, if anybody is interested in working on non-invasive BCI hardware, I have a project proposal for developing a device that acquires high-fidelity, non-invasive measurements of central nervous system activity that I'm no longer planning to pursue but am able to share.

The idea of BCIs that punish dissenting thoughts being used to condition people away from even thinking about dissent may have a potential loophole, in that such conditioning could lead people to avoid thinking such thoughts, or it could simply lead them to think such thoughts in ways that aren't punished. I expect that human brains have sufficient plasticity to be able to accomplish this under some circumstances, and while the punishment controller could also adapt what it punishes to try and catch such evasive thoughts, it may not always have an advantage, so I don't think BCI thought policing could be assumed to be 100% effective. More broadly, differences in both intra- and inter-person thought patterns could determine how effective BCIs are for thought policing. If a BCI monitoring algorithm can be developed using a small pool of subjects and then applied en masse, that seems much riskier than if the monitoring algorithm needs to be adapted to each individual and possibly updated over time (though there would be scope for automating the updates). I expect that Neuralink's future work will indicate how 'portable' neural decoding and encoding algorithms are between individuals.

I have a fun anecdotal example of neural activity diversity: when I was doing my PhD at the Queensland Brain Institute I did a pilot experiment for an fMRI study on visual navigation for a colleague's experiment. Afterwards, he said that my neural responses were quite different from those of the other pilot participant (we both did the navigation task well). He completed and published the study and asked the other pilot participant to join other fMRI experiments he ran, but never asked me to participate again. I've wondered if I was the one who ended up having the weird neural response compared to the rest of the participants in that study... (although my structural MRI scans are normal, so it's not like I have a completely wacky brain!)

The BCI risk scenario I've considered is whether BCIs could provide a disruptive improvement in a user's computer-interface speed or another cognitive domain. DARPA's Neurotechnology for Intelligence Analysts (NIA) program showed a tenfold increase in image analysis speed with no loss of accuracy, just using EEG (see here for a good summary of DARPA's BCI programs up to 2015). It seems reasonable that somewhat larger speed improvements could be attained using invasive BCI, and this speed improvement would probably generalize to other, more complicated tasks. When advanced BCIs are limited to early adopters, could such cognitive advantages facilitate risky development of AI or bioweapons by small teams, or give operational advantages to intelligence agencies or militaries? (Happy to discuss or share my notes on this with anybody who is interested in looking into this aspect further.)

Comment by gavintaylor on Criteria for scientific choice I, II · 2020-07-30T14:21:01.476Z · EA · GW

The call for science to be done in service to society reminds me of Nicholas Maxwell's call to redirect academia to work towards wisdom rather than knowledge (see here and also here). I haven't read any of Maxwell's books on this, but it surprises me that there doesn't seem to be any interaction between him and EA philosophers at other UK institutes as Maxwell's research seems to be generally EA aligned (although limited to the broad-meta level).

Comment by gavintaylor on Is there a subfield of economics devoted to "fragility vs resilience"? · 2020-07-21T13:26:04.555Z · EA · GW

Although not really a field, Nassim Taleb's book Antifragile springs to mind - I haven't read this myself but have seen it referenced in several discussions of economic fragility, so it might at least be a starting point to work with.

Comment by gavintaylor on Prioritizing COVID-19 interventions & individual donations · 2020-07-06T13:08:01.845Z · EA · GW
We are seeking additional recommendations for charities that operate in Latin America and the Arabian Peninsula, particularly in the areas of direct aid (cash transfers) and strengthening health systems.

Doe direto was running a trial to give cash transfers to vulnerable families in Brazil. They seem to have finished the trial now and I'm not sure if/when they will consider restarting it.

Comment by gavintaylor on gavintaylor's Shortform · 2020-07-03T13:20:56.340Z · EA · GW

Thanks Michael, I had seen that but hadn't looked at the links. Some comments:

The cause report from OPP makes the distinction between molecular nanotechnology and atomically precise manufacturing. The 2008 survey seemed to be explicitly considering weaponised molecular nanotechnology as an extinction risk (I assume the nanotechnology accident was referring to molecular nanotechnology as well). While there seems to be agreement that molecular nanotechnology could be a direct path to GCR/extinction, OPP presents atomically precise manufacturing as being more of an indirect risk, such as through facilitating weapons proliferation. The Grey goo section of the report does resolve my question about why the community isn't talking about (molecular) nanotechnology as an existential risk as much now (the footnotes are worth reading for more details):

‘Grey goo’ is a proposed scenario in which tiny self-replicating machines outcompete organic life and rapidly consume the earth’s resources in order to make more copies of themselves.40 According to Dr. Drexler, a grey goo scenario could not happen by accident; it would require deliberate design.41 Both Drexler and Phoenix have argued that such runaway replicators are, in principle, a physical possibility, and Phoenix has even argued that it’s likely that someone will eventually try to make grey goo. However, they believe that other risks from APM are (i) more likely, and (ii) very likely to be relevant before risks from grey goo, and are therefore more worthy of attention.42 Similarly, Prof. Jones and Dr. Marblestone have argued that a ‘grey goo’ catastrophe is a distant, and perhaps unlikely, possibility.43

OPP's discussion on why molecular nanotechnology (and cryonics) failed to develop as scientific fields is also interesting:

First, early advocates of cryonics and MNT focused on writings and media aimed at a broad popular audience, before they did much technical, scientific work ...
Second, early advocates of cryonics and MNT spoke and wrote in a way that was critical and dismissive toward the most relevant mainstream scientific fields ...
Third, and perhaps largely as a result of these first two issues, these “neighboring” established scientific communities (of cryobiologists and chemists) engaged in substantial “boundary work” to keep advocates of cryonics and MNT excluded ...

At least in the case of molecular nanotechnology, the simple failure of the field to develop may have been lucky (at least from a GCR-reduction perspective), as it seems that the research that was (at the time) most likely to lead to the risky outcomes was simply never pursued.

Comment by gavintaylor on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-02T16:27:57.552Z · EA · GW

Something that I think EAs may be undervaluing is scientific research done with the specific aim of identifying new technologies for mitigating global catastrophic or existential risks, particularly where these have interdisciplinary origins.

A good example of this is geoengineering (the merger of climate/environmental science and engineering), which has developed strategies that could allow for mitigating the effects of worst-case climate change scenarios. In contrast, the research being undertaken to mitigate worst-case pandemics seems to focus on developing biomedical interventions (biomedicine started as an interdisciplinary field, although it is now very well established as its own discipline). As an interdisciplinary scientist, I think there is likely to be further scope for identifying promising interventions from the existing literature, conducting initial analysis and modelling to demonstrate these could be feasible responses to GCRs, and then engaging in field-building activities to encourage further scientific research along those paths. The reason I suggest focusing on interdisciplinary areas is that merging two fields often results in unexpected breakthroughs (even to researchers from the two disciplines involved in the merger) and many 'low-hanging' discoveries that can be investigated relatively easily. However, such a workflow seems uncommon both in academia (which doesn't strongly incentivise interdisciplinary work or explicitly considering applications during early-stage research) and EA (which [with the exception of AI Safety] seems to focus on finding and promoting promising research after it has already been initiated by mainstream researchers).

Still, this isn't really a career option as much as it is a strategy for doing leveraged research, which seems like it would be better done at an impact-focused organisation than at a university. I'm personally planning to use this strategy and will attempt to identify and then model the feasibility of possible antiviral interventions at the intersection of physics and virology (although I haven't yet thought much about how to effectively promote any promising results).

Comment by gavintaylor on Consider a wider range of jobs, paths and problems if you want to improve the long-term future · 2020-06-30T18:20:37.499Z · EA · GW

It could also be the case that the impact distribution of orgs is not flat, yet we've only discovered a subset of the high-impact ones so far (speculatively, some of the highest-impact orgs may not even exist yet). So if the distribution of applicants is flatter, then they are still likely to satisfy the needs of the known high-impact orgs, and others might end up finding or founding orgs that we later recognise to be high impact.

Comment by gavintaylor on EA is risk-constrained · 2020-06-28T23:22:08.183Z · EA · GW

Sure, I agree that unvetted UBI for all EAs probably would not be a good use of resources. But I also think there are cases where a UBI-like scheme that funded people to do self-directed work on high-risk projects could be a good alternative to providing grants to fund projects, particularly at the early stage.

Comment by gavintaylor on EA is risk-constrained · 2020-06-28T22:05:52.081Z · EA · GW

Asking people who specialise in working on early-stage and risky projects to take care of themselves with runway may be a bit unreasonable. Even if a truly risky project (in the low-probability-of-a-high-return sense) is well executed, we should still expect it to have an a priori success rate of 1 in 10 or lower. Assuming that it takes six months or so to test the feasibility of a project, people would need to save several years' worth of runway if they wanted to be financially comfortable while continuing to pursue projects until one worked out (of course, lots of failed projects may be an indication that they're not executing well, but let's be charitable and assume they are). This would probably limit serious self-supported EA entrepreneurship to an activity one takes on at a mid-career or later stage (also noted by OPP in relation to charity founding):

Starting a new company is generally associated with high (financial) risk and high potential reward. But without a solid source of funding, starting a nonprofit means taking high financial risk without high potential reward. Furthermore, some nonprofits (like some for-profits) are best suited to be started by people relatively late in their careers; the difference is that late-career people in the for-profit sector seem more likely to have built up significant savings that they can use as a cushion. This is another reason that funder interest can be the key factor in what nonprofits get started.
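The runway arithmetic above can be made explicit with a minimal sketch, assuming independent attempts at the 1-in-10 success rate and six-month duration I mentioned (both just illustrative figures):

```python
def expected_runway_years(p_success=0.1, months_per_attempt=6):
    """Expected years of self-funding needed to reach one success,
    if each attempt independently succeeds with probability p_success.
    Defaults use the illustrative figures from this comment."""
    expected_attempts = 1.0 / p_success  # mean of a geometric distribution
    return expected_attempts * months_per_attempt / 12.0

print(expected_runway_years())  # 5.0 years on average
```

And that is only the average; roughly a third of people would still be unfunded after five years of attempts, which is why I think the burden shouldn't rest entirely on personal savings.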
Comment by gavintaylor on EA is risk-constrained · 2020-06-28T21:05:03.767Z · EA · GW

At the moment I think there aren't obvious mechanisms to support independent early-stage and high-risk projects at the point where they aren't well defined and, more generally, to support independent projects that aren't intended to lead to careers.

As an example that addresses both points, one of the highest-impact things that I'm considering working on currently is a research project that could either fail in ~3 months or, if successful, occupy several years of work to develop into a viable intervention (with several more failure points along the way).

With regards to point 1: At the moment, my only option seems to be applying for seed funding, doing some work, and if that is successful, applying to another funder to provide longer-term project funding (probably on several occasions). Each funding application is both uncertain and time consuming, and knowing this somewhat disincentivises me from even starting (although I have recently applied for seed-stage funding). Having a funding format that started at project inception and could be renewed several times would be really helpful. I don't think something like this currently exists for EA projects.

With regards to point 2: As a researcher, I would view my involvement with the project as winding down if/when it leads to a viable intervention - while I could stay involved as a technical advisor, I doubt I'd contribute much after the technology is demonstrated, nor do I imagine particularly wanting to be involved in later-stage activities such as manufacturing and distribution. This essentially means that the highest-impact thing I can think of working on would probably need my involvement for, at most, a decade. If it did work out then I'd at least have some credibility to get support for doing research in another area, but taking a gamble on starting something that won't even need your involvement after a few years hardly seems like sound career advice to give (although from the inside view, it is quite tempting to ignore that argument against doing the project).

I think that lack of support in these areas is most relevant to independent researchers or small research teams - researchers at larger organisations probably have more institutional support when developing or moving between projects, while applied work, such as distributing an intervention, should be somewhat easier to plan out.

Comment by gavintaylor on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T20:11:43.535Z · EA · GW

I haven't seen the talk yet, but tend to agree that industrial ideas and technology were probably exported very quickly after their development in Europe (and later the US), which probably displaced any later and independent industrial revolution.

I think it's also worth noting that the industrial revolution occurred after several centuries of European colonial expansion, during which material wealth was being sent back to Europe. For example, in the 300 years before the industrial revolution, American colonies accounted for >80% of the world's silver production. So considering the Industrial Revolution to simply have been a European phenomenon could be substantially understating the more global scope of the material contribution that may have facilitated it. However, it's hard to know if colonial wealth was required to create the right conditions for an industrial revolution or simply helped to speed it up. (Interestingly, China was going on successful voyages of discovery in the early 15th century but had apparently abandoned its navy by the mid-15th century. If China had instead gone on to start colonial activities around the same time as Europe, maybe Eastern industry would have started developing before the Western industrial tradition was imported.)

Comment by gavintaylor on MichaelA's Shortform · 2020-06-28T19:20:37.497Z · EA · GW

Guns, Germs, and Steel - I felt this provided a good perspective on the ultimate factors leading up to agriculture and industry.

Comment by gavintaylor on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T19:13:03.144Z · EA · GW

In Guns, Germs, and Steel, Diamond comments briefly on technological stagnation and regression in small human populations (mostly in relation to Australian aborigines). I don't know if there is much theoretical basis for this, but he suggests that the population size required to support even quite basic agricultural technology is likely much larger than the minimum genetically viable population.

So even if knowledge isn't explicitly destroyed in a catastrophe, if humanity is reduced to small groups of subsistence farmers then it seems probable that the technological level they can utilize will be much lower than that of the preceding society (although probably higher than the same population level without a preceding society). The lifetime of unmaintained knowledge is also a limiting factor - books and digital media may degrade before the new civilisation is ready to make use of them (unless they plan ahead to maintain them). But I agree that this is all very speculative.

Comment by gavintaylor on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T18:49:49.577Z · EA · GW

I think this needs clarifying: the probability of getting industry conditional on already having agriculture may be more likely than the probability of getting agriculture in the first place, but as agriculture seems to be necessary for industry, the total likelihood of getting industry is almost certainly lower than that of getting agriculture (i.e. most of the difficulty in developing an industrial society may be in developing that preceding agricultural society).
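The arithmetic behind this clarification is simple but worth making explicit; the probabilities below are invented purely for illustration:

```python
# If industry requires agriculture, the total probability of industry is
# the product of the two stages, which can never exceed the probability
# of agriculture alone. (Values are made up for illustration.)
p_agriculture = 0.3                 # chance a civilisation develops agriculture
p_industry_given_agriculture = 0.8  # chance of industry once agriculture exists

p_industry = p_industry_given_agriculture * p_agriculture
print(round(p_industry, 2))  # 0.24, which is < 0.3
```

So even a very high conditional probability of industry leaves the overall bottleneck at the agricultural step.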

Comment by gavintaylor on Space governance is important, tractable and neglected · 2020-06-25T13:10:24.725Z · EA · GW

Would policies to manage orbital space debris be a good candidate for short-term work in this area, particularly if they can be directed at preventing tail-risk scenarios such as runaway collision cascades (Kurzgesagt has a cute video on this)? Although larger pieces of space debris are tracked and some efforts are currently being made to test debris-removal methods, it seems like this could suffer from a free-rider problem in the same way international climate change policy does (i.e. a lot of countries are scaling up their space programs, but most may rely on the US to take the lead on debris management).

In the event that there is a collision cascade it also seems like it could create a weak form of the future trajectory lock-in scenario that Ord describes in the Precipice, in that humanity would be 'locked-out' of spacefaring (and satellite usage) for as long as it took to clean up the junk or until enough of it naturally fell out of orbit (possibly centuries).

Comment by gavintaylor on EA is risk-constrained · 2020-06-24T17:47:53.376Z · EA · GW

'Here' just links back to this post - I think you meant to link somewhere else?

Comment by gavintaylor on EA is risk-constrained · 2020-06-24T16:04:30.771Z · EA · GW

Jade Leung's EAGx talk 'Fostering longtermist entrepreneurship' touched on some relevant ideas related to individual capacity for risk taking. (this isn't in the public CEA playlist, but a recording is still available via the Grip agenda)

Comment by gavintaylor on What coronavirus policy failures are you worried about? · 2020-06-23T00:27:12.793Z · EA · GW

This is more of a current issue, but I'm somewhat worried that vaccines will be rushed through safety testing, and then unidentified side-effects will end up having substantial medical consequences (possibly in a subgroup). This could (further) erode public confidence in scientific and medical authorities (and increase anti-vaxxer support) and lead to generally decreased vaccination rates for other diseases. Additionally, this could mean that when a truly devastating viral pandemic occurs, the public may be less willing to take 'a gamble' with a vaccine that's gone through accelerated development, even if the stakes are higher (i.e. something like: be careful of that new H5N1 vaccine, remember that time they rushed through the coronavirus vaccine and all the <insert demographic subgroup> had kidney failure a year later?).

Comment by gavintaylor on gavintaylor's Shortform · 2020-05-28T13:54:43.060Z · EA · GW

Participants in the 2008 FHI Global Catastrophic Risk conference estimated a probability of extinction from nano-technology at 5.5% (weapons + accident) and non-nuclear wars at 3% (all wars - nuclear wars) (the values are on the GCR wikipedia page). In the Precipice, Ord estimated the existential risk of Other anthropogenic risks (noted in the text as including but not limited to nano-technology, and I interpret this as including non-nuclear wars) as 2% (1 in 50). (Note that by definition, extinction risk is a sub-set of existential risk.)
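The aggregate figures quoted above can be recomposed from the survey's component estimates; the component values below (5% nanotech weapons, 0.5% nanotech accident, 4% all wars, 1% nuclear war) are my recollection of the survey table and should be checked against the source:

```python
# Recomposing the 2008 FHI survey aggregates quoted in the text.
# Component values are assumptions recalled from the survey table.
nano_weapons, nano_accident = 0.05, 0.005
all_wars, nuclear_war = 0.04, 0.01

print(round(nano_weapons + nano_accident, 3))  # 0.055 -> the 5.5% quoted
print(round(all_wars - nuclear_war, 3))        # 0.03  -> the 3% quoted
```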

Since starting to engage with EA in 2018 I have seen very little discussion about nano-technology or non-nuclear warfare as existential risks, yet it seems that in 2008 these were considered risks on par with top longtermist cause areas today (nanotechnology weapons and AGI extinction risks were both estimated at 5%). I realize that Ord's risk estimates are his own while the 2008 data is from a survey, but I assume that his views broadly represent those of his colleagues at FHI and others in the GCR community.

My open question is: what new information or discussion over the last decade led the GCR community to reduce their estimate of the risks posed by (primarily) nano-technology and also conventional warfare?