Comments

Comment by gwern on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-02T23:23:36.301Z · EA · GW

I mostly agree with that, with the further caveat that I tend to think the low value reflects not that ML is useless but the inertia of a local optimum, where the gains from automation are low because so little else is automated and vice-versa ("automation as colonization wave"). This is part of why, I think, we see broader macroeconomic trends like big-tech productivity pulling away from everyone else: many organizations are just too incompetent to meaningfully restructure themselves or their activities to take full advantage. Software is surprisingly hard from a social and organizational point of view, and ML more so. A recent example is coronavirus/remote work: it turns out that remote work is in fact totally doable for all sorts of things people swore it couldn't work for - at least when you have a deadly global pandemic solving the coordination problem...

As for my specific tweet, I wasn't talking about making $$$, just doing cool projects and research. People should be a little more imaginative about applications. Lots of people angst about how they can possibly compete with OA or GB or DM, but the reality is, as crowded as specific research topics like 'yet another efficient Transformer variant' may be, as soon as you add a single qualifier like 'DRL for dairy herd management' or 'for anime', you suddenly have the entire field to yourself. There's a big lag between what you see on Arxiv and what's out in the field. Even DL from 5 years ago, like CNNs, can be used for all sorts of things for which it is not at present used. (Making money or capturing value is, of course, an entirely different question; as fun as This Anime Does Not Exist may be, there's not really any good way to extract money from it. So it's a good thing we don't do it for the money.)

Comment by gwern on Asya Bergal: Reasons you might think human-level AI is unlikely to happen soon · 2020-09-20T20:55:33.987Z · EA · GW

Lousy paper, IMO. There is much more relevant and informative research on compute scaling than that.

Comment by gwern on Does Economic History Point Toward a Singularity? · 2020-09-06T22:28:53.686Z · EA · GW

I think your confusion with the genetics papers is because they are talking about _effective_ population size (N~e~), which is not at all the same thing as total population size. Effective population size is a highly technical genetic statistic which has little to do with total population size except under conditions which definitely do not obtain for humans. It's vastly smaller for humans (on the order of 10^4) because populations have expanded so much, there have been various demographic bottlenecks, and reproductive patterns have changed a great deal. It's entirely possible for effective population size to drop drastically even as the total population is growing rapidly. (For example, if one tribe with new technology genocided a distant tribe and replaced it, the total population might be growing rapidly due to the new tribe's superior agriculture, but the effective population size would have just shrunk drastically, as a lot of genetic diversity gets wiped out. Ancient DNA studies indicate there have been an awful lot of population replacements during human history, and this is why effective population size has dropped so much.) I don't think you can get anything useful out of effective-population-size numbers for economics purposes without making so many assumptions and simplifications as to render the estimates far more misleading than whatever direct estimates you're trying to correct; they just measure something irrelevant but misleadingly similar-sounding to what you want.
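
(For intuition: in the simplest idealized model, the long-run N~e~ over t generations is approximately the *harmonic* mean of the per-generation sizes - the formula below is a textbook approximation I'm supplying for illustration, not something from the papers under discussion:

$$\frac{1}{N_e} \approx \frac{1}{t}\sum_{i=1}^{t}\frac{1}{N_i}$$

A harmonic mean is dominated by its smallest terms, so a handful of bottleneck or replacement generations at N ≈ 10^3-10^4 can hold N~e~ orders of magnitude below the census size, no matter how many generations are spent in the billions.)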

Comment by gwern on Super-exponential growth implies that accelerating growth is unimportant in the long run · 2020-08-16T18:32:31.823Z · EA · GW

This seems like a retread of Bostrom's argument that, despite the astronomical waste of any delay, x-risk reduction is important regardless of whether it comes at the cost of growth. Does any part of this actually rely on Roodman's superexponential growth? It seems like it would be true for almost any growth rate (as long as it doesn't take literally billions or hundreds of billions of years to reach the steady state).
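
(A toy version of the point, my own illustration rather than anything in the post: suppose utility per period grows until it hits a steady-state ceiling $\bar{u}$ at time $t_s$, and the accessible future lasts until $T \gg t_s$. Then

$$V \approx P_{\text{survive}} \cdot \bar{u} \cdot (T - t_s)$$

Accelerating growth only shrinks $t_s$, gaining at most $\bar{u} \cdot t_s$, which is negligible when $T \gg t_s$; reducing x-risk multiplies the entire term. Superexponential growth merely makes $t_s$ smaller still - it isn't doing the real work.)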

Comment by gwern on Genetic Enhancement as a Cause Area · 2020-01-13T00:28:20.347Z · EA · GW
> Recent GWASs on other complex traits, such as height, body mass index, and schizophrenia, demonstrated that with greater sample sizes, the SNP h2 increases. [...] we suspect that with greater sample sizes and better imputation and coverage of the common and rare allele spectrum, over time, SNP heritability in ASB [antisocial behavior] could approach the family based estimates.

I don't know why Tielbeek says that, unless he's confusing SNP heritability with the PGS: a SNP heritability estimate is unconnected to sample size. Increasing n will reduce the standard error, but assuming you don't have a pathological case like GCTA computations diverging to a boundary of 0, it should not on average either increase or decrease the estimate. Better imputation and/or more sequencing will definitely yield a new, different, larger SNP heritability, but I am really doubtful that it will reach the family-based estimates: using pedigrees in GREML-KIN doesn't reach the family-based Neuroticism estimate, for example, even though it gets IQ close to the IQ lower bound.


> For example, the meta-analysis by Polderman et al. (2015, Table 2) suggests that 93% of all studies on specific personality disorders “are consistent with a model where trait resemblance is solely due to additive genetic variation”. (Of note, for “social values” this fraction is still 63%).

Twin analysis can't distinguish between rare and common variants, AFAIK.

The SNP heritabilities I'm referring to are those at https://en.wikipedia.org/w/index.php?title=Genome-wide_complex_trait_analysis&oldid=871623331#Psychological ; they're quite low across the board, and https://www.biorxiv.org/content/10.1101/106203v2 shows that the family-specific rare variants (which are still additive, just rare) account for almost twice as much variance as the common variants. A common SNP heritability of 10% is a serious limit, as it upper-bounds the PGSes which will be available anytime soon, and it also hints at very small average effects, making detection even harder. Actually, 10% is much worse than it seems even compared to the quoted 30% for IQ, because personality is easy to measure compared to IQ, and the UKBB has better personality inventories than IQ measures (at least, substantially higher test-retest reliabilities, IIRC).
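
(To see why the SNP heritability is the binding constraint: one standard approximation for expected PGS accuracy, in the style of Daetwyler et al 2008 - where M~e~, the effective number of independent loci, is a round number I've assumed purely for illustration - is

$$R^2 \approx \frac{h^2_{\mathrm{SNP}}}{1 + M_e/(N\,h^2_{\mathrm{SNP}})}$$

With $h^2_{\mathrm{SNP}} = 0.10$, $M_e = 50{,}000$, and even $N = 1{,}000{,}000$, that gives $R^2 \approx 0.10/1.5 \approx 0.067$: you approach the 10% ceiling slowly and from below.)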

> Dominance...And what about epistasis? Is it just that there are quadrillions of possible combinations of interactions and so you would need astronomical sample sizes to achieve sufficient statistical power after correcting for multiple comparisons?

Yes. It is difficult to foresee any path towards cracking a reasonable amount of the epistasis, unless you have faith in neural-net magic starting to work once you have millions or tens of millions of genomes, or something. So for the next decade, I'd predict, you can write off any hope of exploiting epistasis remotely as well as we already can exploit additivity. (Epistasis also makes it a little harder to plan interventions: do you wind up in local optima? Does the intervention fall apart in the next generation after recombination? etc. But this is minor compared to the problem that no one knows what the epistasis is.) I'm less familiar with how well dominance can work.
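
(The multiple-comparisons arithmetic, using round illustrative numbers: with $M = 10^6$ SNPs there are

$$\binom{M}{2} = \frac{10^6(10^6-1)}{2} \approx 5\times10^{11}$$

pairwise interactions to test, so a Bonferroni-style threshold becomes $\alpha \approx 0.05/(5\times10^{11}) = 10^{-13}$, versus the usual genome-wide $5\times10^{-8}$ - and that is before considering three-way or higher interactions, whose counts grow as $\binom{M}{k}$.)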

----

So to summarize: the SNP heritabilities are all strikingly low, often <10%, and pretty much always <20%. These are real estimates, not anomalies driven by sampling error, nor largely deflated by measurement error. The PGSes, accordingly, are often near-zero and have no hits. The affordable increases in sample sizes using common SNP genotyping will, hopefully, push them up to the SNP heritability limit; but for perspective, recall that IQ PGSes 2 years ago were *already* up to 11% (Allegrini et al 2018) with at least 20% still to go, and IQ isn't even that big a GWAS success story (eg. height is >40%). The 'huge success' story for personality research is that, with another few million samples years and years from now, it can reach where a modestly successful trait was years ago - before it hits a hard dead end and needs much more expensive sequencing technology in generally brand-new datasets, at which point the statistical-power issues become far more daunting (because rare variants are, by definition, rare), and other sources of predictive power like epistatic variants will remain inaccessible (barring considerable luck in someone coming up with a method which can actually handle epistasis etc). The value of the possible selection for the foreseeable future will be very small, and is already exceeded by selection on many other traits, which will continue to progress more rapidly, increasing the delta and making selection on personality traits an ever harder sell to parents, since it will largely come at the expense of larger gains on other traits.

Could you select for personality traits? A little bit, yeah. But it's not going to work well compared to things selection does work well for, and it will continue not working well for a long time.

Comment by gwern on Genetic Enhancement as a Cause Area · 2019-12-26T22:15:09.219Z · EA · GW

How do you plan to deal with the observation that GWASes on personality traits have largely failed, that the SNP heritabilities are often near-zero, and that this fits with balancing-selection models of how personality works in humans?

Comment by gwern on Genetic Enhancement as a Cause Area · 2019-12-26T21:05:15.823Z · EA · GW
> Also, how mature is the concept of Iterated Embryo Selection?

The concept itself dates back to 1998, as far as I can tell, based on similar ideas dating back at least a decade before that.

There has been enormous progress on various parts of the hypothetical process; just yesterday, Tian et al 2019 reported taking ovarian cells (not eggs), converting them into mouse eggs, fertilizing them, and yielding live, healthy, fertile mice. This is a big step towards 'massive embryo selection' (do 1 egg-harvesting cycle, create hundreds or thousands of eggs from the collected egg + non-egg cells, fertilize, and select, yielding >1SD gains), and of course, the more control you have over gametogenesis in general, the closer you are to a full IES process.
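
(Where does '>1SD' come from? A minimal order-of-magnitude sketch, under toy assumptions of my own rather than a full analysis: embryo PGS deviations are i.i.d. normal within a batch, within-family PGS variance is half the population PGS variance, and we have a hypothetical PGS capturing 30% of phenotypic variance:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_gain(n_embryos, pgs_r2=0.30, trials=10_000):
    """Expected gain (in phenotypic SDs) from picking the top-PGS embryo of n.
    Toy model: embryo PGS deviations are i.i.d. normal within a batch, with
    within-family PGS variance = half the population PGS variance (pgs_r2/2)."""
    sd = np.sqrt(pgs_r2 / 2)                         # within-family PGS SD
    scores = rng.normal(0.0, sd, size=(trials, n_embryos))
    return scores.max(axis=1).mean()                 # E[max] = expected gain of the pick

for n in (5, 100, 1000):
    print(f"{n:>4} embryos: +{expected_gain(n):.2f} SD")
```

With those assumptions, ~5 embryos (ordinary IVF) gives ~+0.45 SD, while 1000 embryos gives roughly +1.25 SD; swap in a 10%-variance PGS like the personality ones above and even 1000 embryos yields only ~+0.7 SD, which is part of why selection on personality is such a hard sell.)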

The animal geneticists are excited about IES, to the point of reinventing it like 3 times over the past few years, and are actively discussing implementing it for cattle. Humans, of course, who knows? But I wouldn't want to bet against IES happening during the 2020s for some species, at least in lab demonstrations. (For comparison, think about the state of the art for GWASes, editing, gametogenesis, and cloning in 2010 vs now.)

So I would phrase it as: much more obscure an idea than it deserves to be, with lots of challenging technical & engineering work still to be done, but well within current foreseeability; it will likely happen quite soon, on the scale of 1-3 decades (being highly conservative), even without any particularly focused research efforts or 'Manhattan projects', because the required technologies are either far too useful in general (stem-cell creation, gametogenesis) or have constituencies who want them a lot (animal breeders/geneticists, wealthy gay couples).

Comment by gwern on Are we living at the most influential time in history? · 2019-09-10T03:10:34.431Z · EA · GW

One of the amusing things about the 'hinge of history' idea is that some people make the mediocrity argument about their present time - and are wrong.

Isaac Newton, for example, ~300 years ago appears to have made an anthropic argument against claims that he lived in a special time - that there was any kind of, say, 'Revolution' under way - based on the visible acceleration of progress and the recent invention of so many technologies: in reality, there was an ordinary rate of innovation, and the recent invention of many things merely showed that humans had a very short past and were still making up for lost time (because comets routinely drove intelligent species extinct).

And ~1800 years before Newton, Lucretius (probably relaying older Epicurean arguments) argued similarly that Greece & Rome were not any kind of exception in human history - certainly humans hadn't existed for hundreds of thousands or millions of years! - and if Greece & Rome seemed innovative compared to the dark past, it was merely because "our world is in its youth: it was not created long ago, but is of comparatively recent origin. That is why at the present time some arts are still being refined, still being developed."

One could read these mistakes in a very Kurzweilian fashion: if progress is accelerating or even just stable, every era *can* be (much) more innovative and influential on the future than every preceding era, and the mediocrity argument can be wrong every time.

Comment by gwern on Ingredients for creating disruptive research teams · 2019-07-21T21:08:02.365Z · EA · GW

On the other hand, in that same talk, Hamming pointed out the importance of abundant computing resources:

> One lesson was sufficient to educate my boss as to why I didn't want to do big jobs that displaced exploratory research and why I was justified in not doing crash jobs which absorb all the research computing facilities. I wanted instead to use the facilities to compute a large number of small problems. Again, in the early days, I was limited in computing capacity and it was clear, in my area, that a "mathematician had no use for machines." But I needed more machine capacity. Every time I had to tell some scientist in some other area, "No I can't; I haven't the machine capacity," he complained. I said "Go tell your Vice President that Hamming needs more computing capacity." After a while I could see what was happening up there at the top; many people said to my Vice President, "Your man needs more computing capacity." I got it!

Comment by gwern on EA Forum: Footnotes are live, and other updates · 2019-05-26T02:38:23.170Z · EA · GW

Both the hover-overs and sidenotes on gwern.net are pure JS, requiring no modifications to the original Markdown or generated HTML footnotes: they just run clientside to modify the appearance, and degrade to the original footnotes if JS is disabled. (Obormot says feel free to contact him if you want/need any help integrating stuff.) For more on sidenotes, see https://www.gwern.net/Sidenotes

Comment by gwern on Is visiting North Korea effective? · 2019-04-04T19:51:20.211Z · EA · GW

The NK government permits and actively encourages foreign tourism for the cold hard foreign currency, the external & internal propaganda benefits, and the opportunities for hostage-taking, because it calculates that these benefits outweigh any drawbacks of a closely-watched tourist being escorted along beaten paths from propaganda site to propaganda site. An inexperienced non-native foreign tourist visiting for non-tourist reasons presumably believes the opposite. Who is more likely to be correct?

Comment by gwern on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-22T02:33:10.988Z · EA · GW

Bitcoin definitely didn't become popular because of its wiki. Early on, I wanted to contribute to the wiki (I think as part of my DNM work), and I went to register and... you had to pay bitcoins to register. -_- I never did register or edit it, IIRC. And certainly people didn't use it much, aside from early use of the FAQ.

An EA wiki would be sensible. While EAers probably spend too little time adding standard factual material to Wikipedia, material like 'cause prioritization' would be a poor fit for Wikipedia articles, because it necessarily involves lots of Original Research, a specific EA POV, coverage of non-Notable topics and interventions (if they were already Notable, they might not be a good use of resources for EA!), etc.

My preference for special-purpose wikis is a two-tier structure: all the factual standard material goes into Wikipedia, benefiting from the fully-built-out set of encyclopedia articles & editing community & tools & traffic, and the more controversial, idiosyncratic material building on that foundation appears on the special-purpose wiki. But I admit I have no proof that this strategy works in general or would be suitable for a cause-prioritization wiki. (At least one problem is that people won't read the relevant WP article while reading the individual special-purpose wiki, because of the context switch.)

Comment by gwern on Effective altruism is self-recommending · 2018-06-22T01:44:03.432Z · EA · GW

> It's hard for me to believe that the effect of bednets is large enough to show an effect in RCTs, but not large enough to show up more often than not as a result of mass distribution of bednets.

You may find it hard to believe, but nevertheless, that is the fact: correlational results can easily be several times the true causal effect, in either direction. If you really want numbers, see the papers & meta-analyses comparing correlations against the causal estimates from simultaneous or later randomized experiments, which I've compiled at https://www.gwern.net/Correlation ; they have plenty of numbers. It is easy for a causal effect to be swamped by time trends or other correlates, so a followup correlation cannot and should not override credible causal results. This is why we need RCTs in the first place. Followups can do useful things, like measure whether the implementation is being delivered, or provide correlational data on things not covered by the original randomized experiments (like unconsidered side effects), but they cannot retry the original case with double jeopardy.
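
(A toy simulation of the 'either direction' point - the targeting rule and every number below are invented purely for illustration: if nets are preferentially distributed where malaria risk is already high, the naive with-vs-without comparison can show nets 'increasing' malaria even when their true causal effect is a solid reduction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # villages; all parameters are made up for illustration

risk = rng.normal(0, 1, n)                  # latent malaria risk per village
nets = (risk + rng.normal(0, 1, n)) > 0     # nets targeted at high-risk villages
# True causal effect of nets: *reduce* malaria incidence by 0.5 units
malaria = 2 * risk - 0.5 * nets + rng.normal(0, 1, n)

naive = malaria[nets].mean() - malaria[~nets].mean()
print(f"naive with-vs-without comparison: {naive:+.2f}  (looks harmful)")
print("true causal effect:               -0.50  (actually beneficial)")
```

Here the confounding by targeting overwhelms the causal effect by a factor of several, with the opposite sign - exactly the situation randomization exists to break.)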

Comment by gwern on Effective altruism is self-recommending · 2017-07-21T21:24:48.646Z · EA · GW

> The data was noisy, so they simply stopped checking whether AMF’s bed net distributions do anything about malaria.

This is an unfair gotcha. What would the point of this be? Of course the data is noisy. Not only is it noisy, it is irrelevant - if it were not, there would never have been any need to run randomized trials in the first place; you would simply dump the bednets where convenient and check malaria rates. The whole point of randomized trials is the realization that correlational data is extremely weak and cannot give reliable causal inferences. (I can certainly imagine reasons why malaria rates might go up in regions where AMF does bednet distribution, just as I can imagine reasons why death rates might be greater, or increase over time, in patients prescribed a new drug X compared to patients not prescribed X...) If they did the followups and malaria rates held stable or increased, you would not then believe that the bednets do not work; if it takes randomized trials to justify spending on bednets, it cannot then take only surveys to justify not spending on bednets, as the causal question is identical. Since it does not affect any decisions, it is not important to measure. Or, if it did, what you ought to be criticizing GiveWell & AMF for, along with everyone else, is ever advocating & spending resources on highly unethical randomized trials, rather than criticizing them for not doing some followup surveys.

(A reasonable critique might be that they are not examining whether the intervention - which has been identified as causally effective and passing a cost-benefit - is being correctly delivered, the right people getting the nets, and using the nets. But as far as I know, they do track that...)

Comment by gwern on Tentative Thoughts on the SENS Foundation · 2017-05-17T02:35:40.600Z · EA · GW

Saying 'very little progress' seems to considerably understate it: many cancers are now treatable which were once untreatable, and even some former death sentences can be cured. Moreover, much of that research spending went to expensive but now-obsolete methods, or to building knowledge bases and tools which are now available for anti-aging research. (While Apollo may have cost $26b to put a man on the moon in 1969, it should not cost another $26b in 2017 to put another man on the moon.)

Comparing with cancer is interesting in part because they're so different. Cancer is a hostile self-reproducing ecosystem which literally evolves as it is treated; aging and senescent cells, however, appear to be none of those things. For example, it appears to be a lot easier to trick a senescent cell into committing suicide than a cancer cell ('Targeted Apoptosis of Senescent Cells Restores Tissue Homeostasis in Response to Chemotoxicity and Aging', Baar et al 2017).

> Why should we expect a SENS-inspired "war on aging" to make lots of progress, on all seven causes of aging

Do you really need progress on all 7? Mortality with age follows a Gompertz distribution, which has an exponential term increasing mortality risk with age and a baseline hazard; interventions on the aging process itself, as opposed to tinkering with improved fixes for symptoms like cancer, would seem to affect the exponential term rather than the baseline term. Since the Gompertz mortality curve is dominated by the exponential term, not the baseline hazard, even small reductions in the aging rate lead to large changes in life expectancy. (In contrast, large reductions in the baseline hazard, like halving it, only add a few years.)
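
(A quick numerical illustration, with made-up but human-ish Gompertz parameters rather than values fitted to real life tables:

```python
import numpy as np

def life_expectancy(a, b, t_max=500, dt=0.05):
    """Life expectancy at birth under a Gompertz hazard h(t) = a*exp(b*t),
    computed by numerically integrating the survival curve
    S(t) = exp(-(a/b) * (exp(b*t) - 1))."""
    t = np.arange(0, t_max, dt)
    survival = np.exp(-(a / b) * np.expm1(b * t))
    return survival.sum() * dt

a, b = 1e-4, 0.085  # assumed baseline hazard & aging rate (per year), illustrative

print(f"baseline:             {life_expectancy(a, b):5.1f} years")
print(f"halve baseline (a):   {life_expectancy(a / 2, b):5.1f} years")  # ~ +8 years
print(f"halve aging rate (b): {life_expectancy(a, b / 2):5.1f} years")  # ~ +56 years
```

Under these toy parameters, halving the baseline hazard - roughly what heroic disease-by-disease medicine buys you - adds well under a decade, while halving the exponential aging rate nearly doubles life expectancy; so partial progress on even a subset of the seven causes could matter enormously.)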