Comment by ashwinacharya on Near-term focus, robustness, and flow-through effects · 2019-02-05T19:47:07.269Z · score: 4 (4 votes) · EA · GW

He brought this up in a conversation with me; I don't know if he's written it up anywhere.

Comment by ashwinacharya on Cause profile: mental health · 2019-02-05T19:00:33.820Z · score: 2 (2 votes) · EA · GW

Thanks for the thoughts, Michael. Sorry for the minor thread necro - Milan just linked me to this comment from my short post on short-termism.

The first point feels like a crux here.

On the second, the obvious counterargument is that it applies just as well to e.g. murder; in the case where the person is killed, "there is no sensible comparison to be made" between their status and that in the case where they are alive.

You could still be against killing for other reasons, like effects on friends of the victim, but I think most people have an intuition that the effects of murder on the victim alone are a significant argument against it. For example, it seems strange to say it's fine to kill someone when you're on a deserted island with no hope of rescue, no resource constraints, and when you expect the murder to have no side effects on you.

I guess the counter-counterargument is something like "while they were alive, if they knew they were going to die, they would not approve." But that seems like a fallback to the first point, rather than an affirmation of the second.

A relevant thought experiment: upon killing the other islander, the murderer is miraculously given the chance to resurrect them. This option is only available after the victim is dead; should it matter what their preferences were in life? (I think some people would bite this bullet, which also implies that generally living in accordance with our ancestors' aggregate wishes is good.)

Near-term focus, robustness, and flow-through effects

2019-02-04T20:58:26.023Z · score: 26 (13 votes)
Comment by ashwinacharya on Climate Change Is, In General, Not An Existential Risk · 2019-01-12T18:36:11.246Z · score: 8 (4 votes) · EA · GW

One terminology for this is introduced in "Governing Boring Apocalypses", a recent x-risk paper. They call direct bad things like nuclear war an "existential harm", but note that two other key ingredients are necessary for existential risk: existential vulnerability (reasons we are vulnerable to a harm) and existential exposure (ways those vulnerabilities get exposed). I don't fully understand the vulnerability/exposure split, but I think e.g. nuclear posturing, decentralized nuclear command structures, and launch-on-warning systems constitute a vulnerability, while global-warming-caused conflicts could lead to an exposure of this vulnerability.

(I think this kind of distinction is useful, so we don't get bogged down in debates or motte/baileys over whether X is an x-risk because of indirect effects, but I'm not 100% behind this particular typology.)

Comment by ashwinacharya on What’s the Use In Physics? · 2018-12-31T07:36:29.480Z · score: 4 (4 votes) · EA · GW

You mention nanotechnology; in a similar vein, understanding molecular biology could help deal with biotech x-risks. Knowing more about plausible levels of manufacture/detection could help us understand the strategic balance better, and there’s obviously also concrete work to be done in building eg better sensors.

On the more biochemical end, there's the mechanical and biological engineering needed for cultured meat.

Also, wrt non-physics careers, a major one is quantitative trading (eg at Jane Street), which seems to benefit from a physics-y mindset and use some similar tools. I think there’s even a finance firm that mostly hires physics PhDs.

Comment by ashwinacharya on [Link] Vox Article on Engineered Pathogens/ Global Catastrophic Biorisks · 2018-12-10T19:45:08.747Z · score: 3 (2 votes) · EA · GW

Interesting, scary stuff. I've been reading up on biotech/bioweapons a bit as part of my research on AI strategy. They're interesting both because there could be dangerous effects from AI improving bioweapons*, and because they're a relatively close analogue to AI by virtue of their dual-use, concealability, and reasonably large-scale effects.

Do you know of good sources on bioweapons strategy, offense-defense dynamics, and potential effects of future advances? I'm reading Koblentz's Living Weapons right now and it's quite good, but I haven't found many other leads. (I'd think there would be more papers on this; maybe they're mostly kept secret, or maybe I'm using the wrong keywords.)

*My impression from Koblentz is that foreseeable advances in biotech aren't hugely destabilizing, since bioattacks aren't a good strategic threat; military locations can be pretty effectively hardened against them for not-unbearable costs. One danger I'm curious about is the scope of potential attacks in 20-30 years; could there be devastating, hard-to-trace attacks on civilian populations?

Comment by ashwinacharya on Modelling the Good Food Institute - Oxford Prioritisation Project · 2017-05-20T22:27:38.392Z · score: 4 (4 votes) · EA · GW

However, this approach is a bit silly because it does not model the acceleration of research: If there are no other donors in the field, then our donation is futile because £10,000 will not fund the entire effort required.

Could you explain this more clearly to me, please? With some stats as an example it'll likely be much clearer. Looking at the development of the Impossible Burger seems a fair phenomenon to base GFI's model on, at least for now, and at least insofar as it is being used to model a GFI donation's counterfactual impact in supporting similar products GFI is trying to push to market. I don't understand why the approach is silly because $10,000 wouldn't support the entire effort, or how that is tied to the acceleration of research.

There are two ways donations to GFI could be beneficial: speeding up a paradigm-change that would have happened anyway, and increasing the odds that the change happens at all. I think it's not unreasonable to focus on the former, since there aren't fundamental barriers to developing vat meat and there are some long-term drivers for it (energy/land efficiency, demand).

However, in that case, it helps to have some kind of model for the dynamics of the process. Say you think it'll take $100 million and 10 years to develop affordable vat burgers; $1million now probably represents more than .1 year of speedup, since investors will pile on as the technology gets closer to being viable. But how much does it represent? (And, also, how much is that worth?) Plus, in practice we might want to decide between different methods and target meats, but then we need to have a decent sense of the responses for each of those.
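As a toy illustration of the "investors pile on" point, here is a sketch with made-up numbers: the $100M total cost and 10-year timeline from the paragraph above, plus an assumed 20:1 ratio between the final and initial funding rates. If later investment is triggered by progress rather than by the calendar, a donation now shifts the whole funding curve earlier:

```python
# Toy model of donation speed-up for a progress-driven technology.
# Assumptions (all hypothetical): total cost C = $100M, timeline
# T = 10 years, funding rate grows exponentially toward viability,
# and later investment is triggered by progress, so a donation now
# shifts the entire funding curve left in time.
import math

C = 100e6    # total R&D cost ($)
T = 10.0     # years to an affordable product
ratio = 20   # assumed final/initial funding-rate ratio (hypothetical)

g = math.log(ratio) / T       # growth rate of the funding rate
f0 = C * g / (ratio - 1)      # initial rate, so that the integral of f0*e^(g*t) over [0,T] equals C

D = 1e6                       # donation today
speedup = D / f0              # years the curve shifts left
print(f"initial funding rate: ${f0/1e6:.2f}M/yr")
print(f"speed-up from $1M now: {speedup:.2f} years "
      f"(vs {D/(C/T):.2f} years under uniform funding)")
```

Under these assumptions the $1M buys roughly 0.6 years of speed-up rather than the 0.1 years a uniform-funding model implies, which is the direction the argument above predicts; the magnitude, of course, depends entirely on the assumed funding-rate ratio.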

I agree that this is possible. I'd say the way to go is generating a few possible development paths (paired $/time and progress/$ curves) based on historical tech's development and domain-experts' prognostications, and then looking at marginal effects for each path.

Not having looked into this more, it seems doable but not-straightforward. Note that the Impossible Burger isn't a great model for full-on synthetic meat. Their burgers are mostly plant-based, and they use yeast to synthesize hemoglobin, a single protein--something that's very much within the purview of existing biotech. This contrasts with New Harvest and Memphis Meats' efforts synthesizing muscle fibers to make ground beef, to say nothing of the eventual goal of synthesizing large-scale muscle structure to replicate steak, etc.

And we have a lot less to go on there. Mark Post at Maastricht University made a $325,000 burger in 2013. Memphis Meats claimed to be making meat at $40,000/kg in 2016.* Mark Post also claims scaling up his current methods could get to ~$80/kg (~$10/burger) in a few years. That's still about an order of magnitude off from the mainstream, and I think you'd need someone unbiased with domain expertise to give you a better sense of how much tougher that would be.

*Note- according to Sentience Politics' report on vat meat. I haven't listened to the interview yet.
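To make that "order of magnitude" gap concrete, here is some rough arithmetic on the figures above. The patty weight (~0.113 kg, a quarter-pound, which makes the ~$80/kg projection line up with the "~$10/burger" figure) and the ~$8/kg mainstream ground-beef price are my assumptions, not from the cited sources:

```python
# Rough arithmetic on the cultured-meat cost figures cited above.
# Assumptions: a patty is ~0.113 kg, and mainstream ground beef
# runs ~$8/kg; the 2013/2016 figures are from the comment itself.
kg_per_burger = 0.113
cost_2013 = 325_000 / kg_per_burger   # $/kg, Mark Post's 2013 burger
cost_2016 = 40_000                    # $/kg, Memphis Meats' 2016 claim
cost_goal = 80                        # $/kg, Post's near-term projection
cost_mainstream = 8                   # $/kg, rough ground beef price (assumption)

# Implied annual cost multiplier over 2013-2016
rate = (cost_2016 / cost_2013) ** (1 / 3)
print(f"2013 cost: ${cost_2013:,.0f}/kg")
print(f"annual cost multiplier 2013-2016: {rate:.2f}")
print(f"remaining gap to mainstream: {cost_goal / cost_mainstream:.0f}x")
```

On these numbers the 2013-2016 claims imply costs falling to roughly a quarter of their previous level each year, while the projected $80/kg still sits a factor of ten above mainstream beef, matching the paragraph above.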

Comment by ashwinacharya on How much does work in AI safety help the world? Probability distribution version (Oxford Prioritisation Project) · 2017-05-04T21:17:36.754Z · score: 4 (4 votes) · EA · GW

This is interesting. I'm strongly in favor of having rough models like this in general. Thanks for sharing!

Edit suggestions:

  • STI says "what percent of bad scenarios should we expect this to avert", but the formula uses it as a fraction. Probably best to keep the formula and change the wording.

  • Would help to clarify that TXR is a probability of X-risk. (This is clear after a little thought/inspection, but might as well make it as easy to use as possible.)

Quick thoughts:

  • It might be helpful to talk in terms of research-years rather than researchers.

  • It's slightly strange that the model assumes 1 − P(x-risk) is linear in researchers, but then only estimates the coefficient from TXR × STI / (2 × SOT), when (1 − TXR)/SOT should also be an estimate. It does make sense that risk would be "more nonlinear" for lower n_researchers, though.
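The inconsistency in that second point can be sketched with hypothetical values for TXR, STI, and SOT (these are illustrative placeholders, not the post's actual estimates):

```python
# Comparing two slope estimates implied by a model in which
# survival probability is linear in the number of safety researchers.
# Hypothetical inputs (not the post's actual figures):
TXR = 0.10   # probability of AI x-risk absent further work
STI = 0.50   # fraction of bad scenarios safety work is expected to avert
SOT = 500.0  # researchers (or researcher-years) in the field so far

# Marginal slope the model uses: expected risk reduction per
# additional researcher, with the 1/2 factor from the formula above.
slope_marginal = TXR * STI / (2 * SOT)

# Strict linearity would also let us estimate the slope from the
# survival probability already "purchased": (1 - TXR) / SOT.
slope_average = (1 - TXR) / SOT

print(slope_marginal, slope_average)
```

With any plausible inputs the two slopes differ by orders of magnitude, which is consistent with risk being "more nonlinear" at low researcher counts: early researchers bought much more safety per head than the marginal one does.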

Comment by ashwinacharya on Transgenic mosquitoes, update Effective Altruism Policy Analytics · 2016-08-10T23:26:35.128Z · score: 4 (4 votes) · EA · GW

Thanks for sharing! This seems like good news, and I'm glad they're looking at safety issues along so many different axes.

However, I'm a bit confused as to what interventions like this are meant/expected to accomplish. It seems like the long-term result of this kind of intervention would be a recovery of the mosquito population as the modified mosqs' descendants got outcompeted by mosquitos without the genes.

Is the idea that mosquito populations are small enough (relative to the number of modified ones introduced) that they might be eradicated entirely? Or is the goal to lower populations temporarily during a high-disease-risk period, or to end up in an evolutionary equilibrium with fewer A. aegypti (e.g. if other mosquito species that carry fewer diseases can move in on their niche while the population is low)?

Comment by ashwinacharya on You have a set amount of "weirdness points". Spend them wisely. · 2016-07-05T18:54:05.301Z · score: 1 (3 votes) · EA · GW

Hey! Just happened upon this article while searching for something else. Hope the necro isn't minded.

I wanted to point out that since this article was written--and especially in the last year--basic income at least has become a lot more mainstream. There's the (failed) Swiss referendum, and apparently Finland and YCombinator are both running basic income trials as well. (More locally, there's of course the GiveDirectly UBI trial as well.)

Anecdotally, it seems like these events have also been accompanied by many more people (in my particular left-leaning bubble of family and friends) being familiar with the idea. Empirically, see the link below for a graph of [number of articles mentioning basic income] per year in the New York Times. Don't know about its reception/current weirdness outside of that bubble. EDIT: Oh, in an April survey "A majority of Europeans (58%) reported to have at least some familiarity with the concept of basic income, and 64% of Europeans said they would vote in favour of basic income." Not sure about the US outside of the bubble, then. And the EU might be different post-Brexit?

Obviously it's debatable how well we could have foreseen this, but it might be worth thinking about a) to what degree we can predict(/affect) which "weird" idea will gain traction and b) to what extent (the possibility of) this sort of rapid increase in acceptability allows for some relaxation of the "weirdness points" framework. Ideally, we'd be able to talk about many things, end up with some of them succeeding, and go "see, you thought we were weird about UBI and animal suffering and X-risk, and now you agree we're not weird about UBI, so maybe take a second look at the rest?" I don't think that works well in practice, but maybe pushing stuff with more potentially broad appeal should be considered to "cost" fewer weirdness points in expectation?

NYT link. Note, too, the basic income "bubble" in the ~'70s.

Results from that April EU survey summarized here: