How much do current cultured animal products cost? 2019-07-04T16:04:08.771Z · score: 20 (10 votes)
What’s the Use In Physics? 2018-12-30T03:10:03.063Z · score: 43 (26 votes)


Comment by tetraspace-grouping on Sperm sorting in cattle · 2019-07-15T12:40:26.070Z · score: 15 (7 votes) · EA · GW

The total number of cows probably stays about the same, because if farmers had space to raise more cows they would have already done so; I don't think that availability of semen is the main limiting factor. So the amount of suffering averted by this intervention can be found by comparing the suffering per cow per year in either case.

Model a cow as having two kinds of experiences: normal farm life, where it experiences some amount of suffering x in a year, and slaughter, where it experiences some amount of suffering y all at once.

Suppose a female cow faces a 1/10 chance of slaughter in a given year and a male cow a 1/2 chance. In equilibrium, the population of cows is 5/6 female and 1/6 male. A female cow can, in the next year, expect to suffer an amount (x + y/10), and a male cow an amount (x + y/2). So a randomly chosen cow suffers (5/6)(x + y/10) + (1/6)(x + y/2) = (x + y/6).

If male cows are no longer created, this changes to just the amount for female cows, (x+y/10).

So the first-order effect of the intervention is to reduce the suffering per cow per year by the difference between these two, y/15; i.e. averting an amount of pain equal to 1/15 of that of being slaughtered per cow per year.
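The model above can be sketched in a few lines of Python (the values of x and y are illustrative placeholders, not figures from the comment):

```python
# x = suffering per cow-year of normal farm life, y = suffering of one
# slaughter; both are illustrative placeholder values.
x, y = 1.0, 5.0

# Expected suffering over the next year, by sex: females face a 1/10
# chance of slaughter per year, males a 1/2 chance.
female = x + y / 10
male = x + y / 2

# Equilibrium herd: 5/6 female, 1/6 male.
baseline = (5 / 6) * female + (1 / 6) * male  # = x + y/6
sexed = female                                # all-female herd: x + y/10

reduction = baseline - sexed                  # = y/15
```

For any values of x and y, `reduction` comes out to y/15 (up to floating-point rounding), matching the first-order estimate above.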

Comment by tetraspace-grouping on Sperm sorting in cattle · 2019-07-15T11:57:04.716Z · score: 0 (0 votes) · EA · GW

Since there’s only a limited amount of space (or other limiting factor) in which to raise cattle, the total number at any one time would stay about the same before and after this. So the overall effects would be to replace five sequential male cow lives by one female cow life.

Since death is probably painful, if their lives are similar in quality (besides slaughter) then having a third as many deaths happen per unit time seems like an improvement.

Effectively, where x is the suffering felt in one year of life and y is the suffering felt during a slaughter, this changes the suffering per cow over 5 years from 5x+3y (five years of normal life, with 2.5 male slaughters and 0.5 female slaughters on average) to 5x+1y (five years of normal life and 1 female slaughter), for a reduction of 2y per cow per five years.
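As a quick check on that arithmetic (again with placeholder values for x and y):

```python
x, y = 1.0, 5.0  # placeholders: suffering per cow-year, suffering per slaughter

# Before: over 5 cow-years, 2.5 male + 0.5 female slaughters on average.
before = 5 * x + (2.5 + 0.5) * y  # 5x + 3y
# After: over 5 cow-years, 1 female slaughter.
after = 5 * x + 1 * y             # 5x + y

saved = before - after            # = 2y per cow per five years
```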

Comment by tetraspace-grouping on If physics is many-worlds, does ethics matter? · 2019-07-10T18:01:15.087Z · score: 4 (4 votes) · EA · GW

If you want to make a decision, you will probably agree with me that it's more likely that you'll end up making that decision, or at least that it's possible to alter the likelihood that you'll make a certain decision by thinking (otherwise your question would be better stated as "if physics is deterministic, does ethics matter?"). And, under many-worlds, if something is more likely to happen, then there will be more worlds where it happens, and more observers who see it happen (I think this is usually how it's posed, anyway). So while there'll always be some worlds where you're not altruistic, no matter what you do, you can change how many worlds are like that.

Comment by tetraspace-grouping on Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? · 2019-07-10T14:25:01.242Z · score: 3 (2 votes) · EA · GW

When I have a question about the future, I like to ask it on Metaculus. Do you have any operationalisations of synthetic biology milestones that would be useful to ask there?

Comment by tetraspace-grouping on Get-Out-Of-Hell-Free Necklace · 2019-07-10T01:47:34.160Z · score: 3 (3 votes) · EA · GW

Tangential to the main point, but what is agmatine, and how would it help someone who suspects they've been brainwashed?

Comment by tetraspace-grouping on How much do current cultured animal products cost? · 2019-07-06T14:21:21.582Z · score: 3 (3 votes) · EA · GW

This 2019 article has some costs listed:

  • Fish: "it costs Finless slightly less than $4,000 to make a pound of tuna"
  • Beef: "Aleph said it had gotten the cost down to $100 per lb."
  • Beef(?): "industry insiders say American companies are getting the cost to $50 per lb."

Comment by tetraspace-grouping on Should we talk about altruism or talk about justice? · 2019-07-06T14:03:49.528Z · score: 7 (4 votes) · EA · GW

GiveWell did an intervention report on maternal mortality 10 years ago, and at the time concluded that the evidence was less compelling than for their top charities (though they note that the report is now probably out of date).

Comment by tetraspace-grouping on New study in Science implies that tree planting is the cheapest climate change solution · 2019-07-05T23:50:10.137Z · score: 15 (9 votes) · EA · GW

The amount of carbon that they say could be captured by restoring these trees is 205 GtC, which for $300bn to restore comes to ~40¢/ton of CO2. Founders Pledge estimates that, on the margin, the Coalition for Rainforest Nations averts a ton of CO2e for 12¢ (range: a factor of 6) and the Clean Air Task Force averts a ton of CO2e for 100¢ (range: an order of magnitude). So those numbers do check out.
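The tree-restoration figure works out as follows (a sketch; 44/12 is the standard mass ratio for converting tonnes of carbon to tonnes of CO2):

```python
cost_usd = 300e9                      # $300bn restoration cost
carbon_tonnes = 205e9                 # 205 GtC of carbon captured
co2_tonnes = carbon_tonnes * 44 / 12  # equivalent tonnes of CO2

cost_per_tonne = cost_usd / co2_tonnes
print(round(cost_per_tonne, 2))  # 0.4 dollars, i.e. ~40¢/ton of CO2
```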

Comment by tetraspace-grouping on The Case for Superintelligence Safety As A Cause: A Non-Technical Summary · 2019-05-21T23:27:51.787Z · score: 9 (8 votes) · EA · GW

You can't just ask the AI to "be good", because the whole problem is getting the AI to do what you mean instead of what you ask. But what if you asked the AI to "make itself smart"? On the one hand, instrumental convergence implies that the AI should make itself smart. On the other hand, the AI will misunderstand what you mean, hence not making itself smart. Can you point the way out of this seeming contradiction?

(Under the background assumptions already being made in the scenario where you can "ask things" to "the AI":) If you try to tell the AI to be smart, but fail and instead give it some other goal (let's call it being smart'), then in the process of becoming smart' it will also try to become smart, because no matter what smart' actually specifies, becoming smart will still be helpful for that. But if you want it to be good and mistakenly tell it to be good', it's unlikely that being good will be helpful for being good'.

Comment by tetraspace-grouping on Two AI Safety events at EA Hotel in August · 2019-05-21T20:12:59.884Z · score: 9 (6 votes) · EA · GW

The signup form for the Learning-by-doing AI Safety workshop currently links to the edit page for the form on Google Docs, rather than the page where one actually fills out the form; the link should be this one (and the form should probably not be publicly editable).

Comment by tetraspace-grouping on How Flying Cars Will Solve Global Poverty · 2019-04-02T23:40:55.447Z · score: 4 (3 votes) · EA · GW

The Terra Ignota series takes place in a world where global poverty has been solved by flying cars, so this is definitely well-supported by fictional evidence (from which we should generalise).

Comment by tetraspace-grouping on quant model for ai safety donations? · 2019-01-03T14:14:11.383Z · score: 1 (1 votes) · EA · GW

In MIRI's fundraiser they released their 2019 budget estimate, which spends about half on research personnel. I'm not sure how this compares to similar organizations.

Comment by tetraspace-grouping on quant model for ai safety donations? · 2019-01-03T00:58:30.282Z · score: 3 (3 votes) · EA · GW

The cost per researcher is typically larger than what they get paid, since it also includes overhead (administration costs, office space, etc).

Comment by tetraspace-grouping on quant model for ai safety donations? · 2019-01-02T21:55:14.671Z · score: 6 (3 votes) · EA · GW

One can convert the utility-per-researcher into utility-per-dollar by dividing everything by a cost per researcher. So if before you would have 1e-6 x-risk reduction per researcher, and you also decide to value researchers at $1M/researcher, then your evaluation in terms of cost is 1e-12 x-risk per dollar.
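A minimal sketch of that conversion, using the illustrative numbers from the paragraph above:

```python
xrisk_per_researcher = 1e-6   # x-risk reduction per researcher (example value)
dollars_per_researcher = 1e6  # $1M valuation of a researcher (example value)

xrisk_per_dollar = xrisk_per_researcher / dollars_per_researcher
print(xrisk_per_dollar)  # 1e-12 x-risk reduction per dollar
```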

For some values (i.e. fake numbers, but still acceptable for comparing cause areas' orders of magnitude) that I've seen used: the Oxford Prioritisation Project uses $1.8M (lognormal distribution between $1M and $3M) for a MIRI researcher over their career, 80,000 Hours implicitly uses ~$100,000/year/worker in their yardsticks comparing cause areas, and Effective Altruism orgs in the 2018 talent survey claim to value their junior hires at $450k and senior hires at $3M on average (over three years).

Comment by tetraspace-grouping on Higher and more equal: a case for optimism · 2018-12-31T03:01:26.542Z · score: 2 (2 votes) · EA · GW

I love that “one person out of extreme poverty per second” statistic! It’s much easier to picture in my head than a group of 1,000 million people, since a second is something I’m familiar with seeing every day.

Comment by tetraspace-grouping on Long-Term Future Fund AMA · 2018-12-20T12:01:32.725Z · score: 12 (8 votes) · EA · GW

Are there any organisations you investigated and found promising, but concluded that they didn't have much room for extra funding?