How much do cultured animal products cost in 2020? 2020-06-30T16:10:13.743Z
Tetraspace Grouping's Shortform 2019-08-17T12:33:33.049Z
How much do current cultured animal products cost? 2019-07-04T16:04:08.771Z
What’s the Use In Physics? 2018-12-30T03:10:03.063Z


Comment by tetraspace-grouping on My upcoming CEEALAR stay · 2020-12-14T23:55:45.989Z · EA · GW

I also completed Software Foundations Volume 1 last year, and have been kind of meaning to do the rest of the volumes but other things keep coming up. I'm working full-time so it might be beyond my time/energy constraints to keep a reasonable pace, but would you be interested in any kind of accountability buddy / sharing notes / etc. kind of thing?

Comment by tetraspace-grouping on Prize: Interesting Examples of Evaluations · 2020-12-06T19:55:40.351Z · EA · GW

Simple linear models, including improper ones(!!). In Chapter 21 of Thinking, Fast and Slow, Kahneman writes about Meehl's book Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review, which finds that simple algorithms, made by picking some factors related to the final judgement and weighting them, give you surprisingly good results.

The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between humans and algorithms has not changed. About 60% of the studies have shown significantly better accuracy for the algorithms. The other comparisons scored a draw in accuracy [...]

If they are weighted optimally to predict the training set, they're called proper linear models, and otherwise they're called improper linear models. Kahneman says about Dawes' The Robust Beauty of Improper Linear Models in Decision Making that

A formula that combines these predictors with equal weights is likely to be just as accurate in predicting new cases as the multiple-regression formula that was optimal in the original sample. More recent research went further: formulas that assign equal weights to all the predictors are often superior, because they are not affected by accidents of sampling.

That is to say: to evaluate something, you can get very far just by coming up with a set of criteria that positively correlate with the overall result and with each other and then literally just adding them together.
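A toy sketch of the Dawes result, on synthetic data (the setup and numbers here are illustrative, not from any of the studies cited): fit a proper linear model by least squares on a small training sample, and compare it out-of-sample against the improper model that literally just adds the standardised predictors together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: an outcome driven by three standardised predictors.
n_train, n_test = 40, 1000
true_w = np.array([0.5, 0.3, 0.2])

def make_data(n):
    X = rng.standard_normal((n, 3))
    y = X @ true_w + rng.standard_normal(n)  # noisy outcome
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

# Proper linear model: weights fit by least squares on the training sample.
w_fit, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# Improper linear model: literally just add the predictors together.
w_equal = np.ones(3)

for name, w in [("fitted", w_fit), ("equal", w_equal)]:
    r = np.corrcoef(X_te @ w, y_te)[0, 1]
    print(f"{name} weights: out-of-sample correlation {r:.3f}")
```

With a small training sample, the equal-weight model typically comes out close to (sometimes ahead of) the fitted one, because it can't overfit sampling accidents.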

Comment by tetraspace-grouping on AMA: Rob Mather, founder and CEO of the Against Malaria Foundation · 2020-01-24T00:31:22.300Z · EA · GW

How has the landscape of malaria prevention changed since you started? Especially since AMF alone has bought on the order of 100 million nets, which seems not insignificant compared to the total scale of the entire problem.

Comment by tetraspace-grouping on Long-Term Future Fund: November 2019 short grant writeups · 2020-01-05T14:14:37.208Z · EA · GW

In the list at the top, Sam Hilton's grant summary is "Writing EA-themed fiction that addresses X-risk topics", rather than being about the APPG for Future Generations.

Miranda Dixon-Luinenburg's grant is listed as being $23,000, when lower down it's listed as $20,000 (the former is the amount consistent with the total being $471k).

Comment by tetraspace-grouping on Conversation on AI risk with Adam Gleave · 2019-12-31T01:35:56.381Z · EA · GW

Christiano operationalises a slow takeoff as

There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.

in Takeoff speeds, and a fast takeoff as one where there isn't a complete 4 year interval before the first 1 year interval.

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2019-12-24T00:33:10.030Z · EA · GW

The Double Up Drive, an EA donation matching campaign (highly recommended), has, in one group of charities that it's matching donations to:

  • StrongMinds
  • International Refugee Assistance Project
  • Massachusetts Bail Fund

StrongMinds is quite prominent in EA as the mental health charity; most recently, Founders Pledge recommends it in their report on mental health.

The International Refugee Assistance Project (IRAP) works in immigration reform, and is a recipient of grants from Open Philanthropy as well as being recommended for individual donors by an Open Phil member of staff.

The Massachusetts Bail Fund, on the other hand, seems less centrally EA-recommended. It is working in the area of criminal justice reform, and posting bail is an effective-seeming intervention that I do like, but I haven't seen any analysis of its effectiveness or strong hints of non-public trust placed in it by informed donors (e.g. it has not received any OpenPhil grants; though note that it is listed in the Double Up Drive and the 2017 REG Matching Challenge).

I'd like to know more about the latter two from an EA perspective because they're both working on fairly shiny and high-status issues, which means that it would be quite easy for me to get my college's SU to make a large grant to them from the charity fund.

Is there any other EA-aligned information on this charity (and also on IRAP and StrongMinds, since the more the merrier)?

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2019-12-11T16:46:34.325Z · EA · GW

The sum of the grants made by the Long Term Future fund in August 2019 is $415,697. Listed below these grants is the "total distributed" figure $439,197, and listed above these grants is the "payout amount" figure $445,697. Huh?

Comment by tetraspace-grouping on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-12T23:41:51.087Z · EA · GW

Two people mentioned the CEA not being very effective as an unpopular opinion they hold; has any good recent criticism of the CEA been published?

Comment by tetraspace-grouping on Logarithmic Scales of Pleasure and Pain: Rating, Ranking, and Comparing Peak Experiences Suggest the Existence of Long Tails for Bliss and Suffering · 2019-08-29T01:16:10.119Z · EA · GW

You mention the Jhanas and metta meditation as both being immensely pleasurable experiences. Since these come from meditation, they seem like they might be possible for people to do "at home" at very little risk (save for the opportunity costs from the time investment). Do you have any thoughts on encouraging meditation aimed towards achieving these highly pleasurable states specifically as a cause area and/or something we should be doing personally?

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2019-08-22T14:10:18.187Z · EA · GW

In a building somewhere, tucked away in a forgotten corner, there are four clocks. Each is marked with a symbol: the first with a paperclip, the second with a double helix, the third with a trefoil, and the fourth with a stormcloud.

As you might expect from genre convention, these are not ordinary clocks. In fact, they started ticking when the first human was born, and when they strike midnight, a catastrophe occurs. The type depends on the clock, but what is always true is that the disaster kills at least one person in ten.

The times currently remaining on the clocks are:

  • AI Clock: 3:00 to midnight
  • Biotech Clock: 3:50 to midnight
  • Nuclear Clock: 4:30 to midnight
  • Climate Clock: 3:10 to midnight

Since there are many clocks, ticking somewhat randomly, they can be combined to estimate how long until at least one strikes midnight: 40 seconds of humanity.


These numbers were calculated using the Metaculus community median predictions of the probability of 10% of people dying from each of the causes from the Ragnarök question series.

I took those values as a constant probability of extinction over a period of 81 years (sort of like what I brought up in my previous shortform post), and calculated the mean time until catastrophe given this.

I mapped 350,000 years (the duration for which anatomically modern humans have existed according to Wikipedia) to 24 hours.


It is of course possible for human activity to push on the hands of these clocks, just as the clocks can influence humanity. An additional person working full time on those activities that would wind back the clocks could expect to delay them by this amount:

  • AI Clock: 20,000 microseconds
  • Biotech Clock: 200 microseconds
  • Nuclear Clock: 30 microseconds
  • Climate Clock: 20 microseconds


And these were calculated even more tenuously, by taking 80,000 Hours' order-of-magnitude guesses at how much of the problem an additional full-time worker would solve completely literally, and then finding the corresponding difference in clock time.
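The pipeline above can be sketched as follows. The probabilities here are placeholders standing in for the Metaculus community medians (which I don't have), so the printed clock times won't exactly match the ones quoted; only the method is the point.

```python
import math

SECONDS_PER_DAY = 24 * 60 * 60
YEARS_ON_CLOCK = 350_000  # anatomically modern humans, mapped to 24 hours
HORIZON = 81              # years covered by the Ragnarök questions

# Placeholder probabilities of each catastrophe within the horizon
# (stand-ins for the Metaculus community medians, not the real values).
p_catastrophe = {"AI": 0.10, "Biotech": 0.08, "Nuclear": 0.07, "Climate": 0.10}

def seconds_to_midnight(p):
    """Constant-hazard model: P(catastrophe within HORIZON years) = p."""
    hazard = -math.log(1 - p) / HORIZON  # per-year hazard rate
    mean_years = 1 / hazard              # mean time until catastrophe
    return mean_years / YEARS_ON_CLOCK * SECONDS_PER_DAY

for name, p in p_catastrophe.items():
    print(f"{name} clock: {seconds_to_midnight(p):.0f} seconds to midnight")

# Independent clocks combine by summing their hazard rates.
total_hazard = sum(-math.log(1 - p) / HORIZON for p in p_catastrophe.values())
print(f"Combined: {1 / total_hazard / YEARS_ON_CLOCK * SECONDS_PER_DAY:.0f} s")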

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2019-08-22T00:16:45.218Z · EA · GW


Comment by Tetraspace Grouping on [deleted post] 2019-08-19T16:00:08.249Z

The division-by-zero type error is that EV(preventing holocaust|universe is infinite) would be calculated as ∞-∞, which in the extended reals is undefined rather than zero. If it were zero, then you could prove 0 = ∞-∞ = (∞+1)-∞ = (∞-∞)+1 = 1.
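The same behaviour shows up in IEEE-754 floating point, where the indeterminate form comes out as NaN rather than zero:

```python
import math

inf = math.inf
print(inf - inf)              # nan: the form is indeterminate, not zero
print((inf + 1) - inf)        # also nan, consistent with the proof above
print(math.isnan(inf - inf))  # True
```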

Comment by tetraspace-grouping on Ask Me Anything! · 2019-08-18T21:29:45.959Z · EA · GW

This reminds me of the most important AMA question of all:

MacAskill, would you rather fight 1 horse-sized chicken, or 100 chicken-sized horses?

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2019-08-18T21:23:32.568Z · EA · GW

One way that x-risk outreach is done outside of EA is by evoking the image of some sort of countdown to doom. There are 12 years until climate catastrophe. There are two minutes on the Doomsday clock, etc.

However, in reality, instead of doomsday being some fixed point in time on the horizon that we know about, all the best-calibrated experts have is a probability distribution smeared over a wide range of times, with much of its mass sitting on "never", so that simply taking the median time doesn't work.

And yet! The doomsday clock, so evocative! And I would like to make a bot that counts down on Twitter, I would like to post vivid headlines to really get the blood flowing. (The Twitter bot question is in fact what prompted me to start thinking about this.)

Some thoughts on ways to do this in an almost-honest way:

  • Find the instantaneous probability, today. Convert this to a timescale until disaster. If there is a 0.1% chance of a nuclear war this year, then this is sort of like there being 1,000 years until doom. Adjust the clock with the probability each year. Drawback is that this both understates and overstates the urgency: there’s a good chance disaster will never happen once the acute period is over, but if it does happen it will be much sooner than 1,000 years. This is what the Doomsday clock seems to want to do, though I think it's just a political signalling tool for the most part.
  • Make a conditional clock. If an AI catastrophe happens in the next century (11% chance), it will on average happen in 2056 (50% CI: 2040 - 2069), so have the clock tick down until that date. Display both the probability and the timer prominently, of course, as to not mislead. Drawback is that this is far too complicated and real clocks don’t only exist with 1/10 probability. This is what I would do if I was in charge of the Bulletin of the Atomic Scientists.
  • Make a countdown instead to the predicted date of an evocative milestone strongly associated with acute risk, like the attainment of human level AI or the first time a superbug is engineered in a biotech lab. Drawback is that this will be interpreted as a countdown until doomsday approximately two reblogs in (one if I'm careless in phrasing), and everyone will laugh at me when the date passes and the end of the world has not yet happened. This is the thing everyone is ascribing to AOC on Twitter.

Comment by tetraspace-grouping on Ask Me Anything! · 2019-08-17T14:34:47.583Z · EA · GW

Will there be anything in the book new for people already on board with longtermism?

Comment by tetraspace-grouping on Tetraspace Grouping's Shortform · 2019-08-17T12:33:33.177Z · EA · GW

In 2017, 80k estimated that $10M of extra funding could solve 1% of AI xrisk (todo: see if I can find a better stock estimate for the back of my envelope than this). Taking these numbers literally, this means that anyone who wants to buy AI offsets should, today, pay $1G*(their share of the responsibility).

There are 20,000 AI researchers in the world, so if they're taken as being solely responsible for the totality of AI xrisk, the appropriate Pigouvian AI-offset tax is $50,000 per researcher hired per year ($1G divided among 20,000 researchers). This is large but not overwhelmingly so.

Additional funding towards AI safety will probably go to hiring safety researchers for $100,000 per year each, so continuing to take these cost effectiveness estimates literally, to zeroth order another way of offsetting is to hire one safety researcher for every two capabilities researchers.
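The back-of-envelope chain above, taking the 80k-style numbers completely literally:

```python
# Back-of-envelope, taking the cost-effectiveness numbers literally.
marginal_cost = 10e6      # $10M of funding...
fraction_solved = 0.01    # ...solves 1% of AI x-risk
n_researchers = 20_000    # AI researchers worldwide
safety_salary = 100_000   # $/year per safety researcher

total_offset = marginal_cost / fraction_solved          # $1G to offset all of it
per_researcher = total_offset / n_researchers           # yearly tax per researcher
safety_per_capability = per_researcher / safety_salary  # safety hires per capabilities hire

print(f"Total: ${total_offset:,.0f}")
print(f"Per researcher: ${per_researcher:,.0f}/year")
print(f"Safety researchers per capabilities researcher: {safety_per_capability}")
```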

Comment by tetraspace-grouping on What posts you are planning on writing? · 2019-07-24T19:24:34.181Z · EA · GW

"How targeted should donation recommendations be" (sorta)

I've noticed that Givewell targets specific programs (e.g. their recommendation), ACE targets whole organisations, and among far future charities you just kinda get promising-sounding cause areas.

I'm interested in what kind of differences between cause areas lead to this, and also whether anything can be done to make more fine-grained evaluations more desirable in practice.

Comment by tetraspace-grouping on Sperm sorting in cattle · 2019-07-15T12:40:26.070Z · EA · GW

The total number of cows probably stays about the same, because if they had space to raise more cows they would have just done that - I don't think that availability of semen is the main limiting factor. So the amount of suffering averted by this intervention can be found by comparing the suffering per cow per year in either case.

Model a cow as having two kinds of experiences: normal farm life where it experiences some amount of suffering x in a year, and slaughter where it experiences some amount of suffering y all at once.

In equilibrium, the population of cows is 5/6 female and 1/6 male. A female cow can, in the next year, expect to suffer an amount (x+y/10), and a male cow can expect to suffer an amount (x+y/2). So a randomly chosen cow suffers (x+y/6).

If male cows are no longer created, this changes to just the amount for female cows, (x+y/10).

So the first-order effect of the intervention is to reduce the suffering per cow per year by the difference between these two, y/15; i.e. averting an amount of pain equal to 1/15 of that of being slaughtered per cow per year.
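A quick check of the algebra above with exact fractions (x and y set to 1 for concreteness; since everything is linear, the y/15 coefficient doesn't depend on the values chosen):

```python
from fractions import Fraction as F

# Suffering per year: x for normal farm life, y per slaughter.
x, y = F(1), F(1)

p_female, p_male = F(5, 6), F(1, 6)           # equilibrium sex ratio
slaughter_f, slaughter_m = F(1, 10), F(1, 2)  # yearly slaughter probability

# Expected suffering per randomly chosen cow, before and after sorting.
before = p_female * (x + slaughter_f * y) + p_male * (x + slaughter_m * y)
after = x + slaughter_f * y                   # females only

print(before)          # x + y/6
print(before - after)  # y/15
```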

Comment by tetraspace-grouping on Sperm sorting in cattle · 2019-07-15T11:57:04.716Z · EA · GW

Since there’s only a limited amount of space (or other limiting factor) in which to raise cattle, the total number at any one time would stay about the same before and after this. So the overall effects would be to replace five sequential male cow lives by one female cow life.

Since death is probably painful, if their lives are similar in quality (besides slaughter) then having a third as many deaths happen per unit time seems like an improvement.

Effectively, where x is the suffering felt in one year of life and y is the suffering felt during a slaughter, this changes the suffering per cow over 5 years from 5x+3y (five years of normal life, with 2.5 male slaughters and 0.5 female slaughters on average) to 5x+1y (five years of normal life and 1 female slaughter), for a reduction of 2y per cow per five years.

Comment by tetraspace-grouping on If physics is many-worlds, does ethics matter? · 2019-07-10T18:01:15.087Z · EA · GW

If you want to make a decision, you will probably agree with me that it's more likely that you'll end up making that decision, or at least that it's possible to alter the likelihood that you'll make a certain decision by thinking (otherwise your question would be better stated as "if physics is deterministic, does ethics matter"). And, under many-worlds, if something is more likely to happen, then there will be more worlds where it happens, and more observers that see it happen (I think this is how it's usually posed, anyway). So while there'll always be some worlds where you're not altruistic, no matter what you do, you can change how many worlds are like that.

Comment by tetraspace-grouping on Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? · 2019-07-10T14:25:01.242Z · EA · GW

When I have a question about the future, I like to ask it on Metaculus. Do you have any operationalisations of synthetic biology milestones that would be useful to ask there?

Comment by tetraspace-grouping on Get-Out-Of-Hell-Free Necklace · 2019-07-10T01:47:34.160Z · EA · GW

What is agmatine, and how would it help someone who suspects they've been brainwashed?

Comment by tetraspace-grouping on How much do current cultured animal products cost? · 2019-07-06T14:21:21.582Z · EA · GW

This 2019 article has some costs listed:

  • Fish: "it costs Finless slightly less than $4,000 to make a pound of tuna"
  • Beef: "Aleph said it had gotten the cost down to $100 per lb."
  • Beef(?): "industry insiders say American companies are getting the cost to $50 per lb."

Comment by tetraspace-grouping on Should we talk about altruism or talk about justice? · 2019-07-06T14:03:49.528Z · EA · GW

GiveWell did an intervention report on maternal mortality 10 years ago, and at the time concluded that the evidence is less compelling than for their top charities (though they say that it is now probably out of date).

Comment by tetraspace-grouping on New study in Science implies that tree planting is the cheapest climate change solution · 2019-07-05T23:50:10.137Z · EA · GW

The amount of carbon that they say could be captured by restoring these trees is 205 GtC, which for $300bn to restore comes to ~40¢/ton of CO2. Founders Pledge estimates that, on the margin, Coalition for Rainforest Nations averts a ton of CO2e for 12¢ (range: factor of 6) and the Clean Air Task Force averts a ton of CO2e for 100¢ (range: order of magnitude). So those numbers do check out.
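The conversion behind that figure, assuming the 205 GtC is elemental carbon and converting to CO2 by molecular weight:

```python
cost = 300e9        # dollars to restore the trees
carbon = 205e9      # tonnes of carbon captured (205 GtC)
c_to_co2 = 44 / 12  # molecular-weight ratio, tonnes C -> tonnes CO2

cents_per_ton_co2 = cost / (carbon * c_to_co2) * 100
print(f"~{cents_per_ton_co2:.0f} cents per ton of CO2")
```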

Comment by tetraspace-grouping on The Case for Superintelligence Safety As A Cause: A Non-Technical Summary · 2019-05-21T23:27:51.787Z · EA · GW

You can't just ask the AI to "be good", because the whole problem is getting the AI to do what you mean instead of what you ask. But what if you asked the AI to "make itself smart"? On the one hand, instrumental convergence implies that the AI should make itself smart. On the other hand, the AI will misunderstand what you mean, hence not making itself smart. Can you point the way out of this seeming contradiction?

(Under the background assumptions already being made in the scenario where you can "ask things" to "the AI":) If you try to tell the AI to be smart, but fail and instead give it some other goal (let's call it being smart'), then in the process of becoming smart' it will also try to become smart, because no matter what smart' actually specifies, becoming smart will still be helpful for that. But if you want it to be good and mistakenly tell it to be good', it's unlikely that being good will be helpful for being good'.

Comment by tetraspace-grouping on Two AI Safety events at EA Hotel in August · 2019-05-21T20:12:59.884Z · EA · GW

The signup form for the Learning-by-doing AI Safety workshop currently links to the edit page for the form on google docs, rather than the page where one actually fills out the form; the link should be this one (and the form should probably not be publicly editable).

Comment by tetraspace-grouping on New Top EA Cause: Flying Cars · 2019-04-02T23:40:55.447Z · EA · GW

The Terra Ignota series takes place in a world where global poverty has been solved by flying cars, so this is definitely well-supported by fictional evidence (from which we should generalise).

Comment by tetraspace-grouping on quant model for ai safety donations? · 2019-01-03T14:14:11.383Z · EA · GW

In MIRI's fundraiser they released their 2019 budget estimate, which spends about half on research personnel. I'm not sure how this compares to similar organizations.

Comment by tetraspace-grouping on quant model for ai safety donations? · 2019-01-03T00:58:30.282Z · EA · GW

The cost per researcher is typically larger than what they get paid, since it also includes overhead (administration costs, office space, etc).

Comment by tetraspace-grouping on quant model for ai safety donations? · 2019-01-02T21:55:14.671Z · EA · GW

One can convert the utility-per-researcher into utility-per-dollar by dividing everything by a cost per researcher. So if before you would have 1e-6 x-risk reduction per researcher, and you also decide to value researchers at $1M/researcher, then your evaluation in terms of cost is 1e-12 x-risk per dollar.

For some values (i.e. fake numbers but still acceptable for comparing orders-of-magnitude of cause areas) that I've seen used: The Oxford Prioritisation Project uses $1.8 million (lognormal distribution between $1M and $3M) for a MIRI researcher over their career, 80,000 Hours implicitly uses ~$100,000/year/worker in their yardsticks comparing cause areas, and Effective Altruism orgs in the 2018 talent survey claim to value their junior hires at $450k and senior hires at $3M on average (over three years).
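The conversion in the first paragraph as a one-liner (numbers are the illustrative ones from the comment, not real estimates):

```python
def xrisk_per_dollar(xrisk_per_researcher, cost_per_researcher):
    """Convert a per-researcher impact estimate into a per-dollar one."""
    return xrisk_per_researcher / cost_per_researcher

# The example from the comment: 1e-6 x-risk reduction per researcher,
# valuing a researcher at $1M, gives 1e-12 x-risk reduction per dollar.
print(xrisk_per_dollar(1e-6, 1e6))
```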

Comment by tetraspace-grouping on Higher and more equal: a case for optimism · 2018-12-31T03:01:26.542Z · EA · GW

I love that “one person out of extreme poverty per second” statistic! It’s much easier to picture in my head than a group of 1,000 million people, since a second is something I’m familiar with seeing every day.

Comment by tetraspace-grouping on Long-Term Future Fund AMA · 2018-12-20T12:01:32.725Z · EA · GW

Are there any organisations you investigated and found promising, but concluded that they didn't have much room for extra funding?