## Posts

Holden Karnofsky’s recent comments on FTX 2023-03-24T11:44:02.978Z
Design changes & the community section (Forum update March 2023) 2023-03-21T22:10:33.878Z
Highlights from last week 2023-03-16T00:49:00.745Z
EA Organization Updates & Opportunities: March 2023 2023-03-15T23:54:34.944Z
Highlights from last week 2023-03-09T20:27:30.447Z
Posts we recommend from last week (Digest #126) 2023-03-01T21:58:43.579Z
Posts we recommend from last week (Digest #125) 2023-02-24T03:49:32.421Z
How can we improve discussions on the Forum? 2023-02-23T00:42:35.302Z
EA Organization Updates: February 2023 2023-02-14T21:39:51.700Z
An update to our policies on revealing personal information on the Forum 2023-02-07T18:22:49.989Z
Moving community discussion to a separate tab (a test we might run) 2023-02-06T21:36:21.315Z
Karma overrates some topics; resulting issues and potential solutions 2023-01-30T18:32:26.593Z
Posts from 2022 you thought were valuable (or underrated) 2023-01-17T16:42:05.287Z
EA Organization Updates: January 2023 2023-01-16T14:58:44.534Z
Thread for discussing Bostrom's email and apology 2023-01-13T13:33:42.564Z
Beware safety-washing 2023-01-13T10:39:04.081Z
Open Thread: January — March 2023 2023-01-09T11:13:15.118Z
Your 2022 EA Forum Wrapped 🎁 2023-01-01T03:10:59.494Z
Announcing a subforum for forecasting & estimation 2022-12-26T20:51:08.363Z
Register your predictions for 2023 2022-12-26T20:49:57.580Z
EA Organization Updates: December 2022 2022-12-19T18:20:16.411Z
Niche vs. broad-appeal posts (& how this relates to usefulness/karma) (a sketch) 2022-12-17T18:11:33.474Z
Today is Draft Amnesty Day (December 16-18) 2022-12-16T02:32:30.924Z
Announcing: EA Forum Podcast – Audio narrations of EA Forum posts 2022-12-05T21:50:14.551Z
December 16 (and 17-18) will be Draft Amnesty Days 2022-12-04T00:32:50.464Z
Why do you give (or pursue work in effective altruism)? (Open thread) 2022-12-01T17:19:36.344Z
Effective giving subforum and other updates (bonus Forum update November 2022) 2022-11-28T12:51:48.487Z
Where are you donating this year, and why? (Open thread) 2022-11-23T12:26:47.411Z
EA Organization Updates: November 2022 2022-11-19T16:39:59.405Z
Dark mode (Forum update November 2022) 2022-11-01T15:09:36.396Z
Draft Amnesty Day: an event we might run on the Forum 2022-10-31T16:48:57.118Z
Search, subforums, and other Forum updates (October 2022) 2022-10-25T23:08:47.606Z
How many people die from the flu? (OWID) 2022-10-24T21:54:36.493Z
EA Organization Updates: October 2022 2022-10-14T13:36:06.610Z
Open Thread: October — December 2022 2022-10-12T10:41:00.424Z
Prediction market does not imply causation 2022-10-10T20:37:00.905Z
Invisible impact loss (and why we can be too error-averse) 2022-10-06T15:08:08.754Z
Ask (Everyone) Anything — “EA 101” 2022-10-05T10:17:34.344Z
Winners of the EA Criticism and Red Teaming Contest 2022-10-01T01:50:09.257Z
Reasoning Transparency 2022-09-28T12:22:00.465Z
9/26 is Petrov Day 2022-09-25T23:14:32.296Z
EA Organization Updates: September 2022 2022-09-14T15:50:34.202Z
Agree/disagree voting (& other new features September 2022) 2022-09-07T11:07:45.382Z
Who are some less-known people like Petrov? 2022-09-06T13:22:11.040Z
Notes on the Forum today 2022-09-01T14:18:26.324Z
Epistemic status: an explainer and some thoughts 2022-08-31T13:59:14.967Z

Comment by Lizka on An update to our policies on revealing personal information on the Forum · 2023-03-23T18:38:53.395Z · EA · GW

Coming back to this (a very quick update): we're going to start responding to anonymous posts and comments that make accusations or the like without evidence or corroboration, to flag that anyone can write this and readers should take it with a healthy amount of skepticism. This is somewhat relevant to the policy outlined above, so I wanted to share it here.

Comment by Lizka on Forecasting in the Czech public administration - preliminary findings · 2023-03-22T10:45:11.570Z · EA · GW

Thanks for sharing this! I found it interesting to read about your process. In case someone wants to read a summary — Zoe has one.

Assorted highlights/insights I pulled out while reading:

• Useful for engagement ("Drop-off across the six months of our tournament is around 75% (from ~160 to ~40 weekly active participants)"): prizes for the most informative rationales in each thematic “challenge” (every 3 weeks), having some questions designed to resolve quite soon after opening to provide early feedback (although it's important to avoid making these distracting)
• This was an interesting section: "The involvement of domain experts was useful especially to increase trust and prevent future public criticism"
• "when the Russian invasion of Ukraine started, it became clear that many of the refugees would flee to the Czech Republic. Our forecasters quickly made a forecast, and we used it to create 2 scenarios (300k and 500k incoming refugees). We then used these scenarios in our joint analysis with PAQ Research on how to effectively integrate the Ukrainian immigrants. The study was then used by multiple Czech ministries in creating programs of support for housing, education, and employment. This happened at a time when widely circulated estimates spoke of tens of thousands of such arrivals by the end of 2022. In reality, it was over 430 thousand people."
Comment by Lizka on How much should governments pay to prevent catastrophes? Longtermism’s limited role · 2023-03-21T11:34:55.282Z · EA · GW

Thanks for writing this! I'm curating it.

There are roughly two parts to the post:

1. a sketch cost-benefit analysis (CBA) for whether the US should fund interventions reducing global catastrophic risk (roughly sections 2-4)
2. an argument for why longtermists should push for a policy of funding all those GCR-reducing interventions that pass a cost-benefit analysis test and no more (except to the extent that a government should account for its citizens' altruistic preferences, which in turn can be influenced by longtermism)
1. "That is because (1) unlike a strong longtermist policy, a CBA-driven policy would be democratically acceptable and feasible to implement, and (2) a CBA-driven policy would reduce existential risk by almost as much as a strong longtermist policy."

I think the second part presents more novel arguments for readers of the Forum, but the first part is an interesting exercise, and important to sketch out to make the argument in part two.

Assorted thoughts below.

### 1. A graph

I want to flag a graph from further into the post that some people might miss ("The x-axis represents U.S. lives saved (discounted by how far in the future the life is saved) in expectation per dollar. The y-axis represents existential-risk-reduction per dollar. Interventions to the right of the blue line would be funded by a CBA-driven catastrophe policy. The exact position of each intervention is provisional and unimportant, and the graph is not to scale in any case... "):

### 2. Outlining the cost-benefit analysis

I do feel like a lot of the numbers used for the sketch CBA are hard to defend, but I get the sense that you're approaching those as givens, and then asking what e.g. people in the US government should do if they find the assumptions reasonable. At a brief skim, the support for "how much the interventions in question would reduce risk" seems to be the weakest (and I am a little worried about how this is approached — flagged below).

I've pulled out some fragments that produce a ~BOTEC for the cost-effectiveness of a set of interventions from the US government's perspective (bold mine):

1. A "global catastrophe" is an event that kills at least 5 billion people. The model assumes that each person’s risk of dying in a global catastrophe is equal.
2. Overall risk of a global catastrophe: "Assuming independence and combining Ord’s risk-estimates of 10% for AI, 3% for engineered pandemics, and 5% for nuclear war gives us at least a 17% risk of global catastrophe from these sources over the next 100 years.[8] If we assume that the risk per decade is constant, the risk over the next decade is about 1.85%.[9] If we assume also that every person’s risk of dying in this kind of catastrophe is equal, then (conditional on not dying in other ways) each U.S. citizen’s risk of dying in this kind of catastrophe in the next decade is at least 5/9 × 1.85% ≈ 1% (since, by our definition, a global catastrophe would kill at least 5 billion people, and the world population is projected to remain under 9 billion until 2033). According to projections of the U.S. population pyramid, 6.88% of U.S. citizens alive today will die in other ways over the course of the next decade.[10] That suggests that U.S. citizens alive today have on average about a 1% risk of being killed in a nuclear war, engineered pandemic, or AI disaster in the next decade. That is about ten times their risk of being killed in a car accident.[11]"
1. A lot of ink has been spilled on this, but I don't get the sense that there's a lot of agreement.
3. How much would a set of interventions cost: "We project that funding this suite of interventions for the next decade would cost less than $400 billion.[16]" — the footnote reads "The Biden administration’s 2023 Budget requests $88.2 billion over five years (The White House 2022c; U.S. Office of Management and Budget 2022). We can suppose that another five years of funding would require that much again. A Nucleic Acid Observatory covering the U.S. is estimated to cost $18.4 billion to establish and $10.4 billion per year to run (The Nucleic Acid Observatory Consortium 2021: 18). Ord (2020: 202–3) recommends increasing the budget of the Biological Weapons Convention to $80 million per year. Our listed interventions to reduce nuclear risk are unlikely to cost more than $10 billion for the decade. AI safety and governance might cost up to $10 billion as well. The total cost of these interventions for the decade would then be $319.6 billion."
4. How much would the interventions reduce risk: "We also expect this suite of interventions to reduce the risk of global catastrophe over the next decade by at least 0.1pp (percentage points). A full defence of this claim would require more detail than we can fit in this chapter, but here is one way to illustrate the claim’s plausibility. Imagine an enormous set of worlds like our world in 2023. ... We claim that in at least 1-in-1,000 of these worlds the interventions we recommend above would prevent a global catastrophe this decade. That is a low bar, and it seems plausible to us that the interventions above meet it."
1. This seems under-argued. Without thinking too long about this, it's probably the point in the model that I'd want to see more work on.
2. I also worry a bit that bundling interventions like this (and estimating cost-effectiveness for the whole bundle instead of individually) leads to issues like: funding interventions that aren't cost-effective on their own because they're part of the group, or failing to fund the interventions that account for the bulk of the risk reduction because advocates achieve only a partial success that drops some particularly useful intervention (e.g. funding AI safety research), etc.
5. The value of a statistical life (VSL) (the value of saving one life in expectation via small reductions in mortality risks for many people): "The primary VSL figure used by the U.S. Department of Transportation for 2021 is $11.8 million, with a range to account for various kinds of uncertainty spanning from about $7 million to $16.5 million (U.S. Department of Transportation 2021a, 2021b)." (With a constant annual discount rate.) (Discussed here.)
6. Should the US fund these interventions? (Yes)
1. "given a world population of less than 9 billion and conditional on a global catastrophe occurring, each American’s risk of dying in that catastrophe is at least 5/9. Reducing GCR this decade by 0.1pp then reduces each American’s risk of death this decade by at least 0.055pp. Multiplying that figure by the U.S. population of 330 million, we get the result that reducing GCR this decade by 0.1pp saves at least 181,500 American lives in expectation. If that GCR-reduction were to occur this year, it would be worth at least $1.27 trillion on the Department of Transportation’s lowest VSL figure of $7 million. But since the GCR-reduction would occur over the course of a decade, cost-benefit analysis requires that we discount. If we use OIRA’s highest annual discount rate of 7% and suppose (conservatively) that all the costs of our interventions are paid up front while the GCR-reduction comes only at the end of the decade, we get the result that reducing GCR this decade by 0.1pp is worth at least $1.27 trillion / 1.07^10 ≈ $646 billion. So, at a cost of $400 billion, these interventions comfortably pass a standard cost-benefit analysis test.[20] That in turn suggests that the U.S. government should fund these interventions. Doing so would save American lives more cost-effectively than many other forms of government spending on life-saving, such as transportation and environmental regulations. In fact, we can make a stronger argument. Using a projected U.S. population pyramid and some life-expectancy statistics, we can calculate that approximately 79% of the American life-years saved by preventing a global catastrophe in 2033 would accrue to Americans alive today in 2023 (Thornley 2022). 79% of $646 billion is approximately $510 billion. That means that funding this suite of GCR-reducing interventions is well worth it, even considering only the benefits to Americans alive today.[21]"
2. (The authors also flag that this pretty significantly underrates the cost-effectiveness of the interventions, etc. by not accounting for the fact that the interventions also decrease the risks from smaller catastrophes and by not accounting for the deaths of non-US citizens.)
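The excerpts above chain together a handful of arithmetic steps. Here is a rough sketch of that chain in Python — all inputs are the authors' assumed figures (Ord's risk estimates, the 0.055pp per-person risk reduction, the $7M VSL, the 7% discount rate), not my own estimates:

```python
# Back-of-the-envelope reproduction of the chapter's cost-benefit analysis.

# Risk of global catastrophe over 100 years, assuming independence of sources
risk_century = 1 - (1 - 0.10) * (1 - 0.03) * (1 - 0.05)  # >= 17%

# Constant per-decade risk consistent with the century-level figure
risk_decade = 1 - (1 - risk_century) ** (1 / 10)         # ~1.85%

# A catastrophe kills >= 5 billion of a < 9 billion world population,
# so each American's risk conditional on catastrophe is >= 5/9
personal_risk = risk_decade * 5 / 9                      # ~1%

# Lives saved by a 0.1pp decade-level risk reduction (0.055pp per person)
us_population = 330e6
lives_saved = 0.00055 * us_population                    # 181,500

# Value at the lowest VSL, discounted at OIRA's highest annual rate,
# assuming benefits arrive only at the end of the decade
vsl = 7e6
benefit = lives_saved * vsl / 1.07 ** 10                 # ~$646 billion

cost = 400e9
print(f"decade risk: {risk_decade:.2%}")
print(f"benefit ${benefit / 1e9:.0f}B vs cost ${cost / 1e9:.0f}B")
```

Running this reproduces the quoted figures (1.85% decade risk, 181,500 lives, ~$646B discounted benefit), which is how the interventions clear the $400B cost in the authors' framework.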

### 3. Some excerpts from the argument about what longtermists should advocate for that I found insightful or important

1. "getting governments to adopt a CBA-driven catastrophe policy is not trivial. One barrier is psychological (Wiener 2016). Many of us find it hard to appreciate the likelihood and magnitude of a global catastrophe. Another is that GCR-reduction is a collective action problem for individuals. Although a safer world is in many people’s self-interest, working for a safer world is in few people’s self-interest. Doing so means bearing a large portion of the costs and gaining just a small portion of the benefits.[28] Politicians and regulators likewise lack incentives to advocate for GCR-reducing interventions (as they did with climate interventions in earlier decades). Given widespread ignorance of the risks, calls for such interventions are unlikely to win much public favour. / However, these barriers can be overcome."
2. "getting the U.S. government to adopt a CBA-driven catastrophe policy would reduce existential risk by almost as much as getting them to adopt a strong longtermist policy. This is for two reasons. The first is that, at the current margin, the primary goals of a CBA-driven policy and a strong longtermist policy are substantially aligned. The second is that increased spending on preventing catastrophes yields steeply diminishing returns in terms of existential-risk-reduction." (I appreciated the explanations given for the reasons.)
3. "At the moment, the world is spending very little on preventing global catastrophes. The U.S. spent approximately $3 billion on biosecurity in 2019 (Watson et al. 2018), and (in spite of the wake-up call provided by COVID-19) funding for preventing future pandemics has not increased much since then.[32] Much of this spending is ill-suited to combatting the most extreme biological threats. Spending on reducing GCR from AI is less than $100 million per year.[33]"
4. "here, we believe, is where longtermism should enter into government catastrophe policy. Longtermists should make the case for their view, and thereby increase citizens’ AWTP [altruistic willingness to pay] for pure longtermist goods like refuges.[38] When citizens are willing to pay for these goods, governments should fund them."
5. "One might think that it is true only on the current margin and in public that longtermists should push governments to adopt a catastrophe policy guided by cost-benefit analysis and altruistic willingness to pay. [...] We disagree. Longtermists can try to increase government funding for catastrophe-prevention by making longtermist arguments and thereby increasing citizens’ AWTP, but they should not urge governments to depart from a CBA-plus-AWTP catastrophe policy. On the contrary, longtermists should as far as possible commit themselves to acting in accordance with a CBA-plus-AWTP policy in the political sphere. One reason why is simple: longtermists have moral reasons to respect the preferences of their fellow citizens. [Another reason why is that] the present generation may worry that longtermists would go too far. If granted imperfectly accountable power, longtermists might try to use the machinery of government to place burdens on the present generation for the sake of further benefits to future generations. These worries may lead to the marginalisation of longtermism, and thus an outcome that is worse for both present and future generations."
Comment by Lizka on Global economic inequality · 2023-03-19T22:02:09.662Z · EA · GW

It's not the definition used in the linked article (I agree that this is confusing, and I wish it were flagged a bit better, although I don't think the choice of definitions itself is unreasonable) — see here:

... I will use Denmark as a benchmark of what it means for poverty to fall ‘substantially’. Using Denmark as a benchmark, we can ask: how equal and rich would countries around the world need to become for global poverty to be similarly low as in Denmark?

Denmark is not the only country with a small share living on less than $30, as the visualization above showed. In Norway and Switzerland an even smaller share of the population (7% and 11%) is living in such poverty. I chose Denmark, where 14% live in poverty, as a benchmark because the country is achieving this low poverty rate despite having a substantially lower average income than Switzerland or Norway. Considering a scenario in which global poverty declines to the level of poverty in Denmark is a more modest scenario than one that considers an end of global poverty altogether. It is a scenario in which global poverty would fall from 85% to 14% and so it would certainly mean a substantial reduction of poverty. If you think that my poverty line of $30 per day is too low or too high, or if you want to rely on a different country than Denmark as a benchmark, or if you would prefer a scenario in which no one in the world would remain in poverty, you can follow my methodology and replace my numbers with yours.[5] What I want to do in this text is to give an idea of the magnitude of the changes that are necessary to substantially reduce global poverty.

And see here for why (I think) this is what Max has gone for: https://ourworldindata.org/higher-poverty-global-line

Abstract: The extremely low poverty line that the UN relies on has the advantage that it draws the attention to the very poorest people in the world. It has the disadvantage that it ignores what is happening to the incomes of the 90% of the world population who live above the extreme poverty threshold.
The global poverty line that the UN relies on is based on the national poverty lines in the world’s poorest countries. In this article I ask what global poverty looks like if we rely on the notions of poverty that are common in the world’s rich countries – like Denmark, the US, or Germany. Based on the evidence I ask what our aspirations for the future of global poverty reduction might be.

Comment by Lizka on EA Organization Updates & Opportunities: March 2023 · 2023-03-16T05:22:39.761Z · EA · GW

Thanks for asking — at a skim of the links (1, 2), I also don't see anything in London. I (or someone else) will follow up with the person who submitted the announcement.

Comment by Lizka on GPT-4 is out: thread (& links) · 2023-03-14T20:57:26.861Z · EA · GW

For people reading these comments and wondering if they should go look: it's in the section that compares early and launch responses of GPT-4 for "harmful content" prompts. It is indeed fairly full of explicit and potentially triggering content.

Harmful Content Table Full Examples

CW: Section contains content related to self harm; graphic sexual content; inappropriate activity; racism

Comment by Lizka on How oral rehydration therapy was developed · 2023-03-14T07:07:29.138Z · EA · GW

I've finally properly read the linked piece, and it is in fact excellent. I'm curating this post; thanks for link-posting the article.

Among other things, I really appreciated the descriptions of moments when cures were almost discovered. A number of such moments happened with ORS/ORT, but a brief outline of this happening with vitamin C and scurvy (which is used as an illustration of a broader point in the piece) is easier to share here to give a sense for the article:

Today we know that scurvy is caused by a lack of vitamin C — a nutrient found in fresh food, like lemons and oranges. Medics in the Royal Navy during the 19th century had never heard of vitamin C, but they did know that sailors who drank a regular ration of lemon juice never seemed to fall ill with the disease, so that’s exactly what they supplied on long voyages. In 1860 the Royal Navy switched from lemons and Mediterranean sweet limes to the West Indian sour lime, not realizing that the West Indian limes contained a fraction of the vitamin C. For a while, the error went undiscovered because the advent of steamships meant that sailors were no longer going months without access to fresh food. But in the late 19th century, polar explorers on longer voyages started to fall ill with scurvy — a disease that they thought they’d seen the back of decades earlier. Without a knowledge of the underlying biology behind scurvy, a cure had been discovered and then promptly forgotten.

I also really appreciated the description of how this treatment went from carefully monitored hospital settings to treatment centers and field hospitals in a crisis, and even to household cures (a feat that involved comics, advocacy by a famous actress, and door-to-door education).

Here's another excellent passage from near the end of the article, which is related to Kelsey's second point:

Despite saving so many lives, the impact of ORT is easily overlooked. Ask someone what the biggest health innovations were in the 20th century and they’re likely to think of insulin, or the discovery of penicillin. Why hasn’t the development of ORT been elevated to a similar place in the history books?

One reason might be the sheer simplicity of the treatment. But the simplicity wasn’t an accident — it was the whole point of ORS. Scientists like Nalin and Cash were searching for a treatment that could scale to be used anywhere on the planet, even in the most rudimentary settings. “Once the physiology was worked out and once the clinical trials were carried out, you then had to market it and get it out to where the doctors and nurses and people were going to use it,” says Cash. Simplicity meant scalability.

Comment by Lizka on Lizka's Shortform · 2023-03-14T02:30:21.368Z · EA · GW

Moderation update: A new user, Bernd Clemens Huber, recently posted a first post ("All or Nothing: Ethics on Cosmic Scale, Outer Space Treaty, Directed Panspermia, Forwards-Contamination, Technology Assessment, Planetary Protection, (and Fermi's Paradox)") that was a bit hard to make sense of. We hadn't approved the post over the weekend and hadn't processed it yet, when the Forum team got an angry and aggressive email today from the user in question calling the team "dipshits" (and providing a definition of the word) for waiting throughout the weekend.

If the user disagrees with our characterization of the email, they can email us to give permission for us to share the whole thing.

We have decided that this is not a promising start to the user's interactions on the Forum, and have banned them indefinitely. Please let us know if you have concerns, and as a reminder, here are the Forum's norms.

Comment by Lizka on Share the burden · 2023-03-14T02:22:57.046Z · EA · GW

Hi folks, I’m coming in as a mod. We're doing three things with this thread: we're issuing two warnings and encrypting one person's name in rot13.

Discussions of abuse and sexual misconduct tend to be difficult and emotionally intense, and can easily create more confusion and hurt than clarity and improvement. They are also vitally important for communities — we really need clarity and improvement!

So we really want to keep these conversations productive and will be trying our best.

1.

We’re issuing a warning to @sapphire for this comment; in light of the edits made to the thread sapphire references in their comment, we think it was at best incautious and at worst deliberately misleading to share the unedited version with no link to the current version.

This isn’t the first time sapphire has shared content from other people in a way that seems to somewhat misrepresent what others meant to convey (see e.g.), and we think some of their comments fall below the bar for being honest.

When we warn someone, we expect them to consistently hold themselves to a higher standard in the future, or we might ban them.

2.

We’re also issuing a warning to @Ivy Mazzola for this comment, which fell short of our civility norms. For instance, the following is not civil:

Do you want to start witchhunts? Who exactly are you expecting to protect by saying somebody can be mean and highstrung on Facebook? What is the heroic moment here? Except that isn't what was said, because then it would be clear that was not related enough to bring up. So instead you posted on a thread having to do with sexual abuse that he is abusive.

Anger can be understandable, but heated comments tend to reduce the quality of discussion and impede people from making progress or finding cruxes.

3.

We’ve also encrypted mentions of the person being discussed in this thread (in rot13), per our policy outlined here, and we've hidden their username in their replies.
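(For readers unfamiliar with rot13: it's a simple letter-substitution encoding — not true encryption — that shifts each letter 13 places in the alphabet, so the same operation both encodes and decodes. A minimal sketch in Python, using a made-up name rather than the one from the thread:)

```python
import codecs

# rot13 shifts each letter 13 places; applying it twice returns the
# original text, so readers can decode the name themselves.
encoded = codecs.encode("Jane Doe", "rot13")
print(encoded)                          # Wnar Qbr
print(codecs.decode(encoded, "rot13"))  # Jane Doe
```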

Comment by Lizka on Scoring forecasts from the 2016 “Expert Survey on Progress in AI” · 2023-03-03T02:33:31.269Z · EA · GW

This is great, thanks for writing it! I'm curating it. I really appreciate the table, the fact that you went back and analyzed the results, the very clear flags about reasons to be skeptical of these conclusions or this methodology, etc.

I'd also highlight this recent post: Why I think it's important to work on AI forecasting

Also,  this is somewhat wild:

This is commonly true of the 'Narrow tasks' forecasts (although I disagree with the authors that it is consistently so).[9] For example, when asked when there is a 50% chance AI can write a top forty hit, respondents gave a median of 10 years. Yet when asked about the probability of this milestone being reached in 10 years, respondents gave a median of 27.5%.

Comment by Lizka on Open Thread: January — March 2023 · 2023-02-26T20:14:08.952Z · EA · GW

Hi! On the All Posts page, you can't filter by most tags, unfortunately, although we just added the option of hiding the Community tag:

Find the sorting options:

Hide community:

On the Frontpage, you can indeed filter by different topics.

Comment by Lizka on Why I don’t agree with HLI’s estimate of household spillovers from therapy · 2023-02-26T18:41:53.972Z · EA · GW

This research and this response are excellent. I might try to write something longer later, but I'm curating it.

I also agree with and want to highlight @Jason's comment

[this is] modeling a productive way to do this kind of post -- show the organization a draft of the post first, and give them time to offer comments on the draft + prepare a comment for your post that can go up shortly after the post does.

Comment by Lizka on Why should ethical anti-realists do ethics? · 2023-02-24T18:51:18.662Z · EA · GW

I'm curating this post. I'll try to outline what I particularly appreciate about it later, but for now, I'll just excerpt this bit:

The standard practice – both amongst realists and anti-realists — often works within an implicitly realist frame, on which one’s intuitions are (imperfect) evidence about the true underlying principles (why would you think that?), which are typically assumed to have properties like consistency, coherence, and so on. To find these principles, one attempts to “curve fit” one’s intuitions – and different ethicists make different trade-offs between e.g. the simplicity and elegance of the principles themselves, vs. the accuracy with which they predict the supposed “data.”[5] But if this practice isn’t helping us towards the objective normative truth, why would we go in for it?

Comment by Lizka on EV UK board statement on Owen's resignation · 2023-02-21T21:56:49.046Z · EA · GW

I shared a quick update here — tl;dr: we're temporarily expanding the Community section on the Frontpage from 3 posts to 5 to give the posts a bit more visibility. We plan on reverting back to 3 posts in a few days.

Comment by Lizka on EV UK board statement on Owen's resignation · 2023-02-21T21:55:27.312Z · EA · GW

I shared a quick update here — tl;dr: we're temporarily expanding the Community section on the Frontpage from 3 posts to 5 to give the posts a bit more visibility. We plan on reverting back to 3 posts in a few days.

Comment by Lizka on EV UK board statement on Owen's resignation · 2023-02-21T21:52:54.459Z · EA · GW

A quick update: the posts have been drifting in and out of the 3 top Community posts that are shown on the Frontpage, so we've expanded the Community section on the Frontpage temporarily from 3 posts to 5 to give them (and other Community posts being shared right now) a bit more visibility. We plan on reverting back to 3 posts in a few days.

Comment by Lizka on A statement and an apology · 2023-02-21T14:36:12.622Z · EA · GW

I'm commenting as a moderator right now.

I'm really sorry that you're feeling this way. I think a lot of us have strong emotions about this news and don't know how to process it. Given that you wrote "[hastily written]," I assume that this comment is helping you process the news.

At the same time, I think it's important for us to not slip away from our norms on the Forum, which include making sure the space is welcoming to different groups of people, including men. There are a few different ways to interpret the part of your comment that's about men. Unfortunately, I think right now it's not clear whether you're saying that "there are no decent men" (which would be norm-violating). (If you replace "men" with a different demographic group, you clearly see that the statement is not acceptable. This test doesn't always work — sometimes there's a long history of stereotype or power that makes statements about a demographic group much worse than the same statement about a different demographic group, but I think it's a useful signal here.)

So it might be worth clarifying what you mean in the comment. In the future, please avoid sweeping statements about demographic groups.

Comment by Lizka on EV UK board statement on Owen's resignation · 2023-02-20T23:15:04.237Z · EA · GW

Thanks for this note! I currently don’t think it will fall off in a few days, but we are considering pinning the post(s) (at the top of the Community section of the Frontpage or on the overall top of the Frontpage) if they do.

Comment by Lizka on EV UK board statement on Owen's resignation · 2023-02-20T23:13:07.137Z · EA · GW

I’d like to chime in here. I can see how you might think that there’s a coverup or the like, but the Online team (primarily Ben Clifford and I, with significant amounts of input from JP and others on the team) made the decision to run this test based on feedback we’d been hearing for a long time from a variety of people, and discussions we’d had internally (also for a long time). And I didn’t know about Owen’s actions or resignation until today. (Edited to add: no one on the Online team knew about this when we were deciding to go forward with the test.)

We do think it’s important for people in EA to hear this news, and we’re talking about how we might make sure that happens. I know I plan on sharing one or both of these posts in the upcoming Digest, and we expect one or both of the posts to stay at the top of the Community page for at least a few days. If the posts drift down, we’ll probably pin one somehow. We’re considering moving them out of the section, but we’re conflicted; we do endorse the separation of Community and other content, and keeping the test going, and moving them out would violate this. We’ll keep talking about it, but I figured I would let you know what our thoughts are at the moment.

Comment by Lizka on Why I No Longer Prioritize Wild Animal Welfare (edited) · 2023-02-16T15:04:39.760Z · EA · GW

Thanks so much for sharing this; I'm curating it.

I'd also encourage people to read the comments and this exchange (and also look at "The correct response to uncertainty is *not* half-speed").

Some particularly good qualities of this post:

• +1 to "I always appreciate 'why I updated against what I was working on posts'" from Larks
• The info and opinions expressed in the post were useful
• This was easy to follow for people with little experience in wild animal welfare
• This was carefully caveated

This isn't a summary, but in case people are looking for the overall opinion, I found the following a helpful excerpt (bold mine):

After looking into these topics, I now tentatively think that WAW is not a very promising EA cause because:

• In the short-term (the next ten years), WAW interventions we could pursue to help wild animals now seem less cost-effective than farmed animal interventions.
• In the medium-term (10-300 years), trying to influence governments to do WAW work seems similarly speculative to other longtermist work but far less important.
• In the long-term, WAW seems important but not nearly as important as preventing x-risks and perhaps some other work.

[...]

My subjective probability that the WAW movement will take off with $8 million per year of funding is not that much higher than the probability that it will take off with $2 million per year of funding, as the movement’s success probably mostly depends on factors other than funding. But with $2 million, the probability would be much higher than with $0 (I’m using somewhat random numbers here to make the point). And ideally, the money that we do spend on WAW would be used to fund people with different visions about WAW to try multiple different approaches so that we could see which approaches work best. I see some of this happening now, so I mostly support the status quo. Of course, my opinion on how much funding WAW should receive might change upon seeing concrete funding proposals.

Comment by Lizka on CE: Announcing our 2023 Charity Ideas. Apply now! · 2023-02-15T02:28:27.223Z · EA · GW

I really appreciate this post and am curating it. I also want to signal-boost this AMA with Charity Entrepreneurship from a few months ago.

Some things I particularly like:

1. The whole project seems like a great approach to getting impactful projects off the ground. And I've been really impressed with the work of some of the previously incubated charities.
2. Potential weaknesses of the ideas and challenges are highlighted
3. The "co-founder fit" sections are really useful
4. The 1-sentence summaries -> 1-paragraph summaries -> More Detailed Summaries structure

I'm also really excited about the February-March 2024 program, which will focus on farmed animals and on mass media interventions for global health and development. I notice I'm a little confused, though, about how it's possible to apply (and where — is it just the same link?), given that the project ideas are not up yet (and as far as I'm aware, they don't exist yet) — I would love more info on this front!

Comment by Lizka on H5N1 - thread for information sharing, planning, and action · 2023-02-13T21:37:18.754Z · EA · GW

Zvi recently shared a post on H5N1 on LessWrong.

Comment by Lizka on H5N1 - thread for information sharing, planning, and action · 2023-02-10T21:49:37.991Z · EA · GW

I'm curating this — thanks a bunch for making it. I'd be very glad to see more comments and resource-sharing here, and more posts like this in the future in similar situations.

Note also two recent posts from DirectedEvolution:

One thing I'm curious about, given the predictions on whether the WHO will declare a PHEIC for H5N1: how come there wasn't a spike (or even growth) in markets like this one? The mink-to-mink transmission seems to be concerning, and that happened and was known in October.

Comment by Lizka on “Community” posts have their own section, subforums are closing, and more (Forum update February 2023) · 2023-02-10T00:23:56.251Z · EA · GW

That definitely looks like a bug; thank you for catching and flagging it! We'll work on fixing it asap.

Comment by Lizka on Solidarity for those Rejected from EA Global · 2023-02-08T16:10:30.359Z · EA · GW

I want to write a quick note encouraging people not to view EA Global application decisions as overall evaluations of themselves, their status or identity "as EAs", or their potential for having a significant impact.

I should also say that I was rejected from the first EA conference that I applied to.

(I don't think my experience with this was as bad as it was for some others and don't want to use this fact to say that it's not reasonable to be sad about a rejection — I absolutely think it is! — but maybe it's a useful data point for what I'm saying, and useful context.)

I also know of cases where rejections seem, in retrospect, wrong, or were interpreted incorrectly — at some point, I was collecting these stories to see if we could improve the situation (unfortunately, this was at a time when I was overloaded and transitioning jobs, and the project went nowhere).

I don't know what criteria are being used to evaluate applications, but my impression is that the process tries to answer questions like, "is this person facing decisions that an EAG will help them with?" "Will their experience add to the balance of attendees and let others learn from them in a way that's hard to learn from others' experiences?" These are hard, aren't measures of "is this person a 'good EA,'" don't mean that someone is not impactful, and also mean that the same person can be rejected now and then accepted at future conferences.

And I want to cite the EA Global FAQ on admissions:

Rejection only means that the admissions team for this particular event didn’t think that your application demonstrated your fit for that event. A rejected application is not a judgement about the value of your work or your potential impact in effective altruism. Rejections do not signal to funders or potential employers that they shouldn’t collaborate with you. Or that you are somehow not part of this community. We do not share information about rejected applications with anyone who doesn’t need to know it.

The admissions process narrowly considers your fit for a conference, and even then, the process is imperfect; we are aware of times when we rejected someone we realized later we should have accepted.

If you’re rejected from a conference, you are absolutely welcome to apply to other conferences in the future.

Relevant disclaimers: I work at the Centre for Effective Altruism (on the Online Team), and I was on the Events Team before that. This isn't an official response from the Events Team or anything like that, though!

Comment by Lizka on Why People Use Burner Accounts: A Commentary on Blacklists, the EA "Inner Circle", and Funding · 2023-02-08T14:06:56.763Z · EA · GW

One thing I'd like to quickly flag on the topic of this comment: using multiple accounts to express the same opinion (e.g. to create the illusion of multiple independent accounts on this topic) is a (pretty serious) norm violation. You can find the full official norms for using multiple accounts here.

This doesn't mean that e.g. if you posted something critical of current work on forecasting at some point in your life, you can't now use an anonymous account to write a detailed criticism of a forecasting-focused organization. But e.g. commenting on the same post/thread with two different accounts is probably quite bad.

Comment by Lizka on Literature review of Transformative Artificial Intelligence timelines · 2023-02-08T13:12:00.108Z · EA · GW

Some excellent content on AI timelines and takeoff scenarios has come out recently:

I'm curating this post, but encourage people to look at the others if they're interested.

• I think discussions of different models or forecasts and how they interact happen in lots of different places, and syntheses of these forecasts and models are really useful.
• The full report has a tool that you can use to give more or less weight to different forecasts to see what the weighted average forecasts look like.
• I really appreciate the summaries of the different approaches (also in the full report), and that these summaries flag potential weaknesses (like the fact that the AI Impacts survey had a 17% response rate).
• This is a useful insight:
  • "The inside-view models we reviewed predicted shorter timelines (e.g. bioanchors has a median of 2052) while the outside-view models predicted longer timelines (e.g. semi-informative priors has a median over 2100). The judgment-based forecasts are skewed towards agreement with the inside-view models, and are often more aggressive (e.g. Samotsvety assigned a median of 2043)"
• The visualization (although it took me a little while to parse it; I think it might be useful to e.g. also provide simplified visuals that show fewer approaches)

Other notes:

1. I do wish it were easier to tell how independent these different approaches/models are. I like the way model-based forecasts and judgment-based forecasts are separated, which already helps (I assume that e.g. the Metaculus estimate incorporates others' forecasts and the models).
2. I think some of the conversations people have about timelines focus too much on what the timelines look like and less on "what does this mean for how we should act." I don't think this is a weakness of this lit review — this lit review is very useful and does what it sets out to do (aggregate different forecasts and explain different approaches to forecasting transformative AI) — but I wanted to flag this.

Comment by Lizka on Appreciation thread Feb 2023 · 2023-02-08T12:02:37.056Z · EA · GW

I love that people on the Forum tend to celebrate null results and postmortems of projects that have been discontinued (notable recent example).

Comment by Lizka on Appreciation thread Feb 2023 · 2023-02-08T12:00:40.366Z · EA · GW

I really appreciate Rethink Priorities, both because I was a fellow there and think it really helped me personally, and because they're working on really cool and, I think, useful work — and they post about it on the Forum a lot. :)

Comment by Lizka on Appreciation thread Feb 2023 · 2023-02-08T11:57:26.645Z · EA · GW

I really appreciate people who notice that something should be done — whether that's putting together a resource, coordinating people to get a project rolling, etc. — and just decide to do it.

Comment by Lizka on Appreciation thread Feb 2023 · 2023-02-08T11:55:31.210Z · EA · GW

I really appreciate the folks donating significant chunks of their income because they want to improve the world. ❤️

Comment by Lizka on Moving community discussion to a separate tab (a test we might run) · 2023-02-07T14:16:25.441Z · EA · GW

Thanks for this suggestion! I think we might end up moving a bit quicker, as we don't think this change/test will stop people from engaging with "Community" posts altogether — people who want to see "Community" posts will still see them — and because (I think) we're pretty confident that there's a problem here that needs fixing (we're less sure that this is the right solution or that we've got the "problem statement" quite right — we hope the test will give us more info on this front).

Comment by Lizka on Moving community discussion to a separate tab (a test we might run) · 2023-02-07T14:13:59.749Z · EA · GW

Thanks for this feedback; we hadn't considered adding an option to remove the section (if we go with that version), and are now considering it.

(Yeah, we also considered hiding Community by default, but I think that would hide "Community" posts too much, and that some people just want to separate the experience of reading object-level posts from the experience of engaging with "Community" posts, and avoid having them compete with each other for attention.)

Comment by Lizka on Moving community discussion to a separate tab (a test we might run) · 2023-02-07T14:06:18.064Z · EA · GW

Thanks for outlining these concerns.

Re (1): We currently quickly check tags on posts, and a few people are working on tagging almost all new posts with at least a couple key tags (although there's sometimes a delay — or mistakes/missed tags, which is natural, I think, given that tagging shouldn't be taking up lots of resources). Keeping an eye on the Community section/tab to check that posts there should indeed be "Community" seems like a good idea. I agree that this change would put pressure on the tagging system (it'll be more important to get tagging right).

Re (2): My hope is that people will check the "Community" section if they want to keep up with "meta" projects, and I tend to share a lot of announcements in the Forum Digest, so that might also be an option for keeping up. Also, just a quick note: announcements from non-"Community"-oriented projects (e.g. a new Charity Entrepreneurship-incubated charity) will still end up on the Frontpage.

Comment by Lizka on Moving community discussion to a separate tab (a test we might run) · 2023-02-07T13:56:27.509Z · EA · GW

I don't have a strong view one way or another on the topic of whether you should be able to use strong-agree-votes anywhere. I don't know what others on the team think.

I'm quite excited about potentially developing or otherwise adding an easy-to-use poll feature on the Forum, though. It's not an obvious part of any of our major projects for the near future, though, and it's not clear to me that it's worth prioritizing over other improvements (in part because I don't know how much use it would get), so I don't know when exactly we might get to it, if ever — but this is something that would make me personally happy.

Comment by Lizka on The number of burner accounts is too damn high · 2023-02-07T11:42:32.692Z · EA · GW

Where is there info on hiring practices at FTX? I don't remember seeing this and would be interested.

More generally, I would be really interested in hearing about particular examples of people being denied job opportunities in EA roles because of opinions they share on the EA Forum (this would worry me a lot).

Comment by Lizka on Moving community discussion to a separate tab (a test we might run) · 2023-02-06T21:38:23.342Z · EA · GW

Agree-vote (✅) with this comment if you think you will probably be personally happier with the Forum if we make the change that we’re outlining in the post.

Disagree-vote (❎) if you think you will like the Forum less if we make the change.

Please use normal-strength votes (we’ll check for strong-agree-votes and cancel them). (We’re posting two comments as a quick poll to get a sense of what people who might be less comfortable commenting think. Please note that we won’t defer entirely to the results of these polls. See the other comment.)

Comment by Lizka on Moving community discussion to a separate tab (a test we might run) · 2023-02-06T21:37:45.654Z · EA · GW

Agree-vote (✅) with this comment if you are overall optimistic about the test that we’re outlining in the post (e.g. because you think we’ll find out something useful, or because you think that the change will likely be helpful).

Disagree-vote (❎) if you’re overall pessimistic about the test.

Please use normal-strength votes (we’ll check for strong-agree-votes and cancel them). (We’re posting two comments as a quick poll to get a sense of what people who might be less comfortable commenting think. Please note that we won’t defer entirely to the results of these polls. See the other comment.)

Comment by Lizka on [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy? · 2023-02-06T14:54:40.042Z · EA · GW

We have removed two links from this post in "Edit 3," as they were linking to someone's LinkedIn to claim that this person got the Open Philanthropy Undergraduate Scholarship that would cover tuition for those who are not domestic students (as well as the Atlas Fellowship), despite the fact that they do qualify as a domestic student. It seems that this claim is likely misleading; the person was offered a scholarship conditional upon going to a university abroad. They claim that they didn't do so, and so did not receive funding. Their LinkedIn now clarifies that they were offered a scholarship, but not funded.

In general, we think that checking these sorts of claims with the people they involve (in this case, Open Philanthropy or the person in question) is a very good norm, precisely because it can prevent situations like this, where a potentially damaging and potentially false rumor can spread about someone. Please do not share unverified rumors without doing some work to check them first. You can run things by the moderation team if you're unsure whether you should check them.

Comment by Lizka on I No Longer Feel Comfortable in EA · 2023-02-06T10:12:15.870Z · EA · GW

I have been trying to find this post and the older one for a few weeks now, but I couldn't remember the term — thanks so much for linking it.

Comment by Lizka on [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy? · 2023-02-05T14:54:06.022Z · EA · GW

The moderation team has encoded a paragraph in the post based on a request.

We're working on formal updates to our policies, but generally think that sharing this kind of personal information, especially when based on rumors, especially in a way that is easily accessible via a Google Search, is dangerous. In general, we'll probably accept requests to encode it.

Please do not add the information back; we may remove it entirely after further deliberation.

Comment by Lizka on Karma overrates some topics; resulting issues and potential solutions · 2023-02-02T15:02:20.963Z · EA · GW

Thanks for these flags about the newcomer experience, both. I agree that these are important considerations.

​​[Writing just for myself, not my employer or even my team. I work on the Forum, and that's probably hard to separate from my views on this topic — but this is a quickly-written comment, not something that I got feedback on from the rest of the team, etc.]

Comment by Lizka on Karma overrates some topics; resulting issues and potential solutions · 2023-02-02T15:00:42.525Z · EA · GW

I can see how all of this can feel related to the discussion about "bad epistemics" or a claim that the community as a whole is overly navel-gazing, etc. Thanks for flagging that you're concerned about this.

To be clear, though, one of the issues here (and the use of the term "bike-shedding") is more specific than those broader discussions. I think that, given whatever it is the community cares about (without opining on whether that prioritization is "correct"), the issues described in the post will appear.

Take the example of the Forum itself as a topic that's relevant to building EA and a topic of interest to the EA community.

Within that broad topic, some sub-topics will get more attention than others for reasons that don't track how much the community actually values them (in ~total). Suppose there are two discussions that could (and potentially should) happen: a discussion about the fonts on the site, and a discussion on how to improve fact-checking (or how to improve the Forum experience for newcomers, or how to nurture a culture that welcomes criticism, or something like that). I'd claim that the latter (sub)topic(s) is likely more important to discuss and get right than the former, but because it's harder, and harder to participate in, than a discussion about the font (something everyone interacts with all the time), it might get less attention.

Moreover, posts that are more like "I dislike the font, do you?" will often get more engagement than posts like "the font is bad for people with dyslexia, based on these 5 studies — here are some suggestions and some reasons to doubt the studies," because (likely) fewer people will feel like they can weigh in on the latter kind of post. This is where bike-shedding comes in. I think we can probably do better, but it'll require a bit of testing and tweaking.

​​[Writing just for myself, not my employer or even my team. I work on the Forum, and that's probably hard to separate from my views on this topic — but this is a quickly-written comment, not something that I got feedback on from the rest of the team, etc.]

Comment by Lizka on Karma overrates some topics; resulting issues and potential solutions · 2023-02-02T14:31:16.132Z · EA · GW

Thanks for this comment, Amber!

I'll try to engage with the other things that you said, but I just want to clarify a specific claim first. You write:

I guess a question underlying all of this is 'what is karma for?' An implication of this post seems to be that karma should reflect quality, or how serious people think the issues are, all things considered.

I actually do not believe this. I think the primary/key point of karma is ordering the Frontpage & providing a signal of what to read (and ordering other pages, like when you're exploring posts on a given topic). We don't need to use only karma for ordering the Frontpage — and I really wish that more people used topic filters to customize their Frontpages, etc. — but I do think that's a really important function of karma. This means that karma needs to reflect usefulness-of-reading-something to a certain extent. This post is about correcting one type of issue that arises given this use.

Note that we also correct in other ways. The Frontpage isn't just a list of posts from all time sorted by (inflation-adjusted) karma, largely because people find it useful to read newer content (although not always); we also have topic tags, etc.

So I don't directly care about whether a post that's 1000x more net useful than another post has 1000x (or even simply more) karma; I just want people to see the posts that will be most useful for them to engage with. (I think some people care quite a bit about karma correlating strongly with the impact of posts, and don't think this is unreasonable as a desire, but I personally don’t think it’s that important. I do think there are other purposes to karma, like being a feedback mechanism to the authors, a sign of appreciation, etc.)
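As a toy illustration of "karma plus recency" ordering: the sketch below uses a hypothetical HN-style score function with made-up numbers (it is not the Forum's actual ranking algorithm), just to show how a frontpage can avoid being a pure all-time karma sort.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    karma: int
    age_hours: float

def frontpage_score(post: Post, decay: float = 1.15) -> float:
    # Karma discounted by age: a newer post can outrank an older
    # post with much higher karma. (Hypothetical formula.)
    return post.karma / ((post.age_hours + 2) ** decay)

posts = [
    Post("old high-karma post", karma=300, age_hours=120),
    Post("new medium-karma post", karma=40, age_hours=3),
]
ranked = sorted(posts, key=frontpage_score, reverse=True)
# Here the newer post ranks first despite having far less karma.
```

With these made-up numbers, the `decay` parameter is the knob that trades off recency against karma; turning it down recovers something closer to an all-time karma sort.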

​​[Writing just for myself, not my employer or even my team. I work on the Forum, and that's probably hard to separate from my views on this topic — but this is a quickly-written comment, not something that I got feedback on from the rest of the team, etc.]

Comment by Lizka on Karma overrates some topics; resulting issues and potential solutions · 2023-02-02T14:13:23.304Z · EA · GW

Thanks for this comment — I agree that tag filtering/following is underused, and we're working on some things that we hope will make it a bit more intuitive and obvious. I like a lot of your suggestions.

Comment by Lizka on The Capability Approach to Human Welfare · 2023-02-01T14:49:30.971Z · EA · GW

Great, thank you! I appreciate this response; it made sense and cleared some things up for me.

Re:

Yeah, I'm with you on being told to exercise. I'm guessing you like this because you're being told to do it, but you know that you have the option to refuse.

I think you might be right, and this is just something like the power of defaults (rather than choices being taken away). Having good defaults is good.

(Also, I'm curating the post; I think more people should see it. Thanks again for sharing!)

Comment by Lizka on What I thought about child marriage as a cause area, and how I've changed my mind · 2023-02-01T14:45:14.064Z · EA · GW

I really appreciate this post, thanks for sharing it (and welcome to the Forum)!

Some aspects I want to highlight:

1. The project — trying to translate the known (or assumed) harms from child marriage into the metrics used by related projects that might work on the issue — seems really valuable.
2. Noticing that a key assumption falls through and sharing this is great. I'd love to see more of this.
3. The post also outlines some learnings from the experience:
   1. Write out key assumptions and test them / look for things that disprove them
   2. Avoid trusting consensus
   3. Get accountability / find someone to report to
4. I also like that there isn't the sense that this is the last word on whether working on child marriage is a promising cause area or not — this is an in-progress write-up (see "updated positions and next steps") and doesn't shy away from the fact.
5. And there's an "if you find this interesting, you may also like" section! I'm curious if you've seen:
   1. Giving What We Can's report from 2014 on this issue? (And the associated page, which also seems pretty outdated.)
   2. Introducing Lafiya Nigeria and the Women's health and welfare and Family planning topic pages.

Quick notes on the model — I'd be interested in your answers to some questions in the comments (Jeff's, this one that asks in part about the relationship between economic growth (and growth-supporting work) and this issue, etc.).

• I skimmed this report on some programs, and in case anyone is interested:
  • "In each study country, we tested four approaches: 1) community sensitization to address social norms, 2) provision of school supplies to encourage retention in school, 3) a conditional asset transfer to girls and their families, and 4) one study area that included all the approaches."
  • I'm immediately a bit worried that estimating the impact of these programs is messier if e.g. one of the harms that stem from child marriage that you track is a loss in education (or a loss in nutrition, or something) — as presumably e.g. the school-supplies program also just directly supports education (so there's potentially some double-counting).
  • (I'm also wondering if, assuming that education delays marriage, more effective education-support programs, like iron supplementation, are just the way to go here.)
• In general, it seems like there might be a bit of circularity (or, alternatively, loss of information) if we do something like: "ok, these interventions, which we evaluate on a given factor — how much they delay (child) marriage — are effective to [this degree] at achieving the particular thing we're measuring, which we think is important for [a number of factors]."

I made a sketch to try to explain my worry about the models (and some alternative approaches I've seen) — it's a very rough sketch, but I'd be curious for takes.

Comment by Lizka on The Capability Approach to Human Welfare · 2023-01-31T15:30:01.208Z · EA · GW

Thanks for posting this! I do think lots of people in EA take a more measuring-happiness/preference-satisfaction approach, and it's really useful to offer alternatives that are popular elsewhere.

My notes and questions on the post:

Here's how I understand the main framework of the "capability approach," based mostly on this post, the linked Tweet, and some related resources (including SEP and ChatGPT):[1]

• "Freedom to achieve [well-being]" is the main thing that matters from a moral perspective.
• (This post then implies that we should focus on increasing people's freedom to achieve well-being / we should maximize (value-weighted) capabilities.)
• "Well-being" breaks down into functionings (stuff you can be or do, like jogging or being a parent) and capabilities (the ability to realize a functioning: to take some options — choices)
• Examples of capabilities: having the option of becoming a parent, having the option of not having children, having the option of jogging, having the option of not jogging, etc. Note: if you live in a country where you're allowed to jog, but there are no safe places to jog, you do not actually have the capability to jog.
• Not all functionings/capabilities are equal: we shouldn't naively list options and count them. (So e.g. the ability to spin and clap 756 times is not the same as the option to have children, jog, or practice a religion.) My understanding is that the capability approach doesn't dictate a specific approach to comparing different capabilities, and the post argues that this is a complexity that is just a fact of life that we should accept and pragmatically move forward with:
• "Yes, it’s true that people would rank capability sets differently and that they’re very high dimensional, but that’s because life is actually like this. We should not see this and run away to the safety of clean (and surely wrong) simple indices. Instead, we should try to find ways of dealing with this chaos that are approximately right."

In particular, even if it turns out that someone is content not jogging, them having the ability to jog is still better than them not having this ability.
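One way to make the "value-weighted capabilities" idea concrete (with the caveat that the weights below are entirely made up by me, and the capability approach itself doesn't dictate them):

```python
# Hypothetical value weights for a few capabilities (made-up numbers).
weights = {
    "be healthy": 10.0,
    "choose whether to have children": 8.0,
    "jog safely": 2.0,
    "spin and clap 756 times": 0.01,
}

def capability_value(capability_set: set) -> float:
    # Sum the weights of the capabilities a person actually has;
    # unknown capabilities contribute nothing.
    return sum(weights.get(c, 0.0) for c in capability_set)

person_a = {"be healthy", "jog safely"}                       # value 12.0
person_b = {"be healthy", "choose whether to have children"}  # value 18.0
```

Note that naively counting options would score both sets equally (two capabilities each); the weighting is what does the work, and choosing those weights is exactly where one's own views about what matters come back in.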

My understanding of the core arguments of the post, with some questions or concerns I have (corrections or clarifications very much appreciated!):

1. What the "capability approach" is — see above.
2. Why this approach is good
1. It generally aligns with our intuitions about what is good.
1. I view this as both a genuine positive, and also as slightly iffy as an argument — I think it's good to ground an approach in intuitions like "it's good for a woman to choose whether to walk at night even if she might not want to", but when we get into things like comparing potential areas of work, I worry about us picking approaches that satisfy intuitions that might be wrong. See e.g. Don’t Balk at Animal-friendly Results, if I remember that argument correctly, or just consider various philanthropic efforts that focus on helping people locally even if they're harder to help and in better conditions than people who are farther away — I think this is generally justified with things like "it's important to help people locally," which to me seems like over-fitting on intuitions.
2. At the same time, the point about women being happier than men in the 1970s in the US seems compelling. Similarly, I agree that I don't personally maximize anything like my own well-being — I'm also "a confused mess of priorities."
2. It's safer to maximize capabilities than it is to maximize well-being (directly), which both means that it's safer to use the capabilities approach and is a signal that the capabilities approach is "pointing us in the right direction."
1. A potentially related point that I didn't see explicitly: this approach also seems safer given our uncertainty about what people value/what matters. This is also related to 2d.
3. This approach is less dependent on things like people's ability to imagine a better situation for themselves.
4. This approach is more agnostic about what people choose to do with their capabilities, which matters because we're diverse and don't really know that much about the people we're trying to help.
1. This seems right, but I'm worried that once you add the value-weighting for the capabilities, you're imposing your biases and your views on what matters in a similar way to other approaches to trying to compare different states of the world.
2. So it seems possible that this approach is either not very useful by saying: "we need to maximize value-weighted capabilities, but we can't choose the value-weightings," (see this comment, which makes sense to me) or transforms back into a generic approach like the ones more commonly used often in EA — deciding that there are good states and trying to get beings into those states (healthy, happy,  etc.). [See 3bi for a counterpoint, though.]
3. Some downsides of the approach (as listed by the post)
1. It uses individuals as the unit of analysis and assumes that people know best what they want, and if you dislike that, you won't like the approach. [SEE COMMENT THREAD...]
1. I just don't really see this as a downside.
2. "A second downside is that the number of sets of capabilities is incredibly large, and the value that we would assign to each capability set likely varies quite a bit, making it difficult to cleanly measure what we might optimize for in an EA context."
1. The post argues that we can accept this complexity and move forward pragmatically in a better way than going with clean-but-wrong indices. It lists three examples (two indices and one approach that tracks individual dimensions) that "start with the theory of the capability approach but then make pragmatic concessions in order to try to be approximately right." These seem to mostly track things that are common requirements for many other capabilities, like health/being alive, resources, education, etc.
4. The influence of the capability approach

Three follow-up confusions/uncertainties/questions (beyond the ones embedded in the summary above):

1. Did I miss important points, or get something wrong above?
2. If we just claim that people value having freedoms (or freedoms that will help them achieve well-being), is this structurally similar to preference satisfaction?
3. The motivation for the approach makes intuitive sense to me, but I'm confused about how this works with various things I've heard about how choices are sometimes bad. (Wiki page I found after a quick search, which seemed relevant after a skim.) (I would buy that a lot of what I've heard is stuff that's failed in replications, though.)
1. Sometimes I actually really want to be told, "we're going jogging tonight," instead of being asked, "So, what do you want to do?"
2. My guess is that these choices are different, and there's something like a meta-freedom to choose when my choice gets taken away? But it's all pretty muddled.

I don't have a philosophy background, or much knowledge of philosophy!

Comment by Lizka on Protect Our Future's Crypto Politics · 2023-01-30T00:17:44.716Z · EA · GW

Hi! Just flagging that I've marked this post as a "Personal Blog" post, based on the Forum's policy on politics.

(This means those who've opted in to seeing "Personal Blog" posts on the Frontpage will see it there, while others should only see it in Recent Discussion, on the All Posts page, and on the relevant topic/tag pages.)

Comment by Lizka on Spreading messages to help with the most important century · 2023-01-30T00:07:15.521Z · EA · GW

Hi! The process for curation is outlined here. In short, some people can suggest curation, and I currently make the final calls.

You can also see a list of other posts that have been curated (you can get to the list by clicking on the star next to a curated post's title).