Posts

How many EA billionaires five years from now? 2022-08-20T09:57:29.577Z
Erich_Grunewald's Shortform 2022-08-11T23:36:41.387Z
Risk of famine in Somalia 2022-06-11T16:39:18.164Z
About the Neglectedness of Longtermism and Future Work 2022-03-19T10:37:45.206Z
Doubts about Track Record Arguments for Utilitarianism 2022-02-12T09:49:05.960Z
How impactful is free and open source software development? 2021-10-09T10:15:44.105Z
Some background and thoughts on animal advocacy terminology 2021-07-20T14:53:19.776Z
Interview with Lucia Coulter: Lead Exposure, Effective Altruism, Progress in Malawi 2021-07-03T14:32:13.492Z
Are there any evaluations or impact assessments of circular economy or related initiatives? 2021-06-28T09:41:14.280Z
Animal Testing Is Exploitative and Largely Ineffective 2021-06-13T10:46:44.836Z
Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism 2021-05-08T11:44:30.113Z
Can a Vegan Diet Be Healthy? A Literature Review 2021-03-12T12:47:15.185Z
Two Inadequate Arguments against Moral Vegetarianism 2021-01-30T10:16:35.997Z

Comments

Comment by Erich_Grunewald on On Artificial General Intelligence: Asking the Right Questions · 2022-10-03T15:27:07.925Z · EA · GW

Thanks for responding! I think I now understand better what you're getting at, though I'm still a bit unsure about how much work each of these beliefs is doing:

  1. We shouldn't build AGI.
  2. We can't build AGI (because there's no coherent reward function we can give it, since many of the tasks it'd have to do have fuzzy success criteria).
  3. We won't build AGI (because the incentives mean narrow AI will be far more useful).

Could you clarify whether you agree with these and how important you think each point is? Or is it something else entirely that's key?

Comment by Erich_Grunewald on On Artificial General Intelligence: Asking the Right Questions · 2022-10-02T23:35:34.546Z · EA · GW

This post reads to me a little like someone pushing at an open door. You write that FTX should ask themselves whether humanity should create AGI, and the feeling I get from that is that you think FTX assume that AGI will be good. But the reason they've announced the contest is that they think the development of AGI carries a serious risk of global catastrophe.

Two of the propositions focus on when AGI will arrive. This makes it seem like AGI is a natural event, like an asteroid strike or earthquake. But AGI is something we will create, if we create it.

There are immense (economic and other) incentives to build AGI, so while humanity as a whole can simply choose not to build AGI, FTX (or any other single actor) is not in a position to make that choice on its own. I expect FTX is open to considering interventions aimed at making that happen (not least as there's been some discussion recently on whether to try to slow down AI progress). But whether those would work at all is not obvious.

How would we know we had successful AGI if/when we created it? It would be nothing like human intelligence, which is shaped not only by information processing, but by embodiment and the emotions central to human existence. ... So AGI cannot be like human intelligence.

As far as I'm aware, writers on AGI risk have been clear from the beginning that there's no reason to expect an AGI to take the same form as a human mind (unless it's the result of whole-brain emulation). E.g. Bostrom roughly defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" (Superintelligence ch. 2). There's no reason to think a highly capable but alien-to-us intelligence poses less of a threat than one that's similar to us.

AGI might be helpful for thinking about complex human problems, but it is doubtful that it would be better than task specific AI. Task specific AI has already proven successful at useful, difficult jobs (such as cancer screening for tissue samples and hypothesizing protein folding structures). Part of what has enabled such successful applications is the task specificity. That allows for clear success/fail training and ongoing evaluation measures.

There are advantages to generality too, like reducing the need for task-specific data. There's at least one example of a general intelligence being extremely successful, and that is our own, as evidenced by the last few billion years of evolutionary history. An example of fairly successful general-ish AI is GPT-3, which was just trained on next-word prediction but ended up being capable of everything from translation and spell-checking to creative writing and chess-playing.

Comment by Erich_Grunewald on Questions on databases of AI Risk estimates · 2022-10-02T18:33:32.601Z · EA · GW

I'm excited to see what you come up with!

Comment by Erich_Grunewald on Questions on databases of AI Risk estimates · 2022-10-02T10:05:41.380Z · EA · GW

Am I right that Carlsmith (2021) is the only end-to-end model of AI Risk with numerical predictions at each stage (by end-to-end I mean there are steps in between 'AI invented' and 'AI catastrophe' which are individually predicted)? Any other examples would be really helpful so I can scope out the community consensus on the microdynamics of AI risk.

This spreadsheet (found here) has estimates on the propositions in Carlsmith by (some of?) the reviewers of that paper.

Comment by Erich_Grunewald on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T19:03:21.864Z · EA · GW

But even then, I think my logic stands: notice that OP talks about EA orgs in particular. Meaning OP does want to see a higher concentration of posts with views correlated to those of EA org employees. But that means a lower concentration of posts from people whose views don't directly align with EA orgs - which would cause a cycle of blocking more diverse views.

I suspect OP doesn't want more posts from employees at EA orgs because they are such employees -- I understood OP as wanting higher quality posts, wherever they come from.

True, the post does suggest that employees at EA orgs make higher quality posts on average, and that they may have less time to post on the Forum than the average user, but those are empirical matters (and seem plausible to me, anyway).

Edit to add: I generally didn't get the feeling that OP wholeheartedly supports intervening on any of these possible explanations, or believes that doing so wouldn't risk other negative consequences (e.g. increased groupthink).

Comment by Erich_Grunewald on Announcing the Future Fund's AI Worldview Prize · 2022-09-24T12:41:34.087Z · EA · GW

From the wiki: "An existential risk is the risk of an existential catastrophe, i.e. one that threatens the destruction of humanity’s longterm potential." That can include getting permanently locked into a totalitarian dictatorship and things of that sort, even if they don't result in extinction.

Comment by Erich_Grunewald on Defective Altruism article in Current Affairs Magazine · 2022-09-22T15:47:39.342Z · EA · GW

This is nitpicky, but I wouldn't call that "an obscure academic screed":

  • It was written by Charity Navigator leadership, who presumably felt threatened or something by GiveWell. So I think it was more like a non-profit turf war thing than an academic debate.
  • I wasn't around then, but I have the impression that it was pretty (in)famous in EA circles at the time. Among other things it prompted a response by Will MacAskill. So it also feels wrong to call it obscure.
Comment by Erich_Grunewald on The religion problem in AI alignment · 2022-09-16T08:13:16.573Z · EA · GW

Possibly relevant: The Pope’s AI adviser on ensuring algorithms respect human dignity.

Comment by Erich_Grunewald on Cause Exploration Prizes: Announcing our prizes · 2022-09-15T23:01:31.131Z · EA · GW

Thus any strategy to address SCD should definitely include increasing access to IVF/PGT in low resource countries.

That's a pretty bold claim. Are you sure that would be more cost-effective than the newborn screening and treatment intervention proposed in that post? IVF seems pretty expensive compared to the costs of screening and treatment.

Comment by Erich_Grunewald on EA syllabi and teaching materials · 2022-09-15T22:50:05.050Z · EA · GW

Looks like the doc isn't publicly shared -- I get "You need access" when I try to view it.

Comment by Erich_Grunewald on Linch's Shortform · 2022-09-15T19:13:54.408Z · EA · GW

I think "Reality doesn’t grade on a curve" might originally be from Scott Alexander's Transhumanist Fables.

Comment by Erich_Grunewald on [deleted post] 2022-09-14T14:56:19.964Z
  • Does she think that someone like Franz Kafka (who was famously hard on himself) would've produced better artworks if he'd been more self-compassionate?
  • Is it possible to be too self-compassionate? If so, what does that failure mode look like?
  • If self-compassion is better both for mental health and productivity, why isn't everyone really self-compassionate already? Does it trade off against some other desirable thing?
  • What other variables do researchers condition on when trying to figure out whether self-compassion causes more/less {productivity, health, responsibility}? Do e.g. neuroticism/conscientiousness drive self-compassion (or the lack of it)?
Comment by Erich_Grunewald on [deleted post] 2022-09-13T12:13:49.310Z

I was under the impression that most trans people find it ok to mention a deadname in parentheses if the person has been notable under that name (which is true of Émile). That's the Wikipedia policy; here's a Reddit thread where that seems to be the consensus opinion. Is this wrong?

Comment by Erich_Grunewald on [deleted post] 2022-09-13T09:05:23.654Z

Fair enough!

Comment by Erich_Grunewald on [deleted post] 2022-09-13T09:01:59.924Z

Yeah, I guess it would depend on the particulars, for example whether it's more like they received an authoritative order not to mention Torres wrt the paper, or more like a colleague or peer suggested it. Not sure.

Comment by Erich_Grunewald on [deleted post] 2022-09-13T08:01:03.155Z

If they were indeed forced to remove a coauthor from their paper, it doesn't seem to me that they're being deceptive when they don't mention that coauthor.

Comment by Erich_Grunewald on Red Teaming CEA’s Community Building Work · 2022-09-04T17:32:49.410Z · EA · GW

It's discussed in the OP. You'll find further links there.

Comment by Erich_Grunewald on EA Culture and Causes: Less is More · 2022-09-04T17:30:57.684Z · EA · GW

I thought this was a great post, thanks for writing it. Some notes:

  • If a community rests itself on broad, generally-agreed-to-be-true principles, like a kind of lowest-common-denominator beneficentrism, some of these concerns seem to me to go away.
    • Example: People feel free to change their minds ideologically; the only sacred principles are something like "it's good to do good" and "when doing good, we should do so effectively", which people probably won't disagree with, and which, if someone did disagree with them, would probably make that person not an EA.
    • If a core value of EA is truth-seeking/scout mindset, then identifying as an EA may reduce groupthink. (This is similar to what Julia Galef recommends in The Scout Mindset.)
  • I feel like, if there wasn't an EA community, there would naturally spring up an independent effective global health & poverty community, an independent effective animal advocacy community, an independent AI safety community, etc., all of which would be more homogeneous and therefore possibly more at risk of groupthink. The fact that EA allows people with these subtly different inclinations (of course there's a lot of overlap) to exist in the same space should if anything attenuate groupthink.
    • Maybe there's evidence for this in European politics, where narrow parties like Socialists, Greens and (in Scandinavia though not in Germany) Christian Democrats may be more groupthinky than big-tent parties like generic Social Democratic ones. I'm not sure if this is true though.
  • Fwiw, I think EA should not grow indefinitely. I think at a certain point it makes sense to try to advocate for some core EA values and practices without necessarily linking them (or weighing them down) with EA.
  • I agree that it seems potentially unhealthy to have one's entire social and professional circle drawn from a single intellectual movement.

Many different (even contradictory!) actual goals can stem from trying to act altruistically effectively. For example a negative utilitarian and a traditional one disagree on how to count utility. I think that the current umbrella of EA cause areas is too large. EA may agree on mottos and methods, but the ideology is too broad to agree on what matters on an object-level.

This just doesn't seem to cause problems in practice? And why not? I think because (1) we should and often do have some uncertainty about our moral views and (2) even though we think A is an order of magnitude more important to work on than B, we can still think B is orders of magnitude more important than whatever most non-EAs do. In that case two EAs can disagree and still be happy that the other is doing what they're doing.

Comment by Erich_Grunewald on EA is about maximization, and maximization is perilous · 2022-09-04T14:23:18.889Z · EA · GW

I'm mostly a deontologist and don't think the paralysis argument works, but I also don't think the way most people live is a good counterargument to it. I don't think so because MacAskill is arguing against a coherent moral worldview, whereas hardly anyone lives according to a coherent moral worldview. Their not being paralysed is, I think, not because they have a much more refined version of deontology than what MacAskill argues against, but because they don't have a coherent version of it at all.

Comment by Erich_Grunewald on A podcast episode exploring critiques of effective altruism (with Michael Nielsen and Ajeya Cotra) · 2022-09-01T16:48:50.573Z · EA · GW

I liked both this episode and the one on social justice last winter and would love to hear more semi-adversarial ones of this sort.

Comment by Erich_Grunewald on Notes on how prizes may fail and how to reduce the risk of them failing · 2022-08-30T19:50:12.379Z · EA · GW

Another downside is that it eats up quite a lot of time. E.g. if we take the Cause Exploration Prize and assume:

  • there are 143 entries (the tag page shows 144 posts on the Forum, one of which is the post introducing the prize)
  • an average entry takes ~27h to research and write up (90% CI 15-50h)
  • an average entry takes ~1.4h to judge (90% CI 0.5-4h, but maybe I'm wildly underestimating this?)

then we get ~2 FTE years spent (90% CI 1.2-3.6 years). That's quite a lot of labour spent by engaged and talented EAs (and people adjacent to EA)!
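
For what it's worth, here's a minimal sketch of how those numbers combine. This is my own reconstruction (not necessarily how the stated CI was produced), and it assumes ~2,000 working hours per FTE year and lognormal distributions fit to the 90% CIs above:

```python
import numpy as np

# Rough reconstruction of the estimate above. Assumes ~2,000 working hours
# per FTE-year and lognormals matched to the stated 90% CIs (both assumptions).
rng = np.random.default_rng(0)
N_ENTRIES = 143
HOURS_PER_FTE_YEAR = 2_000  # assumption

def lognormal_from_90ci(lo, hi, size):
    """Sample a lognormal whose 5th/95th percentiles are lo and hi."""
    mu = (np.log(lo) + np.log(hi)) / 2
    sigma = (np.log(hi) - np.log(lo)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

research_hours = lognormal_from_90ci(15, 50, 10_000)  # average hours per entry
judging_hours = lognormal_from_90ci(0.5, 4, 10_000)   # average hours per entry

fte_years = N_ENTRIES * (research_hours + judging_hours) / HOURS_PER_FTE_YEAR
print(f"mean: {fte_years.mean():.1f} FTE-years, "
      f"90% CI: {np.percentile(fte_years, 5):.1f}-{np.percentile(fte_years, 95):.1f}")
# Point estimate alone: 143 * (27 + 1.4) / 2000 ≈ 2.0 FTE-years
```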

(Caveats: Those assumptions are only off-the-cuff guesses. It's not clear to me what the counterfactual is, but presumably some of these hours wouldn't have been spent doing productive-for-EA work. Also, I'm not sure whether, had you hired a person to think of new cause areas for 2 years, they would've done as well, and at any rate it would've taken them 2 years!)

Edit: To be clear, I'm not saying the Prize isn't worth it. I just wanted to point out a cost that may to some degree be hidden when the org that runs a contest isn't the one doing most of the labour.

Comment by Erich_Grunewald on Preventing an AI-related catastrophe - Problem profile · 2022-08-29T21:57:55.364Z · EA · GW

See e.g. Yudkowsky's AGI Ruin: A List of Lethalities. I think at this point Yudkowsky is far from alone in giving it >50% probability, though I expect that view is far less common in academia and among machine learning (capabilities) researchers.

Comment by Erich_Grunewald on A critical review of GiveWell's 2022 cost-effectiveness model · 2022-08-28T12:40:45.973Z · EA · GW

Thanks, this is a very substantial and interesting post.

I accept GiveWell have a robust defence of their approach. They say they prefer to use cost-effectiveness estimates only as one input in their thinking about charities (with ‘track record’ and ‘certainty of results’ being two other important but hard-to-quantify inputs), and therefore (I infer) don’t want to compare charities head-to-head because the 21.2x of AMF is not the same sort of ‘thing’ as the 9.8x of Malaria Consortium. For sure, Health Economists would agree that there may be factors beyond pure cost-effectiveness to consider when making a decision (e.g. equity considerations, commercial negotiation strategies that companies might employ and so on), but typically this consideration happens after the cost-effectiveness modelling, to avoid falling into the trap I mentioned above where you implicitly state that you are working with two different kinds of ‘thing’ even though they actually compete for the same resources [4].

[...] I really do want to stress how jarring it is to see a cost-effectiveness model which doesn’t actually deliver on the promise of guiding resource utilisation at the margin. An economic model is the most transparent and democratic method we have of determining which of a given set of charities will do the most good, and any attempt to use intuition to plug gaps rather than trying to formalise that intuition undoes a lot of the benefit of creating a model in the first place.

Could you clarify (to a layperson) what the disagreement is here?

My understanding: Say we need to choose between interventions A and B, where A and B have outputs of different types. In order to make a choice, we need to make some assumptions -- either explicit or implicit -- about how to compare those different types of outputs. Either we can make those assumptions explicitly and bake them into the model, or we can model the interventions with separate units, and then make the assumptions (either explicitly or implicitly).

I take it that your fertility analysis did not do this, but that GiveWell does do some of this (e.g. comparing lives saved to increased consumption), only they then take other things -- like track record and strength of evidence -- into account in addition to the model's output. Is the disagreement that you think GiveWell should also include these additional considerations in the cost-effectiveness analysis?
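
To make my question concrete, here's a toy sketch of the first option, i.e. baking the comparison assumptions directly into the model. The weights and figures are entirely made up for illustration; this is not GiveWell's actual model:

```python
# Toy sketch: making the "how do we compare different output types?" assumption
# explicit, so two interventions with different kinds of outputs become
# directly comparable. All weights and figures below are hypothetical.
VALUE_PER_LIFE_SAVED = 100          # hypothetical moral weight, in units of value
VALUE_PER_CONSUMPTION_DOUBLING = 1  # hypothetical moral weight, in units of value

def units_of_value_per_dollar(lives_saved: float,
                              consumption_doublings: float,
                              cost_usd: float) -> float:
    """Collapse both output types into one unit using explicit weights."""
    total_value = (lives_saved * VALUE_PER_LIFE_SAVED
                   + consumption_doublings * VALUE_PER_CONSUMPTION_DOUBLING)
    return total_value / cost_usd

# Intervention A mostly saves lives; intervention B mostly raises consumption.
a = units_of_value_per_dollar(lives_saved=10, consumption_doublings=50, cost_usd=50_000)
b = units_of_value_per_dollar(lives_saved=0, consumption_doublings=4_000, cost_usd=50_000)
print(f"A: {a:.3f}, B: {b:.3f} units of value per dollar")
# The alternative is to model A and B in their native units and only make the
# conversion assumption (explicitly or implicitly) when deciding between them.
```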

Comment by Erich_Grunewald on Open Thread: June — September 2022 · 2022-08-28T08:36:49.929Z · EA · GW

Phil, you've been making a lot of posts in very short order since you joined. The enthusiasm is great! But have you considered taking the downvotes as a sign that maybe you should raise the quality threshold for what you decide to post? I.e. take what you would've posted, and only post the most substantial and informative 25% of it.

As it is, it feels kind of like an indiscriminate information dump, and I for one am already tuning out most of what you write, which I think neither of us wants.

Comment by Erich_Grunewald on Can a Vegan Diet Be Healthy? A Literature Review · 2022-08-27T15:52:45.645Z · EA · GW

Thanks! Yeah I guess we need to check in again in another year or two.

Comment by Erich_Grunewald on Open Thread: June — September 2022 · 2022-08-27T14:49:28.396Z · EA · GW

What are you trying to accomplish here? No one said that it was bad to give explanations for downvotes, only that it's ok not to do it. No one said that downvoting (or getting downvoted) should be an end goal -- evaluating and signalling comment quality is the goal. Your comment reads to me like a sarcastic rant based on (wilful?) misunderstandings.

Comment by Erich_Grunewald on Can a Vegan Diet Be Healthy? A Literature Review · 2022-08-27T11:39:25.320Z · EA · GW

I've written a follow-up post covering a few new meta-studies on veganism/vegetarianism and mental health, including a couple that Michael St. Jules graciously pointed out in the comment section here. The conclusion is probably disappointing in its lack of conclusiveness:

Overall, I think there may be a link between veganism/vegetarianism and depression but there’s no good evidence on what causes the link. I’m vaguely leaning towards there being no link between veganism/vegetarianism and other mental health issues, and am very uncertain about associations between it and fatigue and cognitive function.

Comment by Erich_Grunewald on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-08-25T19:41:46.471Z · EA · GW

Maybe translations into Mandarin could be useful too, not only because there are >1B speakers, but also because influential Chinese EAs may end up being very impactful in reducing AI risk (e.g. wrt AI race dynamics).

Comment by Erich_Grunewald on Erich_Grunewald's Shortform · 2022-08-23T20:50:07.367Z · EA · GW

So I think "(not) allowing X in" was not particularly well worded; what I meant was something like "making choices that cause X (not) to join". That includes stuff like this:

I see the tradeoff more in who we're advertising towards and what type of activities we're focussing on as a community, e.g. things that better reflect what is most useful, like cultivating intellectual rigor and effective execution of useful projects.

And to be clear, I'm talking about EA as a community / shared project. I think it's perfectly possible and fine to have an EA mindset / do good by EA standards without being a member of the community.

That said, I do think there are some rare situations where you would not allow some people to be part of the community, e.g. I don't think Gleb Tsipursky should be a member today.

Comment by Erich_Grunewald on What We Owe The Future is out today · 2022-08-23T09:13:32.657Z · EA · GW

How would What We Owe the Future be different if it wasn't aimed at a general audience? Imagine for example that the target audience was purely EAs. What would you put in, take out? Would you be bolder in your conclusions?

Comment by Erich_Grunewald on Thoughts on Émile P. Torres' new article, 'Understanding "longtermism": Why this suddenly influential philosophy is so toxic'? · 2022-08-22T09:48:28.869Z · EA · GW

See Response to Phil Torres’ ‘The Case Against Longtermism’ and Response to Recent Criticisms of Longtermism, including comments.

Comment by Erich_Grunewald on Erich_Grunewald's Shortform · 2022-08-21T12:03:17.632Z · EA · GW

A while ago I wrote a post with some thoughts on "EA for dumb people" discussions. The summary:

I think:

  • Intelligence is real, to a large degree determined by genes and an important driver (though not the only one) of how much good one can do.
    • That means some people are by nature better positioned to do good. This is unfair, but it is what it is.
  • Somewhere there's a trade-off between getting more people into a community and keeping a high average level of ability in that community, in other words a trade-off to do with selectivity. The optimal solution is neither to allow no one in nor to allow everyone in, but somewhere in between.
    • Being welcoming and accommodating can allow you to get more impact with a more permissive threshold, but you still need to set the threshold somewhere.
    • I think effective altruism today is far away from hitting any diminishing returns on new recruits.
  • Ultimately what matters for the effective altruist community is that good is done, not who exactly does it.
Comment by Erich_Grunewald on How many EA billionaires five years from now? · 2022-08-20T18:22:15.095Z · EA · GW

Thanks!

3 -- I think I mention this in a footnote:

The observant reader may have noticed that the model allows the number of additional billionaires in 2027 to be negative. That makes sense in that we may lose some of the ones we have currently (they may no longer be billionaires or effective altruists), but I don't know if Patel is predicting the number of new billionaires, or the difference between how many there are then and how many there are now. E.g. if we get 10 new billionaires but lose one old one, my model would say we have 9 additional ones, but I suspect Patel's bet would resolve in the positive (because there are 10 new ones).

So congrats, you are officially an observant reader. ;) (Edit: Though I realise that I'm muddling things by using "new" when I actually mean the difference between then and now.)

4 -- Nice, looks like he's modelling future capital (not merely the number of billionaires), but that seems similar enough. I'm not sure if it's in a finished state, but I see Nuno's getting a chanceOfNewBillionnairePerYearOptimistic of ~18%, which seems significantly more pessimistic than my estimate; that's interesting given that some other people here seem to be more optimistic than me.

5 -- Oh, will do!

Comment by Erich_Grunewald on How many EA billionaires five years from now? · 2022-08-20T14:24:52.284Z · EA · GW

Thanks!

Also do you count people that sympathize with EA ideas as EAs? Fred Ehrsam and Brian Armstrong have both written positively about EA in the past. I have seen on Twitter a handful of 9-10 figure net worth crypto hedge fund managers talk about Less Wrong and a few talk about EA.

I interpret it more strictly than that. One of the markets I mention refers to people "who identify as effective altruists", and the other requires "either a) public self-identification as EA, b) signing the Giving What We Can pledge or c) taking the EA survey and being a 4 or 5 on the engagement axis".

I suspect this would exclude some/most of the people you mention?

Fwiw, here are the model outputs under some other assumptions about the current number of EA billionaires:

  • 7 current EA billionaires (as upper bound): 4.0 expected new billionaires, 18% chance of >= 10.
  • 7 current EA billionaires (ignoring Ivy League base rate): 8.8 expected new billionaires, 41% chance of >= 10.
  • 10 current EA billionaires (as upper bound): 4.4 expected new billionaires, 27% chance of >= 10.
  • 10 current EA billionaires (ignoring Ivy League base rate): 12.2 expected new billionaires, 55% chance of >= 10.
  • 15 current EA billionaires (as upper bound): 5.3 expected new billionaires, 32% chance of >= 10.
  • 15 current EA billionaires (ignoring Ivy League base rate): 17.3 expected new billionaires, 67% chance of >= 10.

If there is another crypto bull market and Bitcoin hits $200k, I remember seeing a BOTEC that half of all the new billionaires in the world will be due to crypto.

Yeah, true, crypto seems like an interesting wild card which could make the current base rate conservative.

Comment by Erich_Grunewald on How many EA billionaires five years from now? · 2022-08-20T14:16:52.929Z · EA · GW

Ah yes, for what it's worth, I do allude to this (as does Patel, who I'm paraphrasing): "Effective altruists are more risk-tolerant by default, since you don't get diminishing returns on larger donations the same way you do on increased personal consumption."

I feel like this should be accounted for in the EA base rate, but maybe the effect has gotten, or will get, more pronounced now that Sam Bankman-Fried is vocal about having this mindset.

Comment by Erich_Grunewald on How many EA billionaires five years from now? · 2022-08-20T13:00:21.021Z · EA · GW

Thanks! I guess I vaguely sort of might've guessed but didn't really think about it when I wrote that.

Comment by Erich_Grunewald on Open Thread: June — September 2022 · 2022-08-18T14:10:58.533Z · EA · GW

Welcome!

Comment by Erich_Grunewald on Erich_Grunewald's Shortform · 2022-08-11T23:36:41.549Z · EA · GW

I wrote a post about Kantian moral philosophy and (human) extinction risk. Summary:

The deontologist in me thinks human extinction would be very bad for three reasons:

  • We’d be failing in our duty to humanity itself (55% confidence).
  • We’d be failing in our duty to all those who have worked for a better future (70% confidence).
  • We’d be failing in our duty to those wild animals whose only hope for better lives rests on future human technology (35% confidence).
Comment by Erich_Grunewald on Most* small probabilities aren't pascalian · 2022-08-07T18:41:43.059Z · EA · GW

I agree, and though it doesn't matter from an expected value point of view, I suspect part of what people object to in those risks is not just the probabilities being low but also there being lots of uncertainty around them.

Or actually, it could change the expected value calculation too if the probabilities aren't normally distributed. E.g. one could look at an x-risk and judge most of the probability density to be around 0.001%, while feeling pretty confident that it's not more than 0.01% and not at all confident that it's not below 0.0001% or even 0.00001%. This makes it different from your examples, which probably have relatively narrow and normally distributed probabilities (because we have well-grounded base rates for airline accidents and voting, and -- I believe -- robust scientific models of asteroid risks).

Edit: I see that Richard Y Chappell made this point already.
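
To put made-up numbers on the example above (a quick sketch; a lognormal is just one convenient way of representing that kind of asymmetric uncertainty over a probability):

```python
import numpy as np

# Suppose my best guess for an x-risk probability is ~0.001% (1e-5), I'm
# fairly confident it's below 0.01% (1e-4), but it could easily be 10-100x
# lower. A lognormal with median 1e-5 and 95th percentile 1e-4 captures that.
rng = np.random.default_rng(0)
median, p95 = 1e-5, 1e-4
sigma = (np.log(p95) - np.log(median)) / 1.645
p = rng.lognormal(np.log(median), sigma, 1_000_000)

print(f"median probability: {np.median(p):.2e}")  # ~1e-5, the 'best guess'
print(f"mean probability:   {p.mean():.2e}")      # ~2.7e-5, what an EV calculation uses
# The expected probability is ~2-3x the best guess, so this kind of skewed
# uncertainty raises the expected value of reducing the risk rather than
# leaving it unchanged.
```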

Comment by Erich_Grunewald on [link post] The Case for Longtermism in The New York Times · 2022-08-05T20:01:06.253Z · EA · GW

https://archive.ph/1DezV

Comment by Erich_Grunewald on The first AGI will be a buggy mess · 2022-07-30T15:42:08.757Z · EA · GW

I take this post to argue that, just as an AGI's alignment property won't generalise well out-of-distribution, its ability to actually do things, i.e. achieve its goals, also won't generalise well out-of-distribution. Does that seem like a fair (if brief) summary?

As an aside, I feel like it's more fruitful to talk about specific classes of defects rather than all of them together. You use the word "bug" to mean everything from divide-by-zero crashes to wrong beliefs, which leads you to write things like "the inherent bugginess of AI is a very good thing for AI safety", whereas the entire field of AI safety seems to exist precisely because AIs will have bugs (i.e. deviations from desired/correct behaviour), so if anything an inherent lack of bugs in AI would be better for AI safety.

Comment by Erich_Grunewald on Announcing Non-trivial, an EA learning platform for teenagers · 2022-07-12T19:21:00.096Z · EA · GW

This looks very cool!

I'm curious about why you need to sign up to view the lessons?

Also, a quibble: some links (like the author's name next to the course) aren't actually HTML <a> elements, which makes it impossible to e.g. right-click and open them in a new tab, and is also bad for accessibility.

For what it's worth, I don't think the design is particularly childish (as some others have opined). I see a similar style all the time in the creative/tech/start-up-ish world, and there it's surely aimed at adults.

Comment by Erich_Grunewald on Person-affecting intuitions can often be money pumped · 2022-07-07T15:23:51.031Z · EA · GW

I don't think that negates the validity of the critique.

Agreed -- I didn't mean to imply it was.

Okay, but I still don't know what the view says about x-risk reduction (the example in my previous comment)?

By "the view", do you mean the consequentialist person-affecting view you argued against, or one of the non-consequentialist person-affecting views I alluded to?

If the former, I have no idea.

If the latter, I guess it depends on the precise view. On the deontological view I find pretty plausible, we have, roughly speaking, a duty to humanity, and that'd mean actions that reduce x-risk are good (and vice versa). (I think there are also other deontological reasons to reduce x-risk, but that's the main one.) I guess I don't see any way that changes depending on what the default is? I'll stop here since I'm not sure this is even what you were asking about ...

Comment by Erich_Grunewald on Person-affecting intuitions can often be money pumped · 2022-07-07T14:59:42.747Z · EA · GW

My objection to it is that you can't use it for decision-making because it depends on what the "default" is. For example, if you view x-risk reduction as preventing a move from "lots of happy people to no people" this view is super excited about x-risk reduction, but if you view x-risk reduction as a move from "no people to lots of happy people" this view doesn't care.

That still seems somehow like a consequentialist critique though. Maybe that's what it is and was intended to be. Or maybe I just don't follow?

From a non-consequentialist point of view, whether a "no people to lots of happy people" move (like any other move) is good or not depends on other considerations, like the nature of the action, our duties or virtue. I guess what I want to say is that "going from state A to state B"-type thinking is evaluating world states in an outcome-oriented way, and that just seems like the wrong level of analysis for those other philosophies.

From a consequentalist point of view, I agree.

Comment by Erich_Grunewald on Announcing a contest: EA Criticism and Red Teaming · 2022-06-05T18:40:55.151Z · EA · GW

Pablo is quoting a 10-year-old comment; the 80k article you link was published in 2020.

Comment by Erich_Grunewald on Yglesias on EA and politics · 2022-05-23T23:48:29.600Z · EA · GW

For what it's worth, something like one fifth of EAs don't identify as consequentialist.

Comment by Erich_Grunewald on Tentative Reasons You Might Be Underrating Having Kids · 2022-05-09T20:32:09.190Z · EA · GW

This is not to say that these people were good parents, that they didn't have extensive help, or that they didn't heavily rely on their spouses to do deeply unequal child rearing. But it should be surprising if we were one of the only groups in history working so productively that we should eschew child rearing entirely.

It doesn't seem surprising at all to me -- for example, I have a hard time thinking of any historical community that has not separated child-rearing duties by gender. I mean, I'm sure there's one out there, but it's probably vanishingly rare. The present seems very unusual in that regard.

https://ourworldindata.org/grapher/regional-averages-of-the-composite-gender-equality-index

Comment by Erich_Grunewald on Future-proof ethics · 2022-04-02T14:28:27.851Z · EA · GW

As you noticed, I limited the scope of the original comment to axiology (partly because moral theory is messier and more confusing to me), hence the handwaviness. Generally speaking, I trust my intuitions about axiology more than my intuitions about moral theory, because I feel like my intuition is more likely to "overfit" on more complicated and specific moral dilemmas than on more basic questions of value, or something in that vein.

Anyway, I'll just preface the rest of this comment with this: I'm not very confident about all this and at any rate not sure whether deontology is the most plausible view. (I know that there are consequentialists who take person-affecting views too, but I haven't really read much about it. It seems weird to me because the view of value as tethered seems to resist aggregation, and it seems like you need to aggregate to evaluate and compare different consequences?)

On Challenge 1A (and as a more general point) - if we take action against climate change, that presumably means making some sort of sacrifice today for the sake of future generations. Does your position imply that this is "simply better for some and worse for others, and not better or worse on the whole?" Does that imply that it is not particularly good or bad to take action on climate change, such that we may as well do what's best for our own generation?

Since in deontology we can't compare two consequences and say which one is better, the answer depends on the action used to get there. I guess what matters is whether the action that brings about world X involves us doing or neglecting (or neither) the duties we have towards people in world X (and people alive now). Whether world X is good/bad for the population of world X (or for people alive today) only matters to the extent that it tells us something about our duties to those people.

Example: Say we can do something about climate change either (1) by becoming benevolent dictators and implementing a carbon tax that way, or (2) by inventing a new travel simulation device, which reduces carbon emissions from flights but is also really addictive. (Assume the consequences of these two scenarios have equivalent expected utility, though I know the example is unfair since "dictatorship" sounds really bad -- I just couldn't think of a better one off the top of my head.) Here, I think the Kantian should reject (1) and permit or even recommend (2), roughly speaking because (2) respects people's autonomy (though the "addictive" part may complicate this a bit) in a way that (1) does not.

Also on Challenge 1A - under your model, who specifically are the people it is "better for" to take action on climate change, if we presume that the set of people that exists conditional on taking action is completely distinct from the set of people that exists conditional on not taking action (due to chaotic effects as discussed in the dialogue)?

I don't mean to say that a certain action is better or worse for the people that will exist if we take it. I mean more that what is good or bad for those people matters when deciding what duties we have to them, and this matters when deciding whether the action we take wrongs them. But of course the action can't be said to be "better" for them as they wouldn't have existed otherwise.

On Challenge 1B, are you saying there is no answer to how to ethically choose between those two worlds, if one is simply presented with a choice?

I am imagining this scenario as a choice between two actions, one involving waving a magic wand that brings world X into existence, and the other waving it to bring world Y into existence.

I guess deontology has less to say about this thought experiment than consequentialism does, given that the latter is concerned with the values of states of affair and the former more with the values of actions. What this thought experiment does is almost eliminate the action, reducing it to a choice of value. (Of course choosing is still an action, but it seems qualitatively different to me in a way that I can't really explain.) Most actions we're faced with in practice probably aren't like that, so it seems like ambivalence in the face of pure value choices isn't too problematic?

I realise that I'm kind of dodging the question here, but in my defense you are, in a way, asking me to make a decision about consequences, and not actions. :)

On Challenge 2, does your position imply that it is wrong to bring someone into existence, because there is a risk that they will suffer greatly (which will mean they've been wronged), and no way to "offset" this potential wrong?

One of the weaknesses in deontology is its awkwardness with uncertainty. I think one ok approach is to put values on outcomes (by "outcome" I mean e.g. "violating duty X" or "carrying out duty Y", not a state of affairs as in consequentialism) and multiply by probability. So I could put a value on "wronging someone by bringing them into a life of terrible suffering" and on "carrying out my duty to bring a flourishing person into the world" (if we have such a duty) and calculate expected value that way. Then whether or not the action is wrong would depend on the level of risk. But that is very tentative ...
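
To illustrate that last step with a toy example (all values made up, and this is only one way of formalising it):

```python
# Toy sketch of the approach described above: assign (made-up) values to the
# morally relevant outcomes of an action, weight them by probability, and see
# how the verdict depends on the level of risk.
V_WRONGING = -100.0  # wronging someone by bringing them into a life of terrible suffering
V_DUTY = 10.0        # carrying out a (putative) duty to bring a flourishing person into the world

def expected_moral_value(p_terrible_life: float) -> float:
    return p_terrible_life * V_WRONGING + (1 - p_terrible_life) * V_DUTY

for p in (0.01, 0.05, 0.10, 0.20):
    verdict = "permissible" if expected_moral_value(p) > 0 else "wrong"
    print(f"p(terrible life) = {p:.0%}: EV = {expected_moral_value(p):+.1f} -> {verdict}")
# With these particular numbers the action stops being permissible at around p ≈ 9%.
```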

Comment by Erich_Grunewald on Future-proof ethics · 2022-03-30T00:12:39.266Z · EA · GW

Really like this post!

I think one important crux here is differing theories of value.

My preferred theory is the (in my view, commonsensical) view that for something to be good or bad, it has to be good or bad for someone. (This is essentially Christine Korsgaard's argument; she calls it "tethered value".) That is, value is conditional on some valuer. So where a utilitarian might say that happiness/well-being/whatever is the good and that we therefore ought to maximise it, I say that the good is always dependent on some creature who values things. If all the creatures in the world valued totally different things than what they do in our dimension, then that would be the good instead.

(I should mention that, though I'm not very confident about moral philosophy, to me the most plausible view is a version of Kantianism. Maybe I give 70% weight to that, 20% to some form of utilitarianism and the rest to Schopenhauerian ethics/norms/intuitions. I can recommend being a Kantian effective altruist: it keeps you on your toes. Anyway, I'm closer to non-utilitarian Holden in the post, but with some differences.)

This view has two important implications:

  • It no longer makes sense to aggregate value. As Korsgaard puts it, "If Jack would get more pleasure from owning Jill's convertible than Jill does, the utilitarian thinks you should take the car away from Jill and give it to Jack. I don't think that makes things better for everyone. I think it makes it better for Jack and worse for Jill, and that's all. It doesn't make it better on the whole."
  • It no longer makes sense to talk about the value of potential people. Their non-existence is neither good nor bad because there is no one for it to be good or bad for. (Exception: They can still be valued by people who are alive. But let's ignore that.)

I haven't spent tons of time thinking about how this shakes out in longtermism, so quite a lot of uncertainty here. But here's roughly how I think this view would apply to your thought experiments:

  • Challenge 1A -- climate change. If we decide to ignore climate change, then we wrong future people (because climate change is bad for them). If we don't ignore it, then we don't wrong those people (because they won't exist); we also don't wrong the future people who will exist, because we did our best to mitigate the problem. In a sense, we have a duty to future generations, whoever they may be.
  • Challenge 1B -- world A/B/C. It doesn't make sense to compare different world in this way, because that would necessarily involve aggregation. Instead, we have to evaluate every action based on whether it wrongs (or not, or benefits) people in the world it produces.
  • Challenge 2 -- asymmetry. This objection I think doesn't apply now. The relevant question is still: does our action wrong the person that does come into existence? If we have good reason to believe that a new life will be full of suffering, and we choose to bring it into existence, plausibly we do wrong that person. If we have good reason to believe that the life will be great, and we choose to bring it into existence, obviously we don't wrong the person. (If we do not bring it into existence, we don't wrong anyone, because there's no one to wrong.)

Additional thoughts:

  • I want to mention a harder problem than the "should we have as many children as possible?" one you mention. It is that it seems ok to abort a fetus that would have a happy life, but it seems really wrong not to abort a fetus we know would have a terrible life full of pain and suffering. (This is apparently called the asymmetry problem in philosophy.) These intuitions make perfect sense if we take the view that value is tethered. But they don't really make sense in total utilitarianism.
  • Extinction would still be very bad, but it would be bad for the people who are alive when it happens, and for all the people in history whose work to improve things in the far future is being thwarted.

(I recognise that my view gets weirder when we bring probability into the picture (as we have to). That's something I want to think more about. I also totally recognise that my view is pretty complicated, and simplicity is one of the things I admire in utilitarianism.)

I think one important difference between me and non-utilitarian Holden is that I am not a consequentialist, but I kind of suspect that he is? Otherwise I would say that he is ceding too much ground to his evil twin. ;)

Comment by Erich_Grunewald on Making People Pay for Something Doesn’t Cause Them to Value it More · 2022-03-28T21:52:32.924Z · EA · GW

Claim (5) is more interesting. People certainly seem to value free public education and healthcare highly (“The NHS is the closest thing the English have to a religion”). Many families that send their children to public school could afford to pay tuition, if they had to.

Maybe you are talking about two different things:

  • valuing the product alone
  • valuing the product and its price as a package deal

People probably really like free health care because it's both health care and free. But that doesn't necessarily mean they value the health care they get for free as much as they would value health care they had paid for instead. It just means they value not having to spend any money on it.