Posts

EA is Insufficiently Value Neutral in Practice 2022-08-04T20:00:19.818Z
What actions most effective if you care about reproductive rights in America? 2022-06-26T15:35:03.778Z
Good Heart Donation Lottery 2022-04-01T18:17:46.340Z
[Event] Bodhi Day All Night Sitting 2021-12-07 to 2021-12-8 2021-12-06T02:51:51.661Z
Where does most of the suffering from eating meat come from? 2021-11-09T03:09:27.588Z
G Gordon Worley III's Shortform 2020-08-19T02:09:07.652Z
Expected value under normative uncertainty 2020-06-08T15:45:24.374Z
Vive la Différence? Structural Diversity as a Challenge for Metanormative Theories 2020-05-26T00:45:01.131Z
Comparing the Effect of Rational and Emotional Appeals on Donation Behavior 2020-05-26T00:24:25.239Z
Rejecting Supererogationism 2020-04-20T16:19:16.032Z
Normative Uncertainty and the Dependence Problem 2020-03-23T17:29:03.369Z
Chloramphenicol as intervention in heart attacks 2020-02-17T18:47:44.328Z
Illegible impact is still impact 2020-02-13T21:45:00.234Z
If Veganism Is Not a Choice: The Moral Psychology of Possibilities in Animal Ethics 2020-01-20T18:07:53.003Z
EA and the Paramitas 2020-01-15T03:17:18.158Z
Normative Uncertainty and Probabilistic Moral Knowledge 2019-11-11T20:26:07.702Z
TAISU 2019 Field Report 2019-10-15T01:10:40.645Z
Announcing the Buddhists in EA Group 2019-07-02T20:41:23.737Z
Best thing at EAG SF 2019? 2019-06-24T19:19:49.700Z
What movements does EA have the strongest synergies with? 2018-12-20T23:36:55.641Z
HLAI 2018 Field Report 2018-08-29T00:13:22.489Z
Avoiding AI Races Through Self-Regulation 2018-03-12T20:52:06.475Z
Prioritization Consequences of "Formally Stating the AI Alignment Problem" 2018-02-19T21:31:36.942Z

Comments

Comment by G Gordon Worley III (gworley3) on Let’s not glorify people for how they look. · 2022-08-12T19:04:23.262Z · EA · GW

Wonderful! A great way to be proven wrong!

Comment by G Gordon Worley III (gworley3) on Let’s not glorify people for how they look. · 2022-08-12T19:02:09.793Z · EA · GW

In isolation I agree. But I found nothing new or interesting in this post. Since votes control how visible a post is, I view votes as purely a signal about how much I want to see and how much I want others to see content like this. Since I didn't find it new or interesting, it was a poor use of my time to read it, hence the down vote.

When I down vote I like to tell people why so they have useful feedback on what makes people down vote.

I know many people vote to say "yay" or "boo". I disagree with this voting style, and my votes generally should not be interpreted that way. I down vote to say "I don't think you should bother reading this" and I up vote to say "I think you should read this".

Comment by G Gordon Worley III (gworley3) on Let’s not glorify people for how they look. · 2022-08-11T20:59:26.141Z · EA · GW

I agree, but is this a post that will make that change? I don't see any really compelling arguments or stories here that are likely to change minds.

Comment by G Gordon Worley III (gworley3) on Let’s not glorify people for how they look. · 2022-08-11T20:58:27.805Z · EA · GW

There's no reason a person can't be earnest and still be hitting the applause light button. Intent matters, but so do outcomes.

I don't recall recent EA discussion of this topic, but it is extremely well-worn in general. This is sort of a professionalism 101 question that most people debate in high school as a toy exercise because the arguments are already well explored.

Comment by G Gordon Worley III (gworley3) on Let’s not glorify people for how they look. · 2022-08-11T15:40:16.978Z · EA · GW

Downvoted because I don't feel like there's any substance here and it's not worth spending the time to read. I think most people already agree with this sentiment and know the arguments presented in one way or another, so it feels like this post is just flashing the applause lights.

I'd probably have at least not downvoted and maybe would have upvoted this post if it contained some new content, like a proposal for how to get people not to glorify looks.

Comment by G Gordon Worley III (gworley3) on EA is Insufficiently Value Neutral in Practice · 2022-08-05T12:38:51.500Z · EA · GW

Hmm, I think these arguments comparing to other causes are missing two key things:

  • they aren't sensitive to scope
  • they aren't considering opportunity cost

Here's an example of how that plays out. From my perspective, the value of the very large number of potential future lives dwarfs basically everything else; the value of worrying about most other things is close to 0 when I run the numbers. So in the face of those numbers, working on anything other than mitigating x-risk is roughly equally bad from my perspective, because it's all missed opportunity, in expectation, to save more future lives.

But I don't actually go around deriding people who donate to breast cancer research as if they had donated to Nazis, even though, compared in scope against mitigating x-risks and the missed opportunity to mitigate more x-risk, they did approximately similarly "bad" things from my perspective. Why?

I take their values seriously. I don't agree, but they have a right to value what they want, even if I disagree. I don't personally have to help them, but I also won't oppose them unless they come into object level conflict with my own values.

Actually, that last sentence makes me realize a point I failed to make in the post! It's not that I think EAs must support things they disagree with at the object level, but that metaethical uncertainty implies we should have an uncomfortable willingness to "help our 'enemies'" at the meta level even as we might oppose them at the object level.

Comment by G Gordon Worley III (gworley3) on EA is Insufficiently Value Neutral in Practice · 2022-08-05T12:20:32.067Z · EA · GW

To your footnote, I'm not sure how many people are directly uncomfortable, but I do find arguments that roughly boil down to "but what about Nazis?" lazy, as they try to bypass the discussion by pointing to a thing that will make most readers go "Nazis bad, I agree with whatever says 'Nazis bad' most strongly!". This doesn't mean thinking Nazis are bad is an unreasonable position, only that it looms so large it swamps many people's ability to think clearly.

Rationalists tend to taboo comparing things to Nazis or using Nazis as an example for this reason, but not all EAs are rationalists. Nazism is a specific point in idea space that most everyone will agree is bad, but I'm also pretty sure we can cook up worse views that even more people would disagree with (cf. the baby eaters of Three Worlds Collide).

Comment by G Gordon Worley III (gworley3) on EA is Insufficiently Value Neutral in Practice · 2022-08-04T21:58:50.168Z · EA · GW

I'd bite the bullet and say "yes". I disagree with Nazism, but to be intellectually consistent I have to accept that even beliefs about what is good that I find personally unpalatable deserve consideration. This is very similar to my stance on free speech: people should be allowed to say things that I disagree with, and I'm generally in favor of making it easier for people to say things, including things I disagree with.

To your point about not caring about the difference between good and evil, this sort of misses the point I'd like to make. How do you know what is good and evil? Well, you made some value judgment, and that judgment is yours. Even if you're a moral realist, the fact remains that you're discovering moral facts and can be mistaken about those facts. Since all we have access to is the claims people make about what they believe is best, we're limited in how prescriptive we can be without risking, e.g., punishing ourselves if moral fashion changes.

Comment by G Gordon Worley III (gworley3) on What actions most effective if you care about reproductive rights in America? · 2022-06-26T21:01:06.669Z · EA · GW

I've edited my post to make it clear I think this is an off topic discussion within the context of this question. I think it's fine for this comment to stay because it was there before I made this clarification, but I have asked the moderators to convert this from an answer to a proper comment.

Comment by G Gordon Worley III (gworley3) on Buddhism and Utilitarianism; EA vs EB · 2022-06-24T20:39:20.616Z · EA · GW

I don't think it actually has (1).

Engaged Buddhism is, as I see it, best understood as a movement among Western liberals who are also Buddhists, and as such it is primarily infused with Western liberal values. These are sometimes incidentally the best way to do good, but unlike EA they don't explicitly target doing the most good; they instead uphold an ideology that values things like racial equality, human dignity, and freedom of religion (including freedom to reject religion).

As for (2), I'm not sure how much there is to learn. There are likely some things, but I also worry that paying too much attention to Engaged Buddhism might be a distraction because it suffers from common failure modes that EA seeks to avoid. For example, people I know who are part of Engaged Buddhism would rather volunteer directly, even if it's ineffective, than earn to give, because they want to be directly engaged. That's fine, but from what I've seen the whole movement is oriented more around satisfying a desire to help than around actually doing the most good.

Comment by G Gordon Worley III (gworley3) on Buddhism and Utilitarianism; EA vs EB · 2022-06-24T20:29:09.175Z · EA · GW

I think there's some case for specialization. That is, some people should dedicate their lives to meditation because it is necessary to carry forward the dharma. Most people probably have other comparative advantages. This is not a typical way of thinking about practice, but I think there's a case to be made that we could look at becoming a monk, for example, as a case of exercising comparative advantage as part of an ecosystem of practitioners who engage in various ways based on their comparative abilities (mostly determined by what they could otherwise be doing in the world).

I use this sort of reasoning myself. Why not become a monk? Because it seems like I can have a larger positive impact on the world as a lay practitioner. Why would I become a monk? If the calculus changed and it was my best course of action to positively impact the world.

Comment by G Gordon Worley III (gworley3) on Doing good easier: how to have passive impact · 2022-05-02T21:15:30.498Z · EA · GW

A couple comments.

First, I think there's something akin to creating a pyramid scheme for EA by leaning too heavily on this idea, e.g. "earn to give, or better yet get 3 friends to earn to give and you don't need to donate yourself because you had so much indirect impact!". I think david_reinstein's comment is in the same vein and good.

Second, this is a general complaint about the active/passive distinction that is not specific to your proposal but since your proposal relies on it I have to complain about it. :-)

I don't think the active/passive distinction is real (or at least not real enough to be useful). I think it just looks that way to people who only earn money by directly trading their labor for it. So-called passive income still requires work (otherwise money would just earn you more money with zero effort), just less of it. And that's the key. Thus I think it's better to talk about leverage rather than active/passive.

To say a bit more, trading labor for money/impact by default has 1:1 leverage, i.e. you get a linear return on your labor. For example, literally handing out malaria nets, literally serving food to the destitute, etc. Then you can do work that gets a bit of leverage but is still linear: maybe you can leverage your knowledge, network, etc. to have 1:n leverage. This might be working as a researcher, doing work for an EA meta-org, etc. Then there are opportunities to have non-linear leverage where each unit of work gets quadratic or exponential returns. In the realm of money and "passive" income this is stuff like investing in or starting a company (I know, not what people usually think of as "passive" income). In EA this might be defining a new field, starting a new EA org, etc.
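
As a rough sketch, the three regimes could be written as toy impact functions; the particular functional forms and numbers here are illustrative assumptions only, not claims about any real intervention:

```python
# Toy illustration of the three leverage regimes described above.
# Functions and constants are assumptions chosen for illustration.

def impact_1_to_1(effort):
    return effort          # linear, 1:1: e.g. directly handing out nets

def impact_1_to_n(effort, n=10):
    return n * effort      # still linear, but each unit of labor is multiplied

def impact_superlinear(effort):
    return effort ** 2     # non-linear returns: e.g. defining a new field or org

for effort in (1, 10, 100):
    print(effort, impact_1_to_1(effort), impact_1_to_n(effort), impact_superlinear(effort))
```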

Note though that we rely on people having impact in all these different ways for the economy/ecosystem to function. Yes, 1:1 leverage work would best be automated, but sometimes it can't be, and then it's a bottleneck and we need someone to do it. If you squeeze out too much of this type of work you get something like a high-income/impact trap: no one can be bothered to do important work because it isn't high leverage enough!

So, I think people should try to have as much leverage as they can, but also we need to be careful about how we promote leverage, especially in EA where there are fewer feedback systems in the economy to help the EA ecosystem self-regulate, so that we don't end up without anyone to do the essential, low-leverage work.

Comment by G Gordon Worley III (gworley3) on Free-spending EA might be a big problem for optics and epistemics · 2022-04-14T20:47:08.669Z · EA · GW

Maybe I can help Chris explain his point here, because I came to the comments to say something similar.

The way I see it, neartermists and longtermists are doing different calculations and so value money and optics differently.

Neartermists are right to be worried about spending money on things that aren't clearly impacting measures of global health, animal welfare, etc. because they could in theory take that money and funnel it directly into work on that stuff, even if it had low marginal returns. They should probably feel bad if they wasted money on a big party because that big party could have saved some kids from dying.

Longtermists are right not to be too worried about spending money. There's an astronomical amount of value at stake, so even millions or billions of dollars wasted doesn't matter if it ends up saving humanity from extinction. There might be near-term reasons related to the funding pipeline why they should care (hence the optics concerns), but in the long run it doesn't matter. Thus, longtermists will want to be freer with money in the hopes of, for example, hitting on something that solves AI alignment.

That both these things try to exist under EA causes tension, since the different ways of valuing outcomes result in different recommended behaviors.

This is probably the best case for splitting EA in two: PR problems for one half stop the other half from executing.

Comment by G Gordon Worley III (gworley3) on Go Republican, Young EA! · 2022-04-14T03:51:33.887Z · EA · GW

Two thoughts:

  1. We should be careful about claiming the GOP is the "worse party". Worse for whom? Maybe they are doing things you don't like, but half the country thinks the Democrats are the worse party. We should be wise to the state of normative uncertainty we are in. Neither party is really worse except by some measure, and because of how they are structured against each other one party being worse means the other is better by that measure. If you wanted to make a case that one party or the other is better for EA and then frame the claim that way I think it'd be fine.
  2. Yes, causing a party to lose its base is a great way to force the party to change, though note that this isn't an isolated system, changing the GOP will also change the Democratic Party and that might not actually be for the better. Some might argue we were better off before Southern white voters were "betrayed" by the Democratic Party on civil rights legislation and abortion, since my understanding is that that caused the shift to the current party alignment structure and ended a long era of bipartisanship. Looking back, many have said they would have moved slower to avoid the long term negative consequences caused by moving fast and then not really getting the desired outcome due to reactionary pushback. This suggests we might be better off trying for slow change given uncertain effects of what will happen in a dynamic system.

Comment by G Gordon Worley III (gworley3) on Go Republican, Young EA! · 2022-04-13T17:51:09.311Z · EA · GW

to the fall of US democracy and a party that has much worse views on almost every subject under most moral frameworks.

This seems like a pretty partisan take and fails to adequately consider metaethical uncertainty. There's nothing about this statement that I couldn't imagine a sincere Republican with good intentions saying about Democrats and being basically right (and wrong!) for the same reasons (right assuming their normative framework, wrong when we suppose normative uncertainty).

Comment by G Gordon Worley III (gworley3) on Go Republican, Young EA! · 2022-04-13T17:46:22.619Z · EA · GW

While I don't want to suggest that you or any other person who feels this way about the GOP has an obligation to work for them, part of the reason they are able to be hostile to various groups is that those groups are not part of how they get elected. If tomorrow the GOP was dependent on LGBTQ votes to win elections, they'd transform into a different party.

So while I'm not expert enough here to see how to change the current situation, I think there is something interesting about changing the incentive gradients for both parties to make them both more inclusive (both construct an outgroup: for the GOP, minorities and foreigners; for the Democrats, rural and working-class white people), and I expect that to have positive outcomes.

Comment by G Gordon Worley III (gworley3) on How to Choose the Optimal Meditation Practice · 2022-03-18T17:23:24.929Z · EA · GW

The more I practice, the more I've come to believe that the only thing that really matters is that you do it. Not that you do it well by whatever standard one might judge, but just that you do it. 30 minutes of quiet time is a foundation on which more can be explored and discovered. You don't have to sit a special way, do a special thing with your mind, or do anything else in particular for it to be worth the effort, although all those things can help and are worth doing if you're called to them!

You should totally learn a bunch of techniques or practice a certain way if you feel called to it, but also I think there's a lot to be said for simply spending 30 minutes with the intention to be present with what is, even if that means 30 minutes spent with your mind racing or fidgeting. The time itself will work on you to allow you to find your own way.

Comment by G Gordon Worley III (gworley3) on .01% Fund - Ideation and Proposal · 2022-03-01T20:49:47.403Z · EA · GW

What does this funding source do that existing LT sources don’t?

Natural followup: why a new fund rather than convincing an existing fund to use and emphasize the >0.01% x-risk reduction criterion?

Comment by G Gordon Worley III (gworley3) on Nuclear attack risk? Implications for personal decision-making · 2022-02-28T22:57:11.487Z · EA · GW

Even if he wants to do that, his power is not absolute. I'd expect/hope for his generals to step in if he tries something like that, perhaps using it as a reason for a coup.

Comment by G Gordon Worley III (gworley3) on Nuclear attack risk? Implications for personal decision-making · 2022-02-27T16:19:47.505Z · EA · GW

I'm not super worried. Maybe this is because I am old enough that I grew up with a perception that nuclear war could happen at any time and unexpectedly kill us all. The current threat level feels like a return to the Cold War: something could happen, but MAD still works and Putin, like everyone else, doesn't really have anything to gain from all out nuclear war, but does have something to gain from playing chicken. So we should expect a lot of posturing but probably no real action, except by accident.

I think the largest risk from nuclear weapons comes from tactical nukes being used in the conflict zone. I would expect Putin to use them if he felt desperate enough, especially since he would be using them on Ukrainian soil. But presumably no nukes would be deployed on NATO countries or Russia itself since that would trigger all-out nuclear retaliation. So most of the nuclear risk probably falls on people literally within Ukraine.

Comment by G Gordon Worley III (gworley3) on What psychological traits predict interest in effective altruism? · 2022-02-26T23:46:45.166Z · EA · GW

Yes, I suppose I left out non-English. I should have more properly made my claim that growth has slowed in English-speaking countries where the ideas have already had time to saturate and reach more of the affected people.

I forget where I got this from. I'm sure I can dig something up, but I seem to recall other posts on this forum showing that the growth of EA in places where it was already established had slowed.

Comment by G Gordon Worley III (gworley3) on What psychological traits predict interest in effective altruism? · 2022-02-26T23:44:36.907Z · EA · GW

It's unclear to me we've really investigated deeply enough to say that. We just know these factors matter, but it still seems quite possible that lots of other factors matter or that those other factors cause these two.

Comment by G Gordon Worley III (gworley3) on What psychological traits predict interest in effective altruism? · 2022-02-26T02:35:42.784Z · EA · GW

I don't mean to be rude, but this feels a bit like a non-result, since, as your conclusion puts it, effective altruists are basically people who like to act altruistically and like to be effective. It also seems unsurprising that there's only a small confluence of the two, given that EA growth has slowed after quickly reaching most of the people who were going to be interested in it. It's nice to have some studies to back up the anecdotes powering the Bayesian evidence we already had about these claims, but am I correct that this is basically what you found?

Comment by G Gordon Worley III (gworley3) on We need more nuance regarding funding gaps · 2022-02-13T04:46:47.954Z · EA · GW

More info always seems better, but maybe it's not useful here?

My thinking is that perhaps all the gaps worth filling are already well known and being addressed roughly as soon as they become overdetermined. Other gaps maybe aren't worth addressing because the expected value of doing so is low. More info might help identify the marginal gap, but if there's something like a power law distribution of gaps in terms of expected value of filling them then we've likely already identified all the best ones to fill and the rest are the long tail where differences don't matter much and people should fill based on other criteria.

Comment by G Gordon Worley III (gworley3) on The Culture of Fear in Effective Altruism Is Much Worse than Commonly Recognized · 2022-02-07T02:56:20.359Z · EA · GW

I often think of it as EA being too conservative rather than having a culture of fear, and maybe those are different things, but here's some of what I see happening.

People reason that EA orgs and people representing EA need to be respectable because this will later enable doing more good. And I'd be totally fine with that if every instance of it was clearly instrumental to doing the most good.

However, I think this goal of being respectable doesn't take long to become fixed in place, and now people are optimizing for doing the most good AND being respectable, which means they will trade off doing the most good and respectability along the efficiency frontier. Ditto for other things people might optimize for: being right, growing the movement, gaining power, etc.

To the extent that EA is about doing the most good, we should be very clear when we start trying to optimize for other things. Yes, if we optimize for doing the most good in the short term we'll likely harm ourselves in the long term, but so too does the movement harm itself by trading away doing the most good for other things that someone thinks might matter, rather than having a solid case that it's the right trade to make. You could argue that someone like Will MacAskill put a lot of thought into being respectable and had good reason to do it, rather than just immediately doing the short-term thing that would have done the most good for EA but would have been weird and bad for the movement long term. But today I don't think most people are doing this sort of calculation; they're instead just saying "ah, I think in EA we should be respectable or whatever" and then optimizing for that AND doing the most good, thus probably failing to get the most good. 😞

Comment by G Gordon Worley III (gworley3) on The Life-Goals Framework: How I Reason About Morality as an Anti-Realist · 2022-02-03T21:24:34.013Z · EA · GW

Life goals and life plans seem to me to sit somewhere between Heidegger's Sorge (both feel to be like aspects of Sorge) and general notions of axiology (life goals and life plans seem like a model of how axiology gets implemented). Curious if that resonates with what you mean by life goals and life plans.

Comment by G Gordon Worley III (gworley3) on Running for U.S. president as a high-impact career path · 2022-01-22T16:01:38.734Z · EA · GW

I don't know if someone has posted this before, but it would be good to compare this to the idea of running for other political offices. For example, maybe a lot could be achieved as a senator or representative rather than as president, and those seem like easier jobs to get.

Comment by G Gordon Worley III (gworley3) on Illegible impact is still impact · 2022-01-06T15:55:41.561Z · EA · GW

Since I originally wrote this post I've only become more certain of the central message, which is that EAs and rationalist-like people in general are at extreme risk of Goodharting ourselves. See for example a more recent LW post on that theme.

In this post I use the idea of "legibility" to talk about impact that can be easily measured. I'm now less sure that was the right move, since legibility is a bit of jargon that, while it's taken off in some circles, hasn't caught on more broadly. Although the post deals with this, a better version of this post might avoid talking about legibility altogether and instead speak in more familiar language about measurement, etc. that people already know. There's nothing in here that I think hinges on the idea of legibility, though it's certainly helpful for framing the point, so if there were interest I think I'd be willing to revisit this post and see if I can make a shorter version that doesn't require teaching extra jargon on top of all the other necessary jargon.

I think I'd also highlight the Goodharting part more, since that's really what the problem is: more time on Goodharting and why this is a consequence of it, less time going around the topic.

Comment by G Gordon Worley III (gworley3) on The phrase “hard-core EAs” does more harm than good · 2022-01-05T01:19:40.552Z · EA · GW

I don't think I ever heard anyone use the phrase "hard-core EAs" or if I did it just passed by without note, but now that I bother to think about it I actually think it's really apt!

The etymology of hardcore has been a bit lost over the years. Here's what etymonline says:

also hard-core; 1936 (n.); 1951 (adj.); from hard (adj.) + core (n.). Original use seems to be among economists and sociologists, in reference to unemployables. Extension to pornography is attested by 1966. Also the name of a surfacing material.

Merriam-Webster seems to think it's a bit older, dating back at least to 1841:

So the earliest sense in which hard core was used was in reference to a sort of foundation on which something substantial was built. In the early 20th century the word broadened its sense to refer to serving as the foundation, or central element, of things aside of man-made structures, such as groups or organizations.

And in its perhaps better-known application to pornography, there's the idea of a hard core that was irredeemable by virtue of how committed it was to immorality (or at least to what counted as immorality at the time).

So actually I really like the idea of hardcore EAs. They're the bedrock, the foundation, the EAs who are still going to be there if EA becomes uncool or gets canceled or whatever. It makes me think of people like Peter Singer who would just keep on being an EA even if no one had come up with the label or built a movement. It has the metaphor of being so EA that even if someone brought in a jackhammer you wouldn't crack.

I don't know if I am or want to be a hardcore EA, but I'm sure as hell glad they exist!

Comment by G Gordon Worley III (gworley3) on Exegesis · 2022-01-01T18:37:00.042Z · EA · GW

I can only speak for myself, but assuming my experience generalizes, this means lots of people will miss out on what you have to say. Since I don't have a prior belief that posts by you are worth reading, and this post has a vague title that could be about any number of things, it's hard to consider it worth the time to invest in reading. So purely from a pragmatic point of view, I estimate a summary would help get more people to read.

The irony is that EdoArad and I have probably now spent enough time engaging with comments on this post that we could have read it, but I know I still haven't. The comments feel valuable (chatting with a fellow forum member about possible ways to make a post better) while reading the post itself doesn't (since there's not even much of a teaser to pull me in, I'm just not developing any motivation to read).

Comment by G Gordon Worley III (gworley3) on Exegesis · 2021-12-31T22:22:53.161Z · EA · GW

Friendly suggestion: a summary might help. I briefly skimmed this but was really hoping for a summary. Summaries often help readers like me decide whether or not to invest time in a post.

Comment by G Gordon Worley III (gworley3) on Free Guy, a rom-com on the moral patienthood of digital sentience · 2021-12-24T02:57:40.653Z · EA · GW

I think what's great about Free Guy is that the AI part is not the center of the plot most of the time. Rather it's a story about some characters who find themselves in some unusual circumstances. That might not seem much different, but compare typical AI films that spend a lot of time being about AI rather than the characters. By being character-focused, I think it delivers on ideas better than most idea movies that get so caught up in the ideas they forget to tell a good story.

Comment by gworley3 on [deleted post] 2021-12-18T03:29:04.328Z

As you've noticed, the root of good and bad lies with individual preferences and values. What is good is "merely" that which satisfies our desires at the lowest levels (perhaps what is good is what is least surprising to us, if you buy the predictive processing model of the brain). I put "merely" in scare quotes, though, because it's not so mere as it seems. This is in fact the root of all that matters to us in the world.

It's normal, when first noticing that good and bad rest on something as subjective as what individuals like, to feel a sense of unease, because you've likely been carrying around a strong expectation that meaning is externalized and objective in the universe. Realizing that humans create meaning for themselves through their existence rather than finding it out in the universe can feel like the ground has fallen away.

But it always was this way, and that which was already true cannot destroy us simply because we have realized it.

Now, we can say a bit more about good and bad. Because all humans are quite similar, we care about substantially similar things and a supermajority of us share common ideas about what is good and bad, even if we tend to focus a lot on the ways in which we differ in our values among each other. If we expand our moral circle to include other animals, we find that there's still a lot of commonality. Thus, people often choose to equate good with some fundamental thing common to all living beings, like preference satisfaction or not suffering. This is basically how various flavors of utilitarianism are grounded.

As to why humans are important: well, humans are important to us because we're humans, so it's reasonable that we value humans. The only confusion comes if we previously thought our value was given to us by the universe rather than created by us caring about ourselves; we're well entitled to care about things that benefit humanity. Although, while we're here, maybe we could expand the circle a bit to include all living things? The choice is really up to us!

There's lots more to explore here, but hopefully that gives you a start!

Comment by G Gordon Worley III (gworley3) on I want EA-charity gift cards! · 2021-12-08T03:52:18.519Z · EA · GW

I like this idea a lot. I spent O($1k) on gift cards this year from Tisbest instead of giving more traditional gifts. This is nice in multiple ways: it's way more than I would have spent on regular gifts, and each person gets the chance to give to something they care about. And selfishly I get a tax deduction (although I would have gotten it anyway since most of this money would have been donated regardless) and get to push my agenda on family that giving money is good (this doesn't seem like the worst thing in the world, but I'll take it for what it is: I'm doing something that I hope will cause them to be more inclined to make marginally more altruistic choices).

There's not an easy way for me to make this about EA, though, other than if they ask for advice or something like that, since it ruins the gift a bit if I push them in some direction. But if the gift card mechanism could somehow nudge them towards effective charities, that would be awesome.

Comment by G Gordon Worley III (gworley3) on [Event] Bodhi Day All Night Sitting 2021-12-07 to 2021-12-8 · 2021-12-06T02:53:26.877Z · EA · GW

Note: Sorry for not creating this as an event post, but I can't do that yet, and this is time sensitive so I created it as a regular post.

Comment by G Gordon Worley III (gworley3) on A Red-Team Against the Impact of Small Donations · 2021-11-24T21:58:18.623Z · EA · GW

Fund weird things: A decent litmus test is "would it be really embarrassing for my parents, friends or employer to find out about this?" and if the answer is yes, more strongly consider making the grant.

Things don't even have to be that weird to be things that let you have outsized impact with small funding.

A couple examples come to mind of things I've either helped fund or encouraged others to fund that for one reason or another got passed over for grants. Typically the reason wasn't that the idea was in principle bad, but that there were trust issues with the principals: maybe the granters had a bad interaction with the principals, maybe they just didn't know them that well or know anyone who did, or maybe they just didn't pass a smell test for one reason or another. But, if I know and trust the principals and think the idea is good, then I can fund it when no one else would.

Basically this is a way of exploiting information asymmetries to make donations. It doesn't scale indefinitely, but if you're a small time funder with plenty of social connections in the community there's probably work you could fund that would get passed over for being weird in the sense I describe above.

Comment by G Gordon Worley III (gworley3) on Opportunity Costs of Technical Talent: Intuition and (Simple) Implications · 2021-11-19T17:44:24.537Z · EA · GW

This is basically my own experience. I worked a bunch on AI independent research, but now I don't really because it just doesn't make sense: I have way more opportunity to make money to do more good than any direct work I could do, in my estimation, so I just double down on that.

(For context I'm on the higher end of technical talent now: 12 years of work experience, L7-equivalent, in a group tech lead role, and if I can crank up to L8 the potential gains are quite large in terms of comp that I can then donate.)

Comment by gworley3 on [deleted post] 2021-11-19T01:47:35.511Z

I also really like the platform this uses, Tisbest. This year I decided to do all my Xmas giving by giving Tisbest cards to folks so they can make donations to places of their choosing. I think it's a nice way to spread the spirit of giving with folks, and it's a great chance to talk about EA if anyone asks "what should I donate it to?".

Comment by G Gordon Worley III (gworley3) on Help California implement Approval Voting - Time Sensitive - Nov. 18, 2021 · 2021-11-17T23:46:49.289Z · EA · GW

I don't want this to seem like it's directed at this post in particular; it's more about a general class of things I see on the EA Forum, and this post just happened to finally trigger the thought for me.

Calls to action like this, for things that aren't broadly accepted as core EA areas, would benefit substantially from including links reminding us why we should care about the issue.

Like, if someone posts about x-risk or global poverty or animal welfare or something like that, I'm like, sure, seems on topic and relevant to EAs because there's broad agreement that this thing is solidly within EA and, even if individual EAs choose not to work on it, there's not a major dispute this is potentially effective, only disagreements about how much it matters relative to other things.

But when I see things about mental health or systematic change or, in this case, election reform, I'm left wondering when this became an EA concern. In this case, I have no idea if approval voting is actually better in terms of outcomes; I just know it's something people like because they feel it better reflects their preferences.

Including a link at least to why election reform might be an effective cause area would be helpful for things like this that are calls to action. I dare say it should even really be a norm on the forum: if you're making a call to action, you need to at least include links to where you're making the case that it's an effective cause area.

Again, this is not especially directed at the content of this post, but it did make me realize it would be nice if we could address this more broadly.

Comment by G Gordon Worley III (gworley3) on Should Earners-to-Give Work at Startups Instead of Big Companies? · 2021-11-13T22:15:31.219Z · EA · GW

My own experience is that there's a sweet spot. Big tech companies only really offer high compensation to the most experienced and capable employees. If there are 10 levels and you're not at least at level 8, a big company is probably not, in my own informal analysis, likely to offer you the best compensation in expectation. Some of this is simply because these folks have high opportunity costs, and the only way to get them as employees is to pay them enough that it balances against what they would likely do instead: start a company.

If you're in the middle, say levels 4-7, then a large, succeeding startup is probably the best bet. It offers better pay, more room for advancement and promotion, and decent equity.

If you're at the bottom, especially say because you're new to work, then early stage startups can provide really great returns in expectation. This works a couple ways. You won't make a lot of cash compensation, but you'll earn a lot of equity in expectation, possibly more than $10mm a year if the startup becomes a unicorn. Beyond that, you'll gain a lot of career capital by getting to do a bit of everything and having to operate fairly independently in ways that you won't get to do in a larger company, which means you'll be able to level up faster than you would in a more established place if you apply yourself.

This is all assuming you're best fit to be an employee rather than an entrepreneur, of course.

Comment by G Gordon Worley III (gworley3) on G Gordon Worley III's Shortform · 2021-10-30T19:55:50.549Z · EA · GW

Many people want the world to be better.

I feel like there's a lot of people who take this desire for a better world and then hope that they will be the one to make it all better. Maybe they'll discover some grand idea that will improve many things and lead us to salvation!

I don't think that's what we need though. We mostly need all us little people to just be a bit nicer, a bit more trusting, a bit more compassionate, and then not quite so many grand schemes will be required because we'll find we're already living in a better world.

Comment by G Gordon Worley III (gworley3) on The effective altruist case for parliamentarism · 2021-10-29T23:26:17.200Z · EA · GW

Thanks for your reply. Helps make a case that parliaments do something above and beyond the culture/tradition in which they are situated.

That said, I do want to respond to one thing you said:

Some would say that the aspects that matter are issues like trust, low corruption, respect of property rights, etc. But are there any cultures which do not value those things, which claim they are outright undesirable? I don't think there are.

Up until 2 days ago I likely would have shared this sentiment, but I was talking with someone who grew up in Romania, and as he put it, some of these are not so obvious. For example, although corruption was rampant, no one thought of it that way. Instead it was framed as a gifting custom and seen as normal to provide gifts to those providing services to you (doctors, teachers, government officials, etc.) because you want to show your respect and ensure good service. No one thought of this as bribery, so by their own lights they were already low-corruption. And it's easy to imagine folks balking at the idea that it is corruption; how dare, they might say, you come in and disturb our local gift-giving tradition!

That makes it quite easy for me to imagine similar stories for things like trust, property rights, etc.: a local equilibrium can become justified and then no one will think a thing is undesirable, or even necessarily realize that something undesirable is going on (in fact, locally it seems quite desirable!).

Comment by G Gordon Worley III (gworley3) on The effective altruist case for parliamentarism · 2021-10-29T01:41:46.134Z · EA · GW

I'm sure this is addressed in the book I haven't read, but I wonder how much of this is confounded by former British rule. That is, if you factor out parliamentary systems that were established after a legacy of British rule, would it still be the case that parliaments are better?

I'm guessing the argument is "yes", but I'm not sure, and I'm somewhat suspicious that some of these effects could be cultural ones that just happen to come along with parliaments, making parliamentarism an effect rather than a cause.

Comment by G Gordon Worley III (gworley3) on EA for Jews: Launch and Call for Volunteers · 2021-10-27T19:36:32.057Z · EA · GW

I think of it as coming from two angles. One is that it's a form of community building to expose folks to EA ideas who might otherwise not engage with them by doing so in a language they are familiar with. Two, it's a way for EAs who are religious to explore how EA impacts other spheres of their life.

I think it's also nice to have community by creating a sense of belonging. With EA being such a secular space normally, having a way to learn you're not the only one trying to combine EA and practice of a religion is nice. Good to have folks to talk to, etc.

Comment by G Gordon Worley III (gworley3) on EA for Jews: Launch and Call for Volunteers · 2021-10-26T19:25:35.742Z · EA · GW

Woo, as the person running Buddhists in EA, really excited to see more groups like this! At this point there's enough of us (3 groups) that maybe it's time to start thinking about an EA Interfaith group. :-)

Comment by G Gordon Worley III (gworley3) on On the assessment of volcanic eruptions as global catastrophic or existential risks · 2021-10-14T05:00:46.086Z · EA · GW

This is pretty long. Is there something like an abstract or executive summary of the post? Skimming a few of the expected places didn't feel like I was quite getting that without reading the whole thing.

Comment by G Gordon Worley III (gworley3) on The Cost of Rejection · 2021-10-10T00:59:43.471Z · EA · GW

True, but what you can do is have explicit values that you publicize and then ask candidates questions that assess how much they support/embody those values. Then you can reasonably say "rejected candidate because they didn't demonstrate value X" and have notes to back it up, or say "rejected because demonstrated ~X". This is harder feedback for candidates to hear, especially if X is something positive that everyone thinks they are like "hard working", but at the same time it should be made clear this isn't about what's true about the candidate, but what could be determined from their interview performance.

Comment by G Gordon Worley III (gworley3) on The Cost of Rejection · 2021-10-08T15:12:34.922Z · EA · GW

My vague understanding is that there's likely no legal issues with giving feedback as long as it's impartial. It's instead one of those things where lawyers reasonably advise against doing anything not required since literally anything you do exposes you to risk. Of course you could give feedback that would obviously land you in trouble, e.g. "we didn't hire you because you're [ethnicity]/[gender]/[physical attribute]", but I think most people are smart enough to give feedback of the form "we didn't hire you because legible reason X".

And it's quickly becoming legally the case that you can request not just feedback but all notes people took about you during the hiring process! Many companies use digital systems to keep notes on candidates, and the data in those systems is covered by GDPR, so candidates can make requests for data potential employers have about them in those systems (or so is my understanding; see for example this article for corroboration). Doesn't apply in the US, but does in the UK and EU.

Comment by G Gordon Worley III (gworley3) on EA Survey 2020: Geography · 2021-10-07T16:58:00.931Z · EA · GW

For many of the breakdowns it would be helpful to understand the base rate in those countries to understand what the data means. For example, gender is easy enough since the base rate is usually close to 50/50, but for things like race I have no idea how many people identify as white, black, asian, etc. in each region to compare against. I realize not everything has a base rate to compare against, but for those that do having that data would really help contextualize what's going on here.

Comment by G Gordon Worley III (gworley3) on Ambiguity aversion and reduction of X-risks: A modelling situation · 2021-09-16T15:58:06.219Z · EA · GW

I guess I don't understand why w > x > y > z implies that w - y = x - z iff w - x = y - z. Sorry if this is a standard result I've forgotten, but at first glance it's not totally obvious to me.