Posts

G Gordon Worley III's Shortform 2020-08-19T02:09:07.652Z
Expected value under normative uncertainty 2020-06-08T15:45:24.374Z
Vive la Différence? Structural Diversity as a Challenge for Metanormative Theories 2020-05-26T00:45:01.131Z
Comparing the Effect of Rational and Emotional Appeals on Donation Behavior 2020-05-26T00:24:25.239Z
Rejecting Supererogationism 2020-04-20T16:19:16.032Z
Normative Uncertainty and the Dependence Problem 2020-03-23T17:29:03.369Z
Chloramphenicol as intervention in heart attacks 2020-02-17T18:47:44.328Z
Illegible impact is still impact 2020-02-13T21:45:00.234Z
If Veganism Is Not a Choice: The Moral Psychology of Possibilities in Animal Ethics 2020-01-20T18:07:53.003Z
EA and the Paramitas 2020-01-15T03:17:18.158Z
Normative Uncertainty and Probabilistic Moral Knowledge 2019-11-11T20:26:07.702Z
TAISU 2019 Field Report 2019-10-15T01:10:40.645Z
Announcing the Buddhists in EA Group 2019-07-02T20:41:23.737Z
Best thing at EAG SF 2019? 2019-06-24T19:19:49.700Z
What movements does EA have the strongest synergies with? 2018-12-20T23:36:55.641Z
HLAI 2018 Field Report 2018-08-29T00:13:22.489Z
Avoiding AI Races Through Self-Regulation 2018-03-12T20:52:06.475Z
Prioritization Consequences of "Formally Stating the AI Alignment Problem" 2018-02-19T21:31:36.942Z

Comments

Comment by G Gordon Worley III (gworley3) on Why we want to start a charity fortifying feed for hens (founders needed) · 2021-04-20T14:41:28.871Z · EA · GW

I'm also somewhat concerned because this seems like a clear case of a dual-use intervention: it makes life better for the animals, but it also confers benefits to the farmers that may ultimately result in more suffering rather than less, for example by making chickens more palatable to consumers as "humanely farmed" (I'm guessing that's what is meant by "humane-washing") or by making chicken production more profitable (either through humane-washing or by producing a better quality meat product that is in higher demand).

Comment by G Gordon Worley III (gworley3) on Concerns with ACE's Recent Behavior · 2021-04-16T14:54:08.485Z · EA · GW

I can't seem to find the previous posts at the moment, but I have this sense that this is not an isolated issue and that ACE has some serious problems given that it draws continued criticism, not for its core mission, but for the way it carries that mission out. Although I can't remember at the moment what that other criticism was, I recall thinking "wow, ACE needs to get it together" or something similar. Maybe it has learned from those things and gotten better, but I notice I'm developing a belief that ACE is failing at the "effective" part of effective altruism.

Does this match what others are thinking or am I off?

Comment by G Gordon Worley III (gworley3) on What are your main reservations about identifying as an effective altruist? · 2021-03-30T14:43:28.263Z · EA · GW

I'll note that I used to have some reservations but no longer do, so I'll answer with why I previously had them.

When EA got interested in what we now call longtermism, it didn't seem obvious to me that EA was for me. My read was that EA was about near concerns like global poverty and animal welfare and not far concerns like x-risk and aging. So it seemed natural to me that I was on the outside of EA looking in because my primary cause area (though note that I wouldn't have thought of it that way at the time) wasn't clearly under the EA umbrella.

Obviously this has changed now, but hopefully this is useful for historical purposes, and there may be folks who still feel this way about other causes, like effective governance, that are, from my perspective, on the fringes of what EA is focused on.

Comment by G Gordon Worley III (gworley3) on Some quick notes on "effective altruism" · 2021-03-24T18:02:14.646Z · EA · GW

"Effective Altruism" sounds self-congratulatory and arrogant to some people:

Your comments in this section suggest to me there might be something going on where EA is only appealing within some particular social context. Maybe it's appealing within WEIRD culture, and the further you get from peak WEIRD the more objections there are. Alternatively maybe there's something specific to northern European or even just Anglo culture that makes it work there and not work as well elsewhere, translation issues aside.

Comment by G Gordon Worley III (gworley3) on Is Democracy a Fad? · 2021-03-13T19:25:47.934Z · EA · GW

Running with the valley metaphor, perhaps the 1990s were when we reached the most verdant floor of the valley. It remains unclear whether we're still there or have started to climb out and away from it, assuming the model is correct.

Comment by G Gordon Worley III (gworley3) on Mentorship, Management, and Mysterious Old Wizards · 2021-02-25T19:56:22.770Z · EA · GW

The people I know of who are best at mentorship are quite busy. As far as I can tell, they are already putting effort into mentoring and managing people. Mentorship and management also both directly trade off against other high value work they could be doing.

There are people with more free time, but those people are also less obviously qualified to mentor people. You can (and probably should) have people across the EA landscape mentoring each other. But, you need to be realistic about how valuable this is, and how much it enables EA to scale.

Slight pushback here: I've seen plenty of folks who make good mentors but who wouldn't be doing a lot of mentoring if not for systems in place to make that happen (they stop doing it once they aren't within whatever system was supporting their mentoring). This makes me think there's a large supply of good mentors who just aren't connected in ways that help them match with people to mentor.

This suggests that a lot of the difficulty with having enough mentorship is that the best mentors need to be good not only at mentoring but also at starting the mentorship relationship. Plenty of people, though, seem able to be good mentors if someone does the matching part for them and creates the context between them and the mentees.

Comment by G Gordon Worley III (gworley3) on Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? · 2021-01-15T21:34:39.574Z · EA · GW

On a related but different note, I wish there were a way to combine conversations on cross-posts between the EA Forum and LW. I really like the way the AI Alignment Forum works with LW and wish the EA Forum worked the same way.

Comment by G Gordon Worley III (gworley3) on The Folly of "EAs Should" · 2021-01-06T18:59:42.905Z · EA · GW

I often make an adjacent point to folks, which is something like:

EA is not all one thing, just like the economy is not all one thing. Just as civilization as we know it doesn't work unless we have people willing to do different things for different reasons, EA depends on different folks doing different things for different reasons to give us a rounded out basket of altruistic "goods".

Like, if everyone thought saltine crackers were the best food and everyone competed to make the best saltines, we'd ultimately all be pretty disappointed to have a mountain of amazing saltine crackers and literally nothing else. So even in a world where saltines really are the best food and generate the most benefit by their production, it makes sense for us to instrumentally produce other things so we can enjoy our saltines in full.

I think the same is true of EA. I care a lot about AI x-risk and it's what I focus on, but that doesn't mean I think everyone should do the same. In fact, if they did, I'm not sure it would be so good, because then maybe we stop paying attention to other causes that, left unaddressed, end up making our attempts to address AI risk moot. I'm always very glad to see folks working on things, even things I don't personally think are worthwhile, both because of uncertainty about what is best and because there are multiple dimensions along which it seems we can optimize (and I would be happy if we did).

Comment by G Gordon Worley III (gworley3) on evelynciara's Shortform · 2021-01-05T19:35:14.514Z · EA · GW

I think it's worth saying that the context of "maximize paperclips" is not one where the person literally says the words "maximize paperclips" or something similar. It's instead an intuitive stand-in for building an AI capable of superhuman levels of optimization, such that if you set it the task of creating an unbounded number of paperclips, say via specifying a reward function, it will do things to maximize paperclips that you as a human wouldn't do, because humans have competing concerns and will stop when, say, they'd have to kill themselves or their loved ones to make more paperclips.

The objection seems predicated on the interpretation of human language, which is beside the primary point. That is, you could address all the human language interpretation issues and we'd still have an alignment problem; it just might not look literally like building a paperclip maximizer when someone asks the AI to make a lot of paperclips.

Comment by G Gordon Worley III (gworley3) on Can I have impact if I’m average? · 2021-01-03T21:10:24.594Z · EA · GW

I wrote about something similar about a year ago: https://forum.effectivealtruism.org/posts/Z94vr6ighvDBXmrRC/illegible-impact-is-still-impact

Comment by gworley3 on [deleted post] 2020-12-31T17:24:51.241Z

There's a lot to unpack in that tweet. I think something is going on like:

  • fighting about who is really the most virtuous
  • being upset people aren't more focused on the things you think are important
  • being upset that people claim status by doing things you can't or won't do
  • being jealous people are doing good doing things you aren't/can't/won't do
  • virtue signaling
  • righteous indignation
  • spillover of culture war stuff going on in SF

None of it looks like a real criticism of EA, but rather of lots of other things EA just happens to be adjacent to.

That doesn't mean it doesn't have to be addressed or isn't an issue, but I think it's also worth keeping these kinds of criticisms in context.

Comment by gworley3 on [deleted post] 2020-12-31T17:20:19.717Z

I find others' answers about the low-resolution version of EA they actually see in the wild fascinating.

I go with the classic, and if people ask I give them a three-word answer: "doing good better".

If they ask for more, it's something like: "People want to do good in the world, and some good doing efforts produce better outcomes than others. EA is about figuring out how to get the best outcomes (or the largest positive impact) for time/money/effort relative to what a person thinks is important."

Comment by G Gordon Worley III (gworley3) on How modest should you be? · 2020-12-29T19:47:28.782Z · EA · GW

I realize this is a total tangent to the point of your post, but I feel you're giving short shrift here to continental philosophy.

If it were only about writing style, I'd say fair: continental philosophy has chosen a style of writing, resembling that used in other traditions, that tries to avoid over-simplifying and compressing understanding down into just a few words that are easily misunderstood. Whereas you see unclear writing, I see a desperate attempt to say anything detailed about reality without accidentally pointing in the wrong direction.

This is not to say that there aren't bad continental philosophers who hide behind this method to say nothing, but I think it's unfair to complain about it just because it's hard to understand and takes a lot of effort to suss out what is being said.

As to the central confusion you bring up, the unfortunate thing is that the worst argument in the world is technically correct: we can't know things as they are in themselves, only as we perceive them to be, i.e. there is no view from nowhere. Where it goes wrong is in thinking that, just because we always know the world from some vantage point, trying to understand anything is pointless and any belief is equally useful. It can both be true that there is no objective way that things are and that some ways of trying to understand reality do better at helping us predict it than others.

I think the confusion that the worst argument in the world immediately implies we can't know anything useful comes from only seeing that the map is not itself the territory but not also seeing that the map is embedded in the territory (no Cartesian dualism).

Comment by G Gordon Worley III (gworley3) on Morality as "Coordination" vs "Altruism" · 2020-12-29T19:21:21.859Z · EA · GW

I think this is often non-explicit in most discussions of morality/ethics/what-people-should-do. It seems common for people to conflate "actions that are bad because it ruins ability to coordinate" and "actions that are bad because empathy and/or principles tell me they are."

I think it's worth challenging the idea that this conflation is actually an issue with ethics.

It's true that coordination mechanisms and compassion are not literally the same thing and can have expressions that try to isolate themselves from each other (cf. market economies and prayer), so things that are bad because they break coordination mechanisms and things that are bad because they don't express compassion are not bad for exactly the same reasons. But this need not mean there isn't something deeper going on that ties them together.

I think this is why there tends to be a focus on meta-ethics among philosophers of ethics rather than directly trying to figure out what people should do, even when setting meta-ethical uncertainty aside. There's some notion of badness or undesirableness (and conversely goodness or desirableness) that powers both of these, so they are different expressions of the same underlying phenomenon. We can reasonably tie these two approaches together by looking at this question of what makes something seem good or bad to us, and simply treat these as different domains over which we consider how good or bad things come about.

As to what good and bad mean, well, that's a larger discussion. My best theory is that in humans it's rooted in prediction error plus some evolved affinities, but this is an ongoing place where folks are trying to figure out what good and bad mean beyond our intuitive sense that something is good or bad.

Comment by G Gordon Worley III (gworley3) on Wholehearted choices and "morality as taxes" · 2020-12-23T01:30:21.791Z · EA · GW

Weird, that sounds strange to me. I don't really regret things, since I couldn't have done anything better than what I did under the circumstances or else I would have done that, so the idea of regret awakening compassion feels very alien. Guilt seems more clear cut to me, because I can do my best and my best may still not be good enough, and I may be culpable for the suffering of others as a result, perhaps through insufficient compassion.

Comment by G Gordon Worley III (gworley3) on Wholehearted choices and "morality as taxes" · 2020-12-22T18:15:41.296Z · EA · GW

These cases seem not at all analogous to me because of the differing amount of uncertainty in each.

In the case of the drowning child, you presumably have high certainty that the child is going to die. The case is clear cut in that way.

In the case of the distant commotion on an autumn walk, it's just that, a distant commotion. As the walker, you have no knowledge about what it is and whether or not you could do anything. That you later learn you could have done something might lead you to experience regret, but in the moment you lacked information to make it clear you should have investigated. I think this entirely accounts for the difference in feeling about the two cases, and eliminates the power of the second case.

In the second case, any imposition on the walker to do anything hinges on their knowledge of what the result of the commotion will be. Given the uncertainty, you might reasonably conclude in the moment that it is better to avoid the commotion, maybe because you might do more harm than good by investigating.

Further, this isn't a case of negligence, where your failing to respond to the commotion makes you complicit in the harm, because you seem to have no responsibility for the machinery or the conditions by which the man came to be pinned under it. Instead it seems to be a case where you are morally neutral throughout because of your lack of knowledge, and because you made no active effort to avoid gaining knowledge as a way of escaping moral culpability (which would itself make you complicit). Since that is not what is happening here, your example seems to lack the necessary conditions to make the point.

Comment by G Gordon Worley III (gworley3) on Incompatibility of moral realism and time discounting · 2020-12-12T20:43:03.517Z · EA · GW

Could the seeming contradiction be resolved by greater specificity of statements?

For example, rather than abandoning "Everyone should sell everything that begins with a 'C', but nothing that begins with an 'A'" as a norm, we might realize we underspecified it to begin with and really meant "Everyone should sell everything that is called by a word in English that begins with a 'C', but nothing that begins with an 'A' in English". We could get even more specific, as long as objections remained, until we were no longer at risk of underspecifying what we mean and suffering from relativity.

In the same vein, maybe the contradiction of the thought experiment could be resolved by being more specific and including more context about the world. For example, cf. this attempt at thinking about preferences as conditioned on the entire state of the world. Maybe the same sort of technique could be applied here.

Comment by G Gordon Worley III (gworley3) on EAs working at non-EA organizations: What do you do? · 2020-12-10T19:44:20.931Z · EA · GW
  • Where do you work, and what do you do?

I'm a software engineer at Plaid working on the Infrastructure team. My main project is leading our internal observability efforts.

  • What are some things you've worked on that you consider impactful?

In terms of EA impact at my current job, not much. I view this as an earning-to-give situation where I'm taking my expertise as a software engineer and turning it into donations. I think there's some argument that Plaid has a positive impact on the world by enabling lots of new financial applications built on our APIs, thereby increasing access to financial resources for those who historically had the least access to them. But I don't work directly on that stuff, instead working on the things that enable the org to carry out its mission.

I will say I considered some other jobs, such as working at Facebook or continuing to work on ads as I had been doing, and although the mission was not the primary reason I chose Plaid, it is nice that I don't worry I might be working on something that harms the world.

  • What are a few ways in which you bring EA ideas/mindsets to your current job?

I often use the TIN framework informally in work and elsewhere in life. It's sort of baked into my soul to think about tractability, impact, and neglectedness when thinking about what to do. Plaid has a big internal focus on the idea of impact, including having a positive impact on the world, and of course as an engineer there's plenty of focus on doing things that are tractable (possible). Neglectedness considerations mostly show up in what I personally choose to work on: I look for things where I can have impact, that are tractable, and that are being neglected by others such that I can make things better in ways that are currently not being pursued. In a growing organization this is easy, because there's often a lot of stuff we'd do if someone had more time to do it, so then it largely becomes a question of prioritizing between different neglected issues.

Comment by G Gordon Worley III (gworley3) on Does Qualitative Research improve drastically with increasing expertise? · 2020-12-07T16:46:50.940Z · EA · GW

I think this holds true in more traditionally "quantitative" fields, too, because things can often be useful or not depending on how they are framed, such that without the proper framing good numbers don't matter because they aren't measuring the right thing.

This seems to suggest that a lot of what makes quantitative research successful also makes qualitative research successful, and so we should expect any extent to which expertise matters in quantitative fields to matter in qualitative fields (although I think this mostly points at the quant/qual distinction being a very fuzzy one that is only relevant along certain dimensions).

Comment by G Gordon Worley III (gworley3) on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T00:17:59.480Z · EA · GW

Jonas also mentioned to me that EA Funds is considering offering Donor-Advised Funds that could grant to individuals as long as there’s a clear charitable benefit. If implemented, this would also allow donors to provide tax-deductible support to individuals.

This is pretty exciting to me. Without going into too much detail, I expect to have a large amount of money to donate in the near future, and LTF is basically the best option I know of (in terms of giving based on what I most want to give to) for the bulk of that money short of having the ability to do exactly this. I'd still want LTF as a fall back for funds I couldn't figure out how to better allocate myself, but the need for tax deductibility limits my options today (though, yes, there are donor lotteries).

Comment by G Gordon Worley III (gworley3) on Long-Term Future Fund: Ask Us Anything! · 2020-12-03T21:58:13.953Z · EA · GW

LTF covers a lot of ground. How do you prioritize between different cause areas within the general theme of bettering the long term future?

Comment by G Gordon Worley III (gworley3) on Long-Term Future Fund: Ask Us Anything! · 2020-12-03T21:57:20.597Z · EA · GW

How much room for additional funding does LTF have? Do you have an estimate of how much money you could take on and still achieve your same ROI on the marginal dollar donated?

Comment by G Gordon Worley III (gworley3) on Long-Term Future Fund: Ask Us Anything! · 2020-12-03T21:56:34.373Z · EA · GW

Do you have any plans to become more risk tolerant?

Without getting too much into details, I disagree with some things you've chosen not to fund, and as an outsider I view the fund as being too unwilling to take risks on projects, especially projects where you don't know the requesters well, and to truly pursue a hits-based model. I really like some of the big bets you've taken in the past, for example funding people doing independent research who then produce what I consider useful or interesting results. But I'm somewhat hesitant about donating to LTF because I'm not sure it takes enough risks to represent a clearly better choice, for someone like me who's fairly risk-tolerant with their donations, than donating to other established projects or just donating directly (though the latter has the disadvantage of making it hard for me to give something like seed funding and still get tax advantages).

Comment by G Gordon Worley III (gworley3) on Where are you donating in 2020 and why? · 2020-11-24T00:00:05.086Z · EA · GW

I'm being strategic in 2020 and shifting much of my giving for it into 2021 because I expect a windfall, but here's where I chose to give this year:

  • AI Safety Support
    • I think the work Linda (and now JJ) are doing is great and is woefully underfunded. I would give them more sooner but I have to shift that into 2021. They've had some trouble getting funding from more established sources for reasons I don't endorse but don't want to go into here, and I think giving to them now is especially high leverage to help AISS bootstrap.
    • I'll be giving $5k soon and plan to donate more once the funds to do so are unlocked.
    • Read Linda's post about AISS for more details.
  • MIRI
    • MIRI keeps doing great work on AI safety, and I've been especially impressed with Scott and Abram in the last couple years. I've cut back on some of my funding to MIRI because I view them as less neglected now relative to other things I could fund, but I continue to support them via Amazon Smile.
  • Wikipedia
    • This feels a little bit like paying for utilities I use, but I get a lot of value out of Wikipedia and think everyone who can should donate $5 or $10 to them. It also seems generally useful for maintaining and improving a source of facts in a world that is increasingly uncertain about what facts even are.
  • Alcor
    • I have a cryonics contract with Alcor, and I pay annual dues to them. Most of this is counted as charitable giving.
  • Bay Zen Center
    • This isn't really EA giving, but it is charitable giving to a religious organization (full disclosure, I'm on the board of the Center). They get about 2% of my income. Listed for completeness.
  • Long Term Future Fund
    • LTF is generally aligned with my giving priorities and will get my marginal additional funding I don't have a better idea about how to allocate.

Long term my objective is to donate 30-50% of my income (limited by tax incentives and marginal value of money until I resolve some large outstanding expenses), but today it's closer to 5%.

Comment by G Gordon Worley III (gworley3) on How to best address Repetitive Strain Injury (RSI)? · 2020-11-19T17:28:48.388Z · EA · GW

I've had RSI in the past, though not from typing but from repetitive motions loading paper into a machine for scanning. I didn't need to see a doctor about it, addressing it was ultimately pretty straightforward, and I was able to keep doing the job that caused it while I recovered. Things I did:

  • wore a stabilizing wrist brace to alleviate the strain on my wrist that was causing pain, even when I was not engaged in an activity that would necessarily cause pain
  • paid attention to and changed my motions to reduce wrist strain
  • rearranged my work so I had more breaks and fewer long periods of continually performing the motion (I had other job responsibilities so it was easy to interleave breaks from one thing with work on another)

It's now more than 10 years since I developed RSI, and maybe 4 years since I last needed the wrist brace (my need for it rapidly decreased once I left the job). I think no longer needing it correlated with increased strength, specifically from indoor rock climbing and related conditioning.

Comment by G Gordon Worley III (gworley3) on What is a book that genuinely changed your life for the better? · 2020-10-21T23:57:44.108Z · EA · GW

I've got a few:

  • GEB
    • Put me on the path to something like thinking of rationality as something intuitive/S1 rather than something I have to think about with a lot of deliberation/S2.
  • Seven Habits of Highly Effective People
    • I often forget how much this book is "in the water" for me. There's all kinds of great stuff in here about prioritization, relationships, and self-improvement. It can feel a little like platitudes at times, but it's really great.
  • The Design of Everyday Things
    • This is kind of out there, but this gave me a strong sense of the importance of grounding ideas in their concrete manifestation. It's not enough to have a good idea; the effects it causes in the world have to actually have the desired good effects, too.
  • Getting Things Done
    • There are alternatives to this, but it made my life better by really helping me adopt a "systems first" mindset: I can improve my life by using systems/procedures, and having them well defined and as automatic as possible pays dividends over time.
  • The Evolving Self
    • A very dense book about adult developmental psychology. Doesn't necessarily lay out the best possible model of adult psychological development, but it really got me deep on this and set me on a path that made my life much better.
  • Siddhartha
    • Okay, one book of fiction, but it's a coming-of-age story and contains something like suggestions for how to relate to your own life. This one was a slow burn for me: I didn't realize the effect it had had on me until I reread it years later.

Comment by G Gordon Worley III (gworley3) on EA's abstract moral epistemology · 2020-10-20T23:51:36.485Z · EA · GW

My somewhat uncharitable reaction while reading this was something like "people running ineffective charities are upset that EAs don't want to fund them, and their philosopher friend then tries to argue that efficiency is not that important".

Comment by G Gordon Worley III (gworley3) on Michael_Wiebe's Shortform · 2020-10-16T21:04:12.386Z · EA · GW

I'm a big fan of ideas like this. One of the things I think EAs can bring to charitable giving that is otherwise missing from the landscape is risk-neutrality, and thus a willingness to bet on high-variance strategies that, taken as a whole in a portfolio, may have the same or hopefully higher expected returns compared to typical risk-averse charitable spending, which tends to focus on making sure no money is wasted to the exclusion of taking the risks necessary to realize benefits.

Comment by G Gordon Worley III (gworley3) on Evidence on correlation between making less than parents and welfare/happiness? · 2020-10-13T20:48:33.494Z · EA · GW

Taking a predictive processing perspective, we should expect to see an initial decrease in happiness upon finding oneself living a less expensive lifestyle, because it would be a regular "surprise" violating the expected outcome, but then over time we should expect this surprise to go away as daily evidence slowly retrains the brain to expect less and so to have less negative emotional valence upon perceiving the actual conditions.

However, I'd still expect someone who "fell from grace" like this to be somewhat sadder than a person who rose to the same level of wealth or grew up at it, because they'd have more sad moments of nostalgia for better times that would be missing for the others. But this would likely be a small effect and not easily detectable (I'd expect it to be washed out by noise in a study).

Comment by G Gordon Worley III (gworley3) on Open Communication in the Days of Malicious Online Actors · 2020-10-07T08:52:06.971Z · EA · GW

Without rising to the level of maliciousness, I've noticed a pattern related to the ones you describe here, where sometimes my writing attracts supporters who don't really understand my point and whose statements of support I would not endorse because they misunderstand the ideas. They are easy to tolerate because they say nice things and may come to my defense against people who disagree with me, but much like your many flavors of malicious supporters they can ultimately have negative effects.

Comment by G Gordon Worley III (gworley3) on If you like a post, tell the author! · 2020-10-07T08:43:22.591Z · EA · GW

I like the general idea here, but personally I dislike comments that don't tell the reader new information, so just saying the equivalent of "yay" without adding something is likely to get a downvote from me if the comment is upvoted, especially if it gets upvoted above more substantial comments.

Comment by G Gordon Worley III (gworley3) on Some thoughts on the effectiveness of the Fraunhofer Society · 2020-10-01T00:26:24.541Z · EA · GW

I was quite surprised to hear how large the Fraunhofer Society is, given that I'd never heard of it before! I think this in and of itself is a kind of evidence against their effectiveness, although I could also imagine they've turned out some winning innovations as parts of contracts, and so their involvement gets lost because I think of the result as a thing that company X did.

Comment by G Gordon Worley III (gworley3) on Here's what Should be Prioritized as the Main Threat of AI · 2020-09-11T00:39:24.835Z · EA · GW

It seems unclear to me that one model emitting more CO2 than one car necessarily implies that AI is likely to have an outsized impact on climate change. I think there are some missing calculations here about the number of models, the number of cars, how much additional marginal CO2 is being created that isn't accounted for by other segments, and how much marginal impact on climate change should be expected from the additional CO2 from AI models (a rough sketch of the needed comparison is below). With that in hand, we could potentially assess how much additional short-term climate risk there is from AI.
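To make the shape of that calculation concrete, here's a minimal back-of-envelope sketch in Python. The AI-side figures are hypothetical placeholders I've invented purely for illustration; the per-car figure is loosely based on commonly cited estimates for a typical passenger vehicle, and all of them would need sourcing before drawing any real conclusion.

```python
# Rough back-of-envelope sketch; every constant here is a placeholder, not a
# sourced figure. The point is only the structure of the comparison.

TONNES_CO2_PER_LARGE_TRAINING_RUN = 300      # hypothetical per-model training emissions
LARGE_TRAINING_RUNS_PER_YEAR = 1_000         # hypothetical count of such runs worldwide
TONNES_CO2_PER_CAR_PER_YEAR = 4.6            # rough commonly cited figure for one passenger car
CARS_IN_USE = 1_000_000_000                  # order-of-magnitude guess for cars worldwide

ai_emissions = TONNES_CO2_PER_LARGE_TRAINING_RUN * LARGE_TRAINING_RUNS_PER_YEAR
car_emissions = TONNES_CO2_PER_CAR_PER_YEAR * CARS_IN_USE

print(f"AI training: {ai_emissions:,.0f} tonnes CO2/year")
print(f"Cars:        {car_emissions:,.0f} tonnes CO2/year")
print(f"AI training as a share of car emissions: {ai_emissions / car_emissions:.4%}")
```

With placeholder numbers like these, one model emitting more than one car tells us little until the counts on both sides are filled in.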

Comment by G Gordon Worley III (gworley3) on How have you become more (or less) engaged with EA in the last year? · 2020-09-09T18:33:24.308Z · EA · GW

Mixed. On the one hand, I feel like I'm less involved because I have less time for engaging with people on the forum and during events and am spending less time on EA-aligned research and writing.

On the other, that's in no small part because I took a job that pays a lot more than my old one, dramatically increasing my ability to give, but it also requires a lot more of my time. So I've sort of transitioned towards an earning-to-give relationship with EA that leaves me feeling more on the outside but still connected and benefiting from EA to guide my giving choices and keep me motivated to give rather than keep more for myself.

Comment by G Gordon Worley III (gworley3) on It's Not Hard to Be Morally Excellent; You Just Choose Not To Be · 2020-08-24T19:21:00.167Z · EA · GW

While I appreciate what the author is getting at, as presented I think it shows a lack of compassion for how difficult it is to do what one reckons one ought to do.

It's true you can simply "choose" to be good, but this is about as easy as saying that, for a wide variety of things X that don't require special skills, all you have to do to do X is choose to do X: wake up early, exercise, eat healthier food when it is readily available, etc. Despite this, lots of people try to explicitly choose to do these things and fail anyway. What's up?

The issue lies in what it means to choose. Unless you suppose some sort of notion of free will, choosing is actually not that easy to control, because there are a lot of complex brain functions essentially competing to determine whatever you do next. "Choosing" thus looks a lot more like "setting up a lot of conditions, both in the external world and in your mind, such that a particular choice happens" than like some atomic, free-willed choice spontaneously happening. Getting to the point where you feel you can simply choose to do the right thing all the time requires a tremendous amount of alignment between the different parts of the brain competing to produce your next action.

I think it's best to take this article as a kind of advice. Sometimes the only thing keeping you from doing what you believe you ought to do is some minor hold-up where you don't believe you can do it, and accepting that you can do it suddenly means that you can. But most of the time the fruit will not hang so low, and instead there will be a lot else to do in order to do what one considers morally best.

Comment by G Gordon Worley III (gworley3) on "Good judgement" and its components · 2020-08-21T16:40:28.636Z · EA · GW

Cool. Yeah, when I saw this it jumped out at me as potentially helping with what I see as a problem: there are a bunch of folks who are either EA-aligned or identify as EA and are also anti-LW, and I would argue those folks are to some extent throwing the baby out with the bathwater. So having a nice way to rebrand and talk about some of the insights from LW-style rationality that are clearly present in EA, and that we might reasonably like to share with others, without actually relying on LW-centric content is useful.

Comment by G Gordon Worley III (gworley3) on "Good judgement" and its components · 2020-08-20T18:07:33.453Z · EA · GW

To what extent are you thinking (without so far explicitly saying it) that "good judgment" is a possible EA rebranding of LessWrong-style rationality?

Comment by G Gordon Worley III (gworley3) on G Gordon Worley III's Shortform · 2020-08-19T02:09:22.329Z · EA · GW

Reading this article about the security value of inefficiency, I get the idea that a possibly neglected policy area for EAs is economic resilience: the idea that we can increase people's welfare in both the short and long term by ensuring our economies don't become brittle or fragile and collapse. Such a collapse would wipe out the welfare gains from modern economies and cut off paths to greater welfare gains through future economic growth, or at least set such growth back, cause harm, or make it economically unviable to work on averting existential risks.

Seems possibly related to other policy work focused on things like improving institutions for similar reasons, but directed more at economic policy than at institution design.

Comment by G Gordon Worley III (gworley3) on Donating effectively does not necessarily imply donating tax-deductibly · 2020-08-18T19:09:06.742Z · EA · GW

One place where EAs paying taxes in the US can probably have differential impact is in making donations smaller than the standard deduction(s) they can take on their taxes, such that they would not benefit from itemizing deductions for donations to registered charities. Impact concerns aside, unless you're donating enough to exceed your standard deduction, you get little or no tax benefit from donating to registered charities, so all of your donations will be post-tax anyway. That gives you a unique opportunity to give funds to EA-aligned causes that are otherwise neglected by larger donors because they can't get the tax benefits (a minimal worked sketch of this arithmetic follows at the end of this comment).

Some examples would include giving small (less than $10k USD) "angel" donations to not-yet-fully-established causes that are still organizing themselves and do not (or will never) have charitable tax status, and participating in a donor lottery.

Plenty of caveats to this, of course: employer matching may make it worthwhile to give to registered charities even if you yourself won't reap any tax benefits, and state-level standard deductions are smaller than federal ones, so it's often worth itemizing charitable giving on state returns even when it's not on federal returns.
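To illustrate the arithmetic mentioned above, here's a minimal sketch. It assumes a 2020-style single-filer federal standard deduction of $12,400 and a hypothetical 24% marginal rate; the other figures are placeholders for illustration, and none of this is tax advice.

```python
# Minimal sketch of why donations below the standard deduction yield no federal
# tax benefit. All inputs are illustrative placeholders.

def marginal_donation_benefit(donation, other_itemized, standard_deduction, marginal_rate):
    """Federal tax saved by the donation, ignoring all other tax effects."""
    deduction_with = max(standard_deduction, other_itemized + donation)
    deduction_without = max(standard_deduction, other_itemized)
    return marginal_rate * (deduction_with - deduction_without)

# Single filer, $3,000 of other itemized deductions, 24% marginal rate.
print(marginal_donation_benefit(5_000, 3_000, 12_400, 0.24))   # 0.0    -> donation is fully post-tax
print(marginal_donation_benefit(15_000, 3_000, 12_400, 0.24))  # 1344.0 -> benefit only on the excess
```

In the first case the donation never pushes itemized deductions past the standard deduction, so giving to a registered charity saves nothing relative to giving the same money to an unregistered EA-aligned project.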

Comment by G Gordon Worley III (gworley3) on Shifts in subjective well-being scales? · 2020-08-18T18:56:08.631Z · EA · GW

Might help to see how this is handled, if at all, with pain scales. For example, I can imagine someone thinking they're having 9/10 or 10/10 pain, say from an injury, but then after something much worse happening, say a cluster headache or a kidney stone, they realize their injury pain was only a 6/10 or 7/10 and the cluster headache or kidney stone was the actual 10/10.

I know there is already some stuff about how the pain scale has cross cultural issues, with people from different cultures reporting and possibly even experience their pain as more or less worse than others from other cultures, so might be an entry point to this line of investigation.

Comment by G Gordon Worley III (gworley3) on Book Review: Deontology by Jeremy Bentham · 2020-08-12T18:55:02.530Z · EA · GW

I really enjoyed reading this, and learned a lot about Bentham I didn't know (which, admittedly, wasn't much to begin with, since I haven't spent a lot of time studying him). I get the sense that his ideas on utilitarianism are convergent with, say, typical virtue ethics in the limit, only he gets there by a different route. I also get the sense he didn't foresee super-optimization and was very much thinking about humans who do something closer to satisficing.

Comment by G Gordon Worley III (gworley3) on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-08-08T19:38:44.992Z · EA · GW

I think I agree, but my point is maybe more that the policy as worded now should allow this, so the policy probably needs to be worded more clearly so that a post like this is more clearly excluded.

Comment by G Gordon Worley III (gworley3) on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-08-07T18:08:47.601Z · EA · GW

FWIW, I don't think this post actually endorses a specific candidate, and instead is asking if endorsing a specific candidate makes sense. Maybe that's too close for comfort, but I don't see this post as arguing for a particular candidate, but asking for arguments for or against a particular candidate. Thus as the policy is worded now this seems okay for frontpage or community to me.

Comment by G Gordon Worley III (gworley3) on The world is full of wasted motion · 2020-08-06T16:21:27.851Z · EA · GW

FWIW, I think this is a better fit for LessWrong than EA Forum.

Comment by G Gordon Worley III (gworley3) on Recommendations for increasing empathy? · 2020-08-02T22:03:27.352Z · EA · GW

Enough meditation seems to pretty reliably increase empathy. My guess is there are studies purporting to show this, but I'm making this suggestion mostly based on personal observation. There's some risk of survivorship bias in this, though, so I don't know how repeatable this suggestion is for the average person.

Comment by G Gordon Worley III (gworley3) on What values would EA want to promote? · 2020-07-09T16:27:34.831Z · EA · GW

At its heart, EA seems to naturally tend to promote a few things:

  • a larger moral circle is better than a smaller one
  • considered reasoning ("rationality") is better than doing things for other reasons alone
  • efficiency in generating outcomes is better than being less efficient, even if it means being less appealing at an emotional level

I don't know that any of these are what EA should promote, and I'm not sure there's anyone who can unilaterally decide what is normative for EA, so instead I offer these as the norms I think EA is currently promoting in fact, regardless of what anyone thinks EA should be promoting.

Comment by G Gordon Worley III (gworley3) on Ramiro's Shortform · 2020-07-05T01:01:33.504Z · EA · GW

One challenge will be that any attempt to time donations based on economic conditions risks becoming a backdoor attempt to time the market, which is notoriously hard.

Comment by G Gordon Worley III (gworley3) on Democracy Promotion as an EA Cause Area · 2020-07-01T18:03:48.196Z · EA · GW
EA organizations are also less likely to be perceived as biased or self-interested actors.

I think this is unlikely. EAs disproportionately come from wealthy democratic nations, and those who have reason to resist democratic reform will have an easy time painting EA participation in democracy promotion as a slightly more covert version of foreign-state-sponsored attempts at political reform. Further, EAs are also disproportionately from former colonizing states that have historically dominated other states, and I don't think that correlation will be ignored.

This is not to say I think EA attempts at democracy promotion would in fact be covert extensions of existing efforts with negative connotations, only that I think it will be possible to argue and convince people that they are, making this not an actual advantage.

Comment by G Gordon Worley III (gworley3) on Slate Star Codex, EA, and self-reflection · 2020-06-26T20:31:56.485Z · EA · GW

The downvotes are probably because, indeed, the claims only make sense if you look at the level of something like "has Scott ever said anything that could be construed as X". I think a complete engagement with SSC doesn't support the argument, and it's specifically the fact that SSC is willing to address issues in their entirety, without flinching away from topics that might make a person "guilty by association", that makes it a compelling blog.

Comment by G Gordon Worley III (gworley3) on Dignity as alternative EA priority - request for feedback · 2020-06-25T22:52:02.154Z · EA · GW

I think there could be a case that QALY/DALY/etc. calculations should factor in dignity in some way, and we could view mismatches between, say, QALY calculations and what feels "right" in terms of dignity as a sign that the calculations may be leaving something important out. For example, if intervention X produces 10 QALYs and makes someone feel 10% less dignified, then either we want to be sure the 10 QALY figure already incorporates that cost to dignity, or it should be adjusted to consider it (a minimal illustrative sketch is below). There seems to be a strong case for more nuanced calculation of metrics, especially so we don't miss cases where ignoring something like dignity would cause us to think an intervention was good when it is in fact overall bad once dignity is factored in. That this has come up and seems to be an issue suggests some calculations people are doing today fail to factor it in.
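As a purely illustrative sketch (not a standard method), here is one naive way to fold a dignity cost into a raw QALY figure along the lines of the example above; the dignity_weight parameter is a made-up free parameter, not an empirical quantity.

```python
# Illustrative only: discount raw QALYs by the fraction of dignity lost,
# scaled by how much weight we place on dignity relative to health.

def dignity_adjusted_qalys(raw_qalys, dignity_loss_fraction, dignity_weight=1.0):
    """Return QALYs after applying a (hypothetical) dignity adjustment."""
    return raw_qalys * (1 - dignity_weight * dignity_loss_fraction)

# Intervention X from the example: 10 QALYs, 10% loss of dignity.
print(dignity_adjusted_qalys(10, 0.10))                       # 9.0
print(dignity_adjusted_qalys(10, 0.10, dignity_weight=2.0))   # 8.0 if dignity is weighted heavily
```

The point is not the particular functional form, only that making the adjustment explicit forces the dignity cost to show up in the comparison rather than being silently dropped.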