Posts

G Gordon Worley III's Shortform 2020-08-19T02:09:07.652Z
Expected value under normative uncertainty 2020-06-08T15:45:24.374Z
Vive la Différence? Structural Diversity as a Challenge for Metanormative Theories 2020-05-26T00:45:01.131Z
Comparing the Effect of Rational and Emotional Appeals on Donation Behavior 2020-05-26T00:24:25.239Z
Rejecting Supererogationism 2020-04-20T16:19:16.032Z
Normative Uncertainty and the Dependence Problem 2020-03-23T17:29:03.369Z
Chloramphenicol as intervention in heart attacks 2020-02-17T18:47:44.328Z
Illegible impact is still impact 2020-02-13T21:45:00.234Z
If Veganism Is Not a Choice: The Moral Psychology of Possibilities in Animal Ethics 2020-01-20T18:07:53.003Z
EA and the Paramitas 2020-01-15T03:17:18.158Z
Normative Uncertainty and Probabilistic Moral Knowledge 2019-11-11T20:26:07.702Z
TAISU 2019 Field Report 2019-10-15T01:10:40.645Z
Announcing the Buddhists in EA Group 2019-07-02T20:41:23.737Z
Best thing at EAG SF 2019? 2019-06-24T19:19:49.700Z
What movements does EA have the strongest synergies with? 2018-12-20T23:36:55.641Z
HLAI 2018 Field Report 2018-08-29T00:13:22.489Z
Avoiding AI Races Through Self-Regulation 2018-03-12T20:52:06.475Z
Prioritization Consequences of "Formally Stating the AI Alignment Problem" 2018-02-19T21:31:36.942Z

Comments

Comment by G Gordon Worley III (gworley3) on Towards a longtermist framework for evaluating democracy-related interventions · 2021-07-28T17:02:09.602Z · EA · GW

So I remain unconvinced that there's a specific longtermist case for democracy, but I think there is a longtermist case for some kind of context in which longtermist work can happen.

What I have in mind is that I'm not sure democracy or liberal democracy is necessary for working on longtermist cause areas, but liberal democracy has created an environment in which this work can get done. So there's an interesting question, then: what are the features of liberal democracy that enable longtermist work?

I ask this because I'm not sure that, for example, democracy is necessary for work on improving the longterm future. However, it's also clear that something about liberal democracy has allowed people to start doing work towards bettering the longterm future, so it must have some features we care about for that purpose. Maybe electing the government is the key feature that matters, but I don't see an obvious causal chain between the two, which makes me wonder what features do matter that we'd want to ensure are preserved if we want people to be able to work on making the longterm future better, even under a government that is not what we'd consider to be democratic.

Maybe another way to put my comment: this post feels like it's taking for granted that liberal democracy is good for longtermism, and so wants to figure out what it is about liberal democracy that makes it good. I'd say it slightly differently: longtermism has been fostered within liberal democracies, so there must be something about liberal democracies that matters, but this doesn't imply that longtermism requires liberal democracy. We should cast a wider net and look at features of specific liberal democracies where longtermist work is flourishing without presupposing that it's somehow connected to the system of government. For example, maybe it's just that liberal democracies are rich and have lots of extra money to spend on "hobby" interests like longtermism, and any sufficiently rich society, no matter the government, would be able to foster longtermism. I don't know, but that's the kind of question that seems to me worth exploring.

Comment by G Gordon Worley III (gworley3) on Part 1: EA tech work is inefficiently allocated & bad for technical career capital · 2021-07-25T22:41:08.806Z · EA · GW

As best I can tell, you don't seem to address the main reasons most organizations choose not to outsource:

  • additional communication and planning friction
  • principal-agent problems

You could of course hand-wave here and try to say that since you propose an EA-oriented agency to serve EA orgs this would be less of an issue, but I'm skeptical: if such a model worked I'd expect, for example, never to have had a job at a startup, and instead to have worked for a large firm that specialized in providing tech services to startups. Given that there's a lot of money at stake in startups, it's worth considering whether these sorts of challenges will cause your plan to remain unappealing in reality, since, to continue with the example, most startups that succeed have in-house tech, not outsourced.

Comment by G Gordon Worley III (gworley3) on The case against “EA cause areas” · 2021-07-17T15:13:19.744Z · EA · GW

I think the obvious challenge here is how to be more inclusive in the ways you suggest without destroying the thing that makes EA valuable. The trouble as I see it is that you only have 4-5 words to explain an idea to most people, and I'm not sure you can cram the level of nuance you're advocating for into that for EA.

Comment by G Gordon Worley III (gworley3) on G Gordon Worley III's Shortform · 2021-06-22T12:59:04.027Z · EA · GW

This question on the EA Facebook group got some especially non-EA answers. This seems not great given that many people possibly first interact with EA via Facebook. I tend to ignore this group and maybe others do the same, but if this post is representative then we probably need to put more effort in there to make sure comments are moderated or replied to, so it's at least clear who is speaking from an EA perspective and who isn't.

Comment by G Gordon Worley III (gworley3) on Why should we be effective in our altruism? · 2021-05-31T04:37:01.247Z · EA · GW

You want more good and less bad in the world? Would it be better if we had a little more good and a little less bad? If so, then we should care about the efficiency of our efforts to make the world better.

*Note that I of course here mean something like efficiency that includes Pareto efficiency, not the narrow notion of efficiency we use every day; you could also say "effective", but you asked why giving should be effective, and we can ground effectiveness in Pareto efficiency across all the dimensions we care about.
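
To make that grounding concrete, here's one standard formalization of the Pareto notion I have in mind (my sketch, not anything from the original question). Letting u_1, ..., u_n be the dimensions of value we care about, an outcome y is a Pareto improvement over an outcome x when

\[
u_i(y) \ge u_i(x) \; \text{for all } i, \qquad u_j(y) > u_j(x) \; \text{for some } j,
\]

i.e. y is at least as good along every dimension and strictly better along at least one.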

Comment by G Gordon Worley III (gworley3) on Problem area report: mental health · 2021-05-20T18:07:31.814Z · EA · GW

I've been pretty skeptical that mental health is something EAs should focus on. One thing I see lacking in this report (apologies if it's there and I didn't find it) is a way of comparing it to alternatives: the question isn't whether mental health is a source of suffering for people, but whether addressing it compares favorably to other issues.

For example, I'd love something like a QALY analysis on mental health that would allow us to compare it to other cause areas more directly.

Comment by G Gordon Worley III (gworley3) on Should Chronic Pain be a cause area? · 2021-05-18T14:35:13.895Z · EA · GW

Having lived with someone who suffered chronic kidney stones, I can say that, at least within the US, a huge problem in recent years has been the over-reaction to the so-called opioid crisis. The result has been a decreased willingness to actually treat what we might call chronic acute pain, like the kind that comes from kidney stones.

This is a somewhat technical distinction I'm making here. Kidney stone pain is acute in that it has a clear cause that can be remediated. However, if someone produces kidney stones chronically (let's say at least one a month), they are chronically in acute pain. This creates a problem: standard treatment protocols for chronic pain don't always work, because this is a continuous level of pain above what's normally experienced by chronic pain sufferers, perhaps with the exception of migraines. But since migraine pain is best treated with non-opioid drugs, migraine sufferers don't run into the same problems as chronic kidney stone sufferers, who need repeated access to opioids to deal with pain that can break through maintenance pain medications.

The result is people with treatment-resistant chronic kidney stones left in agony because of restrictions on opioid drug use in the name of curbing abuse. To make matters worse, treatment can become a catch-22: chronic pain doctors won't treat such pain because it's "acute", and at some point other doctors stop wanting to treat repeated kidney stones because they are "chronic". The incentives are aligned perfectly to get doctors not to treat these patients, since they risk losing their license for improperly prescribing opioids. It doesn't matter whether a prescription is valid; all that matters is that it looks suspicious in a database, and doctors would rather avoid that attention than risk it to treat patients (of course not all doctors are like this, but a lot of them follow the incentives rather than work against them in the name of patient care).

Comment by G Gordon Worley III (gworley3) on Should Chronic Pain be a cause area? · 2021-05-18T14:26:27.602Z · EA · GW

Regarding the difference in prevalence between chronic pain in men and women, there's a tendency, at least within the US medical system, to dismiss women's pain more often than men's. A good example of this is pain resulting from endometriosis, which is often dismissed or downplayed by doctors as "just bad period cramps" rather than treated as a serious source of chronic pain. So too for many other sources of pain unique to women.

I don't have a source, but my experience is that most of this seems to be due to a variant of the typical mind fallacy: male doctors and some female doctors have never experienced similar pain, so they fail to appreciate its severity, sympathize with it less on the margin, and are more likely to recommend conservative treatment rather than aggressively try to remediate the pain.

Comment by G Gordon Worley III (gworley3) on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-11T05:09:49.661Z · EA · GW

My model is that the global angle is kind of boring: the drug war was pushed by the US, and I expect that if the US ends it then other nations will either follow its example or at least drift in random directions, with the US no longer imposing the drug war on them by threat of trade penalties.

Comment by G Gordon Worley III (gworley3) on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-07T16:38:34.941Z · EA · GW

I think this starts to get at questions of tractability, i.e. how neglected this is conditional on tractability (and vice versa). In my mind this is one of the big challenges of any kind of policy work where there's already a decent number of folks in the space: you have to have reasonably high confidence that you can do better than everyone else is doing now (not just that you have an idea for how to do better, but that you can actually succeed in executing better) in order for it to cross the bar of a sufficiently effective intervention (in expectation) to be worth working on.

Comment by G Gordon Worley III (gworley3) on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-06T20:56:20.214Z · EA · GW

I would expect this not to be very neglected, hence I would expect EAs to be able to have much impact here only if, for example, it's effectively neglected because the existing people pushing for an end to the drug war are unusually ineffective.

For example, there's already NORML, which has been working on the cannabis angle of this since the 1970s to decent success; Portugal has already ended the drug war locally; and Oregon recently decriminalized possession of drugs for personal use.

Getting involved feels a bit like getting involved in, say, marriage equality in the 2000s: the change was already clearly in motion, plenty of people were working to push for it, and so it's not clear EAs could have brought much additional to the table.

Comment by G Gordon Worley III (gworley3) on Why We Need Abundant Housing · 2021-04-30T14:58:12.617Z · EA · GW

On the one hand I'm in favor of more housing. I live in the SF Bay Area where this is also a problem, and really insufficient housing is a problem for all of California, so I'm naturally supportive of efforts to address this problem. However, I'm not sure this project is a high priority for EAs.

This seems like something that's not especially neglected (lots of people are thinking about ways to improve the housing situation in American cities) and also unlikely to have high impact in relative terms (viz. globally rich Americans are not suffering as much due to expensive, limited housing in desirable cities as the global poor, animals, or far future beings (in expectation)). Cf. ITN framework for why I'm thinking about these criteria.

I think it would be hard to convince me this is working on something neglected, but I'm pretty open to the idea that I might be wrong about impact, especially if better housing in American cities is somehow on a critical path to other, more obviously higher impact projects. I'd be interested if there are better arguments for why this is impactful enough to be prioritized over other, more obviously high impact causes.

Comment by G Gordon Worley III (gworley3) on Is the current definition of EA not representative of hits-based giving? · 2021-04-26T13:41:20.898Z · EA · GW

One, I'd argue that hits-based giving is a natural consequence of working through what using "high-quality evidence and careful reasoning to work out how to help others as much as possible" really means, since that statement doesn't say anything about excluding high-variance strategies. For example, many would say there's high-quality evidence about AI risk, lots of careful reasoning has been done to assess its impact on the long term future, and many have concluded that working on such things is likely to help others as much as possible, though we may not be able to measure that help for a long time and we may make mistakes.

Two, it's likely a strategic choice not to be in-your-face about high-variance giving strategies, since they are pretty weird to most people. EA orgs have chosen to develop a public brand that is broadly appealing and not controversial on the surface (even if EA ends up courting controversy anyway because of its consequences for opportunities we judge to be relatively less effective than others). The definitions of EA you point to seem in line with this.

Comment by G Gordon Worley III (gworley3) on Naturalism and AI alignment · 2021-04-24T19:35:21.490Z · EA · GW

I do like the idea of being able to construct an experiment to test naturalism. I think it's mistaken in that I doubt there are any facts about what is right and wrong to be discovered, by observing the world or otherwise, but currently I, like anyone else who wants to talk about metaethics, am forced to rely primarily on argumentation. Being able to run an experiment using minds different from our own seems quite compelling as a way of testing a variety of metaethical hypotheses.

Comment by G Gordon Worley III (gworley3) on Why we want to start a charity fortifying feed for hens (founders needed) · 2021-04-20T14:41:28.871Z · EA · GW

I'm also somewhat concerned because this seems like a clear case of a dual use intervention that makes life better for the animals but also confers benefits to the farmers that may ultimately result in more suffering rather than less by, for example, making chickens more palatable to consumers as "humanely farmed" (I'm guessing that's what is meant by "humane-washing") or making chicken production more profitable (either by humane-washing or by making the chickens produce a better quality meat product that is in higher demand).

Comment by G Gordon Worley III (gworley3) on Concerns with ACE's Recent Behavior · 2021-04-16T14:54:08.485Z · EA · GW

I can't seem to find the previous posts at the moment, but I have this sense that this is not an isolated issue and that ACE has some serious problems given that it draws continued criticism, not for its core mission, but for the way it carries that mission out. Although I can't remember at the moment what that other criticism was, I recall thinking "wow, ACE needs to get it together" or something similar. Maybe it has learned from those things and gotten better, but I notice I'm developing a belief that ACE is failing at the "effective" part of effective altruism.

Does this match what others are thinking or am I off?

Comment by G Gordon Worley III (gworley3) on What are your main reservations about identifying as an effective altruist? · 2021-03-30T14:43:28.263Z · EA · GW

I'll note that I used to have some reservations but no longer do, so I'll answer about why I previously had reservations.

When EA got interested in what we now call longtermism, it didn't seem obvious to me that EA was for me. My read was that EA was about near concerns like global poverty and animal welfare and not far concerns like x-risk and aging. So it seemed natural to me that I was on the outside of EA looking in because my primary cause area (though note that I wouldn't have thought of it that way at the time) wasn't clearly under the EA umbrella.

Obviously this has changed now, but hopefully this is useful for historical purposes. There may be folks who still feel this way about other causes, like effective governance, that are, from my perspective, on the fringes of what EA is focused on.

Comment by G Gordon Worley III (gworley3) on Some quick notes on "effective altruism" · 2021-03-24T18:02:14.646Z · EA · GW

"Effective Altruism" sounds self-congratulatory and arrogant to some people:

Your comments in this section suggest to me there might be something going on where EA is only appealing within some particular social context. Maybe it's appealing within WEIRD culture, and the further you get from peak WEIRD the more objections there are. Alternatively maybe there's something specific to northern European or even just Anglo culture that makes it work there and not work as well elsewhere, translation issues aside.

Comment by G Gordon Worley III (gworley3) on Is Democracy a Fad? · 2021-03-13T19:25:47.934Z · EA · GW

Running with the valley metaphor, perhaps the 1990s were when we reached the most verdant floor of the valley. It remains unclear if we're still there or have started to climb out and away from it, assuming the model to be correct.

Comment by G Gordon Worley III (gworley3) on Mentorship, Management, and Mysterious Old Wizards · 2021-02-25T19:56:22.770Z · EA · GW

The people I know of who are best at mentorship are quite busy. As far as I can tell, they are already putting effort into mentoring and managing people. Mentorship and management also both directly trade off against other high value work they could be doing.

There are people with more free time, but those people are also less obviously qualified to mentor people. You can (and probably should) have people across the EA landscape mentoring each other. But, you need to be realistic about how valuable this is, and how much it enables EA to scale.

Slight pushback here: I've seen plenty of folks who make good mentors but who wouldn't be doing a lot of mentoring if not for systems in place to make that happen (they stop doing it once they aren't within whatever system was supporting their mentoring). This makes me think there's a large supply of good mentors who just aren't connected in ways that help them match with people to mentor.

This suggests that a lot of the difficulty with having enough mentorship is that the best mentors need not only to be good at mentoring but also to be good at starting the mentorship relationship. Plenty of people, though, can be good mentors if someone does the matching for them and creates the context between them and the mentees.

Comment by G Gordon Worley III (gworley3) on Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? · 2021-01-15T21:34:39.574Z · EA · GW

On a related but different note, I wish there was a way to combine conversations on cross-posts between EA Forum and LW. I really like the way AI Alignment Forum works with LW and wish EA Forum worked the same way.

Comment by G Gordon Worley III (gworley3) on The Folly of "EAs Should" · 2021-01-06T18:59:42.905Z · EA · GW

I often make an adjacent point to folks, which is something like:

EA is not all one thing, just like the economy is not all one thing. Just as civilization as we know it doesn't work unless we have people willing to do different things for different reasons, EA depends on different folks doing different things for different reasons to give us a rounded out basket of altruistic "goods".

Like, if everyone thought saltine crackers were the best food and everyone competed to make the best saltines, we'd ultimately all be pretty disappointed to have a mountain of amazing saltine crackers and literally nothing else. So even in the world where saltines really are the best food and generate the most benefit by their production, it makes sense that we instrumentally produce other things so we can enjoy our saltines in full.

I think the same is true of EA. I care a lot about AI x-risk and it's what I focus on, but that doesn't mean I think everyone should do the same. In fact, if they did, I'm not sure it would be so good, because then maybe we stop paying attention to other causes that, if we don't address them, end up making trying to address AI risks moot. I'm always very glad to see folks working on things, even things I don't personally think are worthwhile, both because of uncertainty about what is best and because there's multiple dimensions along which it seems we can optimize (and would be happy if we did).

Comment by G Gordon Worley III (gworley3) on evelynciara's Shortform · 2021-01-05T19:35:14.514Z · EA · GW

I think it's worth saying that the context of "maximize paperclips" is not one where a person literally says the words "maximize paperclips" or something similar; it's instead an intuitive stand-in for building an AI capable of superhuman levels of optimization. If you set such an AI the task, say via a reward function, of creating an unbounded number of paperclips, you'll get it doing things you as a human wouldn't do to maximize paperclips, because humans have competing concerns and will stop when, say, they'd have to kill themselves or their loved ones to make more paperclips.
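
As a toy illustration of that point (my own sketch, not from the original discussion; all names and numbers are made up), here's what goes wrong when the reward function counts paperclips and nothing else:

    # Toy sketch: a reward that counts only paperclips gives a pure
    # optimizer no reason to weigh any competing human concern.
    actions = [
        {"name": "run the factory",     "paperclips": 10, "harm": 0},
        {"name": "melt down cars",      "paperclips": 50, "harm": 5},
        {"name": "strip-mine the city", "paperclips": 99, "harm": 100},
    ]

    def reward(action):
        # Harm never appears in the objective, so it cannot matter.
        return action["paperclips"]

    print(max(actions, key=reward)["name"])  # -> "strip-mine the city"

A human stops somewhere up this list; the maximizer picks the last option precisely because nothing in its objective tells it not to.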

The objection seems predicated on the interpretation of human language, which is beside the primary point. That is, you could address all the human-language interpretation issues and we'd still have an alignment problem; it just might not look literally like building a paperclip maximizer when someone asks the AI to make a lot of paperclips.

Comment by G Gordon Worley III (gworley3) on Can I have impact if I’m average? · 2021-01-03T21:10:24.594Z · EA · GW

I wrote about something similar about a year ago: https://forum.effectivealtruism.org/posts/Z94vr6ighvDBXmrRC/illegible-impact-is-still-impact

Comment by gworley3 on [deleted post] 2020-12-31T17:24:51.241Z

There's a lot to unpack in that tweet. I think something is going on like:

  • fighting about who is really the most virtuous
  • being upset people aren't more focused on the things you think are important
  • being upset that people claim status by doing things you can't or won't do
  • being jealous people are doing good doing things you aren't/can't/won't do
  • virtue signaling
  • righteous indignation
  • spillover of culture war stuff going on in SF

None of it looks like a real criticism of EA, but rather of lots of other things EA just happens to be adjacent to.

That doesn't mean it shouldn't be addressed or isn't an issue, but I think it's also worth keeping these kinds of criticisms in context.

Comment by gworley3 on [deleted post] 2020-12-31T17:20:19.717Z

I find others' answers about the low-resolution version of EA they actually see in the wild fascinating.

I go with the classic and if people ask I give them a three word answer: "doing good better".

If they ask for more, it's something like: "People want to do good in the world, and some good doing efforts produce better outcomes than others. EA is about figuring out how to get the best outcomes (or the largest positive impact) for time/money/effort relative to what a person thinks is important."

Comment by G Gordon Worley III (gworley3) on How modest should you be? · 2020-12-29T19:47:28.782Z · EA · GW

I realize this is a total tangent to the point of your post, but I feel you're giving short shrift here to continental philosophy.

If it were only about writing style, I'd say fair: continental philosophy has chosen a style of writing, resembling that used in other traditions, that tries to avoid over-simplifying and compressing understanding down into just a few words that are easily misunderstood. Whereas you see unclear writing, I see a desperate attempt to say anything detailed about reality without accidentally pointing in the wrong direction.

This is not to say that there aren't bad continental philosophers who hide behind this method to say nothing, but I think it's unfair to complain about it just because it's hard to understand and takes a lot of effort to suss out what is being said.

As to the central confusion you bring up, the unfortunate thing is that the worst argument in the world is technically correct: we can't know things as they are in themselves, only as we perceive them to be, i.e. there is no view from nowhere. Where it goes wrong is in thinking that just because we always know the world from some vantage point, trying to understand anything is pointless and any belief is equally useful. It can both be true that there is no objective way that things are and that some ways of trying to understand reality do better at helping us predict reality than others.

I think the confusion that the worst argument in the world immediately implies we can't know anything useful comes from only seeing that the map is not itself the territory but not also seeing that the map is embedded in the territory (no Cartesian dualism).

Comment by G Gordon Worley III (gworley3) on Morality as "Coordination" vs "Altruism" · 2020-12-29T19:21:21.859Z · EA · GW

I think this is often non-explicit in most discussions of morality/ethics/what-people-should-do. It seems common for people to conflate "actions that are bad because it ruins ability to coordinate" and "actions that are bad because empathy and/or principles tell me they are."

I think it's worth challenging the idea that this conflation is actually an issue with ethics.

It's true that coordination mechanisms and compassion are not literally the same thing, and they can have expressions that try to isolate themselves from each other (cf. market economies and prayer), so things that are bad because they break coordination mechanisms and things that are bad because they don't express compassion are not bad for exactly the same reasons. But this need not mean there is nothing deeper going on that ties them together.

I think this is why there tends to be a focus on meta-ethics among philosophers of ethics, rather than on directly trying to figure out what people should do, even when setting meta-ethical uncertainty aside. There's some notion of badness or undesirableness (and conversely goodness or desirableness) that powers both of these, so they are both different expressions of the same underlying phenomenon. We can reasonably tie the two approaches together by looking at the question of what makes something seem good or bad to us, and simply treat these as different domains over which we consider how to make good or bad things happen.

As to what good and bad mean, well, that's a larger discussion. My best theory is that in humans it's rooted in prediction error plus some evolved affinities, but folks are still trying to figure out what good and bad mean beyond our intuitive sense that something is good or bad.

Comment by G Gordon Worley III (gworley3) on Wholehearted choices and "morality as taxes" · 2020-12-23T01:30:21.791Z · EA · GW

Weird, that sounds strange to me. I don't really regret things, since I couldn't have done anything better than what I did under the circumstances or else I would have done it, so the idea of regret awakening compassion feels very alien. Guilt seems more clear-cut to me: I can do my best, my best may not be good enough, and I may be culpable for the suffering of others as a result, perhaps through insufficient compassion.

Comment by G Gordon Worley III (gworley3) on Wholehearted choices and "morality as taxes" · 2020-12-22T18:15:41.296Z · EA · GW

These cases seem not at all analogous to me because of the differing amount of uncertainty in each.

In the case of the drowning child, you presumably have high certainty that the child is going to die. The case is clear cut in that way.

In the case of the distant commotion on an autumn walk, it's just that, a distant commotion. As the walker, you have no knowledge about what it is and whether or not you could do anything. That you later learn you could have done something might lead you to experience regret, but in the moment you lacked information to make it clear you should have investigated. I think this entirely accounts for the difference in feeling about the two cases, and eliminates the power of the second case.

In the second case, any imposition on the walker to do anything hinges on their knowledge of what the result of the commotion will be. Given the uncertainty, you might reasonably conclude in the moment that it is better to avoid the commotion, maybe because you might do more harm than good by investigating.

Further, this isn't a case of negligence, where failing to respond to the commotion makes you complicit in the harm, because you have no responsibility for the machinery or for the conditions by which the man came to be pinned under it. Instead it seems to be a case where you are morally neutral throughout, because of your lack of knowledge and because you made no active effort to avoid gaining knowledge (deliberately staying ignorant to escape moral culpability would itself make you complicit, but that's not what happens here). So your example seems to lack the necessary conditions to make the point.

Comment by G Gordon Worley III (gworley3) on Incompatibility of moral realism and time discounting · 2020-12-12T20:43:03.517Z · EA · GW

Could the seeming contradiction be resolved by greater specificity of statements?

For example, rather than abandoning "Everyone should sell everything that begins with a 'C', but nothing that begins with an 'A'" as a norm, we might realize we underspecified it to begin with and really meant "Everyone should sell everything called by a word that begins with a 'C' in English, but nothing called by a word that begins with an 'A' in English". If objections remained, we could get even more specific, until we were no longer at risk of underspecifying what we mean and suffering from relativity.

In the same vein, maybe the contradiction in the thought experiment could be resolved by being more specific and including more context about the world. For example, cf. this attempt at thinking about preferences as conditioned on the entire state of the world. Maybe the same sort of technique could be applied here.

Comment by G Gordon Worley III (gworley3) on EAs working at non-EA organizations: What do you do? · 2020-12-10T19:44:20.931Z · EA · GW
  • Where do you work, and what do you do?

I'm a software engineer at Plaid working on the Infrastructure team. My main project is leading our internal observability efforts.

  • What are some things you've worked on that you consider impactful?

In terms of EA impact at my current job, not much. I view this as an earning-to-give situation where I'm taking my expertise as a software engineer and turning it into donations. I think there's some argument that Plaid has a positive impact on the world by enabling lots of new financial applications built on our APIs, thereby increasing access to financial resources for those who historically had the least access to them. But I don't work directly on that stuff, instead working on the things that enable the org to carry out its mission.

I will say I considered some other jobs, say working at Facebook or continuing to work on ads as I had been doing, and although the mission was not the primary reason I chose Plaid, it is nice that I don't worry I might work on something that harms the world.

  • What are a few ways in which you bring EA ideas/mindsets to your current job?

I often use the TIN framework informally in work and elsewhere in life. It's sort of baked into my soul to think about tractability, impact, and neglectedness when thinking about what to do. Plaid has a big internal focus on the idea of impact, including having a positive impact on the world, and of course as an engineer there's plenty of focus on doing things that are tractable (possible). Neglectedness considerations mostly show up in what I personally choose to work on: I look for things where I can have impact, that are tractable, and that are being neglected by others such that I can make things better in ways that are currently not being pursued. In a growing organization this is easy, because there's often a lot of stuff we'd do if someone had more time to do it, so then it largely becomes a question of prioritizing between different neglected issues.

Comment by G Gordon Worley III (gworley3) on Does Qualitative Research improve drastically with increasing expertise? · 2020-12-07T16:46:50.940Z · EA · GW

I think this holds true in more traditionally "quantitative" fields too, because things are often useful or not depending on how they are framed, such that without the proper framing good numbers don't matter, because they aren't measuring the right thing.

This seems to suggest that a lot of what makes quantitative research successful also makes qualitative research successful, and so we should expect any extent to which expertise matters in quantitative fields to matter in qualitative fields (although I think this mostly points at the quant/qual distinction being a very fuzzy one that is only relevant along certain dimensions).

Comment by G Gordon Worley III (gworley3) on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T00:17:59.480Z · EA · GW

Jonas also mentioned to me that EA Funds is considering offering Donor-Advised Funds that could grant to individuals as long as there’s a clear charitable benefit. If implemented, this would also allow donors to provide tax-deductible support to individuals.

This is pretty exciting to me. Without going into too much detail, I expect to have a large amount of money to donate in the near future, and LTF is basically the best option I know of (in terms of giving based on what I most want to give to) for the bulk of that money short of having the ability to do exactly this. I'd still want LTF as a fall back for funds I couldn't figure out how to better allocate myself, but the need for tax deductibility limits my options today (though, yes, there are donor lotteries).

Comment by G Gordon Worley III (gworley3) on Long-Term Future Fund: Ask Us Anything! · 2020-12-03T21:58:13.953Z · EA · GW

LTF covers a lot of ground. How do you prioritize between different cause areas within the general theme of bettering the long term future?

Comment by G Gordon Worley III (gworley3) on Long-Term Future Fund: Ask Us Anything! · 2020-12-03T21:57:20.597Z · EA · GW

How much room for additional funding does LTF have? Do you have an estimate of how much money you could take on and still achieve your same ROI on the marginal dollar donated?

Comment by G Gordon Worley III (gworley3) on Long-Term Future Fund: Ask Us Anything! · 2020-12-03T21:56:34.373Z · EA · GW

Do you have any plans to become more risk tolerant?

Without getting too much into details, I disagree with some things you've chosen not to fund, and as an outsider I view the fund as too unwilling to take risks on projects, especially projects where you don't know the requesters well, and to truly pursue a hits-based model. I really like some of the big bets you've taken in the past, for example funding people doing independent research who then produce what I consider useful or interesting results. But I'm somewhat hesitant about donating to LTF because I'm not sure it takes enough risks to be a clearly better choice, for someone like me who's fairly risk-tolerant with their donations, than donating to other established projects or just donating directly (though donating directly has the disadvantage of making it hard for me to give something like seed funding and still get tax advantages).

Comment by G Gordon Worley III (gworley3) on Where are you donating in 2020 and why? · 2020-11-24T00:00:05.086Z · EA · GW

I'm being strategic in 2020 and shifting much of my giving for it into 2021 because I expect a windfall, but here's where I chose to give this year:

  • AI Safety Support
    • I think the work Linda (and now JJ) are doing is great and is woefully underfunded. I would give them more sooner but I have to shift that into 2021. They've had some trouble getting funding from more established sources for reasons I don't endorse but don't want to go into here, and I think giving to them now is especially high leverage to help AISS bootstrap.
    • I'll be giving $5k soon and plan to donate more once the funds to do so are unlocked.
    • Read Linda's post about AISS for more details.
  • MIRI
    • MIRI keeps doing great work on AI safety, and I've been especially impressed with Scott and Abram in the last couple years. I've cut back on some of my funding to MIRI because I view them as less neglected now relative to other things I could fund, but I continue to support them via Amazon Smile.
  • Wikipedia
    • This feels a little bit like paying for utilities I use, but I get a lot of value out of Wikipedia and think everyone who can should donate $5 or $10 to them. It also seems generally useful for maintaining and improving a source of facts in a world that is increasingly uncertain about what facts even are.
  • Alcor
    • I have a cryonics contract with Alcor, and I pay annual dues to them. Most of this is counted as charitable giving.
  • Bay Zen Center
    • This isn't really EA giving, but it is charitable giving to a religious organization (full disclosure, I'm on the board of the Center). They get about 2% of my income. Listed for completeness.
  • Long Term Future Fund
    • LTF is generally aligned with my giving priorities and will get any marginal additional funding I don't have a better idea about how to allocate.

Long term my objective is to donate 30-50% of my income (limited by tax incentives and marginal value of money until I resolve some large outstanding expenses), but today it's closer to 5%.

Comment by G Gordon Worley III (gworley3) on How to best address Repetitive Strain Injury (RSI)? · 2020-11-19T17:28:48.388Z · EA · GW

I've had RSI in the past, though not from typing but from repetitive motions loading paper into a machine for scanning. I didn't need to see a doctor about it; addressing it was ultimately pretty straightforward, and I was able to keep doing the job that caused it while I recovered. Things I did:

  • wore a stabilizing wrist brace to alleviate the strain on my wrist that was causing pain, even when I was not engaged in an activity that would necessarily cause pain
  • paid attention to and changed my motions to reduce wrist strain
  • rearranged my work so I had more breaks and fewer long periods of continually performing the motion (I had other job responsibilities, so it was easy to interleave breaks from one thing with work on another)

It's now more than 10 years since I developed RSI, and maybe 4 years since I last needed the wrist brace (my need for it rapidly decreased once I left the job). I think no longer needing it correlated with increased strength, specifically from indoor rock climbing and related conditioning.

Comment by G Gordon Worley III (gworley3) on What is a book that genuinely changed your life for the better? · 2020-10-21T23:57:44.108Z · EA · GW

I've got a few:

  • GEB
    • Put me on the path to something like thinking of rationality as something intuitive/S1 rather than something I have to think about with a lot of deliberation/S2.
  • Seven Habits of Highly Effective People
    • I often forget how much this book is "in the water" for me. There's all kinds of great stuff in here about prioritization, relationships, and self-improvement. It can feel a little like platitudes at times, but it's really great.
  • The Design of Everyday Things
    • This is kind of out there, but this gave me a strong sense of the importance of grounding ideas in their concrete manifestation. It's not enough to have a good idea; it has to actually produce the desired good effects in the world, too.
  • Getting Things Done
    • There are alternatives to this, but it made my life better by helping me adopt a "systems first" mindset: realizing that I can improve my life with systems/procedures, and that having them well defined and as automatic as possible pays dividends over time.
  • The Evolving Self
    • A very dense book about adult developmental psychology. Doesn't necessarily lay out the best possible model of adult psychological development, but it really got me deep on this and set me on a path that made my life much better.
  • Siddhartha
    • Okay, one book of fiction, but it's a coming of age story and contains something like suggestions for how to relate to your own life. This one was a slow burn for me: I didn't realize the effect it had had on me until I reread it years later.

Comment by G Gordon Worley III (gworley3) on EA's abstract moral epistemology · 2020-10-20T23:51:36.485Z · EA · GW

My somewhat uncharitable reaction while reading this was something like "people running ineffective charities are upset that EAs don't want to fund them, and their philosopher friend then tries to argue that efficiency is not that important".

Comment by G Gordon Worley III (gworley3) on Michael_Wiebe's Shortform · 2020-10-16T21:04:12.386Z · EA · GW

I'm a big fan of ideas like this. One of the things I think EAs can bring to charitable giving that is otherwise missing from the landscape is being risk-neutral, and thus willing to bet on high-variance strategies that, taken as a whole in a portfolio, may have the same or hopefully higher expected returns compared to typical risk-averse charitable spending, which tends to focus on making sure no money is wasted to the exclusion of taking the risks necessary to realize benefits.

Comment by G Gordon Worley III (gworley3) on Evidence on correlation between making less than parents and welfare/happiness? · 2020-10-13T20:48:33.494Z · EA · GW

Taking a predictive processing perspective, we should expect to see an initial decrease in happiness upon finding oneself living a less expensive lifestyle, because it would be a regular "surprise" violating the expected outcome, but then over time we'd expect this surprise to go away as daily evidence slowly retrains the brain to expect less, and so to produce less negative emotional valence upon perceiving the actual conditions.
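
A toy simulation of that story (entirely my own sketch; the linear update rule and all numbers are assumptions for illustration, not a claim about actual neuroscience):

    # Surprise (prediction error) is large at first and decays as the
    # brain's expectation retrains toward the new, lower lifestyle level.
    expected = 100.0      # assumed prior lifestyle level
    actual = 60.0         # assumed new, less expensive lifestyle
    learning_rate = 0.05  # assumed daily belief-update speed

    for day in range(121):
        surprise = expected - actual  # drives the negative valence
        if day % 30 == 0:
            print(f"day {day:3d}: expected {expected:6.2f}, surprise {surprise:6.2f}")
        expected += learning_rate * (actual - expected)  # slow retraining

The initial surprise of 40 falls below 2 within a couple of months under these assumptions, matching the predicted recovery in happiness.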

However, I'd still expect someone who "fell from grace" like this to be somewhat sadder than a person who rose to the same level of wealth or grew up at it, because they'd have more sad moments of nostalgia for better times that the others would lack. But this would likely be a small effect and not easily detectable (I'd expect it to be washed out by noise in a study).

Comment by G Gordon Worley III (gworley3) on Open Communication in the Days of Malicious Online Actors · 2020-10-07T08:52:06.971Z · EA · GW

Without rising to the level of maliciousness, I've noticed a pattern related to the ones you describe here: sometimes my writing attracts supporters who don't really understand my point and whose statements of support I would not endorse because they misunderstand the ideas. They are easy to tolerate because they say nice things and may come to my defense against people who disagree with me, but much like your many flavors of malicious supporters, they can ultimately have negative effects.

Comment by G Gordon Worley III (gworley3) on If you like a post, tell the author! · 2020-10-07T08:43:22.591Z · EA · GW

I like the general idea here, but personally I dislike comments that don't tell the reader new information, so just saying the equivalent of "yay" without adding something is likely to get a downvote from me if the comment is upvoted, especially if it gets upvoted above more substantial comments.

Comment by G Gordon Worley III (gworley3) on Some thoughts on the effectiveness of the Fraunhofer Society · 2020-10-01T00:26:24.541Z · EA · GW

I was quite surprised to hear how large the Fraunhofer Society is, given that I'd never heard of it before! I think this is in and of itself a kind of evidence against their effectiveness, although I could also imagine they've turned out some winning innovations as part of contracts, so their involvement gets lost because I think of the result as a thing that company X did.

Comment by G Gordon Worley III (gworley3) on Here's what Should be Prioritized as the Main Threat of AI · 2020-09-11T00:39:24.835Z · EA · GW

It seems unclear to me that one model's CO2 emissions being greater than one car's necessarily implies that AI is likely to have an outsized impact on climate change. I think there are some missing calculations here: the number of models, the number of cars, how much additional marginal CO2 is being created that isn't accounted for by other segments, and how much marginal impact on climate change to expect from the additional CO2 from AI models. With those in hand, we could potentially assess how much additional risk AI poses for climate change in the short term.
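
To illustrate the kind of back-of-the-envelope comparison I mean (every number below is a placeholder assumption I made up, not an estimate):

    # Compare total annual training emissions against total annual car
    # emissions; "one model > one car" alone doesn't settle this.
    CO2_PER_MODEL_TONNES = 300    # assumed: one large training run
    MODELS_PER_YEAR = 1_000       # assumed: such runs per year
    CO2_PER_CAR_TONNES = 4.6      # assumed: one car, one year
    CARS_IN_USE = 1_000_000_000   # assumed: cars worldwide

    model_total = CO2_PER_MODEL_TONNES * MODELS_PER_YEAR
    car_total = CO2_PER_CAR_TONNES * CARS_IN_USE

    print(f"models: {model_total:,} t/yr, cars: {car_total:,.0f} t/yr")
    print(f"AI share: {model_total / (model_total + car_total):.5%}")

Under these particular placeholders, AI training is a tiny slice of the total even though a single model out-emits a single car, which is exactly why the per-unit comparison tells us little on its own; real estimates (and growth rates) could of course change the picture.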

Comment by G Gordon Worley III (gworley3) on How have you become more (or less) engaged with EA in the last year? · 2020-09-09T18:33:24.308Z · EA · GW

Mixed. On the one hand, I feel like I'm less involved because I have less time for engaging with people on the forum and during events and am spending less time on EA-aligned research and writing.

On the other, that's in no small part because I took a job that pays a lot more than my old one, dramatically increasing my ability to give, but it also requires a lot more of my time. So I've sort of transitioned towards an earning-to-give relationship with EA that leaves me feeling more on the outside but still connected and benefiting from EA to guide my giving choices and keep me motivated to give rather than keep more for myself.

Comment by G Gordon Worley III (gworley3) on It's Not Hard to Be Morally Excellent; You Just Choose Not To Be · 2020-08-24T19:21:00.167Z · EA · GW

While I appreciate what the author is getting at, as presented I think it shows a lack of compassion for how difficult it is to do what one reckons one ought to do.

It's true you can simply "choose" to be good, but this is about as easy as saying that, for a wide variety of things X that don't require special skills, all you have to do to do X is choose to do X: wake up early, exercise, eat healthier food when it is readily available, etc. Despite this, lots of people try to explicitly choose to do these things and fail anyway. What's up?

The issue lies in what it means to choose. Unless you suppose some notion of free will, choosing is actually not that easy to control, because there are a lot of complex brain functions essentially competing to determine whatever you do next. So "choosing" actually looks a lot more like "setting up a lot of conditions, both in the external world and in your mind, such that a particular choice happens" than like some atomic, free-willed choice spontaneously happening. Getting to the point where you feel you can simply choose to do the right thing all the time requires a tremendous amount of alignment between the different parts of the brain competing to produce your next action.

I think it's best to take this article as a kind of advice. Sometimes the only thing keeping you from doing what you believe you ought to do is some minor hold-up where you don't believe you can do it, and accepting that you can do it suddenly means that you can. But most of the time the fruit will not hang so low, and instead there will be a lot else to do in order to do what one considers morally best.

Comment by G Gordon Worley III (gworley3) on "Good judgement" and its components · 2020-08-21T16:40:28.636Z · EA · GW

Cool. Yeah, when I saw this it jumped out at me as potentially helping with what I see as a problem: there are a bunch of folks who are either EA-aligned or identify as EA and are also anti-LW, and I would argue those folks are to some extent throwing the baby out with the bathwater. So having a nice way to rebrand and talk about some of the insights from LW-style rationality that are clearly present in EA, and that we might reasonably like to share with others, without relying on LW-centric content is useful.