Posts

G Gordon Worley III's Shortform 2020-08-19T02:09:07.652Z · score: 6 (1 votes)
Expected value under normative uncertainty 2020-06-08T15:45:24.374Z · score: 14 (5 votes)
Vive la Différence? Structural Diversity as a Challenge for Metanormative Theories 2020-05-26T00:45:01.131Z · score: 17 (5 votes)
Comparing the Effect of Rational and Emotional Appeals on Donation Behavior 2020-05-26T00:24:25.239Z · score: 23 (12 votes)
Rejecting Supererogationism 2020-04-20T16:19:16.032Z · score: 10 (4 votes)
Normative Uncertainty and the Dependence Problem 2020-03-23T17:29:03.369Z · score: 15 (7 votes)
Chloramphenicol as intervention in heart attacks 2020-02-17T18:47:44.328Z · score: -1 (4 votes)
Illegible impact is still impact 2020-02-13T21:45:00.234Z · score: 102 (43 votes)
If Veganism Is Not a Choice: The Moral Psychology of Possibilities in Animal Ethics 2020-01-20T18:07:53.003Z · score: 15 (9 votes)
EA and the Paramitas 2020-01-15T03:17:18.158Z · score: 8 (5 votes)
Normative Uncertainty and Probabilistic Moral Knowledge 2019-11-11T20:26:07.702Z · score: 13 (4 votes)
TAISU 2019 Field Report 2019-10-15T01:10:40.645Z · score: 19 (6 votes)
Announcing the Buddhists in EA Group 2019-07-02T20:41:23.737Z · score: 25 (13 votes)
Best thing at EAG SF 2019? 2019-06-24T19:19:49.700Z · score: 16 (7 votes)
What movements does EA have the strongest synergies with? 2018-12-20T23:36:55.641Z · score: 8 (2 votes)
HLAI 2018 Field Report 2018-08-29T00:13:22.489Z · score: 10 (10 votes)
Avoiding AI Races Through Self-Regulation 2018-03-12T20:52:06.475Z · score: 4 (4 votes)
Prioritization Consequences of "Formally Stating the AI Alignment Problem" 2018-02-19T21:31:36.942Z · score: 2 (2 votes)

Comments

Comment by gworley3 on What is a book that genuinely changed your life for the better? · 2020-10-21T23:57:44.108Z · score: 9 (3 votes) · EA · GW

I've got a few:

  • GEB
    • Put me on the path to something like thinking of rationality as something intuitive/S1 rather than something I have to think about with a lot of deliberation/S2.
  • Seven Habits of Highly Effective People
    • I often forget how much this book is "in the water" for me. There's all kinds of great stuff in here about prioritization, relationships, and self-improvement. It can feel a little like platitudes at times, but it's really great.
  • The Design of Everyday Things
    • This is kind of out there, but this book gave me a strong sense of the importance of grounding ideas in their concrete manifestation. It's not enough to have a good idea; it has to actually produce the desired good effects in the world, too.
  • Getting Things Done
    • There are alternatives to this, but it made my life better by helping me adopt a "systems first" mindset: well-defined systems/procedures, made as automatic as possible, pay dividends over time.
  • The Evolving Self
    • A very dense book about adult developmental psychology. Doesn't necessarily lay out the best possible model of adult psychological development, but it really got me deep on this and set me on a path that made my life much better.
  • Siddhartha
    • Okay, one book of fiction, but it's a coming of age story and contains something like suggestions for how to relate to your own life. This one was a slow burn for me: I didn't realize the effect it had had on me until I reread it years later.
Comment by gworley3 on EA's abstract moral epistemology · 2020-10-20T23:51:36.485Z · score: 8 (5 votes) · EA · GW

My somewhat uncharitable reaction while reading this was something like "people running ineffective charities are upset that EAs don't want to fund them, and their philosopher friend then tries to argue that efficiency is not that important".

Comment by gworley3 on Michael_Wiebe's Shortform · 2020-10-16T21:04:12.386Z · score: 2 (1 votes) · EA · GW

I'm a big fan of ideas like this. One of the things EAs can bring to charitable giving that is otherwise missing from the landscape is risk-neutrality: a willingness to bet on high-variance strategies that, taken as a whole in a portfolio, may have the same or hopefully higher expected returns than typical risk-averse charitable spending, which tends to focus on ensuring no money is wasted to the exclusion of taking the risks necessary to realize benefits.
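
A toy sketch of the risk-neutrality point (all numbers made up, not from any real charity evaluation): a risk-neutral donor cares only about the expected value of a (probability, payoff) gamble, so a high-variance bet can beat a near-certain one even though it usually pays nothing.

```python
def expected_value(outcomes):
    """Expected value of a list of (probability, payoff-per-dollar) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# "Safe" charity: near-certain, modest benefit per dollar (hypothetical).
safe = [(0.99, 1.2), (0.01, 0.0)]

# High-variance strategy: usually fails, occasionally pays off hugely (hypothetical).
risky = [(0.95, 0.0), (0.05, 30.0)]

print(round(expected_value(safe), 3))   # 1.188
print(round(expected_value(risky), 3))  # 1.5
```

A risk-neutral donor prefers the risky option here, and a portfolio of many such independent bets has lower variance than any single one, which is the usual argument for holding them together.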

Comment by gworley3 on Evidence on correlation between making less than parents and welfare/happiness? · 2020-10-13T20:48:33.494Z · score: 2 (1 votes) · EA · GW

Taking a predictive processing perspective, we should expect an initial decrease in happiness upon finding oneself living a less expensive lifestyle, because it would be a regular "surprise" violating the expected outcome. Over time, though, this surprise should go away as daily evidence slowly retrains the brain to expect less, so that perceiving the actual conditions carries less negative emotional valence.

However, I'd still expect someone who "fell from grace" like this to be somewhat sadder than a person who rose to the same level of wealth or grew up at it, because they'd have more sad moments of nostalgia for better times that the others lack. This would likely be a small effect and not easily detectable (I'd expect it to be washed out by noise in a study).

Comment by gworley3 on Open Communication in the Days of Malicious Online Actors · 2020-10-07T08:52:06.971Z · score: 12 (5 votes) · EA · GW

Without rising to the level of maliciousness, I've noticed a pattern related to the ones you describe here: sometimes my writing attracts supporters who don't really understand my point and whose statements of support I would not endorse, because they misunderstand the ideas. They are easy to tolerate because they say nice things and may come to my defense against people who disagree with me, but much like your many flavors of malicious supporters, they can ultimately have negative effects.

Comment by gworley3 on If you like a post, tell the author! · 2020-10-07T08:43:22.591Z · score: 3 (2 votes) · EA · GW

I like the general idea here, but personally I dislike comments that don't tell the reader new information, so just saying the equivalent of "yay" without adding something is likely to get a downvote from me if the comment is upvoted, especially if it gets upvoted above more substantial comments.

Comment by gworley3 on Some thoughts on the effectiveness of the Fraunhofer Society · 2020-10-01T00:26:24.541Z · score: 5 (3 votes) · EA · GW

I was quite surprised to hear how large the Fraunhofer Society is, given I'd never heard of it before! In and of itself that is a kind of evidence against their effectiveness, although I could also imagine they've turned out some winning innovations as parts of contracts, so that their involvement gets lost because I think of the result as a thing that company X did.

Comment by gworley3 on Here's what Should be Prioritized as the Main Threat of AI · 2020-09-11T00:39:24.835Z · score: 4 (3 votes) · EA · GW

It seems unclear to me that one model emitting more CO2 than one car necessarily implies that AI is likely to have an outsized impact on climate change. There are some missing calculations here: the number of models, the number of cars, how much additional marginal CO2 is being created that isn't accounted for by other segments, and how much marginal impact on climate change should be expected from the additional CO2 from AI models. With those in hand, we could assess how much additional short-term climate risk AI poses.

Comment by gworley3 on How have you become more (or less) engaged with EA in the last year? · 2020-09-09T18:33:24.308Z · score: 13 (7 votes) · EA · GW

Mixed. On the one hand, I feel like I'm less involved because I have less time for engaging with people on the forum and during events and am spending less time on EA-aligned research and writing.

On the other, that's in no small part because I took a job that pays a lot more than my old one, dramatically increasing my ability to give, but it also requires a lot more of my time. So I've sort of transitioned towards an earning-to-give relationship with EA that leaves me feeling more on the outside but still connected and benefiting from EA to guide my giving choices and keep me motivated to give rather than keep more for myself.

Comment by gworley3 on It's Not Hard to Be Morally Excellent; You Just Choose Not To Be · 2020-08-24T19:21:00.167Z · score: 13 (8 votes) · EA · GW

While I appreciate what the author is getting at, as presented I think it shows a lack of compassion for how difficult it is to do what one reckons one ought to do.

It's true you can simply "choose" to be good, but that is about as easy as saying that, for a wide variety of things X that require no special skills, all you have to do is choose to do X: wake up early, exercise, eat healthier food when it's readily available, and so on. Despite this, lots of people try to explicitly choose to do these things and fail anyway. What's up?

The issue lies in what it means to choose. Unless you suppose some strong notion of free will, choosing is actually not that easy to control: a lot of complex brain processes are essentially competing to determine what you do next, so "choosing" looks a lot more like "setting up conditions, both in the external world and in your mind, such that a particular choice happens" than like some atomic, free-willed choice spontaneously happening. Getting to the point where you feel you can simply choose to do the right thing all the time requires a tremendous amount of alignment between the different parts of the brain competing to produce your next action.

I think it's best to take this article as a kind of advice. Sometimes the only thing keeping you from doing what you believe you ought to do is a minor hold-up, not believing you can do it, and accepting that you can suddenly means that you can. But most of the time the fruit will not hang so low, and there will be a lot else to do in order to do what one considers morally best.

Comment by gworley3 on "Good judgement" and its components · 2020-08-21T16:40:28.636Z · score: 11 (4 votes) · EA · GW

Cool. Yeah, when I saw this it jumped out at me as potentially helping with what I see as a problem: there are a bunch of folks who are either EA-aligned or identify as EA and are also anti-LW. I'd argue those folks are to some extent throwing the baby out with the bathwater, so it's useful to have a way to rebrand and talk about some of the insights from LW-style rationality that are clearly present in EA, and that we might reasonably like to share with others, without actually relying on LW-centric content.

Comment by gworley3 on "Good judgement" and its components · 2020-08-20T18:07:33.453Z · score: 2 (6 votes) · EA · GW

To what extent are you thinking (without so far explicitly saying it) that "good judgment" is a possible EA rebranding of LessWrong-style rationality?

Comment by gworley3 on G Gordon Worley III's Shortform · 2020-08-19T02:09:22.329Z · score: 3 (2 votes) · EA · GW

Reading this article about the security value of inefficiency, I get the idea that a possibly neglected policy area for EAs is economic resilience: the idea that we can increase people's welfare in both the short and long term by ensuring our economies don't become brittle or fragile and collapse. Such a collapse would wipe out the welfare gains of modern economies and cut off (or at least set back) paths to greater welfare gains through future economic growth, causing harm and making it economically unviable to work on averting existential risks.

Seems possibly related to other policy work focused on things like improving institutions for similar reasons, but more directed at economic policy rather than institution design.

Comment by gworley3 on Donating effectively does not necessarily imply donating tax-deductibly · 2020-08-18T19:09:06.742Z · score: 19 (6 votes) · EA · GW

One place where EAs paying taxes in the US can probably have differential impact is when their donations are less than the standard deduction they can take on their taxes. Impact concerns aside, unless you're donating enough (together with other itemized deductions) to exceed your standard deduction, you get little or no tax benefit from donating to registered charities. All of your donations will be post-tax anyway, so you have a unique opportunity to give funds to EA-aligned causes that are otherwise neglected by larger donors because those donors can't get the tax benefits.

Some examples: giving small (less than $10k USD) "angel" donations to not-yet-fully-established causes that are still organizing themselves and do not (or never will) have charitable tax status, and participating in a donor lottery.

Plenty of caveats to this, of course: employer matching may make it worthwhile to give to registered charities even if you yourself won't reap any tax benefits, and state-level standard deductions are smaller than federal ones, so it's often worth itemizing charitable giving on state returns even when it's not on federal ones.
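
To make the standard-deduction point concrete, here's a toy sketch with made-up figures (a 24% marginal rate, a $12,400 federal standard deduction, and $8,000 of other itemizable deductions; none of these numbers come from the comment above). Only the portion of itemized deductions above the standard deduction actually reduces taxable income.

```python
STANDARD_DEDUCTION = 12_400  # hypothetical federal standard deduction
MARGINAL_RATE = 0.24         # hypothetical marginal tax rate
OTHER_ITEMIZED = 8_000       # hypothetical other itemizable deductions

def tax_benefit(donation):
    """Federal tax saved by a charitable donation, relative to
    just taking the standard deduction."""
    itemized = OTHER_ITEMIZED + donation
    # Only the excess over the standard deduction reduces taxable income.
    excess = max(0, itemized - STANDARD_DEDUCTION)
    return round(min(excess, donation) * MARGINAL_RATE, 2)

print(tax_benefit(3_000))   # 0.0    -- donation fully below the threshold
print(tax_benefit(10_000))  # 1344.0 -- only $5,600 of it is tax-advantaged
```

Below the threshold the donation is entirely post-tax either way, which is the window where giving to non-deductible causes costs nothing extra.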

Comment by gworley3 on Shifts in subjective well-being scales? · 2020-08-18T18:56:08.631Z · score: 5 (3 votes) · EA · GW

Might help to see how this is handled, if at all, with pain scales. For example, I can imagine someone thinking they're having 9/10 or 10/10 pain, say from an injury, but then after something much worse happens, say a cluster headache or a kidney stone, realizing the injury pain was only a 6/10 or 7/10 and the cluster headache or kidney stone was the actual 10/10.

I know there is already some work on cross-cultural issues with the pain scale, with people from different cultures reporting, and possibly even experiencing, their pain as more or less severe than people from other cultures, so that might be an entry point for this line of investigation.

Comment by gworley3 on Book Review: Deontology by Jeremy Bentham · 2020-08-12T18:55:02.530Z · score: 5 (3 votes) · EA · GW

I really enjoyed reading this and learned a lot about Bentham I didn't know (though that's a low bar, since I haven't spent much time studying him). I get the sense that his ideas on utilitarianism are convergent in the limit with, say, typical virtue ethics, only he gets there by a different route. I also get the sense he didn't foresee super-optimization and was very much thinking about humans, who do something closer to satisficing.

Comment by gworley3 on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-08-08T19:38:44.992Z · score: 4 (2 votes) · EA · GW

I think I agree, but my point is maybe more that the policy as worded now should allow this, so the policy probably needs to be worded more clearly so that a post like this is more clearly excluded.

Comment by gworley3 on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-08-07T18:08:47.601Z · score: 3 (4 votes) · EA · GW

FWIW, I don't think this post actually endorses a specific candidate; instead it asks whether endorsing a specific candidate makes sense. Maybe that's too close for comfort, but I don't see this post as arguing for a particular candidate, rather as asking for arguments for or against one. Thus, as the policy is worded now, this seems okay for frontpage or community to me.

Comment by gworley3 on The world is full of wasted motion · 2020-08-06T16:21:27.851Z · score: 3 (2 votes) · EA · GW

FWIW, I think this is a better fit for LessWrong than EA Forum.

Comment by gworley3 on Recommendations for increasing empathy? · 2020-08-02T22:03:27.352Z · score: 3 (2 votes) · EA · GW

Enough meditation seems to pretty reliably increase empathy. My guess is there are studies purporting to show this, but I'm making this suggestion mostly based on personal observation. There's some risk of survivorship bias in this, though, so I don't know how repeatable this suggestion is for the average person.

Comment by gworley3 on What values would EA want to promote? · 2020-07-09T16:27:34.831Z · score: 14 (11 votes) · EA · GW

At its heart, EA seems to naturally tend to promote a few things:

  • a larger moral circle is better than a smaller one
  • considered reasoning ("rationality") is better than doing things for other reasons alone
  • efficiency in generating outcomes is better than being less efficient, even if it means being less appealing at an emotional level

I don't know that any of these are what EA should promote, and I'm not sure anyone can unilaterally decide what is normative for EA, so instead I offer these as the norms I think EA is currently promoting in fact, regardless of what anyone thinks it should be promoting.

Comment by gworley3 on Ramiro's Shortform · 2020-07-05T01:01:33.504Z · score: 2 (2 votes) · EA · GW

One challenge will be that any attempt to time donations based on economic conditions risks becoming a backdoor attempt to time the market, which is notoriously hard.

Comment by gworley3 on Democracy Promotion as an EA Cause Area · 2020-07-01T18:03:48.196Z · score: 3 (4 votes) · EA · GW
EA organizations are also less likely to be perceived as biased or self-interested actors.

I think this is unlikely. EAs disproportionately come from wealthy democratic nations and those who have reason to resist democratic reform will have an easy time painting EA participation in democracy promotion as a slightly more covert version of foreign-state-sponsored attempts at political reform. Further, EAs are also disproportionately from former colonizing states that have historically dominated other states, and I don't think that correlation will be ignored.

This is not to say I necessarily think it is the case that EA attempts at democracy promotion would in fact be covert extensions of existing efforts that have negative connotations, only that I think it will be possible to argue and convince people that they are, making this not an actual advantage.

Comment by gworley3 on Slate Star Codex, EA, and self-reflection · 2020-06-26T20:31:56.485Z · score: 29 (13 votes) · EA · GW

The downvotes are probably because, indeed, the claims only make sense if you look at the level of something like "has Scott ever said anything that could be construed as X". I think a complete engagement with SSC doesn't support the argument, and it's specifically the fact that SSC is willing to address issues in full, without flinching away from topics that might make a person "guilty by association", that makes it a compelling blog.

Comment by gworley3 on Dignity as alternative EA priority - request for feedback · 2020-06-25T22:52:02.154Z · score: 3 (3 votes) · EA · GW

I think there's a case that QALY/DALY/etc. calculations should factor in dignity in some way, and that mismatches between, say, QALY calculations and what feels "right" in terms of dignity should be viewed as a sign that the calculations may be leaving something important out. For example, if intervention X produces 10 QALYs but makes someone feel 10% less dignified, then either the 10 QALY figure should already incorporate that cost to dignity or it should be adjusted to consider it. There seems to be a strong case for more nuanced calculation of metrics, especially so we don't miss cases where ignoring something like dignity would make an intervention look good when it is overall bad once dignity is factored in. That this has come up and seems to be an issue suggests some calculations people are doing today fail to factor it in.

Comment by gworley3 on Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? · 2020-06-23T02:59:56.522Z · score: 4 (3 votes) · EA · GW

I think we don't quite have the words to distinguish between all these things in English, but in my mind there's something like

  • pain - the experience of negative valence
  • suffering - the experience of pain (i.e. the experience of the experience of negative valence)
  • expected suffering - the experience of pain that was expected, so you only suffer for the pain itself
  • unexpected suffering - the experience of pain that was not expected, so you suffer both the pain itself and the pain of suffering itself from it not being expected and thus having negative valence

Of them all, unexpected suffering is the worst because it involves both pain and meta-pain.

Comment by gworley3 on What are good sofware tools? What about general productivity/wellbeing tips? · 2020-06-15T17:58:40.659Z · score: 1 (1 votes) · EA · GW

I live by the advice that the best tools are the ones that are available, so I love to use Google products with few modifications, since the same tools/data are then accessible on multiple platforms.

I only regularly use a few other things that are either specific to my job or are needed to fill gaps in Google's product line for core use cases I have, like Pocket and Feedly, and even those I'm constantly checking to see if I could get away with not using them.

Thus my task list, documents, calendar, etc. are all in Google.

Comment by gworley3 on How to make the most impactful donation, in terms of taxes? · 2020-06-15T16:30:47.321Z · score: 10 (6 votes) · EA · GW

In some states and municipalities the combined rate is higher due to local taxes. For example, in California the maximum marginal rate is 37% federal + 13.3% state = 50.3%.

Comment by gworley3 on Is the Buy-One-Give-One model an effective form of altruism? · 2020-06-08T16:05:58.102Z · score: 4 (3 votes) · EA · GW

I think whether B1G1 is effective depends on what you care about. It's clearly not the most effective way to, say, give shoes to people without shoes, since it creates an inefficiency by tying the supply of free shoes to the demand for shoes from wealthy people. And this is to say nothing of whether giving shoes to people without shoes is an effective use of money relative to the alternatives.

But maybe B1G1 is effective at making people more altruistic, and is an effective intervention for creating the conditions under which people will give more effectively. Intuitively I doubt it and expect it fails on this measure, because of effects like people feeling their do-gooding responsibility has been discharged by their purchase: having already bought $X of "good" goods, they feel they owe others less altruism, decreasing their giving on the margin. But I'm not an expert here, so I could very well be wrong.

The difficulty is that B1G1 potentially has many effects to consider beyond the direct good done via giving. That we even need to consider those effects is itself evidence, in my eyes, that it's not effective: we don't, for example, think much about how giving money to AMF will influence people's charitable thoughts, since we already feel pretty good about the outcomes of the donations.

Comment by gworley3 on Trade Heroism For Grit. · 2020-06-08T15:56:26.140Z · score: 2 (2 votes) · EA · GW

I think this is a great point.

In the startup world there's a similar notion. Starting a successful business can seem impossible, and many of the big successes depend to some extent on luck. It's hard to control for having the right idea at the right time, and people who do manage it are usually lucky rather than good at it, and only believe otherwise due to survivorship bias.

But the reality of what you can do to make a business succeed or fail is not in having the right idea, but what we might call the right effort. It's putting in the work, having the grit to keep going, and building the skills to improve your baseline chances of success.

Put another way, you can't make yourself lucky, but you can make yourself prepared to take advantage of luck when it appears so as not to fail to take advantage of an opportunity presented to you.

I think this same idea translates back into EA. There's a lot of unseen work that goes into improving the world. It's easy to look at someone who is already having impact, and all the things they did and the conditions that made their impact possible, and feel like it would be impossible for you to do that yourself. But they did it, and so can you; it just takes a willingness to put in years of work to make yourself ready to achieve something.

I think a useful framing is to see the grit and determination to keep going as the real heroism, not the highly visible stuff that gains you accolades or is causally near impact.

Comment by gworley3 on Why and how the EA-Movement has to change · 2020-05-29T16:26:30.373Z · score: 25 (16 votes) · EA · GW

Although this is getting downvotes, I find it interesting in that it points out that at least one local group (and so probably more) is operating in ways that turn off interested folks. Unfortunately we don't know which group, but I encourage the poster to reach out to someone at CEA; maybe they can look into it and see if there is anything they can do to help this group improve (if that is indeed appropriate) as part of their community-building efforts.

But I think it's worth highlighting that here we have someone who cares enough about EA that they came here to post about how frustrated they are with their experience of it! That suggests there is likely some opportunity to do better embedded in this.

Comment by gworley3 on [deleted post] 2020-05-26T00:14:39.556Z

These problems appear promising based on the "TIN" framework, which looks for problems that are tractable, important/impactful, and neglected. The reason EAs tend not to focus on particular issues is usually that the issues are either considered insufficiently tractable or are already receiving enough attention that marginal effort by EAs would have less effect than work on more neglected areas.

I often think of EA as looking for the highest return on investment of money, attention, and effort, and that often means ignoring various issues because they offer comparatively worse ROI, even if they are important. For example, most cancer research is important and tractable (insofar as people keep coming up with ideas and working on them), but it's not neglected (people already put billions of dollars into it), so unless you find some neglected corner of cancer research, the marginal impact of an EA is small. By contrast, an EA can have a large marginal impact in the areas mentioned because they are relatively neglected in addition to being important and, it is believed, tractable.

Comment by gworley3 on Developing my inner self vs. doing external actions · 2020-05-25T03:29:44.151Z · score: 3 (3 votes) · EA · GW

More generally I think this is a question of what is sometimes called the explore/exploit trade-off: how much time do you spend building capacity versus using that capacity, in cases where effort on the two doesn't overlap.

In the real world there tends to be a lot of overlap, but there is always some marginal amount given up at any choice made along the explore/exploit Pareto frontier. So there's no one answer since it largely depends on what you are trying to achieve, other than to say you should look to expand the frontier wherever possible so you can get more of both.

Comment by gworley3 on Ben_Snodin's Shortform · 2020-05-06T17:45:00.812Z · score: 7 (3 votes) · EA · GW

FWIW I think you should make this a top level post.

Comment by gworley3 on Outcome vs process · 2020-05-04T20:57:50.979Z · score: 2 (2 votes) · EA · GW

I don't know of any good resources to point you at, but I'll add this comment about how I see this as it exists in the EA community.

EA has a tendency to focus on outcomes. This makes sense given the philosophy of EA, and especially makes sense when you look at the state of charitable giving outside EA, where just paying serious attention to outcomes at all would be a major change (in the sense of focusing on things like ROI, impact, effectiveness and efficiency, etc.).

But as always when you set a direction, it's easy to overshoot and end up with more of what you wanted than you now want, i.e. too much focus on outcomes and not enough on process. So I think EA actively has to work to make sure it balances process with outcomes, given the outcome-focused outlook of the movement. So far I think folks have done a good job of this (at least in recent years, maybe less so in the early days), but when speaking with EAs I still feel a regular pressure to keep seeking the balance rather than letting outcomes overtake process.

Comment by gworley3 on saulius's Shortform · 2020-04-24T17:00:20.918Z · score: 1 (1 votes) · EA · GW

Thanks for the gdocs to markdown tip. I didn't know I could do that, but it'll make writing posts for LW and EAF more convenient!

Comment by gworley3 on COVID-19 in developing countries · 2020-04-23T15:50:59.861Z · score: 6 (5 votes) · EA · GW

Assuming you mean this seriously: I think most people value human lives for more than their economic products, such that most people are willing to spend more on someone than that person contributes back to the global economy. Yes, sometimes people make arguments from economics to try to assess how much we value a human life in monetary terms, but those tend to look at how much we actually spend on such efforts, which in rich countries works out to about $50,000/year (looking primarily at medical spending), not at how much the average person produces.

Comment by gworley3 on Why Don’t We Use Chemical Weapons Anymore? · 2020-04-23T15:44:53.642Z · score: 1 (1 votes) · EA · GW

I think you make a great point, and it in fact fits with the reasoning here. Although militaries are mobile and stealthy, civilians, even during wartime, remain rooted and obvious. That's just the nature of things: it's much easier to make soldiers mobile than it is to make civilians because during a war, not considering the value of human life for its own sake, civilians serve purposes tied to fixed resources like farms and factories. This suggests that chemical weapons should still be appealing in war, but only against civilian targets.

Quick Googling isn't getting me a list of times chemical warfare agents were used, but I expect such a list would show a trend toward use primarily against civilians after the First World War.

Comment by gworley3 on The Case for Impact Purchase | Part 1 · 2020-04-23T15:33:40.561Z · score: 3 (3 votes) · EA · GW
This could be a particularly interesting time to trial impact purchases used in conjunction with government UBI (if that ends up being fully brought in anywhere). UBI then removes the barrier of requiring a secure salary before taking on a project.

Impact purchases + EA Hotel seems like a match made in heaven. EA Hotel even talks about taking a hits-based approach, so having a pool of funds to award both to the EA Hotel (or whatever its new name is) and to the people staying there who did the work that earned the funding sounds like a pretty interesting idea!

Comment by gworley3 on The Case for Impact Purchase | Part 1 · 2020-04-23T15:27:46.545Z · score: 3 (2 votes) · EA · GW

That's pretty cool, but it seems mostly focused on supporting government agencies using this as an alternative funding mechanism to save money, defer costs, or avoid paying for undelivered services, thus improving government spending efficiency. I wonder what it would take to develop that into something supporting a wider range of funding sources? Seems like something someone with expertise and experience in finance could pioneer as a neglected way to generally support EA.

Comment by gworley3 on The Case for Impact Purchase | Part 1 · 2020-04-14T15:42:46.673Z · score: 15 (9 votes) · EA · GW

I've not thought about this idea much or read the linked articles on impact purchases, but a few quick thoughts:

  • I think prizes suffer from only incentivizing the most risk-tolerant, since there is generally an element of competition and the winner often takes all or most of the prize funds.
  • Impact purchases seem like an improvement over this if you set it up like a grant that pays at the end rather than the beginning, so it's tied to a single project/team and not a competition.
  • There might be a hybrid model possible where some funds are granted at the start of a project to cover costs and additional funding is awarded as certain milestones are hit, up to and including completion of the project. Some of the completion money would reward impact rather than just fund the next phase (as it would in a grant), with most of the impact award money held back until the end.
  • This lets me imagine funding something at like 20% of the value of its impact up until it is created, at which point I pay off the remaining 80% owed.
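The hybrid model sketched in the bullets above could be expressed as a simple payout schedule. A minimal sketch follows: the 20%/80% split comes from the comment itself, but the idea of splitting the held-back funds evenly between milestone payments and a final impact award is purely an illustrative assumption, as are the function name and parameters.

```python
def payout_schedule(total_value, upfront_share=0.2, milestones=2):
    """Split funding into an upfront grant, equal milestone payments,
    and a final impact award paid only on completion.

    The upfront_share and the 50/50 split of the remainder between
    milestones and the final award are illustrative assumptions.
    """
    upfront = total_value * upfront_share
    remaining = total_value - upfront
    # Pay half the remainder out across milestones as the project
    # progresses; hold the other half back as the impact award.
    milestone_pool = remaining * 0.5
    per_milestone = milestone_pool / milestones
    final_award = remaining - milestone_pool
    return {
        "upfront": upfront,
        "per_milestone": per_milestone,
        "final_award": final_award,
    }
```

For a project whose impact is valued at 100, this pays 20 up front, 20 at each of two milestones, and 40 on completion, so most of the impact-contingent money arrives only after the work is done.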
Comment by gworley3 on Why I'm Not Vegan · 2020-04-09T15:55:50.734Z · score: 12 (10 votes) · EA · GW

Upvoted for sharing your reason for downvoting. I wish people did this more often!

Comment by gworley3 on What do we mean by 'suffering'? · 2020-04-08T15:42:00.806Z · score: 3 (2 votes) · EA · GW

Yes, Lukas's post was what got me thinking about suffering in more detail and helped lead to the creation of those two posts. I think it's linked from one or both of them.

Comment by gworley3 on What do we mean by 'suffering'? · 2020-04-07T16:24:52.641Z · score: 14 (6 votes) · EA · GW

I wrote two posts exploring suffering, both with plenty of links to more resources thinking about what we mean by "suffering": "Is Feedback Suffering?" and "Suffering and Intractable Pain".

My views have evolved since I wrote those posts so I don't necessarily endorse everything in them anymore, but hopefully they are useful starting points. For what it's worth, my view now is more akin to the traditional Buddhist view on suffering as described by the teaching on dependent origination.

Comment by gworley3 on Normative Uncertainty and the Dependence Problem · 2020-03-24T18:26:01.886Z · score: 1 (1 votes) · EA · GW

Yeah, sounds interesting!

Comment by gworley3 on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-18T19:12:46.480Z · score: 4 (3 votes) · EA · GW

You don't mention this, and maybe there is no research on it, but do we expect there to be much opportunity for resistance effects, similar to what we see with antibiotics and the evolution of resistant strains?

For example, would the deployment of large numbers of far-ultraviolet lamps create selection pressure on microbes to become resistant to them? I think it's not clear, since, for example, we don't seem to see many heat-resistant microbes evolving (outside of places like thermal vents) even though we regularly use high heat to kill them.

And even if it did, would it be worth the tradeoff? For example, I think that even if we had known about the possibility of antibiotic-resistant bacteria when penicillin was created, we would still have used penicillin extensively because it was able to cure so many diseases and increase human welfare, though perhaps with greater care about protocols and their enforcement. With that hindsight, maybe we would do something similar here with far-ultraviolet light.

Comment by gworley3 on [deleted post] 2020-03-17T16:49:53.296Z

I think no.

History is full of plagues and other global threats of similar or worse scale. It could be argued, for example, that bubonic plague or smallpox were much bigger threats to humanity and to individual humans than COVID-19. Yes, from the inside COVID-19 feels particularly threatening, but I think that has more to do with the context in which it is happening, i.e. a world where many people felt something like this couldn't really happen. Smallpox, on the other hand, just kept killing people for hundreds of years, and everyone accepted it as part of life. So on that measure COVID-19 doesn't seem special to simulate vs. other similar threats humanity has faced.

Further, it's hard to see why COVID-19 would be of interest to simulators. Presumably they would be technologically advanced enough that COVID-19 would be unlikely to teach them anything about a specific situation they might face, so simulating it would serve only historical purposes. Hence I think the only relevant question is whether COVID-19 is interesting enough to be more likely simulated than other past events, and I think the answer is no, so it offers no update to the likelihood that we are in a simulation.

Comment by gworley3 on Is nanotechnology (such as APM) important for EAs' to work on? · 2020-03-12T17:56:40.091Z · score: 4 (3 votes) · EA · GW

There are at least two quite different things that go by the term "nanotechnology": atomically precise manufacturing (e.g. Drexler, grey goo, and the other stuff the term originally referred to) and nanoscale materials science (i.e. advanced modern materials science that uses various techniques, but not APM, to create materials whose properties derive from controlling nanoscale features). Which did you have in mind? I think that will affect the kinds of answers people give.

Comment by gworley3 on Should effective altruists give money to local beggars? · 2020-02-28T22:45:18.165Z · score: 1 (1 votes) · EA · GW

My impression is that many of these beggars earn enough to survive, albeit in poverty, so your marginal dollar is probably more effective elsewhere: most people don't decide whether to give to them based on EA principles, so others will continue to support them. If you consider local homelessness a top priority, my guess is that interventions other than small direct gifts would be more effective, though I haven't looked into it.

Comment by gworley3 on Option Value, an Introductory Guide · 2020-02-21T18:25:24.729Z · score: 3 (3 votes) · EA · GW

Thanks for this. I didn't know that option value was an established concept in the literature rather than just a common pattern in reasoning. Having handles for things is often useful, and I really appreciate it when people bring them explicitly into EA. Like the rationality community, EA has a tendency to reinvent terms for existing ideas out of unfamiliarity with the wider literature. That's not a complaint: humans have figured out so much that it's sometimes hard to know someone else has already worked out the same ideas, especially if they did so in a different domain from the one where you are working.