Posts

Expected value under normative uncertainty 2020-06-08T15:45:24.374Z · score: 14 (5 votes)
Vive la Différence? Structural Diversity as a Challenge for Metanormative Theories 2020-05-26T00:45:01.131Z · score: 17 (5 votes)
Comparing the Effect of Rational and Emotional Appeals on Donation Behavior 2020-05-26T00:24:25.239Z · score: 23 (12 votes)
Rejecting Supererogationism 2020-04-20T16:19:16.032Z · score: 10 (4 votes)
Normative Uncertainty and the Dependence Problem 2020-03-23T17:29:03.369Z · score: 15 (7 votes)
Chloramphenicol as intervention in heart attacks 2020-02-17T18:47:44.328Z · score: -1 (4 votes)
Illegible impact is still impact 2020-02-13T21:45:00.234Z · score: 101 (42 votes)
If Veganism Is Not a Choice: The Moral Psychology of Possibilities in Animal Ethics 2020-01-20T18:07:53.003Z · score: 15 (9 votes)
EA and the Paramitas 2020-01-15T03:17:18.158Z · score: 8 (5 votes)
Normative Uncertainty and Probabilistic Moral Knowledge 2019-11-11T20:26:07.702Z · score: 13 (4 votes)
TAISU 2019 Field Report 2019-10-15T01:10:40.645Z · score: 19 (6 votes)
Announcing the Buddhists in EA Group 2019-07-02T20:41:23.737Z · score: 25 (13 votes)
Best thing at EAG SF 2019? 2019-06-24T19:19:49.700Z · score: 16 (7 votes)
What movements does EA have the strongest synergies with? 2018-12-20T23:36:55.641Z · score: 8 (2 votes)
HLAI 2018 Field Report 2018-08-29T00:13:22.489Z · score: 10 (10 votes)
Avoiding AI Races Through Self-Regulation 2018-03-12T20:52:06.475Z · score: 4 (4 votes)
Prioritization Consequences of "Formally Stating the AI Alignment Problem" 2018-02-19T21:31:36.942Z · score: 2 (2 votes)

Comments

Comment by gworley3 on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-08-08T19:38:44.992Z · score: 2 (1 votes) · EA · GW

I think I agree, but my point is more that the policy as currently worded allows this, so the policy probably needs to be worded more clearly so that a post like this is unambiguously excluded.

Comment by gworley3 on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-08-07T18:08:47.601Z · score: 2 (3 votes) · EA · GW

FWIW, I don't think this post actually endorses a specific candidate; instead it asks whether endorsing a specific candidate makes sense. Maybe that's too close for comfort, but I see this post not as arguing for a particular candidate but as asking for arguments for or against one. Thus, as the policy is worded now, this seems okay for frontpage or community to me.

Comment by gworley3 on The world is full of wasted motion · 2020-08-06T16:21:27.851Z · score: 2 (1 votes) · EA · GW

FWIW, I think this is a better fit for LessWrong than EA Forum.

Comment by gworley3 on Recommendations for increasing empathy? · 2020-08-02T22:03:27.352Z · score: 3 (2 votes) · EA · GW

Enough meditation seems to pretty reliably increase empathy. My guess is there are studies purporting to show this, but I'm making this suggestion mostly based on personal observation. There's some risk of survivorship bias in this, though, so I don't know how repeatable this suggestion is for the average person.

Comment by gworley3 on What values would EA want to promote? · 2020-07-09T16:27:34.831Z · score: 14 (11 votes) · EA · GW

At its heart, EA seems to naturally tend to promote a few things:

  • a larger moral circle is better than a smaller one
  • considered reasoning ("rationality") is better than doing things for other reasons alone
  • efficiency in generating outcomes is better than being less efficient, even if it means being less appealing at an emotional level

I don't know that any of these are what EA should promote, and I'm not sure there's anyone who can unilaterally decide what is normative for EA, so instead I offer these as the norms I think EA is currently promoting in fact, regardless of what anyone thinks it should be promoting.

Comment by gworley3 on Ramiro's Shortform · 2020-07-05T01:01:33.504Z · score: 2 (2 votes) · EA · GW

One challenge will be that any attempt to time donations based on economic conditions risks becoming a backdoor attempt to time the market, which is notoriously hard.

Comment by gworley3 on Democracy Promotion as an EA Cause Area · 2020-07-01T18:03:48.196Z · score: 3 (4 votes) · EA · GW
> EA organizations are also less likely to be perceived as biased or self-interested actors.

I think this is unlikely. EAs disproportionately come from wealthy democratic nations and those who have reason to resist democratic reform will have an easy time painting EA participation in democracy promotion as a slightly more covert version of foreign-state-sponsored attempts at political reform. Further, EAs are also disproportionately from former colonizing states that have historically dominated other states, and I don't think that correlation will be ignored.

This is not to say that I think EA attempts at democracy promotion would in fact be covert extensions of existing efforts with negative connotations, only that I think it will be possible to argue and convince people that they are, making this not an actual advantage.

Comment by gworley3 on Slate Star Codex, EA, and self-reflection · 2020-06-26T20:31:56.485Z · score: 23 (12 votes) · EA · GW

The downvotes are probably because, indeed, the claims only make sense if you look at the level of something like "has Scott ever said anything that could be construed as X". I think engaging with SSC as a whole doesn't support the argument, and it's precisely the fact that SSC is willing to address issues in full, without flinching away from topics that might make a person "guilty by association", that makes it a compelling blog.

Comment by gworley3 on Dignity as alternative EA priority - request for feedback · 2020-06-25T22:52:02.154Z · score: 2 (2 votes) · EA · GW

I think there's a case that QALY/DALY/etc. calculations should factor in dignity in some way, and that we should view mismatches between, say, QALY calculations and what feels "right" in terms of dignity as a sign that the calculations may be leaving something important out. For example, if intervention X produces 10 QALYs and makes someone feel 10% less dignified, then either we want to be sure the 10 QALY figure already incorporates that cost to dignity, or it should be adjusted to account for it. There seems to be a strong case for more nuanced calculation of metrics, especially so we don't miss cases where ignoring something like dignity would lead us to think an intervention was good when in fact it is overall bad once dignity is factored in. That this has come up and seems to be an issue suggests some calculations people are doing today fail to factor it in.
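To make that concrete, here is a minimal sketch (in Python, with made-up numbers and a simple linear adjustment chosen purely for illustration, not an established method for valuing dignity):

```python
# Illustrative sketch only: the numbers and the linear adjustment are hypothetical.
def dignity_adjusted_qalys(raw_qalys: float, dignity_change: float) -> float:
    """Scale a QALY estimate by a fractional change in perceived dignity.

    dignity_change is, e.g., -0.10 for "feels 10% less dignified".
    """
    return raw_qalys * (1.0 + dignity_change)

# Intervention X: 10 QALYs on paper, but a 10% loss of dignity.
print(dignity_adjusted_qalys(10.0, -0.10))  # 9.0
```

Whether a linear scaling like this is the right adjustment is exactly the kind of question a more nuanced metric would need to answer.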

Comment by gworley3 on Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? · 2020-06-23T02:59:56.522Z · score: 4 (3 votes) · EA · GW

I think we don't quite have the words to distinguish between all these things in English, but in my mind there's something like

  • pain - the experience of negative valence
  • suffering - the experience of pain (i.e. the experience of the experience of negative valence)
  • expected suffering - the experience of pain that was expected, so you suffer only from the pain itself
  • unexpected suffering - the experience of pain that was not expected, so you suffer both from the pain itself and from the additional pain of it being unexpected, which itself carries negative valence

Of them all, unexpected suffering is the worst because it involves both pain and meta-pain.

Comment by gworley3 on What are good sofware tools? What about general productivity/wellbeing tips? · 2020-06-15T17:58:40.659Z · score: 1 (1 votes) · EA · GW

I live by the advice that the best tools are the ones that are available, so for that reason I love to use Google products with few modifications so the same tools/data are accessible on multiple platforms.

I only regularly use a few other things that are either specific to my job or are needed to fill gaps in Google's product line for core use cases I have, like Pocket and Feedly, and even those I'm constantly checking to see if I could get away with not using them.

Thus my task list, documents, calendar, etc. are all in Google.

Comment by gworley3 on How to make the most impactful donation, in terms of taxes? · 2020-06-15T16:30:47.321Z · score: 10 (6 votes) · EA · GW

In some states and municipalities the tax rate is higher due to local taxes. For example, in California the maximum combined marginal rate is roughly 37% + 13.3% = 50.3%.

Comment by gworley3 on Is the Buy-One-Give-One model an effective form of altruism? · 2020-06-08T16:05:58.102Z · score: 4 (3 votes) · EA · GW

I think whether or not B1G1 is effective depends on what you care about. It's clearly not the most effective way to, say, give shoes to people without shoes, since it creates an inefficiency by tying the supply of free shoes to the demand for shoes from wealthy people. And this is to say nothing of whether giving shoes to people without shoes is an effective use of money relative to the alternatives.

But maybe B1G1 is effective at making people more altruistic, and is an effective intervention for creating the conditions under which people will give more effectively. Intuitively I doubt it and expect it fails on this measure, because of effects like people feeling their responsibility to do good is discharged by their purchase: having already bought $X of "good" goods, they may feel they owe others less altruism, decreasing their giving on the margin. But I'm not an expert here, so I could very well be wrong.

The difficulty is that B1G1 potentially has many effects to consider beyond the direct good done via giving. That we even need to consider those effects is itself evidence, in my eyes, that it's not effective: we don't, for example, think much about how giving money to AMF will influence people's charitable thinking, since we already feel pretty good about the outcomes of the donations themselves.

Comment by gworley3 on Trade Heroism For Grit. · 2020-06-08T15:56:26.140Z · score: 2 (2 votes) · EA · GW

I think this is a great point.

In the startup world there's a similar notion. Starting a successful business can seem impossible, and many of the big successes depend to a certain extent on luck. It's hard to control for having the right idea at the right time, and people who do manage it are usually lucky rather than skilled at timing, and only believe otherwise due to survivorship bias.

But the reality of what you can do to make a business succeed is not having the right idea, but what we might call the right effort. It's putting in the work, having the grit to keep going, and building the skills to improve your baseline chances of success.

Put another way, you can't make yourself lucky, but you can make yourself prepared to take advantage of luck when it appears so as not to fail to take advantage of an opportunity presented to you.

I think this same idea translates back into EA. There's a lot of unseen work that goes into improving the world. It's easy to look at someone who is already having an impact, all the things they did, and the conditions they found themselves in that made that impact possible, and feel like it would be impossible for you to do the same. But they did it, and so can you; it just takes a lot of work and a willingness to put in years of it to make yourself ready to achieve something.

I think a useful framing is to see the grit and determination to keep going as the real heroism, not the highly visible stuff that gains you accolades or is causally near impact.

Comment by gworley3 on Why and how the EA-Movement has to change · 2020-05-29T16:26:30.373Z · score: 25 (16 votes) · EA · GW

Although this is getting downvotes, I do find it interesting, at least in that it points out that at least one local group (and probably more) is operating in ways that turn off interested folks. Unfortunately we don't know which group, but I encourage the poster to reach out to someone at CEA; maybe they can look into it and see if there is anything they can do, as part of their community-building efforts, to help this group improve (if that is indeed appropriate).

But I think it's worth highlighting that here we have someone who cares enough about EA that they came here to post about how frustrated they are with their experience of it! I think that points to a real opportunity to do better embedded in this!

Comment by gworley3 on [deleted post] 2020-05-26T00:14:39.556Z

These problems appear promising based on the "TIN" framework, which looks for problems that are tractable, important/impactful, and neglected. The reason EAs tend not to focus on particular issues is usually that those issues are either considered insufficiently tractable or are already receiving enough attention that marginal effort by EAs would be unlikely to have as much of an effect as work on more neglected areas.

I often think of EA as looking for the highest return on investment of money, attention, and effort, and that often means ignoring various issues because they offer comparatively worse ROI, even if they are important. For example, most cancer research is important and tractable (insofar as people keep coming up with ideas and working on them), but it's not neglected (people already put billions of dollars into it), so unless you find some corner of cancer research that is neglected, the marginal impact of an EA there is small, whereas an EA can have a large marginal impact in the areas mentioned because they are relatively neglected in addition to being important and, it is believed, tractable.
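As a toy illustration of that kind of comparison (the factor scores below are entirely made up, and multiplying them together is just one simple way to combine tractability, importance, and neglectedness):

```python
# A minimal sketch of the TIN-style comparison described above.
# All scores are hypothetical, on an arbitrary 0-10 scale.
def marginal_value(importance: float, tractability: float, neglectedness: float) -> float:
    """Rough relative value of one extra unit of effort: higher is better."""
    return importance * tractability * neglectedness

causes = {
    "mainstream cancer research": marginal_value(importance=9, tractability=6, neglectedness=1),
    "relatively neglected cause": marginal_value(importance=7, tractability=5, neglectedness=8),
}
for name, score in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

The point isn't the particular numbers, just that a low neglectedness score drags down the marginal value of even very important, tractable work.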

Comment by gworley3 on Developing my inner self vs. doing external actions · 2020-05-25T03:29:44.151Z · score: 3 (3 votes) · EA · GW

More generally I think this is a question of what is sometimes called the explore/exploit trade-off: how much time do you spend building capacity compared to using that capacity, in cases where effort on those actions doesn't overlap.

In the real world there tends to be a lot of overlap, but there is always some marginal amount given up at any point you choose along the explore/exploit Pareto frontier. So there's no one answer, since it largely depends on what you are trying to achieve, other than to say you should look to expand the frontier wherever possible so you can get more of both.
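For what it's worth, the standard formalization of explore/exploit is the multi-armed bandit, which only loosely maps onto the capacity-building case but shows the structure of the trade-off. A minimal epsilon-greedy sketch (the two "arms" and their payoffs are invented purely for illustration):

```python
import random

# Epsilon-greedy bandit: spend a small fraction of rounds exploring,
# and otherwise exploit whichever option currently looks best.
def epsilon_greedy(payoffs, epsilon=0.1, rounds=1000):
    estimates = {arm: 0.0 for arm in payoffs}
    counts = {arm: 0 for arm in payoffs}
    total = 0.0
    for _ in range(rounds):
        if random.random() < epsilon:            # explore
            arm = random.choice(list(payoffs))
        else:                                    # exploit
            arm = max(estimates, key=estimates.get)
        reward = random.gauss(*payoffs[arm])     # (mean, stddev) payoff per unit of effort
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total

# Two hypothetical ways to spend a unit of time.
print(epsilon_greedy({"build capacity": (1.2, 0.5), "direct work": (1.0, 0.5)}))
```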

Comment by gworley3 on Ben_Snodin's Shortform · 2020-05-06T17:45:00.812Z · score: 7 (3 votes) · EA · GW

FWIW I think you should make this a top level post.

Comment by gworley3 on Outcome vs process · 2020-05-04T20:57:50.979Z · score: 2 (2 votes) · EA · GW

I don't know of any good resources to point you at, but I'll add this comment about how I see this as it exists in the EA community.

EA has a tendency to focus on outcomes. This makes sense given the philosophy of EA, and especially makes sense when you look at the state of charitable giving outside EA, where just paying serious attention to outcomes at all would be a major change (in the sense of focusing on things like ROI, impact, effectiveness and efficiency, etc.).

But as always when you set a direction, it's easy to overshoot and end up with more of what you were aiming for than you actually want, i.e. too much focus on outcomes and not enough on process. So I think EA has to actively work to make sure it doesn't forget to balance process with outcomes, given the outcome-focused outlook of the movement. So far I think folks have done a good job of this (at least in the last few years, maybe less so in the early days), but when speaking with EAs I also feel a regular pressure to keep seeking that balance rather than letting outcomes crowd out attention to process.

Comment by gworley3 on saulius's Shortform · 2020-04-24T17:00:20.918Z · score: 1 (1 votes) · EA · GW

Thanks for the gdocs to markdown tip. I didn't know I could do that, but it'll make writing posts for LW and EAF more convenient!

Comment by gworley3 on COVID-19 in developing countries · 2020-04-23T15:50:59.861Z · score: 6 (5 votes) · EA · GW

Assuming you mean this seriously, I think most people value human lives for more than their economic products, such that most people are willing to spend more on saving a life than what that person contributes back to the global economy. Yes, sometimes people make arguments from economics to try to assess how much we value a human life in terms of money, but these tend to look at how much we actually spend on such efforts, which in rich countries works out to about $50,000/year when looking primarily at medical spending, not at how much the average person produces.

Comment by gworley3 on Why Don’t We Use Chemical Weapons Anymore? · 2020-04-23T15:44:53.642Z · score: 1 (1 votes) · EA · GW

I think you make a great point, and it in fact fits with the reasoning here. Although militaries are mobile and stealthy, civilians, even during wartime, remain rooted and obvious. That's just the nature of things: it's much easier to make soldiers mobile than civilians because, setting aside the value of human life for its own sake, civilians during a war serve purposes tied to fixed resources like farms and factories. This suggests that chemical weapons should still be appealing in war, but only against civilian targets.

Quick Googling isn't getting me something like a list of the times chemical warfare agents have been used, but I expect it would show a trend toward use primarily against civilians after the First World War.

Comment by gworley3 on The Case for Impact Purchase | Part 1 · 2020-04-23T15:33:40.561Z · score: 3 (3 votes) · EA · GW
> This could be a particularly interesting time to trial impact purchases used in conjunction with government UBI (if that ends up being fully brought in anywhere). UBI then removes the barrier of requiring a secure salary before taking on a project.

Impact purchases + EA Hotel seems like a match made in heaven. EA Hotel even talks about taking a hits-based approach, so having a pool of funds to award both to EA Hotel (or whatever its new name is) and to the people staying at the hotel who did the work that earned the funding sounds like a pretty interesting idea!

Comment by gworley3 on The Case for Impact Purchase | Part 1 · 2020-04-23T15:27:46.545Z · score: 3 (2 votes) · EA · GW

That's pretty cool, but it seems mostly focused on supporting government agencies using this as an alternative funding mechanism to save money, defer costs, or avoid paying for undelivered services, thus improving government spending efficiency. I wonder what it would take to develop that into something that would support a wider range of funding sources? Seems like something someone with expertise and experience in finance could pioneer as a neglected way to generally support EA.

Comment by gworley3 on The Case for Impact Purchase | Part 1 · 2020-04-14T15:42:46.673Z · score: 15 (9 votes) · EA · GW

I've not thought about this idea much or read the linked articles on impact purchases, but a few quick thoughts:

  • I think prizes suffer from only incentivizing the most risk-tolerant, since there is generally an aspect of competition and the winner often takes all or most of the prize funds.
  • Impact purchases seem like an improvement over this if you set them up like a grant that pays at the end rather than the beginning, so they're tied to a single project/team and not a competition.
  • There might be a hybrid model possible where a certain amount of funds are granted at the start of the project to cover costs and additional funding is awarded only as certain project milestones are hit, up to and including completion of the project. Some of this completion money is for rewarding impact and not just funding the next phase of the project, as would be the case in a grant, with most of the impact award money held back until the end.
  • This lets me imagine funding something at, say, 20% of the value of its impact up until it is created, at which point I pay off the remaining 80% owed (a rough sketch of this follows below).
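To make the hybrid idea a bit more concrete, here's a minimal sketch; the 20/80 split and dollar amounts are hypothetical, and how impact gets verified is entirely hand-waved:

```python
from dataclasses import dataclass, field

# Hypothetical hybrid grant/impact-purchase: a small upfront grant to cover costs,
# with the bulk of the money paid only once the impact is delivered and verified.
@dataclass
class HybridFunding:
    estimated_impact_value: float          # what the funder would pay for the finished impact
    upfront_fraction: float = 0.20         # paid as a grant at the start
    payouts: list = field(default_factory=list)

    def start(self) -> float:
        amount = self.upfront_fraction * self.estimated_impact_value
        self.payouts.append(("upfront grant", amount))
        return amount

    def complete(self, verified_impact_value: float) -> float:
        # Pay out the verified impact value, net of what was already advanced.
        already_paid = sum(a for _, a in self.payouts)
        amount = max(verified_impact_value - already_paid, 0.0)
        self.payouts.append(("impact purchase", amount))
        return amount

deal = HybridFunding(estimated_impact_value=100_000)
print(deal.start())            # 20000.0 advanced to cover costs
print(deal.complete(100_000))  # 80000.0 paid once the impact is verified
```
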
Comment by gworley3 on Why I'm Not Vegan · 2020-04-09T15:55:50.734Z · score: 12 (10 votes) · EA · GW

Upvoted for sharing your reason for downvoting. I wish people did this more often!

Comment by gworley3 on What do we mean by 'suffering'? · 2020-04-08T15:42:00.806Z · score: 3 (2 votes) · EA · GW

Yes, Lukas's post was what got me thinking about suffering in more detail and helped lead to the creation of those two posts. I think it's linked from one or both of them.

Comment by gworley3 on What do we mean by 'suffering'? · 2020-04-07T16:24:52.641Z · score: 14 (6 votes) · EA · GW

I wrote two posts exploring suffering, both with plenty of links to more resources thinking about what we mean by "suffering": "Is Feedback Suffering?" and "Suffering and Intractable Pain".

My views have evolved since I wrote those posts so I don't necessarily endorse everything in them anymore, but hopefully they are useful starting points. For what it's worth, my view now is more akin to the traditional Buddhist view on suffering as described by the teaching on dependent origination.

Comment by gworley3 on Normative Uncertainty and the Dependence Problem · 2020-03-24T18:26:01.886Z · score: 1 (1 votes) · EA · GW

Yeah, sounds interesting!

Comment by gworley3 on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-18T19:12:46.480Z · score: 4 (3 votes) · EA · GW

You don't mention this, and maybe there is no research on it, but do we expect there to be much opportunity for resistance effects, similar to what we see with antibiotics and the evolution of resistant strains?

For example, would the deployment of large numbers of far-ultraviolet lamps create selection pressure on microbes to become resistant to them? I think it's not clear, since, for example, we don't seem to see lots of heat-resistant microbes evolving (outside of places like thermal vents) even though we regularly use high heat to kill them.

And even if it did, would it be worth the tradeoff? For example, I think even if we had known about the possibility of antibiotic-resistant bacteria when penicillin was created, we would still have used penicillin extensively because it could cure so many diseases and increase human welfare, though we might have done so with greater care about protocols and their enforcement. With hindsight, maybe we would do something similar here with far-ultraviolet light if we used it.

Comment by gworley3 on [deleted post] 2020-03-17T16:49:53.296Z

I think no.

History is full of plagues and other global threats of similar or worse scale. For example, it could be argued that bubonic plague or smallpox were much bigger threats to humanity and to individual humans than COVID-19. Yes, from the inside COVID-19 feels particularly threatening, but I think that has more to do with the context in which it is happening, i.e. a world where it felt to many people like something like this couldn't really happen. Smallpox, on the other hand, just kept killing people all the time for hundreds of years, and everyone accepted it as part of life. So on that measure COVID-19 doesn't seem special to simulate versus other similar threats humanity has faced.

Further, it's hard to see why COVID-19 would be of interest to simulators. Presumably they would be technologically advanced enough that something like COVID-19 would not be interesting to learn from for any specific situation they are likely to face, so simulating it would only serve historical purposes. Hence I think the only relevant question is whether COVID-19 is interesting enough that it would be more likely to be simulated than other past events, and I think the answer is no, so it offers no update to the likelihood that we are in a simulation.

Comment by gworley3 on Is nanotechnology (such as APM) important for EAs' to work on? · 2020-03-12T17:56:40.091Z · score: 4 (3 votes) · EA · GW

There are at least two things that go by the term "nanotechnology" but are really different: atomically precise manufacturing (e.g. Drexler, grey goo, and the other stuff that originally went by the term "nanotech") and nanoscale materials science (e.g. advanced modern materials science that uses various techniques, but not APM, to create materials with properties based on controlling nanoscale features of the material). Which did you have in mind? I think that will affect the kinds of answers people give.

Comment by gworley3 on Should effective altruists give money to local beggars? · 2020-02-28T22:45:18.165Z · score: 1 (1 votes) · EA · GW

My impression is that many of these beggars are earning enough to survive, albeit in poverty, so your marginal dollar is probably more effective elsewhere, given that most people are not deciding whether to give to them based on EA principles and others will continue to support them. If you consider local homelessness a top priority, my guess is that interventions other than small direct gifts would be more effective, though I have not looked into it.

Comment by gworley3 on Option Value, an Introductory Guide · 2020-02-21T18:25:24.729Z · score: 3 (3 votes) · EA · GW

Thanks for this. I didn't know that option value is a thing in the literature rather than just a common pattern in reasoning. Having handles for things is often useful, and I really appreciate it when people bring them explicitly into EA, since, like the rationality community, EA has a tendency to reinvent terms for existing concepts out of unfamiliarity with the wider literature (which is not a complaint: humans have figured out so much that it's sometimes hard to know someone has already worked out the same ideas, especially if they did so in a different domain from the one you are working in).

Comment by gworley3 on Chloramphenicol as intervention in heart attacks · 2020-02-20T22:26:47.447Z · score: 1 (3 votes) · EA · GW

Sure, this was just me taking a guess because I needed a figure to work out the numbers. I expect better analysis, if this is of interest to someone, might produce a different figure and different conclusion about cost effectiveness.

Comment by gworley3 on Using Charity Performance Metrics as an Excuse Not to Give · 2020-02-19T19:43:35.153Z · score: 3 (2 votes) · EA · GW

A quick scan of the article makes me want to say "more evidence needed before we can conclude much": they ran two studies, one on 50 Stanford students and one on 400 Mechanical Turkers. Neither seems to provide very strong evidence about how people make giving decisions in the real world, since the study conditions feel pretty far from what actual giving decisions feel like. Here's the setup of the two studies from the paper:

> Study 1 involves data from 50 Stanford University undergraduate students in April 2014 who made a series of binary decisions between money for charities and/or money for themselves. In addition to receiving a $20 completion fee, participants knew that one of their decisions would be randomly selected to count for payment. The design and results for Study 1 are detailed below (and see Online Appendix B.1 for instructions and screenshots).
>
> Three types of charities are involved in Study 1. The first charity type involves three Make-A-Wish Foundation state chapters that vary according to their program expense rates, or percentages of their budgets spent directly on their programs and services (i.e., not spent on overhead costs): the New Hampshire chapter (90%), the Rhode Island chapter (80%), and the Maine chapter (71%). The second charity type involves three Knowledge Is Power Program (KIPP) charter schools that vary according to college matriculation rates among their students who completed the eighth grade: Chicago (92%), Philadelphia (74%), and Denver (61%). The third charity type involves three Bay Area animal shelters that vary according to their live release rates: the San Francisco SPCA (97%), the Humane Society of Silicon Valley (82%), and the San Jose Animal Care and Services (66%).

And the second one:

> Study 2 involves data from 400 Amazon Mechanical Turk workers in January 2018 who made five decisions about how much money to keep for themselves or to instead donate to the Make-A-Wish Foundation. In addition to receiving a $1 completion fee, participants knew that one of their decisions would be randomly selected to count for payment. Relative to Study 1, Study 2 allows for a test of excuse-driven responses to charity performance metrics on a larger sample and via an identification strategy that does not require a normalization procedure. The design and results for Study 2 are detailed below (and see Online Appendix B.4 for instructions and screenshots).

Comment by gworley3 on The Web of Prevention · 2020-02-19T19:34:23.272Z · score: 1 (3 votes) · EA · GW

I've noticed something similar around "security mindset": Eliezer and MIRI have used the phrase to talk about a specific version of it in relation to AI safety, but the term, as far as I know, originates with Bruce Schneier and computer security, though I can't recall MIRI publications mentioning that much, possibly because they didn't realize that's where the term came from. Hard to know, and probably not very relevant other than to weirdos like us. ;-)

Comment by gworley3 on Thoughts on electoral reform · 2020-02-18T19:48:40.447Z · score: 15 (9 votes) · EA · GW

In the US, especially for federal elections and especially especially for election of the president, I expect voting reform to have low tractability because I believe it requires constitutional reform at the national and possibly the state level. Given how hard it is to pass amendments to the federal constitution and given that there are a lot of incentives to maintain the status quo, this seems like an uphill battle that can suck up money and generate no results.

Local election reform is probably much more tractable, especially at the municipal level, since the voting procedures are managed in ways that are more easily changed.

Comment by gworley3 on Neglected EA Regions · 2020-02-18T19:37:36.753Z · score: 1 (1 votes) · EA · GW

This makes me think of a useful perspective on this post: we still have a long way to go in spreading EA within the cultures/regions where it has already taken root, so there is still a lot to be gained from doing that without the added complications of taking EA to new cultures.

Comment by gworley3 on Neglected EA Regions · 2020-02-17T18:51:00.657Z · score: 10 (6 votes) · EA · GW

I don't have a source for previous discussions, but it's been my impression that expansion of EA to new regions/cultures is currently intentionally conservative due to a belief that success hinges on getting it right the first time and the difficulty of crafting the EA message to resonate with a particular culture.

Comment by gworley3 on Thoughts on The Weapon of Openness · 2020-02-14T01:20:30.420Z · score: 3 (4 votes) · EA · GW

Ugh, I'd have to dig things up, but a few things come to mind that could be confirmed by looking them up and that I count as evidence of this:

  • the lag between when the recommended DES "magic numbers" were given out and when the public figured out what was behind them
  • the NSA's lead on public-key crypto, and its sending agents to discourage mathematicians from publishing (this one was likely shorter because it was earlier)
  • the lag in figuring out the problems with the elliptic curve standard, during which the NSA encouraged its use

Comment by gworley3 on My personal cruxes for working on AI safety · 2020-02-13T19:31:15.509Z · score: 17 (9 votes) · EA · GW

Regarding the 14% estimate, I'm actually surprised it's this high. I have the opposite intuition: there is so much uncertainty, especially about whether any particular thing someone does will have impact, that I place the likelihood that anything a particular person working on AI safety does will produce positive outcomes at <1%. The only reason it still seems worth working on to me is that when you multiply that against the size of the payoff, it ends up being worthwhile anyway.

Comment by gworley3 on Thoughts on The Weapon of Openness · 2020-02-13T19:24:24.391Z · score: 3 (4 votes) · EA · GW

I see you mention the NSA in a footnote. One thing worth keeping in mind is that the NSA is both highly secretive and generally believed, based on past leaks and cases of public researchers "catching up", to be roughly 30 years ahead of publicly disclosed cryptography research. It's possible this situation is not stable, but my best guess as an outsider is that they are a proof by example that secrecy as a strategy for maintaining a technological lead over adversaries can work. There are likely a lot of specifics to making that work, though, so you should probably expect any random attempt at secrecy of this sort not to be as successful as the NSA's; i.e., the NSA is a massive outlier in this regard.

Comment by gworley3 on Some (Rough) Thoughts on the Value of Campaign Contributions · 2020-02-10T18:21:52.735Z · score: 1 (1 votes) · EA · GW
> unless it’s an exceptionally good opportunity

Echoing some of the discussion in your post, I think it's very hard for us to determine when political giving is "an exceptionally good opportunity", both because of strong biases about what we think is good and, importantly, because of how much most people value signaling their values even if the person they vote for to send that signal fails to adequately deliver on those values. To me this is one of the great challenges of making political choices: many candidates stand for things you might like, but after the fact they consistently take or approve of government action that goes against those things in the name of "compromise" to "get things done".

I have no special beef with realpolitik—that's just how people work—but it does make it very hard to know the net impact of a voting choice, since it's hard to find politicians without mixed records, records that sometimes contain surprises which, in the final evaluation, might swap them from a net positive to a net negative effect on the world.

Comment by gworley3 on The Web of Prevention · 2020-02-05T19:36:46.913Z · score: 5 (4 votes) · EA · GW

A related notion from computer security: defense in depth.

Comment by gworley3 on When to post here, vs to LessWrong, vs to both? · 2020-01-27T20:14:22.700Z · score: 4 (3 votes) · EA · GW

Maybe it's not the best answer, but what I've been doing is mostly posting to LW/AF and mostly only posting to EAF for things that are very strongly EA relevant, as in so relevant to EA I would have posted them to EAF if LW didn't exist. I don't have a consistent policy for cross-posting myself, other than that I only cross-post when it feels particularly likely that the content is strongly relevant to both communities independent of the shared aspects of the two sites' cultures.

Comment by gworley3 on Why it’s important to think through all of the factors that influence a charity’s impact · 2020-01-22T20:20:52.800Z · score: 9 (6 votes) · EA · GW

As of this writing the post has a total score of 2 over 7 votes, suggesting some mix of up and down votes. I'm curious why the downvotes, since to me this seems a straightforwardly good post in terms of content and relevance. For example, I liked learning about how they went through the process of improving the evaluation mechanism when they realized something was left out to get what is hopefully a better estimate.

Comment by gworley3 on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-01-22T20:13:24.533Z · score: 5 (3 votes) · EA · GW

Normalization of deviance

"Social normalization of deviance means that people within the organization become so much accustomed to a deviant behavior that they don't consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety" [5]. People grow more accustomed to the deviant behavior the more it occurs [6] . To people outside of the organization, the activities seem deviant; however, people within the organization do not recognize the deviance because it is seen as a normal occurrence. In hindsight, people within the organization realize that their seemingly normal behavior was deviant.

(from Wikibooks)

I think this generalizes to cases where there is a stated norm, that norm is regularly violated, and the violation of the norm becomes the new norm.

Relevance

Scrupulous people, or people otherwise committed to particular stances, may be concerned about the ways in which norms are not upheld around, for example, truth-telling, donating, veganism, etc.

Comment by gworley3 on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-14T19:09:03.471Z · score: 10 (6 votes) · EA · GW
> Die Vergabepraxis orientiert sich an der vorhandenen wissenschaftlichen Forschung über Wirksamkeit und Wirtschaftlichkeit sowie an den Aspekten der Transparenz und der Ökologie.
>
> [Translation: The award practice shall be based on the available scientific research on effectiveness and cost-effectiveness as well as on the aspects of transparency and ecology.]

What is the likelihood of this sentence of the policy having teeth? For example, let's say people administering this money want to use it for a prototypical low-effectiveness intervention, like opening an art gallery in a poor country. Is there a mechanism in place to stop them? Who decides if a grant was chosen based on scientific research on effectiveness? Can, for example, a citizen sue the city for failing to follow this policy and have a judge rule they misallocated the funds, impose some penalty, and require they act differently in the future?

To me this language seems just vague enough that a motivated politician could use it to fund almost anything they wanted, so I'm wondering what evidence there is to believe this policy will do anything. This has a great deal of bearing on the measure of its effectiveness (so much so that it could flip the sign of your assessment: maybe all the money was spent to buy empty words).

Obviously we can't know for sure until we've seen grants awarded (and especially grants misawarded, and what the response to that was), but I'm curious what information we have now. I'm unfamiliar enough with Swiss government that I can only estimate from my outside-view prior that governments tend to find a way to do whatever they want regardless of what the law says, unless the law or popular sentiment can actually force them to do what a policy intended.

Comment by gworley3 on Physical Exercise for EAs – Why and How · 2020-01-13T20:21:03.312Z · score: 4 (5 votes) · EA · GW

This is great advice, but I also suspect many people will read it, go "yep, sounds like a thing I should do", and then not exercise, taking the outside view that EAs are not too different from most affluent people, who continually choose not to exercise despite it being readily available.

So my advice is to forget about all of this at first and just do something physical and fun. What is fun differs between people. I didn't make a habit of exercising until I lived somewhere where I could do a fun physical activity (indoor rock climbing) whenever I liked. Some people really like running or riding a bike, others like rowing, others like team sports (baseball, basketball, gridiron football, football/soccer, cricket, rugby, etc.), others like "solo" or 1-on-1 sports (tennis, racquetball, squash, golf, etc.), and some people really get into dance or acrobatics or yoga or something else. The point is to first find a physical activity that is fun.

Then let exercise come after. To be good at a physical activity, it helps to be in good general shape, with good endurance and strength. This makes exercise instrumentally useful for having more fun, so you'll want to do it because you like having fun, right?

This might not work for everyone (maybe you can't find a physical activity you think is fun after trying lots), but it was a powerful change in mindset for me that took me from basically never exercising to spending ~4 hours a week at the gym climbing and training to climb.