Posts

Normative Uncertainty and the Dependence Problem 2020-03-23T17:29:03.369Z · score: 15 (7 votes)
Chloramphenicol as intervention in heart attacks 2020-02-17T18:47:44.328Z · score: -1 (4 votes)
Illegible impact is still impact 2020-02-13T21:45:00.234Z · score: 96 (37 votes)
If Veganism Is Not a Choice: The Moral Psychology of Possibilities in Animal Ethics 2020-01-20T18:07:53.003Z · score: 15 (9 votes)
EA and the Paramitas 2020-01-15T03:17:18.158Z · score: 8 (5 votes)
Normative Uncertainty and Probabilistic Moral Knowledge 2019-11-11T20:26:07.702Z · score: 13 (4 votes)
TAISU 2019 Field Report 2019-10-15T01:10:40.645Z · score: 19 (6 votes)
Announcing the Buddhists in EA Group 2019-07-02T20:41:23.737Z · score: 25 (13 votes)
Best thing at EAG SF 2019? 2019-06-24T19:19:49.700Z · score: 16 (7 votes)
What movements does EA have the strongest synergies with? 2018-12-20T23:36:55.641Z · score: 8 (2 votes)
HLAI 2018 Field Report 2018-08-29T00:13:22.489Z · score: 10 (10 votes)
Avoiding AI Races Through Self-Regulation 2018-03-12T20:52:06.475Z · score: 4 (4 votes)
Prioritization Consequences of "Formally Stating the AI Alignment Problem" 2018-02-19T21:31:36.942Z · score: 2 (2 votes)

Comments

Comment by gworley3 on What do we mean by 'suffering'? · 2020-04-08T15:42:00.806Z · score: 3 (2 votes) · EA · GW

Yes, Lukas's post was what got me thinking about suffering in more detail and helped lead to the creation of those two posts. I think it's linked from one or both of them.

Comment by gworley3 on What do we mean by 'suffering'? · 2020-04-07T16:24:52.641Z · score: 11 (4 votes) · EA · GW

I wrote two posts exploring suffering, both with plenty of links to more resources thinking about what we mean by "suffering": "Is Feedback Suffering?" and "Suffering and Intractable Pain".

My views have evolved since I wrote those posts so I don't necessarily endorse everything in them anymore, but hopefully they are useful starting points. For what it's worth, my view now is more akin to the traditional Buddhist view on suffering as described by the teaching on dependent origination.

Comment by gworley3 on Normative Uncertainty and the Dependence Problem · 2020-03-24T18:26:01.886Z · score: 1 (1 votes) · EA · GW

Yeah, sounds interesting!

Comment by gworley3 on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-18T19:12:46.480Z · score: 2 (2 votes) · EA · GW

You don't mention this, and maybe there is no research on it, but do we expect there to be much opportunity for resistance effects, similar to what we see with antibiotics and the evolution of resistant strains?

For example, would the deployment of large numbers of far-ultraviolet lamps create selection pressure on microbes to become resistant to them? I think it's not clear, since, for example, we don't seem to see lots of heat-resistant microbes evolving (outside of places like thermal vents) even though we regularly use high heat to kill them.

And even if it did, would it be worth the tradeoff? Even if we had known about the possibility of antibiotic-resistant bacteria when penicillin was created, I think we would still have used penicillin extensively because it cured so many diseases and increased human welfare, though we might have taken greater care with protocols and their enforcement. With hindsight, maybe we would do something similar here with far-ultraviolet light if we used it.

Comment by gworley3 on [deleted post] 2020-03-17T16:49:53.296Z

I think no.

History is full of plagues and other global threats of similar or worse scale. For example, I think it could be argued that bubonic plague or smallpox were much bigger threats to humanity and to individual humans than COVID-19. Yes, from the inside COVID-19 feels particularly threatening, but I think that has more to do with the context in which it is happening, i.e. a world where it felt to many people like something like this couldn't really happen. Smallpox, on the other hand, just kept killing people all the time for hundreds of years, and everyone accepted it as part of life. So on that measure COVID-19 doesn't seem special to simulate vs. other similar threats humanity has faced.

Further, it's hard to see why COVID-19 would be of interest to simulators. Presumably they would be technologically advanced enough that something like COVID-19 would hold few lessons for any specific situation they are likely to face, so a simulation would only be for historical purposes. Hence the only relevant question is whether COVID-19 is interesting enough that it would be more likely to be simulated than other past events. I think not, so I think it offers no update to the likelihood that we are in a simulation.
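
To put my reasoning in Bayesian terms (my gloss, with notation of my own rather than anything from the original question): P(simulation | event) = P(event | simulation) × P(simulation) / P(event). If a COVID-19-like event is no more likely to appear in a simulated history than in an unsimulated one, the likelihood ratio is 1 and the posterior just equals the prior.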

Comment by gworley3 on Is nanotechnology (such as APM) important for EAs' to work on? · 2020-03-12T17:56:40.091Z · score: 4 (3 votes) · EA · GW

There are at least two things that go by the term "nanotechnology" but are really quite different: atomically precise manufacturing (e.g. Drexler, grey goo, and the other things that originally went by the term "nanotech") and nanoscale materials science (i.e. advanced modern materials science that uses various techniques, but not APM, to create materials whose properties come from controlling nanoscale features). Which did you have in mind? I think that will affect the kinds of answers people give.

Comment by gworley3 on Should effective altruists give money to local beggars? · 2020-02-28T22:45:18.165Z · score: 1 (1 votes) · EA · GW

My impression is that many of these beggars are earning enough to survive, albeit in poverty, so your marginal dollar is probably more effective elsewhere given most people are not making the choice to give to them or not based on EA principles and others will continue to support them. If you consider local homelessness a top priority, my guess is that other interventions than small direct giving would be more effective, though I have not looked into it.

Comment by gworley3 on Option Value, an Introductory Guide · 2020-02-21T18:25:24.729Z · score: 3 (3 votes) · EA · GW

Thanks for this. I didn't know option value was a thing in the literature rather than just a common pattern in reasoning. Having handles for things is often useful, and I really appreciate it when people bring those things explicitly into EA. Like the rationality community, EA has a tendency to reinvent terms for existing ideas out of unfamiliarity with the wider literature (which is not a complaint: humans have figured out so much stuff that it's sometimes hard to know someone else already worked out the same ideas, especially if they did so in a different domain from the one where you are working).

Comment by gworley3 on Chloramphenicol as intervention in heart attacks · 2020-02-20T22:26:47.447Z · score: 1 (3 votes) · EA · GW

Sure, this was just me taking a guess because I needed a figure to work out the numbers. I expect a better analysis, if this is of interest to someone, might produce a different figure and a different conclusion about cost-effectiveness.

Comment by gworley3 on Using Charity Performance Metrics as an Excuse Not to Give · 2020-02-19T19:43:35.153Z · score: 3 (2 votes) · EA · GW

A quick scan of the article makes me want to say "more evidence needed before we can conclude much": they ran two studies, one on 50 Stanford students and one on 400 Mechanical Turkers. Neither seems to provide very strong evidence about how people make giving decisions in the real world, since the study conditions feel pretty far from what actual giving decisions feel like. Here's the setup of the two studies from the paper:

Study 1 involves data from 50 Stanford University undergraduate students in April 2014 who made a series of binary decisions between money for charities and/or money for themselves. In addition to receiving a $20 completion fee, participants knew that one of their decisions would be randomly selected to count for payment. The design and results for Study 1 are detailed below (and see Online Appendix B.1 for instructions and screenshots).
Three types of charities are involved in Study 1. The first charity type involves three Make-A-Wish Foundation state chapters that vary according to their program expense rates, or percentages of their budgets spent directly on their programs and services (i.e., not spent on overhead costs): the New Hampshire chapter (90%), the Rhode Island chapter (80%), and the Maine chapter (71%). The second charity type involves three Knowledge Is Power Program (KIPP) charter schools that vary according to college matriculation rates among their students who completed the eighth grade: Chicago (92%), Philadelphia (74%), and Denver (61%). The third charity type involves three Bay Area animal shelters that vary according to their live release rates: the San Francisco SPCA (97%), the Humane Society of Silicon Valley (82%), and the San Jose Animal Care and Services (66%).

And the second one:

Study 2 involves data from 400 Amazon Mechanical Turk workers in January 2018 who made five decisions about how much money to keep for themselves or to instead donate to the Make-A-Wish Foundation. In addition to receiving a $1 completion fee, participants knew that one of their decisions would be randomly selected to count for payment. Relative to Study 1, Study 2 allows for a test of excuse-driven responses to charity performance metrics on a larger sample and via an identification strategy that does not require a normalization procedure. The design and results for Study 2 are detailed below (and see Online Appendix B.4 for instructions and screenshots).

Comment by gworley3 on The Web of Prevention · 2020-02-19T19:34:23.272Z · score: 1 (3 votes) · EA · GW

I've noticed something similar around "security mindset": Eliezer and MIRI have used the phrase to talk about a specific version of it in relation to AI safety, but the term, as far as I know, originates with Bruce Schneier and computer security, although I can't recall MIRI publications mentioning that much, possibly because they didn't realize that's where the term came from. Hard to know, and probably not relevant to anyone other than weirdos like us. ;-)

Comment by gworley3 on Thoughts on electoral reform · 2020-02-18T19:48:40.447Z · score: 15 (9 votes) · EA · GW

In the US, especially for federal elections, and most especially for election of the president, I expect voting reform to have low tractability, because I believe it requires constitutional reform at the national and possibly the state level. Given how hard it is to pass amendments to the federal constitution, and given that there are a lot of incentives to maintain the status quo, this seems like an uphill battle that can suck up money and generate no results.

Local election reform is probably much more tractable, especially at the municipal level, since the voting procedures are managed in ways that are more easily changed.

Comment by gworley3 on Neglected EA Regions · 2020-02-18T19:37:36.753Z · score: 1 (1 votes) · EA · GW

This makes me think of a useful perspective on this post: we still have a long way to go in spreading EA within the cultures/regions where it has already taken root, so there is still a lot to be gained from doing that without taking on the added complications of bringing EA to new cultures.

Comment by gworley3 on Neglected EA Regions · 2020-02-17T18:51:00.657Z · score: 10 (6 votes) · EA · GW

I don't have a source for previous discussions, but it's been my impression that expansion of EA to new regions/cultures is currently intentionally conservative due to a belief that success hinges on getting it right the first time and the difficulty of crafting the EA message to resonate with a particular culture.

Comment by gworley3 on Thoughts on The Weapon of Openness · 2020-02-14T01:20:30.420Z · score: 3 (4 votes) · EA · GW

Ugh, I'd have to dig things up, but some things come to mind that could be confirmed by looking and that I count as evidence of this:

  • the lag between when the recommended DES magic numbers (the S-box constants) were given out and when the public figured out what was behind them
  • the NSA's lead on public-key crypto, and its sending agents to discourage mathematicians from publishing (this lead was likely shorter because it was earlier)
  • the lag in figuring out the problems with elliptic-curve standards (presumably Dual_EC_DRBG), during which the NSA encouraged their use

Comment by gworley3 on My personal cruxes for working on AI safety · 2020-02-13T19:31:15.509Z · score: 16 (8 votes) · EA · GW

Regarding the 14% estimate, I'm actually surprised it's this high. I have the opposite intuition, that there is so much uncertainty, especially about whether or not any particular thing someone does will have impact, that I place the likelihood of anything any particular person working on AI safety does producing positive outcomes at <1%. The only reason it seems worth working on to me despite all of this is that when you multiply it against the size of the payoff it ends up being worthwhile anyway.
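
To make the arithmetic concrete (with purely illustrative numbers of my own, not anyone's actual estimates): if a good outcome from AI safety work is worth on the order of 10^12 future lives and any one person's contribution has a 0.1% chance of being decisive, the expected value is still 0.001 × 10^12 = 10^9 lives, which dwarfs what most alternative uses of a career could plausibly offer.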

Comment by gworley3 on Thoughts on The Weapon of Openness · 2020-02-13T19:24:24.391Z · score: 3 (4 votes) · EA · GW

I see you mention the NSA in a footnote. One thing worth keeping in mind is that the NSA is both highly secretive and, based on past leaks and cases of public researchers "catching up", generally believed to be roughly 30 years ahead of publicly disclosed cryptography research. It's possible this situation is not stable, but my best guess as an outsider is that the NSA is a proof by example that secrecy as a strategy for maintaining a technological lead against adversaries can work. There are likely a lot of specifics to making that work, though, so you should probably expect any random attempt at secrecy of this sort to be less successful than the NSA's; i.e., the NSA is a massive outlier in this regard.

Comment by gworley3 on Some (Rough) Thoughts on the Value of Campaign Contributions · 2020-02-10T18:21:52.735Z · score: 1 (1 votes) · EA · GW
unless it’s an exceptionally good opportunity

Echoing some of the discussion in your post, I think it's very hard for us to determine when political giving is "an exceptionally good opportunity", due to strong biases about what we think is good and, importantly, given how much most people value signaling their values even when the candidate they vote for to send that signal fails to adequately deliver on those values. To me this is one of the great challenges of making political choices: many candidates stand for things you might like, but after the fact they consistently take or approve of government action that goes against those things in the name of "compromise" to "get things done".

I have no special beef with realpolitik—that's just how people work—but it does make it very hard to know the net impact of a voting choice, since it's hard to find politicians without mixed records, records that sometimes contain surprises which, in the final evaluation, might swap them from a net positive to a net negative effect on the world.

Comment by gworley3 on The Web of Prevention · 2020-02-05T19:36:46.913Z · score: 5 (4 votes) · EA · GW

A related notion from computer security: defense in depth.

Comment by gworley3 on When to post here, vs to LessWrong, vs to both? · 2020-01-27T20:14:22.700Z · score: 4 (3 votes) · EA · GW

Maybe it's not the best answer, but what I've been doing is mostly posting to LW/AF and mostly only posting to EAF for things that are very strongly EA relevant, as in so relevant to EA I would have posted them to EAF if LW didn't exist. I don't have a consistent policy for cross-posting myself, other than that I only cross-post when it feels particularly likely that the content is strongly relevant to both communities independent of the shared aspects of the two sites' cultures.

Comment by gworley3 on Why it’s important to think through all of the factors that influence a charity’s impact · 2020-01-22T20:20:52.800Z · score: 9 (6 votes) · EA · GW

As of this writing the post has a total score of 2 across 7 votes, suggesting some mix of up- and downvotes. I'm curious about the downvotes, since to me this seems like a straightforwardly good post in terms of content and relevance. For example, I liked learning how they improved the evaluation mechanism when they realized something had been left out, to arrive at what is hopefully a better estimate.

Comment by gworley3 on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-01-22T20:13:24.533Z · score: 5 (3 votes) · EA · GW

Normalization of deviance

"Social normalization of deviance means that people within the organization become so much accustomed to a deviant behavior that they don't consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety" [5]. People grow more accustomed to the deviant behavior the more it occurs [6] . To people outside of the organization, the activities seem deviant; however, people within the organization do not recognize the deviance because it is seen as a normal occurrence. In hindsight, people within the organization realize that their seemingly normal behavior was deviant.

(from Wikibooks)

I think this generalizes to cases where there is a stated norm, that norm is regularly violated, and the violation of the norm becomes the new norm.

Relevance

Scrupulous people, or people otherwise committed to particular stances, may be concerned about ways in which norms are not upheld around, for example, truth-telling, donating, veganism, etc.

Comment by gworley3 on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-14T19:09:03.471Z · score: 10 (6 votes) · EA · GW
Die Vergabepraxis orientiert sich an der vorhandenen wissenschaftlichen Forschung über Wirksamkeit und Wirtschaftlichkeit sowie an den Aspekten der Transparenz und der Ökologie.
The award practice shall be based on the available scientific research on effectiveness and cost-effectiveness as well as on the aspects of transparency and ecology.

What is the likelihood of this sentence of the policy having teeth? For example, let's say people administering this money want to use it for a prototypical low-effectiveness intervention, like opening an art gallery in a poor country. Is there a mechanism in place to stop them? Who decides if a grant was chosen based on scientific research on effectiveness? Can, for example, a citizen sue the city for failing to follow this policy and have a judge rule they misallocated the funds, impose some penalty, and require they act differently in the future?

To me this language seems just vague enough that a motivated politician could use it to fund almost anything they wanted, so I'm wondering what evidence there is that this policy will do anything. This has a great deal of impact on the measure of its effectiveness (so much so that it could flip the sign of your assessment; maybe all the money was spent to buy empty words).

Obviously we can't know for sure until we have seen grants awarded (and especially grants misawarded, and the response to that), but I'm curious what information we have now. I'm unfamiliar enough with Swiss government that I can only estimate from my outside-view prior that governments tend to find a way to do whatever they want, regardless of what the law says, unless the law or popular sentiment can actually force them to do what a policy intended.

Comment by gworley3 on Physical Exercise for EAs – Why and How · 2020-01-13T20:21:03.312Z · score: 4 (5 votes) · EA · GW

This is great advice, but also I suspect many people will read it and go "yep, sounds like a thing I should do" and then not exercise, taking the outside view that EAs are not too different from most affluent people who continually choose not to exercise despite it being readily available.

So my advice is to forget about all of this at first and just do something physical and fun. What is fun differs between people. I didn't make a habit of exercising until I lived somewhere where I could do a fun physical activity (indoor rock climbing) whenever I liked. Some people really like running or riding a bike, others like rowing, others like team sports (baseball, basketball, gridiron football, football/soccer, cricket, rugby, etc.), others like "solo" or 1-on-1 sports (tennis, racquetball, squash, golf, etc.), and some people really get into dance or acrobatics or yoga or something else. The point is to first find a physical activity that is fun.

Then let exercise come after. To be good at a physical activity, it helps to be in good general shape, with good endurance and strength. This makes exercise instrumentally useful to having more fun, so you'll want to do it because you like having fun, right?

This might not work for everyone (maybe you can't find a physical activity you think is fun after trying lots), but it was a powerful change in mindset for me that took me from basically never exercising to spending ~4 hours a week at the gym climbing and training to climb.

Comment by gworley3 on Space governance is important, tractable and neglected · 2020-01-07T19:17:43.199Z · score: 19 (7 votes) · EA · GW

I'm not sure where this falls exactly between importance and tractability, but one concern is that any work we do on space governance now is likely to be washed out later by nearer-term and more powerful forces.

My thinking on this is by analogy to previous developments in frontier governance. For example, in the history of the United States, it was common to form treaties with native peoples and coexist with them relatively peacefully right up until the native peoples had resources the colonists/settlers/government wanted badly enough, at which point expedient excuses were found to ignore the treaties, such as fabricating treaty violations or just outright using force against a weaker entity.

And that's just to consider how governance becomes fluid when one entity far overpowers another. Equally powered entities have their own methods of renegotiating for what is presently desirable, past agreements be damned.

On the other hand some things have stuck well. For example, even if actors sometimes violate them, international rules of war are often at least nominally respected and effort is put into punishing (some of) those who violate those rules. As ever, exceptions are made for those powerful enough to be beyond the reach of other actors to impose their will by force.

All of this makes me somewhat pessimistic that we can expect to do much to have a strong, positive influence on space governance.

Comment by gworley3 on Response to recent criticisms of EA "longtermist" thinking · 2020-01-07T18:49:24.210Z · score: 4 (3 votes) · EA · GW

Thanks for the context. My initial reaction to seeing that case included was "surely this is all made up", so I'm surprised to learn someone is making this as a serious critique at the level of publishing a journal article about it, and not just in random tweets aiming to score points with groups who see EA in general, and long-termism specifically, as clustering closer to their enemies than their allies.

Comment by gworley3 on Response to recent criticisms of EA "longtermist" thinking · 2020-01-06T18:55:18.186Z · score: 10 (6 votes) · EA · GW

The analysis of issues around white supremacism seems like a bit of a strawman to me. Are there people seriously objecting to long-termist views on the grounds that they may favor the wealthy, and that since the globally wealthy are predominantly of European descent, this implies a kind of de facto white supremacism? This seems like the kind of vague, guilt-by-correlation argument we need not take seriously, but you devote a lot of space to it, so I take it you have some reason to believe many people, including some on this forum, honestly believe it.

Comment by gworley3 on Tentative thoughts on which kinds of speech are harmful · 2020-01-03T18:52:11.914Z · score: 11 (5 votes) · EA · GW
I don’t know what that threshold is, but it is still an important principle to keep in mind when deciding which kinds of speech are better or worse candidates for policing.

I think that without more work on figuring out where this threshold is, no proposal as to what is or is not over the threshold can rise above possibly being a case of "I just don't like this type of speech". Surveying whether people believe a particular type of speech is harmful enough to warrant censorship seems worthwhile as evidence about what may be harmful, but your presentation of what types of speech you consider harmful is not that. Thus much of this post reads to me like "what kinds of speech kbog thinks are harmful", and that's not very interesting or useful to me.

I'll also say that as a US citizen I'm culturally biased to immediately oppose any suggestion that we might censor speech, and only cautiously accept it if there is a really strong and compelling argument in favor of censoring some particular type of speech. For example, of your entire list only "calls to violence" passes what I would consider the test of speech so harmful it should be restricted. I realize the US free speech norm is not globally shared, but it does make it hard for me to take your arguments seriously when you include, for example, political speech ("supporting Donald Trump"), since this is precisely the type of speech considered most in need of protection within US culture, because it is also the most tempting for political opponents to suppress on spurious grounds.

I'm not saying I can't be convinced some kinds of speech are harmful and that we should do more to restrict them, but I don't find your case compelling, especially around why you think some certain kinds of speech are harmful.

Comment by gworley3 on On Collapse Risk (C-Risk) · 2020-01-02T21:34:05.968Z · score: 1 (1 votes) · EA · GW

One of the objections I've heard to talking about s-risks separately from x-risks is that s-risks are already fully captured by the x-risk category and nothing is gained by making them distinct. A similar objection could reasonably be applied here: that c-risks are just a small subset of x-risks and not worth considering as a separate category in need of a name (rather, they would just be x-risks from civilization collapse).

Do you have any thoughts on this? For example, can you think of cases that are c-risks but not x-risks such that they don't entirely overlap, like maybe a collapse scenario that does not pose an existential risk?

Comment by gworley3 on How Fungible Are Interests? · 2019-12-22T23:46:11.005Z · score: 3 (2 votes) · EA · GW

The view you put forward is tempting, but I think it misses important aspects of why what you consider "fishy" about the Sue the poet scenario is not.

You suggest it is possible to develop other passions, and I agree, but this fails to account for the human difficulty of doing so. Most people are not psychologically safe enough to simply switch passions to whatever it would be beneficial to be passionate about: it takes a massive amount of experience to make something new feel good enough to be passionate about, and that change is hard to make while embedded in the conditions of an individual's life as they are. It can be done, but it takes longer and is harder to succeed at than simply noting it's possible implies. Sue the poet's reason is thus hardly fishy; it is instead a recognition that she may not be situated to make the change she would need to make to be a more effective altruist.

This is not to say that she couldn't change, or that she might not use this as an excuse to avoid doing what she thinks is necessary in favor of doing what is convenient. Rather, we should have compassion for those who agree with EA but find they cannot immediately make the changes they would like due to their life conditions, and we should not judge them as lesser EAs even if they are less able to contribute to EA missions than if they were a different person in a different world that doesn't exist.

Comment by gworley3 on Are Humans 'Human Compatible'? · 2019-12-09T19:08:31.412Z · score: 1 (1 votes) · EA · GW

We're in an analogous situation with AI. AI is too complex for us to fully understand what it does (by design), and this is also true of mundane, human-programmed software (ask any software engineer who has worked on a program more than 1k lines long whether it ever did anything unexpected, and I can promise you the answer is "yes"). Thus although we in theory have control over what goes on inside AI, that's much less the case than it first seems, so much so that we often have better models of how humans decide to do things than we do for AI.

Comment by gworley3 on Are Humans 'Human Compatible'? · 2019-12-06T18:53:08.209Z · score: 5 (4 votes) · EA · GW
Are humans ‘human compatible’?
I put down this book agreeing that we need to control AI (and indeed we can, according to Russell, with good engineering). But if intelligence is intelligence is intelligence then must we necessarily turn to humans, and constrain them in the same way so that humans don’t pursue ‘goals inside the human’ that are significantly at odds with ‘our’ preferences?

Elsewhere we sometimes call this the "human alignment problem" and use it as a test case, in the sense that if we can't design a mechanism robust enough to solve human alignment, we probably can't use it to solve AI alignment, because AIs (especially superhuman AIs) are much better optimizers than humans. Some might argue against this, pointing out that humans are fallible in ways that machines are not, but the point is that if you can't make safe something as bad at optimizing as humans, who for a wide variety of reasons often look like they are taking random walks, you can't possibly hope to make safe something that is reliably good at achieving its goals.

Comment by gworley3 on Russian x-risks newsletter, fall 2019 · 2019-12-03T18:54:43.918Z · score: 5 (3 votes) · EA · GW

I'm curious: do you originally write this in Russian for Russian EAs, transhumanists, rationalists, etc. and then translate it for us or is this content primarily to keep non-Russians informed of what's happening in the Russosphere with x-risk?

Comment by gworley3 on Against value drift · 2019-11-21T18:43:42.641Z · score: 3 (2 votes) · EA · GW
My values being differently expressed seems very important, though. If I feel as if I value the welfare of distant people, but I stop taking actions in line with that (e.g. making donations to global poverty charities), do I still value it to the same extent?

Right, it sounds to me like you identify with your values in some way, like you wouldn't consider yourself to still be yourself if they were different. That confuses things because now there's this extra thing going on that feels causally relevant but isn't, but I'm not sure I can hope to convince you in a short comment that you are not your values, even if your values are (temporarily) you.

Comment by gworley3 on Some Modes of Thinking about EA · 2019-11-11T20:52:04.737Z · score: 4 (3 votes) · EA · GW

One I was very glad not to see in this list was "EA as Utilitarianism". Although utilitarian ethics are popular among EAs, I think we leave out many people who would "do good better" but from a different meta-ethical perspective. One of the greatest challenges in my own conversations about EA has been with people who reject the ideas because they associate them with Singer-style moral arguments and living a life of subsistence until not one person is in poverty. This sadly turns them off of ways they might think about better allocating resources, for example, because they believe their only options are to do what they feel good about or to be a Singer-esque maximizer. Obviously this is not the case: there's a lot of room for gradation and different perspectives. But it does create a situation where people see themselves in an adversarial relationship to EA, and so reject all its ideas rather than just the subset they actually disagree with, because they got the impression that one part of EA was the whole thing.

Comment by gworley3 on Against value drift · 2019-11-11T19:10:24.658Z · score: 1 (1 votes) · EA · GW
As a concrete example, I worry that living in the SF bay area is making me care less about extreme wealth disparities. I witness them so regularly that it's hard for me to feel the same flare of frustration that I once did. This change has felt like a gradual hedonic adaptation, rather than a thoughtful shifting of my beliefs; the phrase "value drift" fits that experience well.

This seems to me adequately, and indeed better, captured by saying the conditions of the world changed in ways that make you respond in ways you wouldn't have endorsed before the change. That doesn't mean your values changed; rather, the conditions to which you are responding changed, so your values are differently expressed. I suspect your values themselves didn't change, because you say you are worried about this change in behavior you've observed in yourself, and if your values had really changed you wouldn't be worried.

Comment by gworley3 on Against value drift · 2019-10-30T17:37:20.103Z · score: 14 (7 votes) · EA · GW

I agree that there is something very confused about worries of value drift. I tried to write something up about it before, although that didn't land so well. Let's try again.

I keep noticing something confused when people worry about value drift, because to me it seems they are worried they might learn more, decide they were wrong, and come to want something different. That seems good to me: if you don't update and change in the face of new information, you're less alive and agenty and more dead and static. People often phrase this, though, as a worry that their life will change and they won't, for example, want to be as altruistic because they are pulled away by other things; but to me this is a kind of confused clinging to what is now and expecting it to be so forever. If you truly, deeply care about altruism, you'll keep picking it in every moment, up until the world changes enough that you don't.

Talking in terms of incentives helps make this clearer, in that people may reasonably oppose the world changing in ways that make it less likely to continue into a future they like. I think it's even more general, though: we should be worried about something like "world state listing", where the world fails to become more filled with what we desire and starts to change at random rather than as a result of our efforts. In this light, worry about value drift is a short-sighted way of noticing one doesn't want the world state to list.

Comment by gworley3 on Should CEA buy ea.org? · 2019-10-07T19:47:07.566Z · score: 1 (1 votes) · EA · GW

I think generally no. Given the quality of search engines today, I don't think a short domain name provides much (I'm not sure it ever did, given my own experience with them, although maybe I'm unusual, and I'm sure I could be swayed by experimental results).

Comment by gworley3 on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-07T19:34:53.113Z · score: 5 (3 votes) · EA · GW

My guess is that reading a bunch of EA posts is not the thing you really care about if, say, what you care about is people engaging fruitfully on EA topics with people already in the EA movement.

By way of comparison, over on LW I have the impression (that is, I think I have seen this pattern but don't want to go to the trouble of digging up example links) that there are folks trying to engage on the site who claim to have read large chunks of the Sequences but also produce low quality content, and then there are also people who haven't read a lot of the literature who manage to write things that engage well with the site or do well engaging in rationalist discussions in person.

Reading background literature seems like one way that sometimes works to make a person into the kind of person who can engage fruitfully with a community, but I don't think it always works and it's not the thing itself, hence why I think you see such differing views when you look for related thinking on the topic.

Comment by gworley3 on What actions would obviously decrease x-risk? · 2019-10-07T19:21:22.580Z · score: 10 (5 votes) · EA · GW

Develop and deploy a system to protect Earth from impacts from large asteroids, etc.

Comment by gworley3 on What actions would obviously decrease x-risk? · 2019-10-07T19:19:04.263Z · score: 1 (4 votes) · EA · GW

+1

Further, the OP gives a specific notion of obviousness to use here:

"obviously" (meaning: you believe it with high probability, and you expect that belief to be uncontroversial)

This doesn't leave a lot of room for debate about what is "obvious" unless you want to argue that a person doesn't believe it with high probability and they are wrong about their own belief about how controversial it is.

Comment by gworley3 on Why is the amount of child porn growing? · 2019-10-02T18:02:10.136Z · score: 6 (2 votes) · EA · GW

My suspicion is that we are seeing a "one-time" increase due to a better ability to create and share child abuse content. That is, my guess is that the incidence of child abuse is not changing much, but its visibility is, because it has become easier to produce and share content featuring actions that were already happening privately. I could imagine some small (let's say 10%) marginal increase in abuse incentivized by the ability to share, but on the whole I expect the majority of child abusers are continuing to abuse at the same rate.

Most of this argument rests on a prior I have that unexpected large increases like this are usually not signs of change in the thing we care about, but instead of changes in secondary things that make the primary thing more visible. I'm sure I could be convinced this is evidence of an increase in child abuse proportionate to the reported numbers, but lacking such evidence I think it far more likely that it's mostly explained by an increased ease of producing and sharing content.
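
To make the shape of my reasoning explicit (a toy decomposition of my own, not figures from any report): observed cases ≈ true incidence × probability of detection. If detection rises from, say, 5% to 50% of incidents while true incidence stays flat, observed cases grow tenfold with no change in the underlying behavior.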

Comment by gworley3 on Is pain just a signal to enlist altruists? · 2019-10-02T17:46:08.311Z · score: 21 (9 votes) · EA · GW

One possibility, if this theory is correct, is that cluster headaches are a spandrel, i.e. a (very unfortunate) unintended side effect of the pain system firing in a case where it isn't beneficial, one that doesn't get selected out because it doesn't have much impact on differential reproduction rates.

Another is that the causality is slightly different: pain is amped up in some cases to elicit altruism, but the mechanisms of pain are "lower in the stack" and so can be triggered by things other than those considered here. That would put cluster headaches outside the bounds of what this model needs to explain, since many things can accentuate pain; the considered cases are one such thing, and the situation with cluster headaches is another.

Comment by gworley3 on Announcing the Buddhists in EA Group · 2019-09-24T17:39:07.208Z · score: 2 (2 votes) · EA · GW

It's a lot of things. I'd say that at its heart it's a way of life, or a way to live life. That way manifests itself in many ways such that we can talk about common Buddhist values, world models, practices, community forms, etc. but all of those are implementation details of how you bring about something deeper, more subtle, and more fundamental than any of them. It's a little hard to point at what that way is, though, so that I can say some words about it, because whatever words I say it will not be the thing itself, like the way a finger pointing at the moon is not itself the moon. If I had to pick some very few words to capture the essential nature of the Buddha way, I would say that it asks us to be here, now, in our totality, fully engaged in the act of living as compassionate agents embedded in the world.

Comment by gworley3 on Announcing the Buddhists in EA Group · 2019-09-23T16:59:01.476Z · score: 1 (1 votes) · EA · GW

Hmm, I'm not sure. The group is set to be publicly visible so anyone should be able to find it and ask to join, although it's a "private" group meaning only members can see who else are members and can see posts. The link is live and works for me, so I'm not sure. As an alternative you can search "Buddhists in Effective Altruism" on Facebook and that should find the group.

Comment by gworley3 on Does improving animal rights now improve the far future? · 2019-09-16T19:09:25.049Z · score: 4 (4 votes) · EA · GW
Through the spread of more humane attitudes, this would increase the expected value of the future of humanity by 0.01-0.1%.

I don't know how 80k evaluates the expected value of the future of humanity in other cases, but that number seems small in a way that suggests they have already "priced in" the uncertainty you are seeing.

Comment by gworley3 on How do you, personally, experience "EA motivation"? · 2019-08-16T22:52:56.215Z · score: 9 (8 votes) · EA · GW

I describe it as a calling. It's not so much that I feel a strong emotion as that it seems the most natural thing in the world to want to help people, and to do so in the most effective way possible. Since I focus specifically on x-risk from AI, I experience this as a calling to address AI safety, which feels like an obvious problem in desperate need of a solution.

For me it's very similar to the kind of "calling" people talk about in religious contexts. Now that I'm a Buddhist, I conceptualize what happened when I was 18, when I started to care about and pursue AI safety, as the awakening of bodhicitta: although I already wanted to become enlightened at that time (even though I didn't really appreciate what that meant), it wasn't until I cared about saving humanity from AI that I developed the compassion and desire that drove me to bodhicitta. With time that calling has broadened, even though I mainly focus on AI safety.

Comment by gworley3 on Four practices where EAs ought to course-correct · 2019-08-01T17:41:09.076Z · score: 7 (5 votes) · EA · GW
This feels very timely, because several of us at CEA have recently been working on updating our resources for media engagement. In our Advice for talking with journalists guide, we go into more depth about some of the advice we've received. I’d be happy to have people’s feedback on this resource!

This seems to be a private document. When I try to follow that link I get a page asking me to log in to Google Drive with an @centreforeffectivealtruism.org Google account, which I don't have (I'm already logged into Google with two other accounts, so those don't seem to give me enough permission to access this document).

Maybe this document is intended to be private right now, but if it's allowed to be accessed outside CEA it doesn't seem that you currently can.

Comment by gworley3 on Four practices where EAs ought to course-correct · 2019-07-30T17:44:36.738Z · score: 37 (19 votes) · EA · GW

I can't speak for any individual, but being careful in how one engages with the media is prudent. Journalists often have a larger story they are trying to tell over the course of multiple articles and they are actively cognitively biased towards figuring out how what you're saying confirms and fits in with that story (or goes against it such that you are now Bad because you're not with whatever force for Good is motivating their narrative). This isn't just an idle worry either: I've talked to multiple journalists and they've independently told me as much straight out, e.g. "I'm trying to tell a story, so I'm only interested if you can tell me something that is about that story".

Keeping quiet is probably a good idea unless you have media training and know how to interact with journalists. Otherwise you function like a random noise generator that might accidentally produce noise confirming what the journalist wanted to believe anyway; and if you don't endorse whatever the journalist believes, you've just done something that works against your own interests, probably without even realizing it!

Comment by gworley3 on If physics is many-worlds, does ethics matter? · 2019-07-10T17:54:46.577Z · score: 3 (2 votes) · EA · GW

So assuming the Copenhagen interpretation is wrong and something like MWI or zero-world or something else is right, it's likely the case that there are multiple, disconnected causal histories. This is true to a lesser extent even in classical physics, due to the expansion of the universe and the gradual shrinking of Hubble volumes (light cones), so even a die-hard Copenhagenist should consider what we might call generally acausal ethics.

My response is generally something like this, keeping in mind my ethical perspective is probably best described as virtue ethics with something like negative preference utilitarianism applied on top:

  • Causal histories I am not causally linked with still matter for a few reasons:
    • My compassion can extend beyond causality in the same way it can extend beyond my city, country, ethnicity, species, and planet (moral circle expansion).
    • I am unsure what I will be causally linked with in the future (veil of ignorance).
    • Agents in other causal histories can extend compassion for me in kind if I do it for them (acausal trade).
  • Given that other causal histories matter, I can:
    • act to make other causal histories better in cases where I am currently causally connected to them but later won't be (e.g. MWI worlds that share a common history with mine now but will later split off causally from the one I find myself in),
    • engage in acausal trade to create in the causal history I find myself in more of what is wanted in other causal histories when the tradeoffs are nil or small knowing that my causal history will receive the same in exchange,
    • otherwise generally act to increase the measure (or if the universe is finite, count) of causal histories that are "good" ("good" could mean something like "want to live in" or "enjoy" or something else that is a bit beyond the scope of this analysis).