Posts

Normative Uncertainty and Probabilistic Moral Knowledge 2019-11-11T20:26:07.702Z · score: 5 (1 votes)
TAISU 2019 Field Report 2019-10-15T01:10:40.645Z · score: 19 (6 votes)
Announcing the Buddhists in EA Group 2019-07-02T20:41:23.737Z · score: 25 (13 votes)
Best thing at EAG SF 2019? 2019-06-24T19:19:49.700Z · score: 16 (7 votes)
What movements does EA have the strongest synergies with? 2018-12-20T23:36:55.641Z · score: 8 (2 votes)
HLAI 2018 Field Report 2018-08-29T00:13:22.489Z · score: 10 (10 votes)
Avoiding AI Races Through Self-Regulation 2018-03-12T20:52:06.475Z · score: 4 (4 votes)
Prioritization Consequences of "Formally Stating the AI Alignment Problem" 2018-02-19T21:31:36.942Z · score: 2 (2 votes)

Comments

Comment by gworley3 on Some Modes of Thinking about EA · 2019-11-11T20:52:04.737Z · score: 2 (2 votes) · EA · GW

One I was very glad not to see in this list was "EA as Utilitarianism". Although utilitarian ethics are popular among EAs, I think we leave out many people who would "do good better" but from a different meta-ethical perspective. One of the greatest challenges I've seen in my own conversations about EA is with those who reject the ideas because they associate them with Singer-style moral arguments and living a life of subsistence until not one person is in poverty. This sadly seems to turn them off of ways they might think about better allocating resources, for example, because they think their only options are either to do what they feel good about or to be a Singer-esque maximizer. Obviously this is not the case, as there's a lot of room for gradation and different perspectives, but having gotten the idea that one part of EA is the whole thing, people see themselves in an adversarial relationship to EA and so reject all its ideas rather than just the subset they actually disagree with.

Comment by gworley3 on Against value drift · 2019-11-11T19:10:24.658Z · score: 1 (1 votes) · EA · GW
As a concrete example, I worry that living in the SF bay area is making me care less about extreme wealth disparities. I witness them so regularly that it's hard for me to feel the same flare of frustration that I once did. This change has felt like a gradual hedonic adaptation, rather than a thoughtful shifting of my beliefs; the phrase "value drift" fits that experience well.

This seems to me adequately and better captured by saying that the conditions of the world changed in ways that make you respond differently, in ways you wouldn't have endorsed before those conditions changed. That doesn't mean your values changed, but the conditions to which you are responding changed such that your values are differently expressed. I suspect your values themselves didn't change, because you say you are worried about this change in behavior you've observed in yourself; if your values had really changed, you wouldn't be worried.

Comment by gworley3 on Against value drift · 2019-10-30T17:37:20.103Z · score: 14 (7 votes) · EA · GW

I agree that there is something very confused about worries of value drift. I tried to write something up about it before, although that didn't land so well. Let's try again.

I keep noticing something is confused when people worry about value drift because, to me, it seems they are worried they might learn more, decide they were wrong, and now want something different. That to me seems good: if you don't update and change in the face of new information you're less alive and agenty and more dead and static. People will often phrase this, though, as a worry that their life will change and they won't, for example, want to be as altruistic because they are pulled away by other things; to me this is a kind of confused clinging to what is now and expecting it to forever be. If you truly, deeply care about altruism, you'll keep picking it in every moment, up until the world changes enough that you don't.

Talking in terms of incentives I think helps make this clearer, in that people may want to resist the world changing in ways that make it less likely to continue into a future they like. I think it's even more general, though, and we should be worried about something like "world state listing", where the world fails to be filled with more of what we desire and starts to change at random rather than as a result of our efforts. In this light, worry about value drift is a short-sighted way of noticing one doesn't want the world state to list.

Comment by gworley3 on Should CEA buy ea.org? · 2019-10-07T19:47:07.566Z · score: 1 (1 votes) · EA · GW

I think generally no. Given the quality of search engines today, a short domain name doesn't provide much (I'm not sure that it ever did, given my own experience with them, although maybe I'm unusual, and I'm sure I could be swayed by experimental results).

Comment by gworley3 on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-07T19:34:53.113Z · score: 3 (2 votes) · EA · GW

My guess is that reading a bunch of EA posts is not the thing you really care about if, say, what you care about is people engaging fruitfully on EA topics with people already in the EA movement.

By way of comparison, over on LW I have the impression (that is, I think I have seen this pattern but don't want to go to the trouble of digging up example links) that there are folks trying to engage on the site who claim to have read large chunks of the Sequences yet produce low-quality content, and there are also people who haven't read much of the literature who manage to write things that engage well with the site or do well engaging in rationalist discussions in person.

Reading background literature seems like one way that sometimes works to make a person into the kind of person who can engage fruitfully with a community, but I don't think it always works and it's not the thing itself, hence why I think you see such differing views when you look for related thinking on the topic.

Comment by gworley3 on What actions would obviously decrease x-risk? · 2019-10-07T19:21:22.580Z · score: 10 (5 votes) · EA · GW

Develop and deploy a system to protect Earth from impacts from large asteroids, etc.

Comment by gworley3 on What actions would obviously decrease x-risk? · 2019-10-07T19:19:04.263Z · score: 0 (3 votes) · EA · GW

+1

Further the OP gives a specific notion of obviousness to use here:

"obviously" (meaning: you believe it with high probability, and you expect that belief to be uncontroversial)

This doesn't leave a lot of room for debate about what is "obvious" unless you want to argue that a person doesn't believe it with high probability and they are wrong about their own belief about how controversial it is.

Comment by gworley3 on Why is the amount of child porn growing? · 2019-10-02T18:02:10.136Z · score: 6 (2 votes) · EA · GW

My suspicion is that we are seeing a "one time" increase due to the improved ability to create and share child abuse content. That is, my guess is the incidence rate of child abuse is not much changing, but its visibility is, because it's become easier to produce and share content featuring actions that were already happening privately. I could imagine some small (let's say 10%) marginal increase in abuse incentivized by the ability to share, but on the whole I expect the majority of child abusers are continuing to abuse at the same rate.

Most of this argument rests on a prior I have that unexpected large increases like this are usually not signs of change in the thing we care about, but instead of changes in secondary things that make the primary thing more visible. I'm sure I could be convinced this was evidence of an increase in child abuse proportionate with the reported numbers, but lacking such evidence I think it far more likely that it's mostly explained by the increased ease of producing and sharing content.

Comment by gworley3 on Is pain just a signal to enlist altruists? · 2019-10-02T17:46:08.311Z · score: 21 (9 votes) · EA · GW

One possibility, if this theory is correct, is that cluster headaches are a spandrel, i.e. a (very unfortunate) unintended side effect of the pain system being accidentally fired in a case when it isn't beneficial for it to be but doesn't get selected out because it doesn't have much of an impact on differential reproduction rates.

Another is that the causality is slightly different: pain is amped up in some cases to elicit altruism, but the mechanisms of pain are "lower in the stack" and so can be triggered by things other than those considered here. That would put cluster headaches outside the bounds of what this model needs to explain, since many things can accentuate pain; the cases considered here are one such thing, and the situation with cluster headaches is another.

Comment by gworley3 on Announcing the Buddhists in EA Group · 2019-09-24T17:39:07.208Z · score: 2 (2 votes) · EA · GW

It's a lot of things. I'd say that at its heart it's a way of life, or a way to live life. That way manifests itself in many ways such that we can talk about common Buddhist values, world models, practices, community forms, etc. but all of those are implementation details of how you bring about something deeper, more subtle, and more fundamental than any of them. It's a little hard to point at what that way is, though, so that I can say some words about it, because whatever words I say it will not be the thing itself, like the way a finger pointing at the moon is not itself the moon. If I had to pick some very few words to capture the essential nature of the Buddha way, I would say that it asks us to be here, now, in our totality, fully engaged in the act of living as compassionate agents embedded in the world.

Comment by gworley3 on Announcing the Buddhists in EA Group · 2019-09-23T16:59:01.476Z · score: 1 (1 votes) · EA · GW

Hmm, I'm not sure. The group is set to be publicly visible so anyone should be able to find it and ask to join, although it's a "private" group meaning only members can see who else are members and can see posts. The link is live and works for me, so I'm not sure. As an alternative you can search "Buddhists in Effective Altruism" on Facebook and that should find the group.

Comment by gworley3 on Does improving animal rights now improve the far future? · 2019-09-16T19:09:25.049Z · score: 4 (4 votes) · EA · GW
Through the spread of more humane attitudes, this would increase the expected value of the future of humanity by 0.01-0.1%.

I don't know how 80k evaluates the expected value of the future of humanity in other cases, but to me that number seems small in a way that suggests to me they have already "priced in" the uncertainty you are seeing.

Comment by gworley3 on How do you, personally, experience "EA motivation"? · 2019-08-16T22:52:56.215Z · score: 9 (8 votes) · EA · GW

I describe it as a calling. It's not so much that I feel a strong emotion as I feel like it's the most natural thing in the world that I would want to help people and do that in the most effective way possible. Since I focus specifically on x-risk from AI, I find this as a calling to address AI safety due to the natural way this feels like an obvious problem in desperate need of a solution.

For me it's very similar to the kind of "calling" people talk about in religious contexts. Now that I'm Buddhist, I conceptualize what happened when I was 18 that made me care about and start pursuing AI safety as the awakening of bodhicitta: although I already wanted to become enlightened at that time (even though I didn't really appreciate what that meant), it wasn't until I cared about saving humanity from AI that I developed the compassion and desire that drove me to bodhicitta. With time that calling has broadened, even though I mainly focus on AI safety.

Comment by gworley3 on Four practices where EAs ought to course-correct · 2019-08-01T17:41:09.076Z · score: 7 (5 votes) · EA · GW
This feels very timely, because several of us at CEA have recently been working on updating our resources for media engagement. In our Advice for talking with journalists guide, we go into more depth about some of the advice we've received. I’d be happy to have people’s feedback on this resource!

This seems to be a private document. When I try to follow that link I get a page asking for me to log in to Google Drive with a @centreforeffectivealtruism.org Google account, which I don't have (I'm already logged into Google with two other Google accounts, so those don't seem to give me enough permission to access this document).

Maybe this document is intended to be private right now, but if it's allowed to be accessed outside CEA it doesn't seem that you currently can.

Comment by gworley3 on Four practices where EAs ought to course-correct · 2019-07-30T17:44:36.738Z · score: 37 (19 votes) · EA · GW

I can't speak for any individual, but being careful in how one engages with the media is prudent. Journalists often have a larger story they are trying to tell over the course of multiple articles and they are actively cognitively biased towards figuring out how what you're saying confirms and fits in with that story (or goes against it such that you are now Bad because you're not with whatever force for Good is motivating their narrative). This isn't just an idle worry either: I've talked to multiple journalists and they've independently told me as much straight out, e.g. "I'm trying to tell a story, so I'm only interested if you can tell me something that is about that story".

Keeping quiet is probably a good idea unless you have media training so you know how to interact with journalists. Otherwise you function like a random noise generator that might accidentally generate noise that confirms what the journalist wanted to believe anyway and if you don't endorse whatever the journalist believes you've just done something that works against your own interests and you probably didn't even realize it!

Comment by gworley3 on If physics is many-worlds, does ethics matter? · 2019-07-10T17:54:46.577Z · score: 3 (2 votes) · EA · GW

So assuming the Copenhagen interpretation is wrong and something like MWI or zero-worlds or something else is right, it's likely the case that there are multiple, disconnected causal histories. This is true to a lesser extent even in classical physics due to the expansion of the universe and the gradual shrinking of Hubble volumes (light cones), so even a die-hard Copenhagenist should consider what we might generally call acausal ethics.

My response is generally something like this, keeping in mind my ethical perspective is probably best described as virtue ethics with something like negative preference utilitarianism applied on top:

  • Causal histories I am not causally linked with still matter for a few reasons:
    • My compassion can extend beyond causality in the same way it can extend beyond my city, country, ethnicity, species, and planet (moral circle expansion).
    • I am unsure what I will be causally linked with in the future (veil of ignorance).
    • Agents in other causal histories can extend compassion for me in kind if I do it for them (acausal trade).
  • Given that other causal histories matter, I can:
    • act to make other causal histories better in those cases where I am currently causally connected but later won't be (e.g. MWI worlds that will split causally later from the one I will find myself in that share a common history prior to the split),
    • engage in acausal trade to create in the causal history I find myself in more of what is wanted in other causal histories when the tradeoffs are nil or small knowing that my causal history will receive the same in exchange,
    • otherwise generally act to increase the measure (or if the universe is finite, count) of causal histories that are "good" ("good" could mean something like "want to live in" or "enjoy" or something else that is a bit beyond the scope of this analysis).

Comment by gworley3 on For older EA-oriented career changers: discussion and community formation · 2019-07-02T23:47:52.215Z · score: 3 (3 votes) · EA · GW

Google Drive has a simple survey function that lots of people use and is pretty convenient and can dump the results in Google Sheets for export. For example, it seems to be good enough for Scott's monster SSC reader survey.

Comment by gworley3 on Effective Altruism is an Ideology, not (just) a Question · 2019-06-28T23:37:32.420Z · score: 3 (2 votes) · EA · GW

Sure, this is the ideology part that springs up and people end up engaging with. Thinking of EA as a question can help us hew to a less political, less assumption-laden approach, but this can't stop people entirely from forming an ideology anyway and hewing to that instead, producing the types of behaviors you see (and that I'm similarly concerned about, as I've noticed and complained about similar voting patterns as well).

The point of my comment was mostly to save the aspiration and motivation for thinking of EA as a question rather than ideology, as I think if we stop thinking of it as a question it will become nothing more than an ideology and much of what I love about EA today would then be lost.

Comment by gworley3 on Effective Altruism is an Ideology, not (just) a Question · 2019-06-28T18:04:38.224Z · score: 46 (24 votes) · EA · GW

You are, of course, right: effective altruism is an ideology by most definitions of ideology, and you give a persuasive argument of that.

But I also think it misses the most valuable point of saying that it is not.

I think what Helen wrote resonates with many people because it reflects a sentiment that effective altruism is not about one thing, about having the right politics, about saying the right things, about adopting groupthink, or any of the many other things we associate with ideology. Effective altruism stays away from the worst tribalism of other -isms by being able to continually refresh itself by asking the simple question, "how can I do the most good?"

When we ask this question we don't get so tied up in what others think, what is expected of us, and what the "right" answer is. We can simply ask, right here and right now, given all that I've got, what can I do that will do the most good, as I judge it? Simple as that we create altruism through our honest intention to consider the good and effectiveness through our willingness to ask "most?".

Further, thinking of effective altruism as more question than ideology is valuable on multiple fronts. When I talk to people about EA, I could talk about Singer or utilitarianism or metaethics, and sometimes for some people those topics are the way to get them engaged, but I find most people resonate most with the simple question "how can we do the most good?". It's tangible, it's a question they can ask themselves, and it's a clear practice of compassion that need not come with any overly strong pre-conceived notions, and so everyone feels they can ask the question and find an answer that may help make the world better.

When we approach EA this way, even if it doesn't connect for someone or even if they are confused in ways that make it hard for them to be effective, they still have the option to engage in it positively as a practice that can lead them to more effectiveness and more altruism over time. By contrast, if they think of EA as an ideology that is already set, they see themselves outside it and with no path to get in, and so leave it off as another thing they are not part of or is not a part of them—another identity shard in our atomized world they won't make part of their multifaceted lives.

And to those who choose not to consider the most good, the people who ask this question may seem silly, but hardly threatening. An ideology can mean an opposing tribe you have to fight against so your own ideology has the resources to win. A question is just a question, and if a bunch of folks want to spend their time asking a question you think you already know the answer to, so much the better that you can offer them your answer, and so much less of a threat do they pose, those silly people wasting time asking a question. EA as question is flexibility and strength and pliancy to overcome those who would oppose and detract from our desire to do more good.

And that I think is the real power of thinking of EA as more question than ideology: it's a source of strength, power, curiosity, freedom, and alacrity to pursue the most good. Yes, it may be that there is an ideology around EA, and yes that ideology may offer valuable insights into how we answer the question, but so long as we keep the question first and the ideology second, we sustain ourselves with the continually renewed forces of inquiry and compassion.

So, yes, EA may be an ideology, but only by dint of the question that lies at its heart.

Comment by gworley3 on Ways Frugality Increases Productivity · 2019-06-26T17:19:23.483Z · score: 17 (11 votes) · EA · GW

I think much of the work being done by what you think of as frugality here is actually being done by slack: creating conditions under which you have enough flexibility to take advantage of situations when they arise and not be so attached to things as they are that you miss opportunities you would value taking. Only in your first case do I think frugality does the heavy lifting; everywhere else it is a way you created slack for yourself, which could have been accomplished many other ways while living a more materially lavish life.

Comment by gworley3 on Best thing at EAG SF 2019? · 2019-06-24T20:50:30.068Z · score: 8 (5 votes) · EA · GW

I'll go ahead and give an answer to get us started.

The best thing for me was discovering that there is a way I can take an idea I had a while ago and apply it within the framework of iterated amplification to likely make the idea both more relatable and more useful in the nearer term. This discovery came thanks to one of the one-on-one meetings I scheduled via the network feature of the conference app and that conversation leading to a mutual realization that this idea might have new legs via iterated amplification. I think it is unlikely I would have figured that out without the conversation facilitated by the networking features of the app!

Comment by gworley3 on Increase Impact by Waiting for a Recession to Donate or Invest in a Cause. · 2019-06-21T19:54:45.748Z · score: 2 (2 votes) · EA · GW

I suspect much of the trouble is the same as the trouble investors have trying to take advantage of this strategy: it requires making a better prediction than the one the market is implicitly making with its current prices. Although it seems reasonable to predict that a recession will come "soon", since it's been unusually long since the last one and they appear cyclically (approximately in step with the roughly 5-year business cycle?), making that prediction too soon and switching to hoarding assets in anticipation of a drop, so you can re-buy assets at the bottom to maximize gains on the way back up, will result in unnecessarily giving up potential gains. You might make a lucky guess once, but in the long run you'd need some reason to believe you can predict recessions, or else you will perform worse than the market, not better.

So this seems relevant only if you are so good at predicting recessions that you can use that skill to make money and then donate it, and it will probably also require keeping quiet about your prediction and your evidence so you can maximize the advantage you take of it (up to the limit of your funds, including the use of leverage, which might cause you to carefully share your knowledge in an attempt to fill gaps in opportunity you wouldn't be able to take advantage of yourself). If you're a non-profit, a regular donor, or anyone else, you're probably best off not trying to beat the market, and only accounting for this in the normal way of holding funds in reserve so you can weather temporary shocks to the market, i.e. have enough operating capital that you won't have to draw down on your investments before they recover.

Comment by gworley3 on What books or bodies of work, not about EA or EA cause areas, might be beneficial to EAs? · 2019-06-12T18:29:13.669Z · score: 3 (5 votes) · EA · GW

Although related, EA has grown to include many people who don't share the rationalist/LW background most prevalent among EAs concerned with x-risk, so LessWrong and especially the Sequences are probably worth mentioning.

Comment by gworley3 on Should we Resist Taxes? · 2019-05-30T19:42:17.772Z · score: 8 (4 votes) · EA · GW

Taxes seem tricky. I view it as generally good that governments allow offsetting of tax burden via donation to allow more flexibility in allocation of money to public goods, and in this way taxes being used for purposes you disagree with can actually incentivize spending on things we each care about more. Of course, it would be nice if you could just give more and be taxed less, and eventually donation offsetting runs out because governments still need some money.

My guess is that tax resistance won't be an effective cause area unless you especially believe there is large harm caused to people by making them pay taxes (a sort of libertarian suffering consequentialist argument), but for a variety of reasons it is probably worthwhile to minimize the amount you pay in taxes, i.e. don't give up money to a government that you could have otherwise spent in a way better aligned with your interests.

There is also some impact here based on whom you pay taxes to. A citizen of the USA, like me, does more to fund war than a citizen of Switzerland, and thus if I were to pay less tax to the USA than a Swiss citizen pays to Switzerland, I would be doing more to reduce war spending, while the Swiss citizen would more likely be reducing funding of other public goods they endorse supporting.

On the whole I don't think we can conclude anything especially strong, but it does at least seem like an interesting case to think about to sharpen our skills!

Comment by gworley3 on Why do you downvote EA Forum posts & comments? · 2019-05-30T19:29:05.700Z · score: 11 (3 votes) · EA · GW

For what it's worth, the reason I dislike yay/boo voting is that it incentivizes people towards posting/commenting in ways that maximize applause lights at the expense of saying things that are more useful to other purposes, like becoming less confused and doing more good. I worry that the current voting system is too heavily suffering from Goodhart effects and as a result shaping people's motivation in posting and commenting in ways that work against what most people would prefer we do on this and its sister forums (though of course maybe many people genuinely want applause lights, though the comments on this post seem to suggest otherwise).

Comment by gworley3 on Drowning children are rare · 2019-05-30T19:14:16.294Z · score: 18 (8 votes) · EA · GW

What do you mean by "better" here? That there is a discrepancy suggests to me that people are voting for different reasons between the two places, not that the voting is better in some universal way (compare the way "better" in economics could mean redistribution to things you like or more efficiency so everyone gets more of what they want).

Also, just further noting voting patterns, no disrespect intended to you kbog, but your comment contains little content (in a very straightforward sense: it is short) and is purely a statement of opinion with no justification provided (though some is implied), yet at the time of writing it has 6 votes for 14 karma. Relative to what I see on average comments on EAF, where more thorough comments receive less karma and less attention, this suggests to me you hit an applause light and people are upvoting it for that reason rather than anything else.

None of this is to say people can't vote the way they like or that you don't deserve the karma. I merely seek to highlight how people seem to use voting today. The way people use voting is not aligned with how I would like voting to be used, hence why I mention these things and am interested in them, but it is also not up to me to shape this particular mechanism.

Comment by gworley3 on Drowning children are rare · 2019-05-30T19:03:22.981Z · score: 9 (7 votes) · EA · GW

I think we lack clear evidence to conclude that, though. I can just as easily believe the story, given what we've seen, that EAF users are more likely to downvote anything criticizing EA (just as LW users are more likely to downvote anything that goes against the standard interpretation of LW rationality). I'd be very interested to know if there are posts that both criticize something EA in a cogent way as this post does and don't receive large numbers of downvotes.

Also, don't forget many posts that have pro-EA results are about equally well reasoned as what we see here, but receive overwhelmingly positive votes, even if they receive criticism in the comments. So the question remains, why downvote this post when we respond to it and not downvote other posts when we criticize them?

Comment by gworley3 on Why do you downvote EA Forum posts & comments? · 2019-05-29T23:35:24.031Z · score: 9 (5 votes) · EA · GW

My general algorithm for voting is to upvote that which I would have liked to have had recommended to me to read, and to downvote that which I would be disappointed to have recommended to me, where the criterion for wanting something recommended is: does it thoughtfully engage with a topic in a way that advances my understanding? (In the case that my understanding already includes what is presented, I try to imagine that I didn't know what I know and vote from that place of counterfactual ignorance.) I don't vote on things that either fail to pique my interest or that I feel indifferent about having recommended to me.

Strong votes (up and down) go to things that I would, respectively, be visibly happy or sad if someone recommended it to me, i.e. someone sent me an email about it and I light up and smile or frown and droop when I read the content.

Comment by gworley3 on [Question] 20,000/40,000 Hours- MidCareer Options · 2019-05-29T18:19:39.352Z · score: 11 (7 votes) · EA · GW

Since I am both mid-career and EA, maybe I can say a little about this even if I can't give a full answer.

I was concerned about existential risk due to AI prior to the start of my career (heck, prior to going to college, and this was in 2000), but for a variety of reasons I failed to do much directly about this. I got distracted by life, had to get a job to deal with more pressing needs, and spent several years just trying to get along without putting much effort into AI safety.

Then a couple of years ago my life got better, I had more slack, and I used that slack to start working on AI safety as a "hobby". So far this has proven pretty successful: I've published some things, had many interesting conversations with people who are also doing direct work on AI safety (part or full time), and helped influence research directions and progress.

I don't know what this will turn into, but the hobby model is worth considering as a way to transition mid-career: get interested in and start working on something you care about, and eventually maybe transition to doing that work full time. Plus you'll be somewhat unique in that you'll be carrying forward all your existing career capital that others in your chosen space likely won't have.

The downside of this approach is that it requires you to have enough time and energy to do it. To make progress here it may be necessary to take a less demanding job to create that time and energy, or to give up other commitments.

Definitely interested to see what others suggest or have tried.

Comment by gworley3 on My state allows for a 1 member nonprofit board and I like that idea in order to keep my vision. However I want to have a "board of directors", but have them as a body to give me advice, as opposed to the traditional governing board? How can I actually apply this and what non misleading title can I give to the "board of directors"? · 2019-05-29T18:07:15.130Z · score: 9 (5 votes) · EA · GW

Small formatting tip: it would be nice if you put a very short title to your question in the title and asked the full question in the body of the question. I found it a bit hard to read the question when the whole thing is in title styling.

Comment by gworley3 on Drowning children are rare · 2019-05-29T17:56:13.578Z · score: 11 (8 votes) · EA · GW

Also also, just want to register the observation that this post seems to be further evidence of my continuing claim that votes on LW/EAF/AF are boos/yays: at the time of this writing the score here is 0 with 17 votes, while on LW it's 36 with 24 votes. I don't want to detract from the direct discussion of the topic, but I find that discrepancy very interesting, and clearer evidence than we've seen in the past that voting patterns are a poor signal of post quality.

Comment by gworley3 on Framing Effective Altruism as Overcoming Indifference · 2019-05-28T19:45:33.402Z · score: 1 (1 votes) · EA · GW
Instead, I use an "unawareness" framework. Rather than "most people are indifferent to these problems", I say something like "most people aren't fully aware of the extent of the problems, or do know about the problems but aren't sure how to address them; instead, they stick to working on things they feel they understand better".

I would guess this is similarly why "woke" has caught on as a popular way of talking about those who "wake up" to the problems around them that they were previously ignorant of and "asleep to": it's a framing that lets you feel good about becoming aware of, and doing more about, various issues in the world without having to feel too bad about not having done anything about them in the past, so you aren't as much on the defensive when someone tries to "shake you awake" to those problems.

Comment by gworley3 on Please use art to convey EA! · 2019-05-28T19:40:55.537Z · score: 5 (2 votes) · EA · GW

I like this idea a lot. I've been playing with the idea of writing a bildungsroman around some of my insights into personal development, which of course touches on topics related to EA and rationality, so I'm quite fond of seeing others do this as well.

What's worth noting is that I haven't done it because I'm constantly pulled by other things that seem higher priority. This is maybe the big challenge for making more EA art: its comparative benefit. I'm tempted to say "maybe there will be more time for EA art when EA is bigger", but if that's the case it's a chicken-and-egg problem because EA art seems to be a great way to grow the movement.

So on the whole my guess is we can't directly go for EA art beyond making sure folks in the community are more aware that it's a thing they could maybe do so that on the margin we might get more EA art replacing EA-relevant art that would have otherwise been produced.

Comment by gworley3 on Jade Leung: Why Companies Should be Leading on AI Governance · 2019-05-16T17:31:32.078Z · score: 5 (3 votes) · EA · GW

For a related perspective, I've written (here for a general audience, here for an academic one) about using self-regulatory organizations, which I think could be a natural extension of this position depending on implementation.

Comment by gworley3 on How does one live/do community as an Effective Altruist? · 2019-05-16T17:28:43.782Z · score: 16 (7 votes) · EA · GW

There's been a good deal of recent, related discussion over on LW with a different framing which is likely relevant to this.

Comment by gworley3 on Non-Profit Insurance Agency · 2019-05-14T01:54:51.237Z · score: 1 (1 votes) · EA · GW

I don't know the answer to these specific questions, as I've not done it. A 501(c)(3) organization is tax advantaged on its "profits", but only in certain ways and not others, and in my engagement in helping run such orgs it's never come up (or if it has someone else handled it before I learned about it). It's probably best to recruit the advice of a CPA or other expert in this area. My main goal was just to warn you that operating as anything other than an LLC (whether passthrough or not) is more complicated, so it's seriously worth evaluating the options and seeing if you can't get most of what you want by operating your LLC for public benefit so long as all the partners (so probably just you!) are on board with it.

Comment by gworley3 on Structure EA organizations as WSDNs? · 2019-05-13T18:00:42.747Z · score: 3 (2 votes) · EA · GW

My experience with organizational design is that the formal structure tends to follow, not lead, the informal structures that arise among the people in the organization. Yes, over time organizations become "ossified" such that the formal structure also creates the informal structure, but this is not much the case in early and small orgs, although there are usually some exceptions, as certain formal relationships develop early: for example, the founder(s) or some other persons having authority via legal and financial control that backs their ability to influence others and hence seeds the creation of the org structure.

Overall this is to say my guess is that these sorts of structures either already arise naturally or, where they don't, other incentives push those organizations in other directions.

---

That's one way to explain my thinking. Another is this:

I read your post as suggesting something like "hey, what if we tried this different org structure; I think it might be better", but to actually try a different org structure you have to have people who want to relate to each other in a different way. It's typically only at large orgs with ossified structures where people are not relating to each other in the way they would like and where suggesting a change of org structure might manage to shift an equilibrium by getting everyone to re-coordinate towards something they prefer.

In a small org you probably can't make the structure much other than what it is unless you first change the people who are creating the structure to be the kind of people who would create the desired structure. That's because I expect the existing structure to already be a natural equilibrium that is roughly correlated with the kind of structure desired proportional to the amount of (official) control each person in the org has. Thus unlike in a large org there is not a hope that you can hit reset and get a different outcome by breaking the existing inadequate equilibrium.

Comment by gworley3 on Non-Profit Insurance Agency · 2019-05-13T17:41:11.404Z · score: 5 (4 votes) · EA · GW

When you say "non-profit", what comes to my mind is operating as a legally and financially advantaged organization with special non-profit status. But non-profits (especially if you are interested in 501(c)(3) tax status) are more complex than LLCs, with more strenuous reporting requirements. Guessing that you're currently operating as an LLC, I'd seriously consider whether there's any actual benefit to operating as a non-profit. Presumably you wouldn't be taking donations, so you wouldn't need special tax status to let your donors deduct their donations; unless whatever status you obtain also reduces taxes on profits in a way that could not already be gained by donating the profits of an LLC, it's probably not worthwhile. If you're a C-corp then go ahead; it's probably similarly complex, if different.

I bring all this up because it's possible and easy to operate an LLC for public benefit, and you can take whatever measures you like to demonstrate that you are doing this to interested folks, so you should probably consider that the default course and only do something different if you reckon there are clear benefits from operating another way.

Comment by gworley3 on Why we should be less productive. · 2019-05-10T17:45:00.467Z · score: 20 (8 votes) · EA · GW

Having spent significant time around both the EA and the LW community and having written several controversial posts and then subsequently talked with folks who downvoted those posts, I now have strong reason to believe that most downvotes are in fact "boos" rather than anything more substantive. When people have substantive disagreements with posts they more often post comments indicating that and just don't vote on a post either way.

I'm sure this is not universally true, but it's been my experience. So when I see downvotes on a post that isn't obviously spam, trolling, or otherwise clearly low-quality (rather than, as in this case, just not containing much content; such posts are clearly not universally downvoted, since many low-content posts get neutral or positive responses, which given their lack of content must be a function of agreement with the idea presented), I find it reasonable to ask "why 'boo' at this?". Hence my comment as a possible explanation for more "boos" than "yays".

I agree it would be preferable if people didn't use votes as "boos" and "yays", and I think we could fix this—maybe by only allowing people who comment on a post to vote on it, although I think that risks creating lots of meaningless comments because people just want to vote, so there is probably some other solution that would work better—but unfortunately my experience suggests that's exactly how most people vote on posts and comments.

Comment by gworley3 on Why we should be less productive. · 2019-05-09T19:07:47.205Z · score: 6 (6 votes) · EA · GW

Honestly, I think even if you only value getting "productive" things done and don't much value "unproductive" things, there's a lot of evidence that you can be more productive by being less productive. The mechanism of action is something like this: by consistently pushing yourself beyond what you can comfortably do, you exhaust your capacity for work to the point where you "burn out" and then find yourself unmotivated to do anything while you recover. A person can be sustainably more productive by giving themselves unproductive time to recover.

Meta note: that you got downvotes (I can surmise this from the number of votes and the total score) seems to suggest this is advice people don't want to hear, but maybe they need.

Comment by gworley3 on Meditation and Effective Altruism · 2019-04-23T19:34:42.586Z · score: 15 (7 votes) · EA · GW

I'd say a teacher is even more important than that.

Meditation is a powerful class of techniques for examining the mind, and sometimes people struggle to deal with what they discover doing it. Meditation is not all upside, as this post suggests; plenty of people have negative experiences as part of meditation practice, although they usually, with some guidance from a teacher, see their way through them and find themselves in a better place at the end of the experience. In fact, meditation can be especially rough if you have a lot of psychological "shadow", i.e. "stuff" or "baggage" you would normally think of working through in therapy, since meditation won't on its own help with that stuff and can make the experience of it worse as you see it more clearly. A teacher can help you deal with these sorts of issues, offering advice, practices, and the compassion of another human as you deal with the negatives that can come up.

This isn't to put anyone off meditation, just to give appropriate warning that it's a very intimate and powerful practice that can bring up positive as well as negative experiences, and navigating that on your own can work out for some people but doesn't for everyone.

Comment by gworley3 on Most important unfulfilled role in the EA ecosystem? · 2019-04-05T18:23:58.845Z · score: 7 (4 votes) · EA · GW

This is a great answer. I would have said something like "leadership", in that EA has leaders but few of them are people you would march into battle and die for. I feel like there's almost no one in EA proper, and only a couple of people on its edges (mostly because their cause area was taken up by EA rather than because they came from within EA), who have demonstrated something like the 10x skill of leadership and motivation.

Put more colloquially, EA needs a Steve Jobs, an FDR, a Winston Churchill, an Oda Nobunaga.

Comment by gworley3 on Should EA Groups Run Organ Donor Registration Drives? · 2019-03-28T19:21:07.340Z · score: 2 (2 votes) · EA · GW

I'm always of mixed opinion about organ donation. Yes, it seems straightforwardly beneficial, but it's also at odds with surprising things. For example, I'm signed up for cryonics, which means it's very important that I not be an organ donor. My organs would be unusable after perfusion, and even if I were an organ donor willing to accept a lower-quality preservation by possibly not having my regular circulatory system in place to help with cooling, it would still be a bad deal: doctors would hold on to my body for an unspecified amount of time, in not necessarily ideal preservation conditions for my brain, before maybe releasing me to the cryonics team hours or days later.

This would effectively mean pitting organ donation and life extension, at least in part, against each other within EA. Not necessarily a blocker if people think more organ donation among people who don't sign up for cryonics is worth it in expectation over, say, getting more people signed up for cryonics, but it's worth factoring into the calculation.

Comment by gworley3 on Effective Altruism and Meaning in Life · 2019-03-18T18:44:29.901Z · score: 4 (4 votes) · EA · GW

I really like this. We can be effective, but we can't do that if we're all sad and depressed because we tie our sense of self worth to something unattainable. I also enjoyed the fun stylistic choices!

Comment by gworley3 on [Link] A Modest Proposal: Eliminate Email · 2019-03-18T15:14:59.514Z · score: 5 (4 votes) · EA · GW

I always find these sentiments strange, because what I love about email, and dislike about other forms of online communication, is that email is strongly asynchronous and puts me in control of how I choose to interact with it. Slack, IRC, and other more synchronous forms of communication (even when supposedly asynchronous, they are often designed and used with synchronous use in mind) make it much harder for me to control how I use them, because there are stronger cues to use them in interrupt-driven ways. Email can, of course, degenerate in this way, and that seems to be what happens in some cultures (offices, etc.), but then the problem is the culture, not the tool.

If you dislike a particular email (or Slack or in-person) culture, change the culture, not the tools. If you don't, you'll just end up unhappy on a different tool.

Comment by gworley3 on The career coordination problem · 2019-03-17T15:31:13.220Z · score: 4 (4 votes) · EA · GW

I think there is enough difficulty in achieving specialization that you are better off ignoring coordination concerns here in favor of choosing based on personal inclination. It's hard to put in all the time it takes to become an expert in something, it's even harder when you don't love that something for its own sake, and my own suspicion is that without that love you will never achieve the highest level of expertise. So it's best to look for the confluence of what you most love and what is most useful, rather than to worry about coordinating over usefulness. You and everyone else are not sufficiently interchangeable when it comes to developing the specialization needed to be helpful to EA causes.

Comment by gworley3 on Identifying Talent without Credentialing In EA · 2019-03-12T00:02:01.861Z · score: 5 (4 votes) · EA · GW

I very much like this approach to finding ways to deal with credentialism; however, I'm unsure how much of an impact credentials are having on current EA hiring. My impression is that EA orgs are hiring based more on work experience than on credentials, and in fact are unusually willing to consider candidates without traditional credentials (EA orgs within universities being an exception, since their hiring processes are tied to those of the host institution). This suggests your premise (EA orgs not hiring folks because they lack credentials) may not apply, but I think your solutions apply anyway, because they also address the case where candidates lack experience rather than credentials.

Comment by gworley3 on Making discussions in EA groups inclusive · 2019-03-05T01:48:30.134Z · score: 2 (2 votes) · EA · GW

This seems to miss the point of my question, because it already seems to be the case that the people who could do something don't much engage in these discussions. Rather, it's primarily the folks who are causing the feelings of alienation, and who do not themselves feel alienated, that start and engage in the discussions that alienate others. Presuming they do so because they either don't consider their actions contrary to the purpose of inclusiveness or don't value inclusiveness, what actions can those who are alienated, or who value inclusiveness, take to address this issue? That is, if you feel there are things being said and done that cause alienation, how do you get that to stop, other than just hoping that other people decide on their own not to do it anymore?

Comment by gworley3 on Making discussions in EA groups inclusive · 2019-03-04T20:56:59.365Z · score: 1 (1 votes) · EA · GW

Identifying this is a start, but it remains unclear to me that this post will result in any action that changes anything (realizing some people may disagree that this is the experience of some people in the community, or that their experience of alienation matters). But supposing you agree that this post describes a real problem and that the problem deserves solving, what might we do as a community to be more inclusive?

I'm thinking here of specific, actionable ideas, not generic stuff like "spread awareness". Additionally, these need to be actions that will be carried out by the people who care about this issue to make the community different than it is today, not demands that people in the alienator group change, because that's unlikely to be an effective strategy. I imagine most actions that would work well would be of the form "I want EA to be more inclusive, and to make it that way I'm going to do X". What is X?

Comment by gworley3 on Profiting-to-Give: harnessing EA talent with a new funding model · 2019-03-04T18:41:30.763Z · score: 3 (3 votes) · EA · GW

Hmm, I wonder why there were some downvotes. This seems like a rather creative idea to me to find a way towards creating for-profit endeavors that may help soak up excess talent and generate additional revenue for EA projects (not to mention some of these EA-corps might directly do work that has benefit to people; Wave comes to mind as a possible example of such an existing organization).