Announcing the Buddhists in EA Group 2019-07-02T20:41:23.737Z · score: 22 (8 votes)
Best thing at EAG SF 2019? 2019-06-24T19:19:49.700Z · score: 16 (7 votes)
What movements does EA have the strongest synergies with? 2018-12-20T23:36:55.641Z · score: 8 (2 votes)
HLAI 2018 Field Report 2018-08-29T00:13:22.489Z · score: 10 (10 votes)
Avoiding AI Races Through Self-Regulation 2018-03-12T20:52:06.475Z · score: 4 (4 votes)
Prioritization Consequences of "Formally Stating the AI Alignment Problem" 2018-02-19T21:31:36.942Z · score: 2 (2 votes)


Comment by gworley3 on If physics is many-worlds, does ethics matter? · 2019-07-10T17:54:46.577Z · score: 3 (2 votes) · EA · GW

So assuming the Copenhagen interpretation is wrong and something like MWI or zero-world or something else is right, it's likely the case that there are multiple, disconnected causal histories. This is true to a lesser extent even in classical physics due to the expansion of the universe and the gradual shrinking of Hubble volumes (light cones), so even a die-hard Copenhagenist should consider what we might call generally acausal ethics.

My response is generally something like this, keeping in mind my ethical perspective is probably best described as virtue ethics with something like negative preference utilitarianism applied on top:

  • Causal histories I am not causally linked with still matter for a few reasons:
    • My compassion can extend beyond causality in the same way it can extend beyond my city, country, ethnicity, species, and planet (moral circle expansion).
    • I am unsure what I will be causally linked with in the future (veil of ignorance).
    • Agents in other causal histories can extend compassion for me in kind if I do it for them (acausal trade).
  • Given that other causal histories matter, I can:
    • act to make other causal histories better in those cases where I am currently causally connected but later won't be (e.g. MWI worlds that will split causally later from the one I will find myself in that share a common history prior to the split),
    • engage in acausal trade to create in the causal history I find myself in more of what is wanted in other causal histories when the tradeoffs are nil or small knowing that my causal history will receive the same in exchange,
    • otherwise generally act to increase the measure (or if the universe is finite, count) of causal histories that are "good" ("good" could mean something like "want to live in" or "enjoy" or something else that is a bit beyond the scope of this analysis).
Comment by gworley3 on For older EA-oriented career changers: discussion and community formation · 2019-07-02T23:47:52.215Z · score: 3 (3 votes) · EA · GW

Google Drive has a simple survey function that lots of people use; it's pretty convenient and can dump the results into Google Sheets for export. For example, it seems to be good enough for Scott's monster SSC reader survey.

Comment by gworley3 on Effective Altruism is an Ideology, not (just) a Question · 2019-06-28T23:37:32.420Z · score: 3 (2 votes) · EA · GW

Sure, this is the ideology part that springs up and people end up engaging with. Thinking of EA as a question can help us hew to a less political, less assumption-laden approach, but this can't stop people entirely from forming an ideology anyway and hewing to that instead, producing the types of behaviors you see (and that I'm similarly concerned about, as I've noticed and complained about similar voting patterns as well).

The point of my comment was mostly to save the aspiration and motivation for thinking of EA as a question rather than ideology, as I think if we stop thinking of it as a question it will become nothing more than an ideology and much of what I love about EA today would then be lost.

Comment by gworley3 on Effective Altruism is an Ideology, not (just) a Question · 2019-06-28T18:04:38.224Z · score: 43 (22 votes) · EA · GW

You are, of course, right: effective altruism is an ideology by most definitions of ideology, and you give a persuasive argument of that.

But I also think it misses the most valuable point of saying that it is not.

I think what Helen wrote resonates with many people because it reflects a sentiment that effective altruism is not about one thing, about having the right politics, about saying the right things, about adopting groupthink, or any of the many other things we associate with ideology. Effective altruism stays away from the worst tribalism of other -isms by being able to continually refresh itself by asking the simple question, "how can I do the most good?"

When we ask this question we don't get so tied up in what others think, what is expected of us, and what the "right" answer is. We can simply ask, right here and right now, given all that I've got, what can I do that will do the most good, as I judge it? Simple as that, we create altruism through our honest intention to consider the good, and effectiveness through our willingness to ask "most?".

Further, thinking of effective altruism as more question than ideology is valuable on multiple fronts. When I talk to people about EA, I could talk about Singer or utilitarianism or metaethics, and sometimes for some people those topics are the way to get them engaged, but I find most people resonate most with the simple question "how can we do the most good?". It's tangible, it's a question they can ask themselves, and it's a clear practice of compassion that need not come with any overly strong preconceived notions, and so everyone feels they can ask themselves the question and find an answer that may help make the world better.

When we approach EA this way, even if it doesn't connect for someone or even if they are confused in ways that make it hard for them to be effective, they still have the option to engage in it positively as a practice that can lead them to more effectiveness and more altruism over time. By contrast, if they think of EA as an ideology that is already set, they see themselves outside it and with no path to get in, and so leave it off as another thing they are not part of or is not a part of them—another identity shard in our atomized world they won't make part of their multifaceted lives.

And for those who choose not to consider the most good, seeing that there are those who ask this question may seem silly to them, but hardly threatening. An ideology can mean an opposing tribe you have to fight against so your own ideology has the resources to win. A question is just a question, and if a bunch of folks want to spend their time asking a question you think you already know the answer to, so much the better that you can offer them your answer and so much the less they pose a threat, those silly people wasting time asking a question. EA as question is flexibility and strength and pliancy to overcome those who would oppose and detract from our desire to do more good.

And that I think is the real power of thinking of EA as more question than ideology: it's a source of strength, power, curiosity, freedom, and alacrity to pursue the most good. Yes, it may be that there is an ideology around EA, and yes that ideology may offer valuable insights into how we answer the question, but so long as we keep the question first and the ideology second, we sustain ourselves with the continually renewed forces of inquiry and compassion.

So, yes, EA may be an ideology, but only by dint of the question that lies at its heart.

Comment by gworley3 on Ways Frugality Increases Productivity · 2019-06-26T17:19:23.483Z · score: 17 (11 votes) · EA · GW

I think much of the work being done by what you think of as frugality here is actually being done by slack: creating conditions under which you have enough flexibility to take advantage of situations when they arise and not be so attached to things as they are that you miss opportunities you would value having taken. Only in your first case do I think frugality does the heavy lifting; everywhere else it is a way you created slack for yourself, but that could have been accomplished many other ways while living a more materially lavish life.

Comment by gworley3 on Best thing at EAG SF 2019? · 2019-06-24T20:50:30.068Z · score: 8 (5 votes) · EA · GW

I'll go ahead and give an answer to get us started.

The best thing for me was discovering that there is a way I can take an idea I had a while ago and apply it within the framework of iterated amplification to likely make the idea both more relatable and more useful in the nearer term. This discovery came thanks to one of the one-on-one meetings I scheduled via the network feature of the conference app and that conversation leading to a mutual realization that this idea might have new legs via iterated amplification. I think it is unlikely I would have figured that out without the conversation facilitated by the networking features of the app!

Comment by gworley3 on Increase Impact by Waiting for a Recession to Donate or Invest in a Cause. · 2019-06-21T19:54:45.748Z · score: 2 (2 votes) · EA · GW

I suspect much of the trouble is the same as the trouble investors have trying to take advantage of this strategy: it requires making a better prediction than the prediction the market is implicitly making with its current prices. Although it seems reasonable to predict that a recession will come "soon" since it's been unusually long since the last one and they appear cyclically (roughly in line with the approximately 5-year business cycle?), making that prediction too soon and switching to hoarding assets in anticipation of a drop so you can re-buy assets when they are at the bottom to maximize gains on the way back up will result in unnecessarily giving up potential gains. You might make a lucky guess once, but in the long run you'd need some reason to believe you can predict recessions or else you will perform worse than the market, not better.

So this seems probably only relevant if you are so good at predicting recessions that you can use that to make money and then donate that money, and it will probably also require keeping quiet about your prediction and your evidence so that you can maximize the amount of advantage you can take (up to the limit of your funds, including the use of leverage, which might cause you to carefully share your knowledge in an attempt to fill gaps in opportunity you wouldn't be able to take advantage of yourself). If you're a non-profit, regular donor, or anyone else, you're probably best off not trying to beat the market, and only accounting for this in the normal way of holding funds in reserve so you can weather temporary shocks to the market, i.e. have enough operating capital that you won't have to draw down on your investments before they recover.

Comment by gworley3 on What books or bodies of work, not about EA or EA cause areas, might be beneficial to EAs? · 2019-06-12T18:29:13.669Z · score: 3 (5 votes) · EA · GW

Although related, EA has grown and includes many people who don't share the rationalist/LW background most prevalent among EAs concerned with x-risk, so LessWrong and especially the Sequences are probably worth mentioning.

Comment by gworley3 on Should we Resist Taxes? · 2019-05-30T19:42:17.772Z · score: 8 (4 votes) · EA · GW

Taxes seem tricky. I view it as generally good that governments allow offsetting of tax burden via donation to allow more flexibility in allocation of money to public goods, and in this way taxes being used for purposes you disagree with can actually incentivize spending on things we each care about more. Of course, it would be nice if you could just give more and be taxed less, and eventually donation offsetting runs out because governments still need some money.

My guess is that tax resistance won't be an effective cause area unless you especially believe there is large harm caused to people by making them pay taxes (a sort of libertarian suffering consequentialist argument), but for a variety of reasons it is probably worthwhile to minimize the amount you pay in taxes, i.e. don't give up money to a government that you could have otherwise spent in a way better aligned with your interests.

There is also some impact here based on who you pay taxes to. A citizen of the USA, like me, does more to fund war than a citizen of Switzerland, and thus if I were to pay less tax to the USA than a Swiss citizen were to pay to Switzerland, I would be doing more to reduce war spending than the Swiss citizen, who would likely be doing more to reduce funding of other public goods they would endorse being supported.

On the whole I don't think we can conclude anything especially strong, but it does at least seem like an interesting case to think about to sharpen our skills!

Comment by gworley3 on Why do you downvote EA Forum posts & comments? · 2019-05-30T19:29:05.700Z · score: 11 (3 votes) · EA · GW

For what it's worth, the reason I dislike yay/boo voting is that it incentivizes people towards posting/commenting in ways that maximize applause lights at the expense of saying things that are more useful to other purposes, like becoming less confused and doing more good. I worry that the current voting system is too heavily suffering from Goodhart effects and as a result shaping people's motivation in posting and commenting in ways that work against what most people would prefer we do on this and its sister forums (though of course maybe many people genuinely want applause lights, though the comments on this post seem to suggest otherwise).

Comment by gworley3 on Drowning children are rare · 2019-05-30T19:14:16.294Z · score: 18 (8 votes) · EA · GW

What do you mean by "better" here? That there is a discrepancy suggests to me that people are voting for different reasons between the two places, not that the voting is better in some universal way (compare the way "better" in economics could mean redistribution to things you like or more efficiency so everyone gets more of what they want).

Also, just further noting voting patterns, no disrespect intended to you kbog, but your comment contains little content (in a very straightforward sense: it is short) and is purely a statement of opinion with no justification provided (though some is implied), yet at time of writing has 6 votes for 14 karma, which relative to what I see on average comments on EAF, where more thorough comments receive less karma and less attention, suggests to me you hit an applause light and people are upvoting it for that reason rather than anything else.

None of this is to say people can't vote the way they like or that you don't deserve the karma. I merely seek to highlight how people seem to use voting today. The way people use voting is not aligned with how I would like voting to be used, hence why I mention these things and am interested in them, but it is also not up to me to shape this particular mechanism.

Comment by gworley3 on Drowning children are rare · 2019-05-30T19:03:22.981Z · score: 9 (7 votes) · EA · GW

I think we lack clear evidence to conclude that, though. I can just as easily believe the story, given what we've seen, that EAF users are more likely to downvote anything criticizing EA (just as LW users are more likely to downvote anything that goes against the standard interpretation of LW rationality). I'd be very interested to know if there are posts that both criticize something EA in a cogent way as this post does and don't receive large numbers of downvotes.

Also, don't forget many posts that have pro-EA results are about equally well reasoned as what we see here, but receive overwhelmingly positive votes, even if they receive criticism in the comments. So the question remains, why downvote this post when we respond to it and not downvote other posts when we criticize them?

Comment by gworley3 on Why do you downvote EA Forum posts & comments? · 2019-05-29T23:35:24.031Z · score: 9 (5 votes) · EA · GW

My general algorithm for voting is to upvote that which I would have liked to have had recommended to me to read, and downvote that which I would be disappointed to have recommended to me, where the criterion for wanting something recommended is whether it thoughtfully engages with a topic in a way that advances my understanding (and in the case that my understanding already includes what is presented, I try to imagine the case that I didn't know what I know and vote from that place of counterfactual ignorance). I don't vote on things that either fail to pique my interest or that I feel indifferent on having recommended to me.

Strong votes (up and down) go to things that I would, respectively, be visibly happy or sad if someone recommended it to me, i.e. someone sent me an email about it and I light up and smile or frown and droop when I read the content.

Comment by gworley3 on [Question] 20,000/40,000 Hours- MidCareer Options · 2019-05-29T18:19:39.352Z · score: 10 (6 votes) · EA · GW

Since I am both mid-career and EA, maybe I can say a little about this even if I can't give a full answer.

I was concerned about existential risk due to AI prior to the start of my career (heck, prior to going to college, and this was in 2000), but for a variety of reasons I failed to do much directly about this. I got distracted by life, had to get a job to deal with more pressing needs, and spent several years just trying to get along without putting much effort into AI safety.

Then a couple of years ago my life got better, I had more slack, and I used that slack to start working on AI safety as a "hobby". So far this has proven pretty successful: I've published some things, had many interesting conversations with people who are also doing direct work on AI safety (part or full time), and helped influence research directions and progress.

I don't know what this will turn into, but the hobby model is worth considering as a way to transition mid-career: get interested in and start working on something you care about, and eventually maybe transition to doing that work full time. Plus you'll be somewhat unique in that you'll be carrying forward all your existing career capital that others in your chosen space likely won't have.

The downside of this approach is that it requires you have enough time and energy to do it. To make progress here it may be necessary to take a less demanding job to create that time and energy, or to give up other commitments.

Definitely interested to see what others suggest or have tried.

Comment by gworley3 on My state allows for a 1 member nonprofit board and I like that idea in order to keep my vision. However I want to have a "board of directors", but have them as a body to give me advice, as opposed to the traditional governing board? How can I actually apply this and what non misleading title can I give to the "board of directors"? · 2019-05-29T18:07:15.130Z · score: 9 (5 votes) · EA · GW

Small formatting tip: it would be nice if you put a very short title to your question in the title and asked the full question in the body of the question. I found it a bit hard to read the question when the whole thing is in title styling.

Comment by gworley3 on Drowning children are rare · 2019-05-29T17:56:13.578Z · score: 11 (8 votes) · EA · GW

Also also, just want to register the observation that this post seems further evidence of my continuing claim that votes on LW/EAF/AF are boos/yays: at time of this writing here the score is 0 with 17 votes and on LW it's 36 with 24 votes. I don't want to detract from the direct discussion of the topic, but I find that discrepancy very interesting and clearer evidence than we've seen in the past of how voting patterns are a poor signal of post quality.

Comment by gworley3 on Framing Effective Altruism as Overcoming Indifference · 2019-05-28T19:45:33.402Z · score: 1 (1 votes) · EA · GW

Instead, I use an "unawareness" framework. Rather than "most people are indifferent to these problems", I say something like "most people aren't fully aware of the extent of the problems, or do know about the problems but aren't sure how to address them; instead, they stick to working on things they feel they understand better".

I would guess that similarly this is why "woke" has caught on as a popular way of talking about those who "wake up" to the problems around them that they were previously ignorant of and "asleep to": it's a framing that lets you feel good about becoming aware of and doing more about various issues in the world without having to feel too bad about having not done things about them in the past, so you aren't as much on the defensive when someone tries to "shake you awake" to those problems.

Comment by gworley3 on Please use art to convey EA! · 2019-05-28T19:40:55.537Z · score: 5 (2 votes) · EA · GW

I like this idea a lot. I've been playing with the idea of writing a bildungsroman around some of my insights into personal development, which of course touches on topics related to EA and rationality, so I'm quite fond of seeing others do this as well.

What's worth noting is that I haven't done it because I'm constantly pulled by other things that seem higher priority. This is maybe the big challenge for making more EA art: its comparative benefit. I'm tempted to say "maybe there will be more time for EA art when EA is bigger", but if that's the case it's a chicken-and-egg problem because EA art seems to be a great way to grow the movement.

So on the whole my guess is we can't directly go for EA art beyond making sure folks in the community are more aware that it's a thing they could maybe do so that on the margin we might get more EA art replacing EA-relevant art that would have otherwise been produced.

Comment by gworley3 on Jade Leung: Why Companies Should be Leading on AI Governance · 2019-05-16T17:31:32.078Z · score: 5 (3 votes) · EA · GW

For a related perspective, I've written (here for a general audience, here for an academic one) about using self-regulatory organizations, which I think could be a natural extension of this position depending on implementation.

Comment by gworley3 on How does one live/do community as an Effective Altruist? · 2019-05-16T17:28:43.782Z · score: 16 (7 votes) · EA · GW

There's been a good deal of recent, related discussion over on LW with a different framing which is likely relevant to this.

Comment by gworley3 on Non-Profit Insurance Agency · 2019-05-14T01:54:51.237Z · score: 1 (1 votes) · EA · GW

I don't know the answer to these specific questions, as I've not done it. A 501(c)(3) organization is tax advantaged on its "profits", but only in certain ways and not others, and in my engagement in helping run such orgs it's never come up (or if it has someone else handled it before I learned about it). It's probably best to recruit the advice of a CPA or other expert in this area. My main goal was just to warn you that operating as anything other than an LLC (whether passthrough or not) is more complicated, so it's seriously worth evaluating the options and seeing if you can't get most of what you want by operating your LLC for public benefit so long as all the partners (so probably just you!) are on board with it.

Comment by gworley3 on Structure EA organizations as WSDNs? · 2019-05-13T18:00:42.747Z · score: 3 (2 votes) · EA · GW

My experience with organizational design is that the formal structure tends to follow, not lead, the informal structures that arise among the people in the organization. Yes, over time organizations become "ossified" such that the formal structure also creates the informal structure, but this is not much the case in early and small orgs, although there are usually some exceptions as certain formal relationships develop early, such as the founder(s) or some other persons having authority via legal and financial control that backs their ability to influence others and hence seeds the creation of the org structure.

Overall this is to say my guess is these sorts of structures either already naturally arise or, where they don't, it's because there are other incentives that push those organizations in other directions.


That's one way to explain my thinking. Another is this:

I read your post as suggesting something like "hey, what if we tried this different org structure; I think it might be better", but to actually try a different org structure you have to have people who want to relate to each other in a different way. It's typically only at large orgs with ossified structures where people are not relating to each other in the way they would like and where suggesting a change of org structure might manage to shift an equilibrium by getting everyone to re-coordinate towards something they prefer.

In a small org you probably can't make the structure much other than what it is unless you first change the people who are creating the structure to be the kind of people who would create the desired structure. That's because I expect the existing structure to already be a natural equilibrium that is roughly correlated with the kind of structure desired proportional to the amount of (official) control each person in the org has. Thus unlike in a large org there is not a hope that you can hit reset and get a different outcome by breaking the existing inadequate equilibrium.

Comment by gworley3 on Non-Profit Insurance Agency · 2019-05-13T17:41:11.404Z · score: 5 (4 votes) · EA · GW

When you say "non-profit" what comes to my mind is operating as a legally and financially advantaged organization with special non-profit status. But a non-profit (especially if you are interested in 501(c)(3) tax status) is more complex than an LLC, with more strenuous reporting requirements, so, guessing that you're operating as an LLC, I'd seriously consider whether there's any actual benefit from operating as a non-profit. Presumably you wouldn't be taking donations, so you wouldn't need special tax status to allow your donors to deduct their donations; unless there is also a reduction in taxes on profits that goes along with whatever status you obtain that could not be gained already from donating the profits of an LLC, then it's probably not worthwhile. If you're a C-corp then go ahead; it's probably similarly complex, if different.

I bring all this up because it's possible and easy to operate an LLC for public benefit, and you can take whatever measures you like to demonstrate that you are doing this to interested folks, so you should probably consider that the default course and only do something different if you reckon there are clear benefits from operating another way.

Comment by gworley3 on Why we should be less productive. · 2019-05-10T17:45:00.467Z · score: 18 (7 votes) · EA · GW

Having spent significant time around both the EA and the LW community and having written several controversial posts and then subsequently talked with folks who downvoted those posts, I now have strong reason to believe that most downvotes are in fact "boos" rather than anything more substantive. When people have substantive disagreements with posts they more often post comments indicating that and just don't vote on a post either way.

I'm sure this is not universally true but it's been my experience, so when I see downvotes on a post that isn't obviously spam, trolling, or otherwise clearly low-quality (rather than in this case just not containing much content, a kind of post that is clearly not universally downvoted because many low content posts get either neutral or positive responses, which I must assume given their lack of content is a function of agreement with the idea presented), I find it reasonable to ask "why 'boo' at this?". Hence my comment as a possible explanation for more "boos" than "yays".

I agree it would be preferable if people didn't use votes as "boos" and "yays", and I think we could fix this—maybe by only allowing people who comment on a post to vote on it, although I think that risks creating lots of meaningless comments because people just want to vote, so there is probably some other solution that would work better—but unfortunately my experience suggests that's exactly how most people vote on posts and comments.

Comment by gworley3 on Why we should be less productive. · 2019-05-09T19:07:47.205Z · score: 6 (6 votes) · EA · GW

Honestly, I think even if you only value getting "productive" things done and don't much value "unproductive" things, there's a lot of evidence that you can be more productive by being less productive. The mechanism of action is something like this: by consistently pushing yourself beyond what you can comfortably do, you "burn out" your capacity to do more work and then find yourself unmotivated to do anything while you recover. A person can be sustainably more productive by giving themselves unproductive time to recover.

Meta note: that you got downvotes (I can surmise this from the number of votes and the total score) seems to suggest this is advice people don't want to hear, but maybe they need.

Comment by gworley3 on Meditation and Effective Altruism · 2019-04-23T19:34:42.586Z · score: 15 (7 votes) · EA · GW

I'd say a teacher is even more important than that.

Meditation is a powerful class of techniques for examining the mind, and sometimes people struggle to deal with what they discover doing it. Meditation is not all upside, as this post suggests; plenty of people have negative experiences as part of meditation practice, although they usually, with some guidance from a teacher, see their way through them and find themselves in a better place at the end of the experience. In fact, meditation can be especially rough if you have a lot of psychological "shadow", i.e. "stuff" or "baggage" you would normally think of working through in therapy, since meditation won't on its own help with that stuff and can make the experience of it worse as you see it more clearly. A teacher can help you deal with these sorts of issues, offering advice, practices, and the compassion of another human as you deal with the negatives that can come up.

This isn't to put anyone off meditation, just to give appropriate warning that it's a very intimate and powerful practice that can bring up positive as well as negative experiences, and navigating that on your own can work out for some people but doesn't for everyone.

Comment by gworley3 on Most important unfulfilled role in the EA ecosystem? · 2019-04-05T18:23:58.845Z · score: 7 (4 votes) · EA · GW

This is a great answer. I would have said something like "leadership" in that EA has leaders but few of them are people you would march into battle and die for. I feel like there's almost no one in EA proper and only a couple people on the edges (mostly because their cause area was taken up by EA, and they didn't come from within EA) who has demonstrated something like the 10x skill of leadership and motivation.

Put more colloquially, EA needs a Steve Jobs, an FDR, a Winston Churchill, an Oda Nobunaga.

Comment by gworley3 on Should EA Groups Run Organ Donor Registration Drives? · 2019-03-28T19:21:07.340Z · score: 2 (2 votes) · EA · GW

I'm always of mixed opinion about organ donation. Yes, it seems straightforwardly beneficial, but it's also at odds with surprising things. For example, I'm signed up for cryonics, and this means it's very important I not be an organ donor, both because my organs would be unusable after perfusion and because, even if I were an organ donor willing to accept a lower quality preservation by possibly not having my regular circulatory system in place to help with cooling, it would still be a bad deal: doctors would hold on to my body for an unspecified amount of time, in not necessarily ideal preservation conditions for my brain, before maybe releasing me to the cryonics team hours or days later.

This would effectively mean pitting organ donation and life extension, at least in part, against each other within EA. Not necessarily a blocker if people think more organ donation among people who don't sign up for cryonics is worth it in expectation over, say, getting more people signed up for cryonics, but it's worth factoring into the calculation.

Comment by gworley3 on Effective Altruism and Meaning in Life · 2019-03-18T18:44:29.901Z · score: 3 (3 votes) · EA · GW

I really like this. We can be effective, but we can't do that if we're all sad and depressed because we tie our sense of self worth to something unattainable. I also enjoyed the fun stylistic choices!

Comment by gworley3 on [Link] A Modest Proposal: Eliminate Email · 2019-03-18T15:14:59.514Z · score: 5 (4 votes) · EA · GW

I always find these sentiments strange, because what I love about email and dislike about other forms of online communication is that email is strongly asynchronous and puts me in control of how I choose to interact with it. Slack, IRC, and other more synchronous forms of communication (even the supposedly asynchronous ones are often designed and used with synchronous use in mind) are much harder for me to control, because there are stronger cues to use them in interrupt-driven ways. Email can, of course, degenerate in this way, and it seems that's what happens in some cultures (offices, etc.), but then the problem is the culture, not the tool.

If you dislike a particular email (or Slack or in-person) culture, change the culture, not the tools. If you don't, you'll just end up unhappy on a different tool.

Comment by gworley3 on The career coordination problem · 2019-03-17T15:31:13.220Z · score: 4 (4 votes) · EA · GW

I think specialization is hard enough to achieve that you are better off ignoring coordination concerns here in favor of choosing based on personal inclination. It's hard to put in all the time it takes to become an expert in something, it's even harder when you don't love that something for its own sake, and my own suspicion is that without that love you will never reach the highest level of expertise. So it's better to look for the confluence of what you most love and what is most useful than to worry about coordinating over usefulness. You and everyone else are not sufficiently interchangeable when it comes to developing the specialization needed to be helpful to EA causes.

Comment by gworley3 on Identifying Talent without Credentialing In EA · 2019-03-12T00:02:01.861Z · score: 5 (4 votes) · EA · GW

I very much like this approach to dealing with credentialism. However, I'm unsure how much of an impact credentials are having on current EA hiring. My impression is that current EA orgs hire based more on work experience than on credentials, and in fact EA orgs are unusually willing to consider candidates without traditional credentials (EA orgs within universities being an exception, since their hiring processes are tied to those of the host institution). This suggests your premise may not apply (EA orgs declining to hire folks because they lack credentials), but I think your solutions apply anyway, because they also address the case where candidates lack experience rather than credentials.

Comment by gworley3 on Making discussions in EA groups inclusive · 2019-03-05T01:48:30.134Z · score: 2 (2 votes) · EA · GW

This seems to miss the point of my question, because the people who could do something already don't much engage in these discussions. Rather, it's primarily the folks who cause the feelings of alienation, and who do not themselves feel alienated, who start and engage in the discussions that alienate others. Presuming they do so either because they don't consider their actions contrary to the purpose of inclusiveness or because they don't value inclusiveness, what actions can those who are alienated, or who value inclusiveness, take to address this issue? That is, if you feel there are things being done and said that cause alienation, how do you get that to stop, other than just hoping that other people decide on their own not to do it anymore?

Comment by gworley3 on Making discussions in EA groups inclusive · 2019-03-04T20:56:59.365Z · score: 1 (1 votes) · EA · GW

Identifying this is a start, but it remains unclear to me that this post is likely to result in any action that will change anything (realizing some people may disagree that this is the experience of some people in the community or that their experience of alienation matters). But supposing you agree that this post describes a real problem and the problem deserves solving, what are things we might do as a community to be more inclusive?

I'm asking here for specific, actionable ideas, not just generic stuff like "spread awareness". Additionally, these need to be actions that will be carried out by the people who care about this issue, not demands that people in the alienator group change, because that's also unlikely to be an effective strategy. I imagine most actions that would work well would be of the form "I want EA to be more inclusive, and to make it that way I'm going to do X". What is X?

Comment by gworley3 on Profiting-to-Give: harnessing EA talent with a new funding model · 2019-03-04T18:41:30.763Z · score: 3 (3 votes) · EA · GW

Hmm, I wonder why there were some downvotes. This seems like a rather creative way to create for-profit endeavors that could soak up excess talent and generate additional revenue for EA projects (not to mention that some of these EA-corps might do work that directly benefits people; Wave comes to mind as a possible example of such an existing organization).

Comment by gworley3 on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T03:34:21.081Z · score: 12 (7 votes) · EA · GW

Having some experience with hiring, it might be some consolation that you did actually provide value to the EA orgs you applied to by giving them:

  • more practice hiring
  • more exposure to candidates to figure out who they want
  • a better intuitive grasp of the talent landscape

It's unfortunate that this has such a large opportunity cost and that you bore so much of it, but the reality on the hiring side for any org is that we often need to interview candidates we won't hire; if we don't, we won't know enough to trust that the people we do hire are the right ones. Of course we don't know in advance which candidates we will and won't hire (if we already knew, running the interviews would be extremely unfair to everyone), but at least in this case your interview time helped EA-focused organizations gain that knowledge, rather than other orgs whose values you may be less aligned with, and whose institutional learning from interviews that don't result in hires you would therefore value less.

Comment by gworley3 on Pre-announcement and call for feedback: Operations Camp 2019 · 2019-02-19T20:04:51.587Z · score: 3 (2 votes) · EA · GW

This project sounds pretty exciting! My time is occupied by a lot of other things right now, but if you would like I'd be happy to talk about operations things (especially as they relate to organizational development and culture) at the camp. Depending on timing and cost I might not be able to show up in person, but happy to do something over Skype. This seems like a great opportunity to share what I've learned about this stuff so that it can help others as they contribute to effective causes.

Comment by gworley3 on Tech volunteering: market failure? · 2019-02-19T04:47:07.573Z · score: 6 (5 votes) · EA · GW

I think much of the difficulty is that tech work usually can't be done with minimal context the way, say, volunteer work where your main qualification is being a human rather than a professional can be. For example, it's pretty easy to piece together volunteers to do things like fill a receptionist role, assist with construction, or perform some other labor that requires minimal training. There's not much easily identifiable tech work that could be knocked out in an hour or two such that the person assisting can then just forget about it and the org needing it can easily take advantage of the work done. This means tech volunteering is going to require a sustained commitment from someone, and that's much harder to arrange.

Comment by gworley3 on What are the most ideal locations (less bureaucratic roadblocks for setting up, ease of functioning etc.) for setting up a charitable foundation, from across the world? · 2019-01-21T20:11:15.692Z · score: 2 (2 votes) · EA · GW

An important question is going to be what features you want of your organization. For example, do you want it to make possible tax-exempt giving (many countries let you deduct giving to recognized charitable organizations against your income tax burden up to some limit)? Do you want to just avoid lots of bureaucracy (generally not compatible with being a charitable organization)? Do you want low or no tax burden? And, of course, as you note, can you create the organization on your own or do you need a local sponsor? I think trying to answer those will help you explore the space and narrow down the options.

Comment by gworley3 on [Link] The option value of civilization · 2019-01-07T18:45:06.119Z · score: 1 (1 votes) · EA · GW

I don't know enough about options for this to give me a useful additional way to think about the moral weight of future patients relative to present ones, but I expect that if I did, this would feel like a useful model for applying intuitions about options to an issue in ethics, similar to the way preference theory is often useful for making sense of some questions related to values.

Comment by gworley3 on are values a potential obstacle in scaling the EA movement? · 2019-01-03T20:54:51.994Z · score: 1 (1 votes) · EA · GW

This seems connected to a perennial question in EA: should organizations be means-focused or ends-focused? By that I mean, should an EA-aligned org focus primarily on its methods or primarily on its outcomes? For example, when it comes to community building, an ends-focused approach would suggest we should grow as large as possible and get as many people as possible to give effectively, even if we have to lie to them to do it. A means-focused approach to community building looks more like what we have now, with a heavy focus on keeping EA true to its values, even at the cost of forgoing some people who could be convinced to give effectively only by methods that go against EA values like careful epistemics.

So far it seems EA orgs have decided to be primarily means-focused, accepting the loss of some gains possible via an ends-focus because it would risk diluting EA values and missions, and folks in the community have been pretty vocal when they feel orgs list too far toward ends-focus by compromising too much on EA values. I don't know if that will continue in the future, or if everyone in EA is on board with such a choice, but it's at least what I've observed happening. Given that many EAs are consequentialists, I expect we'll see some version of this conversation happening for as long as EA exists.

Comment by gworley3 on quant model for ai safety donations? · 2019-01-03T01:37:02.366Z · score: 3 (3 votes) · EA · GW

Right. For comparison, software engineers (of all kinds, including ML engineers) at early-stage startups generally add between $500k and $1mm to the company's valuation, i.e. investors believe these employees make the company worth buying/selling for that much additional money. A lot goes into where that number comes from, but it does at least suggest that O($1mm) is reasonable.

Comment by gworley3 on Who sets the read time estimates? · 2018-12-28T18:39:04.782Z · score: 7 (5 votes) · EA · GW

I believe they are set at 300 words per minute, so a 1000-word post would show as either a 3 or a 4 minute read (depending on how it rounds). There was some discussion of this feature on LW recently, if you want to join the conversation about it there, since it's implemented upstream.
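The calculation described above can be sketched in a few lines. This is a hypothetical illustration, not the forum's actual implementation; the 300 words-per-minute constant comes from the comment, and the choice between rounding up and rounding to nearest is exactly the ambiguity noted there:

```python
import math

WORDS_PER_MINUTE = 300  # reading speed assumed in the comment above

def read_time_minutes(word_count: int, round_up: bool = True) -> int:
    """Estimate read time in minutes for a post of `word_count` words."""
    minutes = word_count / WORDS_PER_MINUTE
    # Rounding up shows 4 for a 1000-word post (1000/300 ≈ 3.33);
    # rounding to nearest shows 3.
    return math.ceil(minutes) if round_up else round(minutes)
```

Whichever rounding rule the implementation picks accounts for the 3-vs-4 ambiguity mentioned above.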

Comment by gworley3 on What movements does EA have the strongest synergies with? · 2018-12-20T23:38:56.494Z · score: 4 (4 votes) · EA · GW

Context: I asked a more narrow version of this question on LW about the connection between EA and the rationality movement.

Comment by gworley3 on Non-Consequentialist Considerations For Cause-Prioritzation Part 1 · 2018-11-29T03:00:52.240Z · score: 6 (6 votes) · EA · GW

So I've not written extensively on this (and in some cases not at all), but I'm a virtue ethicist and I care about animal welfare and x-risks (or just future-folks more generally) as an expression of compassion. That is, satisfying the virtue of compassion that I aspire to (now drawn from Buddhist notions of compassion, but originally from a more folky notion of it that I learned to adopt as a virtue from my upbringing in secular Protestant America) encourages me to give consideration to the welfare of animals and future folks, and this results in my choice to eat almost-only plants and to work on addressing AI-related x-risks.

This is not exactly something you can cite, nor a full-fledged argument, but I figured you might find it worthwhile to hear from someone in EA who holds one of these non-consequentialist moral views. I expect there are a number of crypto-Kantians around ("crypto" only in the sense that they don't bring it up much, since reasoning via deontology isn't part of normal EA conversation), and a decent number of contractualists, given that position's affinity with libertarian ethics and the number of libertarian EAs drawn from the rationalist community.

Comment by gworley3 on So you want to do operations [Part one] - which skills do you need? · 2018-11-29T02:37:51.611Z · score: 1 (1 votes) · EA · GW

I would just ask them questions, although to be transparent, I care only a little about their answers and a lot about how they answer, since I believe that's where most of the information I use to make the assessment comes from. I say this because I want to be clear that I don't know how to assess this in a scalable, repeatable way I can teach to others; that might be possible, but I suspect it isn't, short of teaching you to be substantially more like me along several dimensions.

Comment by gworley3 on So you want to do operations [Part one] - which skills do you need? · 2018-11-28T19:55:48.004Z · score: 1 (1 votes) · EA · GW

I expect the disposition to take responsibility can be developed, since I didn't always have it myself and now I do, but I only learned it after some significant psychological development (what I would call making the 3-4 transition in Kegan/CDT terminology), although I'm not sure how tied it is to that (I haven't spent much time thinking about what enables the disposition to total responsibility). I'm not sure how to test for it, but I'm fairly confident I could suss out whether someone has the disposition in an interview, though I'm unsure with what level of precision, especially how many false negatives I would generate in my assessment.

Comment by gworley3 on So you want to do operations [Part one] - which skills do you need? · 2018-11-28T18:32:29.595Z · score: 10 (10 votes) · EA · GW

context: I work in an operational capacity for a startup and have for several years

To me this misses what I consider the most important thing for success in operational roles: total responsibility. Just about anyone can learn to do stuff, and operations is often treated as the function of the organization that does the stuff no one else wants to do, but to me this isn't exactly right. It's more about being responsible, perhaps heroically so, and being willing to do whatever you have to do to take care of the things you care about. Another person I know describes it as "holding parental mind" for something, which I think helps point at the breadth and depth of what operations is really all about.

This is not to say the other skills are unimportant: you can probably get along okay with someone who can just do stuff, and no amount of responsibility can overcome every other skill deficiency. But to my mind, great operational competence only arises when a person takes radical, total responsibility for the thing they are charged to protect.

Comment by gworley3 on Narrative but Not Philosophical Argument Motivates Giving to Charity · 2018-11-27T20:55:34.172Z · score: 3 (3 votes) · EA · GW

Nice, this matches my intuition that most people will give more if you make the reasons to give at the near construal level rather than far. I do wonder how much this generalizes, though: I would expect the effect to be much smaller if, say, you exposed people to the two stories and then one week later asked them to make the giving decision.

My guess is that philosophy is good for convincing the sort of people who will be convinced of a point in general, and narratives are great for use during specific asks, like during a fundraising event. But I'm pretty sure most people who do fundraising for non-profits already know this even if they didn't have proof; now they have a little more.

Comment by gworley3 on Earning to Save (Give 1%, Save 10%) · 2018-11-27T20:46:26.739Z · score: 2 (2 votes) · EA · GW

Since I'm doing direct work part-time, I view saving as a kind of donating, since saving directly translates into increased flexibility to devote more time to direct work, and specifically to the work I think has the highest differential impact based on my own assessment of the information (to abuse terms, we might call this my "alpha"). For example, with enough savings I could stop working a full-time job while I look for funding, and in the meantime I can spend more effort on AI risk work without worrying too much that the impact on my day job will result in something I can't weather. I'm not sure I would make the same assessment if I weren't doing direct work, though, so I haven't thought as much about advising saving as a general strategy, although I generally prefer having more runway myself, so it seems reasonable to suggest others might also like having it.