What is the impact of the Nuclear Ban Treaty? 2020-11-29T00:26:31.318Z
Which is better for animal welfare, terraforming planets or space habitats? And by how much? 2020-10-17T21:49:51.311Z
DonyChristie's Shortform 2020-08-22T17:49:36.928Z
What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? 2020-08-07T01:50:34.172Z
Will Three Gorges Dam Collapse And Kill Millions? 2020-07-26T02:43:40.087Z
How would you advise someone decide between different biosecurity interventions? 2020-03-30T17:05:26.161Z
What are people's objections to earning-to-give? 2019-04-13T20:16:43.283Z
What are ways to get more biologists into EA? 2019-01-11T21:06:01.945Z
We Could Move $80 Million to Effective Charities, Pineapples Included 2017-12-14T04:40:26.648Z


Comment by DonyChristie on Why did EA organizations fail at fighting to prevent the COVID-19 pandemic? · 2021-06-19T18:14:14.448Z · EA · GW

When the US moratorium on gain-of-function research was lifted in 2017, the big EA organizations that promise their donors action against Global Catastrophic Risks were asleep and didn't react. That's despite those organizations counting pandemics as a possible Global Catastrophic Risk.

Looking back it seems like this was easy mode, given that a person in the EA community had done the math. Why didn't the big EA organizations listen more?

Can you describe in greater detail what the world looks like in which big EA organizations did do the thing you wish they had done? And what features our current world diverges from this one on, specifically? What is your anticipation of experience?

Comment by DonyChristie on How can I best use product management skills for digital services for good? · 2021-06-01T19:40:13.145Z · EA · GW

I’m also very open to offer support and help (for free of course) related to product management and product development to non-profit organisations and startups connected to EA, but I’m uncertain of where my skills would be needed.

Thanks for the offer. I sent you a message!

Comment by DonyChristie on New Top EA Causes for 2021? · 2021-04-02T03:43:09.196Z · EA · GW


I think the axis of Imaginary Time has been entirely neglected. It is time chauvinism to prefer one dimension of time over any other. 

Comment by DonyChristie on DonyChristie's Shortform · 2021-04-01T00:46:23.224Z · EA · GW

This post claims the financial system could collapse due to Reasons. I am pretty skeptical but haven't looked at it closely. Signal-boosting due to the low chance it's right. Can someone else who knows more finance analyze its claims?

Comment by DonyChristie on Apply to the Stanford Existential Risks Conference! (April 17-18) · 2021-03-27T05:29:42.656Z · EA · GW

Is this in-person or virtual? (I haven't clicked on the link yet.)

Edit: I found my answer, but leaving this for others' benefit.

Comment by DonyChristie on DonyChristie's Shortform · 2021-02-21T03:16:35.120Z · EA · GW

I am seeking funding so I can work on my collective action project over the next year without worrying about money so much. If this interests you, you can book a call with me here. If you know nothing about me, one legible accomplishment of mine is creating the EA Focusmate group, which has 395 members as of writing.

Comment by DonyChristie on Creating A Kickstarter for Coordinated Action · 2021-02-04T00:25:31.800Z · EA · GW

The URL has a period at the end that needs removing. :)

Comment by DonyChristie on Creating A Kickstarter for Coordinated Action · 2021-02-04T00:24:28.309Z · EA · GW

DonyChristie (a programmer who already built a prototype website)

I don't know if I'm enough of a programmer yet to be called one. That Google Site was just a quick attempt at a proof-of-concept and an explanation of my thoughts back in June.

Comment by DonyChristie on Religious Texts and EA: What Can We Learn and What Can We Inform? · 2021-02-01T00:52:49.438Z · EA · GW

The trappings of organized religion are a hollow shell of the mystic states at their core. Make sure the texts you focus on constitute the heart of that system's spiritual practice.

Comment by DonyChristie on (Autistic) visionaries are not natural-born leaders · 2021-01-26T02:36:14.967Z · EA · GW

This was something I needed, thank you!

Comment by DonyChristie on Progress Open Thread: January 2021 · 2021-01-24T02:51:37.569Z · EA · GW

"I have no idea what the hell I'm doing."

Do you ever feel this?

It's terrifying to really begin building an organization, especially one with as grand an ambition as saving the world, with a good chance of failure from any number of directions.

And to wonder... Am I taking the right action with this choice? Is this even the right choice to be focusing on? 

Past me precommitted to work on this for a year for a reason. He knew I would face self-doubt.

Knowing that of all the sources of failure, the biggest ones are endogenous.

And it sucks, going through a metaruminatory loop, knowing that I can't just fix my errors. That my own awareness of my bugs is itself an impediment.

To be uncertain whether the uncertainty is the kind to accept, or the kind to change.

To be frozen in fear, imagining others observing my frozenness and feedbacking to me that this is unacceptable if I want to strive to perform at the tempo that they feel confident is symbolic and symbiotic of progress.

I am a longtermist. Moreover I am a ponderer, a dilettante, an explorer. Yet I am also supposed to "move fast and break things". I need to be hypercompetent in 47 different ways, yet I need to expose my incompetency to learn how to be competent.

And the Pointed Questions from Projections Of Mine. 

"What's your plan?"

Well, uh, like, it's a fractal ball of wibbly wobbly stuff. Very ambitious endgoal as compass.  I have a very clear plan but it's not descriptively legible to you and it very quickly decomposes into a bunch of question marks.

"How is your thing different from X?"

It's, uh, it's not all about X. That's just a necessary utility to start with that I want to play with partly because it's aesthetically interesting--

"Things aren't working at this pace, you should take on this collaborator for increased motivation and success."

Well, how will that play out over 20 years? Are all of their incentives aligned? Cofoundership is like a marriage.


"Given you're sharing all of this lack of confidence, maybe it's a sign this isn't the thing to work on?"

That one's just in my head, I think (unless there's an illegible memory behind it). But I'll respond: the correct answer is probably that I would feel the same regardless of what I was working on, once I took the thing seriously enough to start feeling these feelings when hitting roadblocks. The hypothetical asker is probably typical-minding from their own differing psychology.

It's also the highest-impact thing to work on uncertain, counterfactually neglected projects! Probably. In my worldview, at least. 

"Is it really neglected though? What if there are competitors better than us? There are more competent organizations already out there, aren't there? Shouldn't we just go work for them?"

Uh. Well. I mean. I don't know. Can I work for another human being? Typically not? Most people can't take most jobs though, right? Neither of us knows how to fly a jet plane.

"That's a problem then. You should fix what's keeping you from getting a Real Job."

Oh...kay? Like I haven't debugged bits and pieces of that? And why would it matter? The world is burning and if you want to stop that you have to git gud at things that are related to putting the fire out.

"You should just go to college."

And put off working on important things for years? How am I going to learn more than through a startup? The option palette that comes to mind for you is conveniently shaped from a high-level ghost perspective that doesn't take into account that I am in the territory, not the map, and am navigating trailhead by trailhead. Your statement has no skin in the game. It sounds like you're saying you're not confident in my ability to bite and chew off high-variance objectives. (Maybe everyone in the process of constructing success gets shit-tested by people with bad advice.)

"My point is to do a scoped-down version of the thing you want in a training environment with plenty of slack."

Which is... sort of what I'm doing, with my R&D and slow MVP-building?

"But you should speed up and go faster."

Wha--? But you just said--

"Work on a different side-project that will make more money faster."

That's Goodharting! That's Mara and/or Moloch! Why the hell would you think that's more impactful than directly working on a thing of actual value rather than perceived value?

"Well you need money to comfortably work on an altruistic project."

So then shouldn't I... ask for fundraising?

"Justify why you think you deserve fundraising over ALL the other effective altruists asking for it, who are clearly more competent than you and have way more stuff on their resume."

I... okay. I'll eke out as small an income as I can to survive.

"You should get funding if this is a serious project. Also where is your website, why aren't you writing a nicely-worded whitepaper, where is your Github repo with code you've written, where is your stamp of approval from Prestigious Institution? Where the hell is your funding, how can I take this seriously if no one else has confidence in you enough to fund it?"

Well I have my Patreon...

"Are you really providing enough benefits to the world to justify that? Haven't you bought too many Taco Bell burritos, with meat in them to boot? Is it okay that your Patreon has grown statistically bigger than most people's yet you don't slave to provide artistic compensations like they do?"

Well I was just applying for foodstamps during the previous Focusmate session earlier...

"Do we deserve foodstamps!? We are taking from the collective coffer. We are privileged and if we're taking benefits then we're lazy, good-for-nothing--"


"Isn't that pretty entitled of you to assume that your project is 99.99% more valuable than anything else? Your ego seems pretty involved in this."

Yes. I am pretty emotionally invested with the mission. Critiques of it are bucket-errored with attacks on my survival. Perhaps this is a crux of my insecurity. I feel I have to succeed at saving the world and therefore this project. I would like to be more dispassionately objective about the situation.

"Why not just maintain a portfolio of projects?"

Which ones, of the 20-50 ideas I have? Where does the buck stop in one's decision to invest various amounts of resources in different things? Aren't we supposed to focus on high-potential things and Kelly bet with our resources? 

(The solution actually I think is that you can multitask if it's constructing a vertical of synergistic components.)

Why is failure so bad anyway? We just keep on trying until we hit a homerun; the costs of swinging are actually basically nil, despite all perceptions otherwise.


Few want to hear vulnerable talk such as this, at least in my broader culture. Evolutionarily, we don't want to have leaders who are losers, or we will, well, lose in the zero-sum games if we are part of their coalition. We want certain answers to important questions. Even if it's a lie, as long as it's confident and leads to strong coordination, and we believe others believe it even if we don't, then we'll accept lies from strong leaders who don't apologize for their bullshit. I find the levels of intentionality fascinating as a lens into understanding much social behavior.

Well, my serotonin isn't high enough, bucko. If you ask me anything skeptical I'll basically assume I'm about to be ostracized from the tribe forever for my flagrant stupidity.

I suppose therefore my only recourse is to register my Forever Incompetency in the face of all possible agents. There is always a human or an AI that is better. There is always a more advantaged comparator. I'm never not going to be this way against the biggest challenge I can take on, so I might as well get used to it and half-ass it with everything I've got.

None of this is to say I don't have plenty of confidence, or long-term grit. I just wanted to get this out there before I was tempted to make progress reports look shinier than they are, constantly adjusting the rough draft to make it look nice and unimpeachable, a possibility which becomes so costly that I end up not writing it.

We aren't in as many zero-sum games as we think. I think there is a fierce, blazing positive-sum game ahead of us, ready to be built. On the Ethereum blockchain, naturally. ;)

I have made quite a bit of progress in the past month, subjectively speaking. Just don't ask me to quickly justify that statement in a few words. :)

And since I notice I didn't Declare it in the previous post, I will note:

I am committing a year of my life to making this work.

If this post resonated with you in some way and you want to talk, you can book a call here.

Comment by DonyChristie on How might better collective decision-making backfire? · 2020-12-13T19:21:34.065Z · EA · GW

Malevolent Empowerment

Better collective decisionmaking could lead a group to cause more harm to the world than good via entering a valley of bad decisionmaking. This presumes that humans tend to have a lot of bad effects on the world and that naively empowering humans can make those effects worse. 

e.g. a group with better epistemics and decisions could decide to take more effective action against a hated outgroup. Or it could lead to better economic & technological growth, leading to more meat eating or more CO2 production.

Humans tend to engage in the worst acts of violence when mobilized as a group that thinks it's doing something good. Therefore, helping a single group improve its epistemics and decisionmaking could make that group commit greater atrocities, or amplify negative unintended side effects.

Comment by DonyChristie on What are some potential coordination failures in our community? · 2020-12-13T03:14:13.549Z · EA · GW

Yeah I tried contacting people on it and it was pretty hard.

Comment by DonyChristie on Progress Open Thread: December 2020 · 2020-12-13T03:00:46.557Z · EA · GW

I'm jumping back into the assurance contract project (see here for previous discussion on a "Kickstarter for Inadequate Equilibria"), though I'll note that at this point I feel the contributors to those threads missed a bunch of relevant detail on this topic. I should do a writeup, though I'm not sure of what yet.

The long-term mission: Supply global public goods via dominant assurance contracts.
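For readers unfamiliar with the mechanism: a dominant assurance contract (Tabarrok's refinement of the assurance contract) adds a refund bonus. If the funding threshold isn't reached, pledgers get their money back plus a bonus paid by the entrepreneur, which makes pledging a dominant strategy for anyone who values the good. A minimal sketch of the settlement logic, with purely illustrative names and numbers (not part of the actual project):

```python
def settle_contract(pledges, threshold, refund_bonus):
    """Settle a dominant assurance contract.

    Returns (funded, payouts). If total pledges reach the threshold,
    the good is funded and each pledge is collected (negative payout).
    Otherwise every pledger is refunded and also receives the
    refund bonus, paid by the entrepreneur.
    """
    total = sum(pledges.values())
    if total >= threshold:
        # Funded: each pledger pays what they pledged.
        return True, {name: -amount for name, amount in pledges.items()}
    # Not funded: pledges are returned, plus the bonus on top.
    return False, {name: refund_bonus for name in pledges}

funded, payouts = settle_contract(
    {"alice": 60, "bob": 30}, threshold=100, refund_bonus=5
)
print(funded, payouts)  # prints: False {'alice': 5, 'bob': 5}
```

The entrepreneur bears the bonus cost only in the failure case, which is what gives contributors an incentive to pledge even when they doubt the threshold will be met.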

I intend to provide updates here on a regular basis, probably monthly, although I can post more frequently if people are interested.

Comment by DonyChristie on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-13T02:40:12.235Z · EA · GW

I'm not as high on the social ladder as you'd think, though some of my perspective is probably colored by class views rubbing off on me from other people around me. I have technically been more or less homeless for much of the past couple of years, and I actually think EAs should live in tent cities/off-grid villages. I've also briefly researched becoming a professional beggar in a really wealthy place such as Switzerland; this shifted into the idea of becoming a street performer, which didn't work out.

My perspective was heavily informed by a couple of experiences with people using body language to get really close up to me and ask for a much larger amount of cash than I would otherwise give to the median person. I also had someone yell at me angrily when I didn't say anything to them when they approached me at night. My experience of the reciprocity trick wasn't with someone homeless, but with people hawking their mixtape and "giving it away", even signing my name on it, only to take it back if I wasn't giving them "a donation". So I'm not lambasting the average beggar; it's just that we need to not let the most dark-triad people shake people's pockets and make it harder for more down-on-their-luck, earnest people to ask for help.

It was raining yesterday and I offered $4 to someone who was huddling in a tunnel, but they didn't take it. I tend to feel spontaneously generous sometimes when the spirit moves me.

Yesterday was novel, I imagine it gets old.

Inurement is the strongest factor here, I believe. Once you see an endless sea of people, it becomes overwhelming and demotivating, and you stop being as empathetic.

(Meta: I'm annoyed as heck by the upvotes/downvotes here coloring our discussion; is there a mode to hide karma on the EA Forum, like there was for old LessWrong?)

Comment by DonyChristie on Progress Open Thread: December 2020 · 2020-12-13T02:16:50.004Z · EA · GW


Comment by DonyChristie on DonyChristie's Shortform · 2020-12-13T00:04:19.952Z · EA · GW

What does it mean for a human to properly orient their lives around the Singularity, to update on upcoming accelerating technological changes?

This is a hard problem I've grappled with for years.

It's similar to another question I think about, but with regards to downsides: if you in fact knew Doom was coming, in the form of World War 3 or whatever GCR is strong enough to upset civilization, then what in fact should you do? Drastic action is required. For this, I think the solution is on the order of building an off-grid colony that can survive, assuming one can't prevent the Doom. It's still hard to act on that, though. What is it like to go against the grain in order to do that?

Comment by DonyChristie on Linch's Shortform · 2020-12-12T05:38:24.620Z · EA · GW

I'm curious what it looks like to backchain from something so complex. I've tried it repeatedly in the past and feel like I failed.

Comment by DonyChristie on vaidehi_agarwalla's Shortform · 2020-12-12T05:35:20.442Z · EA · GW

+1 the math there. How does building an app compare to throwing more resources at finding better pre-existing apps? 

I'll just add I find it kind of annoying how the event app keeps getting switched up. I thought Grip was better than whatever was used recently for EAGxAsia_Pacific (Catalyst?). 

Comment by DonyChristie on Linch's Shortform · 2020-12-12T05:32:08.778Z · EA · GW

The biggest risk here I believe is anthropogenic; supervolcanoes could theoretically be weaponized.

Comment by DonyChristie on DonyChristie's Shortform · 2020-12-12T05:27:04.795Z · EA · GW

Would you be interested in a video coworking group for EAs? Like a dedicated place where you can go to work for 4-8 hours/day and see familiar faces (vs Focusmate which is 1 hour, one-on-one with different people). EAWork instead of WeWork.

Comment by DonyChristie on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-06T05:00:59.358Z · EA · GW

I relate to the angst.

After having interactions like this, I made a rule of not giving money to beggars who explicitly ask for it. If I want to give money to homeless people, it has to be to people not optimizing for it (sometimes aggressively so) in a race to the bottom against other homeless people, especially when people pull the Reciprocity heuristic. Due to my disagreeableness, it was not that hard to set an intention to still say no, for the most part, when people do that.

A different idea I've considered is to have $1 bills (or whatever unit you wish) explicitly allocated for this. Or to keep track of people you pass, and use that counter to track giving to some more effective thing.

N-order effects of giving are probably more valuable than losing a few dollars, though. It feels really good to give to people. I spent a couple hours spontaneously delivering a $2 cookie to someone as a gift, recently.

Some people suggest to carry food on you that you can give to homeless people instead of money.

I'd advise setting a 5 minute timer to come up with a simple policy, and stick to it.

Comment by DonyChristie on What is the impact of the Nuclear Ban Treaty? · 2020-12-04T03:16:21.630Z · EA · GW

When I originally got an email from Ploughshares Fund about it, the headline was suggestive of nukes everywhere being banned. This seemed like a probably-wrong impression to me. Nevertheless, I contended with a possible reality in which nuclear war suddenly was no longer a problem. I grappled with just how worthy of ecstatic celebration this would be; social norms do not suggest the correctly calibrated mood. I let myself feel some existential hope.

As I guessed, it turned out not every country signed it. That being what it is, I still felt it is the case this is a monumental achievement, something that leads to the elimination of global thermonuclear war as a matter of when, not if. That the timeline to that world, of Global Zero or Global Very Small Defensive Stockpile, is definite and finite. That this treaty will gather momentum, and not decrease in efficacy over time. That the Sword of Damocles was a little less wobbly over our heads; perhaps it was on the order of a 1% reduction in x-risk, given it was a major chunk of reduction in nuclear risk, and nuclear risk is a major chunk in x-risk. That maybe, just maybe, people collectively can in fact be saner than I thought, and not get narrowmindedly stuck in brittle finite game framings of coordination problems. I did not actually expect a treaty banning nukes to exist, and it was a welcome surprise. I think we should let ourselves recognize when a major, major existential achievement has been unlocked, and not get stuck in perpetual cynicism about the state of the world. 

Luisa's article suggests otherwise. Reading it, I agree that formal impact seems very low. It's still another step in the right direction. I look forward to the article on informal means of impact.

Comment by DonyChristie on Helping future researchers to better understand long-term forecasting · 2020-11-29T03:34:51.313Z · EA · GW

The Long Now Foundation started something in this direction: "Long Bets".

Comment by DonyChristie on DonyChristie's Shortform · 2020-11-29T00:34:20.290Z · EA · GW

What are ways we could get rid of the FDA?

(Flippant question inspired by the FDA waiting a month to discuss approval for coronavirus vaccines, and more generally it dragging its legs during the pandemic, killing many people, in addition to its other prohibitions being net-negative for humanity. IMO.)

Comment by DonyChristie on DonyChristie's Shortform · 2020-11-19T04:59:31.816Z · EA · GW

Would you be interested in a Cause Prioritization Newsletter? What would you want to read on it?

Comment by DonyChristie on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-10T17:57:48.081Z · EA · GW

For all of the new commenters: it would have been more valuable to comment when I asked this question, as I was considering trying to coordinate EAs using an assurance contract to provide enough volunteers to help his campaign win. Given how the comments turned out, I decided it was not worth pursuing, and therefore assume the Wayne campaign will lose with 50-80% probability, more so because I didn't think EAs would buy in (for better or worse) than because of any sense of how good Wayne's mayorship would actually be for the world on the object level.

(Since basically no one gave a good, quantitative answer to the question beyond their own social-emotional reasoning.)

So I've moved on. In general, dialogue about an election is worth much less in expectation a couple weeks out from the election than it is in advance.

Comment by DonyChristie on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-10T17:55:26.049Z · EA · GW

Thank you for this answer! I liked how reflectively balanced it was on the different considerations and how it tracked the object-level sentient beings at stake.

Comment by DonyChristie on DonyChristie's Shortform · 2020-08-25T20:43:25.086Z · EA · GW

I don't have much slack to respond given I don't enjoy internet arguments, but if you think about the associated reference class of situations, you might note that a common problem is a lack of self-awareness of there being a problem. This is not the case with this dialogue, which should allay your worry somewhat.

The main point here, which this is vagueposting about, is that people on here will dismiss things rather quickly especially if it's a dismissal by someone with a lot of status, in a pile-on way without much overt reflection by the people who upvote such comments. I concluded from seeing this several times that at some point this will happen with a project of mine, and that I should be ok with this world, because this is not a location in which to get good project feedback as far as I can tell. The real risk here I am facing is that I would be dissuaded from the highest-impact projects by people who only believe in things vetted by a lot of academic-style reasoning and evidence that makes legible sense, at the cost of not being able to exploit secrets in the Thielian sense.

Comment by DonyChristie on DonyChristie's Shortform · 2020-08-25T20:41:30.813Z · EA · GW

This is also valid! :)

Comment by DonyChristie on DonyChristie's Shortform · 2020-08-22T17:49:46.396Z · EA · GW

Someday, someone is going to eviscerate me on this forum, and I'm not sure how to feel about that. The prospect feels bad. I tentatively think I should just continue diving into not giving a fuck and inspire others similarly since one of my comparative advantages is that my social capital is not primarily tied in with fragile appearance-keeping for employment purposes. But it does mean I should not rely on my social capital with Ra-infested EA orgs.

I'm registering now that if you snipe me on here, I'm not gonna defensively respond. I'm not going to provide 20 citations on why I think I'm right. In fact, I'm going to double down on whatever it is I'm doing, because I anticipate in advance that the expected disvalue of discouraging myself due to really poor feedback on here is greater than the expected disvalue of unilaterally continuing something the people with Oxford PhDs think is bad.

Comment by DonyChristie on Should we think more about EA dating? · 2020-07-26T23:30:34.582Z · EA · GW


Addition: I think this is a serious need for many people and it would save a lot of time and energy to make the process more effective. I think worries about cultishness are quite overblown; there are dating sites for various kinds of groups. Solutions could look like either intra-EA dating or a consultancy solving the dating problems for particular EAs, matchmaking them with people outside the movement. Working on this would also be a great way to create a startup that could scale to millions of customers (see also Roam, which started out catering to individual EAs and is now a fast-growing success).

Comment by DonyChristie on Against opposing SJ activism/cancellations · 2020-06-19T03:29:48.032Z · EA · GW

Coordination infrastructure that would allow sane people to defuse runaway information cascades in generality would be very valuable to discover, unless physics doesn't allow it.

Can any historically and/or sociologically familiar people comment on whether the witch hunts could have been stopped by counterfactually motivated and capable parties, and what order of magnitude of motivation and capability might have stopped them?

Comment by DonyChristie on What are some software development needs in EA causes? · 2020-03-09T05:39:59.154Z · EA · GW

The Covid-19 Risk Assessment app is looking for programmers! (Note: that website may need updating with the latest progress.)

You can email them at

Comment by DonyChristie on Who in EA enjoys managing people? · 2019-04-13T01:16:19.000Z · EA · GW

I enjoy organizing events and coordinating new online groups and projects, as well as coaching people, but am rather unskilled at these things at the moment. I expect management to ever-increasingly be an important hat of mine in my future. I would highly welcome marginal domain-specific mentorship here!

(coughcough, to the person reading this who knows something about these things) :)

Comment by DonyChristie on Mental support · 2019-03-28T04:28:32.281Z · EA · GW

Because people who need help the most often don't have the money?

Comment by DonyChristie on Should there be an EA crowdfunding platform? · 2018-05-04T04:36:18.385Z · EA · GW

Someone just try and build something.

Comment by DonyChristie on 69 things that might be pretty effective to fund · 2018-01-22T05:19:13.652Z · EA · GW

Global catastrophic risks: North Korea: Fund ‘Flash Drives for Freedom’, which smuggles flash drives with unbiased information into North Korea. Such an approach was implicitly endorsed in November by Thae Yong-ho, once number two at North Korea’s London embassy and now a defector. There’s also academic analysis of this isolation being one of the reasons for the lack of uprising in North Korea.

Any thoughts on the expected value of this in particular? It says $1 ~ 1 flash drive.

Comment by DonyChristie on [deleted post] 2018-01-20T23:32:10.596Z

The political mobilization you are prematurely demanding to rectify this laundry list of concerns is first contingent on individuals like myself being persuaded of the veracity of your claims. This post makes a lot of them, and their conjunction is exceedingly improbable. It would be easier for me to be persuaded if you first expounded on one concrete opportunity for intervention, such as this pipeline (or whichever is the best specific intervention here): its cost-effectiveness in creating QALYs (or your preferred measure), and how the resulting expected output of our contributions would compare to other potential effective interventions in a similar class of human-concernedness, such as ALLFED, AMF, or biosecurity, or even more dissimilar ones like AGI alignment or animal welfare. That would work better than expressing shock that we do not hold the same inside view on what is literally the most important thing to do with one's resources.

This recently written guide on introducing new interventions to aspiring effective altruists will, if followed, help you achieve that. You can also post any calculations in this group and receive feedback. Effective Environmentalism might interest you as well. :)

Comment by DonyChristie on How to get a new cause into EA · 2018-01-10T12:18:59.993Z · EA · GW

Effective altruism has had three main direct broad causes (global poverty, animal rights, and far future) for quite some time.

The whole concept of EA having specific recognizable compartmentalized cause areas and charities associated with it is bankrupt and should be zapped, because it invites stagnation as founder effects entrench further every time a newcomer joins and devotes mindshare to signalling ritual adherence to the narrative of different finite tribal Houses to join and build alliances between or cannibalize, crowding out new classes of intervention and eclipsing the prerogative to optimize everything as a whole without all these distinctions. "Oh, I'm an (animal, poverty, AI) person! X-risk aversion!"

"Effective altruism" in itself should be a scalable, cause-neutral methodology de-identified from its extensional recommendations. It should stop reinforcing these arbitrary divisions as though they were somehow sacrosanct. The task is harder when people and organizations ostensibly about advancing that methodology settle into the same buildings and object-level positions, or when charity evaluators do not even strive for cause-neutrality in their consumer offerings. I'm not saying those can't be net goods, but the effects on homogenization, centralization, and bias all restrict the purview of Effective Altruism.

I have often heard people worry that it’s too hard for a new cause to be accepted by the effective altruism movement.

Everyone here knows there are new causes and wants to accept them, but they don't know that everyone knows there are new causes, and so on: a common-knowledge problem. They're waiting for chosen ones to update the leaderboard.

If the tribally-approved list were opened, it would quickly spiral beyond working-memory bounds. This is a difficult problem to work with, but not an impossible one. Let's make the list and put it somewhere prominent for salient access.

Anyway, here is an experimental Facebook group explicitly for initial cause proposal and analysis. Join if you're interested in doing these!

Comment by DonyChristie on Donation Plans for 2017 · 2018-01-08T05:27:06.844Z · EA · GW

For more speculative things, we want to put part of the money towards a project that a friend we know through the Effective Altruism movement is starting. In general I think this is a good way for people to get funding for early stage projects, presenting their case to people who know them and have a good sense of how to evaluate their plans.

What is the project (at the finest granularity of detail you are comfortable disclosing)?

Comment by DonyChristie on Viewing Effective Altruism as a System · 2018-01-08T05:22:33.422Z · EA · GW

Yes, fortify health.

Comment by DonyChristie on We Could Move $80 Million to Effective Charities, Pineapples Included · 2017-12-14T21:09:54.558Z · EA · GW

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

Oh dear! No, I didn't explicitly realize this beyond passing thoughts. In retrospect, I'm confused why this wasn't cached in my mind as being against reddiquette. I should eat my own dogfood regarding brigading. I edited it so it's not soliciting. Let me know here or privately if there are any further fixes I should make to the post (i.e. if I should just remove the links to the known EA comments).

Comment by DonyChristie on Mental Health Shallow Review · 2017-11-27T09:48:25.434Z · EA · GW

Did you look into coherence therapy or other modalities that use memory reconsolidation? It is theoretically more potent than CBT.

Comment by DonyChristie on [deleted post] 2017-11-11T23:31:32.675Z

Having now installed the userstyles, in order to unblind (and re-blind) myself I need to press the Stylish icon and press 'Deactivate' on the script? This might be a trivial inconvenience.

Comment by DonyChristie on Introducing Canada’s first political advocacy group on AI Safety and Technological Unemployment · 2017-10-31T17:49:08.848Z · EA · GW

To what extent have you (whoever's in charge of CHS) talked with the relevant AI Safety organizations and people?

To what extent have you researched the technical and strategic issues, respectively?

What is CHS's comparative advantage in political mobilization and advocacy?

What do you think the risks are to political mobilization and advocacy, and how do you plan on mitigating them?

If CHS turned out to be net harmful rather than net good - what process would discover that, and what would the result be?

Comment by DonyChristie on Should we be spending no less on alternate foods than AI now? · 2017-10-30T19:20:37.069Z · EA · GW

Truly one of the most satiating interventions on the menu of causes!

Could you go more into the full list of what the food alternatives look like, and how tractable each of them are?

Comment by DonyChristie on Introducing fortify hEAlth: an EA-aligned charity startup · 2017-10-28T01:42:18.124Z · EA · GW

That is awesome and exciting!

What made you decide to go down this path? What decision-making procedure was used? How would you advise other people to determine whether they are a fit for charity entrepreneurship?

How do you plan on overcoming the lack of expertise? How does the reference class of nonprofit startups founded by non-experts compare to the reference class of nonprofit startups founded by experts?

fortify hEAlth

Is this the actual name? I personally think it's cute, but it might be confusing to those not familiar with the acronym.

I think what you're doing could be very high-impact compared to the counterfactual; indeed, it may be outright heroic. ^_^

Comment by DonyChristie on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T21:56:06.276Z · EA · GW

I second most of these concerns.

Does this not risk diluting EA into just another ineffective community?

The core of EA is cause-neutral good-maximization. The more we cater to people who cannot switch their chosen object-level intervention, the less able the movement will be to coordinate and switch tracks; such people will become offended by suggestions that their chosen intervention is not the best one. As it is, I wish more people challenged how I prioritize things, but they probably refrain for fear of offending others as a general policy.

You say that using the term "AI" without explanation is too much jargon. Is that really a reasonable standard? AI is not an obscure term. If you want us to avoid the term "AI", your standards of accessibility seem rather extreme.

I am in favor of non-dumbed-down language: having to keep running a check on whether a person understands a concept I am referring to adds a constraint on how I can communicate. I do agree that jargon generation is sometimes fueled more by the desire for weird neologisms than by the desire to increase clarity.

You claim we should focus on making altruistic people effective instead of effective people altruistic.

I once observed: "Effectiveness without altruism is lame; altruism without effectiveness is blind." 'Effectiveness' seems to carry most of the Stuff that is needed; to Actually Do Good Things requires more of the Actually than the Good. Caring about others takes less skill than being able to accomplish consequential things. I am open to persuasion otherwise, but I have experienced most people as apathetic and nonchalant about the fate of the world, which is an enormous hindrance to becoming interested in effective altruism.

Comment by DonyChristie on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T21:17:21.185Z · EA · GW

Discussion about inclusivity is really conspicuous by its absence within EA. It's honestly really weird we barely talk about it.

Are you sure? Here are some previous discussions (most of which were linked in the article above):

I recall more discussions elsewhere in comments. Admittedly, this is spread over several years. What would not barely talking about it look like, if not that?