Posts

What should Future Fund regrantors fund? 2022-09-28T05:48:01.408Z
Hypercerts: A new primitive for public goods funding 2022-09-10T21:43:45.843Z
Experiment in Retroactive Funding: An EA Forum Prize Contest 2022-06-01T21:15:09.031Z
Mosul Dam Could Kill 1 Million Iraqis. 2022-03-01T03:28:56.438Z
thank machine doggo 2021-10-30T06:52:49.450Z
What is the impact of the Nuclear Ban Treaty? 2020-11-29T00:26:31.318Z
Which is better for animal welfare, terraforming planets or space habitats? And by how much? 2020-10-17T21:49:51.311Z
DonyChristie's Shortform 2020-08-22T17:49:36.928Z
What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? 2020-08-07T01:50:34.172Z
Will Three Gorges Dam Collapse And Kill Millions? 2020-07-26T02:43:40.087Z
How would you advise someone decide between different biosecurity interventions? 2020-03-30T17:05:26.161Z
What are people's objections to earning-to-give? 2019-04-13T20:16:43.283Z
What are ways to get more biologists into EA? 2019-01-11T21:06:01.945Z
We Could Move $80 Million to Effective Charities, Pineapples Included 2017-12-14T04:40:26.648Z

Comments

Comment by DonyChristie on [deleted post] 2022-10-02T23:21:25.067Z

My apologies, but I had to strong downvote because this is the sort of content that I want to keep far away from the forum. I would have given only a weak downvote, or maybe even none, if:

  • it was nonpartisan, nonpolarized, and neutral
  • there was a transcript of the video (I watched only a couple of minutes; it was too long)
  • there was a specific theory of change with an expected value calculation for a given amount of resources to improve a specific problem
  • it compared this to other possible uses of those resources within the same or different cause areas

(Here is an example of a post from today that seems somewhat more neutral and specific, though still not as mechanistic as I'd like, but I only skimmed it: https://forum.effectivealtruism.org/posts/FtHhC7CfN4r5xD3Lm/easy-fixing-voting)

Comment by DonyChristie on [deleted post] 2022-10-01T15:40:14.758Z

The probability has increased by some amount, yes.

https://www.lesswrong.com/posts/gyMXuhcYyMRpwEjGq/a-few-terrifying-facts-about-the-russo-ukrainian-war

Comment by DonyChristie on NASA will re-direct an asteroid tonight as a test for planetary defence (link-post) · 2022-09-26T18:44:04.881Z · EA · GW

As noted in 'The Precipice' though, while potentially reducing the risk from asteroids, such a capability may pose a larger risk itself if used by malicious actors to target asteroids towards Earth.

 

I am very confident that harm from the dual-use risk of improved asteroid deflection technology is, in general, much more likely than a random asteroid hitting us, and therefore that this experiment has likely made the world worse off (with a bit less confidence, because maybe it's still easier to deflect asteroids defensively than offensively, and this experiment improved that defensive capability?). This is possibly my favorite example of a crucial consideration, and also, more speculatively, evidence that the sum of all x-risk reduction efforts taken together could be net-harmful (I'd give that a 5-25% chance?).

Comment by DonyChristie on 9/26 is Petrov Day · 2022-09-26T03:03:29.997Z · EA · GW

The post should be updated to state that he is deceased.

Comment by DonyChristie on EA criticism: not yet a religion · 2022-09-02T01:03:35.100Z · EA · GW

I don't think you're gonna make a religion by having the agenda set top-down by cool-headed moral vanguarding. You need to be unhinged enough to go into a cave and hear voices. 

Comment by DonyChristie on Notes on how prizes may fail and how to reduce the risk of them failing · 2022-08-30T21:00:26.964Z · EA · GW

2 FTEs doesn't seem that bad to me for something as important as cause exploration and given how big the movement is? This just seems fine to me?

Comment by DonyChristie on Should I force myself to work on AGI alignment? · 2022-08-24T18:56:33.362Z · EA · GW

What does forcing yourself look like concretely as an anticipated physical experience? What would working on the other stuff you would rather work on look like concretely as an anticipated physical experience?

Comment by DonyChristie on [deleted post] 2022-08-24T18:32:21.004Z

Put simply, Bitcoin is widely perceived as the most promising candidate, because it benefits from the network effect. Anyone can invent a cryptocurrency, in the same way that anyone is free to invent their own language or found their own social-media website. The hard part, however, is in getting lots of people to buy in to your new system to the point where it dominates the market – and that’s why Bitcoin’s first-mover advantage is so important. The more people who use Bitcoin over alternative cryptocurrencies (“altcoins”), the more incentive there is for others to use Bitcoin too.

 

As someone who strongly desires the increased ability to coordinate out of inadequate equilibria, such as via tech like assurance contracts, I am vehemently, even spiritually, against deciding the Schelling point for decentralized currency (or any other technological equilibrium) based on whatever protocol was first to capture initial network effects, independent of whether its properties are ideal. It is a line of argument born of existential pessimism, whether applied to Bitcoin, social media platforms, transportation, or any other status quo.

Comment by DonyChristie on Anti-squatted AI x-risk domains index · 2022-08-12T21:22:36.159Z · EA · GW

Relevant domains I own that I'm basically (anti-)squatting until a great use is found by myself or others: 

  • effectivealtruism.money
  • effectivealtruism.capital
  • effectivealtruism.plus
  • effectivealtruism.ventures (good for an incubator or a revamp of the previous EA Ventures)

The first two are pretty relevant to my work on impact markets, so I will want to see a case for more relevant usage of the name before handing them off.

Comment by DonyChristie on Wanting to dye my hair a little more because Buck dyes his hair · 2022-07-23T02:11:24.001Z · EA · GW

reward the virtue of silence

I would be quite curious to know how this could work!

Comment by DonyChristie on Open Thread: June — September 2022 · 2022-07-21T20:24:41.378Z · EA · GW

What might your plan be?

Comment by DonyChristie on Energy Access in Sub-Saharan Africa: Open Philanthropy Cause Exploration Prize Submission · 2022-07-19T21:57:26.866Z · EA · GW

One consideration not mentioned here is the impact on animal consumption with access to more power. Do you (or anyone else) have a sense of what that could be?

(Edit: I am talking about the poorly-named "Poor Meat Eater Problem", which is a consideration that applies to basically all poverty interventions.)

Comment by DonyChristie on Impact Markets: The Annoying Details · 2022-07-17T20:40:23.764Z · EA · GW

There should be a consensus in EA

This is prohibitively vague. How do you operationalize this exactly? Can you give examples of when EA has achieved a consensus that is analogous to what you desire in this situation?

If e.g. the Future Fund and Open Phil were to use it, wouldn't that be a pretty strong signal, especially since they would want to derisk it pretty heavily, with months of dialogue and planning, before scaling up its usage? What are you looking for here that wouldn't already happen as a matter of course during the significant amount of downside-mitigation work that would need to happen while building and scaling it up in concert with grantmakers, donors, and charities? Those parties are pretty risk-averse on average and will generally want to be satisfied with at least interim solutions to at least some of the downside risks we, you, and others have identified (and probably to ones not yet identified).

I am pretty happy with Avengers Assemble-ing some kind of group discussion on impact markets as a consensing vehicle, perhaps a virtual event on the EA Forum using Polis, maybe a pop-up event during EAG SF, if it will please you or meet some objective criteria you specify. I find this sort of thing generally desirable regardless (cost willing), but I additionally want to know what gets your thumbs up specifically, since my impression is that you want to stop people from doing anything before achieving this consensus, whereas I view this downside work as something done concurrently alongside the long and arduous road of empirical building work that will provide spades of course-correcting feedback.

about a specific potential intervention to create an impact market, before it is decided to carry out that intervention

Decision-making by committee (my current impression of your ask) on specific product versions is not how things get built, especially early on, especially with multiple parties involved, and is a recipe for not getting things done. The space of decisions is far too high-dimensional, and things change based on feedback. Approaching a consensus early on about the important parts of the general theory of impact markets, such that robust net-positivity is agreed upon, seems much more tractable and important in comparison, as is generally keeping people working in the field coordinated and in communication as they iterate through different parameters of their visions.

Comment by DonyChristie on Impact Markets: The Annoying Details · 2022-07-15T21:58:50.234Z · EA · GW

One suggestion we got at the EA Econ retreat was to require a prediction market that tracks the expected value paired to each certificate, so that the history of that expected value is known, reducing the risk of retro funding ex ante risky projects.
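A minimal sketch of what that could look like in data-structure terms (hypothetical names, my own illustration rather than an agreed design): each certificate carries a time-stamped history of the market's expected-value forecasts, which a retro funder can inspect before buying.

```python
# Hypothetical sketch only: a certificate carrying a time-stamped history of
# market-forecasted expected value, which a retro funder can inspect before buying.
from dataclasses import dataclass, field


@dataclass
class Certificate:
    project: str
    ev_history: list[tuple[str, float]] = field(default_factory=list)  # (date, forecasted EV)

    def record_forecast(self, date: str, expected_value: float) -> None:
        self.ev_history.append((date, expected_value))

    def ever_ex_ante_negative(self) -> bool:
        # A retro funder could decline to buy if the market ever priced the
        # project's expected value below zero before completion.
        return any(ev < 0 for _, ev in self.ev_history)


cert = Certificate("example-project")
cert.record_forecast("2022-06-01", 3.0)
cert.record_forecast("2022-07-01", -1.5)
print(cert.ever_ex_ante_negative())  # True -> the project looked ex ante risky at some point
```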

Comment by DonyChristie on Crypto markets, EA funding and optics · 2022-07-14T16:27:01.668Z · EA · GW

I think we should develop a reputation for banking on innovative high-risk high-reward technology. This is what drives progress and creates wealth.

Comment by DonyChristie on Four focus areas of effective altruism · 2022-07-12T16:13:48.921Z · EA · GW

I think this post was harmful in one particular way even though it was most likely net-positive. At least in my conceptual landscape it helped crystallize this appearance that these focus areas were the only ones in existence. Early EA culture could have done a lot better at recognizing a potential vast plethora of yet-unknown causes rather than hill-climbing on what was known. Many times I saw the meme of people claiming "the four cause areas of EA" which may have stemmed from this post. A lot of it was my own psychology of course, though this is correlated with others'. Of course, people need a map of causes to navigate and communicate about. 

Comment by DonyChristie on Art Recommendation: Dr. Stone · 2022-07-09T20:54:32.786Z · EA · GW

Here are English translations of the first season's theme songs, which I deeply enjoy.

Comment by DonyChristie on What is the top concept that all EAs should understand? · 2022-07-05T12:28:24.499Z · EA · GW

Cause-Neutrality

I've been worried that the basic mental motions of being able to evenhandedly consider switching between different causes in a single session of thought or conversation will be marginalized as people settle more into established hierarchies around certain causes.

(I will fill out my answer more sometime in the future probably; others are welcome to comment and add to it.)

Comment by DonyChristie on The Future Might Not Be So Great · 2022-07-01T22:40:26.927Z · EA · GW

I recommend a mediator be hired to work with Jacy and whichever stakeholders are relevant (speaking broadly). This will be more productive than a he-said she-said forum discussion that is very emotionally toxic for many bystanders.

Comment by DonyChristie on Future Fund June 2022 Update · 2022-07-01T03:18:44.436Z · EA · GW

We appreciate you! ❤️

Comment by DonyChristie on The Future Might Not Be So Great · 2022-06-30T18:53:02.269Z · EA · GW

I like "quality risks" (q-risks?) and think this is more broadly appealing to people who, for whatever reason, don't want to think about suffering reduction as the dominant guiding frame. Moral trade can be done with people concerned about other qualities, such as those who worry about global totalitarianism for reasons independent of suffering, like freedom and diversity.

It's also relatively more neglected than the standard extinction risks, which I am worried we are collectively Goodharting on as our focus (and to a lesser extent, the focus on classical suffering risks may fall into this as well). For instance, nuclear war and climate change are blatant, obviously scary problems that propagate well memetically, whereas there may be many q-risks to future value that are more subtle and have yet to be identified.

Tangentially, this gets into a broader crux I am confused by: should we work on obvious things or nonobvious things? I am disposed towards the latter. 

Comment by DonyChristie on Impact markets may incentivize predictably net-negative projects · 2022-06-30T01:57:58.662Z · EA · GW

I am going to be extremely busy over the next week as I prep for the EAEcon retreat and wrap up the retro funding contest, among other things. Combining that with the generally high personal emotional cost of engaging with this post, I will choose not to comment further for at least a week so I can focus my energy elsewhere (unless inspiration strikes me).

Here are a couple of considerations relevant to why I at least have not been more responsive, generally speaking:

Comment by DonyChristie on Impact markets may incentivize predictably net-negative projects · 2022-06-26T01:01:47.529Z · EA · GW

I think you're missing the part where, if such a marketplace were materially changing the incentives and behavior of the Alignment Forum, people could get an impact certificate for counterbalancing externalities, such as critiquing/flagging/moderating a harmful AGI capabilities post, possibly motivating more curation than a small moderation team alone could handle.

That's not to say that in that equilibrium there couldn't be an even stronger force of distributionally mismatched positivity bias, e.g. upvote-brigading assuming there are some Goodhart incentives to retro fund posts in proportion to their karma, but it is at least strongly suggestive.

Comment by DonyChristie on EA-break, EA-slow · 2022-06-23T00:33:05.368Z · EA · GW

I'm gonna go on an EA break for this evening. Wish me luck!

Comment by DonyChristie on Impact markets may incentivize predictably net-negative projects · 2022-06-22T22:36:03.288Z · EA · GW

Ofer (and Owen), I want to understand and summarize your cruxes one by one, in order to sufficiently pass your Ideological Turing Test that I can regenerate the core of your perspective. Consider me your point person for communications.

Crux: Distribution Mismatch of Impact Markets & Anthropogenic X-Risk

If I understand one of the biggest planks of your perspective correctly, you believe that there is a high-variance distribution of utility for x-risk projects, roughly normal and centered around 0, such that x-risk projects can often increase x-risk rather than decrease it. I have been concerned for a while that the x-risk movement may be bad for x-risk, so I am quite sympathetic to this claim, though I do believe some significant fraction of potential x-risk projects approach being robustly good. That said, I think we are basically in agreement that a large subset of the space of possible x-risk projects would actually increase it, though it's harder to be sure about the in-practice share with real x-risk projects, given that people generally, if not always, avoid the obviously bad stuff.
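(As a toy numerical sketch of why that shape matters, and this is my own illustration rather than anything from your post: if a retro funder effectively pays out only on realized upside, a zero-mean project still looks worth funding, and more so the higher its variance.)

```python
# My own toy illustration of the distribution-mismatch worry, not a claim about
# any real project: a funder who pays only for realized positive impact assigns
# positive value to a zero-mean project, and more value the higher its variance.
import numpy as np

rng = np.random.default_rng(0)

for sigma in [1, 10, 100]:
    impact = rng.normal(loc=0.0, scale=sigma, size=1_000_000)  # true impact, mean zero
    payout = np.maximum(impact, 0.0)                           # the downside is ignored
    closed_form = sigma / np.sqrt(2 * np.pi)                   # E[max(X, 0)] for X ~ N(0, sigma^2)
    print(f"sigma={sigma:>3}: simulated payout {payout.mean():.2f}, closed form {closed_form:.2f}")
```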

It seems especially important to prevent the risk from materializing in the domains of anthropogenic x-risks and meta-EA.

The examples you are most concerned by in particular are biosecurity and AI safety (as mentioned in a previous comment of yours), due to potential infohazards of posts on the EA Forum, as well as meta EA mentioned above. You have therefore suggested that impact markets should not deal with these causes, either early on such as during our contest or presumably indefinitely.

Let me use at least one example set of particular submissions that may fall under these topics, and let me know what you think of them.

I was thinking it would be quite cool if both Yudkowsky and Christiano respectively submitted certificates for their posts, 'List of Lethalities' and 'Where I agree and disagree with Eliezer'. These are valuable posts in my opinion and they would help grow an impact marketplace.

My model of you would say either that:

1) funding those particular posts is net bad, or 

2) funding those two posts in particular may be net good, but it sets a precedent that will cause there to be further counterfactual AI safety posts on EA Forum due to retroactive funding, which is net bad, or 

3) posts on the EA Forum/LW/Alignment Forum being further incentivized would be net good (minus stuff such as infohazards, etc), but a more mature impact market at scale risks funding the next OpenAI or other such capabilities project, therefore it's not worth retroactively funding forum posts if it risks causing that.

I am tentatively guessing your view is something at least subtly different from those rough disjunctions, though not too different.

Looking at our current submissions empirically, my sense is that the potentially riskiest certificate we have received is 'The future of nuclear war' by Alexei Turchin. The speculation in it could potentially provide new ideas to bad actors. I don't know; I haven't read or thought about this one in detail yet. For instance, core degasation could be a new x-risk, but it also seems highly unlikely. This certificate could also be the most valuable. My model of you says this certificate is net-negative. I would agree that it may be an example of the sort of situation where some people believe a project is a positive externality and some believe it's a negative externality, but the distribution mismatch means it's valued positively by a marketplace that can observe the presence of information but not its absence. Or maybe the market thinks riskier stuff may win the confidence game. 'Variance is sexy.' This is a very provisional thought and not anything I would clearly endorse; I respect Alexei's work quite highly!

After your commentary saying it would be good to ban these topics, I was considering conceding that condition, because it doesn't seem too problematic to do so for the contest, and by and large I still think that, though again I would specifically quite like to see those two AI posts submitted if the authors want that.

I'm curious to know your evaluation of the following possible courses of action, particularly by what percentage your concern is reduced vs other issues:

  • impact markets are isolated from x-risk topics for all time using magic, they are not isolated from funding meta EA which could downstream affect x-risk
  • impact markets are isolated from x-risk topics and from funding meta EA for all time using magic, they only fund object-level stuff such as global health and development
  • we don't involve x-risk topics in our marketplace for the rest of the year
  • we don't involve x-risk topics until there is a clear counterbalancing force to the distribution mismatch in the mechanism design, in a way that can be mathematically modelled, which may be necessary if not sufficient for proving the mechanism design works
    • or you or agents you designate are satisfied that a set of informal processes, norms, curation processes, etc. are achieving this for a centralized marketplace
      • though I would predict this does not address your crux that a centralized impact market may inspire / devolve into a simpler set of equilibria of retro funding that doesn't use e.g. Attributed Impact, probably in conjunction with decentralization
        • I can start a comment thread for discussing this crux separately
  • we do one of the above but allow the two AI posts as exceptions

That list is just a rough mapping of potential actions. I have probably not characterized your position well enough to offer a full menu of actions you might like to see taken regarding this issue.

tl;dr is that I'm basically curious 1) how much you think the risk is dominated by the distribution mismatch applying specifically to x-risk vs., say, global poverty, 2) on which timeframes it is most important to shape the cause scope of the market in light of that (now? at full scale? both?), 3) whether banning x-risk topics from early impact markets (in ~2022) is a significant risk reducer by your lights.

(Meta note: I will drop in more links and quotes some time after publishing this.)

Comment by DonyChristie on New cause area: Violence against women and girls · 2022-06-16T05:41:43.476Z · EA · GW

Happy to help arrange this down the line.

Comment by DonyChristie on What’s the theory of change of “Come to the bay over the summer!”? · 2022-06-08T22:46:21.848Z · EA · GW

I think you're understating how gatekept the inner ring offices are.

I'm happy to host couchsurfers.

Comment by DonyChristie on Experiment in Retroactive Funding: An EA Forum Prize Contest · 2022-06-02T23:37:28.099Z · EA · GW

Quick comment here - thanks for chipping in! 

I guess you'd want to handle this by saying that people shouldn't buy impact for any work trying to establish them at the moment, since it's ex ante risky?

I personally agree overall with the general gist of this (something like not selling the impact of working on impact markets in the short term, probably years or decades, maybe forever) and was going to make a statement of my own long-term intention along these lines at some point when I got around to personally responding to one of Ofer's comments; the way you put it further solidifies my sense that this would probably be prudent. I have more to say but will bow out for now due to personal needs. I'd prefer to have these discussions in a space dedicated to curiously examining downsides, which I will make a separate post for.

Comment by DonyChristie on Mastermind Groups: A new Peer Support Format to help EAs aim higher · 2022-06-01T01:49:34.215Z · EA · GW

If you're interested in joining a mastermind, please enter your email address. We are considering starting an EA mastermind service or linking up with others already working on that. 

 

I lazily skimmed the post very quickly, because I know about Mastermind groups and was looking for a thing to sign up to, and it took a second read for me to notice the sign-up calls to action! I recommend putting them in very big font.

Comment by DonyChristie on EA and the current funding situation · 2022-05-15T22:15:41.709Z · EA · GW

As someone who knows Anthony in person and has engaged in more high-bandwidth communication with him than anyone else on this thread, I am happy to stake social capital on his insights being very much worth listening to, broadly speaking, and on it being worth connecting him with anyone who could give his ideas legs.

I have downvoted at least one comment in this thread that I felt was not conducive to more of his ideas being externalized into the world, due to what I believe to be an unnecessary focus on social norms/tone policing over tracking object-level ideas. I am not responding further, nor am I responding to particular comments, as I don't want to feed the demon thread, but I do want to provide clarity on my judgement of what is in the right and also state that I think Anthony could very possibly provide us Cause X as much as anyone I've seen.

To that end, I believe his interest in new/different infrastructure for how to communicate and internalize ideas is reasonable, and that it's unreasonable to expect idea providers to also have to be the idea executors in the ideal impact marketplace, especially to the extent of expecting them to engage in implicit politics more than is strictly necessary to get the ball rolling.

Comment by DonyChristie on When did the EA Forum get so good?! · 2022-05-06T11:18:21.003Z · EA · GW

I personally think the average quality of posts has gone down, but that this is probably okay. The total number of good posts has gone up; it's harder, but not that much harder, to sift through to find them. It would be nice if good recent posts were visible longer, or if there were a feature to save posts (maybe I could use Thought Saver for this). Karma inflation seems quite high, and karma is less informative than it used to be, boosted by social desirability biases, though it makes sense to take it as good news that there are more people reading, evaluating, and contributing. I want more object-level stuff over community-building stuff (maybe these can be separated?).

Comment by DonyChristie on The AI Messiah · 2022-05-05T17:59:42.550Z · EA · GW

Moynihan's book X-Risk makes a key subtle distinction between religious apocalypticism where God is in control and the secular notion of a risk that destroys all value that only humanity can stop. I'm uncertain how much the distinction is maintained in practice but I recommend the book.

Comment by DonyChristie on A retroactive grant for creating the HPMoR audiobook (Eneasz Brodski)? · 2022-05-03T17:33:25.831Z · EA · GW

We are building a marketplace (newly published site, very rough-looking) for impact markets. We think it's important to set it up right, as there are short-term and long-term risks involved that we want to mitigate. We would be happy to facilitate this sort of funding between Eneasz and retro funders down the line, or soon as part of an experiment. (I would personally be quite excited to see retro funding of his video, Shit Rationalists Say.)

Comment by DonyChristie on Targeting Celebrities to Spread Effective Altruism · 2022-05-03T00:56:02.758Z · EA · GW

I think this is both 'obviously' good in the expected-value sense, in a way that's going to be undervalued by most people, and likely to be cringe if done by many people, or at least by some fraction of people who will try to cram propaganda down someone's throat.

"Have you heard the good word of effective altruism?" click

Maybe I'd point at an ethos more like 'find scenes you vibe in, befriend and converse with powerful people in those scenes, and they will naturally receive your values through osmosis'. 

Maybe most people here should just focus on engaging in forthright exchange with each other and with others they find in the world, and this will naturally tend to exert a memetic gravity that pulls people in, rather than trying to push on others.

Comment by DonyChristie on What should I ask Lewis Dartnell (author of 'The Knowledge' and 'Origins')? · 2022-04-29T17:52:17.368Z · EA · GW

Something like: What does he think about the utility of bunkers and/or Faraday Cages for resilience against GCRs/nukes?

Comment by DonyChristie on An uncomfortable thought experiment for anti-speciesist non-vegans · 2022-04-19T09:06:59.615Z · EA · GW

Considering your analogy, it is easy to buy clothes that didn't require slave labor

Is this true? I have heard the claim 'there is more slavery going on than at any point in history', but know very little about this and how it's defined. I would guess it's hard for me to avoid this if I'm going to a normal clothing store.

Comment by DonyChristie on Help us make civilizational refuges happen · 2022-04-14T00:33:00.166Z · EA · GW

I volunteer my amateur enthusiasm to whoever works on this.

One thing I'm curious about is what percentage of worlds would require the bunker to be well-hidden in order to be useful, e.g. due to an all-out WW3 in which automated weapons seek out targets that would include bunkers. I am less sure about the size of the risk of Local Warlords In Your Area, though I will note that if the bunker is near a local population, it should be cooperative with nearby inhabitants rather than engage in the false individualist bias that is rampant in survivalist thought.

I think it would make sense to have multiple bunkers distributed across different geographies and suited to different GCRs, where some fraction of these bunkers are kept very, very secret. But I strongly think a v1 (Version 1 / Vault 1) should not have that feature.

Comment by DonyChristie on DonyChristie's Shortform · 2022-04-06T23:59:47.983Z · EA · GW

When I say this (in a highly compressed form, on the shortform where that's okay), it gets a bit downvoted; when Scott says it, or at least, says something highly similar to my intent, it gets highly upvoted.

Comment by DonyChristie on Open Thread: Spring 2022 · 2022-04-01T02:34:20.957Z · EA · GW

This looks like a great idea!

Comment by DonyChristie on DonyChristie's Shortform · 2022-03-16T22:16:10.291Z · EA · GW

Stop using "Longtermism" as a pointer to a particular political coalition with a particular cause.

Comment by DonyChristie on Let Russians go abroad · 2022-03-12T22:03:50.554Z · EA · GW

PSA: I know a brilliant Russian ML researcher who was working on an AI safety grant before the war started. The grant was cancelled due to sanctions, and this is your chance to hire him to work abroad or remotely.

Wait, seriously? Is this a grant by an EA institution? How does "a grant get cancelled due to sanctions"? That sounds terribly risk-averse. Someone replace this funding!

Comment by DonyChristie on The Future Fund’s Project Ideas Competition · 2022-03-12T02:56:25.199Z · EA · GW

I've thought about this space a good deal. I think this is really dangerous stuff. It must be aligned with the good. Don't call up what you can't put down.

"Coordination is also collusion." - Alex Tabarrok

Comment by DonyChristie on Mosul Dam Could Kill 1 Million Iraqis. · 2022-03-10T04:43:27.213Z · EA · GW

According to this article, which I found as a source for this page, the Iraqi government is doing the following:

In addition to the Mosul Dam, it is preparing to rebuild the Badush Dam in Ninawa province, which ceased operation decades ago.

The Ministry of Water Resources on May 18th began preparing technical studies in preparation for the project.

which would be a permanent solution to the flood risk. I don't know what the original source for that article is, but here is a more recent one that corroborates it:

“The ministry is working to re-evaluate the status of the Badush Dam, in coordination with international companies, and in the event that positive results of the re-evaluation appear, the ministry will begin to complete the dam," he added.

Can anyone corroborate this with additional data on the Iraqi government's intentions? If it was really going to happen, I would consider it ca(u)se closed from our perspective.

Comment by DonyChristie on AI Risk is like Terminator; Stop Saying it's Not · 2022-03-08T22:51:50.836Z · EA · GW

Fiction can be a powerful tool for generating public interest in an issue, as Toby Ord describes in the case of asteroid preparedness as part of his appearance on the 80,000 Hours Podcast:

 

I think additional general awareness of asteroid preparedness is net negative, because it increases the amount of dual-use asteroid deflection capability more than it increases the amount of non-dual-use asteroid defense capability.

The sign of asteroid awareness, though, is probably dominated by the number of people who go on to think about and work on other existential risks. That may either be really good, by preventing x-risks, or may itself be dual-use, making general mass awareness of x-risks as a category net bad.

Comment by DonyChristie on The Future Fund’s Project Ideas Competition · 2022-03-08T07:08:12.585Z · EA · GW

Improving Critical Infrastructure

Effective Altruism

Some dams are at risk of collapse, potentially killing hundreds of thousands. The grid system is very vulnerable to electromagnetic pulse attack. Infrastructural upgrades could prevent sudden catastrophes from failure of critical systems our civilization runs on.

Comment by DonyChristie on The Future Fund’s Project Ideas Competition · 2022-03-08T06:09:10.981Z · EA · GW

Legalization of MDMA & psychedelics to reduce trauma and cluster headaches

Values and Reflective Processes, Empowering Exceptional People

Millions of people have PTSD that causes massive suffering. 

MDMA and psychedelics are being legalized in the U.S., and there are both non-profit and for-profit organizations working in this space. Making sure everyone who wants access has it, via more legalization and subsidization, would reduce the amount of trauma, which could have knock-on benefits not just for those treated but also for the people they interact with.

Cluster headaches are a particularly nasty condition associated with extreme amounts of suffering. Legalizing psychedelics that ameliorate the condition, such as DMT, would help sufferers get the access they need.

Comment by DonyChristie on The Future Fund’s Project Ideas Competition · 2022-03-08T05:47:56.518Z · EA · GW

Research into Goodhart’s Law

Artificial Intelligence, Epistemic Institutions, Values and Reflective Processes, Economic Growth, Space Governance, Effective Altruism, Research That Can Help Us Improve

Goodhart’s Law states: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes”, or more simply, "When a measure becomes a target, it ceases to be a good measure.”

The problem of ‘Goodharting’ seems to crop up in many relevant places, including alignment of artificial intelligence and the social and economic coordination of individuals and institutions. More research into this property of reality and how to mitigate it may be fruitful for structuring the processes that are reordering the world.
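As one toy example of the kind of thing such research could formalize (my own sketch, with arbitrary numbers), the "regressional" flavor of Goodharting falls out of a few lines of simulation: the harder you select on a noisy proxy, the more the winners' proxy scores overstate their true value.

```python
# Toy "regressional Goodhart" simulation, my own sketch with arbitrary numbers:
# a proxy equals the true value plus noise; the harder you select on the proxy,
# the more the winners' proxy scores overstate their true value.
import numpy as np

rng = np.random.default_rng(0)
true_value = rng.normal(0, 1, size=100_000)
proxy = true_value + rng.normal(0, 1, size=100_000)  # noisy measurement of the target

for top_fraction in [0.5, 0.1, 0.01, 0.001]:
    cutoff = np.quantile(proxy, 1 - top_fraction)
    selected = proxy >= cutoff
    gap = proxy[selected].mean() - true_value[selected].mean()
    print(f"selecting top {top_fraction:.1%}: proxy overstates true value by {gap:.2f}")
```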

Comment by DonyChristie on The Future Fund’s Project Ideas Competition · 2022-03-08T05:22:05.345Z · EA · GW

Antarctic Colony as Civilizational Backup

Recovery from Catastrophe

Antarctica could be a good candidate for a survival colony. It is isolated, making it more likely to survive a nuclear war, pandemic, or roving band of automated killer drones. Its environment is harsh, making it easier for the colony to double up as a practice space for a Mars colony. Attempting to build and live there at a larger scale than has been done before may spur some innovations. One bottleneck here that will likely need resolving is how to get cheaper transportation to Antarctica, which currently relies on flights or a limited number of specialized boats.

Comment by DonyChristie on The Future Fund’s Project Ideas Competition · 2022-03-08T05:16:55.809Z · EA · GW

Research into the dual-use risks of asteroid safety

Space Governance

There is a small base rate of asteroids/comets hitting the Earth naturally. There are efforts out there to deflect/destroy asteroids if they were about to hit Earth. However, based on the relative size of anthropogenic vs. natural risk, we think that getting better at manipulating space objects is dual-use, as it would allow malevolent actors to weaponize asteroids, and this risk could be orders of magnitude larger. We want to see research on what kinds of asteroid defense techniques are likely to not lead to concomitant progress in asteroid offense techniques.

See:

  • https://forum.effectivealtruism.org/posts/RZf2KqeMFZZEpvBHp/risks-from-asteroids
    • "This ‘dual-use’ concern mirrors other kinds of projects aimed at making us safer, but which pose their own risks, like ‘gain of function’ research on diseases. In such cases, effective governance may be required to regulate the dual-use technology, especially through monitoring its uses, in order to avoid the outcomes where a malign actor gets their hands on it. With international buy-in, a monitoring network can be set up, and strict regulations around technology with the potential to divert planetary bodies can (and probably should) be implemented."
  • https://forum.effectivealtruism.org/posts/vuXH2XAeAYLc4Hxyj/why-making-asteroid-deflection-tech-might-be-bad
    • "A cost benefit analysis that examines the pros and cons of developing asteroid deflection technology in a rigorous and numerical way should be a high priority. Such an analysis would consider the expected value of damage of natural asteroid impacts in comparison with the increased risk from developing technology (and possibly examine the opportunity cost of what could otherwise be done with the R&D funding). An example of such an analysis exists in the space of global health pandemics research, which would be a good starting point. I believe it is unclear at this time whether the benefits outweigh the risks, or vice versa (though at this time I lean towards the risks outweighing the benefits – an unfortunate conclusion for a PhD candidate researching asteroid exploration and deflection to come to).
    • Research regarding the technical feasibility of deflecting an asteroid into a specific target (e.g. a city) should be examined, however this analysis comes with drawbacks (see section on information hazards).
    • We should also consider policy and international cooperation solutions that can be set in place today to reduce the likelihood of accidental and malicious asteroid deflection occurring.
  • https://www.nature.com/articles/368501a0.pdf
    • "It is of course sensible to seek cost effective reduction of risks from all hazards to our civilization - even low probability hazards, of which many may remain unidentified. At a total cost of some $300 million, Spaceguard arguably constitutes a reasonable measure of defence against the impact hazard. But premature deployment of any asteroid orbit modification capability, in the real world and in light of well-established human frailty and fallibility, may introduce a new category of danger that dwarfs that posed by the objects themselves."

Comment by DonyChristie on The Future Fund’s Project Ideas Competition · 2022-03-08T03:28:05.064Z · EA · GW

Research on Competitive Sovereignties

Governance, New Institutions, Economic Growth

The current world order is locked in stasis and status quo bias. Enabling the creation of new jurisdictions, whether via charter cities, special economic zones, or outright creation of new lands such as seasteading, could allow more competition between countries to attract subscriber-citizens, increasing welfare.

It would also behoove us to think about standards for international interoperability in a world where '1000 nations bloom'. Greater decentralization of power could increase certain kinds of existential risk, so standards for cooperation at scale should be created. Otherwise, the greater the N of actors, the more surface area for them to go to war with each other.