What EA projects could grow to become megaprojects, eventually spending $100m per year?

post by Nathan Young (nathan) · 2021-08-06T11:24:31.775Z · EA · GW · 175 comments

This is a question post.

Ben Todd (CEO of 80,000 Hours) says "Effective altruism needs more 'megaprojects'. Most projects in the community are designed to use up to ~$10m per year effectively, but the increasing funding overhang means we need more projects that could deploy ~$100m per year."

What are some $100m projects that you think might be worth considering?

By megaproject, I mean any project that could eventually be scaled up to $100 million per year, not only projects planned from the start to cost that much. In many cases, this could include very small efforts that would have to achieve multiple levels of success to eventually reach $100 million+ per year.


answer by HaydnBelfield · 2021-08-06T16:00:38.725Z · EA(p) · GW(p)

Filling the $100m funding gap in nuclear, since the MacArthur Foundation is pulling out of nuclear policy.

"Since 2015 alone, MacArthur directed 231 grants totaling >$100m in some cases providing more than half the annual funding for individual institutions or programs."
"MacArthur was providing something like 40 to 55 percent of all the funding worldwide of the non-government funding worldwide on nuclear policy"

comment by GMcGowan · 2021-08-06T16:19:13.567Z · EA(p) · GW(p)

Out of all the ideas, this seems the most shovel-ready. 

MacArthur will (presumably) be letting go of some staff who do nuclear policy work, and would (presumably) be happy to share the organisations they've granted to in the past. So you have a ready-made research staff list + grant list.

All ("all" :) ) you need is a foundation and a team to execute on it. Seems like $100 million could actually be deployed pretty rapidly. 

Possibly not all of that money would meet EA standards of cost-effectiveness, though - indeed, MacArthur's withdrawal provides some evidence that it isn't cost-effective (if we trust their judgement).

Replies from: HaydnBelfield
comment by HaydnBelfield · 2021-08-06T16:34:27.840Z · EA(p) · GW(p)

Here's the interesting, frustrating evaluation report:  https://www.macfound.org/media/article_pdfs/nuclear-challenges-synthesis-report_public-final-1.29.21.pdf[16].pdf
Looks to me like a classic hits-based giving bet - you mostly don't make much impact, then occasionally (Nixon arms control, H.W. Bush's START and Nunn-Lugar, maybe Obama's JCPOA/New START) get a home run.

comment by Davidmanheim · 2021-08-16T17:07:46.582Z · EA(p) · GW(p)

To clarify, this is $100m over around 5 years, or $20m/year - which is a good start, but far less than $100m/year.

comment by BrownHairedEevee (evelynciara) · 2021-08-14T21:25:59.714Z · EA(p) · GW(p)

I agree with this. As the article says, multiple funders are pulling out of nuclear arms control, not just MacArthur. So it would be a good idea for EA funders like Open Phil to come in and close the gap. But in doing so, we should understand why MacArthur and other funders are exiting this field and learn from them to figure out how to do better.

comment by BrownHairedEevee (evelynciara) · 2021-08-14T21:24:34.189Z · EA(p) · GW(p)

I misread this as "nuclear power", not "nuclear arms control"  😂

answer by Charles He · 2021-08-06T19:05:53.719Z · EA(p) · GW(p)

This is a weird one that is illustrative:

Taking over the US private prison system (described here).


  • Benefits from returns to scale, maybe only available as a "mega project"
  • Could literally make a profit (CEA is infinite, pretty much the only way to beat GiveWell?)
  • It gives access to institutions, even political capital for reform aligned to social change cause areas
  • Almost no one else would do this
  • Probably a lot of bad things going on inside them that EAs could improve

There's a ton of drawbacks. These include barriers to entry like regulations and capture which could make this impractical. Once inside, implementation issues such as cultural/institutional challenges will be far outside the typical circle of competence of EA. 

But I think that's the point—this idea has a flavor orthogonal to "New R&D/policy institute for X".

comment by Chris Leong (casebash) · 2021-10-07T05:29:02.252Z · EA(p) · GW(p)

Certainly innovative, although I wonder about the PR consequences.

comment by tomstocker · 2021-09-17T13:42:13.390Z · EA(p) · GW(p)

I love this. Could be big or small nearly anywhere in the world. Some precedent too: Prison reform charity Nacro joins bid to run jails | Prisons and probation | The Guardian

Replies from: tomstocker
comment by tomstocker · 2021-09-17T13:48:27.235Z · EA(p) · GW(p)

I think why I like this so much is that it isn't another idea that is fiddling on the margins of a problem with a complicated theory of impact - it just provides a project vehicle to solve one of the more tractable key problems head on.

comment by Fern · 2021-09-18T14:54:45.975Z · EA(p) · GW(p)

I'm late to this, but I wonder if Charles' analysis ought to extend beyond private prisons to address all the ways in which prisons and jails have come to privatize essential services. This includes telephone calls and digital communication, which are largely controlled by a legal monopoly, along with medical treatment and food preparation.

The hyperlinked stories and legal cases are but a few examples of the potentially life-altering negative outcomes that have come out of privatization. One of the major challenges with combating this trend is that documenting wrongdoing and amassing the evidence necessary to prepare conditions-of-confinement claims is extremely, extremely hard (and expensive, for a population that is perhaps the most economically disenfranchised of any in the U.S.).

But we have seen that organized social movements have won victories and that zealous legal advocacy can unwind some of the worst consequences of mass incarceration. EA organizations are already supporting organizations doing this work, like Prison Policy Initiative (an Open Philanthropy recipient). But because of how localized punishment is and how limited resources remain, there is far more that could be done.

Replies from: Charles He
comment by Charles He · 2021-09-18T18:48:15.308Z · EA(p) · GW(p)

This is a great and deep comment.

I think it’s extremely generous to call my little blurb above an “analysis”. I am not informed and I am not involved in this area of prison or justice reform. 

I’m writing this because I don’t want anyone to “wait” on me or anyone else.

If you are reading this and want to dedicate some time on this cause or intervention, you should absolutely do so!

Again, thanks for this comment.

comment by Sami Kassirer · 2021-11-10T18:47:47.354Z · EA(p) · GW(p)

Love this! We could also use prisons as a place where social scientists could study how to optimize ethical development amongst criminals. These samples are so hard to access, but could produce so much impactful insight on when and why ethical decision-making fails, and how to improve ethical decision-making under conflict. This could also be coupled with a grant competition that would fund the best ideas on how to rehabilitate inmates and improve their ethical decision-making both while in prison and after being reintegrated back into society.

answer by MaxRa · 2021-08-07T07:42:40.606Z · EA(p) · GW(p)

Build up an institution that runs the IGM economic experts survey for every scientific field, with paid editors, probabilistic forecasts, and perhaps monetary incentives for the experts. https://www.igmchicago.org/igm-economic-experts-panel/

comment by anonymous_ea · 2021-08-07T20:59:04.492Z · EA(p) · GW(p)

I like this idea in general, but would it ever really be able to employ $100m+ annually? For comparison, GiveWell spends about $6 million/year, and CSET was set up with $55m over 5 years ($11m/year).

Replies from: MaxRa
comment by MaxRa · 2021-08-07T22:02:17.628Z · EA(p) · GW(p)

I think you’re right. Even if the experts were paid really well for their participation, say $10k per year (maybe as a fixed sum, or in expectation given some incentive scheme), and you had on the order of 50 experts each for 20(?) fields, you'd end up with $10 million per year. But it probably wouldn't even require that, as long as it's prestigious and set up well with enough buy-in. Paying for their judgement would make the latter easier, I suppose.
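The arithmetic above can be checked with a quick sketch (all figures are the comment's own assumptions, not real budget data):

```python
# Fermi estimate of annual expert payments for a scaled-up IGM-style survey.
# All numbers below are illustrative assumptions from the comment above.
experts_per_field = 50    # assumed panel size per field
num_fields = 20           # assumed number of scientific fields covered
pay_per_expert = 10_000   # assumed annual payment per expert, in USD

expert_costs = experts_per_field * num_fields * pay_per_expert
print(f"Annual expert payments: ${expert_costs:,}")  # $10,000,000
```

As the comment notes, this lands an order of magnitude below the $100m/year mark, so expert payments alone wouldn't make this a megaproject.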

comment by Nathan Young (nathan) · 2021-08-07T18:15:00.615Z · EA(p) · GW(p)

I would upvote if someone wrote a quick summary of this and a number of the other ideas which aren't immediately clear on first reading.

Replies from: Juan Cambeiro
comment by Juan Cambeiro · 2021-08-08T15:28:38.691Z · EA(p) · GW(p)

I think the gist of this idea might be something like a massively-scaled up prediction platform that focuses on recruiting subject-matter experts and pays them to make predictions on questions relevant to their expertise while perhaps additionally discussing important/neglected trends in their fields. 

answer by aaronhamlin · 2021-08-08T20:28:24.101Z · EA(p) · GW(p)

The Center for Election Science could easily make efficient use of greater than $50M a year with infrastructure and ballot initiatives. We've already laid out a plan on how we would spend it. We could also potentially build towards some hyper-aggressive $100M years by including lobbying in the remaining states that don't allow ballot initiatives. In any case, we are woefully underfunded relative to our goals and could at the very least surpass the $50M threshold in a couple of years with sufficient funding. If even greater funding were available, we could build in lobbying following more state-level wins.

For clarity, our lack of funding has already cost us approval voting campaign opportunities and is a big issue for us.

comment by dpiepgrass · 2022-01-28T23:51:55.948Z · EA(p) · GW(p)

Okay, but I'm not persuaded that the Center for Election Science is scientific. I think it should be called "The Center for Approval Voting (especially the single-winner district kind)™"

I studied electoral systems for a school project and reached very different conclusions, for instance: that all single-winner-district systems are inherently non-proportional and subject to gerrymandering. I went so far as to design my own system (I suppose its merits are debatable — but never debated). In emails from the CES I see none of the insights I gained in my school project — nothing about criteria for evaluating voting systems, no theories about what the goals of a voting system should be and how to achieve them... except narrowly-crafted articles focused on crowning Approval Voting the winner, usually without surveying alternatives.

Quite the contrary, CES newsletters read more like the many political propaganda emails from which I have long since unsubscribed.

I agree that maybe this is the best way to achieve your Approval Voting goals. Most political emails simply tell people what to believe and what to vote for, not bothering with evidence or balance. It's probably done this way because it works. But don't call it "science", okay?

Edit: Downvotes are not counterarguments. If you can't say why I'm wrong, maybe I'm not wrong.

answer by Davidmanheim · 2021-08-06T11:30:50.704Z · EA(p) · GW(p)

Here's a few suggestions for near-term megaprojects: 

- Longevity research
- Meat-replacement mega-cost reduction investments (leapfrogging current tech) 
- Eliminating disease-bearing mosquitoes 
- Eliminating all vaccine-preventable diseases worldwide 
- Developing cheap, universal metagenomic scanning for biosecurity (Also see this slightly less ambitious version, mentioned by Alex in a different answer.)
- Large-scale governance reform initiatives 
- Universally available, validated, well-built apps for CBT to reduce depression / increase happiness 
- AI safety (We're doing this one already, so the key players may not have room for funding.)

comment by Nathan Young (nathan) · 2021-08-06T12:14:28.835Z · EA(p) · GW(p)

I suggest you split this into different comments so each can be upvoted separately.

comment by abukeki · 2021-12-09T22:26:02.810Z · EA(p) · GW(p)

For AI safety - maybe Redwood has the most room for funding? They seem to be the most interested in growth (correct me if I'm wrong). And even if the existing players don't have more room, other ways to scale the field up through funding need to be found, as it is clearly still too small to compete in the race against the titanic field of AI capabilities.

Agree longevity needs to be funded more as well, though lots of aging billionaires like Bezos seem to be throwing tons of money at it these days too so maybe EA money would be much less useful/uniquely needed there than e.g. AI alignment.

answer by kokotajlod · 2021-08-07T05:33:12.866Z · EA(p) · GW(p)

Finally get acceptable information security by throwing money at the problem.

Spend $100M/year to hire, say, 10 world-class security experts and get them everything they need to build the right infrastructure for us, and for e.g. Anthropic.

comment by Davidmanheim · 2021-08-08T08:52:56.654Z · EA(p) · GW(p)

Strong second - we should build up secure open computing from bare metal (secure, open, verifiable CPUs, memory, etc.) to the OS, to compilers, to a secure applications layer.

Replies from: kokotajlod
comment by kokotajlod · 2021-08-08T09:24:13.669Z · EA(p) · GW(p)

Is this something we could purchase for a few hundred million in a few years?

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-08T10:43:45.632Z · EA(p) · GW(p)

I discussed this with a couple of people ca. 2 years ago, and thought it was likely that a company like Google could design and produce a full-stack secure system as a moderately large internal project. And some groups are already doing parts of this - for example, a provably secure OS microkernel - for far less than what we'd be able to spend.

As a Fermi estimate on the high end: if we hire 10 top hardware design people for $500k/year each, throw in the same number of OS designers and compiler designers at the same cost, and a team of 50 great people to do the rest of the development and testing at $300k/year, $100m means that we have 3 years to do this - and it's an open-source project, so we'd get universities, etc. working on it as well. (I.e. we could not mass-produce the hardware at these prices, but that's commercialization, not design, and it should be funded by sales.)
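A quick back-of-the-envelope check of the staffing numbers above (all salaries and headcounts are the comment's stated assumptions, not real quotes):

```python
# Fermi estimate for the full-stack secure computing project's burn rate.
senior_designers = 10 + 10 + 10   # assumed hardware, OS, and compiler leads
senior_salary = 500_000           # assumed USD/year each
dev_team = 50                     # assumed development and testing staff
dev_salary = 300_000              # assumed USD/year each

annual_burn = senior_designers * senior_salary + dev_team * dev_salary
years_on_100m = 100_000_000 / annual_burn
print(f"Annual burn: ${annual_burn:,}; runway on $100m: {years_on_100m:.1f} years")
```

The $30m/year burn rate gives a bit over three years of runway on $100m, consistent with the estimate in the comment.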

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2021-08-08T12:23:56.272Z · EA(p) · GW(p)

(not an expert) My impression is that a perfectly secure OS doesn't buy you much if you use insecure applications on an insecure network etc.

Also, if you think about classified work, the productivity tradeoff is massive: you can't use your personal computer while working on the project, you can't use any of your favorite software while working on the project, you can't use an internet-connected computer while working on the project, you can't have your cell phone in your pocket while talking about the project, you can't talk to people about the project over normal phone lines and emails... And then of course viruses get into air-gapped classified networks within hours anyway. :-P

Not that we can't or shouldn't buy better security, I'm just slightly skeptical of specifically focusing on building a new low-level foundation rather than doing all the normal stuff really well, like network traffic monitoring, vetting applications and workflows, anti-spearphishing training, etc. etc. Well, I guess you'll say, "we should do both". Sure. I guess I just assume that the other things would rapidly become the weakest link.

In terms of low-level security, my old company has a big line of business designing chips themselves to be more secure; they spun out Dover Microsystems to sell that particular technology to commercial (as opposed to military) customers. Just FYI, that's just one thing I happen to be familiar with. Actually I guess it's not that relevant.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-08T14:29:02.128Z · EA(p) · GW(p)

Agreed that secure low level without application security doesn't get you there, which is why I said we need a full stack - and even if it wasn't part of this, redeveloping network infrastructure to be done well and securely seems like a very useful investment.

But doing all the normal stuff well on top of systems that still have insecure chips, BIOS, and kernel just means that the exploits move to lower levels - even if there are fewer of them, the difference between 90% secure and 100% secure is far more important than moving from 50% to 90%. So we need the full stack.

comment by Aaron Gertler (aarongertler) · 2021-08-08T03:55:32.704Z · EA(p) · GW(p)

Epistemic status: Confused person with zero expertise in this area

Who is "us" in this scenario? I assume it's meant to be "organizations with access to infohazardous bio/AI data"?

If so, what makes you think of the current infosec of these orgs as "unacceptable"? If you think they'd disagree with this characterization, do you have a sense for why?

If not, what do you see as some plausible consequences of weak infosec that could plausibly total $100m in damages for EA orgs if they came to pass, given that EA is a network of lots of organizations, with pretty limited funding and access to other valuable data per org?

(Even if something happened along the lines of "GiveWell leaks every donor's credit card number", I wonder what the actual damage would look like, given how often this sort of thing seems to happen to large organizations that don't go bankrupt as a result. And it's hard to imagine that most charities on GiveWell's scale would actually go positive-EV by investing millions of dollars in infosec.)

Replies from: kokotajlod
comment by kokotajlod · 2021-08-08T05:44:16.800Z · EA(p) · GW(p)

This is my impression based on (a) talking to a bunch of people and hearing things like "Yeah, our security is unacceptably weak", "I don't think we are in danger yet; we probably aren't on anyone's radar", and "Yeah, we are taking it very seriously; we are looking to hire someone. It's just really hard to find a good security person." These are basically the ONLY three things I hear when I raise security concerns, and they are collectively NOT reassuring. I haven't talked to every org and every person, so maybe my experience is misleading. Also (b) on priors, it seems that people in general don't take security seriously until there's actually a breach. And (c) I've talked to some people who are also worried about this, and they told me there basically isn't any professional security person in the EA community willing to work full-time on this.


Replies from: Davidmanheim, aarongertler
comment by Davidmanheim · 2021-08-16T17:13:31.778Z · EA(p) · GW(p)

I will go further than that. Everyone I know in infosec, including those who work for either the US or the Israeli government, seem to strongly agree with the following claim:
"No amount of feasible security spending will protect your network against a determined attempt by an advanced national government (at the very least, US, Russia, China, and Israel) to get access. If you need that level of infosec, you can't put anything on a computer."

If AI safety is a critical enabler for national security, and/or AI system security is important for their alignment, that means we're in deep trouble.

comment by Aaron Gertler (aarongertler) · 2021-08-08T06:51:26.409Z · EA(p) · GW(p)

Makes sense. Just to clarify — the phrasing here makes me think these are organizations with potentially dangerous technical knowledge, rather than e.g. CEA. Is that right?

Replies from: kokotajlod
comment by kokotajlod · 2021-08-08T09:23:37.036Z · EA(p) · GW(p)


comment by Florian Habermacher (FlorianH) · 2021-08-08T17:07:54.333Z · EA(p) · GW(p)

I see enormous value in it and think it should be considered seriously.

On the other hand, the huge amount of value in it is also a reason I'm skeptical that it's obviously achievable: there are already individual giant firms that would see multi-million-dollar annual savings internally (not to mention the many billions the first firm to market something like this would immediately earn) from having a convenient, simple, secure stack 'for everything', yet none seems to have anything close to it (though I guess many may have something like that in some sub-systems/niches). 

So I'm just wondering whether we might be underestimating the cost of development/use - despite my gut feeling strongly agreeing that it seems like such a tractable problem.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-16T17:16:55.847Z · EA(p) · GW(p)

I think the budget to do this is easily tens of millions a year, for perhaps a decade, plus the ability to hire the top talent, and it likely only works as a usefully secure system if you open-source it. Are there large firms who are willing to invest $25m/year for 4-5 years on a long-term cybersecurity effort like this, even if it seems somewhat likely to pay off? I suspect not - especially if they worry (plausibly) that governments will actively attempt to interfere in some parts of this.

Replies from: FlorianH
comment by Florian Habermacher (FlorianH) · 2021-08-17T22:51:10.145Z · EA(p) · GW(p)

Agree with the "easily tens of millions a year", which, however, could also be seen to underline part of what I meant: it is really tricky to know how much we can expect from what exact effort.

I half agree with all your points, but see implicit speculative elements in them too, and hence remain with a maybe all-too-obvious statement: let's consider the idea seriously, but let's also remember that we're obviously not the first ones to think of this. In addition to all the other uncertainties, keep in mind that no one seems to have made serious progress in this domain, despite the possibly enormous value even private firms might have been able to capture from it if they had.

comment by Cianmullarkey · 2021-08-08T15:43:20.058Z · EA(p) · GW(p)

https://evervault.com/ are launching in October and generally working on problems in this space

answer by kokotajlod · 2021-08-07T05:29:49.179Z · EA(p) · GW(p)

Impact certificates. Announce that we will purchase NFTs representing altruistic acts created by one of the actors. (Starting now, but with a one-year delay, such that we can't purchase an NFT unless it's at least a year old.) Commit to buy $100M/year of these NFTs, and occasionally reselling them and using the proceeds to buy even more. Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have. 
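As a toy illustration of the buying rule described above (the one-year delay and impact-proportional purchasing; all names and numbers here are invented for illustration, not part of the proposal):

```python
from datetime import date, timedelta

# Toy model of the proposed rule: the fund only considers certificates at
# least one year old, and splits its budget by its own impact estimates.
ANNUAL_BUDGET = 100_000_000
MIN_AGE = timedelta(days=365)

def eligible(minted_on: date, today: date) -> bool:
    """A certificate can be bought only once it is at least a year old."""
    return today - minted_on >= MIN_AGE

def allocate(impact_estimates: dict[str, float]) -> dict[str, float]:
    """Split the annual budget across certificates by estimated impact."""
    total = sum(impact_estimates.values())
    return {name: ANNUAL_BUDGET * impact / total
            for name, impact in impact_estimates.items()}

today = date(2022, 8, 7)
assert eligible(date(2021, 8, 6), today)      # minted over a year ago
assert not eligible(date(2022, 1, 1), today)  # too recent to buy

print(allocate({"cert_a": 3.0, "cert_b": 1.0}))  # 75m / 25m split
```

The actual proposal leaves the evaluation method open; the point of the sketch is only that the mechanism is "evaluate with hindsight, then pay", rather than "evaluate the pitch, then fund".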

comment by ofer · 2021-08-07T08:39:26.849Z · EA(p) · GW(p)

> Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have.

It may be critical that the purchasing decisions will somehow account for historical risks (even ones that did not materialize and are no longer relevant), otherwise this approach may fund/incentivize net-negative interventions that are extremely risky (and have some chance of being very beneficial). I elaborated some more on this here [EA · GW].

comment by Charles He · 2021-08-07T21:58:26.378Z · EA(p) · GW(p)

I don't understand. Can you explain more what this project would do and how it would create change?

Also, this project seems to involve commitment of i) hundreds of millions of dollars of funding and ii) reliable guarantees that these will be used cost effectively.

These (extraordinarily) strong promises are structurally necessary and also seem achievable only through "centralization".

Given this centralization, what is the function or purpose of the NFT?

(Note that my question isn't about technical knowledge about "blockchain" or "NFTs" and you can assume gears knowledge of them and their instantiations up through 2020.)

Replies from: kokotajlod
comment by kokotajlod · 2021-08-08T05:33:35.046Z · EA(p) · GW(p)

Think of it like a grants program, except that instead of evaluating someone's pitch for what they intend to do, you are evaluating what they actually did, with the benefit of hindsight. Presumably your evaluations will be significantly more accurate this way. (Also, the fact that it's NFT-based means that you can recruit the "wisdom of the efficient market" to help you  in various ways, e.g. lots of non-EAs will be buying and selling these NFTs trying to predict what you will think of them, and thus producing lots of research you can use.)

I don't think it should replace our regular grants programs. But it might be a nice complement to them.

I don't see what you mean by centralization here, or how it's a problem. As for reliable guarantees the money will be used cost effectively, hell no, the whole point of impact certificates is that the evaluation happens after the event, not before. People can do whatever they want with the money, because they've already done the thing for which they are getting paid.

Replies from: Charles He
comment by Charles He · 2021-08-09T17:12:21.893Z · EA(p) · GW(p)

> Think of it like a grants program, except that instead of evaluating someone's pitch for what they intend to do, you are evaluating what they actually did, with the benefit of hindsight. Presumably your evaluations will be significantly more accurate this way. (Also, the fact that it's NFT-based means that you can recruit the "wisdom of the efficient market" to help you in various ways, e.g. lots of non-EAs will be buying and selling these NFTs trying to predict what you will think of them, and thus producing lots of research you can use.)


But the reason why you would evaluate someone's pitch as opposed to using hindsight is that nothing would be done without funding?

> I don't see what you mean by centralization here, or how it's a problem.

I think I am using centralization in the same way that cryptocurrency designers/architects use it when discussing how cryptocurrency systems actually work ("centralization pressures").

The point of NFTs, as opposed to you, me, or a giant granter producing certificates, is that they are part of a decentralized system, not under any one entity's control. 

My understanding is that this is the only logical reason why NFTs have any value, and are not a gimmick. 

They don't have any magical power by themselves or have any special function or information or anything like that.

Under this premise, centralization is undermined, if any other structural component of the system is missing. 

For example, if the grantors or their decisions come from a central source, then the value of having a decentralized certificate is unclear.

Note that undermining "centralization" is sort of like having a wrong step in a math theorem: it's existentially bad, as opposed to a mere reduction in quality.

> As for reliable guarantees the money will be used cost effectively, hell no, the whole point of impact certificates is that the evaluation happens after the event, not before. People can do whatever they want with the money, because they've already done the thing for which they are getting paid.

I meant that you have written out two distinct promises here that seem to be necessary for this system to structurally work in this proposal. One of these promises seems to be high-quality evaluation:

> Commit to buy $100M/year of these NFTs, and occasionally reselling them and using the proceeds to buy even more.
>
> Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have.

Replies from: kokotajlod
comment by kokotajlod · 2021-08-10T08:00:40.245Z · EA(p) · GW(p)

Once it's established that you will be giving $100M a year to buy impact certificates, that will motivate lots of people already doing good to mint impact certificates, and probably also motivate lots of people to do good (so that they can mint the certificate and later get money for it).

By buying the certificate rather than paying the person who did the good, you enable flexibility -- the person who did the good can sell the certificate to speculators and get money immediately rather than waiting for your judgment. Then the speculators can sell it back and forth to each other as new evidence comes in about the impact of the original act, and the conversations the speculators have about your predicted evaluation can then help you actually make the evaluation, thanks to e.g. facts and evidence the speculators uncover. So it saves you effort as well.

Replies from: Charles He
comment by Charles He · 2021-08-10T21:38:24.423Z · EA(p) · GW(p)

Ok, I see what you're saying now.

I might see this as creating a bounty program for altruistic successes, while at the same time creating a "thick", crowdsourced market for bounties, hopefully with virtuous effects.

Replies from: kokotajlod
comment by kokotajlod · 2021-08-11T08:13:20.757Z · EA(p) · GW(p)

That's a succinct way of putting it, nice!

answer by MaxRa · 2021-08-06T23:10:20.682Z · EA(p) · GW(p)

Hire ~5 film studios to each make a movie that concretely shows an AI risk scenario which at least roughly survives the rationalist-fiction sniff test. Goal: Improve AI safety discourse, motivate more smart people to work on this.

comment by HaydnBelfield · 2021-08-07T12:29:32.869Z · EA(p) · GW(p)

Hell yeah! Get JGL to star - https://www.eaglobal.org/speakers/joseph-gordon-levitt/

answer by alexrjl · 2021-08-06T11:26:47.732Z · EA(p) · GW(p)

Sentinel seems promising

comment by Nathan Young (nathan) · 2021-08-06T11:30:57.704Z · EA(p) · GW(p)

(Sentinel is a system for testing new diseases such that unknown pathogens could be recognised from the first sample. Listen to the podcast alexrjl has linked) 

answer by PabloAMC · 2021-08-13T11:53:06.689Z · EA(p) · GW(p)

What about creating academic institutes at reputable universities to tackle important problems (e.g. similar to FHI or CSER), creating research prizes, and sponsoring conferences? I'm mostly thinking about AI safety, but it may be useful in other areas too.

answer by HaydnBelfield · 2021-08-06T16:06:32.354Z · EA(p) · GW(p)

Hard science funding seems able to absorb this scale of funding, though this might not count as 'EA-specific' projects:
On climate: carbon capture, new solar materials, new battery R&D, maybe even fusion as 'hits-based giving'?
On bio preparedness there's quite a lot, e.g. Cassidy Nelson recommendations, Andy Weber recommendations

answer by samhbarton · 2021-08-07T00:48:56.751Z · EA(p) · GW(p)

Something that could increase economic growth, dramatically reduce inequality of opportunity, and improve well-being of people worldwide:

Try to get as many people connected to the internet with a personal device as possible. 

The stat that ~50% of the world is connected to the internet is misleading. To count as connected, you need only have used a networked device once in the past three months, which is a far lower bar than most people would expect. 

Source: International Telecommunication Union (ITU) World Telecommunication/ICT Indicators Database

The importance of internet connectivity is hard to overstate. It's necessary to function as a 21st-century citizen and is the backbone of our societies. It's also necessary for securing various human rights. 

Some quick reasons why internet access is important: 

  • Grants access to free education on just about anything 
  • Provides access to banking, communication technologies, etc.
  • Increases economic growth (of which well-being is somewhat a function), as internet access effectively increases the computational power of the economic system and can 'improve' the substrate upon which it runs (people)
  • Increases awareness of EA in general

Wrote this quickly so apologies for the brevity. I've been working on a longer post where I dive into this in a lot more detail. 

comment by Nathan Young (nathan) · 2021-08-07T14:18:45.124Z · EA(p) · GW(p)

My very uninformed sense is that Starlink might make internet access a lot easier. Metaculus question-writing opportunity.

Replies from: alexlyzhov, samhbarton
comment by alexlyzhov · 2021-08-19T16:00:36.880Z · EA(p) · GW(p)

Some people have been saying that the Starlink system's limit is 0.5M consumers even after they release a whole lot more satellites: https://www.techdirt.com/articles/20200928/09175145397/report-notes-musks-starlink-wont-have-capacity-to-truly-disrupt-us-telecom.shtml. This would mean you can't expect it to turn even 0.1% of unconnected people into netizens.

comment by samhbarton · 2021-08-08T22:39:12.569Z · EA(p) · GW(p)

Yep, it's incredibly exciting.

I see a few issues with it in this context, though. 

In the short run, it will be prohibitively expensive for most of the world's population, and it doesn't solve the device-ownership problem.

I also don't like the idea of internet access being controlled by a company that is subject to national laws. I feel that we need a censorship-resistant internet, especially in the existing climate. We're increasingly seeing crackdowns across the world, and I don't think the US will be immune from increased internet suppression.

comment by BrownHairedEevee (evelynciara) · 2021-08-19T03:20:59.528Z · EA(p) · GW(p)

I think this would be broadly useful and in particular increase the reach of mobile payment-based activities like GiveDirectly. I'd be curious about estimates of how cost-effective increasing internet penetration would be, compared to throwing more money at GD.

comment by tamgent · 2021-08-08T15:09:59.456Z · EA(p) · GW(p)

Mozilla have a fellowship aimed at this: https://foundation.mozilla.org/en/what-we-fund/fellowships/fellows-for-open-internet-engineering/

answer by HaydnBelfield · 2021-08-06T16:13:30.780Z · EA(p) · GW(p)

Developing new climate models has costs in the hundreds of millions of dollars. Useful longtermist climate modelling could include:

comment by John G. Halstead (Halstead) · 2021-08-06T18:21:12.219Z · EA(p) · GW(p)

I don't see climate research as very valuable. The value of information would only be high if this research would change how people act, and climate inaction seems to stem mainly from political inertia, not from lack of information about potential catastrophe.

Replies from: HaydnBelfield
comment by HaydnBelfield · 2021-08-07T12:27:30.581Z · EA(p) · GW(p)

Do you mean just the fourth bullet, or do you think this about all four? 

The 1980s nuclear winter and asteroid papers (I'm thinking especially of Sagan et al. and Alvarez et al.) were very influential in changing political behaviour: Gorbachev and Reagan explicitly acknowledged as much on nuclear, and the asteroid evidence contributed to the 90s asteroid films and the (hugely successful!) NASA effort to track all 'dino-killers'. On the margin now, I think more scary stuff would be motivating. There's also VOI in resolving how big a concern nuclear winter is (e.g. some recent papers are skeptical) - if it turned out not to be as existential as we thought, that would change cause prioritisation for GCRs.

On geoengineering (sorry, 'climate interventions'(!)), note that 'getting more climate modelling' is a key aim for e.g. Silver Lining.

On the fourth one, on the margin, I think more research - especially if it were the basis for an IPCC special report - would be influential. There's also VOI for our cause prioritisation. It just is really remarkable how understudied it is!
https://forum.effectivealtruism.org/posts/HaXxEtx4QdykBjJi7/betting-on-the-best-case-higher-end-warming-is [EA · GW]

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-08-07T14:22:58.540Z · EA(p) · GW(p)

I was just referring to the last bullet re climate change. E.g. in the last IPCC report, it would have been reasonable for govts to believe that there was a >10% chance of >6C of warming - and that has been true since the 1970s - without this having any impact. The political response to climate change seems to be influenced by mainstream media coverage and by public opinion, which in some circles it would be fair to characterise as 'very concerned' about climate change. An opinion poll suggests that 54% of British people think that climate change threatens human extinction (depending on question framing). I agree that in a rational world we would want to know how bad climate change could be, but the world isn't rational.

If you're just talking about EA cause prioritisation, the cost-benefit ratio looks pretty poor to me. Wrt reducing uncertainty about climate sensitivity, you're talking costs of $100m per year for a slim chance of pushing climate change above AI, bio and great power war for major EA funders. Or we might find out that climate change is less pressing than we thought, in which case this wouldn't make any difference to the current priorities of EA funders.

I also don't see how research on solar geoengineering could be a top pick - stratospheric aerosol injection just doesn't seem like it will get used for decades because it requires unrealistic levels of international coordination. Also, I don't think extra modelling studies on solar geo would shed much light unless we spent hundreds of millions: climate models are very inaccurate and wouldn't provide much insight into the impacts of solar geo in the real world. There might be a case for regional solar geo research, though.

(Fwiw, I really don't rate that Xu and Ramanathan paper. They're not using existential in the sense we are concerned about; they define it as "posing an existential threat to the majority of the population". The evidence they use to support their conclusions is very weak. For example, they note, following the Mora et al. study, that currently 30% of the population is exposed to deadly heat, which would increase to 74% at 4C warming. But obviously it is not the case that all of these people would die, just as it is not the case that 30% of the world population today is dying due to heat waves. Moreover, 4C will take until the end of the century, when most people will probably be a lot richer and so will have greater access to air conditioning. Climate change of that magnitude only makes the tropics uninhabitable in the sense that the Persian Gulf is uninhabitable today. There would be great humanitarian costs in low-growth agrarian economies, but that is a separate question from whether climate change poses an existential risk.)

Replies from: HaydnBelfield
comment by HaydnBelfield · 2021-08-07T16:32:06.037Z · EA(p) · GW(p)

Interesting first point, but I disagree. To me, the increased salience of climate change in recent years can be traced back to the 2018 Special Report on Global Warming of 1.5 °C (SR15), and in particular the meme '12 years to save the world'. It seems to have contributed to the start of School Strike for Climate, Extinction Rebellion and the Green New Deal. Another big new scary IPCC report on catastrophic climate change would further raise the salience of this issue-area.

I was thinking that $100m would cover all four of these topics, and that we'd get cause-prioritisation VOI across all four areas. $100m for impact and VOI across all four seems pretty good to me (however, I'm a researcher, not a funder!)

On solar geo, I'm not an expert on it and am not arguing for it myself, merely reporting that it's top of the 'asks' list for orgs like Silver Lining.

I actually rather like the framing in Xu & Ramanathan - I don't think we know enough about >5 °C scenarios, so describing them as "unknown, implying beyond catastrophic, including existential threats" seems pretty reasonable to me. In any case, I cited it more to demonstrate the lack of research that's been done on these scenarios.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-08-07T16:42:37.224Z · EA(p) · GW(p)

On the last point: during the early Pliocene, early hominids with much worse technology than us lived in a world in which temperatures were 4.5C warmer than pre-industrial. It would surprise me if this level of warming would kill off everyone, including people in temperate regions. There's more to come from me on this topic, but I will leave it at that for now.

comment by BrownHairedEevee (evelynciara) · 2021-08-13T04:40:14.439Z · EA(p) · GW(p)

I definitely want to see more modeling of supervolcano and comet disasters.

answer by Denkenberger · 2021-08-09T23:29:05.476Z · EA(p) · GW(p)

I have claimed that the first few hundred million dollars of preparation for agricultural [EA · GW] and electricity [EA · GW] disrupting GCRs is competitive with AGI safety for the longterm, and preparation for agricultural GCRs is more cost effective than GiveWell interventions. Since these catastrophes could happen right away, I think it does make sense to scale up quickly to $100 million per year to get the preparation fast. Beyond research, this money could be used for piloting new technologies and developing response plans and training. To maintain $100 million per year may then be lower cost effectiveness than AGI safety at the expected margin, but would still provide additional value and may be competitive with other priorities. Projects could include subsidizing resilient food sources such as seaweed, cellulosic sugar, methane single cell protein, etc. Or building factories flexibly such that they could switch quickly from producing animal feed or energy to human food. These could easily be many billions of dollars per year.

answer by MaxRa · 2021-08-06T23:00:29.387Z · EA(p) · GW(p)

Take some EAs involved in public outreach, plus some journalists who have made probabilistic forecasts of their own volition (Future Perfect people, Matt Yglesias, ?), and buy them their own news media organization to influence politics and raise the sanity- and altruism-waterline.

comment by MichaelStJules · 2021-08-07T22:03:38.019Z · EA(p) · GW(p)

We could buy (a significant number of shares in) media companies themselves and shift their direction. Bezos bought the Washington Post for $250 million. Some are probably too big, like the New York Times at a $8 billion market cap and Fox Corporation at $20 billion.

Replies from: RyanCarey
comment by RyanCarey · 2021-08-07T22:58:03.269Z · EA(p) · GW(p)

I generally agree, although I think these >$1B general-audience entities are too expensive for EAs. Whereas I think it would make sense to buy media companies and consultancies that are somewhat focused on global security, AI and/or econ research, e.g. Foreign Policy magazine, Wired, GZero Media, Stratfor, the Economist Intelligence Unit, and so on. At least, I think the value of information from trying out buying one or more smaller entities, to see how one could steer them or bolster them with some EA talent, could be high - the most similar things I can think of EAs having done previously are investing in DeepMind and OpenAI.

Another way of thinking about this question is: are there other entities that are of less value to invest in than DM/OAI, but of more than the media/consulting orgs that I mentioned?

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-16T17:32:44.334Z · EA(p) · GW(p)

Who should buy them?

I'm concerned that it would look really shady for OpenPhil to do so, but maybe Sam Bankman-Fried or another very big EA donor could do it - but then the purchaser needs to figure out who to pick to actually manage things, since they aren't experts themselves. (And they need to ensure that their control doesn't undermine the publication's credibility - which seems quite tricky!)

Replies from: RyanCarey
comment by RyanCarey · 2021-08-16T18:22:13.462Z · EA(p) · GW(p)

It could only be billionaires who are running out of donation targets. If Bezos can buy WaPo, then less prominent billionaires can buy less popular media with much less (though not zero) controversy. But I agree that it only works well if you have EA-leaning talent to work there, especially at the executive level.

comment by ChanaMessinger · 2021-08-07T12:58:18.912Z · EA(p) · GW(p)

Matt makes lots of money on his independent Substack now, so that feels less urgent, but funding other things like Future Perfect at other news sources, as the Rockefeller Foundation does now, seems great.

Replies from: MaxRa, nathan
comment by MaxRa · 2021-08-07T16:44:44.821Z · EA(p) · GW(p)

'Urgent' doesn't feel like the right word; the question to me is whether his contributions could be scaled up well with more money. I think his Substack deal is on the order of $300k per year, but maybe he could found and lead a new news organization, hire great people who want to work with him, and do more rational, informative and world-improvy journalism?

Replies from: HStencil
comment by HStencil · 2021-08-07T17:02:48.318Z · EA(p) · GW(p)

I would be extremely surprised if he had any interest in doing this, given what he’s said about his reasons for leaving Vox.

Replies from: MaxRa
comment by MaxRa · 2021-08-07T17:24:46.915Z · EA(p) · GW(p)

Thanks, I hadn't seen what he said about this. I just read an Atlantic article about it, and I don't see why it shouldn't be easy to avoid the pitfalls from his time with Vox, or why he wouldn't care a lot about starting a new project where he could offer a better way to do journalism.

Yglesias felt that he could no longer speak his mind without riling his colleagues. His managers wanted him to maintain a “restrained, institutional, statesmanlike voice,” he told me in a phone interview, in part because he was a co-founder of Vox. But as a relative moderate at the publication, he felt at times that it was important to challenge what he called the “dominant sensibility” in the “young-college-graduate bubble” that now sets the tone at many digital-media organizations.


Also, the idea of course is not at all dependent on him, I suppose there would be other great candidates, Yglesias just came to mind because I really like his work. 

Replies from: HStencil
comment by HStencil · 2021-08-07T17:32:06.271Z · EA(p) · GW(p)

Yeah, I guess the impression I had (from comments he made elsewhere — on a podcast, I think) was that he actually agreed with his managers that at a certain point, once a publication has scaled enough, people who represent its “essence” to the public (like its founders) do need to adopt a more neutral, nonpartisan (in the general sense) voice that brings people together without stirring up controversy, and that it was because he agreed with them about this that he decided to step down.

Replies from: MaxRa
comment by MaxRa · 2021-08-07T18:04:17.915Z · EA(p) · GW(p)

Interesting, the Atlantic article didn't give this impression. I'd also be pretty surprised if you had to become essentially the cliché of a moderate politician just because you're part of the leadership team of a journalistic organization. In my mind, you're mostly responsible for setting and living the norms you want the organization to follow, e.g.

  • epistemic norms of charitability, clarity, probabilistic forecasts, scout mindset
  • values like exploring neglected and important topics with a focus on having an altruistic impact? 

And then maybe being involved in hiring the people who have shown promise and fit?

Replies from: HStencil
comment by HStencil · 2021-08-08T00:19:30.926Z · EA(p) · GW(p)

Yeah, I mean, to be clear, my impression was that Yglesias wished this weren't required and believed that it shouldn't be required (certainly, in the abstract, it doesn't have to be), but nonetheless, it seemed like he conceded that from a practical standpoint, when this is what all your staff expect, it is required. I guess maybe then the question is just whether he could "avoid the pitfalls from his time with Vox," and I suppose my feeling is that one should expect that to be difficult and that someone in his position wouldn't want to abandon their quiet, stable, cushy Substack gig for a risky endeavor that required them to bet on their ability to do it successfully. I think too many of the relevant causes are things that you can't count on being able to control as the head of an organization, particularly at scale, over long periods of time, and I'd been inferring that this was probably one of the lessons Yglesias drew from his time at Vox.

comment by Nathan Young (nathan) · 2021-08-07T14:13:39.673Z · EA(p) · GW(p)

Or indeed experimenting with different incentives in news production. What would EAs do if they all had £10 to spend on news production?

comment by MichaelStJules · 2021-08-07T16:16:25.563Z · EA(p) · GW(p)

Wouldn't they lose readers if they left their organizations? Is that what you mean? The fact that Future Perfect is at Vox gets Vox readers to read it.

Replies from: MaxRa
comment by MaxRa · 2021-08-07T16:37:39.058Z · EA(p) · GW(p)

In the short term yes, but my vision is a news media organization under the leadership of a person like Kelsey Piper that is able to hire talented, reasonably aligned journalists to do great and informative journalism in the vein of Future Perfect. I'm not sure how scalable Future Perfect is under the Vox umbrella, or how freely it could scale up to its best possible form from an EA perspective.

answer by AppliedDivinityStudies · 2021-08-06T19:35:16.509Z · EA(p) · GW(p)

No idea what it would cost, but we should get to work on cloning John von Neumann: https://fantasticanachronism.com/2021/03/23/two-paths-to-the-future/

comment by EdoArad (edoarad) · 2021-08-08T08:41:19.386Z · EA(p) · GW(p)

Interesting! Do you know anything about the state of regulations around this? 

(sorta related, there are several pet cloning services)

I'm not sure what the potential downsides of such a widespread tech are, but it seems like something that could have high scalability if done as a for-profit company.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-16T17:29:59.938Z · EA(p) · GW(p)

Yeah, cloning humans is effectively illegal almost everywhere. (I specifically know it's banned in the US and Israel, I assume the EU's rules would be similar.)

answer by BrownHairedEevee (evelynciara) · 2021-08-12T00:18:08.033Z · EA(p) · GW(p)

The Sustainable Development Goals - and their predecessor, the MDGs - are like a megaproject led by the UN. Some of these are already aligned with EA priorities, such as the following:

  • Eradicating extreme poverty (Goal 1, Target 1.1)
  • Ending hunger (Goal 2, Target 2.1) and malnutrition (Target 2.2)
    • Fortify Health aims to improve health by providing fortified wheat flour
  • Good health and well-being (Goal 3)
  • Clean water and sanitation (Goal 6)
  • Ending energy poverty (Goal 7, Target 7.1)
  • Increasing the share of renewable energy (Target 7.2) and energy efficiency (Target 7.3)
  • Promoting clean energy innovation (Target 7.A)
  • Decent work and economic growth (Goal 8)

The Economist has written that Goal 1 (ending poverty) should be "at the head of a very short list." In my opinion, if we're going to do a megaproject, we should take a handful of the SDG targets (such as 1.1, ending extreme poverty) and spend billions of dollars aggressively optimizing them.

answer by RobertDaoust · 2021-08-06T13:56:05.920Z · EA(p) · GW(p)

An institute for the science of suffering.

comment by AppliedDivinityStudies · 2021-08-06T19:31:43.285Z · EA(p) · GW(p)

Do you know about QRI? They're pretty close to what you're describing. https://www.qualiaresearchinstitute.org/

Replies from: RobertDaoust
comment by RobertDaoust · 2021-08-06T22:21:52.177Z · EA(p) · GW(p)

Yes I know, thank you ADS, but I rather have in mind something like "Toward an Institute for the Science of Suffering" https://docs.google.com/document/d/1cyDnDBxQKarKjeug2YJTv7XNTlVY-v9sQL45-Q2BFac/edit#

answer by Linch · 2021-08-06T16:24:57.580Z · EA(p) · GW(p)

You can maybe make very good civilizational refuges [? · GW] for 100M/year, though this is probably considerably more capital than the MVPs [EA · GW] I'd like to consider.

comment by Linch · 2021-11-10T22:26:58.525Z · EA(p) · GW(p)

I did some more thinking (still not a full Fermi estimate) and now think that this is a >$1B project even for a sufficiently good MVP, possibly considerably more.

Though most of the cost is upfront - e.g. digging, and constructing full bunkers with individual nuclear power plants - the running cost should be considerably lower than $100M/year, unless I'm missing something important.
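As a toy illustration of this cost structure (the ~$1B upfront figure is the rough guess above; the 30-year lifetime and $20M/year running cost are assumptions made up purely for the sake of the sketch), straight-line amortization suggests a refuge like this could still come in under the ~$100M/year megaproject bar:

```python
# Toy amortization of the refuge cost guesses above (illustrative only).
# The ~$1B upfront figure is from the comment; the 30-year lifetime and
# $20M/year running cost are assumptions made up for this sketch.

def annualized_cost(upfront: float, running_per_year: float,
                    lifetime_years: int) -> float:
    """Straight-line amortization of upfront cost, ignoring discounting."""
    return upfront / lifetime_years + running_per_year

cost = annualized_cost(upfront=1e9, running_per_year=20e6, lifetime_years=30)
print(f"~${cost / 1e6:.0f}M/year")  # ~$53M/year
```

Obviously a real estimate would need discounting, maintenance escalation, and staffing, but the point is just that upfront-heavy costs look different when annualized.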

comment by AppliedDivinityStudies · 2021-08-06T19:34:32.021Z · EA(p) · GW(p)

Is there a good writeup anywhere on cost estimates for this kind of refuge? Or what it would require?

Replies from: Linch
comment by Linch · 2021-08-07T00:08:11.644Z · EA(p) · GW(p)

Not that I know of. Nick Beckstead wrote a moderately negative review of civilizational refuges [EA · GW] 7 years ago (note that this was back when longtermist EA had a lot less $s than we currently do).

One reason I'd like to write out a moderately detailed MVP is that then we can have a clear picture for others to critique concrete details of, suggest clear empirical or conceptual lines for further work, etc, rather than have most of this conversation a) be overly high-level or b) too tied in with/anchored to existing (non-longtermist) versions of what's currently going on in adjacent spaces. 

answer by MaxRa · 2021-08-06T22:45:35.169Z · EA(p) · GW(p)

Funding a "serious" prediction market.

I'm not sure if $100M is necessary or sufficient if you want many people or even multiple organizations to seriously work full-time on forecasting EA-relevant questions. Maybe it could also be used to spearhead the use of forecasting in politics.

comment by samhbarton · 2021-08-08T22:35:47.164Z · EA(p) · GW(p)

www.ideamarket.io is working on something that's in the same vein. It's not a prediction market, but seeks to use markets to identify credible/trustworthy sources. 

Disclaimer: I started working with Ideamarket a month ago.

answer by jchen1 · 2021-08-12T14:43:36.589Z · EA(p) · GW(p)

Challenge prize(s) to incentivise the development of innovative solutions in priority areas. These could be prizes for goals already suggested by people in this thread (e.g. producing resilient food sources, drastic changes to diagnostic testing, meat alternatives underinvested in by the market) or others.

Quotes from a Nesta report on challenge prizes (caveat that I haven't spent any time looking up opposing evidence/perspectives):

By guiding and incentivising the smartest minds, prizes create more diverse solutions. Because prizes only pay out when a problem has been solved, you can support long shots, radical ideas and unusual suspects while minimising risk...

The high profile of a prize can raise public awareness and shape the future development of markets and technologies. Prizes can help identify best practice, shift regulation and drive policy change...

For the Ansari XPRIZE, 26 teams spent $100 million chasing the $10 million prize, jump starting the commercial space industry.


See also Musk's $100m prize for carbon capture tech

answer by GMcGowan · 2021-08-06T15:01:28.008Z · EA(p) · GW(p)

Buy up scarce resources which are being used for bad things and just sit on them. Like the thing where you buy rainforest to prevent logging. Coal mines, agricultural land used for animals, GPUs?!

comment by Neel Nanda · 2021-08-07T20:22:12.016Z · EA(p) · GW(p)

Interesting idea! I think this works much better when supply is constrained, eg land, and not when supply is elastic (eg GPUs). I'm curious whether anyone has actually tried this

comment by Nathan Young (nathan) · 2021-08-07T14:19:52.227Z · EA(p) · GW(p)

Feels like buying GPUs would just increase their production.

Replies from: GMcGowan
comment by GMcGowan · 2021-08-07T21:03:29.064Z · EA(p) · GW(p)

That's true. I just listened to the most recent 80k podcast where they joke about buying up GPUs so it was in my head :) 

Replies from: nathan
comment by Nathan Young (nathan) · 2021-08-07T21:57:23.743Z · EA(p) · GW(p)

Haha, fair :)

answer by ryanbloom · 2021-08-08T17:51:47.636Z · EA(p) · GW(p)

Scaling up carbon removal and other promising climate-related technologies before governments are willing to fund them. A lot like what Stripe and Shopify have been doing, but about an order of magnitude bigger. If the timing is right (I'm not sure it is) this strategy could get a fair bit of leverage by driving costs down and accelerating even larger-scale deployments.

comment by StephanieAG · 2021-10-16T21:45:31.267Z · EA(p) · GW(p)


answer by slg (Simon_Grimm) · 2021-08-07T17:13:55.034Z · EA(p) · GW(p)

Launching a Nucleic Acid Observatory, as outlined recently by Kevin Esvelt and others here (link to paper). With $100m one could launch a pilot version covering 5 to 10 states in the US.

answer by GMcGowan · 2021-08-06T15:01:11.027Z · EA(p) · GW(p)

Activist investment fund which invests in large companies and then leans on them to change their policies. Examples abound in climate change, but other than that:

  • Food related companies to stop factory farming
  • Biotech companies to stop them from doing gain of function or mirror life research

comment by MichaelStJules · 2021-08-07T21:12:45.721Z · EA(p) · GW(p)

Bezos bought the Washington Post for $250 million. We could try to buy some other media groups, or at least a significant number of shares in them. Some are probably too big, like the New York Times at $8 billion market cap and Fox Corporation at $20 billion.

I think food-related companies are also probably too big relative to impact, with market caps in the billions or tens of billions of dollars for Tyson, Pilgrim's Pride, JBS SA, McDonald's. You could buy shares in smaller ones, but they also probably have a disproportionately smaller share of farmed animals, although getting a few of them to improve animal welfare policies could make the big ones look bad and push them to follow.

answer by Morgan Rivers · 2021-10-30T09:41:19.726Z · EA(p) · GW(p)

Responding as a member of the ALLFED team.

A network of reliable, long-distance shortwave radio systems that do not depend on external sources of electricity and cannot be disabled by widespread cyber attack, EMP, or most other threats to the global communication infrastructure.

In a wide range of catastrophes, communication systems are a critical vulnerability which, if disrupted, would delay societal recovery from the disaster. A highly resilient and reliable system is ham shortwave radio, which allows reliable, low-cost communication with a significant fraction of the global population. Maintaining key high-speed communication channels during a catastrophe would greatly increase disaster resilience beyond flyer distribution, potentially at relatively little additional cost. A backup shortwave radio communication system would facilitate timely advice on where to locate clean water sources, help identify sensible relocation options, allow improved international cooperation, and allow coordination about the nature and likely duration of the outage.

We've identified ham shortwave radios as key electronic equipment that is both likely to be highly resilient to global communication disruption on large or small scales and relatively easy to distribute. Another interesting use for these radios is distribution to power grid stations to aid blackstart communications after a large-scale electrical grid collapse.

While several network configurations may serve GCR-reduction purposes, our preliminary network design involves around a dozen central stations receiving and broadcasting globally, a network of several hundred two-way NVIS transceiver networks operated by trained personnel, and a few thousand distributed receiver-only radios. The network would use SSB communications to lower power requirements. To cover the entire earth's population, we estimate the total construction and shipping cost at between USD $2 million and $10 million, scaling roughly proportionally with the fraction of the global population able to be reached by the network.

Total costs could therefore reasonably reach into the tens to hundreds of millions for this sort of megaproject, depending on the spatial density of the network.
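To make the scaling claim concrete, here's a minimal sketch of the proportional cost model in Python. The $2M-$10M full-coverage bounds and the roughly-proportional scaling come from the estimate above; everything else (the function name, the 50% example) is illustrative:

```python
# Toy cost model for the proposed shortwave radio network (illustrative only).
# The $2M-$10M full-coverage bounds and the roughly-proportional scaling
# assumption come from the estimate above.

def network_cost_range(coverage_fraction, low_full=2e6, high_full=10e6):
    """Return (low, high) construction + shipping cost in USD for reaching
    the given fraction of the global population."""
    if not 0.0 <= coverage_fraction <= 1.0:
        raise ValueError("coverage_fraction must be between 0 and 1")
    return low_full * coverage_fraction, high_full * coverage_fraction

low, high = network_cost_range(0.5)  # reaching half the world's population
print(f"${low / 1e6:.0f}M - ${high / 1e6:.0f}M")  # $1M - $5M
```

Note this only covers construction and shipping; the tens-to-hundreds-of-millions figure for the full megaproject would additionally depend on network density, training, and ongoing operations.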

answer by kokotajlod · 2021-08-13T10:28:45.297Z · EA(p) · GW(p)

Announce $100M/year in prizes for AI interpretability/transparency research. Explicitly state that the metric is "How much closer does this research take us towards, one day when we build human-level AGI, being able to read said AI's mind, understand what it is thinking and why, what its goals and desires are, etc., ideally in an automated way that doesn't involve millions of person-hours?"
(Could possibly do it as NFTs, like my other suggestion.)

answer by Stefan_Schubert · 2021-08-12T16:19:40.023Z · EA(p) · GW(p)

I don't know the area well, but I guess one option would be to invest in relevant AI companies, to be able to influence their decision-making (it could also be profitable). One could in principle invest very large sums this way. And unlike some other suggested projects, it is maybe not necessarily logistically complicated (though that depends on the set-up). Cf. Ryan's comment.

answer by Allison Duettmann · 2021-09-18T18:34:29.505Z · EA(p) · GW(p)

I hope it’s ok to mention something I’d like to do at Foresight Institute:

Crowdsource + Crowdfund Civilization Tech Map


  • Build on this map for Civilizational Self-Realization (scroll to end of article) to create an interactive technology map for positive long-term futures that is crowdsourced (Wikipedia-style) and allows crowdfunding (Kickstarter-style)
  • The map surveys the ecosystem of areas relevant for civilizational long-term flourishing, from health, AI, computing, biotech, nanotech, neurotech, energy, space tech, etc.  
  • The map branches out into milestones in each area, and either lists projects solving them, or requests projects to solve them, including options to fund either
  • Crowdsourcing of milestones and requests for projects will get it very wrong at first but can get continuously course corrected, e.g. via prediction markets 
  • Crowdfunding makes more and more people have skin in the game for the long-term future, e.g. via tokenization, retroactive public goods funding, or a similar mechanism
  • In sum, the map can serve as a north star to coordinate those seeking to work toward positive futures and those seeking to fund such work.

answer by GMcGowan · 2021-08-06T15:03:51.409Z · EA(p) · GW(p)

Proof of concept for a geoengineering scheme (could be controversial)

answer by HaydnBelfield · 2021-08-06T16:29:32.352Z · EA(p) · GW(p)

9 PACs have raised/spent more than $100m (source). So an EA PAC?

Although I guess Sam Bankman-Fried was the second-largest donor to Biden (coindesk, Vox), and Dustin Moskovitz gave $50m; and they're both involved with Future Forward and Mind The Gap, so maybe EA is already kinda doing this.

answer by Dawn Drescher (Denis Drescher) · 2021-08-19T22:19:49.390Z · EA(p) · GW(p)

Create global distributed governments.

Governments as they exist today seem antiquated to me as they are linked to particular geographic regions, and the particular shapes and locations of those regions are becoming increasingly irrelevant.

Meanwhile some governments are good at providing for their people – social security, health insurance, enforcement of contracts, physical protection, etc. – so that’s fine, but there are also a lot of governments that are weak in one or more of these critically important departments.

If there were a market of competing global governments, we’d get labor mobility without anyone actually having to move. The governments that provide the best services for the cheapest prices would attract the most citizens.

These governments could draw on something like Robin Hanson’s proposal for a reform of tort law to incentivize a market for crime prevention, could use proof of stake (where the stake may be a one-time payment that the government holds in escrow or a promise of a universal basic income to law-abiding citizens) for some legal matters, and could use futarchy for legislation.

They could also provide physical services, such as horizontal health interventions and physical protection in countries where they can collaborate with the local governments.

An immediate benefit would be the reduction of poverty and disease, but they could also serve to unlock a lot of intellectual capacity by giving people the spare time to educate themselves on matters other than survival. They could define protocols for resolving conflicts between countries and lock in incentives to ensure that the protocols are adhered to. (I bet smart contracts can help with this.)

That way, they could form a union of autonomous parts sort of like the cantons of Switzerland. Such a union of global distributed governments could eventually become a de-facto world government, which may be beneficial for existential security and for enabling the Long Reflection.

Such a government could be bootstrapped out of the EA community. A nonpublic web of trust could form the foundation of the first citizens. If the system fails even when the citizenry is made up largely of highly altruistic, conscientious people who can pay taxes and share a similar culture, it’s probably not ready for the real world. But if it proves to be valuable, it can be gradually scaled up to a broader population spanning more different cultures.

comment by Dawn Drescher (Telofy) · 2022-01-19T16:46:22.374Z · EA(p) · GW(p)

I’ve come to feel like it’s a red flag if such a project bills itself as a distributed state or something of the sort. There seems to be a risk that people would start such a project only to do something grand-sounding rather than solve all the concrete problems that a state solves.

I’d much rather have a bunch of highly specialized small companies that solve specific problems really well (and also don’t exclude anyone based on their location or citizenship) than one big shiny distributed state that is undeniably state-like but is just as flawed as most geographic states, because it would just add one more flawed and hard-to-coordinate actor to the international scene, and make international coordination harder rather than easier.

The ideal project here is probably something that incubates and coordinates other small projects that provide specific services to solve specific problems while not discriminating based on location or citizenship but that never uses terms like “state,” “government,” or “country” for itself.

An added benefit is that a lot of my conversations about distributed states quickly became about “Is this really a distributed state/government/country,” which is one of the least interesting conversations to have. (That’s something I’d rather leave to trained lexicographers with big corpora to figure out.) I’d much rather have conversation about whether it solves the problems it sets out to solve and at what cost.

answer by Sanjay · 2021-08-08T22:50:24.948Z · EA(p) · GW(p)

As I alluded to in a comment [EA(p) · GW(p)] to KHorton's related post, I believe SoGive could grow to spend something like this much money.

SoGive's core idea is to provide EA style analysis, but covering a much more comprehensive range of charities than the charities currently assessed by EA charity evaluators.

As mentioned there, benefits of this include:

  • SoGive could have a broader appeal because we would be useful to so many more people; it could conceivably achieve the level of brand recognition achieved by charity evaluators such as Charity Navigator, which have high levels of brand recognition in the US (c50% with a bit of rounding).
  • Lots of the impact here is the illegible impact that comes from being well-known and highly influential; this could lead to more major donors being attracted to EA-style donating, or many other things.
  • There's also the impact that could come from donating to higher impact things within a lower impact cause area, and the impact of influencing the charity sector to have more impact

Full disclosure: I founded SoGive.

This short comment is not sufficient to make the case for SoGive, so I should probably write up something more substantial.

answer by Ben_Harack · 2021-08-06T22:27:19.992Z · EA(p) · GW(p)

The Human Diagnosis Project (disclaimer: I currently work there). If successful, it will be a major step toward accurate medical diagnosis for all of humanity.

answer by James Smith · 2021-08-06T18:03:14.206Z · EA(p) · GW(p)

Creating a new academic institute - the EA university - that houses a lot of EA research and (somehow) avoids the many issues seen in traditional academia. 

comment by SiebeRozendal · 2021-08-07T14:45:25.710Z · EA(p) · GW(p)

let's add a high school/prep school to it ;-)

Seriously though, I think having an institute more separate from academia than GPI is would not be great for disseminating research and gaining reputation. It would be nice though for training up EA students.

comment by EdoArad (edoarad) · 2021-08-08T09:54:21.827Z · EA(p) · GW(p)

I'd be interested in thinking more about this, even as just a thought experiment :) 

comment by Sami Kassirer · 2021-11-10T18:56:27.969Z · EA(p) · GW(p)

I like this! However, in a perfect world, rather than there being one university (or one institute at one university) that studies global priorities, wouldn't all top research universities across the world have global priorities schools (like business or policy schools are prevalent at most research universities)? With philosophers and scientists working together in one school on having the most impact on humanity, and coordinating with one another on how to do so—where students can get PhDs in Global Priorities Research (with specialization in one of the sub-fields, like business schools offer), and undergraduates at all universities around the world can major in global priorities, with paths towards academia and industry. Students majoring in GPR all take classes in the topics (e.g., longtermism, global health and development, animal rights) and can create joint-majors with philosophy or one of the (social) sciences.

Business schools were only popularized about 100 years ago, and look at how much their proliferation has incentivized study and work in this space. Also, once the top universities create these GPR schools, many other universities not funded by EA would likely follow (esp. if it’s a profitable, self-sustaining business model). This might cost more than 100 million though... there's probably data out there on how much it cost initially to start b-schools and policy schools.

Replies from: Linch
comment by Linch · 2021-11-10T22:03:39.058Z · EA(p) · GW(p)

I think 

avoids the many issues seen in traditional academia. 

is James' central claim. I personally find myself confused about how much EA research should be done in academia vs outside of it; I can imagine us moving more towards academia (or other more standardized systems) as we institutionalize. 

Replies from: Sami Kassirer
comment by Sami Kassirer · 2021-11-12T04:36:01.760Z · EA(p) · GW(p)

Why would we have to choose between EA research being in vs. out of academia--why not both (which is kind of what we do now, right)? 

Replies from: Linch
comment by Linch · 2021-11-12T06:50:49.911Z · EA(p) · GW(p)

Academia has a lot of costs and benefits. It would be moderately surprising if the costs and benefits exactly balance out (or come anywhere close) for the median EA researcher. 

answer by Shelbster (new_user_6855080172) · 2021-09-17T13:58:53.237Z · EA(p) · GW(p)

OK, throwing out an idea here… could somebody cobble together a massive direct cash transfer fund? It’s not like there’s a lack of global poor to receive funding…

(Submitted without knowing a whole lot of details about cash transfers; I just know they are a thing.)

comment by Tetraspace (Tetraspace Grouping) · 2021-09-17T17:53:13.105Z · EA(p) · GW(p)

I looked up GiveDirectly's financials (a charity that does direct cash transfers) to check how easily it could be scaled up to megaproject-size and it turns out, in 2020, it made $211 million in cash transfers and hence is definitely capable of handling that amount! This is mostly $64m in cash transfers to recipients in Sub-Saharan Africa (their Givewell-recommended program) and $146m in cash transfers to recipients in the US.

answer by GMcGowan · 2021-08-06T15:02:47.678Z · EA(p) · GW(p)

Paul's "message in a bottle" for future civilisations

answer by IJSinger · 2021-08-06T14:51:30.189Z · EA(p) · GW(p)

Creating a program like Birthright offering free, all-inclusive 10-day trips to countries where EA global health/development programs are run (e.g. Malawi).

The trip could be targeted towards high achieving youth with a focus on helping make the abstract ideals of EA feel more "real", in addition to being loaded with all sorts of EA programming.

answer by Sami Kassirer · 2021-11-10T18:39:49.816Z · EA(p) · GW(p)

How can we foster longterm global trust and status as a social movement? In order to foster global backing for some of the movement's non-normative or 'creative' ideas (e.g., build post-apocalyptic bunkers to help re-build society in case of nuclear war) that may actually be highly impactful in the longterm future, we likely need to first prove ourselves as a movement that can actually create large-scale global impact. 

Here's one idea for a megaproject that could help to foster global trust/status by proving our ability to use evidence and reason to make a positive impact on the world:

  • Part 1: Survey representative samples of most (or all) countries and ask them “if you had 100 million dollars and wanted to use this money to make the world a better place, how would you spend it?”, giving open-ended text and a rank order option of some of the things we’re considering
    • Getting cross-cultural responses to this question could produce the most amount of global backing for EA, and it could look *very good* if we made the movement more democratic! But the latter is an empirical question (i.e., perceived trust in a social movement when the movement relies on experts only, the masses only, or a mix of experts and the masses, vs. a no-mention control)
  • Part 2: Create a list of top 10 or so most cared about global issues, and have EA researchers rank each of them in terms of total impact and effectiveness
  • Part 3: run an RCT again on nationally representative samples globally and compare the globally top-ranked cause area to the EA most effective cause within the top 10 (if these two aren’t the same) to look at trade-offs between indirect movement building impact and direct cause area impact --> after this RCT, choose which cause area will produce the most total impact as the "winner" of the $100 million grant.
  • Part 4: run a large grant competition to find the best approaches to solving whatever cause area is selected globally (note: I’d hypothesize that it’s very important to solve big issues *globally* to facilitate a new norm of collective global action and foster obligation perceptions towards EA from all countries), and aim to R&D for about 5-10 years (rough estimate), and then roll out the most effective intervention(s) based on these findings
  • (Repeat this every X years to maintain longterm support of EA)
answer by ChanaMessinger · 2021-10-25T18:57:22.040Z · EA(p) · GW(p)

Samo Burja said at a meetup the other day that he thinks Vitalik Buterin should give a medium university 10 million dollars to put ten top tier internet bloggers on tenure. No idea if that's a good idea or anywhere near possible, but it could use a decent amount of money for a while.

answer by justaperson · 2021-10-06T01:11:45.528Z · EA(p) · GW(p)

Educate, empower, and enable diverse talent to work on solutions for the world’s biggest issues.

What is it?

A remote school offering tuition-free education and job placement for vital roles (data scientist, researcher, engineer, etc.) in areas of crucial need (climate, economics, healthcare, etc.).


How?

  • Identify important areas where key talent is lacking.
  • Establish tuition-free online school led by top thinkers.
  • Dispense task-oriented knowledge in short period of time.
  • Create post-graduation job placement program for sectors in need.


Why?

  • Remove barriers to higher education.
  • Create access to opportunities, regardless of location, language, background, etc.
  • Lift people out of poverty.
  • Funnel talent into organizations and projects that need the most support.
  • Solve range of vital issues.
  • Grow pool of world problem solvers.
  • Inspire next generation of doers and founders.


How does it scale?

  • Open up to more students, more languages, more education levels, more areas of speciality.
  • Create accelerator program to invest in alum startups.


Two things that scale well are knowledge and technology. So, rather than attempt to choose a single area of focus, create a megaproject that both democratizes pursuits and crowdsources solutions. This has the potential to produce a network effect on a variety of problems, while removing hierarchical barriers. Scaling continues until new talent declines to join and/or roles disappear, or problems are solved (due to lack of new focus areas and/or some yet-to-be-realized superior option, e.g. ML/AI).


answer by Anton Rodenhauser · 2021-09-19T07:43:46.019Z · EA(p) · GW(p)

How about Qualia Research Institute?


answer by Ben Snodin (Ben_Snodin) · 2021-08-14T10:45:00.677Z · EA(p) · GW(p)

(extremely speculative) 

Promote global cooperation and moral circle expansion by paying people (/ incentivising them in some smarter way) to have regular video calls with a random other person somewhere on the planet.

answer by Bogdan Ionut Cirstea · 2021-08-14T08:19:57.594Z · EA(p) · GW(p)

I think aligning narrow superhuman models [LW · GW] could be one very valuable megaproject and this seems scalable to >= $100 million, especially if also training large models (not just fine-tuning them for safety). Training their own large models for alignment research seems to be what Anthropic plans to do. This is also touched upon in Chris Olah's recent 80k interview.

answer by Aayush Kucheria · 2021-08-11T19:33:23.751Z · EA(p) · GW(p)

Doing something to democratize randomized controlled trials (RCTs) - thereby reducing the risk involved in testing new ideas and interventions.

RCTs are a popular methodology in medicine and the social sciences. They create a safety net for the scientists (and consumers) to test that the drug works as intended and doesn't turn people into mutants.

I think using this methodology in other fields would be a high-leverage intervention: for example, startups, policy-making, education, etc. Being able to try out new ideas without facing a huge downside should be a feature of every field. Big institutions already conduct similar tests before they release something. But I'm wondering how useful it would be to allow small institutions, startups, and maybe even individuals to do this.

Plus, adding an RCT into the launch pipeline of any intervention/product allows us to see the unintended consequences before they're out there. I think this would have at least been helpful for the social media companies.

Based on some googling, I've understood that RCTs are very costly. But if the reasoning makes sense, this is exactly the kind of thing that others can't try out on their own and that a megaproject should.

Here's a paraphrased quote from Eliezer Yudkowsky that is relevant in this context: If people could learn from their mistakes without dying from them, well actually, that in itself would tend to fix a whole lot of problems over time. [source]

P.S. I'm thinking on working on this idea full-time in 2022. It would be very helpful to hear whatever criticism/thoughts you have - It'll help me make sure my time is effectively spent.

comment by Charles He · 2021-08-12T06:00:41.739Z · EA(p) · GW(p)

I think you should write this up as a full post or at least as a question. 

I don't think people will see this, and you deserve reasonable attention if it's a full-time project.

Note that my knee jerk reaction is caution. The value of RCTs is well known and they are coveted. Then, in the mental models I use, I would discount the idea that it could be readily distributed. 

For example, something like the following logic might apply:

  • An RCT, or something that looks like it, with many of the characteristics/quality you want, will cost more than the seed grant or early funding for the new org doing the actual intervention.
  • Most smaller projects start with a pilot that gives credible information about effectiveness (by design, often much cheaper than an RCT).
  • Then "democratizing RCTs", as you frame it, will basically boil down to funding/subsidizing smaller projects rather than bigger ones.

I'm happy for this reasoning to be thoroughly destroyed and RCTs available for all!

answer by Ko Dama · 2021-08-07T12:17:38.839Z · EA(p) · GW(p)

An organizational version of 80k, GiveWell, or Project Drawdown for "incentives". That is, an organization that specializes in 1) solving incentive problems in the most effective way possible (ease of implementation, minimizing costs, minimizing side effects...), 2) identifying priority changes based on their research (in general or for specific public policies such as climate change or longtermism...)

comment by Nathan Young (nathan) · 2021-08-07T18:19:07.820Z · EA(p) · GW(p)

Could you give an example of what this might look like?

Replies from: Ko Dama
comment by Ko Dama · 2021-08-09T11:32:46.188Z · EA(p) · GW(p)

Yes (and sorry for my English, I am French (and not very good at English)). Summary in a few lines:
At the level of a country (but it can be at another level of governance), the organization chooses one or several indicators, aiming at maximizing long-term well-being. It identifies the priority areas affecting them (based on importance, neglectedness, tractability). For each area, it analyzes the incentive structure, meaning all the forces that push in a certain direction (e.g. what are the incentives of the 40 most influential people and organizations in this area?). It compares this with the system that would be needed to move forward in a robust way (which implies, and this would be the whole purpose of the organization, developing expertise on this). It then identifies the most relevant levers to make the system evolve (ease of implementation, political acceptability, efficiency...). Finally it prioritizes each area according to the expected utility of the proposed systemic reforms.

One can also imagine a less ambitious version, for example a J-PAL of incentives, which would help governments calling on them for a specific problem (for example: increasing the mathematical performance of students).

I identify several advantages. 
1) Focuses decision makers on priority problems (like 80k does for individual careers, or Givewell for donations).
2) Incentives are a language that speaks to economists, whose influence on governments is significant. They have a real impact on the world, are often not aligned with the common good, and seem fairly objectifiable (in an otherwise extremely complex social world).
3) The cost-benefit ratio can be very high insofar as some systemic changes have almost no cost.

The best example I can think of is this article by Eliezer Yudkowsky (a comprehensive reboot of law enforcement), which gives an overview of the process I imagine. And with more quantitative models, an analysis of the decision-making process to improve the chances of implementation, better knowledge of the effects of various incentives, the help of superforecasters, etc., I think it can be improved.

answer by MichaelStJules · 2021-08-07T22:51:54.331Z · EA(p) · GW(p)

We could finance ballot initiatives, lobbying, and running our own candidates. Running US presidential primary candidates could shift conversations and bring attention to issues (although bringing attention to an issue can backfire). Bloomberg spent over $500 million on his own presidential primary campaign and came 4th.

Running presidential candidates could be risky for EA, though. Non-partisan ballot initiatives seem safer.

answer by SiebeRozendal · 2021-08-07T14:54:12.292Z · EA(p) · GW(p)

(highly speculative and I see a lot of flaws, but I can see it scaled)

EA training institute/alternative university. Kind of like creating navy seals: highly selective, high dropout rate, but produces the most effective people (with a certain goal) in the world.

comment by Stefan_Schubert · 2021-08-07T15:20:58.275Z · EA(p) · GW(p)

My hunch is that that isn't a $100m per year-project, within reasonable time frames (the same is true of several other suggestions in this thread). Cf. Kirsten's post [EA · GW].

answer by dpiepgrass · 2022-01-28T20:52:49.820Z · EA(p) · GW(p)

This isn't really a megaproject, but I'm a bit busy to make a top-level post of it so I'm dropping it in here.

An evidence clearinghouse informed by Bayesian ideas and today's political mess.

One of humanity's greatest sources of conflict in the modern era is disagreement about (1) the facts, and (2) how to interpret them. Even basic facts are often difficult to distinguish from severe misinterpretations. I used to be hugely interested in climate misinformation, and now I'm looking at anti-vax stuff, but the problem is the same and has real consequences, from my unvaccinated former legal guardian dying of Covid (months after I questioned [LW · GW] popular anti-vax evidence), to various genocides that were fueled by popular prejudices.

To me, a central problem is that (1) most people believe it is easy to figure out what the truth is, so do not work very hard at verifying facts, (2) don't actually have enough time to verify facts anyway (doing it well is hard and very time-consuming!), and (3) are wasting a lot of effort by doing it because there is no durable place where the information you discover can be permanently stored, shared, and cross-referenced by others. The multi-millionaire antivaxxer Steve Kirsch has a dedicated substack with "thousands" of customers paying $5/mo. or $50/year to hear his latest Gish Gallop, while debunkings of Steve Kirsch are randomly scattered around and (AFAIK) unprofitable. If I personally discover something, I might mention it to someone on ACX and/or dump it in the old thread [LW(p) · GW(p)] I linked to above, and here's a guy who got 359 "claps" on Medium for his debunking. The response is disorganized and not nearly as popular as the original misinformation.

Another example: I spent 27 years in a religion I now know is false.

Or consider what happened on the extremely popular Joe Rogan program that inspired this meme (a joke, but some believe it was a true story):

Joe Rogan: hamburgers are good but I am trying to eat less pork
Guest: hamburgers are made with beef
Joe Rogan: ham is from pork it says ham in hamburger
Guest: it is beef
Joe Rogan: that’s not what I’ve heard Jamie look that up
Jamie: it beef
Guest: it beef
Joe: ok but can we really trust hamburger makers and butchers and grocery stores when the word ham is in hamburger and ham means pork 
Joe Rogan Fans: this is why I like him he is good at thinking

There are studies (Singer et al., Patone et al. 2021) that say there is a small risk of myocarditis in young people who catch Covid, and a much smaller risk of myocarditis in young people who take a mRNA Covid vaccine. Naturally, since he often listens to anti-vaxxers, Rogan had it backwards and thought the risk was higher in those who had a vaccine. If you watched this program, you'd probably come away confused about whether vaccines are worse than the disease or not.

Obviously a web site isn't going to solve this whole problem, but the absence of such a web site is a serious problem that we can solve.

Another way of framing the central problem is as a matter of distrust of institutions. My sense is that a large minority of the population doesn't trust government organizations and doesn't trust scientific research if it is done with money from the government or big companies, yet at the same time they do seem to trust random bloggers and political pundits who have the "right" opinions. But it's worse than that: anybody can put up a PDF and say "this is a peer-reviewed paper", or put up a web site and call it a peer-reviewed journal. For instance, consider the Walach paper that was retracted for various errors, such as the antivax cardinal sin of ignoring base rates of disease and death—see if you can spot this error in action:

...there were 16 reports of severe adverse reactions and 4 reports of deaths per 100,000 COVID-19 vaccinations delivered. According to the point estimate [...] for every 6 (95% CI 2-11) deaths prevented by vaccination in the following 3–4 weeks there are approximately 4 deaths reported to Lareb that occurred after COVID-19 vaccination. Therefore, we would have to accept that 2 people might die to save 3 people.

But antivax scientists have their own "peer-reviewed journal", which republished the paper with no mention of the earlier retraction, and Kirsch simply linked to that instead. Right now, to figure out that this paper is garbage, you have to suspect that "something is wrong" with it and its journal, and to know what's wrong with it exactly, you have to comb through it looking for the error(s). But that's hard! Who does that? No, in today's world we are almost forced to rely on a more practical method: we notice that the conclusion of the paper is highly implausible, and so we reject it. I want to stress that although this is perfectly normal human behavior, it is exactly like what anti-science people do. You show them a scientific paper in support of the scientific consensus and they respond: "that can't be true, it's bullsh**!" They are convinced "something is wrong" with the information, so they reject it. If, however, there were some way to learn about the fatal flaws in a paper just by searching for its title on a web site, people could separate the good from the bad in a principled way, rather than mimicking the epistemically bad behavior of their opponents.
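To make the base-rate error concrete, here is a toy calculation in Python. Every number in it is an assumption I am making up for illustration (cohort size, reporting rate, background mortality); none of it is data from the Walach paper:

```python
# Toy illustration of the base-rate error: deaths *reported after*
# vaccination are meaningless without the deaths *expected anyway*
# in the same window. All numbers below are made up for illustration.

cohort = 100_000            # people vaccinated
reports_per_100k = 4        # deaths reported to a passive registry
window_weeks = 4            # follow-up window

# Assumed background all-cause mortality: ~1% per year for a mixed-age
# cohort (illustrative), i.e. deaths expected with or without vaccination.
annual_mortality = 0.01
expected_background = cohort * annual_mortality * window_weeks / 52

reported = cohort / 100_000 * reports_per_100k

print(f"Deaths reported after vaccination: {reported:.0f}")
print(f"Deaths expected from background mortality alone: {expected_background:.0f}")
# With these assumptions ~77 deaths would occur in the window regardless,
# so 4 passive reports cannot be read as 4 deaths *caused* by the vaccine.
```

With numbers like these, passive post-vaccination death reports sit an order of magnitude below the deaths expected anyway, which is exactly the comparison the retracted paper failed to make.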

So I envision a democratization of evidence evaluation, as an alternative to the despised "ivory towers". A site where anyone can go to present evidence, vote on its significance, and construct arguments. Something that uses Wikipedia and other well-sourced articles as a seed, and eventually grows into something hundreds of times larger. Something that has an automated reputation system like StackOverflow. Something that has a network of claims, counterclaims, and evidence for each. Where no censorship is necessary, as false claims are shown not to be credible under the weight of counterevidence. Where people recursively argue over finer and finer points, and recursively combine smaller claims ("greenhouse gases can increase average planetary surface temperature", "humans are causing a net increase of greenhouse gases") to build larger claims ("humans are causing global warming via greenhouse gas emissions"). Where vague or inaccurate claims get replaced over time for clearer and more precise claims. Where steelmen gain more prominence than strawmen.  Where offline and paywalled references must be cited with a quote or photo so users can verify the claim. Where people don't "like or dislike" statements, but vote on epistemically useful questions like "this is a fair summary of the claim made in the source" and "the conclusion follows from the premises", and where the credibility of sources is itself an entire universe of debate and evidence.
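As a sketch only: the claim/evidence network described above could be modeled roughly like this. The class names, vote categories, and crude net-vote scoring rule are all my own assumptions, not a design the post specifies; a real system would weight votes by user reputation and source credibility:

```python
from dataclasses import dataclass, field

# Hypothetical data model for the claim network described above:
# claims link to sub-claims and to evidence items, each carrying
# structured votes ("fair summary of source", "conclusion follows").

@dataclass
class Evidence:
    source: str
    quote: str
    votes_fair_summary: int = 0   # "this fairly summarizes the source"
    votes_follows: int = 0        # "the conclusion follows from it"

@dataclass
class Claim:
    text: str
    supporting: list = field(default_factory=list)  # Evidence or sub-Claims for
    opposing: list = field(default_factory=list)    # Evidence or sub-Claims against

    def score(self) -> float:
        """Crude illustrative credibility score: net structured votes,
        recursing into sub-claims."""
        def weight(item):
            if isinstance(item, Claim):
                return item.score()
            return item.votes_fair_summary + item.votes_follows
        return (sum(weight(i) for i in self.supporting)
                - sum(weight(i) for i in self.opposing))

# Smaller claims compose into larger ones, as in the text:
ghg = Claim("Greenhouse gases can raise average surface temperature")
ghg.supporting.append(Evidence("Arrhenius 1896", "…", 12, 10))
humans = Claim("Humans are causing a net increase of greenhouse gases")
humans.supporting.append(Evidence("Keeling curve data", "…", 15, 14))
warming = Claim("Humans are causing global warming via GHG emissions",
                supporting=[ghg, humans])
print(warming.score())   # 51 with these toy vote counts
```

The point of the recursion is that a big claim's credibility is built from the credibility of the smaller claims it composes, matching the greenhouse-gas example in the text.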

This site is just one idea I have under my primary cause area, "Improving Human Intellectual Efficiency" (IHIE), which, taken as a whole, could be a megaproject. I have been meaning to publish an article on the cause area, but haven't found the time and motivation to do it in the last year. Anyway, while it's possible to figure out the truth in today's world, it's only via luck (e.g. good teachers) or a massively inefficient and unreliable search process. Let's improve that efficiency, and maybe fewer people will volunteer to kill and die, and more people will understand their world better.

I think this relates to the top-rated answer too, since the lack of support for nuclear power is driven by unscientific myths. After Fukushima, it seemed like no one in the media was even asking the question of how dangerous X amount of radiation is, as if it made sense to forcibly relocate over 100,000 people without checking the risk first. The information was so hard to find that I ended up combing through the scientific literature for it (and I didn't find it there either, just some information that I could use as input for my own back-of-envelope calculation indicating that 100 mSv of radiation might yield a 0.05% chance of death by leukemia IIRC, less than normal risks of air pollution. Was my conclusion reasonable? If this site existed, I could pose my question there.)
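For what it's worth, a back-of-envelope of the kind described presumably follows a linear no-threshold (LNT) shape: excess risk = dose × risk coefficient. A minimal reconstruction, where the 0.5%-per-sievert leukemia coefficient is an assumed round number of mine, not a sourced value:

```python
# Illustrative linear no-threshold (LNT) back-of-envelope. The risk
# coefficient below is an assumed round number, not a vetted figure.

dose_sv = 0.100                 # 100 mSv expressed in sieverts
leukemia_risk_per_sv = 0.005    # assumed: 0.5% excess lifetime risk per Sv

risk = dose_sv * leukemia_risk_per_sv
print(f"Excess leukemia risk: {risk:.2%}")   # 0.05% with these assumptions
```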

answer by freedomandutility · 2021-08-19T20:48:26.676Z · EA(p) · GW(p)

Technological developments in the biotech / pharma industry are notoriously expensive, and my (fairly subjective) impression is that the industry is riddled with market failures.

Especially when applied to particularly pressing problems like pandemic prevention / preparedness, infectious diseases in LMICs, vaccines, ageing and chronic pain, I think EA for-profits and non-profits in this industry could absorb 100 million dollars of annual funding while providing high expected value in terms of social impact.

answer by James Smith · 2021-08-18T08:31:14.821Z · EA(p) · GW(p)

Universal flu vaccine development and testing  

answer by Ben Snodin (Ben_Snodin) · 2021-08-14T10:54:19.963Z · EA(p) · GW(p)

(idea probably stolen from somewhere else) create an organisation employing an army of superforecasters to gather facts and/or forecasts about the world that are vitally important from an EA perspective.

Maybe it's hard to get to $100million? E.g. 400 employees each costing $250k would get you there, which (very naively) seems on the high end of what's likely to work well. Also e.g. other comments in this post have said that CSET was set up for $55m/5 years.

comment by Ben Snodin (Ben_Snodin) · 2021-08-14T10:55:58.381Z · EA(p) · GW(p)

I realise re-reading this that I'm not sure whether these projects are supposed to cost $100million per year or e.g. $100million over their lifetime or something. Maybe something in between?

Replies from: nathan
comment by Nathan Young (nathan) · 2021-08-14T11:01:25.077Z · EA(p) · GW(p)

They are meant to grow to eventually be spending 100 million a year.

answer by andzuck · 2021-08-11T22:35:43.120Z · EA(p) · GW(p)

Qualia Research Institute

answer by MichaelStJules · 2021-08-07T20:59:43.107Z · EA(p) · GW(p)

Maybe diet pledge programs like Veganuary and Challenge 22? They could spend a lot more on ads and expand to more countries. Maybe this would be better set up like the Open Wing Alliance, where The Humane League supports, trains and regrants to local organizations working on cage-free campaigns in different countries.

I'm not sure this could reach $100 million while still spending reasonably cost-effectively, though.

answer by Nathan Young · 2021-08-07T14:26:04.061Z · EA(p) · GW(p)

How do companies sponsor films and TV? 

Rather than paying for a film to have Fords in it, pay for it to have more EA ideas. 

Feels like it might be seen as propaganda and could backfire.

comment by Nathan Young (nathan) · 2021-08-07T18:18:37.490Z · EA(p) · GW(p)

This isn't megaprojects scale and is just marketing.

answer by Jon Browne · 2021-10-04T15:42:23.499Z · EA(p) · GW(p)

Shamelessly copy the success of StitchFix but use it for the food industry but only sending information, not the actual food. 

I've thought about this one a lot so I'll try my best to summarize this:
Cook. Eat. Rate. Repeat.

The foundation would have data scientists/engineers behind the scenes that help customers find their perfect recipes via information and testing. The foundation would eventually expand into eating out at sustainable restaurants based on feedback from the customer, then merge into community vertical farming, which moves into individual household vertical farming.

The company Yummly is pretty close to this but isn't quite there yet, and is expanding in the wrong direction imo.

Revamping the food industry so that we are not dependent on grocery stores' supply chains, and instead grow food downstairs inside our own homes while creating absolutely delicious recipes from around the world, would be a massive, healthy impact. It's something the US could benefit from easily. In my opinion, it's only a matter of time before every individual will have to (mostly) live off the land in their backyard again, and to curb that catastrophe, we create FoodieFix.

comment by Linch · 2021-10-04T22:53:09.533Z · EA(p) · GW(p)

> Revamping the food industry to where we are not dependent on grocery stores' supply chains and instead growing it downstairs inside our own homes and creating absolutely delicious recipes from around the world is massive, healthy impact

Sorry, why? This just seems really minor in the grand scheme of things, unless I'm missing something important (which is very possible).


Comments sorted by top scores.

comment by Ozzie Gooen (oagr) · 2021-08-06T16:55:18.912Z · EA(p) · GW(p)

I just want to flag that "megaprojects" are notoriously problematic. They seem to fail more often than other projects, and tend to run far more expensive than initially predicted.

There's a whole field of study now into why they go so poorly.


The alternative is to have many projects start out small and just help them to scale up quickly. This is what Silicon Valley does, and it often works in spades. Basically all of the most successful companies started as tiny ventures, not megaprojects.

Starting and funding megaprojects from scratch is something you generally only do when you have no other option.

So if the question is, "which existing projects should scale to get $100m+", that's fine, but if the expectation is that these will be totally new projects, I'd suggest hesitancy.

Replies from: Stefan_Schubert, GMcGowan, HaydnBelfield, nathan
comment by Stefan_Schubert · 2021-08-06T19:55:25.854Z · EA(p) · GW(p)

How relevant is that literature on "megaprojects"? As far as I can tell it seems mostly focused on infrastructure - e.g. construction of big dams, bridges, and so on. Those projects seem very different from the kinds of projects that Ben and Will talk about. (Plus the latter have a smaller size, as mentioned.)

I don't think the term "megaproject" is misleading or confusing, though others may disagree. The fact that Flyvbjerg and others have used it in one sense doesn't necessarily mean we can't use it in another sense.

Replies from: Linch, oagr, Charles He
comment by Linch · 2021-08-06T23:57:04.937Z · EA(p) · GW(p)

I appreciate Ozzie flagging this, since a nontrivial fraction of the costs of my proposed idea (shelters) would in fact be construction costs for a fairly difficult/novel thing (e.g. constructing an underground shelter with BSL-4 entry requirements and enough food, fuel, and technical sophistication to support >100 people plus >5,000 frozen fertilized embryos for >30 years). So even if the objection is not applicable to the other project ideas, it should be applicable to mine.

comment by Ozzie Gooen (oagr) · 2021-08-06T20:05:35.267Z · EA(p) · GW(p)

The megaprojects literature does use it in those ways. I haven't found much discussion of what exactly Ben and Will are talking about; I just found a few tweets.

I'm fine with people defining megaproject in a different sense; but if so, I think it should be defined. In this case, it's not clear to me what their definition is exactly.

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2021-08-06T20:08:15.504Z · EA(p) · GW(p)

My impression is that the commonality of megaproject failure is more "a really big project, with often a bunch of stakeholders, and is difficult to incrementally develop", more so than being about bridges/dams in particular. Many huge software projects fit similar patterns and have had similar fates.  Many large technocratic initiatives also had a lot of problems.

If you take out software, hardware, and technocratic initiatives, I'm not sure what kinds of projects there are that could make it to the $100M mark.

comment by Charles He · 2021-08-06T22:55:13.632Z · EA(p) · GW(p)

Ok, probably more relevant is the OLPC project. Here is an extremely readable overview.

Honestly, many of the projects in this thread are more susceptible to these flaws than the infrastructure projects are. Bridges and dams are far more tangible, and benefit from deep pools of experience.

Related to the bigger goal, I think few people here believe the value of this thread is in brainstorming a specific project proposal. 

Rather, there's lots of other value, e.g. in seeing if any ideas or domains pop out that might help further discussion, and knowledge of existing projects and experts might arise.

(There's also a perspective that is a bit snobby and looks down on big, grandiose planning).

comment by GMcGowan · 2021-08-06T17:01:32.549Z · EA(p) · GW(p)

FWIW my reading of the question is: "What projects could be created, that have the potential to scale to $100m".  I didn't read it as suggesting funding a megaproject from scratch.

Many EA projects are of the "start a research institute" flavour, and will likely never absorb $100m. I see the post as a plea for projects which could (after starting with smaller amounts and then scaling) absorb these sums of money. Much like GiveDirectly wasn't started with a $100m/year budget right away, but has proven itself capable of deploying that much funding.

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2021-08-06T17:08:27.650Z · EA(p) · GW(p)

I think that's my guess too, but I could easily imagine some readers not getting that. "Megaproject" as a term has often referred to projects that have to be planned in advance.

comment by HaydnBelfield · 2021-08-06T18:32:26.145Z · EA(p) · GW(p)

Megaprojects cost $1 billion or more. Ben Todd was using the (admittedly somewhat confusing) term 'EA megaproject' by which he meant a new project that could usefully spend $100m a year. So these concerns about megaprojects don't apply.
How about we use the term '$100m-scale project'? (I considered 'kiloproject' but that's really niche.)

Replies from: Linch, oagr
comment by Linch · 2021-08-07T00:21:08.185Z · EA(p) · GW(p)

Note that $100M/year is not inconsistent with >$1B/project. For example, at an 8% discount rate, the net present value of a $100M/year perpetuity is about $1.25B.
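A quick sketch of the arithmetic behind this (function names are mine; the $100M/year and 8% figures are from the comment):

```python
def pv_perpetuity(cashflow: float, rate: float) -> float:
    """Present value of an annual cashflow continuing indefinitely: C / r."""
    return cashflow / rate

def pv_annuity(cashflow: float, rate: float, years: int) -> float:
    """Present value of an annual cashflow lasting a fixed number of years."""
    return cashflow * (1 - (1 + rate) ** -years) / rate

print(pv_perpetuity(100e6, 0.08))   # ~1.25e9, i.e. ~$1.25B
print(pv_annuity(100e6, 0.08, 30))  # ~1.13e9 over a 30-year horizon
```

So even a finite project deploying $100M/year for a few decades is, in present-value terms, comfortably a billion-dollar project.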

comment by Ozzie Gooen (oagr) · 2021-08-06T19:44:44.241Z · EA(p) · GW(p)

It sounds like there are two very different concerns here.

One is how large the project is. $100M vs. $1Billion. 

The second is how "gradual" that project can be. Like, can it start small, or do we need to allocate $100M at once?

The concern I was bringing up was more about the latter. My main point was just that we should generally prioritize projects that can be neatly scaled up over ones that require a huge upfront cost.

In fairness, I think most of the suggested examples are things that have nice ramps for scaling them up. For example, the nuclear funding gap seems fairly gradual, and the Anthropic team seems to be mainly progressing ideas they worked on at OpenAI.

Projects I'd be more concerned about are ones like:
"We've never done this sort of thing before, we really can't say how successful it will be, but here's $100M, and it needs to be spent very quickly using plans that can't change much at all."

I'm not that concerned about the $100M vs. $1Bil difference. Many groups grow over time, so I'd imagine that most exciting $100M projects would be very likely to reach $1Bil after a few years.

comment by Nathan Young (nathan) · 2021-08-06T17:16:02.838Z · EA(p) · GW(p)

What would you like it to say? I'll edit it.

Replies from: oagr, RyanCarey
comment by Ozzie Gooen (oagr) · 2021-08-06T17:42:33.821Z · EA(p) · GW(p)

Maybe just include something like this in the description:

"By Megaproject, I'm referring to any project that could eventually be scaled up to $100 Million, not ones that are planned from the start to cost $100 Million. In many cases this could include very small efforts that would have to achieve multiple levels of success to eventually get $100Million+ per year."

I expect many people will also read these comments, so it's not particularly important, but it could be nice.

Replies from: StephanieAG
comment by StephanieAG · 2021-10-16T22:09:15.522Z · EA(p) · GW(p)

Glad this edit was included at the start of the post; it definitely helped me as I read through the ideas. Thanks!

comment by RyanCarey · 2021-08-06T17:34:53.089Z · EA(p) · GW(p)

Mega-scalable projects!

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2021-08-06T17:42:59.741Z · EA(p) · GW(p)

I think I like this phrase more, though it might take some defining.

comment by Nathan Young (nathan) · 2021-08-06T13:27:56.248Z · EA(p) · GW(p)

I think each idea should be a separate answer so it can be upvoted/downvoted separately.

If you disagree, let me know why.

comment by Nathan Young (nathan) · 2021-08-07T16:33:53.645Z · EA(p) · GW(p)

As Stefan notes, Khorton's post on this is worth reading. She argues that even larger projects like GiveWell have <$100m annual budgets (most of which is regranting), so finding good projects that can spend $100m a year over the long term may be even more difficult.

https://forum.effectivealtruism.org/posts/Kwj6TENxsNhgSzwvD/most-research-advocacy-charities-are-not-scalable [EA · GW]

comment by HaydnBelfield · 2021-08-06T15:59:54.574Z · EA(p) · GW(p)

On Twitter I noted that when it comes to GCRs, it's hard to spend $100m on a policy research organisation. Note that CSET was $55m over 5 years: in the ~$10m/year range. Open Phil's grants to CHS and NTIbio were similar.

Anthropic raised $124m, so they might be the most recent EA megaproject.