Ben Todd (CEO of 80,000 Hours) says "Effective altruism needs more 'megaprojects'. Most projects in the community are designed to use up to ~$10m per year effectively, but the increasing funding overhang means we need more projects that could deploy ~$100m per year." https://twitter.com/ben_j_todd/status/1423318852801290248
What are some $100m projects that you think might be worth consideration?
By megaproject, I'm referring to any project that could eventually be scaled up to $100 million, not ones that are planned from the start to cost $100 million. In many cases, this could include very small efforts that would have to achieve multiple levels of success to eventually reach $100 million+ per year.
Filling the $100m funding gap in nuclear, since the MacArthur Foundation is pulling out of nuclear policy.
"Since 2015 alone, MacArthur directed 231 grants totaling >$100m, in some cases providing more than half the annual funding for individual institutions or programs." "MacArthur was providing something like 40 to 55 percent of all the non-government funding worldwide on nuclear policy." https://t.co/srsq45ejc7?amp=1
Out of all the ideas, this seems the most shovel-ready.
MacArthur will (presumably) be letting go of some staff who do nuclear policy work, and would (presumably) be happy to share the organisations they've granted to in the past. So you have a ready-made research staff list + grant list.
All ("all" :) ) you need is a foundation and a team to execute on it. Seems like $100 million could actually be deployed pretty rapidly.
Possibly not all of that money would meet EA standards of cost-effectiveness though - indeed MacArthur's withdrawal provides some evidence that it isn't cost effective (if we trust their judgement).
I agree with this. As the article says, multiple funders are pulling out of nuclear arms control, not just MacArthur. So it would be a good idea for EA funders like Open Phil to come in and close the gap. But in doing so, we should understand why MacArthur and other funders are exiting this field and learn from them to figure out how to do better.
There are probably a lot of bad things going on inside these organisations that EAs could improve.
There are a ton of drawbacks. These include barriers to entry like regulation and capture, which could make this impractical. Once inside, implementation issues such as cultural/institutional challenges will be far outside the typical circle of competence of EA.
But I think that's the point—this idea has a flavor orthogonal to "New R&D/policy institute for X".
I think why I like this so much is that it isn't another idea that is fiddling on the margins of a problem with a complicated theory of impact - it just provides a project vehicle to solve one of the more tractable key problems head on.
The hyperlinked stories and legal cases are but a few examples of the potentially life-altering negative outcomes that have come out of privatization. One of the major challenges with combating this trend is that documenting wrongdoing and amassing the evidence necessary to prepare conditions-of-confinement claims is extremely, extremely hard (and expensive, for a population that is perhaps the most economically disenfranchised of any in the U.S.).
But we have seen that organized social movements have won victories and that zealous legal advocacy can unwind some of the worst consequences of mass incarceration. EA organizations are already supporting organizations doing this work, like Prison Policy Initiative (an Open Philanthropy recipient). But because of how localized punishment is and how limited resources remain, there is far more that could be done.
Love this! We could also use prisons as a place where social scientists could study how to optimize ethical development amongst criminals. These samples are so hard to access, but could produce so much impactful insight on when and why ethical decision-making fails, and how to improve ethical decision-making under conflict. This could also be coupled with a grant competition that would fund the best ideas on how to rehabilitate inmates and improve their ethical decision-making, both while in prison and after being reintegrated back into society.
I think you're right. Even if the experts were paid really well for their participation, say $10k per year (maybe as a fixed sum or in expectation given some incentive scheme), and you had on the order of 50 experts each for 20(?) fields, then you end up with $10 million per year. But probably it wouldn't even require that, as long as it's prestigious and is set up well with enough buy-in. Paying for their judgement would make the latter easier, I suppose.
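The arithmetic above can be checked in a couple of lines; all the figures are the commenter's assumptions, not established costs:

```python
# Rough annual cost of the expert-forecasting platform sketched above.
# All numbers are assumptions from the comment, not real budget figures.
pay_per_expert = 10_000    # USD/year, fixed sum or in expectation
experts_per_field = 50
num_fields = 20

total = pay_per_expert * experts_per_field * num_fields
print(f"${total:,}/year")  # → $10,000,000/year
```

So even generous payment to a thousand experts only gets to ~10% of the $100m/year target; the rest would presumably go to platform infrastructure, question generation, and aggregation research.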
I think the gist of this idea might be something like a massively-scaled up prediction platform that focuses on recruiting subject-matter experts and pays them to make predictions on questions relevant to their expertise while perhaps additionally discussing important/neglected trends in their fields.
The Center for Election Science could easily make efficient use of greater than $50M a year with infrastructure and ballot initiatives. We've already laid out a plan on how we would spend it. We could also potentially build towards some hyper-aggressive $100M years by including lobbying in the remaining states that don't allow ballot initiatives. In any case, we are woefully underfunded relative to our goals and could at the very least surpass the $50M threshold in a couple of years with sufficient funding. If even greater funding were available, we could build in lobbying following more state-level wins.
For clarity, our lack of funding has already cost us approval voting campaign opportunities and is a big issue for us.
Okay, but I'm not persuaded that the Center for Election Science is scientific. I think it should be called "The Center for Approval Voting (especially the single-winner district kind)™"
I studied electoral systems for a school project and reached very different conclusions, for instance: that all single-winner-district systems are inherently non-proportional and subject to gerrymandering. I went so far as to design my own system (I suppose its merits are debatable — but never debated). In emails from the CES I see none of the insights I gained in my school project — nothing about criteria for evaluating voting systems, no theories about what the goals of a voting system should be and how to achieve them... except narrowly-crafted articles focused on crowning Approval Voting the winner, usually without surveying alternatives.
Quite the contrary, CES newsletters read more like the many political propaganda emails from which I have long since unsubscribed.
I agree that maybe this is the best way to achieve your Approval Voting goals. Most political emails simply tell people what to believe and what to vote for, not bothering with evidence or balance. It's probably done this way because it works. But don't call it "science", okay?
Edit: Downvotes are not counterarguments. If you can't say why I'm wrong, maybe I'm not wrong.
Here are a few suggestions for near-term megaprojects:
- Longevity research
- Meat-replacement mega-cost-reduction investments (leapfrogging current tech)
- Eliminating disease-bearing mosquitoes
- Eliminating all vaccine-preventable diseases worldwide
- Developing cheap, universal metagenomic scanning for biosecurity (also see the slightly less ambitious version mentioned by Alex in a different answer)
- Large-scale governance reform initiatives
- Universally available, validated, well-built apps for CBT to reduce depression / increase happiness
- AI safety (we're doing this one already, so the key players may not have room for funding)
For AI safety, maybe Redwood has the most room for funding? They seem to be the most interested in growth (correct me if I'm wrong). And even if the existing players don't have more room, we need to think of other ways to scale up further through funding, as the field is clearly still too small to compete in the race against the titanic field of AI capabilities.
Agree longevity needs to be funded more as well, though lots of aging billionaires like Bezos seem to be throwing tons of money at it these days, so maybe EA money would be much less useful/uniquely needed there than in e.g. AI alignment.
I discussed this with a couple people ca. 2 years ago, and thought it was likely that a company like Google could design and produce a full stack secure system as a moderately large internal project. And some groups are already doing parts of this - for example, a provably secure OS microkernel, for far less than what we'd be able to spend.
As a Fermi estimate on the high end: if we hire 10 top hardware design people for $500k/year each, throw in the same number of OS design people, and compiler designers at the same cost, and a team of 50 great people to do the rest of the development and testing at $300k/year, $100m means that we have 3 years to do this. And it's an open-source project, so we'd get universities, etc. working on this as well. (I.e. we could not mass-produce the hardware at these prices, but that's commercialization, not design, and it should be funded by sales.)
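Spelling out that Fermi estimate (team sizes and salaries are the commenter's assumptions):

```python
# Back-of-the-envelope cost of the secure-stack project sketched above.
# All team sizes and salaries are assumptions from the comment.
design_teams = 3           # hardware, OS, and compiler design
people_per_team = 10
designer_salary = 500_000  # USD/year

dev_team = 50
dev_salary = 300_000       # USD/year

annual_cost = design_teams * people_per_team * designer_salary + dev_team * dev_salary
budget = 100_000_000
years = budget / annual_cost
print(f"${annual_cost:,}/year -> {years:.1f} years of runway")
# → $30,000,000/year -> 3.3 years of runway
```

So a $100m commitment buys roughly a three-year development window for an 80-person team at those rates, before any hardware fabrication costs.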
(not an expert) My impression is that a perfectly secure OS doesn't buy you much if you use insecure applications on an insecure network etc.
Also, if you think about classified work, the productivity tradeoff is massive: you can't use your personal computer while working on the project, you can't use any of your favorite software while working on the project, you can't use an internet-connected computer while working on the project, you can't have your cell phone in your pocket while talking about the project, you can't talk to people about the project over normal phone lines and emails... And then of course viruses get into air-gapped classified networks within hours anyway. :-P
Not that we can't or shouldn't buy better security, I'm just slightly skeptical of specifically focusing on building a new low-level foundation rather than doing all the normal stuff really well, like network traffic monitoring, vetting applications and workflows, anti-spearphishing training, etc. etc. Well, I guess you'll say, "we should do both". Sure. I guess I just assume that the other things would rapidly become the weakest link.
In terms of low-level security, my old company has a big line of business designing chips themselves to be more secure; they spun out Dover Microsystems to sell that particular technology to commercial (as opposed to military) customers. Just FYI, that's just one thing I happen to be familiar with. Actually I guess it's not that relevant.
Agreed that secure low level without application security doesn't get you there, which is why I said we need a full stack - and even if it wasn't part of this, redeveloping network infrastructure to be done well and securely seems like a very useful investment.
But doing all the normal stuff well on top of systems that still have insecure chips, BIOS, and kernel just means that the exploits move to lower levels. Even if there are fewer of them, the difference between 90% secure and 100% secure is far more important than moving from 50% to 90%. So we need the full stack.
Epistemic status: Confused person with zero expertise in this area
Who is "us" in this scenario? I assume it's meant to be "organizations with access to infohazardous bio/AI data"?
If so, what makes you think of the current infosec of these orgs as "unacceptable"? If you think they'd disagree with this characterization, do you have a sense for why?
If not, what do you see as some plausible consequences of weak infosec that could plausibly total $100m in damages for EA orgs if they came to pass, given that EA is a network of lots of organizations, with pretty limited funding and access to other valuable data per org?
(Even if something happened along the lines of "GiveWell leaks every donor's credit card number", I wonder what the actual damage would look like, given how often this sort of thing seems to happen to large organizations that don't go bankrupt as a result. And it's hard to imagine that most charities on GiveWell's scale would actually go positive-EV by investing millions of dollars in infosec.)
This is my impression based on (a) talking to a bunch of people and hearing things like "Yeah our security is unacceptably weak", "I don't think we are in danger yet, we probably aren't on anyone's radar", and "Yeah we are taking it very seriously, we are looking to hire someone. It's just really hard to find a good security person." These are basically the ONLY three things I hear when I raise security concerns, and they are collectively NOT reassuring. I haven't talked to every org and every person, so maybe my experience is misleading. Also, (b) on priors, it seems that people in general don't take security seriously until there's actually a breach. (c) I've talked to some people who are also worried about this, and they told me there basically isn't any professional security person in the EA community willing to work full time on this.
I will go further than that. Everyone I know in infosec, including those who work for either the US or the Israeli government, seem to strongly agree with the following claim: "No amount of feasible security spending will protect your network against a determined attempt by an advanced national government (at the very least, US, Russia, China, and Israel) to get access. If you need that level of infosec, you can't put anything on a computer."
If AI safety is a critical enabler for national security, and/or AI system security is important for their alignment, that means we're in deep trouble.
I see enormous value in it and think it should be considered seriously.
On the other hand, the huge amount of value in it is also a reason I'm skeptical that it is obviously achievable: there are already individual giant firms that would benefit internally to the tune of multi-million-dollar annual savings (not to mention the many billions the first firm marketing something like that would immediately earn) from having a convenient, simple, secure stack 'for everything', yet none seems to have anything close to it (though I guess many may have something like that in some sub-systems/niches).
So I'm just wondering whether we might underestimate the cost of development/use, despite my gut feeling strongly agreeing that it seems like such a tractable problem.
I think the budget to do this is easily tens of millions a year, for perhaps a decade, plus the ability to hire the top talent, and it likely only works as a usefully secure system if you open-source it. Are there large firms who are willing to invest $25m/year for 4-5 years on a long-term cybersecurity effort like this, even if it seems somewhat likely to pay off? I suspect not - especially if they worry (plausibly) that governments will actively attempt to interfere in some parts of this.
Agree with the "easily tens of millions a year", which, however, could also be seen to underline part of what I meant: it is really tricky to know how much we can expect from what exact effort.
I half agree with all your points, but I see implicit speculative elements in them too, and hence remain with a maybe all-too-obvious statement: let's consider the idea seriously, but let's also not forget that we're obviously not the first ones to think of this. In addition to all the other uncertainties, keep in mind that no one seems to have made much progress in this domain, despite the possibly enormous value even private firms might have captured if they had made serious progress on it.
Impact certificates. Announce that we will purchase NFTs representing altruistic acts, created by one of the actors. (Starting now, but with a one-year delay, such that we can't purchase an NFT unless it's at least a year old.) Commit to buy $100M/year of these NFTs, occasionally reselling them and using the proceeds to buy even more. Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have.
> Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have.
It may be critical that the purchasing decisions will somehow account for historical risks (even ones that did not materialize and are no longer relevant), otherwise this approach may fund/incentivize net-negative interventions that are extremely risky (and have some chance of being very beneficial). I elaborated some more on this here [EA · GW].
Think of it like a grants program, except that instead of evaluating someone's pitch for what they intend to do, you are evaluating what they actually did, with the benefit of hindsight. Presumably your evaluations will be significantly more accurate this way. (Also, the fact that it's NFT-based means that you can recruit the "wisdom of the efficient market" to help you in various ways, e.g. lots of non-EAs will be buying and selling these NFTs trying to predict what you will think of them, and thus producing lots of research you can use.)
I don't think it should replace our regular grants programs. But it might be a nice complement to them.
I don't see what you mean by centralization here, or how it's a problem. As for reliable guarantees the money will be used cost effectively, hell no, the whole point of impact certificates is that the evaluation happens after the event, not before. People can do whatever they want with the money, because they've already done the thing for which they are getting paid.
> Think of it like a grants program, except that instead of evaluating someone's pitch for what they intend to do, you are evaluating what they actually did, with the benefit of hindsight. Presumably your evaluations will be significantly more accurate this way. (Also, the fact that it's NFT-based means that you can recruit the "wisdom of the efficient market" to help you in various ways, e.g. lots of non-EAs will be buying and selling these NFTs trying to predict what you will think of them, and thus producing lots of research you can use.)
But the reason why you would evaluate someone's pitch as opposed to using hindsight is that nothing would be done without funding?
> I don't see what you mean by centralization here, or how it's a problem.
I think I am using "centralization" in the same way that cryptocurrency designers/architects use it when talking about how cryptocurrency systems actually work ("centralization pressures").
The point of NFTs, as opposed to you, me, or a giant granter producing certificates, is that it is part of a decentralized system, not under any one entity's control.
My understanding is that this is the only logical reason why NFTs have any value, and are not a gimmick.
They don't have any magical power by themselves or have any special function or information or anything like that.
Under this premise, decentralization is undermined if any other structural component of the system is missing.
For example, if the grantors or their decisions come from a central source. Then the value of having a decentralized certificate is unclear.
Note that undermining "decentralization" is sort of like having a wrong step in a math theorem: it's existentially bad, as opposed to a mere reduction in quality.
> As for reliable guarantees the money will be used cost effectively, hell no, the whole point of impact certificates is that the evaluation happens after the event, not before. People can do whatever they want with the money, because they've already done the thing for which they are getting paid.
I meant that you have written out two distinct promises here that seem necessary for this system to structurally work. One of these promises is high-quality evaluation:
> Commit to buy $100M/year of these NFTs, and occasionally reselling them and using the proceeds to buy even more.
> Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have.
Once it's established that you will be giving $100M a year to buy impact certificates, that will motivate lots of people already doing good to mint impact certificates, and probably also motivate lots of people to do good (so that they can mint the certificate and later get money for it).
By buying the certificate rather than paying the person who did the good, you enable flexibility -- the person who did the good can sell the certificate to speculators and get money immediately rather than waiting for your judgment. Then the speculators can sell it back and forth to each other as new evidence comes in about the impact of the original act, and the conversations the speculators have about your predicted evaluation can then help you actually make the evaluation, thanks to e.g. facts and evidence the speculators uncover. So it saves you effort as well.
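A toy numerical example of the cash flow described above may help; every dollar figure here is made up purely for illustration:

```python
# Toy illustration of the certificate-resale flow described above.
# All dollar figures are invented for illustration, not proposed values.
funder_valuation = 1_000_000  # funder's hindsight estimate of the act's impact
speculator_discount = 0.8     # speculator's risk-adjusted price today

# The person who did the good sells the certificate to a speculator
# immediately, rather than waiting a year for the funder's judgment...
creator_receives_now = funder_valuation * speculator_discount

# ...and the speculator collects the difference when the funder
# eventually buys the certificate with the benefit of hindsight.
speculator_profit = funder_valuation - creator_receives_now

print(creator_receives_now, speculator_profit)  # → 800000.0 200000.0
```

The discount is the speculator's compensation for bearing evaluation risk; the better speculators get at predicting the funder's judgment, the smaller that discount, and the more of the funder's valuation flows through to the person who did the good.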
Hire ~5 film-studios to each make a movie that concretely shows an AI risk scenario which at least roughly survives the rationalist fiction sniff test. Goal: Improve AI Safety discourse, motivate more smart people to work on this.
What about creating academic institutes in reputable universities to tackle important problems (e.g. similar to FHI or CSER), creating research prizes, and sponsoring conferences? I'm mostly thinking about AI safety, but it may be useful in other areas too.
Hard science funding seems able to absorb this scale of funding, though this might not count as 'EA-specific' projects:
- On climate: carbon capture, new solar materials, new battery R&D, maybe even fusion as 'hits-based giving'?
- On bio preparedness there's quite a lot, e.g. Cassidy Nelson's recommendations, Andy Weber's recommendations
The importance of internet connectivity is hard to overstate. It's necessary to function as 21st-century citizens and is the backbone of our societies. It's also necessary for securing various human rights.
Some quick reasons why internet access is important:
- Grants access to free education on just about anything
- Access to banking, communication technologies, etc.
- Increases economic growth (of which well-being is partly a function), since internet access effectively increases the computational power of the economic system and can 'improve' the substrate on which it runs (people)
- Increases awareness of EA in general
Wrote this quickly so apologies for the brevity. I've been working on a longer post where I dive into this in a lot more detail.
I see a few issues with it in this context, though.
In the short run, it will be prohibitively expensive for most of the world's population, and it doesn't solve the problem of device ownership.
I also don't like the idea of internet access being in the control of a company that is subject to national laws. I feel that we need a censorship-resistant internet, especially in the existing climate. We're increasingly seeing crackdowns across the world, and I don't think the US will be immune from increased internet suppression.
I think this would be broadly useful and in particular increase the reach of mobile payment-based activities like GiveDirectly. I'd be curious about estimates of how cost-effective increasing internet penetration would be, compared to throwing more money at GD.
I don't see climate research as very valuable. The value of information would only be high if this research would change how people act. Climate inaction seems to be mainly political inertia, not lack of information about potential catastrophe.
Do you mean just the fourth bullet, or do you think this about all four?
The 1980s nuclear winter and asteroid papers (I'm thinking especially of Sagan et al. and Alvarez et al.) were very influential in changing political behaviour: Gorbachev and Reagan explicitly acknowledged this on nuclear, and the asteroid evidence contributed to the 90s asteroid films and the (hugely successful!) NASA effort to track all 'dino-killers'. On the margin now, I think more scary stuff would be motivating. There's also VOI in resolving how big a concern nuclear winter is (e.g. some recent papers are skeptical): if it turned out to not be as existential as we thought, that would change cause prioritisation for GCRs.
On geoengineering (sorry 'climate interventions'(!)), note 'getting more climate modelling' is a key aim for e.g. Silver Lining.
I was just referring to the last bullet, re climate change. E.g. in the last IPCC report, it would have been reasonable for govts to believe that there was a >10% chance of >6°C of warming, and that has been true since the 1970s, without having any impact. The political response to climate change seems to be influenced by mainstream media coverage and public opinion, which in some circles it would be fair to characterise as 'very concerned' about climate change. An opinion poll suggests that 54% of British people think that climate change threatens human extinction (depending on question framing). I agree that in a rational world we would want to know how bad climate change could be, but the world isn't rational.
If you're just talking about EA cause prioritisation, the cost-benefit ratio looks pretty poor to me. Wrt reducing uncertainty about climate sensitivity, you're talking costs of $100m per year to have a slim chance of pushing climate change up above AI, bio, and great power war for major EA funders. Or we might find out that climate change is less pressing than we thought, in which case this wouldn't make any difference to the current priorities of EA funders.
I also don't see how research on solar geoengineering could be a top pick - stratospheric aerosol injection just doesn't seem like it will get used for decades because it requires unrealistic levels of international coordination. Also, I don't think extra modelling studies on solar geo would shed much light unless we spent hundreds of millions. Climate models are very inaccurate and would not provide much insight into the impacts of solar geo in the real world. There might be a case for regional solar geo research, though.
(FWIW, I really don't rate that Xu and Ramanathan paper. They're not using existential in the sense we are concerned about; they define it as "posing an existential threat to the majority of the population". The evidence they use to support their conclusions is very weak. For example, they note, following the Mora et al. study, that currently 30% of the population is exposed to deadly heat, which would increase to 74% at 4°C warming. But obviously it is not the case that all of these people will die, just as it is not the case that 30% of the world population today is dying due to heat waves. Moreover, 4°C will take until the end of the century, when most people will probably be a lot richer and so will have greater access to air conditioning. Climate change of that magnitude only makes the tropics uninhabitable in the sense that the Persian Gulf is uninhabitable today. There would be great humanitarian costs in low-growth agrarian economies, but that is a separate question from whether climate change poses an existential risk.)
Interesting first point, but I disagree. To me, the increased salience of climate change in recent years can be traced back to the 2018 Special Report on Global Warming of 1.5 °C (SR15), and in particular the meme '12 years to save the world'. Seems to have contributed to the start of School Strike for Climate, Extinction Rebellion and the Green New Deal. Another big new scary IPCC report on catastrophic climate change would further raise the salience of this issue-area.
I was thinking that $100m would be for all four of these topics, and that we'd get cause-prioritisation VOI across all four of these areas. $100m for impact and VOI across all four seems pretty good to me (however I'm a researcher not a funder!)
On solar geo, I'm not an expert on it and am not arguing for it myself, merely reporting that it's top of the 'asks' list for orgs like Silver Lining.
I actually rather like the framing in Xu & Ram. I don't think we know enough about >5 °C scenarios, so describing them as "unknown, implying beyond catastrophic, including existential threats" seems pretty reasonable to me. In any case, I cited that more to demonstrate the lack of research that's been done on these scenarios.
On the last point, during the early Pliocene, early hominids with much worse technology than us lived in a world in which temperatures were 4.5°C warmer than pre-industrial. It would surprise me if this level of warming would kill off everyone, including people in temperate regions. There's more to come from me on this topic, but I will leave it at that for now.
I have claimed that the first few hundred million dollars of preparation for agricultural [EA · GW] and electricity [EA · GW] disrupting GCRs is competitive with AGI safety for the longterm, and preparation for agricultural GCRs is more cost effective than GiveWell interventions. Since these catastrophes could happen right away, I think it does make sense to scale up quickly to $100 million per year to get the preparation fast. Beyond research, this money could be used for piloting new technologies and developing response plans and training. To maintain $100 million per year may then be lower cost effectiveness than AGI safety at the expected margin, but would still provide additional value and may be competitive with other priorities. Projects could include subsidizing resilient food sources such as seaweed, cellulosic sugar, methane single cell protein, etc. Or building factories flexibly such that they could switch quickly from producing animal feed or energy to human food. These could easily be many billions of dollars per year.
Take some EAs involved in public outreach, some journalists who made probabilistic forecasts of their own volition (Future Perfect people, Matt Yglesias, ?), and buy them their own news media organization to influence politics and raise the sanity- and altruism-waterline.
We could buy (a significant number of shares in) media companies themselves and shift their direction. Bezos bought the Washington Post for $250 million. Some are probably too big, like the New York Times at a $8 billion market cap and Fox Corporation at $20 billion.
I generally agree, although I think these >$1B general-audience entities are too expensive for EAs. Whereas I think it would make sense to buy media companies and consultancies that are somewhat focused on global security, AI and/or econ research, e.g. Foreign Policy magazine, Wired, GZero Media, Stratfor, the Economist Intelligence Unit, and so on. At least, I think the value of information from trying out buying one or more smaller entities, to see how one could steer them or bolster them with some EA talent, could be high. The most similar things I can think of EAs having done previously were investing in DeepMind and OpenAI.
Another way of thinking about this question is: are there other entities that are of less value to invest in than DM/OAI, but of more value than the media/consulting orgs that I mentioned?
I'm concerned that it would look really shady for OpenPhil to do so, but maybe Sam Bankman-Fried or another very big EA donor could do it - but then the purchaser needs to figure out who to pick to actually manage things, since they aren't experts themselves. (And they need to ensure that their control doesn't undermine the publication's credibility - which seems quite tricky!)
It could only be billionaires who are running out of donation targets. If Bezos can buy WaPo, then less prominent billionaires can buy less popular media with much less (though not zero) controversy. But I agree that it only works well if you have EA-leaning talent to work there, especially at the executive level.
Matt makes lots of money on his independent Substack now, so that feels less urgent, but funding other things like Future Perfect in other news sources, as the Rockefeller Foundation does now, seems great.
Urgent doesn't feel like the right word; the question to me is whether his contributions could be scaled up well with more money. I think his Substack deal is on the order of $300k per year, but maybe he could found and lead a new news organization, hire great people who want to work with him, and do more rational, informative, and world-improving journalism?
Thanks, I hadn't seen what he said about this. I just read an Atlantic article about it, and I don't see why it shouldn't be easy to avoid the pitfalls from his time with Vox, or why he wouldn't care a lot about starting a new project where he could offer a better way to do journalism.
Yglesias felt that he could no longer speak his mind without riling his colleagues. His managers wanted him to maintain a “restrained, institutional, statesmanlike voice,” he told me in a phone interview, in part because he was a co-founder of Vox. But as a relative moderate at the publication, he felt at times that it was important to challenge what he called the “dominant sensibility” in the “young-college-graduate bubble” that now sets the tone at many digital-media organizations.
Yeah, I guess the impression I had (from comments he made elsewhere — on a podcast, I think) was that he actually agreed with his managers that at a certain point, once a publication has scaled enough, people who represent its “essence” to the public (like its founders) do need to adopt a more neutral, nonpartisan (in the general sense) voice that brings people together without stirring up controversy, and that it was because he agreed with them about this that he decided to step down.
Interesting, the Atlantic article didn't give this impression. I'd also be pretty surprised if you had to become essentially the cliche of a moderate politician if you're part of the leadership team of a journalistic organization. In my mind, you're mostly responsible for setting and living the norms you want the organization to follow, e.g.
epistemic norms of charitability, clarity, probabilistic forecasts, scout mindset
values like exploring neglected and important topics with a focus on having an altruistic impact?
And then maybe being involved in hiring the people who have shown promise and fit?
Yeah, I mean, to be clear, my impression was that Yglesias wished this weren't required and believed that it shouldn't be required (certainly, in the abstract, it doesn't have to be). Nonetheless, he seemed to concede that from a practical standpoint, when this is what all your staff expect, it is required. I guess maybe then the question is just whether he could "avoid the pitfalls from his time with Vox," and my feeling is that one should expect that to be difficult, and that someone in his position wouldn't want to abandon a quiet, stable, cushy Substack gig for a risky endeavor that required betting on his ability to do it successfully. I think too many of the relevant causes are things you can't count on being able to control as the head of an organization, particularly at scale, over long periods of time, and I'd been inferring that this was probably one of the lessons Yglesias drew from his time at Vox.
In the short term yes, but my vision was to see a news media organization under the leadership of a person like Kelsey Piper that is able to hire talented reasonably aligned journalists to do great and informative journalism in the vein of Future Perfect. Not sure how scalable Future Perfect is under the Vox umbrella, and how freely it could scale up to its best possible form from an EA perspective.
The Economist has written that Goal 1 (ending poverty) should be "at the head of a very short list." In my opinion, if we're going to do a megaproject, we should take a handful of the SDG targets (such as 1.1, ending extreme poverty) and spend billions of dollars aggressively optimizing them.
Yes I know, thank you ADS, but I rather have in mind something like "Toward an Institute for the Science of Suffering" https://docs.google.com/document/d/1cyDnDBxQKarKjeug2YJTv7XNTlVY-v9sQL45-Q2BFac/edit#
I did some more thinking (still not full Fermis) and now think that this is a >1B project even for just a sufficiently good MVP, possibly considerably more.
Though most of the cost is upfront (digging, and constructing full bunkers with individual nuclear power plants). The running cost should be well under $100M/year, unless I'm missing something important.
Not that I know of, Nick Beckstead wrote a moderately negative review of civilizational refuges [EA · GW] 7 years ago (note that this was back when longtermist EA had a lot less $s than we currently do).
One reason I'd like to write out a moderately detailed MVP is that then we can have a clear picture for others to critique concrete details of, suggest clear empirical or conceptual lines for further work, etc., rather than have most of this conversation a) be overly high-level or b) be too tied in with/anchored to existing (non-longtermist) versions of what's currently going on in adjacent spaces.
Not sure if $100M is necessary or sufficient if you want many people, or even multiple organizations, to seriously work full-time on forecasting EA-relevant questions. Maybe it could also be used to spearhead the use of forecasting in politics.
Challenge prize(s) to incentivise the development of innovative solutions in priority areas. These could be prizes for goals already suggested by people in this thread (e.g. producing resilient food sources, drastic changes to diagnostic testing, meat alternatives underinvested in by the market) or others.
Quotes from a Nesta report on challenge prizes (caveat that I haven't spent any time looking up opposing evidence/perspectives):
By guiding and incentivising the smartest minds, prizes create more diverse solutions. Because prizes only pay out when a problem has been solved, you can support long shots, radical ideas and unusual suspects while minimising risk...
The high profile of a prize can raise public awareness and shape the future development of markets and technologies. Prizes can help identify best practice, shift regulation and drive policy change...
For the Ansari XPRIZE, 26 teams spent $100 million chasing the $10 million prize, jump starting the commercial space industry.
Scaling up carbon removal and other promising climate-related technologies before governments are willing to fund them. A lot like what Stripe and Shopify have been doing, but about an order of magnitude bigger. If the timing is right (I'm not sure it is) this strategy could get a fair bit of leverage by driving costs down and accelerating even larger-scale deployments.
Bezos bought the Washington Post for $250 million. We could try to buy some other media groups, or at least a significant number of shares in them. Some are probably too big, like the New York Times at $8 billion market cap and Fox Corporation at $20 billion.
I think food-related companies are also probably too big relative to impact, with market caps in the billions or tens of billions of dollars for Tyson, Pilgrim's Pride, JBS SA, and McDonald's. You could buy shares in smaller ones, but they probably also account for a disproportionately smaller share of farmed animals; still, getting a few of them to improve animal welfare policies could make the big ones look bad and push them to follow.
A network of reliable, long distance shortwave radio systems that do not depend on external sources of electricity and are unable to be disabled by widespread cyber attack, EMP, or most other threats to the global communication infrastructure.
In a wide range of catastrophes, communication systems are a critical vulnerability which, if disrupted, would delay societal recovery from the disaster. A highly resilient and reliable system is HAM shortwave radio, which allows reliable, low-cost communication with a significant fraction of the global population. Maintaining key high-speed communication channels during a catastrophe would greatly increase disaster resilience beyond flyer distribution, potentially at relatively little additional cost. A backup shortwave radio communication system would facilitate timely advice on where to locate clean water sources, help identify sensible relocation options, allow improved international cooperation, and enable coordination about the nature and likely duration of the outage.
We’ve identified HAM shortwave radios as key electronic equipment that is both likely to be highly resilient to global communication disruption on large or small scales, and as relatively easy to distribute. Another interesting use for these radios is distribution to power grid stations for use to aid blackstart communications after large scale electrical grid collapse.
While several network configurations may serve GCR reduction purposes, our preliminary network design involves around a dozen central stations receiving and broadcasting globally, a network of several hundred two-way NVIS transceiver networks operated by trained personnel, and a few thousand distributed receiver-only radios. The network would utilize SSB communications to lower power requirements. To cover the entire earth’s population we estimate the total construction and shipping cost at between USD $2 million and $10 million, scaling roughly proportionally with the fraction of global population able to be reached by the network.
Total costs would therefore reasonably reach into the tens to hundreds of millions for this sort of mega project, depending on the spatial density of the network.
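The cost structure sketched above can be put into a back-of-the-envelope model. All unit costs below are illustrative placeholders I'm assuming for the sketch, not figures from the proposal:

```python
# Back-of-the-envelope cost model for the shortwave radio network.
# Component counts come from the proposal above; unit costs are
# illustrative assumptions, not sourced figures.

def network_cost(central_stations=12,
                 nvis_networks=300,
                 receivers=3000,
                 cost_central=100_000,   # assumed cost per global broadcast station
                 cost_nvis=10_000,       # assumed cost per two-way NVIS network
                 cost_receiver=200,      # assumed cost per receive-only radio
                 population_fraction=1.0):
    """Construction + shipping cost, scaling roughly with population coverage."""
    base = (central_stations * cost_central
            + nvis_networks * cost_nvis
            + receivers * cost_receiver)
    return base * population_fraction

print(f"${network_cost():,.0f} for full global coverage")
print(f"${network_cost(population_fraction=0.5):,.0f} for half coverage")
```

With these placeholder unit costs the total lands inside the $2-10M construction range the comment cites; training, maintenance, and a denser network are presumably what push the full program into the tens or hundreds of millions.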
Announce $100M/year in prizes for AI interpretability/transparency research. Explicitly state that the metric is: "How much closer does this research take us towards being able, if one day we build human-level AGI, to read that AI's mind, understand what it is thinking and why, and what its goals and desires are, ideally in an automated way that doesn't involve millions of person-hours?" (Could possibly do it as NFTs, like my other suggestion.)
I don't know the area well, but I guess that one option would be to invest in relevant AI companies, to be able to influence their decision-making (and it could also be profitable). I guess that one could in principle invest very large sums in that. And unlike some other suggested projects, it is maybe not necessarily logistically complicated (though it depends on the set-up). Cf. Ryan's comment.
I hope it’s ok to mention something I’d like to do at Foresight Institute:
Crowdsource + Crowdfund Civilization Tech Map
Build on this map for Civilizational Self-Realization (scroll to end of article) to create an interactive technology map for positive long-term futures that is crowdsourced (Wikipedia-style) and allows crowdfunding (Kickstarter-style)
The map surveys the ecosystem of areas relevant for civilizational long-term flourishing, from health, AI, computing, biotech, nanotech, neurotech, energy, space tech, etc.
The map branches out into milestones in each area, and either lists projects solving them, or requests projects to solve them, including options to fund either
Crowdsourcing of milestones and requests for projects will get it very wrong at first but can get continuously course corrected, e.g. via prediction markets
Crowdfunding makes more and more people have skin in the game for the long-term future, e.g. via tokenization, retroactive public goods funding, or a similar mechanism
In sum, the map can serve as a north star to coordinate those seeking to work toward positive futures and those seeking to fund such work.
9 PACs have raised/spent more than $100m (source). So an EA PAC?
Although I guess Sam Bankman-Fried was the second-largest donor to Biden (coindesk, Vox), and Dustin Moskovitz gave $50m; and they're both involved with Future Forward and Mind The Gap, so maybe EA is already kinda doing this.
Governments as they exist today seem antiquated to me as they are linked to particular geographic regions, and the particular shapes and locations of those regions are becoming increasingly irrelevant.
Meanwhile some governments are good at providing for their people – social security, health insurance, enforcement of contracts, physical protection, etc. – so that’s fine, but there are also a lot of governments that are weak in one or more of these critically important departments.
If there were a market of competing global governments, we'd get labor mobility without anyone actually having to move. The governments that provide the best services at the lowest prices would attract the most citizens.
These governments could draw on something like Robin Hanson’s proposal for a reform of tort law to incentivize a market for crime prevention, could use proof of stake (where the stake may be a one-time payment that the government holds in escrow or a promise of a universal basic income to law-abiding citizens) for some legal matters, and could use futarchy for legislation.
They could also provide physical services, such as horizontal health interventions and physical protection in countries where they can collaborate with the local governments.
An immediate benefit would be the reduction of poverty and disease, but they could also serve to unlock a lot of intellectual capacity by giving people the spare time to educate themselves on matters other than survival. They could define protocols for resolving conflicts between countries and lock in incentives to ensure that the protocols are adhered to. (I bet smart contracts can help with this.)
That way, they could form a union of autonomous parts sort of like the cantons of Switzerland. Such a union of global distributed governments could eventually become a de-facto world government, which may be beneficial for existential security and for enabling the Long Reflection.
Such a government could be bootstrapped out of the EA community. A nonpublic web of trust could form the foundation of the first citizens. If the system fails even when the citizenry is made up largely of highly altruistic, conscientious people who can pay taxes and share a similar culture, it’s probably not ready for the real world. But if it proves to be valuable, it can be gradually scaled up to a broader population spanning more different cultures.
I’ve come to feel like it’s a red flag if such a project bills itself as a distributed state or something of the sort. There seems to be a risk that people would start such a project only to do something grand-sounding rather than solve all the concrete problems that a state solves.
I’d much rather have a bunch of highly specialized small companies that solve specific problems really well (and also don’t exclude anyone based on their location or citizenship) than one big shiny distributed state that is undeniably state-like but is just as flawed as most geographic states, because it would just add one more flawed and hard-to-coordinate actor to the international scene, and make international coordination harder rather than easier.
The ideal project here is probably something that incubates and coordinates other small projects that provide specific services to solve specific problems while not discriminating based on location or citizenship but that never uses terms like “state,” “government,” or “country” for itself.
An added benefit is that a lot of my conversations about distributed states quickly became about “Is this really a distributed state/government/country,” which is one of the least interesting conversations to have. (That’s something I’d rather leave to trained lexicographers with big corpora to figure out.) I’d much rather have conversation about whether it solves the problems it sets out to solve and at what cost.
As I alluded to in a comment [EA(p) · GW(p)] to KHorton's related post, I believe SoGive could grow to spend something like this much money.
SoGive's core idea is to provide EA style analysis, but covering a much more comprehensive range of charities than the charities currently assessed by EA charity evaluators.
As mentioned there, benefits of this include:
SoGive could have a broader appeal because we would be useful to so many more people; it could conceivably achieve the level of brand recognition enjoyed by charity evaluators such as Charity Navigator, which has high brand recognition in the US (roughly 50%).
Lots of the impact here is the illegible impact that comes from being well-known and highly influential; this could lead to more major donors being attracted to EA-style donating, or many other things.
There's also the impact that could come from donating to higher impact things within a lower impact cause area, and the impact of influencing the charity sector to have more impact
Full disclosure: I founded SoGive.
This short comment is not sufficient to make the case for SoGive, so I should probably write up something more substantial.
I like this! However, in a perfect world, rather than there being one university (or one institute at one university) that studies global priorities, wouldn't all top research universities across the world have global priorities schools (just as business or policy schools are prevalent at most research universities)? With philosophers and scientists working together in one school on having the most impact on humanity, and coordinating with one another on how to do so. Students could get PhDs in Global Priorities Research (with specialization in one of the sub-fields, as business schools offer), and undergraduates at universities around the world could major in global priorities, with paths towards academia and industry. Students majoring in GPR would all take classes in the core topics (e.g., longtermism, global health and development, animal rights) and could create joint majors with philosophy or one of the (social) sciences.
Business schools were only popularized about 100 years ago, and look at how much their proliferation has incentivized study and work in this space. Also, once the top universities create these GPR schools, many other universities not funded by EA would likely follow (especially if it's a profitable, self-sustaining business model). This might cost more than $100 million, though... there's probably data out there on how much it initially cost to start business schools and policy schools.
avoids the many issues seen in traditional academia.
is James' central claim. I personally find myself confused about how much EA research should be done in academia vs outside of it; I can imagine us moving more towards academia (or other more standardized systems) as we institutionalize.
I looked up GiveDirectly's financials (a charity that does direct cash transfers) to check how easily it could be scaled up to megaproject-size and it turns out, in 2020, it made $211 million in cash transfers and hence is definitely capable of handling that amount! This is mostly $64m in cash transfers to recipients in Sub-Saharan Africa (their Givewell-recommended program) and $146m in cash transfers to recipients in the US.
How can we foster longterm global trust and status as a social movement? In order to foster global backing for some of the movement's non-normative or 'creative' ideas (e.g., build post-apocalyptic bunkers to help re-build society in case of nuclear war) that may actually be highly impactful in the longterm future, we likely need to first prove ourselves as a movement that can actually create large-scale global impact.
Here's one idea for a megaproject that could help to foster global trust/status by proving our ability to use evidence and reason to make a positive impact on the world:
Part 1: Survey representative samples of most (or all) countries and ask them “if you had 100 million dollars and wanted to use this money to make the world a better place, how would you spend it?”, giving open-ended text and a rank order option of some of the things we’re considering
Getting cross-cultural responses to this question could produce the most global backing for EA, and it could look *very good* if we made the movement more democratic! But the latter is an empirical question (i.e., perceived trust in a social movement when the movement relies on experts only, the masses only, or a mix of experts and the masses, vs. a no-mention control).
Part 2: Create a list of top 10 or so most cared about global issues, and have EA researchers rank each of them in terms of total impact and effectiveness
Part 3: run an RCT, again on nationally representative samples globally, and compare the globally top-ranked cause area to the most effective cause (by EA lights) within the top 10 (if these two aren't the same) to look at trade-offs between indirect movement-building impact and direct cause-area impact. After this RCT, choose the cause area that will produce the most total impact as the "winner" of the $100 million grant.
Part 4: run a large grant competition to find the best approaches to solving whatever cause area is selected globally (note: I'd hypothesize that it's very important to solve big issues *globally* to facilitate a new norm of collective global action and foster obligation perceptions towards EA from all countries), fund R&D for about 5-10 years (rough estimate), and then roll out the most effective intervention(s) based on these findings.
(Repeat this every X years to maintain longterm support of EA)
Samo Burja said at a meetup the other day that he thinks Vitalik Buterin should give a medium university 10 million dollars to put ten top tier internet bloggers on tenure. No idea if that's a good idea or anywhere near possible, but it could use a decent amount of money for a while.
Educate, empower, and enable diverse talent to work on solutions for the world’s biggest issues.
What is it?
A remote school offering tuition-free education and job placement for vital roles (data scientist, researcher, engineer, etc.) in areas of crucial need (climate, economics, healthcare, etc.).
Identify important areas where key talent is lacking.
Establish tuition-free online school led by top thinkers.
Dispense task-oriented knowledge in short period of time.
Create post-graduation job placement program for sectors in need.
Remove barriers to higher education.
Create access to opportunities, regardless of location, language, background, etc.
Lift people out of poverty.
Funnel talent into organizations and projects that need the most support.
Solve range of vital issues.
Grow pool of world problem solvers.
Inspire next generation of doers and founders.
Open up to more students, more languages, more education levels, more areas of speciality.
Create accelerator program to invest in alum startups.
Two things that scale well are knowledge and technology. So, rather than attempt to choose a single area of focus, create a megaproject that both democratizes pursuits and crowdsources solutions. This has the potential to produce a network effect across a variety of problems, while removing hierarchical barriers. Scaling continues until new talent declines to join and/or roles disappear, or problems are solved (due to a lack of new focus areas and/or some yet-to-be-realized superior option, i.e. ML/AI).
Doing something to democratize randomized controlled trials (RCTs) - thereby reducing the risk involved in testing new ideas and interventions.
RCTs are a popular methodology in medicine and the social sciences. They create a safety net for the scientists (and consumers) to test that the drug works as intended and doesn't turn people into mutants.
I think using this methodology in other fields would be a high-leverage intervention. For example startups, policy-making, education, etc. Being able to try out new ideas without facing a huge downside should be a feature of every field. Big institutions already conduct similar tests before they release something. But I'm wondering how useful it would be to allow small institutions, startups, and maybe even individuals to do this.
Plus, adding an RCT into the launch pipeline of any intervention/product allows us to see the unintended consequences before they're out there. I think this would have at least been helpful for the social media companies.
Based on some googling, I've understood that RCTs are very costly. But if the reasoning makes sense, this is exactly the kind of thing that others can't try out and that a megaproject should.
Here's a paraphrased quote by Eliezer Yudkowsky, that is relevant in this context: If people could learn from their mistakes without dying from them, well actually, that in itself would tend to fix a whole lot of problems over time. [source]
P.S. I'm thinking of working on this idea full-time in 2022. It would be very helpful to hear whatever criticism/thoughts you have; it'll help me make sure my time is spent effectively.
An organizational version of 80k, GiveWell, or Project Drawdown for "incentives". That is, an organization that specializes in 1) solving incentive problems in the most effective way possible (ease of implementation, minimizing costs, minimizing side effects...), and 2) identifying priority changes based on its research (in general, or for specific public policies such as climate change or longtermism...).
Yes (and sorry for my English, I am French and not very good at it). Summary in a few lines: at the level of a country (though it could be at another level of governance), the organization chooses one or several indicators, aiming at maximizing long-term well-being. It identifies the priority areas affecting them (based on importance, neglectedness, and tractability). For each area, it analyzes the incentive structure, meaning all the forces that push in a certain direction (e.g. what are the incentives of the 40 most influential people and organizations in this area?). It compares this with the system that would be needed to move forward in a robust way (which implies, and this would be the whole purpose of the organization, developing expertise on this). It then identifies the most relevant levers to make the system evolve (ease of implementation, political acceptability, efficiency...). Finally, it prioritizes each area according to the expected utility of the proposed systemic reforms.
One can also imagine a less ambitious version, for example a J-PAL of incentives, which would help governments calling on them for a specific problem (for example: increasing the mathematical performance of students).
I identify several advantages. 1) Focuses decision makers on priority problems (like 80k does for individual careers, or Givewell for donations). 2) Incentives are a language that speaks to economists, whose influence on governments is significant. They have a real impact on the world, are often not aligned with the common good, and seem fairly objectifiable (in an otherwise extremely complex social world). 3) The cost-benefit ratio can be very high insofar as some systemic changes have almost no cost.
The best example I can think of is this article by Eliezer Yudkowsky (a comprehensive reboot of law enforcement), which gives an overview of the process I imagine. And with more quantitative models, an analysis of the decision-making process to improve the chances of implementation, better knowledge of the effects of various incentives, the help of superforecasters, etc., I think it can be improved.
We could finance ballot initiatives, lobbying, or running our own candidates. Running US presidential primary candidates could shift conversations and bring attention to issues (although bringing attention to an issue can backfire). Bloomberg spent over $500 million on his own presidential primary campaign and finished fourth.
Running presidential candidates could be risky for EA, though. Non-partisan ballot initiatives seem safer.
This isn't really a megaproject, but I'm a bit busy to make a top-level post of it so I'm dropping it in here.
An evidence clearinghouse informed by Bayesian ideas and today's political mess.
One of humanity's greatest sources of conflict in the modern era is disagreement about (1) the facts, and (2) how to interpret them. Even basic facts are often difficult to distinguish from severe misinterpretations. I used to be hugely interested in climate misinformation, and now I'm looking at anti-vax stuff, but the problem is the same and has real consequences, from my unvaccinated former legal guardian dying of Covid (months after I questioned [LW · GW] popular anti-vax evidence), to various genocides that were fueled by popular prejudices.
To me, a central problem is that (1) most people believe it is easy to figure out what the truth is, so do not work very hard at verifying facts, (2) don't actually have enough time to verify facts anyway (doing it well is hard and very time-consuming!), and (3) are wasting a lot of effort by doing it because there is no durable place where the information you discover can be permanently stored, shared, and cross-referenced by others. The multi-millionaire antivaxxer Steve Kirsch has a dedicated substack with "thousands" of customers paying $5/mo. or $50/year to hear his latest Gish Gallop, while debunkings of Steve Kirsch are randomly scattered around and (AFAIK) unprofitable. If I personally discover something, I might mention it to someone on ACX and/or dump it in the old thread [LW(p) · GW(p)] I linked to above, and here's a guy who got 359 "claps" on Medium for his debunking. The response is disorganized and not nearly as popular as the original misinformation.
Another example: I spent 27 years in a religion I now know is false.
Or consider what happened on the extremely popular Joe Rogan program that inspired this meme (a joke, but some believe it was a true story):
Joe Rogan: hamburgers are good but I am trying to eat less pork
Guest: hamburgers are made with beef
Joe Rogan: ham is from pork, it says ham in hamburger
Guest: it is beef
Joe Rogan: that’s not what I’ve heard, Jamie look that up
Jamie: it beef
Guest: it beef
Joe: ok but can we really trust hamburger makers and butchers and grocery stores when the word ham is in hamburger and ham means pork
Joe Rogan Fans: this is why I like him he is good at thinking
There are studies (Singer et al., Patone et al. 2021) that say there is a small risk of myocarditis in young people who catch Covid, and a much smaller risk of myocarditis in young people who take a mRNA Covid vaccine. Naturally, since he often listens to anti-vaxxers, Rogan had it backwards and thought the risk was higher in those who had a vaccine. If you watched this program, you'd probably come away confused about whether vaccines are worse than the disease or not.
Obviously a web site isn't going to solve this whole problem, but the absence of such a web site is a serious problem that we can solve.
Another way of framing the central problem is as a matter of distrust of institutions. My sense is that a large minority of the population doesn't trust government organizations and doesn't trust scientific research if it is done with money from the government or big companies, yet at the same time they do seem to trust random bloggers and political pundits who have the "right" opinions. But it's worse than that: anybody can put up a PDF and say "this is a peer-reviewed paper", or put up a web site and call it a peer-reviewed journal. For instance, consider the Walach paper that was retracted for various errors, such as the antivax cardinal sin of ignoring base rates of disease and death—see if you can spot this error in action:
...there were 16 reports of severe adverse reactions and 4 reports of deaths per 100,000 COVID-19 vaccinations delivered. According to the point estimate [...] for every 6 (95% CI 2-11) deaths prevented by vaccination in the following 3–4 weeks there are approximately 4 deaths reported to Lareb that occurred after COVID-19 vaccination. Therefore, we would have to accept that 2 people might die to save 3 people.
But antivax scientists have their own "peer-reviewed journal", which republished the paper with no mention of the earlier retraction, and Kirsch simply linked to that instead. Right now, to figure out that this paper is garbage, you have to suspect that "something is wrong" with it and its journal, and to know what's wrong with it exactly, you have to comb through it looking for the error(s). But that's hard! Who does that? No, in today's world we are almost forced to rely on a more practical method: we notice that the conclusion of the paper is highly implausible, and so we reject it. I want to stress that although this is perfectly normal human behavior, it is exactly like what anti-science people do. You show them a scientific paper in support of the scientific consensus and they respond: "that can't be true, it's bullsh**!" They are convinced "something is wrong" with the information, so they reject it. If, however, there were some way to learn about the fatal flaws in a paper just by searching for its title on a web site, people could separate the good from the bad in a principled way, rather than mimicking the epistemically bad behavior of their opponents.
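The base-rate error is easier to see with a quick calculation. The only figure below taken from the quoted paper is the "4 deaths reported per 100,000 vaccinations"; the all-cause mortality rate is an illustrative assumption:

```python
# How many deaths would we expect *by coincidence* in the weeks after
# vaccinating 100,000 people? The annual all-cause mortality rate here
# (~0.9%/year) is an assumed, illustrative rich-country figure.
people = 100_000
annual_mortality = 0.009
weeks = 4

expected_background_deaths = people * annual_mortality * weeks / 52
reported_deaths = 4  # per 100,000 vaccinations, from the quoted paper

print(f"Expected background deaths in {weeks} weeks: {expected_background_deaths:.0f}")
print(f"Deaths reported after vaccination: {reported_deaths}")
```

Under these assumptions, roughly 69 of every 100,000 people would die in any four-week window regardless of vaccination, so 4 passively reported post-vaccination deaths carry essentially no evidence of causation; the retracted paper treats them as if they were all caused by the vaccine.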
So I envision a democratization of evidence evaluation, as an alternative to the despised "ivory towers". A site where anyone can go to present evidence, vote on its significance, and construct arguments. Something that uses Wikipedia and other well-sourced articles as a seed, and eventually grows into something hundreds of times larger. Something that has an automated reputation system like StackOverflow. Something that has a network of claims, counterclaims, and evidence for each. Where no censorship is necessary, as false claims are shown not to be credible under the weight of counterevidence. Where people recursively argue over finer and finer points, and recursively combine smaller claims ("greenhouse gases can increase average planetary surface temperature", "humans are causing a net increase of greenhouse gases") to build larger claims ("humans are causing global warming via greenhouse gas emissions"). Where vague or inaccurate claims get replaced over time by clearer and more precise claims. Where steelmen gain more prominence than strawmen. Where offline and paywalled references must be cited with a quote or photo so users can verify the claim. Where people don't "like or dislike" statements, but vote on epistemically useful questions like "this is a fair summary of the claim made in the source" and "the conclusion follows from the premises", and where the credibility of sources is itself an entire universe of debate and evidence.
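A minimal sketch of what the underlying data model for such a site might look like, composing smaller claims into larger ones and attaching sourced evidence and question-specific votes. All names here are hypothetical, not a real system's API:

```python
# Hypothetical data model for the claim network described above:
# claims cite sub-claims as premises, link evidence for and against,
# and collect votes on specific epistemic questions rather than
# generic likes. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source_url: str
    quote: str          # offline/paywalled sources must include a quote
    supports: bool      # True = supports the claim, False = counterevidence

@dataclass
class Claim:
    text: str
    premises: list["Claim"] = field(default_factory=list)
    evidence: list[Evidence] = field(default_factory=list)
    # votes keyed by epistemic question, e.g. "fair summary of the source"
    votes: dict[str, int] = field(default_factory=dict)

# Smaller claims compose into a larger one, as in the example above:
a = Claim("Greenhouse gases can increase average planetary surface temperature")
b = Claim("Humans are causing a net increase of greenhouse gases")
big = Claim("Humans are causing global warming via greenhouse gas emissions",
            premises=[a, b])
print(len(big.premises))  # 2
```

The point of the structure is that disagreement localizes: to dispute the big claim, you have to dispute a specific premise or a specific piece of evidence, recursively.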
This site is just one idea I have under my primary cause area, "Improving Human Intellectual Efficiency" (IHIE), which, taken as a whole, could be a megaproject. I have been meaning to publish an article on the cause area, but haven't found the time and motivation to do it in the last year. Anyway, while it's possible to figure out the truth in today's world, it's only via luck (e.g. good teachers) or a massively inefficient and unreliable search process. Let's improve that efficiency, and maybe fewer people will volunteer to kill and die, and more people will understand their world better.
I think this relates to the top-rated answer too, since the lack of support for nuclear power is driven by unscientific myths. After Fukushima, it seemed like no one in the media was even asking how dangerous X amount of radiation is, as if it made sense to forcibly relocate over 100,000 people without checking the risk first. The information was so hard to find that I ended up combing through the scientific literature for it, and I didn't find it there either, just some figures I could use as inputs for my own back-of-envelope calculation, which indicated (IIRC) that 100 mSv of radiation might yield a 0.05% chance of death by leukemia, less than the normal risk from air pollution. Was my conclusion reasonable? If this site existed, I could pose my question there.
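A back-of-envelope calculation along those lines can be reconstructed under the linear no-threshold (LNT) assumption. The leukemia risk coefficient below is an assumed figure (~0.5% excess mortality per sievert, in the ballpark of ICRP-style nominal estimates), and dose-response at low doses is contested, which is the commenter's point:

```python
# LNT-style back-of-envelope reconstruction of the radiation estimate.
# The risk coefficient is an ASSUMPTION for illustration, not a
# sourced figure; low-dose dose-response is genuinely uncertain.

dose_sv = 0.1                    # 100 mSv expressed in sieverts
leukemia_risk_per_sv = 0.005     # assumed: ~0.5% excess mortality per Sv

risk = dose_sv * leukemia_risk_per_sv
print(f"{risk:.2%}")             # 0.05% chance of death by leukemia
```

With that assumed coefficient the arithmetic reproduces the 0.05% figure, but the whole question is whether the coefficient (and linearity itself) is right, exactly the kind of dispute such a site could host.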
Technological developments in the biotech / pharma industry are notoriously expensive, and my (fairly subjective) impression is that the industry is riddled with market failures.
Especially when applied to particularly pressing problems like pandemic prevention / preparedness, infectious diseases in LMICs, vaccines, ageing and chronic pain, I think EA for-profits and non-profits in this industry could absorb 100 million dollars of annual funding while providing high expected value in terms of social impact.
(Idea probably stolen from somewhere else.) Create an organisation employing an army of superforecasters to gather facts and/or forecasts about the world that are vitally important from an EA perspective.
Maybe it's hard to get to $100 million? E.g. 400 employees each costing $250k per year would get you there, which (very naively) seems on the high end of what's likely to work well. Also, other comments in this post have noted that CSET was set up for $55m over 5 years.
Maybe diet pledge programs like Veganuary and Challenge 22? They could spend a lot more on ads and expand to more countries. Maybe this would be better set up like the Open Wing Alliance, where The Humane League supports, trains and regrants to local organizations working on cage-free campaigns in different countries.
I'm not sure this could reach $100 million while still spending reasonably cost-effectively, though.
Shamelessly copy the success of StitchFix, but apply it to the food industry, sending only information rather than the actual food.
I've thought about this one a lot, so I'll try my best to summarize it: Cook. Eat. Rate. Repeat.
The foundation would have data scientists and engineers behind the scenes who help customers find their perfect recipes via information and testing. It would eventually expand into eating out at sustainable restaurants based on customer feedback, then into community vertical farming, and finally into individual household vertical farming.
The company Yummly is pretty close to this but isn't quite there yet, and is expanding in the wrong direction imo.
Revamping the food industry so that we are not dependent on grocery stores' supply chains, instead growing food downstairs inside our own homes and creating absolutely delicious recipes from around the world, would have a massive, healthy impact. Something the US could benefit from easily. In my opinion, it's only a matter of time before every individual will have to (mostly) live off the land in their backyard again, and to curb that catastrophe, we create FoodieFix.
Revamping the food industry so that we are not dependent on grocery stores' supply chains, instead growing food downstairs inside our own homes and creating absolutely delicious recipes from around the world, would have a massive, healthy impact
Sorry, why? This just seems really minor in the grand scheme of things, unless I'm missing something important (which is very possible).
The alternative is to have many projects start out small and just help them to scale up quickly. This is what Silicon Valley does, and it often works in spades. Basically all of the most successful companies started as tiny ventures, not megaprojects.
Starting and funding megaprojects from scratch is something you generally only do when you have no other option.
So if the question is, "which existing projects should scale to get $100m+", that's fine, but if the expectation is that these will be totally new projects, I'd suggest hesitancy.
How relevant is that literature on "megaprojects"? As far as I can tell it seems mostly focused on infrastructure - e.g. construction of big dams, bridges, and so on. Those projects seem very different from the kinds of projects that Ben and Will talk about. (Plus the latter have a smaller size, as mentioned.)
I don't think the term "megaproject" is misleading or confusing, though others may disagree. The fact that Flyvbjerg and others have used it in one sense doesn't necessarily mean we can't use it in another sense.
I appreciate Ozzie flagging this, since a nontrivial fraction of the costs of my proposed idea (shelters) would in fact be construction costs for a fairly difficult/novel thing (e.g. an underground shelter with BSL-4 entry requirements and enough food, fuel, and technical sophistication to support >100 people plus >5,000 frozen fertilized embryos for >30 years), so even if the objection is not applicable to the other project ideas, it should be applicable to mine.
My impression is that the commonality of megaproject failure is more "a really big project, with often a bunch of stakeholders, and is difficult to incrementally develop", more so than being about bridges/dams in particular. Many huge software projects fit similar patterns and have had similar fates. Many large technocratic initiatives also had a lot of problems.
If you take out software, hardware, and technocratic initiatives, I'm not sure what kinds of projects there are that could make it to the $100M mark.
Honestly, many of the projects in the thread are more susceptible to the same flaws that apply to these infrastructure projects. Bridges and dams are far more tangible, and benefit from deep pools of experience.
Related to the bigger goal, I think few people here believe the value of this thread is in brainstorming a specific project proposal.
Rather, there's lots of other value, e.g. in seeing if any ideas or domains pop out that might help further discussion, and knowledge of existing projects and experts might arise.
(There's also a perspective that is a bit snobby and looks down on big, grandiose planning).
FWIW my reading of the question is: "What projects could be created, that have the potential to scale to $100m". I didn't read it as suggesting funding a megaproject from scratch.
Many EA projects are of the "start a research institute" flavour, and will likely never absorb $100m. I see the post as a plea for projects which could (after starting with smaller amounts and then scaling) absorb these sums of money. Much like GiveDirectly wasn't started with a $100m/year budget right away, but has since proven itself capable of deploying that much funding.
Megaprojects cost $1 billion or more. Ben Todd was using the (admittedly somewhat confusing) term 'EA megaproject' by which he meant a new project that could usefully spend $100m a year. So these concerns about megaprojects don't apply. How about we use the term '$100m-scale project'? (I considered 'kiloproject' but that's really niche.)
It sounds like there are two very different concerns here.
One is how large the project is. $100M vs. $1Billion.
The second is how "gradual" that project can be. Like, can it start small, or do we need to allocate $100M at once?
The concern I was bringing up was more about the latter. My main point was just that we should generally prioritize projects that can be neatly scaled up over ones that require a huge upfront cost.
In fairness, I think most of the suggested examples are things that have nice ramps of scaling them up. For example, the nuclear funding gap seems fairly gradual, and the Anthropic team seems to be mainly progressing ideas that they worked on from OpenAI.
Projects I'd be more concerned are ones like, "We've never done this sort of thing before, we really can't say how successful it will be, but here's $100M, and it needs to be spent very quickly using plans that can't change much at all."
I'm not that concerned about the $100M vs. $1Bil difference. Many groups grow over time, so I'd imagine that most exciting $100M projects would be very likely to reach $1Bil after a few years.
Maybe just include something like this in the description:
"By Megaproject, I'm referring to any project that could eventually be scaled up to $100 Million, not ones that are planned from the start to cost $100 Million. In many cases this could include very small efforts that would have to achieve multiple levels of success to eventually get $100Million+ per year."
I expect many people will also read these comments, so it's not particularly important, but it could be nice.
As Stefan notes, Khorton's post on this is worth reading. She argues that even larger projects like GiveWell have <$100m annual budgets (most of the money moved is regranting), and so finding good projects that can spend $100m a year over the long term may be even more difficult.