Problem areas beyond 80,000 Hours' current priorities
post by Ardenlk
Why we wrote this post
Potential highest priorities
Great power conflict
Governance of outer space
Improving individual reasoning or cognition
Global public goods
Atomic scale manufacturing
Broadly promoting positive values
Whole brain emulation
Risks of stable totalitarianism
Risks from malevolent actors
Safeguarding liberal democracy
Recommender systems at top tech firms
We may need to invest more to tackle future problems
Other longtermist issues
Science policy and infrastructure
Improving institutions to promote development
Space settlement and terraforming
Lie detection technology
Wild animal welfare
Other global issues
Biomedical research and other basic science
Increasing access to pain relief in developing countries
Other risks from climate change
Smoking in the developing world
Why we wrote this post
At 80,000 Hours we've generally focused on finding the most pressing issues and the best ways to address them.
But even if some issue is 'the most pressing'—in the sense of being the highest impact thing for someone to work on if they could be equally successful at anything—it might easily not be the highest impact thing for many people to work on, because people have various talents, experience, and temperaments.
Moreover, the more people involved in a community, the more reason there is for them to spread out over different issues. There will eventually be diminishing returns as more people work on the same set of issues, and both the value of information and the value of capacity building from exploring more areas will be greater if more people are able to take advantage of that work.
We're also pretty uncertain which problems are the highest impact things to work on—even for people who could work on anything equally successfully.
For example, maybe we should be focusing much more on preventing great power conflict than we have been. After all, the first plausibly existential risk to humanity arose with the creation of the atom bomb; it's easy to imagine that wars could incubate other, even riskier technological advancements.
Or maybe there is some dark horse cause area—like research into surveillance—that will turn out to be way more important for improving the future than we thought.
Perhaps for these reasons, many of our advisors guess that it would be ideal if 5-20% of the effective altruism community's resources were focused on issues that the community hasn't historically been as involved in, such as the ones listed below. We think we're currently well below this fraction, so it's plausible some of these areas might be better for some people to go into right now than our top priority problem areas.
Who is best suited to work on these other issues? Pioneering a new problem area from an effective altruism perspective is challenging, and in some ways harder than working on a priority area, where there is better training and infrastructure. Working on a less-researched problem can require a lot of creativity and critical thinking about how you can best have a positive impact by working on the issue [EA · GW]. For example, it likely means working out which career options within the area are the most promising for direct impact, career capital, and exploration value, and then pursuing them even if they differ from what most other people in the area tend to value or focus on. You might even eventually need to 'create your own job' if pre-existing positions in the area don't match your priorities. The ideal person would therefore be self-motivated, creative, and willing to chart a course for others, as well as have a strong interest or relevant experience in one of these less-explored issues.
We compiled the following lists by combining suggestions from 6 of our advisors with our own ideas, judgement, and research. We were looking for issues that might be very important, especially for improving the long-term future, and which might be currently neglected by people thinking from an effective altruism perspective. If something was suggested twice, we took that as a presumption in favor of including it.
We're very uncertain about the value of working on any one of these problems, but we think it's likely that there are issues on these lists (and especially the first one) that are as pressing as our highest priority problem areas.
What are the pros and cons of working in each of these areas? Which are less tractable than they appear, or more important? Which are already being covered adequately by existing groups we don't know enough about? What potentially pressing problems is this list missing?
We'd be excited to see people discussing these questions in the comments, and to check out relevant material from any readers who have existing expertise in these areas. We've linked to a few resources for each area that seem interesting or helpful, though we don't always agree with everything they say and we wouldn't be surprised if in many cases there are better resources out there.
For people who want to work on issues other than those we talk most about, we hope these lists offer some fruitful ideas to explore.
Potential highest priorities
The following are some global issues that seem like they might be especially pressing from the perspective of improving the long-term future. We think these have a chance of being as pressing for people to work on as our priority problems, but we haven’t investigated them enough to know.
Great power conflict
A large violent conflict between major powers such as the US, Russia or China could be the most devastating event to occur in human history, and could result in billions of deaths. In addition, mistrust between major powers makes it harder for them to coordinate on arms control or ensure the safe use of new technologies.
Though there is considerable existing work in this area, peacebuilding measures aren’t always aimed at reducing the chance of the worst outcomes. We’d like to see more research into how to reduce the chance of the most dangerous conflicts breaking out and the damage they would cause, as well as implementation of the most effective mitigation strategies.
Great power conflict is the subject of a large body of literature spanning political science, international relations, military studies, and history. Get started with accessible materials on contemporary great power dynamics—this blog post for a brief and simple explanation, this report from Brookings on the changing role of the US on the world stage, this podcast series on current military and strategic dynamics from the International Institute for Strategic Studies, and this talk on the risks from great power conflict using the scale, solvability, and neglectedness framework [? · GW].
Useful books in this area include After Tamerlane: The Rise and Fall of Global Empires, 1400-2000, and Destined for War: Can America and China Escape Thucydides's Trap?
Global governance
International governing institutions might play a crucial role in our ability to navigate global challenges, so improving them has the potential to reduce risks of global catastrophes. Moreover, in the future we may see the creation of new global institutions that could be very long-lasting, especially if the international community trends toward more cohesive governing bodies—and getting these right could be very important.
The Biological Weapons Convention is an example of one way institutions like the UN can help coordinate states to reduce global risks — but it also demonstrates current weaknesses of this approach, like underfunding and weak enforcement mechanisms.
There doesn’t seem to be as much work on improving global governance as you might expect—especially with an eye toward reducing global catastrophic risks. Here are a few pieces we know of:
We'd be keen to see more research on what governance reforms might be best for improving the long-run future.
Governance of outer space
It seems possible that humanity will at some point settle outer space. If it does, the sheer scale of the accessible universe would make what happens there enormously important.
Currently there is no agreement on how to decide what happens in space, should settlement become possible. The Outer Space Treaty of 1967 prohibits countries from claiming sovereignty over anything in space, but attempts to agree on more than that have failed to achieve consensus.
Who ends up in control of resources in space will naturally shift how they are used, and might influence vast numbers of lives. Furthermore, having agreements on how space is divided between groups might avoid a major conflict or a harmful rush to claim resources, and instead foster cooperation or compromise between different parties.
To make more concrete one possible way things could go wrong: one superpower may be alarmed when another finds itself on the verge of claiming and settling Mars, since it would anticipate eventually being eclipsed economically and militarily.
Despite the huge stakes, governance of space is an extremely niche area of study and advocacy. As a result, major progress could probably be made by a research community focused on this issue, even just by applying familiar lessons from related fields of law and social science.
Arguably it is premature to work on this problem because actual space settlement appears so far off. While this is an important point, we don't think it is decisive, for four reasons.
First, legal arrangements like constitutions and international treaties are often 'sticky' because they are difficult to renegotiate. Second, it may be easier to agree on fair processes for splitting resources in space while settlement remains far in the future, as it will be harder for interest groups to foresee what peculiar rules would benefit them in particular. Third, humanity may experience another 'industrial revolution' in the next century driven by AI or atomic scale manufacturing, which would allow space settlement to begin sooner than seems likely today. Fourth, once settlement becomes possible there will likely be a rush to agree on how to manage the process, and the more preparation has been completed ahead of that moment the better the outcome is likely to be.
This blog post [EA · GW] by Tobias Baumann fleshes out this case and suggests next steps people could take if they're interested in using their career to study this problem.
Voting reform
We often elect our leaders with 'first-past-the-post'-style voting, but this can easily lead to perverse outcomes. Better voting methods could lead to better institutional decision-making, better governance in general, and better international coordination.
Despite these potential benefits, ideas in this space often get little attention. One reason might be that current political leaders—those with the most power to institute reforms—have little incentive to change the systems that brought them to power. This might make this area particularly difficult to make progress in, though we still think additional effort in this area may be promising.
To learn more check out resources from the Center for Election Science and our podcast episode with Aaron Hamlin.
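One method the Center for Election Science advocates is approval voting: each voter marks every candidate they approve of, and the candidate approved by the most voters wins. A minimal sketch of the tally (the candidates and ballots here are made up for illustration):

```python
from collections import Counter

def approval_winner(ballots):
    """Each ballot is the set of candidates a voter approves of.

    The winner is the candidate approved of by the most voters."""
    tally = Counter(candidate for ballot in ballots for candidate in ballot)
    return tally.most_common(1)[0][0]

ballots = [
    {"A", "B"},  # this voter approves of both A and B
    {"B"},
    {"B", "C"},
    {"A"},
]
winner = approval_winner(ballots)  # B appears on 3 of 4 ballots, so B wins
```

Unlike first-past-the-post, a voter here never has to choose between a favorite and a compromise candidate, which is one reason advocates argue it reduces perverse outcomes.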
A related issue is the systematic lack of representation of future generations' interests in policy making. One group trying to address this in the UK is the All Party Parliamentary Group for Future Generations.
Voting security is also important for preventing contested elections; we discuss it in our interview with Bruce Schneier.
Improving individual reasoning or cognition
The case here is similar to the case for improving institutional decision-making: better reasoning and cognitive capacities usually make for better outcomes, especially when problems are subtle or complex. And as with institutions, work on improving individual decision-making is likely to be helpful no matter what challenges the future throws up.
Strategies for improving reasoning might include producing tools and training programs, or researching how to make better forecasts and decisions and come to sensible views on complex topics. Strategies for improving cognition might take a variety of forms, e.g., researching safe and beneficial nootropics.
Although focusing on individuals seems to us like it will usually be less effective for tackling global problems than taking a more institutional approach, it may be more promising if interventions can influence large segments of the population or be targeted toward the most influential people. See the Update Project for an example of the latter kind of strategy.
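One concrete example of a tool for better forecasting is calibration tracking with a proper scoring rule, such as the Brier score: the mean squared error between stated probabilities and what actually happened. A minimal sketch (the example forecasts are made up):

```python
def brier_score(forecasts):
    """Mean squared error of probabilistic forecasts.

    `forecasts` is a list of (probability, outcome) pairs, where outcome
    is 1 if the event happened and 0 if it didn't. Lower is better; a
    forecaster who always says 0.5 scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A well-calibrated, informative forecaster beats an uninformed one:
confident = brier_score([(0.9, 1), (0.8, 1), (0.2, 0)])
uninformed = brier_score([(0.5, 1), (0.5, 1), (0.5, 0)])
```

Because the Brier score is a proper scoring rule, a forecaster minimizes their expected score by reporting their honest probabilities, which makes it a reasonable target for training and tooling.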
Global public goods
Many of the biggest challenges we face have the character of global 'public goods' problems—meaning everyone is worse off because no particular actors are properly incentivized to tackle the problem, and they instead prefer to 'free-ride' on the efforts of others.
If we could make society better at providing public goods in general, we might be able to make progress on many challenges at once. One idea we’ve discussed that both has promise and faces many challenges is quadratic funding, but the space for possible interventions here seems enormous.
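The core rule of quadratic funding is simple: a project's total funding is the square of the sum of the square roots of individual contributions, with a central matching pool covering the gap between that total and what contributors paid directly. A minimal sketch (the function name and amounts are ours):

```python
from math import sqrt

def quadratic_funding_match(contributions):
    """Matching amount one project receives under quadratic funding.

    `contributions` is a list of individual contribution amounts.
    Total funding = (sum of square roots of contributions) ** 2;
    the matching pool tops up the difference between that total
    and the sum of direct contributions."""
    total_direct = sum(contributions)
    total_funding = sum(sqrt(c) for c in contributions) ** 2
    return total_funding - total_direct

# Broad support attracts a far larger match than one big donation:
broad = quadratic_funding_match([1.0] * 100)  # 100 donors give $1 each
narrow = quadratic_funding_match([100.0])     # 1 donor gives $100
```

The mechanism deliberately rewards breadth of support over size of any single contribution, which is what makes it attractive for public goods, and also what creates its well-known challenges (collusion and fake identities).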
Another potential approach here is improving political processes. Governments have enormous power and are the bodies we most often turn to for tackling public goods problems. Shifting how this power is used even a little can have substantial and potentially long-lasting effects. Check out our podcast episode with Glen Weyl to learn about current and fairly radical ideas in this space.
If you’re interested in tackling these issues, learning product design, gaining experience in advocacy or politics, or studying economics may all be useful first steps.
Surveillance
We’d be keen to see more research into balancing the risks and benefits of surveillance by states and other actors, especially as technological progress makes surveillance on a mass scale easy and affordable.
Some have argued that sophisticated surveillance techniques might be necessary to protect civilization from risks posed by advancing technology with destructive capabilities (for example see Nick Bostrom’s article ‘The Vulnerable World Hypothesis’); at the same time, many warn of the dangers widespread surveillance poses not only to privacy but to valuable forms of political freedom (example).
Because of these conflicts, it may be especially useful to develop ways of making surveillance more compatible with privacy and public oversight [? · GW].
Atomic scale manufacturing
Both the risks and benefits of advances in this technology seem like they might be significant, and there is currently little effort to shape its trajectory. However, there is also relatively little investment going into making atomic-scale manufacturing work right now, which reduces the urgency of the issue.
To learn more, read this popular article by Eric Drexler, a cause report from the Open Philanthropy Project, or listen to our podcast episode with Christine Peterson.
Broadly promoting positive values
If positive values like altruism and concern for other sentient beings were more widespread, then society might be able to better deal with a wide range of other problems—including problems that haven’t come up yet but might in the future, such as how to treat conscious machine intelligences. Moreover, there could be ways that the values held by society today or in the near future get ‘locked in’ for a long time, for example in constitutions, making it important that positive values are widespread before such a point.
We’re unsure about the range of things an impactful career aimed at promoting positive values could involve, but one strategy would be to pursue a position that gives you a platform for advocacy (e.g. journalist, blogger, podcaster, academic, or public intellectual) and then use that position to speak and write about these ideas.
Advocacy could be built around ideas such as animal welfare, moral philosophy (including utilitarianism or the 'golden rule'), concern for foreigners, or other themes.
In the context of cause prioritization within the effective altruism community, some have argued for the importance of spreading positive values through working to improve the welfare of farmed animals [EA · GW] (comparing it to AI safety research), while others push back against this view [EA · GW].
Civilizational resilience
We might be able to significantly increase the chance that, if a catastrophe does happen, civilization survives or gets rebuilt. However, measures in this space receive very little attention today.
To learn more, see our podcast episode on the development of alternative food sources, this paper on refuges, and our podcast episode with Paul Christiano.
S-risks
An ‘s-risk’ is a risk of an outcome much worse than extinction. Research working out how to mitigate these risks is a subset of global priorities research that might be particularly neglected and important. Read more.
Whole brain emulation
This is a strategy for creating artificial intelligence by replicating the functionality of the brain in software. If successful, whole brain emulation could enable dramatic new forms of intelligence—in which case steering the development of this technique could be crucial. Read a tentative outline of the risks associated with whole brain emulation.
Risks of stable totalitarianism
Bryan Caplan has written about the worry that 'stable totalitarianism' could arise in the future, especially if we move toward a more unified world government (perhaps in order to solve other global problems) or if certain technologies—like radical life extension or better surveillance technologies—make it possible for totalitarian leaders to rule for longer.
We think more research in this area would be valuable. For instance, we'd be excited to see further analysis and testing of Caplan's argument, as well as people working on how to limit the potential risks from these technologies and political changes if they do come about.
Risks from malevolent actors
A blog post by David Althaus and Tobias Baumann [EA · GW] argues that when people with some or all of the so-called 'dark tetrad' traits (narcissism, psychopathy, Machiavellianism, and sadism) are in positions of power or influence, this plausibly increases the risk of catastrophes that could influence the long-term future.
They suggest that developing better measures of these traits, as well as good tests of those measures, could help us make our institutions less liable to be influenced by such actors. We could, for instance, make 'non-malevolence' a condition of holding political office or having sway over powerful new technologies.
While it's not clear how large a problem malevolent individuals are compared to other issues, there is historical precedent for malevolent actors coming to power (Hitler, Stalin, and Mao plausibly had strong dark tetrad traits), and perhaps this wouldn't have happened if better precautions had been in place. If so, this suggests that careful measures could prevent future bad events of a similar scale (or worse) from taking place.
Safeguarding liberal democracy
Liberal democracies seem more conducive to intellectual progress and economic growth than other forms of governance that have been tried so far, and perhaps also to peace and cooperation (at least with other democracies). Political developments that threaten to shift liberal democracies toward authoritarianism therefore may be risk factors for a variety of disasters (like great power conflicts), as well as for society generally going in a more negative direction.
A great deal of effort—from political scientists, policymakers and politicians, historians, and others—already goes into understanding this situation and protecting and promoting liberal democracies, and we're not sure how to improve upon this.
However, there are likely to be some promising interventions in this area that are currently relatively neglected, such as voting reform (discussed above [? · GW]) or improving election security in order to increase the efficacy and stability of democratic processes. A variety of other work, like good journalism or broadly promoting positive values, also likely indirectly contributes to this area.
Recommender systems at top tech firms
The technology involved in recommender systems—such as those used by Facebook or Google—may turn out to be important for positively shaping progress in AI safety, as argued here [EA · GW].
Improving recommender systems may also help provide people with more accurate information and potentially improve the quality of political discourse.
We may need to invest more to tackle future problems
It may be that the best opportunities for doing good from a longtermist perspective lie far in the future [EA · GW]—especially if resources can be successfully invested now to yield greater leverage later. However, right now we have no way of effectively and securely investing resources long-term.
In particular, there are few if any financial vehicles that can be reasonably expected to persist for more than 100 years while also earning good investment returns and remaining secure. We’re unsure in general how much people should be investing vs. spending now on the most pressing causes. But it seems at least worthwhile to look more into how such philanthropic vehicles might be set up.
Founders Pledge — an organisation that encourages effective giving for entrepreneurs — is currently exploring this idea and is actively seeking input [EA · GW].
Learn more about this topic by listening to our podcast episode with Philip Trammell.
Other longtermist issues
We’re also interested in the following issues, but at this point think that work on them is likely somewhat less effective for substantially improving the long-term future than work on the issues listed above.
Speeding up economic growth
Speeding up economic growth doesn’t seem as useful as more targeted ways to improve the future, and in general we favour differential development. However, speeding up growth might still have large benefits, both for improving long-term welfare, and perhaps also for reducing existential risks [EA · GW]. For debate on the long-term value of economic growth check out our podcast episode with Tyler Cowen.
The causes of growth already see considerable research within economics, though this area is still more neglected than many topics. Potential strategies for increasing growth include trade reform (which also has the potential to reduce conflict), land use reform, and increasing aid spending and effectiveness.
Science policy and infrastructure
Scientific research has been an enormous driver of human welfare. However, science policy and infrastructure are not always well-designed to incentivize research that most benefits society in the long term.
For example, we’ve argued that some scientific and technological developments can increase risks of catastrophe, which better institutional checks might be able to help reduce.
More prosaically, scientific progress is often driven more by what is commercially valuable, interesting, or prestigious than by considerations of long-run positive impact. In general, we favor differential development in science and technology over indiscriminate progress, which better science policies or institutional design may help enable.
This suggests that there is room for improving systems shaping scientific research and increasing their benefits going forward. We’re particularly keen on people creating structures or incentives to push scientific research in more positive and less risky directions. Read more.
Increasing migration
This strategy has the potential to greatly increase economic growth, intercultural understanding, and cosmopolitanism—as well as help migrants directly. However, it also faces strong opposition and so carries political risk.
Read more from the Open Philanthropy Project, OpenBorders.info, or see the book Open Borders: The Science and Ethics of Immigration.
Anti-aging research
Recent advances in the science of aging have made it seem more feasible than was previously thought to radically slow the aging process and perhaps allow people to live much longer. If these efforts are successful, some have argued there would be positive long-run effects on society [EA · GW], as people would be led to think in more long-term ways and could keep working productively past retirement age, which could be beneficial for intellectual and economic growth.
That said, the case for long-term impact here is highly speculative and many people think more anti-aging research could be totally ineffective (or perhaps even negative). Anti-aging research also might soon be able to draw substantial private investment, meaning it will be less neglected. But some have also argued that's a reason to work on it now, because it may need some early successes before it can become a self-sustaining field. Read more.
Improving institutions to promote development
Institutional quality seems to play a large role in development, so if there were a way to make improvements to institutions in developing countries, this could be an effective way to improve many people’s lives.
For instance, legal and political changes in China seem to have been key to its economic development from the 1980s onwards. For a discussion of the importance of governing institutions for economic growth see our interview with a group trying to found cities with improved legal infrastructure in the developing world.
Keep in mind, however, that these efforts are often best pursued by citizens of the relevant countries. There is also substantial disagreement about which institutions are best, and the answers will vary depending on a country's circumstances and culture.
Space settlement and terraforming
Expanding to other planets could end up being one of the most consequential things humanity ever does. It could greatly increase the number of beings in the universe and might reduce the chance that we go extinct by allowing humans to survive deadly catastrophes on earth. It may also have dramatic negative consequences, for instance if we fail to take into account the welfare of beings we cause to exist in the process, or if settlement turns out to increase the risk of eventual catastrophic conflict. (Read more.)
However, independent space colonies are likely centuries away, and there are more urgent challenges in the meantime. As a result, we think that right now resources are generally better used elsewhere. Still, there does seem to be a chance that in the long run research on the question of whether space settlement is likely to be good or bad—and how good or bad—could have significant impacts.
Lie detection technology
Lie detection technology may soon see large improvements due to advances in machine learning or brain imaging. If so, this might have significant and hard-to-predict effects on many areas of society, from criminal justice to international diplomacy.
Better lie detection technology could improve cooperation and trust between groups by allowing people to prove they are being honest in high-stakes scenarios. On the other hand, it might increase the stability of non-democratic regimes by helping them screen out, or remove, anyone who isn't a 'true believer' in their ideology.
Wild animal welfare
Wild animals are very numerous, and they often suffer due to starvation, heat, parasitism and other issues. Almost nobody is working to figure out what if anything can be done to help them, or even which animals are likely to be suffering most. Research on invertebrates might be especially important, as there is such an enormous number of them [EA · GW].
Learn more in our interview with Persis Eskander and read some early research from the Foundational Research Institute here.
Other global issues
We think the following issues are quite important from a short- or medium-term perspective, and that work on them might well be as impactful as additional work focused on reducing the suffering of animals from factory farming or improving global health.
Mental health
Improving mental health seems like one of the most direct ways of making people better off, and there appear to be many promising areas for research and reform that have not yet been adequately explored—especially with regard to new drug therapies and improving mental health in the developing world. See the Happier Lives Institute for more.
There is also some chance that like economic growth, better mental health in a population could have positive indirect effects that accumulate over time. Read a preliminary review of this cause area and check out our podcast episode with Spencer Greenberg to learn more.
Biomedical research and other basic science
Basic scientific research in general has had a large positive effect on welfare historically. Major breakthroughs in biomedical research specifically could lead to people living much longer, healthier lives. You might also be able to use training in biomedical research to work on other promising areas discussed above, like biosecurity or anti-aging research. Read more.
Increasing access to pain relief in developing countries
Most people lack access to adequate pain relief, which leads to widespread suffering due to injuries, chronic health conditions, and disease. One natural approach is increasing access to cheap pain relief medications that are common in developed countries, but often not available in the developing world. One group working in this area is the Organization for the Prevention of Intense Suffering. Read more.
Other risks from climate change
We discuss extreme risks of climate change—such as severe warming and geopolitical risks—in our writeup of the area.
Climate change also threatens to create many smaller problems or make other global problems worse, for example friction between countries due to the movement of refugees. While climate change is not as neglected as other areas we cover, we are highly supportive of reducing carbon emissions through research, better technology, and policy interventions. Read more.
Smoking in the developing world
Smoking takes an enormous toll on human health – accounting for about 6% of all ill-health globally according to the best estimates. This is more than HIV and malaria combined. Despite this, smoking is on the rise in many developing countries as people become richer and can afford to buy cigarettes.
Possible approaches include advocating for cigarette taxes or campaigns to discourage smoking, and development of e-cigarette technology. Read more.
Comments sorted by top scores.
comment by Pablo (Pablo_Stafforini) ·
2020-06-23T01:23:25.658Z · EA(p) · GW(p)
Great post, thank you for compiling this list, and especially for the pointers for further reading.
In addition to Tobias's proposed additions [EA(p) · GW(p)], which I endorse, I'd like to suggest protecting effective altruism as a very high priority problem area. Especially in the current political climate, but also in light of base rates from related movements as well as other considerations, I think there's a serious risk (perhaps 15%) that EA will either cease to exist or lose most of its value within the next decade. Reducing such risks is not only obviously important, but also surprisingly neglected. To my knowledge, this issue has only been the primary focus of an EA Forum post [EA · GW] by Rebecca Baron, a Leaders' Forum talk by Roxanne Heston, an unpublished document by Kerry Vaughan, and an essay by Leverage Research (no longer online). (Risks to EA are also sometimes discussed tangentially in writings about movement building, but not as a primary focus.) Replies from: MichaelA, Ardenlk, elifland, Jorgen_Ljones, Julia_Wise
↑ comment by Ardenlk ·
2020-06-23T09:44:59.967Z · EA(p) · GW(p)
Thanks Pablo -- I agree we should discuss risks to EA more. It seems like it should be a natural part of 'building effective altruism' to me. I wonder why we don't discuss it more in that area. Maybe people are afraid it will seem self-indulgent?
I think I'd worry about how to frame it in 80k content because our stuff is very outward-facing and people who aren't already part of the community might not respond well to it. But that's less of an issue with forum posts, etc.
I'd also guess most people's estimates for EA going away or becoming much less valuable in the next 10 years are lower than yours. Want to expand a bit on why you think it's as high as you do?
Thanks for bringing this up and also for the list of places this has been discussed!
↑ comment by Jorgen_Ljones ·
2020-06-24T11:51:47.032Z · EA(p) · GW(p)
This made me think of backing up online EA content. It's not that hard to automate backups of the content on the EA Forum, the EA Hub, and the websites of CEA, GiveWell, and other organizations. Not all movement-collapse scenarios involve losing access to online content and communication platforms, but it may be part of both internal-conflict scenarios and external shocks.
Is the EA Forum regularly backed up, Aaron?
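A minimal sketch of what automating such a backup could look like (the URL below is a placeholder, and a real crawler would need rate limiting, link-following, and the sites' permission; this only shows the timestamped-archive bookkeeping):

```python
# Sketch of a periodic content-archiving job. The fetcher is injected
# so the archiving logic can be run without network access; the URL
# used in examples is a placeholder, not a real backup configuration.
import datetime
import pathlib
import re

def archive_path(url: str, root: pathlib.Path, now: datetime.datetime) -> pathlib.Path:
    """Map a URL to a timestamped file under the archive root."""
    slug = re.sub(r"[^A-Za-z0-9]+", "_", url).strip("_")
    stamp = now.strftime("%Y%m%dT%H%M%S")
    return root / slug / f"{stamp}.html"

def back_up(urls, fetch, root: pathlib.Path, now=None):
    """Fetch each URL and write it to a timestamped file; return the paths written."""
    now = now or datetime.datetime.utcnow()
    written = []
    for url in urls:
        path = archive_path(url, root, now)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(fetch(url))
        written.append(path)
    return written
```

Run on a scheduler (cron, etc.), this keeps dated snapshots per URL, so content survives even if a site later disappears.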
↑ comment by JP Addison (jpaddison) ·
2020-06-25T18:15:51.085Z · EA(p) · GW(p)
Short answer: Yes.
Our database provider creates backups automatically, and I would be very surprised if they lost them. I think the largest remaining risk is that I accidentally issue a command to delete everything. In that worst-case scenario, I'd be able to fall back on one-off copies of the database that I've made at various points.
There's still a single point of failure at the level of my organization. If something (maybe a lawsuit? seems unlikely) were to force us to intentionally take the site down, you'd want to have backups outside of our control. For that you might want to see this question [EA · GW], which your comment may have prompted.
↑ comment by Inda ·
2020-06-26T01:42:12.609Z · EA(p) · GW(p)
Can't you release the backups via torrent in the event of a legal shutdown? Without actually admitting that you "leaked" the data, of course. Considering how successful piracy has been, making a first-party backup persist on the net seems like low-hanging fruit to me.
↑ comment by Julia_Wise ·
2020-06-24T15:00:39.023Z · EA(p) · GW(p)
I agree this is a broad and worthwhile area to think about. The community health team at CEA (Sky Mayhew, Nicole Ross, and I) do some work in this area, and I know of various staff at other orgs who also think about risks to EA and incorporate that thinking into their work. That’s not to say I think we have this completely covered or that no risk remains.
comment by MichaelPlant ·
2020-06-22T20:14:06.308Z · EA(p) · GW(p)
Thanks for this write-up. The list is quite substantial, which makes me think: do you have a list of problems you've considered, concluded are probably quite unpromising, and would therefore dissuade people from undertaking? I could imagine someone reading this and thinking "X and Y are on the list, so Z, which wasn't mentioned explicitly [but 80k would advise against], is also likely a good area".
↑ comment by Ardenlk ·
2020-06-22T21:05:01.653Z · EA(p) · GW(p)
Hey Michael -- there isn't such a list, though we did consider and decide not to include a number of problems in the process of putting this together. I definitely think that "X and Y are on the list, so Z, which wasn't mentioned explicitly, is also likely a good area" would be a bad inference! But there are also probably lots of issues that we didn't even consider, so something not being on the list is probably at best a weak negative signal. [Edit: I shouldn't have said "at best" -- it's a weak negative signal.]
↑ comment by jackmalde ·
2020-06-23T17:04:02.506Z · EA(p) · GW(p)
I don't know if you guys have capacity, but it might be useful for a separate post to list the problems that you considered and decided not to include, with short explanations as to why. This may reduce the probability of people independently investigating them, which could save time, or increase the probability of people investigating them if they think you wrongfully excluded them, which could be helpful. Just an idea.
↑ comment by Ardenlk ·
2020-06-24T17:11:01.844Z · EA(p) · GW(p)
Hey jackmalde, interesting idea -- though I think I'd lean against writing it. I guess the main reason is something like: there are quite a few issues to explore on the above list, so if someone is searching around for something (rather than having something in mind already), they might be able to find an idea there. I guess despite what I said to Michael above, I do want people to see it as some positive signal if something's on the list. Having a list of things not on the list would probably not add a lot, because the reasons would just be pretty weak things like "brief investigation + asking around didn't make this seem compelling according to our assumptions". Insofar as someone was already thinking of working on something and they saw that, they probably wouldn't take it as much reason to change course. Does that make sense?
↑ comment by jackmalde ·
2020-06-25T08:48:25.470Z · EA(p) · GW(p)
Hi Arden, yeah that makes sense. You've definitely given the EA community a lot to work on with this post so probably not worth overcomplicating things.
comment by willbradshaw ·
2020-06-25T19:45:56.180Z · EA(p) · GW(p)
Nice list, thanks for compiling it!
It would be great to hear your thoughts about putting wild-animal welfare in the "Other longtermist issues" section. I know quite a few people who are sceptical about the value of wild-animal welfare for the long-term future. (I think the medium-term case for it is pretty solid.)
↑ comment by Max_Daniel ·
2020-06-26T08:00:00.573Z · EA(p) · GW(p)
Yes, FWIW I was also quite surprised to see wild-animal welfare described as a longtermist issue. (Caveat: I haven't listened to the podcast with Persis, so it's possible it contains an explanation that I've missed.) So I'd also be interested in an answer to this.
↑ comment by alexrjl ·
2020-06-26T13:22:35.182Z · EA(p) · GW(p)
I think it depends somewhat on what you mean by long term, but my (limited) understanding is that wild-animal welfare is currently very much in the "we should do some thinking and maybe some research, but not take any actions until we know a lot more and/or have much greater ability to do so" stage, which does put it on a timeframe that is decidedly *not* "neartermist".
↑ comment by willbradshaw ·
2020-06-26T14:00:24.754Z · EA(p) · GW(p)
Yeah, this is why I said "medium-term" rather than "near-term". I agree that calling wild-animal welfare "neartermist" is confusing and perhaps misleading, but I think probably less so than calling it "longtermist", given how the latter term is generally used in EA.
I'm optimistic about wild-animal welfare work achieving a lot of good over the next century or two. I don't expect it to have a major positive impact on the long-run future, except perhaps indirectly via values-spreading.
↑ comment by Fai (firstname.lastname@example.org) ·
2021-03-11T11:57:07.557Z · EA(p) · GW(p)
Hey Will (or anyone who sees this), if you can still see this reply, can you let me know what you think about this set of arguments supporting the view that WAS is a longtermist issue?
The four main arguments:
- I think it is quite clearly plausible to argue that what we do now will probably impact wild animals in the far future. This argument means that current WAS work can be perceived as potentially longtermist. But we have to establish WAS as a worthy longtermist issue.
- In terms of potential, the number of wild animals that could exist in the future seems to far exceed the number of humans/human-like organisms that could exist in the future (though perhaps lower than the potential "number" of artificial minds). This makes it plausible to argue that the amount of potential well-being and suffering at stake for wild animals is at least on the same order of magnitude as that for humans/human-like organisms. This argument means that WAS is plausibly a worthy longtermist issue.
- One way to view the cause area of WAS is as the first stage of achieving high positive welfare for wild animals. If we have certain obligations to make future humans/human-like organisms capable of attaining more, higher, and longer positive experiences than average humans now are, it seems plausible that the correct moral theory could entail certain obligations to make non-human animals more capable of attaining positive experiences. And since the potential number of non-human animals that could exist seems to far exceed that of humans/human-like organisms, it seems plausible to argue that such an obligation is at least on the same order of magnitude as that toward humans/human-like organisms.
- Notice that if one thinks that there is an obligation to create more humans/human-like organisms to experience good lives, this point is actually made stronger. You might see why this is the case later.
- This argument is similar to the arguments on value spreading made by others in this thread, but not identical. Changing society's current view on WAS might have huge implications for the welfare of future artificial minds that take the form or appearance of wild animals, for example those that exist in nature/evolution simulations. It is conceivable that humanity's current and near-future views on WAS will partially persist into the far future, and if they do, that seems quite possibly catastrophic. For example, if humans continue to value the "intrinsic beauty/value" of thriving and diverse ecosystems over the suffering that comes with them, or even see the suffering as part of the "beauty", nature simulations might be deliberately built with the suffering included.
The first two arguments hinge largely on the premise that the potential number of non-human animals that could exist in the future far exceeds that of humans/human-like organisms. As some of you might not agree with this, I think it might be necessary for me to explain why I think so. If you don't disagree with this, you don't need to read further.
First, I am only speaking of the highest potential numbers, not an expected estimate of the actual numbers. Second, I meant to separate physically existing animals and humans from digitally simulated/emulated ones, because I can't see a convincing reason why the number of digital humans would exceed that of digital animals, nor the reverse.
So why is the potential number of non-human animals higher than that of humans? Basically, any planet that is habitable for humans, or can be made so, is extremely likely to also be habitable for non-human animals. And since non-human animals can be much smaller than humans, their potential numbers have to be higher.
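The "smaller bodies, higher number potential" point can be made semi-quantitative with a standard allometric assumption (Kleiber's law: metabolic demand scales roughly with mass^0.75). The body masses below are illustrative assumptions, not estimates from the comment:

```python
# Under a fixed energy budget, the number of organisms supportable
# scales inversely with per-organism metabolic demand. Using Kleiber's
# law (demand ~ mass ** 0.75) as a rough assumption, the ratio of
# supportable small animals to large ones is (m_large / m_small) ** 0.75.
# Masses are illustrative: ~70 kg human vs ~20 g mouse-sized animal.

def count_ratio(mass_large_kg: float, mass_small_kg: float) -> float:
    """How many small organisms one large organism's energy budget could support."""
    return (mass_large_kg / mass_small_kg) ** 0.75

print(count_ratio(70.0, 0.02))  # on the order of hundreds per human
```

So even under this conservative sub-linear scaling, the same energy budget supports hundreds of mouse-sized animals per human, which is the direction of Fai's claim.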
Also, after arguing about the potential number of organisms, I would like to express my view on the expected numbers of animals that are human vs. non-human: I think the expected number of physical non-human animals is (maybe substantially) higher than that of physical humans/human-like organisms. Four arguments make me quite confident about this:
A. It is possible that future humans/human-like organisms will want to intentionally bring or create wild animals on terraformed planets. Even a 1% chance of this being true would imply, in expectation, more non-human animals than humans brought to life.
B. Even if humans aren't specifically interested in bringing/creating animals, an interest in bringing along, or just allowing, some "nature" or "wilderness" (which basically has to include at least plants) on those planets will likely lead to animals living there naturally.
C. Even if humans are eager to prevent nature/wilderness as much as is practical, some animals might still be allowed to come into existence -- for example, because preventing or destroying "every bit of nature" wouldn't be worth the energy it requires, or because biological processes might still be perceived as one of the most efficient ways to produce certain things (such as metabolizable calories).
D. It seems likely, or at least possible, that humans/human-like organisms will not be the last physical animals to go extinct.
Last but not least, I have one final argument in reply to the view, held by some, that the expected number of non-human animals will be far less than that of humans/human-like organisms/artificial minds (and that WAS is therefore not a longtermist issue). The argument is probably better put as a question: should it be the case? Whatever probability distribution we assign to this future scenario, whether the scenario is ethically good/ideal/right is another question, one that we have yet to ask, let alone answer. To decide now that this scenario will be the case, and to leave it as it is, seems to me premature and irresponsible. (Part of the current WAS research agenda is to gain insights on relevant population-ethics problems.)
↑ comment by Max_Daniel ·
2020-06-26T13:49:09.868Z · EA(p) · GW(p)
I agree that most people seem to think this is true about wild-animal welfare. However, I don't think this means wild-animal welfare is well described as a longtermist issue. The definition of longtermism [EA · GW] is about when most of the value of our actions is going to accrue, not about when we expect to take more direct actions. So I think the natural reading of 'longtermist issue' is 'an issue that we think is important because working on it will have good consequences for the very long-run future' (or something even stronger, like being among the most valuable issues from that perspective), not 'an issue we don't expect to directly work on in the short term'.
↑ comment by Ardenlk ·
2020-06-28T10:20:02.794Z · EA(p) · GW(p)
To be honest, I'm not that confident in wild animal welfare being on the 'other longtermist' list rather than the 'other global' list -- we had some internal discussion on the matter and opinions differed.
Basically, it's on 'other longtermist' because the case for it contributing to spreading positive values seems stronger to me than in the case of the other global problems. In some sense working on any issue spreads positive values, but wild animal welfare is sufficiently 'weird' that its success as a cause area seems more likely to disrupt people's intuitive views than successes in other areas, which might be particularly useful for spreading positive values/moral-philosophy progress. In particular, the rejection of "natural = good" seems like it could be a unique and useful contribution. I also find the analogy between wild animals and other forms of consciousness that we might find ourselves influencing (alien life? artificial consciousnesses?) somewhat compelling, such that getting our heads straight on wild animal welfare might help prepare us for that.
comment by BrianTan ·
2020-06-26T07:04:59.122Z · EA(p) · GW(p)
Thanks for posting this Arden! I know that this list is quite long already, but I was wondering why criminal justice reform isn't on the list of "other global issues"? It seems to be the only focus area of Open Philanthropy that isn't covered in one of these priorities.
↑ comment by Ardenlk ·
2020-06-29T14:55:54.813Z · EA(p) · GW(p)
In general, we have a heuristic according to which issues that primarily affect people in countries like the US are less likely to be high impact for more people to focus on at the margin than issues that primarily affect others or affect all people equally. While criminal justice does affect people in other countries as well, it seems like most of the most promising interventions are country-specific, and especially US-specific -- including the interventions Open Phil recommends, like those discussed here and here. The main reason for this heuristic is that such issues are likely to be less neglected (even if they're still neglected relative to how much attention they should receive in general) and likely to affect a smaller number of people. Does that make sense?
comment by MichaelA ·
2020-06-23T05:08:49.418Z · EA(p) · GW(p)
Thanks for putting this list together, and including a whole bunch of handy links! This post has already caused me to buy After Tamerlane (there's an audiobook version) and to download some of the IISS podcast episodes.
Two things I particularly appreciated were the mentions of atomic-scale manufacturing and risks of stable totalitarianism. Those are two problems where there seem to be either some estimates [EA · GW] or some arguments suggesting we should be substantially concerned, and yet relatively little discussion of the problems in EA. So I'd be excited to see a bit more discussion of those topics, even if it's just to make a clearer case for de-prioritising them for now. (That said, I don't actually have strong reasons for thinking those problems matter more than the rest of the problems listed in this post.)
Also, here are three other collections of links relevant to some problems covered here, which some readers may find useful:
↑ comment by Ardenlk ·
2020-06-23T10:00:34.453Z · EA(p) · GW(p)
Glad you've found it helpful, and thanks for these resource lists! I'm adding them to our internal list of resources. Is there anything you've read from them that you think would be particularly good to add to the above blurbs?
↑ comment by MichaelA ·
2020-06-24T00:09:30.318Z · EA(p) · GW(p)
Hey Arden, glad those resource lists look useful!
Unfortunately, I haven't seen any papers, chapters, or even blog posts fully focused on robust totalitarianism, except for the Caplan chapter you already mentioned. I assume that (a) there's a lot of fairly relevant work in fields like international relations or political science, but also that (b) such work won't focus on the matter of global and very long-lasting totalitarianism - but to be honest I haven't actually checked either of those assumptions. (If anyone else knows of any relevant work, please comment about it on that collection I made.)
For "Broadly promoting positive values", it might be worth adding one or more of:
But I've only read the first two of those posts thus far, and I at least slightly disagreed with parts of both, personally.
As for the other topics, the only things that immediately come to mind as particularly noteworthy are Should Longtermists Mostly Think About Animals? [EA · GW] and Space governance is important, tractable and neglected [EA · GW], both for the "Space settlement and terraforming" topic. (Tobias already mentioned space governance in another comment; I'm neutral about whether a separate topic should be added for space governance specifically, but I also think it could make sense to just fold that into the "Space settlement and terraforming" topic.)
comment by atlasunshrugged ·
2020-06-23T17:10:29.819Z · EA(p) · GW(p)
For the author -- please correct me if I'm wrong, but the reference to great power conflict is most likely to the U.S. vs. China, is that right (just inferring based on the Graham Allison recommendation)? I'm curious whether you have a more in-depth rationale or data available for this. Mostly, I'm curious about some other outcomes and how harmful they are -- for instance, what happens if we avoid great power conflict but in doing so allow China to become the dominant world power and spread their authoritarian governance model even further than they do today? What are the expected deaths and the level of economic destruction of a conflict with China today? If the likelihood of a conflict is significantly high and the level of potential destruction continues to rise as China gains more and more military and economic capabilities, is it better to initiate a conflict early?
↑ comment by NunoSempere ·
2020-06-24T17:35:16.724Z · EA(p) · GW(p)
India v. China conflict is perhaps more immediately worrying than US v. China.
↑ comment by atlasunshrugged ·
2020-06-24T17:47:04.616Z · EA(p) · GW(p)
Because of the likelihood of it occurring, because of the potential for human/economic damage, or both? It is also concerning to me given that India would probably be somewhat more inclined to use nuclear weapons in a China v. India conflict than America would be (although who knows with the current admin), especially if Pakistan started making moves at the same time as India was focused on China. But I'm not sure why China would really push a conflict: that would mean moving huge amounts of men and materiel to the west and potentially leaving an opening on their coasts. Plus, they import huge amounts of energy products that flow past India, which would surely be massively disrupted in the case of a conflict, and they don't have as strong a blue-water force-projection capability as the US and others, who would probably come to India's aid.
↑ comment by Cullen_OKeefe ·
2020-06-24T17:40:58.145Z · EA(p) · GW(p)
India v. Pakistan seems very important as well
↑ comment by atlasunshrugged ·
2020-06-24T20:30:04.888Z · EA(p) · GW(p)
Agreed. From the foreign policy folks I follow who focus on the region, that one seems especially dangerous, especially if you care about preventing the use of nuclear weapons, which would be somewhat more likely in an India v. Pakistan conflict, given that Pakistan would likely lose a war waged with purely conventional weaponry.
↑ comment by Ardenlk ·
2020-06-24T17:00:11.241Z · EA(p) · GW(p)
I'm afraid I don't know the answers to your specific questions. I agree that there are things worse than great power conflict, and perhaps China becoming the dominant world power could be one of those things. FWIW, although war between the US and China does seem like one of the more worrying scenarios at the moment, I meant the problem description to be broader than that and to include any great power war.
↑ comment by atlasunshrugged ·
2020-06-24T17:43:25.148Z · EA(p) · GW(p)
No worries, I was just curious. I've tried to find data on things like projections of lives lost in combat between the US and China and can't find anything good (the best I found was a RAND study from a few years ago, but it didn't really give projections of actual deaths), so I was curious whether you had gotten your hands on that data to make your projections. Sorry for the misunderstanding -- I had assumed China/US conflict, but that makes sense: probably anyone with nuclear capabilities who gets into a serious foreign entanglement will create an extremely dangerous situation for the world.
↑ comment by MichaelA ·
2020-06-25T00:06:58.820Z · EA(p) · GW(p)
probably anyone with nuclear capabilities who gets into a serious foreign entanglement will create an extremely dangerous situation for the world.
I'd agree with this. But partly due to what nuclear capabilities correlates with, rather than solely due to the nuclear capabilities themselves. Off the top of my head, I see at least 4 mechanisms by which great power war could reduce the expected value of the long-term future:
- Risk of nuclear war and thereby of nuclear winter (this seems to be the implied focus of your comment)
- Increased chances of unsafe development of emerging technologies (or, similarly, less willingness/ability to cooperate on ensuring that technological development proceeds safely)
- As this post notes, "In addition, mistrust between major powers makes it harder for them to coordinate on arms control or ensure the safe use of new technologies."
- Increased chance of robust totalitarianism [EA(p) · GW(p)] (analogous to how it seems plausible that, had the Nazis won WWII, that regime would've spread fairly globally and lasted a fairly long time)
- Residual chance of various bad things if there's a violent disruption of current trends, which seem to be unusually good (see The long-term significance of reducing global catastrophic risks by Beckstead)
Speaking as very much a non-expert, all 4 of those mechanisms seem important to me, without one of them standing out as far more important than the others. (Though I think I'd very weakly expect the first two to be more important than the last two.) If that's true, and if someone had previously focused primarily on the risks of nuclear winter, this might suggest that person should increase their level of concern about great power conflict, including about conflicts that are very unlikely to result in nuclear weapons use.
(I assume there's been EA and non-EA work on this general topic that I haven't seen - this is just my quick take.)
comment by Prabhat Soni ·
2020-06-22T15:24:55.884Z · EA(p) · GW(p)
We may need to invest more to tackle future problems
Which types of "investments" are you talking about? Are they specifically financial investments, or a broader range of investments?
In case you mean a broader range of investments, such investments could include: building the EA movement, making good moral values a social norm, developing better technologies that could help us tackle unforeseen problems in the future, and improving the biological intelligence level of humans. This definition could get problematic, since many of these investments are separate cause areas themselves.
↑ comment by Brendon_Wong ·
2020-06-23T18:37:48.364Z · EA(p) · GW(p)
They are referring to financial investments (stocks, bonds, etc.) as covered in the linked podcast episode with Philip Trammell.
↑ comment by MichaelA ·
2020-06-23T23:49:56.034Z · EA(p) · GW(p)
The relevant section of this post does appear to be discussing financial investments, or at least primarily focusing on that. But that wasn't Trammell's sole focus. As he states in his 80k interview:
Philip Trammell: [...] in this write-up, I do try to make it clear that by investment, I really am explicitly including things like fundraising and at least certain kinds of movement building which have the same effect of turning resources now, not into good done now, but into more resources next year with which good will be done. I would be just a little careful to note that this has to be the sort of movement building advocacy work that really does look like fundraising in the sense that you’re not just putting more resources toward the cause next year, but toward the whole mindset of either giving to the cause or investing to give more in two years’ time to the cause. You might spend all your money and get all these recruits who are passionate about the cause that you’re trying to fund, but then they just do it all next year.
Robert Wiblin: The fools!
Philip Trammell: Right. And I don’t know exactly how high fidelity in this respect movement building tends to be or EA movement building in particular has been. So that’s one caveat. I guess another one is that when you’re actually investing, you’re generally creating new resources. You’re actually building the factories or whatever. Whereas when you’re just doing fundraising, you’re movement building, you’re just diverting resources from where they otherwise would have gone.
Robert Wiblin: You’re redistributing from some efforts to others.
Philip Trammell: Yeah. And so you have to think that what people otherwise would have done with the resources in question is of negligible value compared to what they’ll do after the funds had been put in your pot. And you might think that if you just look at what people are spending their money on, the world as a whole… I mean you might not, but you might. And if you do, it might seem like this is a safe assumption to make, but the sorts of people you’re most likely to recruit are the ones who probably were most inclined to do the sort of thing that you wanted anyway on their own. My intuition is that it’s easy to overestimate the real real returns to advocacy and movement building in this respect. But I haven’t actually looked through any detailed numbers on this. It’s just a caveat I would raise.
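The basic invest-now-to-give-later arithmetic underlying this exchange can be sketched in a few lines. The return rate and horizon below are assumptions for illustration, not figures from the interview, and the sketch ignores Trammell's caveats about movement-building "leakage":

```python
# Compare giving a unit of resources now versus investing it and
# giving later. The 5% real return and 30-year horizon are assumed
# for illustration only.

def future_resources(amount: float, annual_return: float, years: int) -> float:
    """Resources available after compounding at annual_return for years."""
    return amount * (1.0 + annual_return) ** years

# A dollar invested at an assumed 5% real return for 30 years
# compounds to roughly 4.3 dollars to give later.
print(round(future_resources(1.0, 0.05, 30), 2))
```

The interesting question, which Trammell's write-up addresses, is whether the good done per dollar falls faster than this multiplier grows -- and whether "investments" like movement building really compound this way at all.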
I'm currently working on two drafts relevant to these topics, with the working titles "A typology of strategies for influencing the future" and "Crucial questions about optimal timing of work and donations". I'll quote below my current attempt from one of those drafts to make a distinction between "present-influence" actions (this term may be replaced) and "punting to the future" actions. (I plan to adjust this attempt soon, or at least to add a causal diagram to make things clearer.)
MacAskill [EA · GW] has discussed whether we’re living at the “most influential time in history”, for which he proposed the following definition:
a time ti is more influential (from a longtermist perspective) than a time tj iff you would prefer to give an additional unit of resources, that has to be spent doing direct work (rather than investment), to a longtermist altruist living at ti rather than to a longtermist altruist living at tj.
He writes that the most obvious implication of this is:
regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes.
Following [EA(p) · GW(p)] Tomasik, I’ll refer to “buck-passing” strategies as “punting to the future”.
There were many comments on MacAskill’s post about the difficulties of distinguishing “buck-passing” strategies from other strategies. It can also seem hard to distinguish this from the “narrow vs broad” dimension and an “object-level vs meta-level” dimension [these are two other distinctions I discuss in this draft]. But I think we can resolve these issues by drawing on this comment from Jan Brauner [EA(p) · GW(p)]:
Punting strategies, in contrast, affect future generations [primarily] via their effect on the people alive in the most influential centuries.
Here are my proposed terms and definitions: There’s a continuum from present-influence actions to punting to the future actions. Present-influence actions are intended to “quite soon” result in “direct impacts”[...]. Relatively clear examples include:
- Doing AI safety research yourself to directly reduce existential risk.
- Providing productivity coaching to AI safety researchers.
Meanwhile, punting to the future actions are intended to result in “direct impacts” primarily via actions taken “a long time” from now, which the punting to the future actions somehow supported. [...]
One relatively clear example of a punting to the future action is investing money so that, decades from now, you’ll be able to donate to support AI safety research or movement-building. I also think it makes sense to imagine punting to your own future self, such as by doing a PhD so you can have more impact in “direct work” later, rather than doing “direct work” now.
However, the division isn’t sharp, because:
- all actions would have their influence at least slightly in the future
- many actions will have multiple pathways to impact, some taking little time and others stretching over longer times
For example, AI safety movement-building and existential risk strategy research [EA · GW] could be intended to result in “direct impacts” (after several steps) both decades from now and within years, although probably not within weeks or months. Such actions could be seen as landing somewhere in the middle of the “present-influence to punting” dimension, and/or as having a “present-influence” component in addition to a “punting to the future” component. Indeed, even some people doing AI safety research themselves may be doing so partly or entirely for movement-building reasons, such as to attract funding and talent by showing that progress on these questions is possible and concrete work is being done (see Ord).
If anyone would like to see (and perhaps provide feedback on) either or both of those drafts I'm working on, let me know.
comment by ChrisJensen ·
2020-07-08T10:25:42.935Z · EA(p) · GW(p)
Thank you for sharing this. I'd just like to add that broadly promoting positive values (or even more narrowly focusing on a specific skill like empathy) would have the added benefit of drawing a greater diversity of backgrounds and views to Effective Altruism which would enrich the movement and discussions within it.
In my (admittedly limited) experience with EA, I find that it tends to attract people who are very strongly analytical and technical in their outlook. There's nothing wrong with that, but any particular outlook brings with it unconscious biases that people from other backgrounds or with different outlooks would help to identify.
To give an example, another item on the list, "Improving individual reasoning or cognition", would seem to rest on the assumption that better reasoning will lead to quantitative or qualitative improvements in altruistic behaviour. Yet the decision to behave altruistically, to give importance to the well-being of others, is primarily an emotional decision rather than a rational one. Sadly, there are many examples of both individuals and institutions that have made very effective rational decisions but, owing to defective moral reasoning, caused great harm.
Improving rational decision-making could therefore have negligible impact if distributed evenly, or a negative impact if such improvements are taken up disproportionately by institutions making poor moral decisions.
Similarly, there may well be opportunities for great advances that are currently overlooked that people from more diverse interest areas would help to identify.
comment by Gracchus.T ·
2020-12-30T21:17:59.350Z · EA(p) · GW(p)
On the theme of global governance and improving reasoning: are there any studies on how people can reduce tribalism and the us-versus-them response? I know there are studies looking at triggering an us-versus-them response through voting, but has there been research on reducing tribalism?
comment by Jack_H ·
2020-08-05T16:15:42.423Z · EA(p) · GW(p)
It's great that you mention 'anti-aging' research, one of the most promising means of alleviating enormous amounts of suffering (from chronic diseases) and increasing the healthy lifespan of the population (something not often discussed in EA).
Anti-aging research also meets the EA criteria for an important cause area: tractability, scale, and neglectedness.
I personally donate to the SENS Research Foundation to support this work, and I encourage others to do so as well.
comment by AlanGreenspan ·
2020-07-07T05:15:27.934Z · EA(p) · GW(p)
I think solving real estate would solve the long-term welfare problem, which would improve education and bring to light the rest of effective altruism's movements.
This needs to be discussed much more, because it could be solved quite easily and have an exponential effect on the rest of society.