To me it seems like you are starting from a mistaken premise. A wellbeing-focused perspective explicitly highlights the fact that the Sentinelese and modern Londoners may have similar levels of wellbeing. That's the point! This perspective aims to get you thinking about what is really valuable in life and what grounds your own beliefs about what is important.
You seem to hold a very strong opinion that something like technological progress is intrinsically valuable: living in a more technically advanced society is "inherently better" and, thus, everyone who does not see this is "objectively wrong". That argument would seem strange to even the most orthodox utilitarian. Even if your argument is a little more nuanced, in the sense that you see technological progress only as instrumentally valuable for sustaining larger populations at similar levels of wellbeing, this perspective is still somewhat naive because technological progress also has potentially devastating consequences such as climate change or AI risks. In that sense, one can actually make the case that the agricultural revolution was maybe the beginning of the end of the human race. So maybe, if there had been a way to grow our societies more deliberately and to optimize for wellbeing (rather than economic growth) from the beginning, it wouldn't have been such a bad idea? I just want to illustrate that the whole situation is not as clear-cut as you make it out to be.
Altogether, I would encourage you to keep more of an open mind regarding other perspectives. Both the post and this comment of yours make it seem like you might be quick to dismiss perspectives, and to be vocal about it, even when you have not really engaged with them deeply. This can come across as naive to a somewhat more knowledgeable person, which could put you at a personal disadvantage in the future. It could also contribute to bad epistemics in your community if the people you are talking to are less informed and, thus, unable to spot where you might be cutting corners. I hope you don't resent me for this personal side note; it's meant in a constructive spirit.
Just a short follow-up: I just wrote a post on the hedonic treadmill and would suggest it is an interesting concept to reflect on in relation to life in general:
I think it may be helpful to unpack the nature of perceived happiness and wellbeing a little more than this post does. The idea of hedonic adaptation is pretty well known; most of us have probably heard of the hedonic treadmill (see Brickman & Campbell, 1971). The work on hedonic adaptation points to the fact that perceived happiness and wellbeing are relative constructs that largely depend on the reference points that are invoked. To oversimplify a little: if everyone around me is badly off, I may already be happy if I am only slightly better off than they are. At the same time, I might be unhappy if I am fairly well off but everyone around me is much better off. As such, it is entirely reasonable to expect that hunter-gatherers, when asked about their life, feel quite good and happy about it as long as they don't feel like everyone else around them is much better off.
The conclusion of this post should not be that perceived happiness and wellbeing cannot be used to compare the effects of interventions, but that they simply measure something different from "objective measures". They aim to measure how people feel about their life in general as they compare it to others, not how they score on a particular metric in isolation. Whether you prefer one or the other approach largely depends on your perspective on what is valuable in life. Some people may find that making progress on metrics they consider particularly valuable is the way to go; others prefer a more self-organizing perspective where the affected people themselves are more involved in determining what is valuable.
In sum, this post seems a little confused about what the WELLBY debate is about. I can recommend the cited article for some idea of why something like a WELLBY approach may be interesting to consider, even if one doesn't like it at first glance.
Brickman, P., & Campbell, D. (1971). Hedonic relativism and planning the good society. In M. H. Appley (Ed.), Adaptation-level theory: A symposium (pp. 287–305). Academic Press. https://archive.org/details/adaptationlevelt0000unse_x7d9/page/287/mode/2up
If you take this as your point of departure, I think it's worth highlighting that the boundaries between community and organizations can become very blurry in EA. Projects pop up all the time, and innocuous situations can turn controversial over time. The examples of second-order partners in polyamorous relationships being (more or less directly) involved in funding decisions are a prime case in point. There is probably no intent or planning behind this, but conflicts of interest are bound to arise if the community is tight-knit and highly "interconnected".
While I think you have a good starting point for a discussion here, I would expect the whole situation not to be as clear-cut and easy as your argument suggests. So I really agree with the post that getting to a state most people are happy with will require some muddling through.
I kind of skimmed this post, so hopefully I am not making a fool of myself, but I think you didn't really address a key point raised by "critics": the challenges associated with the tendency toward centralization in EA.
There are basically two or three handfuls of people who control massive amounts of wealth, many of whom are entangled in a web of difficult-to-untangle relationships ranging from friendly to romantic. The denser this web is, the more difficult it is for people to understand what is going on. Are rejections or grants based on emotion or merit? It's simply harder to say the more complex the interactions are.
I think having friendships and relationships is great if the people involved are happy, but we have to develop appropriate means for dealing with the complexity of it all. For instance, in cooperatives, relationships are often highly valued and central to the whole experience of being part of the cooperative, and there are formalized mechanisms in place to afford systematic discussion of relationships and negotiation of mutually acceptable forms of organization. In EA, we lack this kind of structure. While some participatory islands might exist, there are often streamlined but opaque processes in place that allow a few people to make huge decisions affecting countless people, with very limited involvement from the community at large (or the people affected, for that matter). This becomes pretty tricky to justify as "EA" if you cannot demonstrate that the decisions being made are "above reproach" and not influenced by romantic relationships, in-group favoritism, or the like.
In sum, I broadly agree that having friendly or even romantic relationships within the EA community can have an upside, but I am very skeptical that our current ways of organizing can handle all the complexity entailed by strong versions of this. If we want deeper and more numerous relationships within the community, we should adapt our spaces and institutions to be ready for that. We owe it to ourselves and others to figure out how we can behave responsibly in this context.
Thanks for the response. I agree that this might not be "pleasant" to read, but I tried to make a somewhat plausible argument that illustrates some of the tensions that might be at play here. And I think this is what the comment I replied to asked for.
I would also argue that the argument "holding up" when we switch to related phenomena (at least sex-positive gay culture) could actually be an indicator that it points to some general underlying dynamics regarding "weirdness" in relation to orthodoxy. Weirdness tends to leave more room for deviance from established norms, which may attract people with tendencies toward rule-breaking. And as being gay has become much more accepted by the mainstream and less "weird", its potential for misuse by bad-faith actors has declined accordingly.
None of this should be interpreted as me having anything against polyamory or other practices currently perceived as weird per se. Actually, I find there are very interesting arguments in favor of polyamory, and I hold weird positions in many regards myself (e.g., being vegan). I have friends who are in polyamorous relationships. But given its status in the current environment, it still might be an attraction point for nefarious people simply by virtue of being "weird" and, thus, more open to misuse.
Just to explain why I downvoted this comment: I think it is pretty defensive and doesn't really engage with the key points of the response, which gave no indication that would justify a conclusion like "You seem to be prioritising the options based on intuition, whereas I prefer to use evidence from self-reports."
There is nothing in the capability approach as explained that would keep you from using survey data to decide which options to provide. On the contrary, I would argue it is more open and flexible for such an approach because it is less limited in the types of questions that can be asked in such surveys. The capability approach simply highlights that life satisfaction or wellbeing are not necessarily the only measures that can be used. For instance, you could also ask which functionings provide meaning to people's lives, which may be correlated with life satisfaction but is not necessarily the same thing (e.g., see the examples that were given).
I generally agree, but I think it might also be interesting to take a memetic perspective and look at the incentives and consequences that some of these ideas might cause as a product of their information content interacting with a dynamic environment. We tend to think of ourselves as the masters of our own behavior (e.g., that we have "free" will), but underneath it all, we may just be carriers for the information and rules encoded in genes and memes. In this view, "weirdness" relating to the distribution of memes may actually be an informative lens because it highlights novel dynamics that might be at play here.
I think the call for more self-awareness regarding weirdness, and how it might be viewed by other people, is quite important. However, I also think it has been discussed before, and quite a few people are aware of it. The recent situations have highlighted that we should maybe aim for clearer guidelines and rules for how to handle this in practice. But it's not easy to find appropriate tradeoffs between the different interests here (weird vs. non-weird).
If you downvote or disagree it’s quite helpful to explain why. I think this is a reasonable comment that provides a possible answer to the question that was posed. I would argue it makes a contribution to the discourse here and deserves to be engaged with.
For me it seems really difficult to disentangle whether downvotes are just "soldier mindset" or actually grounded in deliberate reasoning. Just downvoting without any kind of explanation seems like it should be reserved for clear-cut cases of "no contribution".
My point here was that the conclusion that can be drawn from your example is orthogonal to the question of how concentrated power is. Your example did not provide much evidence against the claim that concentration of power may be a contributing factor to the issue here. Feel free to reread my prior comment.
Disclaimer: I haven’t read the full article but I think a common position one could take here is the following:
Polyamory makes sexually promiscuous behavior permissible, and some might argue "virtuous", in a way that conflicts with conventional understandings of love and sexual relationships. Polyamory might not be "bad" in principle but could be a contributing factor to people feeling emboldened and morally justified in making sexual advances even when they are not appropriate. So the claim here is not that non-polyamorous people could not behave similarly, but that the rate of non-polyamorous people behaving in such ways is lower because of the greater "guilt" and "shame" associated with sexually promiscuous behaviors.
EDIT: After rethinking the formulation of the sentence above, I would change it to: the claim here is that polyamory might attract some people prone to predatory behaviors who feel like they can justify their own attitudes and behaviors this way. It could be easier to tell yourself that what you are doing is polyamory, and that's why other people are freaked out by it, than to deal with the fact that your behavior may be over the line.
I don't think this line of thinking should be dismissed outright, as I don't have any data that could back either side on this one. My gut says there could be something to the argument, but mostly in the sense that polyamory could cover a heterogeneous group of people who express positions across a spectrum here. Some or most polyamorous people may be more sensitive to such issues, but a few may really feel emboldened and justified in behaving in predatory ways.
But those are two very different communities / movements and I don’t think that the situations are similar. As you said, there is something like CEA and the EA movement also has the ambition to act in a somewhat coordinated fashion to solve the world’s biggest problems, whereas dance groups grow like wild flowers wherever and whenever enough people interested in dance come together regularly.
I am not saying that there is nothing to learn by comparing these different situations but this doesn’t seem to be an argument against the theory that centralization of power could have somehow contributed to creating an environment in which people felt badly treated or even harassed. Rather it seems to be more of an illustration that preventing such behaviors is a really difficult problem regardless of concentration of power.
Thank you for the thoughtful reply! I think the kind of debate you describe having had is exactly what we need to make sense of such emotionally difficult and complex topics.
Even if you come to the conclusion that other framings seem more useful to you, we can’t have confidence in such conclusions if they are made ex ante without deliberate engagement with the content. So thanks for doing that!
I personally think we may need to take more time to really understand and explore the problem we are facing here before focusing on solutions. I have the feeling that something broader than "isolated incidents" of sexual harassment is the right way to frame the problem. There have been "community-related" issues popping up, one after another, over the last few months. We should step back, try to look at the whole picture, and try to understand the mechanisms and drivers that lead to such events. I think this is happening to some degree, but it still feels like we could be doing it more explicitly and openly. I really think there's a lot at stake for the future development of the movement.
Sorry, this got meta pretty quickly…
While this is a simple comment, I am a little surprised by the downvotes and strong disagreement signaled. Could people who strongly disagree with this comment explain their reasoning?
Without having thought too much about this, I do think that it seems plausible to consider the effects that centralized decision making has on enabling or at least not discouraging these types of behaviors.
Thanks for all the answers so far! Collectively they were really helpful for getting a sense of how this discussion could be framed in a productive way. I am looking forward to pushing this conversation further; I think there is much to be gained here for all perspectives involved.
Thank you for engaging with the content in a meaningful way and also taking the time to write up your experience. This answer was particularly helpful for me to get the sense that a) there is a productive way that more discussion can be had on this topic and b) some ideas for how this might be framed. So thank you very much!
I am encouraged by the resonance my question has found here and think it is worthwhile to try to continue this conversation. I would like to work on a longer blog post in the future. Maybe let's connect around that and see if we can open the doors for more conversations.
I think the point of the metacrisis is to look at the underlying drivers of global catastrophic risks (mostly various forms of coordination problems related to the management of exponential technologies such as AI, biotech, and to some degree fossil fuels) and to address them directly rather than trying to solve each issue separately. In particular, there is a worry that solving such issues separately involves building surveillance and control capacities to manage the exponential tech, which then leads to dystopian outcomes because it requires more centralized power, and power corrupts. Because addressing one of these issues opens up the other, we are in a dilemma that can only be addressed holistically, as "the metacrisis".
That's my broad-strokes summary of the metacrisis and why it's argued to be bad. I think some EAs won't see "more centralized power and power corrupts" as a problem, because that's what we are building the AI singleton for... Others would say EAs are naive, or even mad, for thinking that could be an adequate solution, and perhaps dangerous for unilaterally and actively working toward it. I think there is more discussion to be had, in particular if one has short timelines.
It seems like this post was on your personal blog but not link-posted to the EA forum. It might make sense to consider doing that in the future for topics that are potentially EA relevant so that we can all get a quick sense of what the community is thinking about these topics.
Thanks! I am quite happy with the resonance the questions got, so I am considering writing a more comprehensive post on this topic in the future. It would be great to connect at some point and see if there are ways to push this forward together.
Thanks for your efforts. I am interested in similar topics, also from the perspective of sustainability transitions research, which seems well positioned to help address the "metacrisis" but is not really paying much attention to it right now. Feel free to reach out via PM if you are interested in connecting.
As a meta point for these types of tutorials, I would recommend a short section on alternative tools with a short discussion of pros and cons for each alternative.
Right now, this feels more like an advert for Excalidraw than an open exploration of the options out there.
Just wondering how it is possible to be so unsure about the impact of global health interventions but still have "enough" certainty regarding the positive impact of orgs like FLI? There is still lots that can go wrong with FLI's interventions. Maybe that's just the work that tips us into an astronomical suffering scenario?
It seems rather arbitrary how you make those decisions. Imo, for this to have any value beyond personal speculation, you should at least start to make your reasoning process explicit in more detail and also express the range of uncertainty you see. Maybe use conditionals as well to cover different scenarios.
Valuing Bill Gates's philanthropy at 0 outright, without justification, does not seem plausible or rigorous to me.
A possible name that I don’t instantly hate just popped into my mind: aspirational altruism.
I don't love the AA shorthand, but the connotation seems apt to what this movement is about: really aspiring to do the most good while recognizing that it is an ideal that can never be fully reached.
Some variations on this theme seem possible as well like ambitious altruism or daring altruism.
But, yeah, I wouldn't hold my breath for a rebranding of EA; that ship has mostly sailed. Maybe new adjacent communities will pop up, though.
Thanks for the reply Michael.
I just wanted to note that I didn't mean to imply or recommend that the fund managers should be paid, but rather that there is a reason fund managers usually are paid, and I think this reason was somewhat neglected in the post. More of an argument or investigation would be helpful to find out whether the current arrangement is well thought out.
I think my main concern is that funds tend to centralize funding pretty strongly and this can have positive but also negative consequences in certain situations. Imagine a corrupt fund manager having access to / leverage over hundreds of millions of dollars.
Paul Christiano’s suggestion regarding donor lotteries may be an interesting approach in this regard because it makes the whole thing less interesting for career criminals.
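To make the donor lottery mechanism concrete, here is a minimal sketch (the names, amounts, and function are my own illustration, not taken from any actual implementation): each donor's chance of winning the whole pot is proportional to their contribution, so everyone's expected allocation equals what they put in, but only the single winner has to do the grant research.

```python
import random

def donor_lottery(contributions, seed=None):
    """Draw one donor with probability proportional to their
    contribution; the winner allocates the entire pot."""
    rng = random.Random(seed)
    donors = list(contributions)
    winner = rng.choices(donors, weights=[contributions[d] for d in donors])[0]
    return winner, sum(contributions.values())

# Hypothetical contributions: alice wins with probability 0.5,
# bob with 0.1, carol with 0.4.
contributions = {"alice": 5_000, "bob": 1_000, "carol": 4_000}
winner, pot = donor_lottery(contributions, seed=0)
```

The relevant property for the concern above is that a participant gains leverage over the pooled money only if they actually win, which is argued to make the setup less attractive to someone scheming to capture funds than a standing fund they can influence continuously.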
Having said all that I am not overly concerned that this is a pressing issue for the community but still something that should be in the (back of the) mind of people who design such management systems.
I just wanted to raise a short critique that came to me while reading this section:
Investment funds regularly take a management fee (hedge funds, for example, typically take 1–4% of invested funds each year). Whereas the charitable funds we recommend don’t take any fees for their work.
While I certainly understand the point, a little more justification for why this is a good arrangement seems desirable when viewed through an economics lens. The reason there are management fees is so that the people running the fund have an economic incentive to stay "alive". Ideally, the management fee would be conditional on the profits made from an investment, so as to align the interests of the management team and the investors. But even where there is a simpler arrangement, having economic incentives in place helps to align interests as long as the management team depends on them.
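To illustrate the alignment point, here is a toy sketch of the conventional "2 and 20" hedge fund fee structure (the function and numbers are illustrative, not a claim about any real fund): the flat component keeps the fund operating, while the performance component only pays out when investors actually profit.

```python
def annual_fee(aum, annual_return, flat_rate=0.02, carry=0.20):
    """Toy '2 and 20' fee: 2% of assets under management plus
    20% of profits (profits only, so no carry is earned on losses)."""
    profit = max(aum * annual_return, 0.0)
    return aum * flat_rate + carry * profit

# In a good year the manager's pay rises with investor gains...
good_year = annual_fee(1_000_000, 0.10)   # flat 20k + 20% of 100k profit
# ...in a bad year only the flat component is charged.
bad_year = annual_fee(1_000_000, -0.05)   # flat 20k only
```

The carry term is what ties the manager's income to investor outcomes; a zero-fee charitable fund has no such term, which is why the alignment has to come from somewhere else (reputation, trust, oversight).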
So, I guess my point is: what we are doing here in the donation space seems to be a very trust-based arrangement, and we would need to justify the mechanisms that ensure the interests of management and investors remain aligned if the management team does not depend on the fund surviving. I am slightly worried about this after the whole SBF and FTX debacle. There is/was a lot of goodwill toward people who seem to have a lot of money and claim they want to do good with it. How do we make sure that not all of our eggs are in one basket and that potential downsides in the case of betrayal or corruption are limited?
Thanks for the thoughtful post. I think you are onto something interesting here!
I like the move of trying to frame ethics in pragmatic terms (i.e., focused on what we actually are and could or should be doing rather than on a priori assumptions) and would argue that your argument hits on something really important in this regard. Imo, there is much to learn from pragmatist philosophy in further elaborating on your insight.
Having said that, I am not sure that assigning new meanings to already heavily used terms like "virtue ethics" or "consequentialism" is the right way to go here. Imo, people are bound to be confused by this. Maybe it would make sense to frame it slightly differently by creating a "new model" for ethics that consists of the components you identify and then simply state that "component Y could be informed by prior work on X" where Y is a useful term for the component, and X is one of the already used terms like virtue ethics.
Hope this helps you flesh out this idea further! Feel free to reach out to me if you want to discuss.
Yeah, I think this question is pretty interesting and worth pondering at least a little.
I think that one could take the perspective that the whole FTX situation is presenting EA in a bad light BECAUSE FTX has had a very visible connection to EA with most people (including me) being very enthusiastic about this. I think it’s safe to say that few were considering potential downsides of such associations. It seems like this whole situation can become a watershed moment for the whole community to look at the sources of where “our” money is coming from more critically. Maybe we need new vetting mechanisms or at least be more careful about “praising” large donors.
It’s gonna be interesting to see how this develops!
Thanks for the interest. I am considering running some workshops to start bringing the communities a little closer together. If you are interested in that kind of thing, feel free to reach out. I would love to connect with like-minded fellows :)
I think the explanation for this happening is pretty simple: people writing academic articles (me included) have cited EA Forum posts, and thus Google is finding them.
For better or worse, I am pretty sure there is no(t yet a) systematic attempt to integrate the EA Forum in the scholarly debate...
I think the main drawback of this approach is that there is no “average person”. Every person is a unique combination across a broad range of characteristics.
The classic example is the story of how the US Air Force first designed fighter jet cockpits to fit the average fighter pilot but got complaints from pilots that this didn't work well for them. Upon investigation, it turned out that not a single pilot in the entire Air Force matched the average profile used to design the cockpit. They changed their strategy and now allow for multiple ways of adjusting the cockpit to the individual characteristics of each pilot. The rest is history.
I think what this tells us is that there are indeed many possibilities for how to be in this world, and we all have a unique vantage point on life that no one has but us. Thus, it may not really be about "what should I be" but "what can I offer".
While I have not done a deep dive into the literature and checked the claims in depth, afaik ACT counts as one of the more evidence based psychotherapies with several hundred studies including RCTs demonstrating good effects.
There is also a whole scientific paradigm “contextual behavioral science” based on “functional contextualism” which grounds the development of ACT. This is one of the clearest theoretical foundations for a scientific field I have come across (i.e., it’s a coherent account grounded in Pragmatism) and should be refreshing to have a look at for people interested in philosophy of science as well as behavioral science in general.
I am pretty bullish on ACT and would recommend anyone interested in mental health to have a good look for aspects that might work for them.
What I would maybe add to the post is a short description of the ACT Matrix, a thinking tool that can be useful for organizing thoughts about problematic situations. While it certainly depends on the person, some friends I have shown it to found it easy to grasp and very helpful for navigating difficult situations. It's not a panacea but may be a good starting point for people who appreciate a hands-on learning approach.
I also recommend the tools section in A Liberated Mind. It should be pretty relatable for people who have done, or are generally interested in, CFAR workshops and rationality techniques.
Thanks for writing the post!
Thanks for the interesting post! I just wanted to ask if there are any updates on these research projects? I think work along these lines could be pretty promising. One potential partner for cooperation could be clearerthinking.org. They already have a survey tool for intrinsic values and this seems to hit in a similar direction.
Also a big thank you from my side. It really feels like an open and honest account, and to me it seems to shine a light on some very important challenges that the EA community faces in terms of making the best use of available talent. I hope that your story can inspire some voices in the community to become more self-reflective and critical about how some of these dynamics are playing out, right under our own noses. For a community that is dedicated to doing better, being able to learn from stories like yours seems like an important requirement.
In this light, I would love to see comments (or even better follow-up posts) address things like what their main takeaways are for the EA community. What can we do to help dedicated people who are starting out to make better decisions for themselves as well as the community?
Thanks for the thoughtful answer. I agree that it's not clear that it is worse than other alternatives, in my comment I didn't give a reference solution to compare it to after all.
I just wanted to highlight the potential for problems that ought to be looked at while designing such solutions. So, if you consider working more on this in the future, it might be fruitful to think about how it would influence such feedback loops.
In essence, I think the act of adding quantitative measures may lend a veil of "objectivity" to assessments of people's work, which is intrinsically vulnerable to the "success to the successful" feedback loop.
Based on your comment, I had another look at the specific criteria of the rubric and agree that it could help counteract something like the dynamic I outlined above. However, it would still have to be applied with care, recognizing the possibility of such dynamics.
The main problem I wanted to highlight is that something like this might obscure those dynamics and might be employed for political purposes, such as justifying existing status hierarchies that might be merely circumstantial and not based on merit.
Thanks for the interesting post.
One consideration that comes to my mind is whether this type of evaluation further reinforces a "success to the successful" feedback loop, which is inherently sensitive to initial conditions. People might be able to produce great work given the right support and conditions but not have them in the beginning, while someone else gets lucky, is picked up, and then receives more support, which reinforces further success.
Thus, it seems generally pretty hard to use this kind of system to achieve "optimal" outcomes. Or rather, let's say you have to be careful about how you implement such rating systems and stay aware of such feedback loops.
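The sensitivity to initial conditions can be illustrated with a Polya-urn-style toy model (my own simplification, not something from the post): two equally capable researchers start out, and each new grant goes to one of them with probability proportional to the grants they already hold.

```python
import random

def cumulative_advantage(steps, seed):
    """Toy 'success to the successful' dynamic: each round, one of two
    equally able researchers receives a grant with probability
    proportional to the grants they already hold. Returns researcher 0's
    final share of all grants."""
    rng = random.Random(seed)
    grants = [1, 1]  # identical starting track records
    for _ in range(steps):
        if rng.random() < grants[0] / (grants[0] + grants[1]):
            grants[0] += 1
        else:
            grants[1] += 1
    return grants[0] / (grants[0] + grants[1])

# The long-run share of researcher 0 varies wildly across random seeds,
# even though the two researchers are identical by construction.
shares = [cumulative_advantage(500, seed) for seed in range(10)]
```

The point of the sketch is that the final split never reflects a difference in merit (there is none by construction); it is driven almost entirely by which researcher happened to get picked up early.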
What do you think about this?
I just wanted to leave a quick endorsement for the concept of "local priorities research". One thing that is easy to forget is that at least some of the best opportunities for doing good don't just "exist"; they are created by entrepreneurial efforts and "made to be". Thus, simply directing people to the most impactful opportunities available at the time is likely not the best long-term strategy. Rather, it seems logical that we also have to invest part of our resources into developing our capacity to make the best use of available resources in a specific location and to create opportunities that didn't exist before. So thank you very much for putting this idea on the EA concept map; I hope it receives some of the attention it deserves!
One question that came to my mind at multiple points in the post was what your angle for writing it was. While the post seems written with the goal of demarcating and pushing "your brand" of radical social justice as distinct from EA, you clearly agree with the core "EA assumption" (i.e., that it's good to use careful reasoning and evidence to try to make the world better), even though you disagree on certain aspects of how to best implement this in practice.
Thus, I would really encourage you to engage with the EA community in a collaborative and open spirit. As you can tell from the reactions here, criticism is well appreciated by the EA community if it is well reasoned and articulated. Of course, there are some rules to this game (i.e., as mentioned elsewhere, you should provide justification for your beliefs), but if you have good arguments for your position you might even effect systemic change in EA ;)
Thanks for the quick reply!
Yeah, an article or podcast on the framework and possible pitfalls would be great. I generally like ITN for broad cause assessments (i.e., is this interesting to look at?), but the quantitative version that 80k uses does seem to have some serious limitations if one digs more deeply into the topic. I would be mostly concerned about people new to EA either placing false confidence in the numbers or being turned off by an overly simplistic approach. But you obviously have much more insight into people's reactions, and I am looking forward to how you develop and improve the content in the future!
Thanks for the post, very interesting initiative! However, this investigation seems to be at least slightly in tension with other Founders Pledge investigations into "giving later" options such as DAFs. Could you elaborate on how these projects relate and where Founders Pledge's priorities are pointing?
I know this is a late reply to an old comment, but it would be great to know to what extent you think you have addressed the issues raised. Or, if you did not address them, what was your reason for discarding them?
I am working through the cause prio literature at the moment, and I don't really feel that 80k addresses all (or most) of the substantial concerns raised. For instance, the assessments of climate change and AI safety are great examples where 80k's conclusions can be attacked quite easily, given conceptual difficulties in the underlying cause prio framework/argument.
Thanks for the counterpoint, I think that's an interesting perspective and in the abstract valid.
Nevertheless, as far as I can tell, in practice these discussions don't seem to focus on assessing whether "other people spend too much now and not enough later" beyond the general assertion that people tend to discount the future and the conclusion that, thus, there are opportunities to gain comparatively by investing.
However, what I haven't really seen are good arguments that people are actually spending too much now and not enough later, or models which capture this aspect in some way. In another comment I have outlined in more detail why I think it is important to explicitly consider the "nature" of problem solving when making such analyses and decisions.
Long story short, I think current models of giving now vs. giving later are way too simple, and additional considerations about problem solving in general lead me to believe that giving later should not become "the default" for longtermist giving - at least until we have set up an appropriate infrastructure to effectively identify and address problems as they arise. However, I don't want to misrepresent the position of giving-later advocates, who have often acknowledged that giving now in the form of "investments" (as I am suggesting) is somewhat exempt from the discussion. I agree that there might be substantial room for investments as part of wise philanthropic activity; I just don't think it's a winning strategy by itself. Thus, what I mostly seem to disagree with is the framing and emphasis of the debate.
Circling back to my comment on free riding: simply postponing giving into the future under the assumption that other people will figure out what to do by then seems dangerous unless appropriate measures are taken to ensure that actual progress happens at a reasonable rate, as the world could also become much worse (e.g., climate change). However, postponing giving into the future makes the individual who is postponing comparatively better off in the future, which would be a plus. Thus, there is an interesting dilemma here, where altruists who are not 100% aligned could get into conflict about who should invest when, and how much, to maximize overall expected value.
To avoid such conflicts as much as possible, care should be taken to communicate why specific decisions to give now or later were made and how this is expected to affect the community as a whole. For instance, I would expect an organization considering giving later at a large scale, like Founders Pledge, to clearly articulate their strategy and what the EA community can expect from them now and in the future, in a way that can be checked for value alignment over time. Otherwise, it seems totally plausible that opaque behavior could be perceived as free riding on the investments of the community as a whole.
To me that notion actually seems a little bit paradoxical, because the notion of giving later seems to imply that there will be better opportunities in the future, but at the same time we seem to expect less giving then. Economics 101 would suggest that better opportunities attract more buyers. Thus, wouldn't we need some other type of argument, one which considers the nature of the problem under consideration, to justify giving later?
Thank you for raising some additional considerations against giving later. I think this is really valuable for the ongoing discussion that seems to be strongly tilted in favor of investing and giving later.
Even beyond your argument for movement growth, there seem to be many other intuitive considerations where similar arguments could be made. For instance, consider that "converting" longtermists is an activity that requires not only money but also time and room for growth.
You need time to convert dollars into results, given that room for more funding is generally strongly limited by the current allocation of resources in the world. One could model this as a game where at each time point t you can effectively invest an amount x into cause y, where x is a function of the cumulative money spent on cause y. It could be plausible to model this as a Gaussian function (i.e., a bell curve): money invested in the beginning leads to strong growth in room for more funding in the next round, which then declines again as full saturation (i.e., all money that could reasonably be spent is being spent) is approached. Interestingly, this is an argument both for giving now and for giving later, as there is limited room where money can be spent effectively at any point.
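A minimal sketch of this toy model, just to make the dynamic concrete. All parameters here (peak location, curve width, maximum absorption rate, per-round budget) are made-up illustrations, not estimates of any real cause:

```python
import math

def absorption_capacity(cumulative_spent, peak=50.0, width=20.0, max_rate=10.0):
    """Room for more funding per round as a bell curve over cumulative spend:
    early money builds capacity, which later saturates and declines."""
    return max_rate * math.exp(-((cumulative_spent - peak) ** 2) / (2 * width ** 2))

def simulate(budget_per_round, rounds=30):
    """Each round, deploy no more than the cause can currently absorb.
    Returns the total money effectively deployed over all rounds."""
    cumulative = 0.0
    for _ in range(rounds):
        deployable = min(budget_per_round, absorption_capacity(cumulative))
        cumulative += deployable
    return cumulative

# With a steady budget per round, early spending is throttled by low
# absorption capacity; as cumulative spend approaches the peak, the
# budget itself becomes the binding constraint.
total_deployed = simulate(budget_per_round=5.0)
```

The point of the sketch is just that under these (assumed) dynamics, the timing of spending changes how much can ever be deployed effectively, which is exactly the complication missing from models that treat money as instantly spendable.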
Going beyond this "simple" view, it would also be interesting to model how problems grow over time when they are not addressed. The most obvious example is climate change: if a US president in the 80s could somehow have been convinced to shift policy towards renewables, the problem would likely have required far fewer resources overall. This indicates that the money required to solve a problem is a function of the time at which it is discovered and how many resources are directed to it over time.
I am not a mathematician, but if any of this is remotely plausible, I am not sure the thinking so far has considered such complications (at least I haven't seen models that capture these dynamics, though I also haven't searched in depth). My intuition tells me that integrating such considerations could radically tip the balance toward a strong preference for giving as early as is reasonable, and provide a good argument for investing in infrastructure that would help us identify and address problems effectively as they emerge.
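The cost-growth intuition can be made concrete with a back-of-the-envelope sketch. Assume, purely for illustration, that an unaddressed problem's remediation cost compounds at a fixed annual rate; the growth rate and base cost below are invented numbers, not estimates for climate change or anything else:

```python
def total_cost_to_solve(start_year, growth_rate=0.05, base_cost=1.0):
    """If a problem's remediation cost compounds at `growth_rate` per year
    while unaddressed, delaying action from year 0 to `start_year`
    multiplies the bill accordingly. Illustrative numbers only."""
    return base_cost * (1 + growth_rate) ** start_year

early = total_cost_to_solve(start_year=0)   # act immediately
late = total_cost_to_solve(start_year=40)   # wait four decades
# At 5% annual growth, waiting 40 years makes the problem roughly 7x as
# expensive, a hurdle that investment returns on the withheld money would
# have to outpace for patience to come out ahead.
```

Under this assumption, giving later only wins if the return on invested donations beats the problem's own growth rate, which is precisely the comparison I'd like to see modeled explicitly.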
This could be an interesting topic for a PhD student with simulation chops, or even a benchmarking platform where different agent strategies can compete against each other.
See Ketter, W., Peters, M., Collins, J., and Gupta, A. 2016. "Competitive Benchmarking: An IS Research Approach to Address Wicked Problems with Big Data and Analytics," MIS Quarterly (40:4), p. 34.
Thanks for the post, it is interesting to see how other people are thinking about this question, and I see it as valuable, although I am also somewhat critical of the whole endeavor.
Maybe I am too naive or not thinking deeply enough, but with all of these giving now vs. giving later discussions I am somewhat worried about the mindset underlying such considerations. While I appreciate people investing time and resources into trying to understand how to have the biggest impact, taking only the perspective of a single investor comes across as somewhat narrow-minded and selfish. What you basically seem to be calculating is the optimal degree of free riding that you can get away with to maximize the impact of your own dollars. Maybe it's good to know where that optimal point lies, but I am somewhat worried about this becoming the underlying philosophy of longtermist giving.
For instance, longtermism is itself a rather new idea, and people thinking about how they can invest as little as possible seems... yes, to some degree rational, but also pretty risky in terms of ensuring success, given the many ways to fail that exist in our world. I note that "capacity building" interventions are often explicitly excluded from these giving-later considerations, but giving off the whole vibe of "let's free ride as much as possible" doesn't bode well for such initiatives either. There is something like image, perception, and momentum, and it really feels like this is strongly neglected in these kinds of discussions.
Having said that, I am in favor of longtermist thinking, but I would encourage taking a broader "community level" perspective. Wouldn't it be more effective to think about optimal rates of investment into community growth, then look for ways to reach those numbers and distribute them fairly, rather than focusing on the best outcome for an individual investor and then circling back to what this means for the community? After all, your whole calculation depends on the possible return on investment you can get from giving now vs. giving later. If we don't have a clear sense of what that RoI is right now, how can you make good individual decisions?
Open to be shown the errors in my thinking!
Some simple but possibly relevant considerations against patient philanthropy that come to my mind:
- Monetary investments are not necessarily value neutral but might actively cause harm, potentially over long time horizons (e.g., investments contributing to climate change). How do you account for this possible "mortgage" of letting things run without taking action?
- Climate change is a good example of a problem that could have been solved most effectively with early but heavy investment. How do you guard against missing those opportunities if your general strategy is to "wait things out"? If there is no pressure to "show results", overall outcomes are going to depend very much on the estimates of the fund managers, which is a crucial failure mode.
- What is the actual value of having loads of money in the bank? There seem to be severe limits on how much money can effectively be used in a short time frame (without resorting to things like large indiscriminate payouts, which are most likely not the most effective use of money). It already doesn't seem to make sense to spend too much even if your goal is to get rid of money (see Open Phil). Thus, it seems like we actually have to invest in capacity building to be able to effectively absorb more funding in the future - which is somewhat contradictory to an explicitly patient perspective. I think this even holds for catastrophes, which have been mentioned elsewhere: good preparation is likely to be cheaper than getting out the checkbook at the last minute (e.g., see Taiwan's response to Covid, or climate change).
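The first bullet's "mortgage" worry can be put in toy-model form: if the invested capital itself contributes some annual harm, the effective compounding rate of patient philanthropy shrinks. Both rates below are invented for illustration, not estimates:

```python
def patient_value(principal, years, market_return=0.05, harm_rate=0.02):
    """Net value of waiting: capital compounds at market_return, but if the
    underlying investments cause harm worth harm_rate of their value each
    year, the effective growth rate is the difference. Illustrative rates."""
    effective = market_return - harm_rate
    return principal * (1 + effective) ** years

# At 5% returns with 2% annual harm, a century of patience yields roughly
# 19x, not the roughly 131x that naive compounding (harm_rate=0) suggests.
with_harm = patient_value(1.0, 100)
naive = patient_value(1.0, 100, harm_rate=0.0)
```

The takeaway is only that the headline compounding argument for patience is quite sensitive to how value neutral the investment vehicle actually is.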
That's not to say it isn't worthwhile to explore ways one can profit from patience, but I would personally prefer a term like "wise philanthropy" as a more appropriate goal that respects a more holistic perspective.
Thanks for writing this post, very interesting! I haven't read all of the comments but wanted to share one point that came to me over and over again while reading the post. Apologies if it has already been mentioned in another comment.
It seems like you assume a strong (and relatively simple) causal chain from genetics to malevolent traits to bad behavior. I think this view might make the problem seem more tractable than it actually is. Humans are complex systems nested in other complex systems, and everything is driven by complex, interacting feedback loops. Thus, to me it seems very difficult to untangle causality here. It would be much more intuitive to me to think about malevolence as a dynamic phenomenon that emerges from a history of interactions, rather than as a static personality trait. If you accept this characterization as plausible, the task of screening for malevolence in a valid and reliable way seems much more difficult than just designing a better personality test. I think the main difference between those two perspectives is that in the simple case you have a lot of corner cases to keep in mind (e.g., what if people have malevolent traits but actually want to be good people?), whereas the complex case is more holistic but also much more, well, complex, and likely less tractable.
Nevertheless, I agree with the general premise of the post that mental health is an important aspect in the context of X/S-risk related activities. I would go even further and argue that mental health in the context of X/S-risk related activities is a very pressing cause area that would score quite well in an ITN analysis. Thus, I would really love to see an organization or network set up and dedicated to the serious exploration of this area, because existing efforts in the mental health space seem to focus only on happiness in the context of global development. If someone interested in this topic reads this, don't hesitate to reach out; I would love to support such efforts.
Thank you for the pointer. I updated the post to correct the typo.
Maybe I am extending Khorton's point, but in addition to this simple calculation it might be interesting to consider the marginal counterfactual impact of your operations. I imagine that most of the $300k raised would otherwise have been raised for other longtermist causes like the EA Long-Term Future Fund or similar donation opportunities.
Do you have some reasonable evidence for actually having "grown the pie" and added to the overall donation volume?
Otherwise, your marginal impact would be the expected value difference relative to other donation opportunities like EA Funds, which I expect to be somewhat close to zero (e.g., you make the analogy to EA Funds yourself in the post).
If you have any questions or concerns regarding NEAD, also feel free to comment on this post; we will keep checking and answer at somewhat regular intervals :)