Andreas Mogensen's "Maximal Cluelessness"

post by Pablo_Stafforini · 2019-09-25T11:18:35.651Z · score: 45 (15 votes) · EA · GW · 21 comments

This is a link post for https://globalprioritiesinstitute.org/wp-content/uploads/2019/Mogensen_Maximal_Cluelessness.pdf

Andreas Mogensen, a Senior Research Fellow at the Global Priorities Institute, has just published a draft of a paper on "Maximal Cluelessness". Abstract:

I argue that many of the priority rankings that have been proposed by effective altruists seem to be in tension with apparently reasonable assumptions about the rational pursuit of our aims in the face of uncertainty. The particular issue on which I focus arises from recognition of the overwhelming importance and inscrutability of the indirect effects of our actions, conjoined with the plausibility of a permissive decision principle governing cases of deep uncertainty, known as the maximality rule. I conclude that we lack a compelling decision theory that is consistent with a long-termist perspective and does not downplay the depth of our uncertainty while supporting orthodox effective altruist conclusions about cause prioritization.
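The maximality rule mentioned in the abstract can be sketched computationally. Under deep uncertainty, beliefs are represented by a set of probability functions (a "representor") rather than a single sharp credence function, and an act is permissible iff no alternative has strictly higher expected value under every function in the set. The following is a minimal illustrative sketch, not anything from the paper itself; the acts, utilities, and credence functions are invented for illustration:

```python
# Illustrative sketch of the maximality rule under deep uncertainty.
# Beliefs are a set ("representor") of probability functions over states;
# an act is maximal (permissible) iff no rival act has strictly higher
# expected value under *every* probability function in the representor.

def expected_value(utilities, probs):
    """Expected utility of an act given one probability function."""
    return sum(u * p for u, p in zip(utilities, probs))

def strictly_dominates(a, b, representor, acts):
    """True if act a beats act b under every credence function."""
    return all(
        expected_value(acts[a], probs) > expected_value(acts[b], probs)
        for probs in representor
    )

def maximal_acts(acts, representor):
    """Acts not strictly dominated by any rival under all credences."""
    return [
        b for b in acts
        if not any(strictly_dominates(a, b, representor, acts)
                   for a in acts if a != b)
    ]

# Two states of the world; utilities of each act in each state (made up).
acts = {
    "donate_AMF": [10.0, -5.0],   # great if indirect effects are benign
    "make_a_wish": [1.0, 1.0],    # modest but robust
}
# Deep uncertainty: credences over the states range widely.
representor = [[0.9, 0.1], [0.3, 0.7]]

print(maximal_acts(acts, representor))  # → ['donate_AMF', 'make_a_wish']
```

With a sharp credence function the two acts would always be strictly ranked; with the wide representor neither dominates the other, so the maximality rule permits both — which is exactly the permissiveness the paper argues is in tension with orthodox EA priority rankings.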

21 comments

Comments sorted by top scores.

comment by Pablo_Stafforini · 2019-09-25T12:40:03.071Z · score: 21 (11 votes) · EA · GW

Mogensen writes (p. 20):

We might be especially interested in assessing acts that are directly aimed at improving the long-run future of Earth-originating civilization... These might include efforts to reduce the risk of near-term extinction for our species: for example, by spreading awareness about dangers posed by synthetic biology or artificial intelligence.
The problem is that we do not have good evidence of the efficacy of such interventions in achieving their ultimate aims. Nor is such evidence in the offing. The idea that the future state of human civilization could be deliberately shaped for the better arguably did not take hold before the work of Enlightenment thinkers like Condorcet (1822) and Godwin (1793). Unfolding over timescales that defy our ability to make observations, efforts to alter the long-run trajectory of Earth-originating civilization therefore resist evidence-based assessment, forcing us to fall back on intuitive conjectures whose track record in domains that are amenable to evidence-based assessment is demonstrably poor (Hurford 2013). This is not a case where it can be reasonably claimed that there is good evidence, readily available, to constrain our decision making.

These concerns are forceful, but don't seem to generalize to all intervention types aimed at improving the long-term future. If one believes that the readily available evidence is insufficient to constrain our decision making, one can still accumulate resources to be disbursed at a later time when good enough evidence emerges. Although we may at present be radically uncertain about the sign and the magnitude of most far-future interventions, the intervention of accumulating resources for future disbursal does not itself appear to be subject to such radical uncertainty.

comment by Pablo_Stafforini · 2019-09-26T08:50:06.169Z · score: 8 (6 votes) · EA · GW

Robin Hanson, Paul Christiano, and others have made similar points in the past.

Hanson (2014):

This post describes attempts to help the future as speculative and non-robust in contrast to helping people today. But it doesn’t at all address the very robust strategy of simply saving resources for use in the future. That may not be the best strategy, but surely one can’t complain about its robustness.

Christiano (2014):

There is some debate about this question today, of whether there are currently good opportunities to reduce existential risk. The general consensus appears to be that serious extinction risks are much more likely to exist in the future, and it is ambiguous whether we can do anything productive about them today.

However, there does appear to be a reasonable chance that such opportunities will exist in the future, with significant rather than tiny impacts. Even if we don’t do any work to identify them, the technological and social situation will change in unpredictable ways. Even foreseeable technological developments over the coming centuries present plausible extinction risks. If nothing else, there seems to be a good chance that the existence of machine intelligence will provide compelling opportunities to have a long-term impact unrelated to the usual conception of existential risk (this will be the topic of a future post).

If we believe this argument, then we can simply save money (and build other forms of capacity) until such an opportunity arises.

comment by reallyeli · 2019-10-06T21:29:04.254Z · score: 3 (2 votes) · EA · GW

By accumulating resources for the future, we give increased power to whichever future decision-makers we bequeath these resources to (whether those decision-makers are us in 20 years, or our descendants in 200 years).

In a clueless world, why do we think that increasing their power is good? What if those future decision makers make a bad decision, and the increased resources we've given them mean the impact is worse?

In other words, if we are clueless today, why will we be less clueless in the future? One might hope cluelessness decreases monotonically over time, as we learn more, but so does the probability of a large mistake.

comment by Milan_Griffes · 2019-10-07T04:27:08.621Z · score: 4 (2 votes) · EA · GW

Indefinite accumulation of resources probably also increases the chance of being targeted by resource-seeking groups with military & political power.

comment by Halstead · 2019-10-06T05:50:07.327Z · score: 8 (5 votes) · EA · GW

I'm pretty sceptical of arguments for cluelessness. Some thoughts:

  • Knightian uncertainty never seems rational to me. There are strong arguments that credence functions should be sharp. Even if you can only bound your credences with very broad intervals, it seems like you would never be under Knightian uncertainty given your information - your credal state is always somewhere between 0 and 1, and surely your mean estimate will differ between different problems.
  • Similar arguments for complex cluelessness also seem to apply to my own decisions about what would be in my rational self-interest to do. Nevertheless, I will not be wandering blindly into the road outside my hotel room in 10 minutes.
  • I don't see how you could make a general argument for cluelessness with respect to all decisions made by the community. You could make an argument that the sign of the expected benefits of EA actions is much more uncertain than has been acknowledged. I don't see how this could ever generalise to an argument that all of our decisions are clueless, since the level of uncertainty will always be almost entirely dependent on the facts about the particular case. Why would uncertainty about the effects of AMF have any bearing on uncertainty about the effects of MIRI or the Clean Air Task Force?
  • Cluelessness seems to imply that altruists should be indifferent between all possible actions that they can take. Is this implication of the view embraced?
  • Related to the above, in the AMF vs Make-A-Wish Foundation example, I don't actually agree that we are as uncertain as suggested. e.g. you list studies citing different effects of life saving on fertility, saying "Unfortunately, the studies just noted are of different kinds (cross-country comparisons, panel studies, quasi-experiments, large-sample micro-studies), with different strengths and weaknesses, making it difficult to draw firm conclusions". This seems to be asking for the reaction "what are we to do in the face of all this methodological complexity?" But an economist would actually have an answer to this - cross-country comparisons with cross-sectional data are out of fashion, for example.
  • Overall, arguments about cluelessness seem to merely reassert that the world is complex and we should think carefully before acting. I don't see how it points to some deep permanent feature of our epistemic situation.
comment by Max_Daniel · 2019-10-06T16:04:11.272Z · score: 13 (4 votes) · EA · GW
Similar arguments for complex cluelessness also seem to apply to my own decisions about what would be in my rational self-interest to do. Nevertheless, I will not be wandering blindly into the road outside my hotel room in 10 minutes.

I appreciate you making this point, as I think it's interesting and I hadn't come across it before. However, I don't currently find it that compelling, for the following reasons [these are sketches, not fully fleshed out arguments I expect to be able to defend in all respects]:

  • I think there is ample room for biting the bullet regarding rational self-interest, while avoiding counter-intuitive conclusions. To explain, I think that the common sense justification for not wandering blindly into the road simply is that I currently have a preference against being hit by a car. I don't think the intuition that it'd be crazy to wander blindly into the road is driven by any theory that appeals exclusively to long-term consequences on my well-being, nor do I think it needs such a philosophical foundation. I think a theory of self-interest that just appeals to consequences for my time-neutral lifetime wellbeing is counter-intuitive and faces various problems anyway (see e.g. the first part of Reasons and Persons). If it were the case that I'm clueless about the long-term consequences of my actions on my wellbeing, I think that would merely be yet another problem for the rational theory of self-interest; but I was inclined to discard that theory anyway, and don't think that discarding it would undermine any of my common sense beliefs. So while I agree that there might be a problem analogous to cluelessness for philosophers who want to come up with a defensible theory of self-interest, I don't think we get a common-sense-based argument against cluelessness.
  • However, I think one may well be able to dodge the bullet, at least to some extent. I think it's simply not true that we are as clueless about our own future wellbeing as we are about the consequences of our actions for long-run impartial goodness, for the following reasons:
    • Roughly speaking, my own future predictable influence over my own future wellbeing is much greater than my own future influence over impartial goodness. Whatever happens to me, I'll know how well off I am, and I'll be able to react to it; something pretty drastic would need to happen to have a very large and lasting effect on my wellbeing. By contrast, I usually simply won't know how impartial goodness has changed as a result of my actions, and even if I did, it would often be beyond my power to do something about it. If the job I enthusiastically took 10 years ago is now bad for me, I can quit. If the person I rescued from drowning when they were a child is now a dictator wrecking Europe, that's too bad but I'm stuck with it.
    • The time horizon is much shorter, and there is limited opportunity for the indirect effects of my actions to affect me. Suppose I'll still be alive in 60 years. It will, e.g., still be true that my actions will have far-reaching effects on the identities of people that will be born in the next 60 years. However, the number of identities affected, and the indirect effects flowing from this will be much more limited compared to time-horizons that are orders of magnitudes longer; more importantly, most of these indirect effects won't affect me in any systematic way. While there will be some effects on me depending on which people will be born in, say, Nepal in 40 years, I think the defence that these effects will "cancel out" in expectation works, and similarly for most other indirect effects on my wellbeing.
    • Maybe most importantly: I think that a large part of the force of the "new problem of cluelessness" (i.e., instances where the defence that "indirect effects cancel out in expectation" doesn't work) comes from the contingent fact that (according to most plausible axiologies) impartial goodness is freaking weird. I'm not sure how to make this precise, but it seems to me that an important part of the story is that impartial goodness, unlike my own wellbeing, hinges on heavy-tailed phenomena spread out over different scales - e.g., maybe I'm just barely able to guess the sign of the impact of AMF on population size, but assessing the impacts on impartial goodness would also require me to assess the impacts of population size on economic growth, technological progress, the trajectory of farmed animal populations, risks of human extinction, etc. That is, small indirect net effects of my actions on impartial goodness might blow up due to their effects on much larger known and unknown levers, giving rise to the familiar phenomenon of "crucial considerations." For all I know, in an idealized epistemic state I'd realize that the effects of my actions are dominated by their indirect effects on electron suffering (using this as a token example of "something really weird I haven't considered", not to suggest we ought to in fact take electron suffering seriously) - by contrast, I don't think there could be similar "crucial considerations" for my own well-being. It is not plausible that, say, actually, the effect of my walking into the road on my wellbeing will be dominated by the increased likelihood of seeing a red car; it seems that the "worst" kind of issues I'll encounter are things like "does drinking one can of Coke Zero per day increase or decrease my life expectancy?", which is a challenging but not hopeless problem; it's something I'm uncertain, but not clueless about.
comment by Pablo_Stafforini · 2019-10-08T10:55:34.452Z · score: 5 (3 votes) · EA · GW

Very interesting comment!

To explain, I think that the common sense justification for not wandering blindly into the road simply is that I currently have a preference against being hit by a car.

I don't think this defence works, because some of your current preferences are manifestly about future events. Insisting that all these preferences are ultimately about the most immediate causal antecedent (1) misdescribes our preferences and (2) lacks a sound theoretical justification. You may think that Parfit's arguments against S provide such a justification, but this isn't so. One can accept Parfit's criticism and reject the view that what is rational for an agent is to maximize their lifetime wellbeing, accepting instead a view on which it is rational for the agent to satisfy their present desires (which, incidentally, is not Parfit's view). This in no way rules out the possibility that some of these present desires are aimed at future events. So the possibility that you may be clueless about which course of action satisfies those future-oriented desires remains.

comment by Max_Daniel · 2019-10-08T22:35:56.691Z · score: 1 (1 votes) · EA · GW

Thank you for raising this, I think I was too quick here in at least implicitly suggesting that this defence would work in all cases. I definitely agree with you that we have some desires that are about the future, and that it would misdescribe our desires to conceive all of them to be about present causal antecedents.

I think a more modest claim I might be able to defend would be something like:

The justification of everyday actions does not require an appeal to preferences with the property that, epistemically, we ought to be clueless about their content.

For example, consider the action of not wandering blindly into the road. I concede that some ways of justifying this action may involve preferences about whose contents we ought to be clueless - perhaps the preference to still be alive in 40 years is such a preference (though I don't think this is obvious, cf. "dodge the bullet" above). However, I claim there would also be preferences, sufficient for justification, that don't suffer from this cluelessness problem, even though they may be about the future - perhaps the preference to still be alive tomorrow, or to meet my friend tonight, or to give a lecture next week.

comment by Halstead · 2019-10-07T02:28:55.038Z · score: 3 (2 votes) · EA · GW

On the biting the bullet answer, that doesn't seem plausible to me. The preferences we have are a product of the beliefs we have about what will make our lives better over the long run. My preference not to smoke is entirely a product of the fact that I believe that it will increase my risk of premature death. Per proponents of cluelessness, I could argue "maybe it will make me look cool to smoke, and that will increase my chances of getting a desirable partner" or something like that. In that sense the sign of the effect of smoking on my own interests is not certain. Nevertheless, I think it is irrational to smoke. I don't think a Parfitian understanding of identity would help here, because then my refusal to smoke would be altruistic - I would be helping out my future self.

The dodge the bullet answer is more plausible, and I may follow up with more later.

comment by Max_Daniel · 2019-10-07T11:07:06.515Z · score: 1 (1 votes) · EA · GW
The preferences we have are a product of the beliefs we have about what will make our lives better over the long run. My preference not to smoke is entirely a product of the fact that I believe that it will increase my risk of premature death.

I think this is precisely what I'm inclined to dispute. I think I simply have a preference against premature death, and that this preference doesn't rest on any belief about my long-run wellbeing. I think my long-run wellbeing is way too weird (in the sense that I'm doing things like hyperbolic discounting anyway) and uncertain to ground such preferences.

Nevertheless, I think it is irrational to smoke.

Maybe this points to a crux here: I think on sufficiently demanding notions of rationality, I'd agree with you that considerations analogous to cluelessness threaten the claim that smoking is irrational. My impression is that perhaps the key difference between our views is that I'm less troubled by this.

I don't think a Parfitian understanding of identity would help here

I'm inclined to agree. Just to clarify though, I wasn't referring to Parfit's claims about identity, which if I remember correctly are in the second or third part of Reasons and Persons. I was referring to the first part, where he among other things discusses what he calls the "self-interest theory S" (or something like this).

comment by Max_Daniel · 2019-10-06T16:21:03.809Z · score: 4 (3 votes) · EA · GW
Cluelessness seems to imply that altruists should be indifferent between all possible actions that they can take. Is this implication of the view embraced?

As I say in another comment, I think that a few effects - such as reducing the risk of human extinction - can be rescued from cluelessness. Therefore, I'm not committed to being indifferent between literally all actions.

I do, however, think that consequentialism provides a reason for only very few actions. In particular, I do not think there is a valid argument for donating to AMF instead of the Make-a-Wish Foundation based on consequentialism alone.

This is actually one example of where I believe cluelessness has practical import. Here is a related thing I wrote a few months ago in another discussion:

"Another not super well-formed claim:
- Donating 10% of one's income to GiveWell charities, prioritizing reducing chicken consumption over reducing beef consumption, and similar 'individual' actions by EAs that at first glance seem optimized for effectiveness are valuable almost entirely for their 'symbolic' and indirect benefits such as signalling and maintaining community norms.
- Therefore, they are analogous to things like: environmentalists refusing to fly or reducing the waste produced by their household; activists participating in a protest; party members attending weekly meetings of their party; religious people donating money for missionary purposes or building temples.
- Rash criticism of such actions in other communities that appeals to their direct short-term consequences is generally unjustified, and based on a misunderstanding of the role of such actions both within EA and in other communities. If we wanted to assess the 'effectiveness' of these other movements, the crucial question to ask (ignoring higher-level questions such as cause prioritization) about, say, an environmentalist insisting on always switching off the lights when they leave a room, would not be how much CO2 emissions are avoided; instead, the relevant questions would be things like: How does promoting a norm of switching off lights affect that community's ability to attract followers and other resources? How does promoting a norm of switching off lights affect that community's actions in high-stakes situations, in particular when there is strategic interdependence -- for example, what does it imply about the psychology and ability to make credible commitments of a Green party leader negotiating a coalition government?
- It is not at all obvious that promoting norms that are ostensibly about maximizing the effectiveness of all individual 'altruistic' decisions is an optimal or even net positive choice for maximizing a community's total impact. (Both because of and independently of cluelessness.) I think there are relatively good reasons to believe that several EA norms of that kind actually have been impact-increasing innovations, but this is a claim about a messy empirical question, not a tautology."

comment by Stefan_Schubert · 2019-10-06T17:31:23.664Z · score: 3 (2 votes) · EA · GW

Thanks, Max, this is interesting.

Donating 10% of one's income to GiveWell charities, prioritizing reducing chicken consumption over reducing beef consumption, and similar 'individual' actions by EAs that at first glance seem optimized for effectiveness are valuable almost entirely for their 'symbolic' and indirect benefits such as signalling and maintaining community norms.

Suppose that it is true that the value of those actions comes almost entirely from their symbolic benefits. If so, then a further question is whether those symbolic benefits are dependent on the belief that that is not the case; i.e. the belief that the value of those actions, on the contrary, largely comes from their direct and non-symbolic effects. (Analogously to how indirect benefits of a religion on well-being or community cohesion may be dependent on the false belief that the religion's metaphysical claims are true.) It could be that making it widely known that the value of those actions comes almost entirely from their symbolic benefits would undermine those benefits (maybe even turn them to harms; e.g. because knowingly doing something with low direct benefits for symbolic reasons would be seen as hypocritical). Whether that's the case depends on the social context and doesn't seem straightforward to determine.

comment by Max_Daniel · 2019-10-06T18:34:11.312Z · score: 3 (2 votes) · EA · GW

I agree this is a non-obvious question. There is a good reason why consequentialists at least since Sidgwick have asked to what extent the correct moral theory might require keeping its own principles secret.

comment by Stefan_Schubert · 2019-10-06T21:47:15.563Z · score: 3 (2 votes) · EA · GW

Yes, though it seems to me that EAs largely think one shouldn't (cf. that Integrity is one of "the guiding principles of effective altruism" as understood by a number of organisations). (Not that you would suggest otherwise.)

A tangentially related comment. What symbolic benefits or harms our actions have will be dependent on our norms, and these norms will to at least some extent be malleable. Jason Brennan has argued that we should judge such symbolic norms by their consequences.

If you’ve read Markets without Limits or “Markets without Symbolic Limits,” you’ve seen one of the moves I end up making here. We imbue the right to vote with all sorts of symbolic value–we treat it is a metaphorical badge of equality and full membership. But we don’t have to do that. The rest of you could and should think of political power the way I do, that having the right to vote has no more inherent special status than a plumbing license. Further, I argue that we can judge semiotic/symbolic norms by their consequences. In this case, if it turns out that epistocracy produces more substantively just results than democracy, this would mean we’re obligated to change the semiotics we attach to the right to vote, not that we’re obligated to stick with democracy because the right to vote has special meaning. I push hard on the claim that it’s probably just a contingent social construction that we imbue the right to vote with symbolic value. At least, no one has successfully shown otherwise.

So, we shouldn't just take symbolic benefits into account when we prioritise what action to take, but we should also consider whether to change our symbolic norms, so that the symbolic benefits (which are a consequence of those norms) change. Brennan argues that if epistocracy produces greater direct benefits than democracy, then we should change our symbolic norms so that democracy doesn't yield greater symbolic benefits than epistocracy. Similarly, one could argue that if some effective altruist intervention produces greater direct benefits than some other effective altruist intervention (say diet change), then we should change our symbolic norms so that the latter doesn't yield greater symbolic benefits than the former.

[Edit: I realise now that the last paragraph in your above comment touches on these issues.]

comment by Max_Daniel · 2019-10-06T16:15:17.624Z · score: 4 (2 votes) · EA · GW
I don't see how you could make a general argument for cluelessness with respect to all decisions made by the community.

I agree. More specifically, I think the argument for cluelessness is defeatable, and tentatively think that we know of defeaters in some cases. Concretely, I think that we are justified in believing in the positive expected value of (i) avoiding human extinction and (ii) acquiring resources for longtermist goals. (Though I do think that for none of these it is obvious that their expected value is positive, and that considering either to be obvious would be a serious epistemic error.)

[...] I don't see how this could ever generalise to an argument that all of our decisions are clueless, since the level of uncertainty will always be almost entirely dependent on the facts about the particular case. Why would uncertainty about the effects of AMF have any bearing on uncertainty about the effects of MIRI or the Clean Air Task Force?

I think you overstate your case here. I agree in principle that "the level of uncertainty will always be almost entirely dependent on the facts about the particular case," and so that whether we are clueless about any particular decision is a contingent question. However, I think that inspecting the arguments for cluelessness about, say, the effects of donations to AMF does suggest that cluelessness will be pervasive, for reasons we are in principle able to isolate. To name just one example, many actions will have a small but in expectation non-zero, highly uncertain effect on the pace of technological growth; this in turn will have an in expectation non-zero, highly uncertain net effect on the risk of human extinction, which in turn ... - I believe this line of reasoning alone could be fleshed out into a decisive argument for cluelessness about a wide range of decisions.

comment by Halstead · 2019-10-07T02:18:06.278Z · score: 3 (2 votes) · EA · GW

On the latter, yes that is a good point - there are general features at play here, so I retract my previous comment. However, it still seems true that your rational credal state will always depend to a very significant extent on the particular facts.

I find the use of the long-termist point of view a bit weird as applied to the AMF example. AMF is not usually justified from a long-termist point of view, so it is not really surprising that its benefits seem less obvious when you consider it from that point of view.


comment by Max_Daniel · 2019-10-07T12:02:33.681Z · score: 1 (1 votes) · EA · GW
AMF is not usually justified from a long-termist point of view, so it is not really surprising that its benefits seem less obvious when you consider it from that point of view.

I agree in principle. However, there are a few other reasons why I believe making this point is worthwhile:

  • GiveWell has in the past advanced an optimistic view about the long-term effects of economic development.
  • Anecdotally, I know many EAs who both endorse long-termism and donate to AMF. In fact, my guess is that a majority of long-termist EAs donate to organizations that have been selected for their short-term benefits. As I say in another comment, I'm not sure this is a mistake because 'symbolic' considerations may outweigh attempts to directly maximize the impact of one's donations. However, it at least suggests that a conversation about the long-termist benefits of organizations like AMF is relevant for many people.
  • More broadly, at the level of organizations and norms, various actors within EA seem to endorse the conjunction of longtermism and recommending donations to AMF over donations to the Make-A-Wish foundation. It's unclear whether this is some kind of political compromise, a marketing tool, or done because of a sincere belief that they are compatible.
  • The point might serve as guidance for developing the ethical and epistemological foundations of EA. To explain, we might simply be unwilling to give up our intuitive commitments and insist that a satisfying ethical and epistemological basis would make longtermism and "AMF over Make-A-Wish" compatible. This would then be one criterion to reject proposed ethical or epistemological theories.
comment by Max_Daniel · 2019-10-06T14:16:27.920Z · score: 1 (1 votes) · EA · GW

Thanks for this! - My tentative view is that cluelessness is an important issue with practical implications, and so I'm particularly interested in thoughtful arguments for opposing views.

I'll post some reactions in separate comments to facilitate discussion.

Knightian uncertainty seems to me never rational. There are strong arguments that credence functions should be sharp. [...]

I agree that there are strong arguments that credence functions should be sharp. So I don't think the case for cluelessness is a slam dunk. (Granting that, roughly speaking, considering cluelessness to be an interesting problem commits one to a view using non-sharp credence functions. I'm not in fact sure if one is thus committed.) It just seems to me that the arguments for taking cluelessness seriously as a problem are stronger. Still, I'm curious what you think the best arguments for credence functions being sharp are, or where I can read about them.

comment by Milan_Griffes · 2019-09-25T19:50:30.213Z · score: 5 (3 votes) · EA · GW

Thanks, looking forward to reading this. Here's an archived version.

Cluelessness deserves more attention in EA, especially from the longtermist contingent.

comment by aarongertler · 2019-10-03T21:08:40.049Z · score: 4 (2 votes) · EA · GW

Haven't read the full paper, but I'm recording some brief thoughts on cluelessness here for my own records. In a clueless world, the value of having an active EA-style movement that is at least partly longtermist may come from:

  • Having a group of people watching the world carefully for potential opportunities to reliably improve the long-term future, so that they can alert the wider world when something comes up that might not be seen by people interested in world events for non-longtermist reasons
  • Having a group of people developing relevant skills (which seems a bit different than "saving resources") in case such an opportunity appears, so that action can be taken more swiftly
  • Offering people with a common interest in longtermism a reason to spend time with each other and hang together; perhaps our research isn't particularly useful in a clueless world, but even people skeptical about their ability to have an impact now might find value in other activities (whether that's "writing fiction about existential risks" or "spending research effort on short-term causes as a way of having more certain impact, in case we don't become more clueful within our own lifetimes")

I'm sure these ideas aren't original, and (as with anything I write), I'd be glad to see links to places they've been expressed in a better way.