What Might Have Been

… in which I explain how Daniel gets it almost completely right.


Of course I couldn’t resist asking Midjourney AI to portray Daniel and Nate as Greek philosophers

Daniel Schmachtenberger’s latest video is an epic three-hour chat with ecological economist Nate Hagens, ostensibly about AI, but actually mostly about how AI could exacerbate the metacrisis (aka the polycrisis) — the global economic and ecological collapse that now seems to be entering its final, most furious and disastrous stage.

The gist of the earlier part of the discussion harks back to their previous conversations about the tragic “Moloch” nature of much human activity — that eight billion people pursuing their narrowly-focused, short-term goals are inevitably, collectively, going to produce outcomes that are in no one’s interest, and in fact deleterious to our collective interest. And that the intelligence that enables the achievement of those narrow, short-term goals is terribly out of sync with the wisdom (as Daniel uses that term) that anticipates and sees holistic, collective, long-term objectives and values, and strives to rein in and balance all those narrow, short-term goals to ensure that those larger objectives and values are met.

That definition of wisdom is what Gaia Theory asserts is unique to the collective organism that is all-life-on-earth. And that wisdom is, according to pessimists like John Gray, inherently lacking in any single species like humans who are, by their very nature and conditioning, “preoccupied with the needs of the moment” (ie the pursuit of narrow, short-term goals).

The discussion in this video is incomplete — a continuation is promised — but Daniel wraps up the conversation with a rather remarkable synthesis of what seems to be his entire take on where civilization stands now. The 83-page transcript of the entire video is available as a PDF (thank you Nate!), but here are a few extracts from Daniel’s conclusion (emphases mine):

Human intelligence, unbound by wisdom, I think it is fair to say, is the cause of the metacrisis… That intelligence has created all the technologies, the industrial tech, the agricultural tech, the digital tech, the nuclear weapons, the energy harvesting, all of it.

It made the system of capitalism, it made the system of communism. Now, that system of intelligence takes corporeal capacities, things that a body could do, and extends and externalizes them the way that a fist can get extended through a hammer, or a grip can get extended through a plier, or an eye can get extended through a microscope or a telescope, or our own metabolism can get extended through an internal combustion engine… extra-corporeally, … not bound by wisdom, and driven by international, multipolar military and other traps, and markets, and narrow short-term goals at the expense of long-term wide values…

AI is not a risk within the metacrisis. It is an accelerant to all of them, employed by the narrow-focus choice-making architectures that are currently driving the metacrisis… And if we make an AI that is fully autonomous, we can’t pull the plug…

If I have something that can optimize so powerfully, what is the right thing to guide that?… [It] is not intelligence. It is wisdom… AI superintelligence shows us just how fucking dangerous narrow optimization is…

If you wanted to make a superintelligence that was aligned with the thriving of all life in perpetuity, the group that was building it would have to have the goal of the thriving of all life in perpetuity, which is not the interest of one nation state relative to others and is not the interest of near-term market dynamics or election dynamics or quarterly profits or a finite set of metrics…

If you have a group that has a goal narrower than the thriving of all life in perpetuity and it is developing increasingly general AIs that will be in service of those narrower goals, they will kill the thriving of all life in perpetuity…

If we look at the multipolar traps and the competition dynamics, if we look at who has the resources to build things at scale, if we look at the speed of those curves, it doesn’t look good… Something has to happen that we are not currently obviously on course for, but if enough people, if some people can, stepping back, be able to see, “Oh, the path that we are pursuing that we feel obligated to pursue, our own opportunity focus relative to risk focus, is actually mistaken”… [But] if we do not get the ‘restraint wisdom’ to stop the max race, then yes, these will be the last chapters of humanity. And so then the task becomes How do we do that?…

I want to share something that I think will be helpful in thinking about the wisdom/intelligence relationship, which is not saying how we enact it. The enactment thing is a real tricky thing, but this is just on what we need to enact. If people have not watched the conversations that David Bohm and Krishnamurti had together back in the day, I would recommend them…

What Bohm said is the underlying cause of the problem is a consciousness that perceives parts rather than perceives wholes or the nature of wholeness. And because it perceives parts, it can think about something as being separate from others. So it can think about benefiting something separate from others, and either it can then care about some parts more than others so it’s okay harming the other things (or it just doesn’t even realize it is). … And so I can benefit myself at the expense of somebody else. I can benefit my in-group at the expense of an out-group. I can benefit my species at the expense of nature. I can benefit my current at the expense of my future. I can benefit these metrics at the expense of other metrics we don’t know about. And all of the problems come from that… [Whereas if] we were perceiving the field of wholeness itself and our goals were coming from there, and then our goal achieving was in service of goals that came from there, that’s what ‘wisdom binding intelligence’ would mean, which is the perception of and the identification with wholeness…

Iain McGilchrist… in The Master and His Emissary, said… there’s a capacity in humans that needs to be the master and another capacity that needs to be the emissary, meaning in service of, and also bound by [the master]… The thing that needs to be the master is that which perceives, not mediated by word symbols, language models, perceives in an immediate way the field of inseparable wholeness

If you look at all the problems in the world and the global metacrisis and the impending catastrophes being the result of the ‘emissary’ intelligence function unbound by the ‘master’ wisdom function, then you look at AI [as] taking that part of us already not bound by wisdom and putting it on a completely unbound, recursive exponential curve…

[That means we require] a restructuring of our institutions, our political economies, our civilizational structure, such that the goals that arise from wisdom are what the [intelligence’s] goal achievement is oriented towards. That is the next phase of human history, if there is to be a next phase of human history…

Yet if people are working to make change but they are not actually connected to the kind of wholeness that they need to be in service to, and they continue to have what seem like good goals, but they’re narrow — “We need to get carbon down”, “We need to get the rights of these people up”, “We need to protect democracy”, “We need to get our side elected because the other side is crazy”, “We need to develop the AI to solve this problem” [etc] — anything less than the connectedness with wholeness, anything less, both at the level of care and at the level of calculus, [will not be enough].

Even though you can’t [actually do this], you [need to be] oriented to try with the humility that knows you’ll never do it properly. The humility that knows that you’ll never do it properly is what keeps you from being dangerous from hubris. But the part that really, really wants to try is what has you make progress in the direction of the service of the whole.

As I think about this, I realize I am caught between Daniel’s cautious optimism (“It may not be possible, but we have to try.”) and John Gray’s pessimism (“Homo rapiens is only one of very many species, and not obviously worth preserving. Later or sooner, it will become extinct. When it is gone Earth will recover.”).

I find John’s pessimism too dour, too confident, too stoic and fervid, almost to the point of religiosity. But I find Daniel’s hopefulness both charmingly naive and slightly bewildering. I think his diagnosis of the human condition and our current situation is spot on — the most articulate summation of the state of the world I have seen or heard anywhere. And I think his prescription — if it were possible — is also valid.

Unfortunately, my understanding of complexity makes me believe it is utterly impossible. “A restructuring of our institutions, our political economies, our civilizational structure”, realigned in service of “the thriving of all life in perpetuity”? Really? What precedent exists for such a sudden and radical transformation, an utter change to the way we eight billion apes do everything in our lives? Daniel’s belief that this is even remotely possible is unfathomable to me. “The enactment thing is a real tricky thing… It doesn’t look good” indeed.

So my take on this fascinating conversation is that, yes, this is what would have to happen to pull us, last minute, out of civilizational collapse and the sixth great extinction. Would have to happen, not will have to happen. What his analysis tells me is not what is possible, but rather, more humbly and more wistfully, what might have been. And I’m incredibly grateful for that.

It wouldn’t have taken much, in fact, for a world like the one that Daniel envisions to have emerged instead of the one we actually live in. Perhaps, as Iain McGilchrist seems to suggest, if humans still had separate bicameral brains, instead of the imaginative, cross-talking, conceptualizing ones that (probably as a spandrel) emerged after we separated from our bonobo and chimp cousins, we would still see and accept the world as holistically as Daniel says we must start to do again. Perhaps, had the cosmic storms of several million years ago not forced us from our natural arboreal tropical homes and required us to learn a radically different and unnatural way to live in hostile environments, we would still be living, in modest numbers, in harmony with the rest of life on earth.

As I’m sure you know by now, I don’t blame our species for our inadvertent folly. And thinking about how it might have been otherwise, how life might have unfolded so differently from the way it so recently has done, as our bewildered rogue species did the only thing it could have done, is, for me, inspiring, fascinating, and even (my new favourite word) solacing. 

Trying to make sense of what has happened, where we stand now, and what our future holds, is most likely a fool’s errand. But trying to appreciate, blamelessly, what might have led us to our current and intractable predicament, and how it might have been otherwise, can, I think, enable us to see the world, and our situation, with compassion, equanimity, tolerance, appreciation, and, perhaps, even joy.


2 Responses to What Might Have Been

  1. Benn says:

    Last time I listened to Mr S’s talks, he suggested that we might be like the embryo in an egg, which consumes all its resources in order to hatch into a bird: an optimistic metaphor for civilisation that gives him hope for the future. I stopped listening to him. I think he sounds clever, but is he?

    We also need to avoid thinking that our culture’s actions are representative of our species. Lots of other cultures have lived well for thousands of years and not caused the harm we have.

  2. Paul Reid-Bowen says:

    … in which I almost completely agree with Dave.

    Not sure there is too much value in simply agreeing, but I watched the same conversation and came to much the same conclusion. Daniel’s grasp of the systemic logic(s) and interconnections of our collapse seems as good as that of anyone who has thought sufficiently hard about the various elements of the poly-crisis. His conclusions about AI, the alignment problem, and its nature as an uber-accelerant and amplifier of whatever Molochian and all-too-human goals we plug into it, seem similarly on target. However, that he can then pull forth some optimism – or an ‘at least we have to try’ – from his diagnosis of an intractable, planet-wide, stage-four meta-cancer is rather bewildering.

    Currently working my way through Andrew Boyd’s I Want a Better Catastrophe, which I would strongly recommend (and which is certainly the book that I wish I had written). It even comes with a lovely fold-out flowchart, duplicated on the website here: https://bettercatastrophe.com/flowchart
