The Real Problems With Artificial Intelligence


A collage of some of my favourite Midjourney AI images, none of them from my prompts; to view at a larger size, right-click and open in a new window/tab, then press CMD and + to enlarge (CTRL and + in Windows).

My thoughts on AI and on Artificial General Intelligence (AGI) are evolving as I use publicly-available AI apps more and more, and see how they are being employed.

My sense at this point is that AI/AGI is neither a new problem nor a solution to anything. The actual problem is humans’ propensity to misuse technologies, usually with the best of intentions. AI/AGI is just another tool that neoliberals can use to advance their vision, militarists can use to advance their vision, and technotopians can use to advance their vision. None of that is of any value in dealing with the polycrisis predicament at hand, which, like all predicaments, is insoluble. And in playing with these new toys we are likely to make a lot of messes and cause a lot of damage, as we have done with essentially every new technology we have ever invented.

Trying to ban or ‘freeze’ development of AI/AGI is, I think, akin to, and as futile as, banning or ‘freezing’ the development of arrowheads, or cars, or the printing press, or letter openers, or any other kind of technology throughout our history on the basis that it could easily be misappropriated (accidentally or deliberately) to destructive ends. As John Gray put it in Straw Dogs (before AI was a thing):

If anything about the present century is certain, it is that the power conferred on ‘humanity’ by new technologies will be used to commit atrocious crimes against it. If it becomes possible to clone human beings, soldiers will be bred in whom normal human emotions are stunted or absent. Genetic engineering may enable centuries-old diseases to be eradicated. At the same time, it is likely to be the technology of choice in future genocides. Those who ignore the destructive potential of new technologies can only do so because they ignore history. Pogroms are as old as Christendom; but without railways, the telegraph and poison gas there could have been no Holocaust. There have always been tyrannies, but without modern means of transport and communication, Stalin and Mao could not have built their gulags. Humanity’s worst crimes were made possible only by modern technology.

My skepticism about the use of AI/AGI as a vehicle for problem-solving is that AI/AGI is inherently devoid of the capacity for imagination. Its most interesting ‘work’ happens when it uses its clever data crunching capabilities to barf out random concatenations, like ChatGPT’s poetry or the sometimes-stunning images that come from Midjourney’s misunderstandings of (mostly badly-worded) prompts. The genius of randomness. Its most compelling outputs are largely accidental.

None of what it produces is really art, but some of it could well inspire art, by provoking our rusty human imaginations to think in ways or about things we hadn’t thought about before. But that’s mostly dumb luck when it happens. AI/AGI will never be imaginative because it is intrinsically incapable of metaphorical, lateral, inductive or abductive thinking — it can never acquire the vast rich human, uncategorizable slurry of content-in-context that would be needed to enable such thinking, and in any case these ways of thinking are non-analytical processes that are not strictly intellectual and cannot be programmed. Only in human-written sci-fi will AI/AGI be able to look at the pigment-free colouring in a butterfly’s wing and ‘independently’ imagine how that ‘technology’ might be commercially applied to aeronautical coatings or noncounterfeitable banknotes.

Living in an age of staggering imaginative poverty at exactly the time when imagination is most desperately needed to help us cope with the polycrisis, we are inevitably going to be disappointed with the inherently stale, derivative, clichéd and prevailing-narrative-reinforcing ‘intelligence’ that AI/AGI comes up with.

Since AI/AGI can only ever do what it’s told to do (by humans or by other AI/AGI bots), its use for anything other than mundane commercial and military applications (and misapplications) is inevitably going to be limited. It might precipitate the end of the world (most likely by military or geoengineering accident), but it will never produce anything genuinely novel. That is the difference between creativity and imagination.

I think our impoverished imaginations are mostly a result of lack of practice. I used to invent games, conjure up imaginary friends, daydream about going into other dimensions etc. All of that is done for us now, constraining our imaginations to what Hollywood and the gaming companies can manage with CGI, and the hackneyed, trite, warmed-over myths that they reinforce.

When I look at the Midjourney ‘showcase’ of most-upvoted images, it is kind of depressing. Anything in the world that you can imagine could theoretically be constructed and displayed from the prompts, but 99% of what is presented looks like posters or cels from Hollywood cartoons, comic books, violent action films, sci-fi and horror movies, or disturbing incel fantasies. Part of that is that the Midjourney AI can’t imagine, but most of it is due to the fact that the prompters can’t imagine either.

So, yes, I’m worried about how humans will continue to abuse new technologies for nefarious purposes, such as producing fake videos indistinguishable from real recordings, to the point we will have to be skeptical of everything we see on our screens (if we aren’t already). And I know this is a slippery “guns don’t kill people…” argument (though some technologies like weapons are basically Moloch Tragedy technologies, and the less use we make of them the better).

But I’m far more concerned about how, for example, we’re using new kinds of underwater explosives, guided missiles and drones to ‘anonymously’ assassinate people we don’t like, and to blow up pipelines, dams and potentially nuclear power plants, creating political havoc, social and ecological disaster, and accelerating the risk of nuclear war.

In the meantime, AI has its uses, and I look forward to seeing continuous improvements in its very useful capacities for information-gathering and synthesis, and for increasingly high-quality image production. For example, I now have ChatGPT installed on my Google search page, and its responses to my searches, which appear beside Google’s, are so superior to Google’s that I only bother to look at the Google results now when I’m asking about something that happened recently (ChatGPT’s knowledgebase only runs up to September 2021). It’s that much better.

A caveat, though: I must admit that it’s required me to up my game in learning how to word and phrase my chat/search queries, without which it’s often just a GIGO (garbage in, garbage out) exercise. It took me years to learn how to use the Google search bar effectively. And now I’m back to square one with the chat box. It’s like a conversation with someone you don’t know — ChatGPT and I learn from and teach each other how we communicate and understand, and only when we’ve got that understanding down can we start to craft sentences we know will be understood by the other.

And these days, instead of using Creative Commons licensed images (which have been absolute lifesavers for unpaid writers like me for the last two decades) on my blog posts, I’m now using mostly Midjourney-produced images. No more worries about copyright, and I have far more control over the types of images I can produce. And it’s a lot more fun.

Maybe I’ll be more concerned about the evolution of artificial intelligence if and when it becomes, um… intelligent. Y’know, like, not just processing data (often suspect data at that) really quickly, but actually coming up with something useful for addressing and coping with some of the challenges of our time. So many of our recent ‘smart’ technologies are focused on creating new (largely artificial) ‘needs’ (and doing so strictly to make a profit). It would be nice to have some that actually addressed some real existing needs instead.

But that can’t and won’t happen until we shake the false computer-as-brain metaphor and start to understand how nature and its creations actually adapt to changes in the environment in ways that enable them to survive and thrive. That entails far, far more than mere ‘intelligence’. Invented technologies let you do the same old things faster/cheaper/better etc. But evolution lets you do new things.

It’s taken the natural world several billion years to evolve that astonishing capacity. Small wonder our bewildered, bumbling species is still at the starting gate. Still playing with fire, and still not cleaning up after our messes. And still, and more than ever, unable to imagine.


7 Responses to The Real Problems With Artificial Intelligence

  1. Philip says:

    In The Soul of the Marionette, John Gray describes James Lovelock’s possible future where AI becomes a step in evolution and could influence Gaia’s systems for the better.

    Gray….
    But if humans are creating the conditions in which they cease to be the planet’s dominant life-form, they may also be seeding the planet with their successors. Lovelock cites artificial intelligence and electronic life-forms as examples of human inventions that can carry on where humans leave off. Developing first as human tools, entering into symbiosis with human beings and then evolving separately to them, electronic life could develop that was more suited to thriving in the hot world human beings have created.
    Lovelock….
    The new life, if its neurons operated at electronic speed and included intelligent software, could live one million times faster than we do and as a result its time scale would be increased as much as a millionfold. Time enough to evolve and diversify in the same way carbon life has done. It might extend the life of Gaia further, long enough even to enable the next Gaian dynasty, whatever that might be.
    Gray….
    Interwoven with the life cycle of the planet, machines have created a virtual world in which natural selection is at work at far greater speed than among the planet’s biological organisms. With the rise of artificial forms of life, the next phase of evolution may have already begun.

    Still, AI requires electricity, and this is a switch that humans could use for better or worse, depending on the evolution of this “tool”. Because ideas of salvation in the collective unconscious now reside in technology, we will continue to turn these machines on in the hope that we can escape the burden of choice etc. The machines can do the thinking and work for us while we misuse our imagination. As technologies grow and our words are added to a “brave new world” in our general domestication, we will become more confused by questions such as “what are people for?”, if we can even remember to ask them. In the scheme of things we aren’t for anything, and we lack the imagination to heal ourselves and the earth, for we are driven by a sense of lack itself. Jellyfish in the tide, dust in the wind.

  2. Dave Pollard says:

    Hi Philip: I run hot and cold on John Gray, especially when he gets on to moral topics, where he seems unable to shake his Catholic convictions. AI requires a lot more than just electricity — it needs metals and trace elements among other things, none of which will be available when we’re gone. As Indi points out, we’ve had AI for centuries in the form of The Corporation, but no forms of AI can live without us “to pull the strings” or at least to keep the string-pullers operating. When we’re gone, the most notable relics we leave behind that are as new as we are will be the nuclear reactors and chemical alleys, primed to release their toxins when the coolers fail and the concrete crumbles.

    I don’t think John really appreciates what natural selection fully entails, since it’s far more than brute competition producing winners and losers. AI is a non-starter in natural selection because by definition it doesn’t ‘fit’ with the rest of life on earth.

    And James Lovelock, for all his wonderful revelations, was also a big fan of nuclear energy.

    But then we all have our blind spots, I guess. I’m just beginning to realize mine.

  3. Philip says:

    It is easy to imagine a future where various classes of human survivors still inhabit a hot Earth in greatly diminished numbers. Some will be the descendants of those living today who already are just surviving. Others may occupy societies with advanced nuclear, robotic and AI technologies. 2100 is still 73 years away, enough time for much advancement controlled by the powerful. Sure, things will be messy, but the will to survive in humans will manifest in various directions. I can’t see why AI, combined with robotics and logical resource-extraction technology, perhaps initially programmed by humans, couldn’t figure out the part of the Drake equation about civilizations surviving the discovery of radioactive isotopes.

    It is easy to imagine a future with or without increased types of nuclear power, regardless of being a fan or not. There are many possibilities. Adaptation is an instinct. I’m not much of a fan of the internet, let alone AI, and am not in a hurry to install chatGP or mid journey or whatever, but that doesn’t mean they are going away or are not useful.

    I brought up John Gray’s more recent thoughts on AI as you mentioned his earlier take on human technology in general. I’d strongly disagree with you that he does not appreciate what natural selection entails as there are many references to evolution in his writings. Personally I observe the competition on a daily basis in nature and find it to be more brutal than not. That is why we are living in a sixth extinction. AI may be a non-starter in natural selection but it fits well with artificial selection which would still result in evolution and a future few could imagine.

  4. Philip says:

    Forgive my math and spelling: 77 years, ChatGPT.
    On the news today: AI making child abuse pornography. Natural selection has led to an ape using machines to satisfy desires in some that most find abhorrent. Dave, I enjoyed reading “The Other Side of Eden”, a book you like. We are not in a natural state of human existence, it seems. Recently read “Being a Human”. The Enlightenment had a focus on science and nature, but one of the two has fallen away in this grand endeavour. Ignorance is our natural state, blind spots everywhere. At least without modern science and technology, humans are able to find more connection and understanding in their ignorance.

  5. Dave Pollard says:

    Thanks Philip: John does claim that ‘biophilia’ is the natural tendency of humans, but he seems to have a very dour take on what he sees as our ‘natural’ propensity for violence. Whereas I see our displays of violence as aberrant behaviour stemming from the incredible stress and disconnection of our current situation and way of living, and hence I think I am more forgiving and less prone to believe we are inherently destructive or violent. Robert Sapolsky’s thinking seems closest to my own on this subject.

  6. Philip says:

    https://www.youtube.com/watch?v=1m1jWjGXs6I Kingsnorth on AI

    something is using us to create itself

  7. Paul Reid-Bowen says:

    Strange to see Kingsnorth’s ideas converging with those of Nick Land: a machinic intelligence or telos of the future creating itself through us in the present. I suspect Kingsnorth would now see this as the Devil, while Land would be more obscure in his neo-reactionary and dark enlightenment readings of this inhuman inevitability (that said, probably not that different in tone, just very different starting points).
