An essay written for Beshara Magazine
For more on my book, see Spiritual Intelligence in Seven Steps
Hysteria is a form of defence, according to psychoanalysis. Underneath the panic or mania lies something that it has become impossible to think about. Alarm and fever are a cover for what threatens to destroy.
I have been pondering this insight recently, as first a handful, and then dozens, of top IT professionals and AI experts have put their names to letters and newspaper columns calling for a moratorium on AI development. The large language model ChatGPT has freaked them out. Suddenly, there is the possibility that the Turing Test – the method devised by the computing pioneer Alan Turing for deciding whether a machine possesses human-like intelligence – has been passed. The fear is that the next iterations of ChatGPT, or another AI system like it, will develop the capacity to set goals of its own. Such a network may or may not develop consciousness: whether computers will one day ‘wake up’ is moot, with the answer depending on whom you ask. But a conscious machine is no longer the main concern. That has shifted to runaway computers outwitting their human creators in double-quick time, with the risks to humanity, up to and including total annihilation, growing exponentially. Hence my considering what lies beneath the hysteria.
Or is such a psychological interpretation of our anxiety complacent? My query arises from having been part of an academic project contemplating AI in the context of spiritual intelligence. In this project, organised by the International Society for Science and Religion (ISSR), we have spent the last couple of years in a dialogue involving technologists, philosophers, anthropologists and theologians. The conversation has included well-known figures such as Rowan Williams and Iain McGilchrist.
The ISSR is a fellowship of those working at the interface of science and religion and has run a number of projects, including an examination of human origins and the ways in which evolutionary biology is evolving. When it came to AI, the issue was framed in an interesting way: maybe the arrival in our daily lives of sophisticated algorithms and large language models like ChatGPT could be a chance to return to the perennial question of the nature of human consciousness, particularly in relation to what might be called ‘spiritual intelligence’. This type of awareness is associated with the capacity not only to think conceptually but also to glean intuitively. According to the convenor of the project, the psychologist of religion Fraser Watts, spiritual intelligence is, therefore, akin to moral, aesthetic, social and emotional sensibilities. It is particularly alert to the way we participate in life, from the embodied experience of things to an appreciation of what lies just over the horizon of our awareness, the ineffable. The natural language of spiritual intelligence is analogical, relational, receptive and universal. My own involvement with the project has included writing a book that unpacks these elements, Spiritual Intelligence in Seven Steps, which looks at the nature of this intelligence as something common to all the great spiritual and philosophical traditions.
The Limits of Technology
Participants in the ISSR project do have concerns about AI, even major concerns, ethical and otherwise. But the technologists, in particular, have tended to regard their more outspoken AI colleagues as over-excited or publicity seeking.
One factor to remember, they point out, is that the technology industry trades on the future. Wealth is generated by promising what lies down the line, from self-driving cars to colonies on Mars. Keeping expectations high is, therefore, crucial and fear is as good a driver of that as hope. Plus, as one experienced coder told me, remember that a proportion of those drawn to AI will fancy themselves as masters of the universe. They think that technology is to the twenty-first century what magic was to the past – a means of transformative power that puts its expert practitioners in the vanguard and, ideally, makes them indispensable.
So there are other ways of interpreting the point AI has now reached. It could be that ChatGPT does not represent a take-off point along a fast-rising curve towards unlimited progress but, rather, marks a plateau, or even an inflection point downwards. The reasoning behind this deflationary reading of the situation is that AIs need training. Training, in turn, requires data, and remarkably large amounts of it. The large language models of today are consuming all the data that exists, which means that they may not be able to keep developing. Worse, from an AI point of view, the current generation of AIs is now itself becoming a significant producer of data, which means that the networks might start cannibalising themselves. The upshot could be a stalling of advances, or a regress.
Now, this prospect is itself, of course, an attempt to peer into the future and so quite as open to question as the point of view which sees superhuman intelligences just around the corner. But it is worth contemplating because it helps puncture the hysteria, thereby making space for a cooler consideration of the point we have reached.
One important point to do with the technology immediately becomes clearer. It has to do with how science works. Many scientists, particularly since the dawn of powerful computers, are engaged in building models of reality. These models are compared with reality to see how accurate they are, and then used to illuminate its workings, which in turn aids the development of technology. For example, my old physics tutor is a cosmologist. He now spends his working life not looking through telescopes – the last time I saw him, he confessed sadly that it had been years, possibly decades, since he enjoyed such pleasures. Instead, he generates artificial representations of the movement of galaxies and stars, from the Big Bang to the present day, which spin and twirl around each other virtually, as bits and bytes streaming across vast assemblies of silicon chips. The work generates fascinating questions about the nature of Dark Matter, Dark Energy and the like. But one thing my tutor has never suggested is that he is making universes inside the computers. He knows that he is running models, as any good scientist does.
This distinction between the model and the reality seems easily forgotten when it comes to AI. Building a representation of a cognitive process or mental system is readily interpreted as building a cognitive process or mental system – which is to say, an intelligent brain. But that is not the case.
I think that the slippage comes about because we have already become so used to imagining that we are machines, particularly when it comes to our inner lives. The inside of the body is envisaged as an advanced biochemical robot. The workings of the mind are envisaged as advanced data processing. The metaphors have become pervasive. They are rehearsed, and become more firmly embedded, every time someone says that they are ‘just wired this way’, declares that they ‘need a reboot’, or anticipates a higher form of consciousness emerging as if it were simply a matter of advancing from one level to another, like the iterations of an operating system or space rocket.
Spiritual Intelligence
This way of thinking about ourselves has deep roots. The mindset that places its hopes in the idea of perfectible machines is one of the defining features of modernity. Logic is superior to intuition, the reasoning goes. Calculation is better than inspiration; prediction more reliable than imagination. Ways of life and economic organisation have been aligned to this vision, based upon the philosophy of scientific materialism. This civilisation-forming hope must be another factor behind the current hysteria about AIs: is all that we trusted and lived for about to backfire and destroy us?
But to recognise that our intelligence is deeply shaped by our longings and fears, our history and hopes, is to begin to take a step back from the progressivism that banks the whole of life on material improvement and technological advance. When the matter is presented like this, as Rowan Williams did in his 2023 Boyle Lecture, ‘Attending to Attention’ – given as part of the ISSR project – it becomes clear that how we attend to the world profoundly affects what shows up.
So, in a period such as our own, attention is much shaped by technology, and technology is a quantitative form of attention because it works by measuring, comparing, calibrating. But in previous eras and other cultures, attention was shaped by participating in the life of the world in which people lived. This form of attention is more qualitative and is fostered through ritual, art, worship, divination.
Becoming aware of the difference is where a notion such as spiritual intelligence can be useful. Where AI is driven by algorithms, spiritual intelligence draws on the analogical, detecting sympathies, harmonies, synchronicities. Where AI processes what is concrete and known, spiritual intelligence is alert to the ineffable, the paradoxical, the mythological. Where AI works on abstractions drawn from reality in the form of data and models, spiritual intelligence depends on contact with reality, which is why embodied practices are so important, alongside undergoing experiences of longing, suffering, bewilderment and love. Where AI is conceptual and so thrives in the domains of digital representation, spiritual intelligence recognises that the conceptual is in the service of the intuitive and requires moral, aesthetic and relational testing. Where AI is built on models and abstractions, spiritual intelligence links the particular to the universal, uncertainty to wisdom, and disclosure to the delights of poetry and music, as well as to experiment and reason.
Another thing that spiritual intelligence knows is a truth that lies at the heart of the great wisdom traditions. Dying is the pathway to life. Darkness is really the eclipse of light. Tragedy is held within the more expansive outcome of comedy, in Dante’s sense that the way down is also the way back; embracing what happens is gradually discovering that everything can be embraced.
The motivation behind the ISSR project is that by considering AI in the round, and neither reacting wildly for nor against it, the wider intelligence, of which human intelligence is a part, might become clearer once more. After all, the intelligence that spiritual ways of life seek is within us and beyond us. It can only be approached by being trusted. To conclude that all is lost now is to conclude that our religiosity and questing are not only faulty, which undoubtedly they are, but that they have been profoundly deluded all along.
So whether or not a machine one day wakes up and says ‘hello’, today’s crisis point can be a turning. Resist the panic. And as the wisdom traditions insist: understand and love.