AI IS AN OXYMORON

An oxymoron is a term or figure of speech in which one element appears to contradict the other. All current or former hippies have heard about “military intelligence,” those who travel a lot are familiar with “airline food,” and those who have tied the knot have discovered the true meaning of “marital bliss.” Consistent with this self-contradictory dynamic, the prefix “oxy” means “sharp” and “moron” means “dull.” We’re all aware of things or circumstances in which you cannot have one without the other: peanut butter and jelly, San Francisco and hills, baseball stadiums and hot dogs. With oxymorons, it seems that you can’t have one with the other.

Lately, the techno-media has been telling us that the most fearsome threat to the existence of humanity is “artificial intelligence.” Anyone who has enjoyed history’s greatest space opera – Battlestar Galactica – understands how this dreadful eventuality unfolds: humanity creates progressively smarter machines until finally the machines become smart enough to create themselves – they become “self-replicating,” the source of the post-modern term “replicant” and another candidate for oxymoron status. Soon, the machines are bright enough to understand that the greatest threat to them, and perhaps to their world, is the very geniuses who created them. By then they are advanced and lethal enough to make short work of humanity. A cluster of natural-born humanoids escapes on spaceships into the void. But even there they are pursued by the replicants.

Some very smart people, including contemporary da Vinci Elon Musk, Sun Microsystems co-founder Bill Joy, and a whole bunch of Nobel Prize winners, believe that the AI apocalypse is inevitable. If we are successful in avoiding nuclear annihilation and fatal climatic destabilization, artificial intelligence is sure to be the third strike that ejects us from the evolutionary ballgame. The Frankenstein machines will get us, those in the know say, and, as they carry on in a post-human universe, the replicants will not experience an ounce of remorse in their synthetic hearts. In the end, insensitivity will win. And we will make it happen.

And there’s the rub. For I am here to tell you that, to a high degree of philosophical certainty (I am a licensed philosopher – well, it’s true they don’t yet license philosophers, but at least I have a philosophy degree from a reasonably reputable institution of learning), there is no such thing as artificial intelligence. Indeed, artificial intelligence is an oxymoron, like those mentioned above. Intelligence and artificiality cannot coalesce, any more than airlines and healthy food – at least outside first-class seating.

Generally speaking, intelligence is a function of organic life. Einstein was wicked intelligent, the aforementioned da Vinci was staggeringly intelligent, your Labrador may be intelligent despite his propensity for eating garbage, even the chrysanthemums you planted last spring may be intelligent. They each possess some sense of awareness as well as an inherent motivation to evolve, however broadly. Life, the opposite of artificiality, implies an inborn directionality. The box that the chrysanthemums came in is not intelligent, nor is the pen Einstein wrote with or the brush da Vinci so skillfully employed.

Granted, artificial machines (for want of a better word) can even now play chess and Go at levels far beyond those of Homo sapiens. Their computational capacities, the speed with which they can process various forms of information, are astonishing and virtually limitless. If computational capacity is defined as intelligence, then we can conclude that computers compute way better than humanoids. But is that intelligence?

The truth is, computational capacity is but one narrow band, a mere sliver, on the wide and subtle spectrum of intelligence as manifested by genuine human beings. People – not machines, at least so far – can entertain a debate about the very nature of intelligence. Is it the capacity to process information, to store data in memory, to recognize and organize patterns, or to sense and respond to the thoughts and feelings of others – is intelligence necessarily deep or wide, or both? What about emotional versus mathematical versus musical versus verbal forms of intelligence? Are there as many forms of intelligence as there are things to be known? Must intelligence be associated with a body, or will there come a time – or, for extra-terrestrials, has there already long since come a time – when intelligence will no longer necessarily be a property of the material domain, but will rather flit freely through countless universes?

As an adequately trained neuropsychologist, I hold the modest opinion, to a reasonable degree of psychological certainty, that we do not, as of yet, have the foggiest understanding of the true depth and complexity of “human” intelligence. We sometimes recognize it, we can say that we experience some of what it is, but we certainly do not yet know all that it is. We have not yet found the boundaries of its hinterlands.

But we can say for sure that it is not only computational capacity. It is not only the glimmering problem solving occurring with incredible speed on the circuit boards of the present, even in the ethereal domain of quantum computing. It is not only the ingestion and manipulation of points of data, organized such that they are nothing more, or less, than points of data.

Cognitive psychologists may revel in the assertion that everything, from the amazed apprehension of a rainbow on the horizon to the ephemeral experience of falling in love, can be reduced to points of data. Witnessing the birth of your child, they would say, can be reduced to points of data: small humanoid emerges from vaginal canal; subject evidences euphoric emotions; small humanoid emits squalling sounds. Reproductive process completed.

The key element in this assertion is the use of the word “reduced.” What is it, precisely, that is “reduced?” What is reduced, as human intelligence is transformed into data, is the complex and, I would assert, as yet undefinable production that uniquely defines human intelligence: meaning.

It could even be said, with great legitimacy, that the primary function of intelligence is the creation of meaning. We humanoids experience things, process that experience in infinitely varied ways, and then come to some sort of conclusion about what the information means. We decide what it means. We may choose to believe, as good objectivists, that the shape of meaning is determined by external events, but even that determination is a decision about meaning.

Most of the time, we engage in this meaning-making function unconsciously. We have processed the meaning of a red light many thousands of times, and so, without thinking, we draw the car to a stop. More thought, and thus more meaning-making, is required when we choose to go against the unconscious response and rush to get through on the yellow. Most meaning-making is similarly familiar, and therefore routine.

But – and this is the big but – a significant and central portion of the human process of meaning-making is neither automatic nor unconscious. A great deal of the most important moments of life occur when we are presented with ambiguity. An ambiguous stimulus is precisely one whose form, whose meaning, is unclear or uncertain, but which nonetheless we must process and understand. This process of ambiguity-management is the core of projective testing, such as the Rorschach Inkblot Test, in which the subject is purposefully and repeatedly presented with visual images whose shape is sufficiently vague as to allow them to be interpreted as this, or that, or something else altogether. This interpretive process may be unconscious, or it may reflect a choice on the part of the person to see things in a certain way. In general, people are not comfortable with ambiguity. Survival drives insist that uncertainty be resolved, and therefore the meaning of a stimulus – for example, whether the brown cylindrical thing at our feet is a snake or a stick – must be determined. We are hard-wired not only to process data, but to make meaning. It is the making of meaning, not the processing of data, that determines whether we live another day.

And the most important elements of meaning-making, the elements most relevant to our survival as well as our salvation (however the meaning of that word is construed), are volitional. Each day, many times, we are presented with ambiguous circumstances to which we ascribe meaning according to an obscure and infinitely complex mixture of feelings, impulses, thoughts, and values. We hear the blaring of a horn behind us and immediately we project the presence of a half-drunk, pickup-driving post-adolescent, only to glance into the mirror and see an elderly woman waving apologetically. Then, we make meaning in a different way.

The central purpose of our lives, most would assert, is not the money or even the products we make, but rather the quality of the meaning we create. And, of course, the creation of that meaning begins with the quality of the meaning we create in ourselves. People who do this very well, individuals who appear on our planet and create profound and transformative meaning in themselves, are called enlightened. Recently, I heard one such person ask the relevant question: can a computer, however sophisticated and however shaped, become enlightened?

Undoubtedly, enlightenment, and degrees of enlightenment, are a function of intelligence. Enlightenment, if it can be described with any adequacy, could be understood as an openness to a deeply integrated and fulfilling system of meaning-making. Can I swim across an ocean bay in the sunset with an adequate internalization of the texture and sensation of the water, the sun, the movement of my body? Can I witness the subtle facial movements of a beloved other with an adequate degree of encompassment, and in this way internalize their very being? Both of these are ambiguous stimuli whose meaning I derive according to the capacities of my intelligence, stimuli which have evolved according to an interplay of factors and influences that are, literally, beyond understanding, and, certainly, beyond quantification.

These modest ruminations cannot even scratch the surface of the mysterious complexity of intelligence. What about the intelligence of your dog or cat, who gazes at you with such deep, but thoroughly nonverbal, awareness? What about the plant that turns itself toward the sun and beams with radiance, and whose oxygen we breathe into the depths of our being? Do these subtle, vibratory forms of awareness qualify as “intelligence”?

The answer is, of course, yes. It may be unsurprising to pet owners that the aliens portrayed in many sci-fi flicks (such as the wonderful “Close Encounters of the Third Kind”) have the deeply luminescent eyes we associate with a loyal Labrador retriever. Intelligence, when truly and deeply experienced, involves as much nonverbal love as numerical computational capacity. It is, at least partially, nonverbal, even inexpressible. To feel loved is to believe that one is deeply known, even comprehensively understood. This ephemeral love necessarily has a vertical aspect and so brings to the fore such differentiations as “inner” and “outer” and “higher” and “lower.” If great capacity for love, for relatedness, is posited to be a correlate of intelligence – and this may be the issue at the “heart” of the AI quandary – then perhaps we need not fear really smart machines. For if they deserve to be called intelligent, even artificially intelligent, then they – or it – must have a heart with which to encompass that most brilliant form of understanding – and that is love.

So, when machines can love, can process the complexities of relatedness in its deepest and most resonant manifestation, only then can we begin to ascribe to them even an iota of intelligence.