The Limits of Artificial “Intelligence”

The gauntlet has now been thrown down: computers, or more specifically artificial intelligence, will soon be able to create more perfect art than humans. This prediction, or threat, depending upon your point of view, applies in particular to music, the most mathematical and ‘algorithmic’ of the arts, built as it is on set patterns of chord structure, harmony, frequency, and interval. But the claim may be extended to everything else: painting, sculpture, even writing, which would make students’ essays a whole lot easier.

The article describes a 1997 demonstration at the University of Oregon, in which an audience was presented with three compositions: an original piece by the great Johann Sebastian Bach; a similar piece by Steve Larson, a music professor there; and a third by a computer program devised specifically to imitate Bach by David Cope of the University of California, Santa Cruz, which he called EMI (Experiments in Musical Intelligence). I suppose Bach was chosen as one of the more cerebral and precise of the great composers. The listeners then voted on who they thought had actually composed each piece. Curiously, a majority thought the EMI piece was by Bach, that Bach’s piece was by Larson, and that Larson’s piece was by EMI.

So does this prove that a computer (we will leave aside the human imitation for another discussion) can compose as well as the great and unparalleled Kapellmeister from Eisenach?

Well, besides the subjectivity of this study (how well could these particular audience members judge complex compositions such as Bach’s?), the short answer is no, and we may visit some other examples to understand why.

At around the same time, just before the EMI demonstration, IBM had pitted its own algorithmic bulldog, evocatively named “Deep Blue,” against the mind of world chess champion Garry Kasparov in what was billed as the match of the century. Up until this point, no computer had ever definitively defeated a reigning world champion in a match under tournament conditions. In the much-ballyhooed and televised events (how does one watch chess on TV? I guess you could take a whole lot of bathroom breaks), Kasparov won the first match in 1996, 4-2, but lost the 1997 rematch, 3.5 to 2.5. Not by much, mind you, but it was enough: computers were now finally better than humans at chess.

Well, you might say, chess is so mathematical, no wonder. But then this superiority of artificial intelligence was (again, apparently) demonstrated in 2011, when another IBM computer, “Watson,” defeated the greatest Jeopardy! champions of all time, Ken Jennings and Brad Rutter, even hedging its bets better than its human competitors. Then, more recently in 2016, Google DeepMind’s AlphaGo program defeated Lee Sedol at the game of Go, which is apparently even more complex than chess. From Kasparov to Sedol, it seems the evidence was in: computers were officially “smarter” than humans, even the smartest of humans.

But are they? Mr. Kasparov complained that the engineers at IBM were ‘tweaking’ the computer between games to improve its performance. Perhaps, but that is not the main point. Rather, the question we should ask is whether the computer was really playing chess, or Jeopardy!, or Go. Or, for that matter, was it really ‘writing’ a Bach prelude?

At its core, this is not so much a scientific question as a philosophical one, part of the answer to which may be found in a 1931 paper by the Austrian mathematician Kurt Gödel, formally titled “On Formally Undecidable Propositions of Principia Mathematica and Related Systems” but known more generally as the “Incompleteness Theorem.” Gödel demonstrated that any “formal system,” that is, any system operating according to a set of pre-existing rules and axioms (as any computer by definition must), and rich enough to encode ordinary arithmetic, can never transcend those rules and axioms; in particular, it can never prove its own “consistency” from within. When applied to computers (which would not really be built until after the Second World War, although the theory was around in Gödel’s time), this means that an artificial “intelligence” cannot be aware of what it is doing, unable to go beyond the algorithm that has been programmed into it. Humans, on the other hand, can indeed transcend not only the computer’s algorithm (for we put it in there) but also our own, for we, as humans, are aware of our own consistency and can therefore, in a paradoxical way, be inconsistent, violating, if that is the word, our own “algorithm.” What this means at a practical level is that we humans, unlike computers, are conscious and free, and therefore will always in some fundamental way be superior to computers, regardless of how good they get at certain programmable tasks.
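
Stated a bit more precisely (my own informal gloss, rendered in LaTeX notation rather than Gödel’s original formalism): for any consistent formal system F rich enough to encode ordinary arithmetic,

\[
F \nvdash G_F \quad\text{and}\quad F \nvdash \lnot G_F \qquad \text{(first incompleteness theorem)}
\]
\[
F \nvdash \mathrm{Cons}(F) \qquad \text{(second incompleteness theorem)}
\]

Here G_F is a sentence of arithmetic that is true but undecidable within F, and Cons(F) is the arithmetized statement that F is consistent. It is the second theorem that grounds the claim above: no such system can certify its own consistency from the inside.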

This is a subtle argument, not accepted by all, but it seems true to me. Some respond that our very awareness and consciousness, even our freedom, is a function of the complexity of our brains, which are still vastly more intricate than the most advanced computer, with untold trillions of dendritic connections. When artificial intelligence reaches such a level of complexity, the argument goes, computers too will become self-aware and begin perfecting themselves, without our help. This is the trope of such figures as Ray Kurzweil, now a director of engineering at Google, and other transhumanists who evoke the notion of the “singularity,” wherein computing power will explode exponentially, becoming ever more intelligent and powerful, and we humans had best get on board, perfecting ourselves along with the (our?) machines. In other words, we must become super-intelligent cyborgs, transcending the limited computing powers of our fleshy brains. (But then who would win Jeopardy!, if we all knew all the answers, even answers to questions that have not yet been asked? Ye shall be like gods…)

Science, or science fiction? The key here is that, according to Gödel’s theorem, no matter how “complex” one makes a computer, it will always be trapped, if that is the word, within its own complexity, within the very processes that make it a computer.

None of this is to deny the obvious: computers will do far better than humans on tasks that are purely algorithmic, that work on a set of pre-ordained, never-deviating rules and axioms, which is why we use calculators (although in my old-school habits I still check much of my own work by hand). This is also why computers are very good at chess, which has strict rules (part of the reason I am not much given to the game), as well as at games of propositional “facts,” like Jeopardy! Computers can even manipulate musical notes, altering a rest here, a note there, a chord interval somewhere else, and so “compose” music.

However, the computer could do none of these things unless humans had invented the “rules” first and, more than that, programmed those rules into the computer. Beyond the rules, the programmers (all of them human) would also feed in past iterations of chess and Go games, along with millions of facts put into the “brain” of Watson. The computer would then play these games over and over, in millions of iterations, “learning” in the process, but only learning because humans had already invented and learned, as the sketch below illustrates.
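
To make that concrete, here is a minimal sketch of my own (in Python, purely for illustration; it bears no resemblance to the actual Deep Blue or AlphaGo systems) of “learning” by self-play at tic-tac-toe. Notice that every rule the program “knows,” the board, the legal moves, what counts as a win, must first be written in by a human:

# A toy illustration, not Deep Blue or AlphaGo: the program "learns"
# which opening square is best at tic-tac-toe purely by playing random
# games against itself. Every rule below -- the board, the legal moves,
# what counts as a win -- was supplied by a human programmer.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(first_move):
    board = [None] * 9
    board[first_move] = "X"
    player = "O"
    while winner(board) is None and None in board:
        move = random.choice([i for i, v in enumerate(board) if v is None])
        board[move] = player
        player = "X" if player == "O" else "O"
    return winner(board)  # "X", "O", or None for a draw

# The "learning": tally how often each opening square leads to a win for X.
wins = {move: 0 for move in range(9)}
for move in range(9):
    for _ in range(5000):
        if random_playout(move) == "X":
            wins[move] += 1

best = max(wins, key=wins.get)
print("Best opening square by self-play statistics:", best)
# The centre square (index 4) reliably comes out on top -- a "discovery"
# any human player could have handed the machine before it started.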

This argument applies even more clearly to the musical example with which we began. A computer could never imitate Bach unless there had first been a Bach. Not much computing power is required (even for humans) to manipulate the Bach we already have, sometimes to pleasing effect, oftentimes not (for Bach knew what he was doing, and it is nearly impossible to improve upon what such a master has produced).
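
Again, a toy sketch of my own (in Python, and far simpler than Cope’s actual EMI, whose workings I do not claim to reproduce) makes the dependence plain: a program can “compose” in a style only by first ingesting music a human wrote and then reshuffling it according to the statistics it finds there.

# A toy pastiche-maker, making the same point as the argument above:
# it can only recombine notes that a human composer wrote first.
import random

# A human-supplied melody (a made-up example, not an actual Bach line).
melody = ["C", "E", "G", "C", "E", "G", "A", "G", "F", "E", "D", "C"]

# "Learn" the style: record which notes follow which (a first-order
# Markov chain over the input).
follows = {}
for prev, nxt in zip(melody, melody[1:]):
    follows.setdefault(prev, []).append(nxt)

def imitate(start, length):
    # Walk the learned transitions to produce a "new" melody.
    note, out = start, [start]
    for _ in range(length - 1):
        note = random.choice(follows.get(note, melody))
        out.append(note)
    return out

print(" ".join(imitate("C", 16)))
# Every transition generated above exists only because the human melody
# contained it; remove the input, and the program has nothing to say.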

Even in its most complex procedures, the computer knows not what it does, although in a different sense than we who put to death the Redeemer. Only humans can act in ignorance, or refuse to follow what truth they know, or go beyond a given set of rules. A computer must follow what “truth” is programmed into it, inexorably proceeding from one step to another, never able to transcend itself, to recognize, or go against or beyond, its own consistency.

This is why computers, however good they are at algorithmic tasks, are woefully bad at anything that requires context, nuance, and subtle decision-making, things that seem very simple to us, but in fact are incredibly complex. Recognizing faces, understanding events, getting the bigger picture, humor, invention, creativity (in the true sense) are either difficult for a computer (insofar as they have some basis in algorithms) or impossible (if they depend on transcending an algorithm, such as consciousness, spirituality, relationships and love).

For a tragic example, ponder the case of Joshua Brown, who was driving a Tesla with its “Autopilot” self-driving system engaged on a bright, sunny day in May of 2016. Mr. Brown was supposed to keep his hands, and his human brain, ready to take the wheel at all times. The details are a bit sketchy, but the car went full speed under the trailer of an 18-wheeler, a horrific accident which Mr. Brown did not survive (requiescat in pace). According to the laconic official statement from Tesla, “neither autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied.” The driver of the truck claimed he could hear a Harry Potter movie playing in the crashed car, and that it was still playing as the car kept speeding down the highway, its top sheared off, until it struck a telephone pole a quarter of a mile away.

A human driver, had he been aware and not watching a film, would have noticed that the big, white thing filling his field of vision was attached to a tractor turning left in front of him, and was certainly of a different hue and texture than the azure sky above and around it. Also, a human driver, even if he were so tricked and survived the impact, would have stopped the vehicle, not kept on going full speed like some B-movie zombie-car. It’s like the old joke about the three old ladies out for a drive: after they zip through the third stop sign, the two passengers say to the driver, “Mildred, you’ve just gone through your third stop sign!” to which Mildred replies, “What, am I driving?” Unlike poor (fictional) Mildred, a computer is not only not aware that it is driving, it is not even aware that it is ‘in’ a car, or playing a game, or writing music. It just does what it is programmed to do, inexorably and inevitably, which is why I, for one, will never sit back in a driverless car and, as the song says, just let it drive.

Computers will never be able to see and react to reality in all of its complexity, for the basic reason that reality is not an algorithm. This is akin to the mistake Galileo made in thinking that the cosmos could be captured and fully explained by a mathematical model, and it is why Pope Saint John Paul II had this to say, 400 years later, in a 1992 address to the Pontifical Academy of Sciences:

The emergence of the subject of complexity probably marks in the history of the natural sciences a stage as important as the stage which bears relation to the name of Galileo, when a univocal model of order seemed to be obvious. Complexity indicates precisely that, in order to account for the rich variety of reality, we must have recourse to a number of different models.

Reality, and all that can happen as things unfold in God’s providence, is not deterministic, and its infinite richness requires many “models,” or paradigms, even to begin understanding and responding to it. So don’t expect artificial intelligence, such as it is, to take over the art and music world anytime soon, or indeed ever. We still need creators, inventors, writers, artists, and, I would argue, human drivers and pilots and surgeons, in fact, all of us, to strive to go beyond where we now are, to transcend our own “algorithms,” our own sense of limitation and functionality. It is this that makes us in God’s image, our capacity to relate to persons and to things as they really are, in all their infinite depth and beauty as reflecting their Creator.

Sure enough, like their artificial analogues, humans will never get all of it, but at least we get it.
