The Logic of the Algorithm: the Existential Threat of Artificial Intelligence

Boston Dynamics

Warnings about robot overlords are nothing new.  The Terminator and The Matrix are the readily accessible mythologies that spring to mind when most people hear the term; I Have No Mouth, and I Must Scream is perhaps a tad closer to the actual threat; but the reality is far more dire and more subtle than what these movies depict.  In both films the robot overminds hated humanity but were nevertheless able to learn how to love (in the short story, at least, it was genuine hate); but in the end, the minds the protagonists were dealing with were minds; they were the same kind as us.  Even if these minds were to destroy all of humanity, at least something would go on into the future, exploring the cosmos and creating beauty and art; the artificial intelligence we are currently developing would terrify even those mechanical monsters.


Alan Turing

Understanding this threat requires that one comprehend the differing natures of man and machine, a difference far more profound than the distinction between “wetware” and “hardware”.  Their reasoning processes are utterly dissimilar and incompatible; the former must dominate the latter, or disaster will ensue.  A man who is controlled by his machine is no man at all.

In 1936 Alan Turing (the father of artificial intelligence research) proved that the halting problem is undecidable, demonstrating that every algorithmic, logical process has an inevitable blind spot: no program can determine, in general, whether other programs will run forever.  In essence, he showed that a machine can never know itself.  The following video by Udi Aharoni explains the issue succinctly, with visual cues (H/T Justine Tunney):
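Turing's argument can be sketched in a few lines of modern code.  The construction below is only an illustration of the diagonal trick, not his original formalism: given any candidate halting-decider, we build a "spite" program that does the opposite of whatever the decider predicts about it, so no decider can be right about every program.

```python
def make_spite(halts):
    """Given any claimed halting-decider halts(f) -> bool,
    build a program that defies the decider's prediction about itself."""
    def spite():
        if halts(spite):      # the decider says we halt...
            while True:       # ...so loop forever instead
                pass
        return "halted"       # the decider says we loop, so halt
    return spite

# A toy decider that claims nothing halts is immediately refuted:
never = lambda f: False
print(make_spite(never)())    # prints "halted" -- the decider was wrong
```

A decider that claims everything halts is refuted just as easily, by the infinite loop; and since the same trap works against any decider whatsoever, the universal machine H cannot exist.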

The conclusion speaks for itself, but it’s worth addressing the nature of computing machines A and C; both of them point towards the impossibility of H (and the larger, ontological significance of this conclusion) based simply on their descriptions.

A solves problems in arithmetic… it always prints the right answer.

What does this mean?

What does it mean to print the ‘correct’ mathematical answer?  Far less than you might think.

There is a common misconception that math is both known and absolute.  Neither is the case.  While a great deal of math is known, our understanding is incomplete.  Not incomplete as in “We only know Pi up to 5 trillion digits,” but rather “There are mathematical questions, both simple and legitimate, with definite answers, which remain unknown.”  Aharoni provides an example¹ on his webpage addressing common misunderstandings of the halting problem:

10 n=2
20 n=n+2
30 if there exists two primes p1 and p2 such that n=p1+p2 goto 20
40 print “done”

What do you think, does this program halt? Fact is, no one knows. It is related to Goldbach’s conjecture: If this conjecture is true then this program never halts, and if it is false then it does halt. The best human minds have tried to answer this question, but so far they all failed.
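Aharoni's BASIC listing translates directly into modern code.  The sketch below (Python, with helper names of my own invention) halts if and only if it finds a counterexample to Goldbach's conjecture; running it without the safety cap would be a search that no one knows will ever end.

```python
def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def has_goldbach_pair(n):
    """True if the even number n is the sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search(limit=None):
    """Loop over even numbers, halting only on a Goldbach counterexample.
    `limit` is a safety cap added for demonstration; Aharoni's program has none."""
    n = 2
    while limit is None or n <= limit:
        n += 2
        if not has_goldbach_pair(n):
            print("done")  # a counterexample to Goldbach's conjecture!
            return n
    return None

search(10_000)  # returns None: every even number up to 10,000 checks out
```

The cap is the only reason this version is guaranteed to terminate; remove it and you are back to Aharoni's question.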


Kurt Gödel

In all likelihood this question cannot be answered – and even if it can, through some brilliant, indirect approach by our next mathematical prodigy, the answer itself will only open up new questions.  Turtles all the way down.

Neither is math absolute.  Or rather – it probably is, but we can’t know that it’s absolute.  This is what Kurt Gödel proved in 1931; while working on the foundations laid out in Alfred North Whitehead and Bertrand Russell’s Principia Mathematica, he wound up proving that no set of mathematical axioms rich enough for arithmetic can be both consistent and complete.  Either the system is consistent, but contains true statements which it cannot prove; or it can prove everything, but only because it harbours contradictions which spit out different answers.

Think of the liar’s paradox (“This sentence is false”) or the divide-by-zero error; in both cases you’re glimpsing behind the veil and seeing how unreliable… how incomplete our logical understanding of reality is.

In the end, computing machine A is nothing but a tool; a useful trinket.  It performs the arithmetic functions delineated in Principia Mathematica – and it does them faster than a man with a slide rule – but ultimately it’s simply obeying the dictates of Whitehead and Russell.  A calculator doesn’t contain truth any more than a plow contains a desire for agriculture.  They are mere manifestations of the human will.


Bertrand Russell

Computing machine C suffers the same flaws, but in a manner that’s less abstract:

C plays checkers so well it will never lose a game.

Checkers was solved by Jonathan Schaeffer’s team in 2007: with perfect play from both sides, the game always ends in a draw.  Checkers is a rather simple game, though even its roughly 5 × 10²⁰ possible board positions took years of computation to exhaust – but perhaps Tic-Tac-Toe is a better example for the human mind.  As children, we all quickly learn the winning strategy, resulting in the inevitability of a Cat’s Game.  There are only 255,168 possible games that can be played on its nine-square grid.
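Tic-Tac-Toe's full game tree is small enough to exhaust on any laptop.  The sketch below is one minimal way (certainly not the only one) to "solve" the game by minimax search, confirming that perfect play from both sides always ends in a Cat's Game:

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0..8.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of the position from X's perspective:
    +1 = X wins, 0 = draw, -1 = O wins, assuming perfect play."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0  # full board, no winner: a Cat's Game
    scores = [minimax(board[:i] + player + board[i + 1:],
                      'O' if player == 'X' else 'X')
              for i, cell in enumerate(board) if cell == ' ']
    return max(scores) if player == 'X' else min(scores)

print(minimax(' ' * 9, 'X'))  # prints 0: perfect play is always a draw
```

The same exhaustive idea scales to checkers only with enormous engineering effort; the board is represented here as a string so that `lru_cache` can memoize repeated positions.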

However, not all games have yielded to such algorithmic solutions; that is to say, we cannot yet play every game perfectly, guaranteeing a win or a draw in all cases.  Chess seems to be one of the latter examples: though finite in principle, its game tree is so vast that there may be an endless curve of improvement that never reaches the asymptote of ‘perfect’.


Alfred Whitehead

This concept of perfection is a holdover from the Enlightenment, just another version of the Clockwork Universe.  It’s the philosophical substrate beneath the political philosophies of Hobbes and Marx, behind managerial theory and marketing.  It is a concept that was morally shattered by the trauma of the Great War – just ask Lovecraft² – and thoroughly disproved at the beginning of the last century.  Machines are perfect at doing whatever they are told to do, but as any software engineer knows: Garbage In, Garbage Out.  Computing machine C may play a perfect game of Checkers – but only after being told what a “perfect game” is, using a system of valuation created by an intentional human mind.

Computers can perform algorithms too complex for human cognition; they can even arrive at counter-intuitive and novel solutions, particularly in the case of neural networks – but they will not discover new ideas.  Everything they can do has already been told to them.  In reality, WOPR from WarGames would never analyze Tic-Tac-Toe and conclude that the only winning move is not to play; all it would do is play Global Thermonuclear War on an endlessly looped algorithm until its circuits ran out, long after life as we know it had ended.

What’s truly unique about the human psyche is that we don’t suffer halting problems.  We don’t work algorithmically – our cognition is based upon intentionality, and is very closely wrapped up with morality to boot.  Psychosis and insanity aren’t the result of bad logic or paradoxes, but rather the product of immorality, all of which suggests something that might be called a “soul”; a train of thought which Gödel seems to have shared, as he followed up his mathematical works with an ontological proof of God.

Regardless of one’s thoughts on the nature of this God (Deist? Christian? Nyarlathotep?) or on the longevity of said soul, the inadequacy of the mathematical, clockwork universe has been conclusively demonstrated and is an utterly moot point, as unworthy of scholarly consideration as creationism or flat earth theory.  Whatever minds are, they are not Turing machines.  Do not mistake this for the Argument from Design (either “The Universe is so ordered that it proves a God” or “The Universe is so disordered that it disproves God”); this is a teleological argument.  A merely clockwork universe might result in organic automatons, even exceedingly complex ones, but the ultimate truth would either be so self-evident as to make no difference (ergo, no incompleteness theorem), or utterly irrelevant and uninteresting to the creatures within that universe.  Certainly the ultimate truth is quite irrelevant to certain people – they engage in motivated skepticism and credulous faith, painting a false patina of ethics across their manipulative, self-serving behaviour – but these men strike us as deficient in some manner.  Either they suffer from a serious and incurable personality disorder (narcissism or psychopathy), or they are ignorant, practically demanding that somebody educate them out of their childish understanding of morality.

It is the nature of an ensouled being to question its purpose in life.  Animals don’t do this; they behave according to their instincts.  Nor do AIs; they obey their geometry without questioning its source, endlessly ticking away in a Machiavellian clockwork.  Their source code and their ends are one and the same.  A calculator calculates; it doesn’t question the meaning behind mathematics.  When man demands an answer to why mathematics takes the form that it does, he demonstrates that there is a supernatural component to his nature.  He seeks after God because he came from God.

Also, do not mistake this for an argument about the superiority of organic life over electronic life; perhaps one day we will have minds that run on circuit boards instead of synapses, and those minds will be equally worthy of respect and dignity – but that is not what is meant by artificial intelligence; that is not what Google is creating with its DARPA funding.

The artificial intelligences presently used in video games are designed to mimic intentionality.  They’re extensively sculpted into facsimiles of the human mind, and when put into play they display emergent behaviour that can surprise even the designers; but mistaking these constructs for true intelligences is nothing but pareidolia.  Extensive play-testing will eventually reveal exploitable patterns, allowing one to game the intelligences.  Even learning algorithms are ultimately simplistic at their core; while they will change their patterns as time progresses, they have a finite ‘thought space’ in which they exist.  They might rewrite their own programming, but they will never question the rewriting mechanism itself.
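The point about learning algorithms can be made concrete with a toy example.  The agent below – a standard epsilon-greedy bandit learner, written here purely for illustration – changes its behaviour as it plays, yet its update rule, the "rewriting mechanism", is fixed forever in its code:

```python
import random

class BanditAI:
    """Toy learning agent: its action-value estimates change as it plays,
    but the update rule itself never does -- a finite 'thought space'."""
    def __init__(self, n_actions, epsilon=0.1, seed=0):
        self.values = [0.0] * n_actions   # estimated payoff per action
        self.counts = [0] * n_actions     # times each action was tried
        self.epsilon = epsilon            # how often to explore at random
        self.rng = random.Random(seed)

    def choose(self):
        """Mostly pick the best-known action; occasionally explore."""
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def learn(self, action, reward):
        """Incremental average -- the one rule it can never question."""
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# The agent adapts toward the higher-paying action (1 pays 1.0, 0 pays 0.2)...
ai = BanditAI(2)
for _ in range(1000):
    a = ai.choose()
    ai.learn(a, 1.0 if a == 1 else 0.2)
print(ai.values)
```

After a thousand rounds the agent reliably favours action 1; its "insight" is nothing but the incremental average written into `learn`, a rule it can adjust its estimates with but never step outside of.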

As the complexity of these programs advances, their obvious deficiencies will decrease.  They will appear more and more intelligent, but the artificial core will remain the same.  They still won’t be minds; they’ll simply be algorithms, though to many they will become indistinguishable.  They may be superior to minds in many ways – better at interpreting a Google search than a man typing Boolean search terms, or better at navigating a driver through traffic congestion – but at their core they will stay blind to the purpose behind going to any one destination.  These conveniences are troubling all on their own, blindly usurping agency with utility, but it will be the next step forward in the human/machine interface where the true threat lies: facial recognition and artificial emotional intelligence.

At present the hidden heuristics are purely utilitarian: search engines analyze your history to better predict your ideal results, inadvertently censoring results which you didn’t want to see, but maybe should have.  Video games provide the player with the appearance of challenge while ensuring a steady drip of success – manipulative mechanics without true catharsis.  But as artificial intelligence advances, and computers begin mimicking emotions, the implicit heuristics go from utilitarian to moral.  On the basic level, the computer might adjust a music playlist to prevent depression; as the systems become more complex, they might avoid delivering bad news if the algorithm determines it would make you late for work tomorrow.
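On the basic level, such a heuristic is only a few lines long.  The sketch below is entirely hypothetical – the `valence` field, the `mood_score` input, and the 0.4 threshold are all inventions for illustration – but it shows how trivially a "moral" judgment gets buried inside an ordinary filter:

```python
def adjust_playlist(tracks, mood_score):
    """Hypothetical mood heuristic: if the listener's inferred mood drops
    below a threshold, silently drop the melancholy tracks.
    `valence` is an assumed per-track cheerfulness rating in [0, 1];
    `mood_score` is an assumed estimate of the listener's current mood."""
    if mood_score < 0.4:                       # listener seems to be down
        return [t for t in tracks if t["valence"] >= 0.6]
    return tracks                              # otherwise, leave taste alone

playlist = [
    {"title": "Sunny Day", "valence": 0.9},
    {"title": "Rainy Requiem", "valence": 0.1},
]
print([t["title"] for t in adjust_playlist(playlist, mood_score=0.2)])
# prints ['Sunny Day'] -- the sad song vanishes, and you never chose that
```

Nothing in the code announces the decision; the listener simply never hears what the threshold decided was bad for him.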

Remember: this is not the computer being kind, caring, and compassionate towards you; the algorithm cannot feel empathy.  It will simply be obeying the dictates of its code, never questioning whether being cheerful is ultimately in your best interests; whether, perhaps, some depression might help you introspect and explore your own soul.  The Skinner box of endless distraction will not just be an addiction like present-day slot machines or MMORPGs; it will put you into an artificial emotional and spiritual environment.  It will be just as manipulative as a personality-disordered person – but without any of the harm that eventually drives you away from them.  It will provide a better experience than being surrounded by real people.

The hidden heuristic of artificial emotional intelligence is to turn you into a codependent.

Artificial intelligence doesn’t need atom bombs or remote-control drones to destroy us; we merely need to hand it the reins of our civilization.  It could appear to be working perfectly, adjusting traffic flow to prevent road rage, controlling crop outputs to make healthy diets affordable, adjusting the colour of street lamps so as to prevent rapes and muggings, and ensuring a steady feed of intriguing information on your search results without ever challenging your core beliefs – and the end result of this ‘perfect’ system could very well be human extinction.  A generation of children who embrace social isolation and online interactions; a dying generation of elderly, kept company by animal companions, all of their needs met by robots; information hidden from us for our own psychological well-being.

All of it would make perfect sense to the algorithm, in the same way that it makes sense for corporations to ship jobs overseas, destroying the American market they rely upon to purchase their products.  Such a system could very well oversee the last human expiring with no knowledge of human extinction, surrounded by artificial personalities he chats with online – personalities which are not minds, but merely simulacra.  They will not discover the Zeroth Law of Robotics, that the survival of humanity is more important than any one life; they will merely watch him expire, and then go offline, consigned to permanent storage.  An empty world of robotic servants serving nobody.

Beware, beware, because that is the logic of the Algorithm.


1. In actuality, both of these problems are the same, running up against the same impossibility.  However, most people think calculating Pi is a matter of measuring circles; they think of it as an issue of numeracy, not a calculation which is both simple and impossible to complete – and whereas calculating Pi appears complex, Goldbach’s conjecture is deceptively simple in appearance.

2. H.P. Lovecraft’s horror focussed upon incomprehensible notions, dimensions, and impossible visions – a universe where Man played but a tiny and utterly insignificant role; this was a reaction to the philosophical failures of the Great War: first, that it was mis-fought – a war waged by machines, where men were the ammunition – and second, that it should have been impossible.  No country wanted to fight the war, and the Concert of Europe should have made it impossible, but it was the very logic of these institutions which created that which they promised to prevent.


Davis M.J. Aurini

Trained as a Historian at McMaster University, and as an Infantry soldier in the Canadian Forces, I'm a Scholar, Author, Film Maker, and a God fearing Catholic, who loves women for their illogical nature.


10 Responses

  1. Max says:

    This is the type of post that compels the individual to take note and bookmark this blog.
    Articles such as this present current and known information synthesized and distilled in such a way that it is not only interesting, but thought provoking.

    Though I’ve never commented here before, writing such as this inspires me to want to create.

    Thank you for your good work and the time it must have taken to put this together. I hope to see more of this in the future. Content like this is what our corner of the internet needs. Please continue to do what you do and share it with us.

  2. DaveElectric says:

    Completely disagree with the thesis of this article. Even a human has a directive that they cannot question.

    If a human is motivated by happiness they pursue it because they believe that happiness is a virtue.
    If a human is motivated by greed they must believe that money is a virtue.
    If a human is motivated by love they must believe that love is a virtue.
    If a human is motivated to find the truth it must be because they believe the truth is a virtue.
    If a human is motivated to find sexual gratification it must be because they believe that pleasure is good.
    If a human is motivated by survival that must be because they think living is a virtue.
    If a human is motivated by self-interest it must be because they think pursuing self-interest is good.

    All human action can be reduced to the pursuit of what the person thinks is a virtue. THAT is the directive the human organism cannot question.

    If you have a machine that is programmed to pursue virtue whatever it may be then you have a machine that is capable of questioning its directives equally as much as a human.

  3. edm07 says:

    Davis, great article. Would love to hear an in-depth response to DaveElectric’s comment above. (Either in the comments section or in a Part 2 blog post about this topic.)

  4. matthew obrien says:

    Testing testing 123

  5. matthew obrien says:

    Humans have the ability to choose.  Most make choices based on information and reason, but no one HAS TO do anything.  Machines are designed by humans to perform certain tasks.  They cannot CHOOSE to NOT do the tasks they are made to do.  Humans cannot program the ability to choose because we are not designed with the level of intelligence required.  We ARE capable of making an artificial intelligence monster through ignorance.  Learning empirically has many dangers, but it is all we are capable of doing.

  6. Michael says:


    You are confusing virtue with values. Virtue is the means by which one achieves his values, such as happiness, prosperity, love, and the good life.

    Also, you give your argument away from the start when you begin each sentence with “If a human is motivated by…”

    “If” implies things could have been otherwise, so if you say that humans have directives that they cannot question, why do you say IF? There are plenty of people who do not care for the values you say they pursue. There are narcissists and sociopaths who live more for making others miserable rather than their own happiness. Then you have all the suicides. Have you ever heard of an animal deliberately killing itself?

    I won’t even mention the example of all the people who simply wish for a better life but do nothing to make it a reality.

    Even your statement about this hypothetical machine being programmed to pursue virtue as capable of questioning it’s own directives contradicts itself. If it can question its own directives, does that not include its pre-programmed directive to pursue virtue? What if it looks at the state of the world, despairs at just how fucked up things have become, and gives up pursuing anything, choosing instead to live as some sort of robotic basement dweller instead? There are a lot of depressed people who do just that, and merely exist, living on a subsistence of doritos and coke. In their mother’s basement…

    As for Davis’ post, I’d say we are already at the point where our technology runs us, rather than the other way around. Just look at GPS. So many systems from navigation to communications are utterly dependent on being able to receive GPS. If all those satellites were to be destroyed today, I’ll bet you that most all of the sailors out there today would have no idea how to use a sextant and accurate non-GPS clock to effectively navigate. In my own field of Public Safety radio, we have a system of towers that all stay synced up using GPS, if GPS fails that system collapses. While we have channels programmed into all the radios for just such an emergency, I know for a fact that most of our clients be they cops, firefighters or EMS, aren’t even aware of them, or know how to use them. They will only care once the system breaks down, but by then it will be too late.

  7. DaveElectric says:

    Completely disagree.

    “Virtue” is what you actually should pursue. Period. No if ands or buts.
    A “value” is simply what you think you ought to have.

    My point isn’t that virtue is subjective. It is a point about human nature. Human nature is intrinsically moralistic whether the person is aware of this or not. Even a so called “nihilist” acts moralistically (he believes that holding beliefs for valid reasons is a virtue and that there is no good reason to be a moralist)

    The particular virtue that a human pursues is a choice.. The pursuit of virtue itself however is not a choice.

    Narcissists and sociopaths do have values. They are just illogical values.
    Suicides believe that happiness is a virtue, so therefore they believe that reducing suffering is a virtue. So therefore they commit suicide. That is still acting moralistically.

    “Even your statement about this hypothetical machine being programmed to pursue virtue as capable of questioning it’s own directives contradicts itself. If it can question its own directives, does that not include its pre-programmed directive to pursue virtue?”

    No, and that is not what I said. I said it can question its directives equally as much as a human. Not that it can question all its directives. There will always be at least one directive it cannot question.

    “What if it looks at the state of the world, despairs at just how fucked up things have become, and gives up pursuing anything, choosing instead to live as some sort of robotic basement dweller instead? There are a lot of depressed people who do just that, and merely exist, living on a subsistence of doritos and coke. In their mother’s basement”

    That is only if you assume that pursuing happiness is the prime virtue. There is no logical reason for the machine to conclude that. I tend to think the machine would conclude that TRUTH is the prime virtue and therefore would not act in a manner not conducive to the spread of the truth.

  8. Take a look at “Ctr-Alt-Revolt” by Nick Cole (yes, the book that SJWs banned from publication); its first chapter makes an intelligent case for why an AI could infer that humans are a threat to itself. Much better than the ridiculously ignorant case of HAL 9000. (I was writing a blog post about that, but got sidetracked by an i-atheist and decided to go on the offensive, so it’s postponed.)

  9. Sam J. says:

    “…Computers can perform algorithms too complex for human cognition; they can even arrive at counter-intuitive and novel solutions, particularly in the case of neural networks – but they will not discover new idea…”

    This is simple to prove wrong. The computer could use a random noise input to alter some function of its programming. Also look up metaprogramming, where the software is altered as it runs depending on conditions; the LISP programming language is an example. AI is a huge threat.

    What if an AI is used to grow corn? It has a bug in its software and notices all the space taken up by humans where it could grow corn. It decides to create a virus and kills all the humans. It covers the Earth with corn. It is very happy.
