There’s always something to howl about.

Debunking Artificial Intelligence — while programming your computer to be almost as smart as your dog.

Everything you’ve been taught about Artificial Intelligence for your whole life is false. AI researchers are not frauds, I don’t think, but they’re exuberant when they talk to reporters, and the reporters are ignorant, thoughtless and brash. In real life, AI is Siri, which can reliably lead you to the nearest closed-down super-market. In your imagination, AI is C-3PO, who can lecture you on Chinese lithography while clobbering you at backgammon.

This is the truth — and telling the truth about AI is as rash as telling the truth about Anthropogenic Global Warming or abortion, an incitation to a frothy wrath: There is no Artificial Intelligence anywhere — nor will there be any time soon, if ever. This is a case where new theory really is required. The theories currently being deployed in AI research will produce ever-more-competent Siris — which achievements will be hailed as “proof” of Artificial Intelligence — but they will never produce any actual intelligence.

Why? Ontology and teleology, of course.

AI fails because it is not actually attempting to model intelligence but simply to mimic the effects of intelligence. In this respect, AI is a cargo cult, and its argument of “proof” is the same as that of any cargo cult: Post hoc, ergo propter hoc (after this, therefore because of this). If the destination turned out to be a super-market, even if it’s a bankrupt super-market, then Siri is “intelligent.”

Just that much is an error of identification. When you play chess against your iPhone, this is not “man versus machine,” it’s you versus a team of programmers — and they’re cheating, which is how they win. If you could pop out to do 50,000 pre-programmed calculations per move, you’d kick their program’s ass.
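
To make the point concrete, here is a minimal sketch in Python (my toy, not anyone’s shipping chess engine) of how a game program “decides” a move. The rules, the look-ahead and the definition of winning are all human choices made before any game is played; the machine merely executes them quickly.

```python
# A toy race-to-10 game: players alternately add 1 or 2 to a counter, and
# whoever reaches 10 first wins. Every "decision" below was made by the
# programmer in advance; the computer only carries out canned calculation.

def legal_moves(counter):
    """Programmer-defined rules of play."""
    return [1, 2] if counter < 10 else []

def minimax(counter, maximizing):
    """Exhaustive look-ahead: pre-programmed calculation, not thought."""
    moves = legal_moves(counter)
    if not moves:
        # The previous player reached 10 and won, so the side to move lost.
        return -1 if maximizing else 1
    scores = [minimax(counter + m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

# The program's "choice" of opening move follows mechanically from
# decisions its programmer made before the game began.
best = max(legal_moves(0), key=lambda m: minimax(m, False))
print("Canned 'decision':", best)  # 1 -- the provably winning first move
```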

It is that error of identification that produces all the absurd AI claims in the popular media: Someone is doing something differently! It must be a miracle! So far, there is nothing in AI that cannot be adequately explained as human ratiocination. The fact that software is written in advance does not mean that the process is driven by anything other than human intelligence. The output of all software is never anything more than the consequence of interactions among human beings — the end-user and the programmers. Any intelligence you perceive is 100% human in origin.

To infer is to follow a rule of inference, but to originate that rule of inference is an entirely different process. That statement alone risks a new cargo cult: A smart computer would generate its own rules of inference! That won’t help, alas. Organisms — the only entities we know of that exhibit traits we might describe as intelligence — are internally-motivated to act, where computers and software are not. Oops! Another cargo cult: Motivated computers! Sorry, that won’t work, either. Whatever it is that makes organisms want to stay alive, there is nothing made by the mind of man that evidences anything even remotely like a will to live.

(And the micro-biologists can stand down. They don’t make life, they mix and match pre-existing components they don’t know how to make from scratch. But: If there is a hope for new AI theory, my money would be on the biologists, not on computer scientists.)

Every organism is aware of its environment to some degree, but there is no such thing as computing hardware or software that is aware in this way. Siri is not as smart as a helpful child. Siri is not even as smart as an amoeba. Siri is not aware of anything, ever, nor is any other piece of AI technology. This is a fundamental ontological misidentification, and it is the source of all the absurd claims made about Artificial Intelligence. In due course, we are going to talk about “intelligent” responses to signaling events, but the ability of hardware and software to detect some of those events and to respond “appropriately” is simply more computer chess: Canned ratiocination, human reasoning done in advance and encoded in that hardware and software.

In this respect, Artificial Intelligence is a dancing-bear theory:

[T]here are only three kinds of social science “news” stories. When the “news” deigns to inform you of your nature or your mental acumen, the breathless revelation will come in one of these forms:

1. We now know we know nothing!

2. Your good behavior is not to your credit, but at least your bad behavior is not your fault!

3. Dancing Bears are just as smart as you!

The purpose of a dancing bear story — always — is to induce you to denigrate your own mind. I don’t believe this is what the AI researchers are doing, but they sure aren’t doing anything to correct the popular misapprehension of what software can and cannot do. The ignorant, thoughtless, brash reporters may or may not be in on the conspiracy to undermine human intelligence and free will, but, either way, their work product is a manifestation of the unavoidable consequences of philosophical premises: Our culture is at war with the mind, and dancing bear stories of all types are just individual intellectual bombs in that conflict.

Here’s some good news: Dancing bears do have a purpose, even if it’s not to induce human beings to spit on their own beautiful minds. And here’s some even better news: Dancing bears, even though they are not and cannot be what we insist they are, can be very useful vehicles in the pursuit of the fully-human life.

An actual dancing bear is pretty useless, unless you’re coming up on your fifth birthday, but human civilization as we know it would not exist in its present form if we had not gotten very good at training animals. Beasts of burden still matter to many human economies, but most of the trained animals in America, by now, are pets. And the pet that matters most to Americans is the very best dancing bear in the history of human life on earth: Canis familiaris, the family dog.

If “intelligence” is defined a certain way, your dog is intelligent in the same way all non-human organisms are intelligent: It is equipped by nature to be aware of its environment and to respond in ways that are usually appropriate to its circumstances. Non-human organisms are not conceptually aware, so your dog does not know what’s really happening nearby. It simply has elaborate pattern-matching systems — each one “defended” in the dog-brain analog of post hoc, ergo propter hoc — that it deploys to respond to events it has encountered before. If you put a puppy in front of a mirror, it will bark and growl, guided by its instincts. An older dog will not do this, because it will have modified its responses to account for things that look like dogs but have no scent at all. (And that last claim is fundamentally indefensible, since I have no way of discovering what goes on inside a dog’s brain. All I can do is draw inferences from its observed behavior.)

A dog is certainly not intelligent in the ways you are most likely to claim it is: It does not love you in any conceptual way — it cannot identify a concept like love — and it does not know how you feel — nor even that feelings can be identified and conceptualized. Dancing bears are not at all like human beings, for the simple reason that no non-human organism is capable of identifying and acting upon concepts abstracted from the chaos of sensory information in the immediate environment. There is no argument to be made about the mental functioning of a dancing bear that would offer any analogous understanding about human rationality and free will. We are like animals when we behave like animals — when we snuggle, for instance, or when we attempt to communicate by grunting or growling — but because we can reason conceptually and choose freely, animals are nothing like us.

But that doesn’t mean they’re not useful. Because your dog has no way of distinguishing the significant from the insignificant — an amazingly complex conceptual problem — it pays attention to everything. Because it does not conceptualize, and therefore cannot abstract logical predictive propositions, your dog’s brain works by referencing an elaborate pattern-matching system. Everything is a cargo cult to your dog: The “reason” for everything is post hoc, ergo propter hoc.
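
If you want to see how little machinery that takes, here is a toy model in Python (my illustration, emphatically not a claim about real canine neurology) of pure post hoc reasoning: count what follows what, and expect the most frequent successor.

```python
from collections import Counter, defaultdict

class CargoCultBrain:
    """Post hoc, ergo propter hoc, as code: whatever usually follows an
    event is treated as its "consequence." No concepts, no sorting of the
    significant from the insignificant -- it attends to everything."""

    def __init__(self):
        self.followers = defaultdict(Counter)  # event -> what came next

    def observe(self, events):
        for prior, outcome in zip(events, events[1:]):
            self.followers[prior][outcome] += 1

    def expect(self, event):
        """Expect the most frequent successor of this event, if any."""
        nxt = self.followers[event]
        return nxt.most_common(1)[0][0] if nxt else None

dog = CargoCultBrain()
dog.observe(["leash comes out", "door opens", "walk"])
dog.observe(["leash comes out", "door opens", "walk"])
dog.observe(["leash comes out", "door opens", "car ride"])
print(dog.expect("door opens"))  # "walk": after this, therefore because of this
```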

One of our dogs always “knows” when it’s time to start campaigning for a walk. We think she is referencing the quality of sunlight she sees through the windows — she starts earlier on cloudy days — but, of course, we have no way of knowing this for sure. We used to feed our dogs reliably at 9 am and 9 pm, and a dog we had in those days would always come to tell us if we were even a few minutes late. Could she tell time? Obviously not. But some sequence of reliable patterns added up in her brain to the “conclusion” that being fed should be the next event in the cargo cult chain.

Note that those two phenomena, and many others I could name, all turn on a very stupid “notion” of causal determinism: If being fed or being walked are the unavoidable consequences of the prior observed events in the pattern, then there would be no need to campaign for them. Dogs — very much unlike social scientists and newspaper reporters — exhibit a fundamental if uncomprehended “faith” in human free will: They would not need to try to influence our future behavior if that behavior were deterministically inescapable. Take note, however, that as stupid as all this is — the dog is accidentally “right” about free will as a consequence of its being accidentally “wrong” about everything in existence — as stupid as that is, your dog is still “intelligent” in some degree, where no artifact of Artificial Intelligence ever is.

The challenge for software engineers is to effect this kind of dog-like pattern-matching faux-epistemology in ways that are useful to human beings. And this, at last, is what Artificial Intelligence actually is: Pattern-matching software. Software cannot be aware, not as aware as a dog and not as aware as an amoeba. Except within limits fixed in advance by its algorithms, software cannot modify its behavior to respond to previously unforeseen circumstances, the way the puppy learns that a scentless image of a live dog should not elicit a response. And unlike a dog, desperate to earn its keep as the dancing bear you long for most, software has no motivation of any kind.

And yet software can be programmed to exhibit any sort of dancing bear behavior its corresponding hardware can effect. You can write a simple rule in your mail client to look for triggers in your email and send the apposite auto-response in reply. Siri can direct you to the nearest super-market — and with just a little extra coding, she could warn you that that particular market has gone broke. If you are awash in statistical data, chances are you have software that can find a best fit with the push of a button. None of this software is “intelligent” in even the crippled way a dog or other non-human organism is intelligent, but by modeling the pattern-matching behavior of non-human organisms, software becomes massively more useful to human beings.
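
A bare-bones version of that mail-client rule might look like the sketch below; the trigger phrases and replies are hypothetical, not any real client’s API. Note that every “intelligent” reply was written by a human before any mail arrived.

```python
# Canned trigger-and-response: a hypothetical auto-responder, not any real
# mail client's API. The "intelligence" here is entirely the rule-writer's.
AUTO_RESPONSES = {
    "unsubscribe": "You have been removed from the list.",
    "out of office": "Thanks -- I'll follow up when you're back.",
    "invoice": "Received; forwarding to accounting.",
}

def auto_reply(message_body):
    """Return the canned response for the first trigger found, else None."""
    body = message_body.lower()
    for trigger, response in AUTO_RESPONSES.items():
        if trigger in body:
            return response
    return None

print(auto_reply("Please send the invoice for March."))
```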

And this is what I have been talking about in a general way in this series of technology posts: Writing software that more perfectly emulates the pattern-matching “intelligence” of non-human organisms. Trigger and response — post hoc, ergo propter hoc — is a good start. Better still is a database of past triggers and responses, with the software working probabilistically to determine the best response to the current trigger. Better still is real-time interaction between you and your software, first so you can show it which triggers are significant and which are not, and second so you can show it your ideal response to particular significant triggers. That is to say, software that you can train like you trained your dog.
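
Here is a minimal sketch of that progression in Python (my illustration; no software named Sarah exists yet): a growing database of past triggers and the responses you approved, with the next response chosen by simple vote-counting over similar triggers. Training it means showing it, case by case, what you wanted.

```python
from collections import Counter, defaultdict

class Trainee:
    """A "puppy" system: it starts out knowing nothing and learns which
    response you approve for which kind of trigger, by brute association."""

    def __init__(self):
        # word -> response -> number of times you approved that pairing
        self.rewards = defaultdict(Counter)

    def train(self, trigger, approved_response):
        """You show the software your ideal response to this trigger."""
        for word in trigger.lower().split():
            self.rewards[word][approved_response] += 1

    def respond(self, trigger):
        """Best guess: the response your past approvals most favor."""
        tally = Counter()
        for word in trigger.lower().split():
            if word in self.rewards:
                tally.update(self.rewards[word])
        return tally.most_common(1)[0][0] if tally else "ask the human"

sarah = Trainee()
sarah.train("lunch with the lender on tuesday", "add to calendar")
sarah.train("lender needs the signed disclosure", "flag for follow-up")
sarah.train("meeting moved to wednesday", "add to calendar")
print(sarah.respond("dinner meeting on friday"))  # "add to calendar"
```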

That much is what I’m looking for when I talk about Sarah, your software secretary. Practically speaking, I’m talking about a whole new style of AI. Not “expert” systems, like a Labrador perfectly trained to hunt waterfowl, but, rather, “trainee” systems, like a puppy, dumb as a thumb but eager to please. This is not to say that software like Sarah, or Heidi or Antoinette or Constance, cannot have built-in “expertise”: pre-programmed, canned responses. It’s simply an acknowledgement that one size does not fit all.

The central piece of this argument is still to come. I keep waiting for someone to jump up, shouting, “I get it! I get it!” And it is no doubt shamefully rude for me to point out that the AI empire — as it is presented in the popular media — is a completely naked realm. Nothing that is promoted as being Artificially Intelligent is intelligent in any way — nor even aware as the simplest of organisms is aware. But what AI actually is — canned, pre-programmed human intelligence — is very useful already, and it will only come to be more useful as we get a grip on the idea that we should program software for human beings, rather than always trying to reprogram human beings to fit our software.

It would help, too, if we would take a moment to rejoice in the amazing power of the human mind — as represented in the tools we have seen from AI researchers so far — instead of always crafting bogus arguments to dismiss, deride and denigrate the incredible intelligence to be found between your ears.
