Romance vs. Everything You Need to Know About Artificial Intelligence

This is a follow-up to my last post, in which I talked about how fascinating Artificial Life is.  Genetic algorithms (an attempt to do something useful with Artificial Life) and neural networks may be among the most overused artificial intelligence (AI) algorithms for the simple reason that they’re romantic.  The former conjures up the notion that we are recreating life itself, evolving in the computer from primordial soup to something incredibly advanced.  The latter suggests an artificial brain, something that might think just like we do.

When you’re thinking about AI from a purely recreational standpoint, that’s just fine.  (Indeed, I have occasionally been accused of ruining the fun of a certain board game by pointing out that it is not about building settlements and cities, but is simply an exercise in efficient resource allocation.)

But lest you get seduced by one claim or another, here are the three things you need to know about artificial intelligence.

First, knowledge is a simple function of alleged facts and certainty (or uncertainty) about those facts.  Thus, for any circumstance, the right decision (from a mathematical standpoint) is a straightforward calculation based on the facts.  This is simply the union of Aristotle’s logic with probability.  Elephants are grey.  Ruth is an elephant.  Therefore Ruth is grey.  Or, with uncertainty: elephants are grey 80% of the time.  Ruth is probably (90%) an elephant.  Therefore there is a (0.80 × 0.90 =) 72% chance that Ruth is grey.  If you have your facts straight and know the confidence in them– something you can learn from a statistics class– you can do as well as any intelligence, natural or artificial.
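The elephant arithmetic is easy to check.  Here is a quick sketch in Python; note that multiplying the two numbers assumes the two facts are independent:

```python
# Combining two uncertain facts, assuming they are independent:
# P(Ruth is grey) = P(grey | elephant) * P(Ruth is an elephant)
p_grey_given_elephant = 0.80  # elephants are grey 80% of the time
p_ruth_is_elephant = 0.90     # Ruth is probably (90%) an elephant

p_ruth_is_grey = p_grey_given_elephant * p_ruth_is_elephant
print(round(p_ruth_is_grey, 2))  # 0.72
```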

Think about this for a moment.  For just about every situation, there is one of three cases.  In the first case, there isn’t enough data to be sure of anything, so any opinion is a guess.  How popular a movie will be when almost nobody has seen it falls into this category.  Second, there is enough data that one possibility is likely, but not certain.  (The same movie, once it has opened to a small audience and opinions are mixed.)  And third, the evidence is overwhelming in one direction.  But in none of these cases will a super-smart computer (or human analyst) be able to do better on average than anyone else doing the same calculations with the same data.  Yet we tend to treat prognosticators with lucky guesses as extra smart.

Which leads us to the second thing you need to know about AI:  computers are almost never smarter than expert, motivated humans.  They may be able to sort through more facts more quickly, but humans are exceptionally good at recognizing patterns.  In my experience, a well-researched human opinion beats a computer every time.  In fact, I’ve never seen a computer do better than a motivated idiot.  What computers excel at is giving a vast quantity of mediocre opinions.  Think Google.  It’s impressive because it has a decent answer for nearly every query, not because it has the best answer for any query.  And it does as well as it does because it piggybacks on informal indexes compiled by humans.

And the third and final thing you need to know about AI is that every AI algorithm is, at one level, identical to every other.  Genetic algorithms and neural networks may seem completely different, but they fit into the same mathematical framework.  No AI algorithm is inherently better than any other; they all approach the same problem with a slightly different bias.  And that bias is what determines how good a fit it is for a particular problem.

Think about a mapping program, such as MapQuest.  You give it two addresses, and it tells you how to get from your house to the local drug store.  Internally, it has a graph of roads (edges, in graph terminology) and intersections (vertices).  Each section of road has a number attached to it– the maps I get from AAA print the same numbers– minutes of typical travel time.  MapQuest finds the route where the sum of those numbers– the total travel time– is minimized.  In AI terminology, the numbered graph is the search space, and every AI problem can be reduced to finding a minimal path in a search space.
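That search can be sketched in a few lines with Dijkstra’s algorithm.  The intersection names and travel times below are invented for illustration:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: find the route minimizing total travel time.
    graph maps each intersection (vertex) to a list of
    (neighbor, minutes) pairs (edges labeled with travel times)."""
    queue = [(0, start, [start])]  # (total minutes so far, vertex, route)
    visited = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

# A toy road map: intersections and travel times in minutes.
roads = {
    "home":    [("main_st", 3), ("back_rd", 7)],
    "main_st": [("drug_store", 5)],
    "back_rd": [("drug_store", 2)],
}
print(shortest_path(roads, "home", "drug_store"))
# (8, ['home', 'main_st', 'drug_store'])
```

MapQuest’s real graph has millions of edges, but the calculation is the same: sum the numbers along each route and keep the smallest.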

What makes AI interesting is that the search space is often much too large to be searched completely, so the goal is to find not the shortest path, but a path that is short enough.  Sometimes the path isn’t as important as finding a good destination– finding the closest drug store, for example.
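One common way to settle for “good enough” is plain hill climbing: keep moving to a better neighbor until no neighbor improves.  Here is a toy sketch; the scoring function and search space are invented for illustration:

```python
import random

def hill_climb(score, neighbors, start, max_steps=2000):
    """Settle for 'good enough': repeatedly move to the best-scoring
    neighbor, and stop when no neighbor improves on where we are."""
    current = start
    for _ in range(max_steps):
        candidate = min(neighbors(current), key=score)
        if score(candidate) >= score(current):
            break  # local optimum: good enough
        current = candidate
    return current

# Toy search space: the integers, with the cost lowest at 42.
score = lambda x: abs(x - 42)          # lower is better
neighbors = lambda x: [x - 1, x + 1]   # the edges of the search space
print(hill_climb(score, neighbors, start=random.randint(0, 1000)))  # 42
```

On this toy space every start leads to the single best destination; on a realistic search space, hill climbing stops at whatever local optimum it stumbles into, which is exactly the “short enough” trade-off.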

In the case of artificial life, each “creature” is a point in the search space.  Consider Evolve, the program I wrote about the other day.  In it, each creature is defined by a computer program.  The search space is the set of all possible programs in that programming language– an infinite set.  And any transformation from one program to another– another infinite set– is an edge.  By defining mutation and reproduction rules, only a limited number of edges are allowed to be traversed.
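Evolve mutates real programs, but the same loop can be sketched on a far simpler search space– bit strings.  Everything below (the target, the fitness function, the reproduction rule) is a stand-in for illustration:

```python
import random

TARGET = [1] * 20  # the "fittest" creature, for illustration only
fitness = lambda creature: sum(a == b for a, b in zip(creature, TARGET))

def mutate(creature):
    """Traverse one allowed edge in the search space: flip a single bit."""
    i = random.randrange(len(creature))
    child = list(creature)
    child[i] ^= 1
    return child

random.seed(0)  # fixed seed so the run is repeatable
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(10)]
for generation in range(200):
    # Reproduction rule: the fitter half survives; survivors breed mutants.
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

best = max(population, key=fitness)
print(fitness(best))  # climbs toward the maximum of 20
```

The mutation rule defines which edges exist, and the survival rule biases which ones get traversed– that bias, not anything mystical about “evolution,” is what steers the search.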

So, to summarize:  certain AI algorithms sound romantic, but they are all essentially the same.  And humans are smarter than computers.


2 thoughts on “Romance vs. Everything You Need to Know About Artificial Intelligence”

  1. One skill which humans have and (to my knowledge) we’ve never replicated in computers is extrapolating from incomplete data to general rules.

    For example, tracking the motion of points of light across the sky for thousands of years and coming up with gravity and the heliocentric model of the solar system. Granted, that took thousands of years of humanity, but it also required an insane leap of logic. Mere mortals perform lesser generalizations all the time–it seems to be part of human nature to want to compress data into rules, and we’re very good at it.

  2. Actually, computers are pretty good at it too. The problem is that the search space is huge, and most such “rules of thumb” (or theorems with limited support, in Association Rule Mining jargon) are worthless.

    There are two ways to make an abstract rule. One is to make sweeping generalizations, and test these hypotheses. Since most sweeping generalizations are wrong, it can take years for a computer to come up with a good one. And it will be something that humans think is obvious.

    The other is to take existing, easy-to-prove rules and then abstract them. For example, it’s not hard to go from a whole slew of “People who like Harry Potter and the Goblet of Fire also like Harry Potter and the Chamber of Secrets” to a single “People who like one book in the Harry Potter series tend to like other books in the same series.” From there, it’s not hard to get to “people who like one book in any series are more likely to like other books in the same series.”

    The problem is that hypothesis testing is expensive when you have quintillions of possible hypotheses. If you have the data to tell the computer what a series of books is (it’s not simply the same author and a similar title), you might as well just tell it that the books have a special relationship. Motivated humans discover these sorts of relations on their own, but for a computer to do the same calculation would involve millions of hours of CPU time, only to discover things that are common knowledge, but which nobody bothered to tell the computer.
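The abstraction step described in the comment above can be sketched mechanically: given specific item-level rules and a human-supplied mapping of books to series, lift each rule to the series level.  All the titles, rules, and the mapping below are invented for illustration:

```python
# Invented item-level rules mined from user data: (liked, also_liked).
item_rules = [
    ("Goblet of Fire", "Chamber of Secrets"),
    ("Chamber of Secrets", "Prisoner of Azkaban"),
    ("Fellowship of the Ring", "The Two Towers"),
]

# The external knowledge the computer must be told: which series each book is in.
series = {
    "Goblet of Fire": "Harry Potter",
    "Chamber of Secrets": "Harry Potter",
    "Prisoner of Azkaban": "Harry Potter",
    "Fellowship of the Ring": "Lord of the Rings",
    "The Two Towers": "Lord of the Rings",
}

# Lift each specific rule to the series level, keeping within-series ones.
abstract_rules = {
    (series[a], series[b]) for a, b in item_rules if series[a] == series[b]
}
print(abstract_rules)
```

The hard part is not this lifting step but obtaining the `series` mapping in the first place– which is the comment’s point about knowledge nobody bothered to tell the computer.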
