What is “Real” AI?

Clients ask me this all the time.  They want to know if a proposed new system has the real stuff, or if it’s snake oil.  It’s a tough question, because the answer is complicated.  Even if I dictate some challenge questions, their discussion with the sales rep is likely to be inconclusive.

The bottom line is that we want to use historical data to make predictions.  Here are some things we might want to predict:

  • Is this customer going to buy a car today? (Yes/No)
  • Which protection product is he going to buy? (Choice)
  • What will be my loss ratio? (Number)

In Predictive Selling for F&I, I discussed some ways to predict product sales.  The classic example is to look at LTV and predict whether the customer will want GAP.  High LTV, more likely.  Low LTV, less likely.  With historical data and a little math, you can write a formula to determine the GAP-sale probability.
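
To make that concrete, here is a toy sketch of what such a formula could look like: a one-variable logistic model that turns LTV into a GAP-sale probability.  The intercept and slope are invented for illustration, not fitted to any real data:

    import math

    def gap_probability(ltv, intercept=-6.0, slope=0.05):
        # One-variable logistic model: probability of a GAP sale given LTV (%).
        # The intercept and slope here are illustrative, not fitted coefficients.
        return 1.0 / (1.0 + math.exp(-(intercept + slope * ltv)))

    print(round(gap_probability(90), 2))   # lower LTV, lower probability (about 0.18)
    print(round(gap_probability(140), 2))  # higher LTV, higher probability (about 0.73)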

What is predictive analytics?

If you’re using statistics and one variable, that’s not AI, but it is a handy predictive model just the same.  What if you’re using a bunch of variables, as with linear regression?  Regression is powerful, but it is still an analytical method.

The technical meaning of analytical is that you can solve the problem directly using math, instead of another approach like iteration or heuristics.  Back when I was designing “payment rollback” for MenuVantage, I proved it was possible to algebraically reverse our payment formulas – possible, but not practical.  It made more sense to run the calculations forward, and use iteration to solve the problem.
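
To give the flavor of forward-plus-iteration, here is a sketch that uses the standard amortization formula as a stand-in for the real payment engine (the MenuVantage formulas themselves are not shown here).  Instead of inverting the formula algebraically, it bisects on the amount financed until the forward calculation lands on the target payment:

    def payment(principal, annual_rate, months):
        # Standard amortization payment formula (a stand-in, not the production engine).
        r = annual_rate / 12.0
        return principal * r / (1.0 - (1.0 + r) ** -months)

    def rollback_principal(target_payment, annual_rate, months, lo=0.0, hi=200_000.0):
        # Run the calculation forward and iterate: bisect on principal until
        # the computed payment matches the target.
        for _ in range(60):
            mid = (lo + hi) / 2.0
            if payment(mid, annual_rate, months) < target_payment:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    print(round(rollback_principal(450.00, 0.06, 60), 2))  # amount financed that yields a $450/month payment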

You can do simple linear regression on a calculator.  In fact, they made us do this in business school.  If you don’t believe me – HP prints the formulas on the back of their HP-12C calculator.  So, while you can make a damned good predictive model using linear regression, it’s still not AI.  It’s predictive analytics.
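
For the record, here is the whole analytic solution in a few lines: the slope is the covariance of X and Y divided by the variance of X, and the intercept falls out of the means.  No iteration, no training loop:

    def simple_regression(xs, ys):
        # Closed-form least squares for one variable: solve directly, no iteration.
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var = sum((x - mean_x) ** 2 for x in xs)
        slope = cov / var
        return slope, mean_y - slope * mean_x

    print(simple_regression([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1]))  # roughly slope 2, intercept 0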

By the way, “analytics” is a singular noun, like “physics.”  No one ever says “physics are fun.”  Take that, spellcheck!

What is machine learning?

The distinctive feature of AI is that the system generates a predictive model that is not reachable through analysis.  It will trundle through your historical data using iteration to determine, say, the factor weights in a neural network, or the split values in a decision tree.

“Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed.” – Arthur Samuel

The model improves with exposure to more data (and tuning), hence the name Machine Learning.  This is very powerful, and it will serve as our working definition of “real” AI.
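
To make “learning by iteration” concrete, here is a toy sketch of how a learner might find the best split value for a one-variable decision stump: try every candidate threshold against the historical outcomes and keep the one that sorts them most cleanly.  The data is made up for illustration:

    def best_split(values, labels):
        # Pure iteration: test each candidate threshold, keep the most accurate one.
        best_t, best_correct = None, -1
        for t in sorted(set(values)):
            # Predict "buys" whenever the value is at or above the threshold.
            correct = sum((v >= t) == y for v, y in zip(values, labels))
            if correct > best_correct:
                best_t, best_correct = t, correct
        return best_t

    # Toy history: LTV values and whether the customer bought GAP.
    ltvs = [80, 95, 110, 120, 135, 150]
    bought = [False, False, False, True, True, True]
    print(best_split(ltvs, bought))  # 120 on this made-up data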

AI is an umbrella term that includes Machine Learning but also algorithms, like expert systems, that don’t learn from experience.  Analytics includes statistical methods that may make good predictions, but these also do not learn.  There is nothing wrong with these techniques.

Here are some challenge questions:

  • What does your model predict?
  • What variables does it use?
  • What is the predictive model?
  • How accurate is it?

A funny thing I learned reading forums like KDnuggets is that kids today learn neural nets first, and then learn about linear regression as the special case that can be solved analytically.

What is a neural network?

Yes, the theory is based on how neurons behave in the brain.  Image recognition, in particular, owes a lot to the ventral pathway of the visual cortex.  Researchers take this very seriously, and continue to draw inspiration from the brain.  So, this is great if your client happens to be a neuroscientist.

My client is more likely to be a technology leader, so I will explain neural nets by analogy with linear regression.  Linear regression takes a bunch of “X” variables and establishes a linear relationship among them, to predict the value of a single dependent “Y” variable.  Schematically, that looks like this:
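
    X1, X2, ..., Xn  --(weights w1..wn)-->  Y = w0 + w1*X1 + w2*X2 + ... + wn*Xn

Each input gets its own weight, and the prediction is simply the weighted sum plus an intercept.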

Now suppose that instead of one linear equation, you use regression to predict eight intermediate “Z” variables, and then feed those into another linear model that predicts the original “Y.” Every link in the network has a factor weight, just as in linear regression.

Apart from some finer points (like nonlinear activation functions), you can think of a neural net as a stack of interlaced regression models.
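
In code, the analogy is only a few lines.  This is a bare-bones sketch with made-up weights, just to show the shape of the thing: each of the eight “Z” values is its own little linear model over the X’s, passed through a simple nonlinear activation, and “Y” is one more linear model over the Z’s:

    import random

    random.seed(0)

    def linear(inputs, weights, bias):
        # One regression-style unit: a weighted sum of the inputs plus an intercept.
        return bias + sum(w * x for w, x in zip(weights, inputs))

    def forward(xs, hidden_layer, output_weights, output_bias):
        # Layer 1: eight "Z" values, each clipped at zero (the nonlinear activation).
        zs = [max(0.0, linear(xs, w, b)) for w, b in hidden_layer]
        # Layer 2: one more linear model over the Z's produces the prediction "Y".
        return linear(zs, output_weights, output_bias)

    # Made-up weights, only to show the shape: three X's in, eight Z's, one Y out.
    hidden = [([random.uniform(-1, 1) for _ in range(3)], 0.0) for _ in range(8)]
    out_w = [random.uniform(-1, 1) for _ in range(8)]
    print(forward([1.0, 2.0, 3.0], hidden, out_w, 0.0))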

You may recall that linear regression works by using partial derivatives to find the minimum of an error function parametrized by the regression coefficients.  Well, that’s exactly what the neural network training process does!
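
Here is that machinery in miniature, using nothing fancier than the toy data from earlier: nudge the two coefficients downhill along the partial derivatives of the squared error, over and over, instead of solving for them directly.  Swap in thousands of weights and you have, in spirit, a neural network trainer:

    def fit_by_gradient_descent(xs, ys, lr=0.01, steps=5000):
        # Follow the partial derivatives of the squared error with respect to
        # the slope w and intercept b, taking a small step downhill each time.
        w, b = 0.0, 0.0
        n = len(xs)
        for _ in range(steps):
            grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
            grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
            w -= lr * grad_w
            b -= lr * grad_b
        return w, b

    print(fit_by_gradient_descent([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1]))  # roughly slope 2, intercept 0, matching the analytic fit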

What is deep learning?

This brings us to one final buzzword, Deep Learning.  The more layers in the stack, the smarter the neural net.  There’s no danger of overdoing it, because the model will learn to skip redundant layers.  The popular image recognition model ResNet-152 has – you guessed it – 152 layers.

So, it’s deep.  It also sounds cool, as if the model is learning “deeply,” which, technically, I suppose it is.  This is not relevant for our purposes, so ignore it unless it affects accuracy.
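
For the curious, the “skip” is literally an addition.  Here is a toy sketch of the idea (not the real ResNet code): each block’s output is added to its input, so a redundant block can learn to contribute roughly nothing and let the input pass through unchanged:

    def residual_block(x, layer):
        # Skip connection: the block's output is layer(x) plus x itself.
        # If the layer learns to output near-zero, the input passes through untouched.
        return [a + b for a, b in zip(layer(x), x)]

    # A "do nothing" layer: the block's output is simply the input, unchanged.
    print(residual_block([1.0, 2.0, 3.0], lambda v: [0.0 for _ in v]))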

Author: Mark Virag

Management consultant specializing in software solutions for the auto finance industry.
