Auto Auctions Disintermediated

Carvana acquired the Adesa auto auction last week.  Discussion on Twitter said this was not fair play, cutting into the supply line, and that dealers should take their business elsewhere.  I replied that there is already a movement to “disintermediate” the auctions, and that they will ultimately go the way of the stick shift.

Auctions are to wholesale what the test drive was to retail. 

If you think about it, the whole auction paradigm is inefficient, like the dark days of assembly plant inventory before JIT was invented.  It means that one dealer took my used X5 in trade, couldn’t retail it, and sold it at auction – where it was purchased by another dealer, and finally retailed to a new owner.

Think of the friction – the time lags, the transport, the fees.  It’s just insane.  The only reason I didn’t sell the car myself is that it’s a lot of bother, but I can easily sell person to person (P2P) through platforms like Shift and Tred.  I can also sell direct to a used-car specialist like CarMax or, yes, Carvana.

This diagram shows three ways to skip the auction:

Figures from NAAA show that auction volume has declined every year since 2016.  I understand they provide other services but, look – Carvana already ingests inventory at scale using its own facilities.  They handle two million vehicles a year, and Adesa will bring them to three.

The wholesale market will be conducted dealer-to-dealer, without physical auctions, on digital marketplaces like CDK CarSource and Cox Upside.  The only wholesale inventory will be in transit or recon, because the digital listing can flip instantly to a retail offer.

The CarOffer case is instructive.  Bruce Thompson developed CarOffer as a dealer-to-dealer marketplace, skipping the auction.  CarGurus then bought the platform and converted it to a consumer site, skipping the selling dealer.

Auctions are to wholesale what the test drive was to retail.  Just as consumers are learning to buy cars online, so will dealers.  In fact, dealers should pick it up faster because they’re experts.

Lenders at Top of Funnel

Chase Auto recently rolled out a digital platform for car shopping … and financing.  I like it.  The link is here.  It seems that everyone today has a vehicle search page.  The original cast, Autotrader and Cars.com, along with about a dozen TPC competitors, is now joined by OEM sites, public dealer groups, and marketplaces from Roadster and Carvana.

“More vehicle shoppers than ever have started to look for vehicle financing before ever setting foot in a dealership.”

Competition hinges on which information the customer will seek first.  In an era of reduced purchasing power, many customers will want to “secure financing before going to the dealer.”  That’s the prompt on the Chase website.  There’s a prequal button right there between the Lariat and the XLT.

Don’t take my word for it, though.  This J.D. Power study found that nearly half of all customers shop for financing before visiting a dealer – 62% among Gen Z – and they start more than 30 days out.

This is probably a negative development for captives, and indirect finance in general.  Banks have a lower cost of capital and better rates.  Chase, as you know, is also popular as an indirect lender.  They say there’s no conflict with their dealer channel, but what if they had to choose?

The reach hierarchy, by customer base, is:

  • Banks – tens of millions of customers (ongoing)
  • Car Makers – millions of cars per year
  • Dealer Groups – hundreds of thousands

Banks have more customers, by an order of magnitude, than even the largest car makers.  Ten years’ worth of loyal Toyota drivers doesn’t approach Bank of America’s 66 million customers.  The same ranking goes for website reach, with the banks getting 120 to 190 million visits per month, while Carvana, Ford, and Autotrader each get twenty-something million.

Capital One, by the way, also has a shopping platform.  Ally has a dealer locator.  Bank of America has a redirect to Dealertrack.  Capital One is pretty shrewd about encouraging buyers to bring the app with them into the dealership, so they can update the deal as needed.  A mobile-first responsive site is good, but an app is better.  Bank customers will carry their bank’s app.

Captives have the home field advantage once the customer is in the dealership and, likewise, their position online is downstream from the OEM brand.  Captives are advised to be front and center on their manufacturer’s website.

Dealer groups, like AutoNation, must rely on their own brand to draw customer attention.  In terms of unit sales, even the largest dealer groups fall below tenth-ranked Subaru.  Note that Lithia chose to develop a new brand, Driveway, for their online business.

Of course, none of these is a direct measure of financing intent.  Only a fraction of online banking traffic is looking for an auto loan.  The point is that they’re looking for the loan first, and then the car.

Claims Prediction with BQML

Did you know you could develop a machine learning model using SQL?  Google’s cloud data warehouse, BigQuery, includes SQL support for machine learning with extensions like CREATE MODEL – by analogy with the SQL DDL statement CREATE TABLE.
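
For a sense of the syntax, here is a minimal sketch; the dataset, table, and column names are placeholders:

-- Minimal sketch of the CREATE MODEL syntax; all names here are placeholders.
CREATE OR REPLACE MODEL `my_dataset.my_first_model`
OPTIONS (
    MODEL_TYPE = 'LINEAR_REG',      -- a simple regression on a numeric label
    INPUT_LABEL_COLS = ['label']    -- the column to predict
) AS
SELECT feature_1, feature_2, label
FROM `my_dataset.training_table`;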

If you’re like me, you’re probably thinking, “why on Earth would I ever use SQL for machine learning?”  Google’s argument is that a lot of data people are handy with SQL, not so much with Python, and the data is already sitting in a SQL-based warehouse.

BigQuery ML features all the popular model types, from classifiers to matrix factorization, including an automated model picker called AutoML.  There’s also the advantage of cloud ML in general, which is that you don’t have to build a special rig (I built two) for GPU support.

In this article, I am going to work a simple insurance problem using BQML.  My plan is to provide an overview that will engage both the Python people and the SQL people, so that both camps will get better results from their data warehouse.

  1. Ingest data via Google Cloud Storage
  2. Transformation and modeling in BigQuery
  3. Access the results from a Vertex AI notebook

By the way, I have placed much of the code in a public repo.  I love grabbing up code samples from Analytics Vidhya and Towards Data Science, so this is my way of giving back.

Case Study: French Motor Third-Party Liability Claims

We’re going to use the French car insurance data from Wüthrich et al. (2020).  They focus on minimizing the loss function (regression loss, not insurance loss) and show that decision trees outperform linear models because they capture interaction among the variables.

There are a few ways to handle this problem.  While Wüthrich treats it as a straightforward regression problem, Lorentzen et al. use a composition of two linear models, one for claim frequency and a second for claim severity.  As we shall see, this approach follows the structure of the data.

Lorentzen et al. focus on the Gini index as a measure of fitness.  This is supported by Frees, and also by the Allstate challenge, although it does reduce the problem to a ranking exercise.  We are going to follow the example of Dal Pozzolo and train a classifier to deal with the imbalance issue.

Ingesting BigQuery Data via Google Cloud Storage

First, create a bucket in GCS and upload the two CSV files.  They’re mirrored in various places, like here.  Next, in BigQuery, create a dataset with two tables, Frequency and Severity.  Finally, execute this BQ LOAD script from the Cloud Shell:

bq load \
--source_format=CSV \
--autodetect \
--skip_leading_rows=1 \
french-cars:french_mtpl.Frequency \
gs://french_mtpl2/freMTPL2freq.csv

The last two lines specify the destination table and the GCS bucket/file, respectively.  Autodetect works fine for the data types, although I’d rather have NUMERIC for Exposure.  I have included JSON schemas in the repo.

It’s the most natural thing in the world to specify data types in JSON, storing this schema in the bucket with the data, but BQ LOAD won’t use it!  To utilize the schema file, you must create and load the table manually in the browser console.

Wüthrich specifies a number of clip levels, and Lorentzen implements them in Python.  I used SQL.  This is where we feel good about working in a data warehouse.  We have to JOIN the Severity data and aggregate multiple claims per policy with GROUP BY, and SQL is the right tool for the job.

BEGIN

-- Default dataset for the unqualified table names below.
SET @@dataset_id = 'french_mtpl';

-- Combine frequency and severity, aggregating multiple claims per policy.
DROP TABLE IF EXISTS Combined;
CREATE TABLE Combined AS
SELECT
  F.IDpol, ClaimNb, Exposure, Area, VehPower, VehAge, DrivAge,
  BonusMalus, VehBrand, VehGas, Density, Region, ClaimAmount
FROM
    Frequency AS F
LEFT JOIN (
  SELECT
    IDpol,
    SUM(ClaimAmount) AS ClaimAmount
  FROM
    Severity
  GROUP BY
    IDpol) AS S
ON
  F.IDpol = S.IDpol
ORDER BY
  IDpol;

-- Zero out claim counts with no matching severity record, then fill missing amounts.
UPDATE Combined
SET ClaimNb = 0
WHERE (ClaimAmount IS NULL AND ClaimNb >= 1);

UPDATE Combined
SET ClaimAmount = 0
WHERE (ClaimAmount IS NULL);

-- Clip extreme values of claim count, exposure, and claim amount.
UPDATE Combined
SET ClaimNb = 1
WHERE ClaimNb > 4;

UPDATE Combined
SET Exposure = 1
WHERE Exposure > 1;

UPDATE Combined
SET ClaimAmount = 200000
WHERE ClaimAmount > 200000;

-- Loss per unit of exposure, stored as Premium.
ALTER TABLE Combined
ADD COLUMN Premium NUMERIC;

UPDATE Combined
SET Premium = ClaimAmount / Exposure
WHERE TRUE;

END;

Training a Machine Learning Model with BigQuery

Like most insurance data, the French MTPL dataset is ridiculously imbalanced.  Of 678,000 policies, fewer than 4% (25,000) have claims.  This means that you can be fooled into thinking your model is 96% accurate, when it’s just predicting “no claim” every time.

We are going to deal with the imbalance by:

  • Looking at a “balanced accuracy” metric
  • Using a probability threshold
  • Using class weights

Normally, with binary classification, the model will produce probabilities P and (1-P) for positive and negative.  In Scikit, predict_proba gives the probabilities, while predict gives only the class labels – assuming a 0.50 threshold.

Since the Allstate challenge, Dal Pozzolo and others have dealt with imbalance by using a threshold other than 0.50 – “raising the bar,” so to speak, for negative cases.  Seeking the right threshold can be a pain, but BigQuery supplies a handy slider.

Sliding the threshold moves your false-positive rate up and down the ROC curve, automatically updating the accuracy metrics.  Unfortunately, one of these is not balanced accuracy.  You’ll have to work that out on your own.  Aim for a model with a good, concave ROC curve, giving you room to optimize.
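
Balanced accuracy is just the average of the per-class recall, so it is not hard to compute from your own predictions.  A sketch, assuming the scored results sit in a hypothetical table holding the actual label and the claim probability:

-- Balanced accuracy = average recall over the two classes.
-- The table name "scored" and its columns (actual, prob) are hypothetical.
SELECT
  0.5 * ( SUM(IF(actual = 'Claim'   AND prob >= 0.5, 1, 0)) / SUM(IF(actual = 'Claim',   1, 0))
        + SUM(IF(actual = 'NoClaim' AND prob <  0.5, 1, 0)) / SUM(IF(actual = 'NoClaim', 1, 0)) )
    AS balanced_accuracy
FROM `french-cars.french_mtpl.scored`;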

The best way to deal with imbalanced data is to oversample the minority class.  In Scikit, we might use random oversampling, or maybe synthetic minority oversampling.  BQML doesn’t support oversampling, but we can get the same effect using class weights.  Here’s the script:

CREATE OR REPLACE MODEL `french-cars.french_mtpl.classifier1`
    TRANSFORM (
        ML.QUANTILE_BUCKETIZE(VehAge, 10) OVER() AS VehAge,
        ML.QUANTILE_BUCKETIZE(DrivAge, 10) OVER() AS DrivAge,
        CAST (VehPower AS string) AS VehPower,
        ML.STANDARD_SCALER(Log(Density)) OVER() AS Density,
        Exposure,
        Area,
        BonusMalus,
        VehBrand,
        VehGas,
        Region,
        ClaimClass
    )
OPTIONS (
    INPUT_LABEL_COLS = ['ClaimClass'], 
    MODEL_TYPE = 'BOOSTED_TREE_CLASSIFIER',
    NUM_PARALLEL_TREE = 200,
    MAX_TREE_DEPTH = 4,
    TREE_METHOD = 'HIST',
    MAX_ITERATIONS = 20,
    DATA_SPLIT_METHOD = 'Random',
    DATA_SPLIT_EVAL_FRACTION = 0.10,
    CLASS_WEIGHTS = [STRUCT('NoClaim', 0.05), ('Claim', 0.95)]
    )  
AS SELECT
  Area,
  VehPower,
  VehAge,
  DrivAge,
  BonusMalus,
  VehBrand,
  VehGas,
  Density,
  Exposure,
  Region, 
  ClaimClass
FROM `french-cars.french_mtpl.Frequency`
WHERE Split = 'TRAIN'

I do some bucketizing, and CAST Vehicle Power to string, just to make the decision tree behave better.  Wüthrich showed that it only takes a few levels to capture the interaction effects.  This particular classifier achieves 0.63 balanced accuracy.  Navigate to the model’s “Evaluation” tab to see the metrics.
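
You can also compute evaluation metrics in SQL with ML.EVALUATE, for example against the held-out TEST rows; the optional STRUCT sets the probability threshold:

-- Evaluate the classifier in SQL; returns precision, recall, accuracy, f1_score, log_loss, and roc_auc.
SELECT *
FROM ML.EVALUATE (
    MODEL `french-cars.french_mtpl.classifier1`,
    (
    SELECT
      Area, VehPower, VehAge, DrivAge, BonusMalus, VehBrand,
      VehGas, Density, Exposure, Region, ClaimClass
    FROM
      `french-cars.french_mtpl.Frequency`
    WHERE Split = 'TEST'),
    STRUCT(0.5 AS threshold));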

The OPTIONS are pretty standard.  This is XGBoost behind the scenes.  Like me, you may have used the XGB library in Python with its native API or the Scikit API.  Note how the class weights STRUCT offsets the higher frequency of the “no claim” case.

I can’t decide if I prefer to split the test set into a separate table, or just segregate it using WHERE on the Split column.  Code for both is in the repo.  BQML definitely prefers the Split column.
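
Note that the training script assumes the Frequency table already carries the ClaimClass label and the Split column; neither is created by the transformation script above.  If you go the Split-column route, here is a minimal sketch of that prep, where the labeling rule and the 90/10 fraction are my assumptions:

-- Assumed prep: derive a categorical label and a train/test flag on the Frequency table.
ALTER TABLE `french-cars.french_mtpl.Frequency` ADD COLUMN ClaimClass STRING;
ALTER TABLE `french-cars.french_mtpl.Frequency` ADD COLUMN Split STRING;

UPDATE `french-cars.french_mtpl.Frequency`
SET ClaimClass = IF(ClaimNb > 0, 'Claim', 'NoClaim')
WHERE TRUE;

UPDATE `french-cars.french_mtpl.Frequency`
SET Split = IF(RAND() < 0.9, 'TRAIN', 'TEST')
WHERE TRUE;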

There are two ways to invoke AutoML.  One is to choose AutoML as the model type in the SQL script, and the other is to go through the Vertex AI browser console.  In the latter case, you will want a Split column.  Running AutoML on tabular data costs $22 per server-hour, as of this writing.  The cost of regular BQML and data storage is insignificant.  Oddly, AutoML is cheaper for image data.
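
For the SQL route, the change is mostly the MODEL_TYPE; here is a hedged sketch, with the training budget and the model name as my own choices:

-- Sketch: the same problem handed to AutoML from inside BQML; budget and model name are arbitrary.
CREATE OR REPLACE MODEL `french-cars.french_mtpl.automl_classifier`
OPTIONS (
    MODEL_TYPE = 'AUTOML_CLASSIFIER',
    INPUT_LABEL_COLS = ['ClaimClass'],
    BUDGET_HOURS = 1.0
) AS
SELECT
  Area, VehPower, VehAge, DrivAge, BonusMalus, VehBrand,
  VehGas, Density, Exposure, Region, ClaimClass
FROM `french-cars.french_mtpl.Frequency`
WHERE Split = 'TRAIN';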

Don’t forget to include the label column in the SELECT list!  This always trips me up, because I am accustomed to thinking of the label column as “special.”  However, this is still SQL, and everything must be in the SELECT list.

Making Predictions with BigQuery ML

Now, we are ready to make predictions with our new model.  Here’s the code:

SELECT
    IDpol,
    predicted_ClaimClass_probs
FROM 
    ML.PREDICT (
    MODEL `french-cars.french_mtpl.classifier1`,
    (
    SELECT
      IDpol,
      BonusMalus,
      Area,
      VehPower,
      VehAge,
      DrivAge,
      Exposure,
      VehBrand,
      VehGas,
      Density,
      Region
    FROM
      `french-cars.french_mtpl.Frequency`
    WHERE Split = 'TEST'))

The model is treated like a FROM table, with its source data in a subquery.  Note that we trained on Split = ‘TRAIN’ and now we are using TEST.  The model returns multiple rows for each policy, giving the probability for each class:

This is a little awkward to work with.  Since we only want the claims probability, we must UNNEST it from its data structure and select the prob where the label is “Claim.”  Support for nested and repeated data, i.e., denormalization, is typical of data warehouse systems like BigQuery.

-- Here, pred is the ML.PREDICT result above, saved as a table or wrapped in a WITH clause.
SELECT IDpol, probs.prob
FROM pred,
UNNEST (predicted_ClaimClass_probs) AS probs
WHERE probs.label = "Claim"

Now that we know how to use the model, we can store the results in a new table, JOIN or UPDATE an existing table, etc.  All we need for the ranking exercise is the probs and the actual Claim Amount.
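
One way to finish the job is to land the claim probabilities next to the actual claim amounts in a results table, which is what the notebook in the next section reads.  Here is a sketch; the Combined_Results name matches the notebook query below, while the layout and JOIN are my assumptions:

-- Sketch: store the claim probability and the actual claim amount for the ranking exercise.
CREATE OR REPLACE TABLE `french-cars.french_mtpl.Combined_Results` AS
WITH pred AS (
  SELECT
    IDpol,
    probs.prob AS ClaimProb
  FROM ML.PREDICT (
         MODEL `french-cars.french_mtpl.classifier1`,
         (
         SELECT
           IDpol, Area, VehPower, VehAge, DrivAge, BonusMalus,
           VehBrand, VehGas, Density, Exposure, Region
         FROM
           `french-cars.french_mtpl.Frequency`
         WHERE Split = 'TEST')),
       UNNEST (predicted_ClaimClass_probs) AS probs
  WHERE probs.label = 'Claim'
)
SELECT
  P.IDpol,
  P.ClaimProb,
  C.ClaimAmount
FROM pred AS P
JOIN `french-cars.french_mtpl.Combined` AS C
  ON C.IDpol = P.IDpol;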

Working with BigQuery Tables in Vertex AI

Finally, we have a task that requires Python.  We want to measure, using a Gini index, how well our model ranks claims risk.  For this, we navigate to Vertex AI, and open a Jupyter notebook.  This is the same as any other notebook, like Google Colab, except that it integrates with BigQuery.

from google.cloud import bigquery

# Query BigQuery and pull the result set into a Pandas dataframe.
client = bigquery.Client(location="US")
sql = """SELECT * FROM `french_mtpl.Combined_Results` """
df = client.query(sql).to_dataframe()

The Client class allows you to run SQL against BigQuery and write the results to a Pandas dataframe.  The notebook is already associated with your GCP project, so you only have to specify the dataset.  There is also a Jupyter magic cell command, %%bigquery.

Honestly, I think the hardest thing about Google Cloud Platform is just learning your way around the console.  Like, where is the “New Notebook” button?  Vertex used to be called “AI Platform,” and notebooks are under “Workbench.”

I coded my own Gini routine for the Allstate challenge, but the one from Lorentzen is better, so here it is.  Also, if you’re familiar with that contest, Allstate made us plot it upside down.  Corrado Gini would be displeased.

The actual claims, correctly sorted, are shown by the dotted line on the chart – a lot of zero, and then 2,500 claims.  Claims, as sorted by the model, are shown by the blue line.  The model does a respectable 0.30 Gini and 0.62 balanced accuracy.

Confusion Table:
       Pred_1 Pred_0 Total Pct. Correct
True_1   1731    771  2502     0.691847
True_0  29299  36420 65719     0.554178
Accuracy: 0.5592
Balanced Accuracy: 0.6230

Now that we have a good classifier, the next step would be to combine it with a severity model.  The classifier can predict which policies will have claims – or the probability of such – and the regressor can predict the amount.  Since this is already a long article, I am going to leave the second model as an exercise.

We have seen how to make a simple machine learning model using BigQuery ML, starting from a CSV file in Google Cloud Storage and proceeding, through SQL and Python, to a notebook in Vertex AI.  We also discussed AutoML, and there’s a bunch of sample code in the repo.

Paying Bills for American Motors

My first Big Six consulting engagement, right out of MBA school, was solving a catastrophic failure in the Accounts Payable system of American Motors Corp.  You may recall AMC; they produced the Gremlin and the original Jeep.  This was right around the time of their acquisition by Chrysler, a sensitive time for the company.  The building still wore the red, white, and blue AMC logo, but the Chrysler employee newspaper was on the stand in the cafeteria.

It was on me to figure out what in hell had caused this popular and bulletproof software to fail. 

They were also just about to launch two new assembly plants in Canada, at Brampton and Bramalea, Ontario.  The launch, and maybe even the acquisition, was jeopardized because AMC had suddenly lost the ability to pay its suppliers’ invoices.  They had devolved to a purely manual process, paying months late, and their suppliers were threatening to cut them off.  Without a functioning A/P system, there would not be many parts shipping to the new plants in Canada.

The classical A/P function revolves around the “three-way match.”  Starting with the invoice, you must locate the purchase order for the goods and the slip from the receiving department showing that the correct goods had arrived.  As you can imagine, a giant manufacturing company cannot possibly perform this task on paper.  American Motors had been running the McCormack & Dodge suite of accounting software, and that was the proximate cause of the failure.  My assignment was to diagnose and fix the failure.

The Director of the A/P department had collected all of the invoices, receivers, and purchase orders into file boxes on tables in a huge room.  This had been a big conference room, maybe, or a gymnasium, and he had hired a platoon of “account temps” to run around the room looking for three-way matches.  Once someone found a match, they would run down the hall to the cashier and authorize payment of the invoice.  It was like a demented Chuck Barris TV game show.

The mad rush to pay months-old invoices was destroying any organization that might once have existed.

For me, as a programmer, this provided a stunning visualization of what this work must have looked like in the dark days before computers.  Of course, in those days, they would have been prepared for it.  Here, the mad rush to pay months-old invoices was destroying any organization that might once have existed in the file boxes.  The A/P director’s job was on the line and, over the weeks of my engagement, he aged ten years.  This poor devil was my client.  I could see the dark circles and the grey hair progressing as I greeted him each morning.

I should note that a new consultant doesn’t get a big job like this on his own but, as “senior schmuck onsite,” I was running the engagement.  Occasionally, higher-ranking consultants would show up to offer an opinion, not do any actual work, and bill four hours to the job.  Also, as the only one with any computer skills, it was on me to figure out what in hell had caused this popular and bulletproof software to fail.

Our method had two prongs of attack.  First, we brought in several junior, not yet CPA, staffers from our audit practice, and put them to work matching invoices.  This was basically the same process as in the gym downstairs, only our people were going to be smarter and look for patterns that might provide some clues.  Plus, we could bill for them at 100% realization.

Meanwhile, I would learn everything I could about the failing A/P system and its friends, the Purchasing system and the General Ledger system.  I read all three APRMs (Application Programmer’s Reference Manual, pronounced “A-Parm”) from cover to cover.  I read all the Job Control Language, the job streams, and much of the COBOL source code.

The only people dumber than the A/P department are these consultants!

I also got invited to defend our work at an executive meeting on the top floor of the AMC building, where I met the Vice President of Purchasing.  This was a big bull of a man, obviously some kind of ex-jock with a lot of red meat in his diet.  He pounded my guy mercilessly, and the preliminary stats from our auditors were no defense.  “The only people dumber than the A/P department,” he roared, “are the consultants hired by the A/P department!”

Eventually, I traced the failure to one specific job running one specific program, P1X030, the “matching module” itself.  All data flowing into, out of, or around this module were good, except that something like 90% of invoices went unmatched.  I called my manager up from Detroit and we went over the results.

I enjoyed working with Ken.  Back in those days, computer skills were considered déclassé.  I was the only consultant who could write a lick of code, and Ken was our only “technical” manager.  Eventually, the firm would get rid of Ken, and then me, in favor of a more golf-oriented practice.

“What about the exception report?” Ken asked.  “Is it dummied out?”  I checked the JCL.  Systems programmers would often streamline an implementation by suppressing some of its printouts but, no, P1X030 was faithfully printing a list of its reasons for rejecting 90% of the invoices.  “Let’s go for a walk,” Ken said.

We walked about half a mile, the length of the big mainframe computer facility.  There, lying on the output table, was P1X030’s exception report.  Ken rapped on the window of the control room and spoke with the operator.  The report spooled off his printer every night, and then lay unclaimed on the table.  The operator had been collecting the old reports, and he was relieved to be rid of them.  This was line-printer paper, boxes of it.  I waited while Ken went to find a hand truck.

The problem, printed mechanically line after line, was that the Purchasing department had been neglecting the important task of generating proper purchase orders.  They had been ordering the suppliers, probably in the same tones I had heard in the boardroom, simply to ship now and worry about the numbers later.

Purchasing had evidently instructed the suppliers to invent random P.O. numbers.  Our auditors had found a few clinkers, like 12345678 and 00000000, but mostly we had no clue.  If anyone had thought to ask a supplier, they would have been afraid to admit it and, anyway, it would have been the Purchasing department doing the asking.

I wrote up our findings and Ken presented them to AMC management.  He wheeled his hand truck into the boardroom and, for dramatic effect, read off the first few variants of “missing or invalid purchase order number.”  We included a report from P1X030, tabulating the various ways in which its safety features had been defeated.

There was no system failure for me to fix, so that concluded our engagement.  As to the failure we did find, management seemed eager to fix that one on their own.