Weighted Factors for Product Selection

Every so often, I will write up a standard quantitative procedure, usually because someone has asked me about it.  For instance, see Pay Plan Math, What Is Accuracy, and Know Your Time Series.  Today, it’s weighted-factor analysis for product selection.  At a high level, this procedure is:

  1. Gather your requirements and selection criteria
  2. Quantify how important each criterion is
  3. Grade the vendor responses
  4. Compute numerical scores

Gather Requirements and Criteria

First, through interviews and maybe some direct observation, discover why we need the product.  In my business, this is generally a software product, but it could be anything.  Next, determine the requirements and the selection criteria.

Selection criteria are the features we will evaluate to decide which product is the best fit, whereas requirements are features the product must have to even be considered.  Don’t make the mistake of thinking requirements are just extra-special criteria.

If you’re looking to buy a car, and gas mileage is on the list, then a hybrid will score well on that criterion.  If you’re only looking to buy a hybrid, then that’s the category, and you’re not looking at gas cars at all.

The purpose of requirements is to define the category of product we’re looking for. If you’re writing an RFP, the criteria are what the vendors respond to, and the requirements determine which vendors get the RFP.  When in doubt, send them the RFP anyway and let the vendor figure it out.

For example, if I am selecting cybersecurity software, I might want endpoint protection (EPP), endpoint detection and response (EDR), managed detection and response (MDR), or even a security operations center (SOC).  These all address the same problem, but they’re not the same product.
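
To make the distinction concrete, here is a minimal sketch in Python, with hypothetical product names and a made-up is_hybrid field: a requirement filters products out of consideration entirely, while a criterion only affects the score of whatever survives the filter.

    # Minimal sketch (hypothetical data): requirements filter products out
    # of consideration; criteria only affect the scores of the survivors.
    candidates = [
        {"name": "Product A", "is_hybrid": True},
        {"name": "Product B", "is_hybrid": False},
    ]

    # A requirement defines the category: fail it and you are out entirely.
    shortlist = [c for c in candidates if c["is_hybrid"]]

    # Only shortlisted products get graded against the weighted criteria.
    for product in shortlist:
        print(product["name"], "advances to criteria grading")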

Quantify Importance of the Criteria

In the chart, I show each criterion’s importance rated on a scale of 1 to 5, which is typical. Then, for the sake of example, I norm these ratings so the weights total 100.  This is probably overkill, but it’s fun to have 100 as a baseline.  Later, we’ll do the same with the final score.  Clients love simple numbers.
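
As a sketch of the norming arithmetic, assuming a few hypothetical criteria and made-up 1-to-5 importance ratings, the weight for each criterion is just its rating divided by the total of all ratings, scaled to 100:

    # Hypothetical 1-to-5 importance ratings for each criterion.
    importance = {"gas mileage": 5, "cargo space": 3, "vendor support": 4}

    # Norm the ratings so the weights total 100.
    total = sum(importance.values())
    weights = {name: 100 * rating / total for name, rating in importance.items()}

    print(weights)                # roughly {'gas mileage': 41.7, 'cargo space': 25.0, 'vendor support': 33.3}
    print(sum(weights.values()))  # 100.0, up to floating-point rounding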

One way to explore the criteria is to do a forced ranking from most important to least important.  This is not amenable to quantitative methods, but it’s a good way to get started.  Spend an hour in front of the whiteboard while the client staff fight it out over the ranking, then let them each do the 1 to 5, and average their responses.

Another way is to give each participant 100 points to allocate as desired across the criteria.  This is the most accurate, in terms of understanding tradeoffs, and it makes the math easy.
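
A minimal sketch of that approach, with made-up participants and allocations: the committee weight for each criterion is the average of the individual allocations, and the averages still total 100.

    # Each participant spreads 100 points across the criteria (hypothetical data).
    allocations = [
        {"gas mileage": 50, "cargo space": 20, "vendor support": 30},
        {"gas mileage": 40, "cargo space": 30, "vendor support": 30},
        {"gas mileage": 60, "cargo space": 10, "vendor support": 30},
    ]

    # The committee weight for each criterion is the average allocation,
    # and the averages still sum to 100.
    criteria = allocations[0].keys()
    weights = {c: sum(a[c] for a in allocations) / len(allocations) for c in criteria}

    print(weights)                # {'gas mileage': 50.0, 'cargo space': 20.0, 'vendor support': 30.0}
    print(sum(weights.values()))  # 100.0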

I like to keep the cost analysis separate from the features.  It is possible to turn the price proposal into another row among the criteria, but no one really thinks this way.  What you’re shooting for is, “this one scored 84 out of a hundred, and it’s $100,000 more than the one that scored 74,” with traceability back to the features that account for the difference.

Grade the Vendor Responses

Maybe you’ve sent an RFP and are now grading the proposals, or maybe you’re doing your own research. Using an RFP is handy because you can include the criteria and let the vendors tell you how they propose to meet them. In either case, you (and the committee) are responsible for assigning a number to indicate how well the product meets each criterion.

Here again, the 1 to 5 scale is popular and easy to use.  Obviously, grades supported by numbers are best.  For gas mileage, you can assign 1, 2, 3, 4, and 5 to specific ranges of MPG.  Something like “vendor support” can be tied to a service-level agreement in hours or minutes.
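
Here is one way that mapping might look, with invented MPG cutoffs purely for illustration:

    # Hypothetical MPG cutoffs; each range maps to a 1-to-5 grade.
    def grade_mpg(mpg):
        if mpg >= 50:
            return 5
        if mpg >= 40:
            return 4
        if mpg >= 30:
            return 3
        if mpg >= 20:
            return 2
        return 1

    print(grade_mpg(52))  # 5
    print(grade_mpg(34))  # 3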

Compute the Final Score

This is called weighted-factor analysis because each product is scored according to its criteria grades, and the criteria have different weights.  It’s just like computing a weighted average.  Since we’ve normed the weights to 100 and we’re using a 5-point grading scale, the weighted total tops out at 500, so we divide by five to produce a score out of 100.  You can present this as a percentage if you want.

In our carefully contrived example, vendor #3 comes out on top even though they had the lowest raw score, because they scored well on the criteria that mattered most.
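
To show the same effect with invented weights and grades (not the numbers from the chart), here is the whole computation in a few lines:

    # Invented weights (normed to 100) and 1-to-5 grades; these are not the
    # chart's numbers, just an illustration of the same effect.
    weights = {"criterion A": 60, "criterion B": 30, "criterion C": 10}

    grades = {
        "Vendor 1": {"criterion A": 3, "criterion B": 5, "criterion C": 5},
        "Vendor 2": {"criterion A": 3, "criterion B": 4, "criterion C": 5},
        "Vendor 3": {"criterion A": 5, "criterion B": 3, "criterion C": 2},
    }

    for vendor, g in grades.items():
        raw_total = sum(g.values())
        # The weighted sum tops out at 100 * 5 = 500, so divide by 5
        # to get a score out of 100.
        score = sum(weights[c] * g[c] for c in weights) / 5
        print(f"{vendor}: raw total {raw_total}, weighted score {score:.0f}")

    # Vendor 3 has the lowest raw total (10) yet the highest weighted score (82),
    # because it grades best on the criterion that matters most.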

When data scientists warn that “our precision exceeds our accuracy,” this is what they mean: do not take this fundamentally subjective numerical score out to two decimal places.  The point of this procedure is not so much to generate a number, but to make the variables explicit.

The idea is that the sum of many small decisions will be more accurate than one big one, particularly if there is consensus among the participants. Everyone on the committee should be able to say why the chosen product scored ten points better than the runner-up.

Also, to be a little bit pragmatic: now everyone has their fingerprints on the decision.  No one can complain that they weren’t consulted, or question how the decision was made.

Funny aside:  One of my first consulting projects was the selection of a networking vendor for Ford Credit. We did the full procedure: interviews, requirements, criteria, an RFP, a selection committee, bidder conferences, sealed bids, etc. Digital Equipment (DEC) won. Remember them? And then some big shot from the Glass House swooped in and gave the contract to IBM. What about our fancy RFP project? Well, it was “defective” because it failed to produce IBM as the winner. There was a saying in those days, “no one gets fired for buying IBM.” It was seen as the safe choice – and the only choice for risk-averse executives.

Author: Mark Virag

Management consultant specializing in software solutions for the auto finance industry.
