Saaspocalypse Now

For my sins, I have joined the “AI will kill SaaS” debate. My motivation is the Salesforce stock chart, which dropped 30% in the recent “Saaspocalypse.” Charts for Thomson Reuters, ServiceNow, and Atlassian look about the same.

So, why are people debating an accomplished fact? Because of a faulty thesis. This thesis (which I have actually read, not naming names) is that someone can vibe code a new Salesforce. This is a strawman. That’s not the thesis that wiped out $300 billion of market cap.

Someone probably could vibe code a new Salesforce app, but that’s obviously not the same as killing Salesforce the company, or SaaS in general.

The thesis, according to Satya Nadella, is that business logic will come to reside in AI agents, leaving SaaS systems as mere databases. According to Goldman Sachs, by 2030, more than 60 percent of software usage could flow through agentic systems rather than legacy SaaS seats.

The more recent stock tankage in February – that 16% gap down in Thomson Reuters – is attributable to Claude Cowork, coupled with that day’s release of a prompt that does legal contract review. Yes, one single prompt. Again, it’s not feature coding – it’s the pricing model.

Consider Salesforce. Each literal headset-wearing agent needs a “seat license.” With Claude Cowork, no human agent would ever interact directly with Salesforce. Robots talk to Salesforce, with 10X efficiency, and escalate to humans only when they have to.

As Phil Rosen puts it, “the fact that a single, well-prompted AI agent can now do the job of five or ten seats does not bode well for the old framework.”

None of this says that SaaS is dead, exactly. What it says is that SaaS vendors need to reinvent themselves – something legacy “growth to value” companies have historically failed to do.

Q-Day Is Sooner Than You Think

Information security people are worried about Q-Day, and maybe not worried enough. That’s the date when quantum computing will render today’s encryption methods obsolete. Information security depends on cryptography – secret code keys that are uncrackable because of large numbers and hard math problems.

The good news from quantum computing is that we’ll have a new generation of more-powerful computers, with the usual benefits – discovering new medicines, powering AI, and generating cat videos. The bad news is that we will have to come up with more-robust cryptography, in time for Q-Day.

Breaking Crypto

Quantum computers are not literally faster than today’s binary ones, but they support a new class of algorithms made possible by the weirdness of quantum theory. Oddly, the algorithms getting all the attention are not the ones for medicine or astrophysics, but those that defeat public-key cryptography.

Suppose someone wanted to find your four-digit PIN. They would have to try 10,000 different combinations (or half that, on average). This algorithm is “order n,” meaning that the work grows linearly with the number of possible combinations. See Know Your Time Series for more on “order n.” Grover’s algorithm for quantum search is order √n, which here means only 100 tries.
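To make the arithmetic concrete, here is a toy sketch in Python (my own illustration, not from any quantum library) comparing the classical try-count with Grover’s √n scaling:

```python
import math

# A 4-digit PIN has 10,000 possible combinations.
n = 10 ** 4

# Classical brute force: try every combination; worst case n tries,
# about n / 2 on average.
worst_case = n
average_case = n // 2

# Grover's algorithm searches an unsorted space in O(sqrt(n)) queries.
grover_tries = int(math.sqrt(n))

print(worst_case, average_case, grover_tries)  # 10000 5000 100
```

The gap widens fast: at cryptographic key sizes, √n is still astronomically large, which is why Grover alone only halves effective key length rather than breaking it outright.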

Shor’s algorithm for prime factorization is, in fairness, kind of the first thing you would do with a new computer anyway, cryptography or no. It was my first homework assignment in Fortran (Euclid’s, not Shor’s). Cracking a four-digit code is no big deal. The backbone of information security today, RSA, uses a 2,048-bit key, which is more than 600 decimal digits.
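To see why factoring power matters, here is a toy RSA round trip in Python, with textbook-small primes (my own illustration; real RSA-2048 primes run about 300 decimal digits each, far beyond trial division):

```python
# Toy RSA with absurdly small primes -- real RSA-2048 uses primes of
# roughly 300 decimal digits each.
p, q = 61, 53
n = p * q          # public modulus, 3233
e = 17             # public exponent

# Encrypt a message m < n: c = m^e mod n
m = 65
c = pow(m, e, n)

# An attacker who can factor n recovers the private key.
def trial_factor(n):
    """Brute-force factoring -- feasible only for tiny n."""
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return f, n // f
    return None

p2, q2 = trial_factor(n)
phi = (p2 - 1) * (q2 - 1)
d = pow(e, -1, phi)        # private exponent (Python 3.8+ modular inverse)
recovered = pow(c, d, n)
print(recovered)  # 65 -- the original message
```

Shor’s algorithm plays the role of `trial_factor` here, except that it scales to 2,048-bit moduli where classical factoring does not.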

How Many Qubits

Early microprocessors, like the Intel 4004, had about 2,250 transistors. Each transistor is like a switch that can be on or off, representing a binary digit, or “bit.” Google is proud of their latest quantum computer, Willow, with 105 quantum bits, or qubits. Shown here is its refrigeration unit. IBM advertises 1,000 qubits, but counting them is tricky.

Computers today sacrifice about 12% of their memory capacity to error correction: every eight bits require a ninth, spare bit for error checking. The overhead varies with the application. For quantum computing, it is massive: it can take thousands of physical qubits to make one good “logical” qubit.

That’s why Google bangs on about error correction. Their 105 qubits may be stronger than IBM’s 1,000, depending on error correction. The latest paper on breaking encryption, Craig Gidney’s How to factor 2048-bit RSA integers with less than a million noisy qubits, makes specific assumptions about how reliable the qubits are. Those million noisy qubits amount to about 1,400 logical ones.
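A back-of-envelope division, using just the headline numbers quoted above, shows the implied error-correction overhead:

```python
# Back-of-envelope from the paper's headline numbers, as quoted above:
# under a million noisy physical qubits yielding about 1,400 logical ones.
physical_qubits = 1_000_000
logical_qubits = 1_400

overhead = physical_qubits / logical_qubits
print(round(overhead))  # roughly 714 physical qubits per logical qubit
```

That ratio, on the order of hundreds rather than thousands, is exactly the kind of error-correction progress that moves Q-Day closer.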

When Is Q-Day

Progress toward breaking RSA 2048 is happening on several fronts: better hardware, better error correction, and better algorithms (that tolerate errors). Gidney’s previous work, just four years ago, required 20 million physical qubits.

IBM plans to deliver a real, commercial-grade computer, “the first fault-tolerant quantum computer,” with 200 logical qubits, in 2029, with 2,000 in prospect around 2032. Startup IonQ is targeting 1,600 in 2028. They’re growing by acquisition, and targeting this audacious goal by stacking a bunch of new technologies.

Google is also in the hunt, but their roadmap is more complicated. As you know from the link above, Google doesn’t use the popular logical/physical shorthand. They talk about computing benchmarks that explicitly include error correction – kind of like Gidney’s “one million noisy.”

Depending on how you assess the roadmaps, Q-Day probably happens around 2030. But then, there’s “harvest now, decrypt later.” Hackers can start collecting your encrypted information today, and saving it to use later, when RSA 2048 falls.

So, the real question is: do you have confidential data that will still be important five years from now? In that case, Q-Day is today.

Chez Vicky

As a young consultant at Coopers, I had the privilege of being included as the technology person on a number of engagements with other specialties. One such was the Victoria’s Secret engagement, where I was able to work with the firm’s top retail experts. I am going to make a point here, about knowing your customer, but not without telling the story.

Our customers in the Detroit office were mostly from the manufacturing practice, and the guys teased me about shipping out to the Victoria’s Secret facility. “Wear a hardhat,” one wag said, “in case a box of panties falls on you.” We did, in fact, keep hardhats in the office.

The retail people were different. My tech counterpart arrived from Chicago with just a rollaboard, same as me. He chafed at having had to wait for Charles, the retail expert, with his train of checked baggage. Bemberg lining, doctor’s sleeves, Aston Martin cufflinks. They were a different species.

My side of the engagement was to evaluate the client’s competence in software management, capacity utilization, contingency planning, staffing, budgeting, and so forth – routine work for me.

I also ran the day-to-day activities of collecting data and conducting interviews. Victoria’s middle managers were, unbelievably, all attractive women. I would have to tell my guys to stop hyperventilating. “Yes, she’s hot. She’s also a VP.  We’re interviewing her tomorrow.”

The men who worked there seemed inured to Victoria’s charms. The head of store ops banged through the statistics from memory. He knew which item, color, and style sold best in each market.

“The black satin tap,” he said, on this topic, “that one.” He pointed, without looking up, at a promotional poster. I confronted a life-size photo of a dark-haired woman modelling this item, to good effect. I did not know, initially, a corset from a camisole, and so I resolved to study the catalog – no, not the illustrated one – until I knew the names of all the items.

The firm’s seniormost retail expert, Marge Meek, took me under her wing. She was a retail god. Like, personal friends with Marshall Field, or something. Marge took me to visit some stores, which turns out to be pretty important in retail.

“Okay Mark, who is the Victoria’s Secret customer?” Well, to start with, she is young, fit, well-educated, and upwardly mobile. I rattled off what I had read in the annual report.

“Now look around. Is that who you see here?” I am a tech guy. It would never have occurred to me to visit a store and study the customers.  Marge offered her own characterization, which was a little less flattering, but undeniably accurate.

Back at the job site, we reprised our field trip for the team. Our engagement partner had his own opinion. “Women that date Mexicans,” was Dean’s pronouncement. He was not well-liked by the retail people.

Choose the Right AI Tool

The AI landscape has changed a bit since I wrote What Is Real AI? back in 2021. The advent of GenAI has enabled a new wave of dubious AI sales pitches. Here’s one that crossed my desk recently:

We’ve identified some key GenAI opportunities at PermaPlate … forecast revenue and claims across [products] and adjust staffing monthly.

This sounds like a good idea, except – it’s not a GenAI application. It’s a standard forecasting exercise that everybody does already. If I did want to switch to a learning model – even a deep neural net, which is architecturally similar to an LLM – it would still not be GenAI.

The thing to remember is that GenAI “generates” things, like blog posts and deepfakes. My favorite learning models, going back to AI-Based Risk Rating, are all quantitative in nature. Here again, there are plenty of good statistical methods. Even if you prefer to use a learning model, you may not choose a neural net.

Neural nets have a problem with explainability. They’re basically a black box. That’s why credit bureaus, which must be able to explain their ratings, use a two-step approach. They use AI for exploration and feature engineering, but then they put the features into a more-transparent logistic regression model.
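Here is a minimal sketch of the transparent second step, in Python. The feature names and weights are invented for illustration; in practice they would come out of the exploratory, black-box step:

```python
import math

# Step 2 of the two-step approach: a transparent logistic scorer.
# Feature names and weights here are made up for illustration.
WEIGHTS = {
    "utilization": -2.0,       # high credit utilization lowers the score
    "on_time_ratio": 3.0,      # on-time payment history raises it
    "recent_inquiries": -0.5,  # many recent inquiries lowers it
}
BIAS = 0.0

def score(features):
    """Logistic regression: probability of good standing."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def explain(features):
    """Per-feature contribution to the score -- the 'reason codes'
    a credit bureau must be able to produce."""
    return {k: WEIGHTS[k] * v for k, v in features.items()}

applicant = {"utilization": 0.9, "on_time_ratio": 0.95, "recent_inquiries": 2}
print(score(applicant))
print(explain(applicant))
```

The point is the `explain` function: every point of the score traces back to a named feature and a fixed weight, which is exactly what a black-box neural net cannot offer.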

I might consider GenAI in a forecasting application, to deal with unstructured data. On the other hand, I would ask why the data is unstructured. We did this exercise as a POC here at PermaPlate. I wrote a little program that would read a service contract, and then answer natural-language queries.

Which coverage did the customer select and does it include roadside assistance?

It was a cool demo, but – if you want coverage details available for automation, it makes a lot more sense to store them in machine-readable form, at origination time. And what kind of automation might that be? Well, it might be “agentic.”
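For contrast, here is what “machine-readable at origination” might look like, with hypothetical field names of my own invention:

```python
import json

# A contract's coverage terms captured as structured data at origination
# (field names are hypothetical) -- no document parsing needed later.
contract = {
    "contract_id": "VSC-001",
    "coverage": "powertrain_plus",
    "roadside_assistance": True,
    "term_months": 48,
}

record = json.dumps(contract)

# The natural-language query from the demo becomes a plain field lookup.
loaded = json.loads(record)
answer = (loaded["coverage"], loaded["roadside_assistance"])
print(answer)  # ('powertrain_plus', True)
```

No LLM required: the question that needed a custom GenAI program against the PDF becomes two dictionary lookups against the origination record.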

Agentic AI means that the AI has “agency,” in the sense that it can make decisions and do things in the world. Cool, huh? We give AI agency by equipping it with tools, in the form of software APIs.

Imagine asking ChatGPT to organize your next trip. It can’t, because it’s trapped inside your web browser. But if you invoke ChatGPT as part of an agentic workflow, with interfaces to the airlines and hotels, it can actually book the trip.

Agentic workflows often divide the work among tool-using LLMs, with a mastermind LLM directing the others. For systems that don’t have APIs, the agent can use Robotic Process Automation to operate the system’s user interface – just like you would at the keyboard. It’s not surprising that UiPath, one of the leading RPA vendors, has moved into Agentic AI.
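Here is a minimal sketch of that shape in Python. The planner is a hard-coded stub standing in for the directing LLM, and the tool names are invented:

```python
# Minimal shape of an agentic workflow: a "mastermind" planner chooses
# tools (software APIs) and the loop executes them. The planner here is
# a hard-coded stub standing in for an LLM call; tool names are made up.

def search_flights(origin, dest):
    return f"flight {origin}->{dest} found"

def book_hotel(city):
    return f"hotel booked in {city}"

TOOLS = {"search_flights": search_flights, "book_hotel": book_hotel}

def planner(goal):
    """Stub for the directing LLM: returns a plan of tool calls."""
    return [
        ("search_flights", {"origin": "DTW", "dest": "ORD"}),
        ("book_hotel", {"city": "Chicago"}),
    ]

def run_agent(goal):
    results = []
    for tool_name, args in planner(goal):
        # This dispatch step is the "agency": real API calls, real side effects.
        results.append(TOOLS[tool_name](**args))
    return results

print(run_agent("organize my next trip"))
```

Swap the stub for a real model call and the stub tools for airline and hotel APIs (or RPA scripts) and you have the trip-booking example from above.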

Here is a short list of the latest AI techniques:

  • Large Language Model (LLM) – Like Grok and ChatGPT, these are AI models that can read and write (and plan, and execute).
  • Generative AI – Broad class of AI models that can create things, including LLMs but also diffusion models for video and other media.
  • Deep Neural Net (DNN) – Core technology behind GenAI, and many other learning models, as in my earlier article.
  • Retrieval Augmented Generation (RAG) – As the name implies, GenAI “augmented” by the ability to find and read your documents. See Unguided RAG for Text Comparison.
  • Robotic Process Automation (RPA) – Not AI, but frequently used by Agentic AI. As I wrote in Applied AI for Auto Finance, you can derive a lot of efficiency from RPA alone.
  • Agentic AI – AI agents that can make decisions and act autonomously.

Now that you know the lingo, you can choose the right tool for the job – or your AI sales pitch. I, for one, will not be using GenAI to predict claims volume … but I may use Agentic AI to dispatch the technicians.