When not to buy AI: a checklist for RevOps leaders

The honest version. If your foundation looks like this, every dollar of new AI spend will produce noise instead of signal — and probably make existing problems worse.

Most RevOps content tells you what AI to buy. This one tells you when to stop buying.

Across audits in the past 18 months, the same uncomfortable truth has surfaced repeatedly. A meaningful number of companies have over-invested in AI tooling relative to the foundations underneath. Their boards are asking when the AI investment will pay off. The honest answer is: it won't, until something else changes first.

The signals below are diagnostic. If three or more apply to your stack, the right move isn't another AI tool. It's a hard pause and a foundation fix.

Signal 1 — You can't list your AI tools and the outcome each produces

This is the simplest test, and it's the one most companies fail.

Pull up a blank document right now. Write down every AI-powered or AI-marketed tool your company is currently paying for. For each one, write the specific revenue or cost outcome it has produced in the last six months.

If you can't list them, you have a tool inventory problem. If you can list them but can't tie outcomes to each, you have an ROI tracking problem. Either way, buying another AI tool will compound the problem rather than solve it. Every new tool added to an unmeasured stack makes the existing measurement gap worse.

What to do instead. Build the inventory. Demand outcomes per tool. Sunset anything you can't justify. The exercise typically returns $30,000 to $80,000 of annual savings before any new investment.
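If it helps to make the sunset decision mechanical, the inventory can live in a CSV and the audit in a few lines of Python. A minimal sketch, assuming a hypothetical ai_tools.csv with tool, annual_cost, and outcome columns (all names here are illustrative, not from any particular vendor export):

```python
import csv

# Hypothetical export: one row per AI-marketed tool you pay for.
# An empty "outcome" cell means nobody could tie the tool to a
# revenue or cost result in the last six months.
with open("ai_tools.csv", newline="") as f:
    tools = list(csv.DictReader(f))  # columns: tool, annual_cost, outcome

unjustified = [t for t in tools if not t["outcome"].strip()]
savings = sum(float(t["annual_cost"]) for t in unjustified)

print(f"{len(tools)} AI tools inventoried")
print(f"{len(unjustified)} with no attributable outcome (sunset candidates):")
for t in unjustified:
    print(f"  - {t['tool']}: ${float(t['annual_cost']):,.0f}/yr")
print(f"potential annual savings: ${savings:,.0f}")
```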

Signal 2 — Field completeness on critical CRM fields is below 70%

This is the data-foundation test. Run a quick query on your CRM. For records created in the last 90 days, what's the completeness rate on the five most-used fields on Account and Opportunity?
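If the CRM can export those records to CSV, the check is a few lines of pandas. A minimal sketch, assuming a hypothetical opportunities_90d.csv export; the five field names below are placeholders for whatever your own most-used five turn out to be:

```python
import pandas as pd

THRESHOLD = 0.70  # the 70% line this signal tests against

# Hypothetical export: Opportunity records created in the last 90 days.
df = pd.read_csv("opportunities_90d.csv").replace("", pd.NA)

# Illustrative critical fields; substitute your own top five.
critical = ["Amount", "CloseDate", "StageName", "LeadSource", "Industry"]

completeness = df[critical].notna().mean()  # fraction populated, per field
for field, rate in completeness.items():
    flag = "" if rate >= THRESHOLD else "  <-- below 70%: fix before buying AI"
    print(f"{field:12} {rate:6.1%}{flag}")
```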

If any of those fields is below 70%, you have a problem that no amount of AI will fix. Lead-scoring AI trained on partial firmographic data will produce systematic errors. Conversational AI summarizing deals with missing context will hallucinate to fill the gap. Forecasting AI weighting opportunities by stage probability will produce the wrong number with high confidence.

The AI is not broken. The data is broken. Adding more AI to broken data does not produce a smarter system; it produces a more confidently wrong one.

What to do instead. Restore validation rules on the affected fields. Run a one-time enrichment pass on stale records. Then re-evaluate the AI investment.

Signal 3 — Three reps would give three different definitions of "MQL"

The process-standardization test, in one sentence.

If marketing, sales, and CS each carry their own working definition of when a lead is qualified, no AI scoring or routing tool can produce consistent outputs. The AI inherits the inconsistency and amplifies it. You'll see the same lead scored and routed differently depending on which workflow touched it first.

The same is true of stage exit criteria, MQL/SQL definitions, won/lost reasons, expansion versus new-business categorization, and any other concept that crosses functional boundaries. AI cannot standardize what humans haven't agreed on.

What to do instead. Run a 60-minute alignment workshop. Force a single definition. Encode it in your CRM as required validation. Until that standardization happens, additional AI investment will compound the noise.
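One way to force the single definition is to write it down as executable logic before encoding it in the CRM, so there is exactly one version to argue about. A minimal sketch; the Lead fields and qualification criteria here are invented placeholders for whatever your workshop actually agrees on:

```python
from dataclasses import dataclass

@dataclass
class Lead:
    company_size: int | None  # employees, from enrichment
    engagement_score: int     # 0-100, from marketing automation
    requested_demo: bool

def is_mql(lead: Lead) -> bool:
    """The single agreed MQL definition. Criteria are illustrative;
    replace them with what the alignment workshop produces."""
    return (
        lead.company_size is not None
        and lead.company_size >= 50
        and (lead.engagement_score >= 70 or lead.requested_demo)
    )

# Every scoring and routing workflow calls the same predicate, so the
# AI downstream inherits one definition instead of three.
print(is_mql(Lead(company_size=120, engagement_score=40, requested_demo=True)))   # True
print(is_mql(Lead(company_size=None, engagement_score=95, requested_demo=False))) # False: no firmographics
```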

Signal 4 — You don't know how many connected apps are active in your CRM

The governance test. The Salesforce-Salesloft-Drift breach made this an executive-level question. Most companies still don't know the answer.

Pull your connected apps list. Count the OAuth grants. Sort by last-used date. If the answer to "how many active grants do we have, and when were they last reviewed?" is some version of "we don't know," then deploying additional AI agents — which require their own grants and elevated access — adds risk to a system that's already over-permissioned.
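However you export the grant list, the triage itself is mechanical. A minimal sketch, assuming a hypothetical oauth_grants.csv with app, owner, and last_used columns and a 90-day staleness threshold (both are assumptions, not Salesforce defaults):

```python
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # review threshold; pick your own policy
now = datetime.now()

# Hypothetical export of OAuth grants: app, owner, last_used (ISO date or blank).
with open("oauth_grants.csv", newline="") as f:
    grants = list(csv.DictReader(f))

def last_used(g):
    return datetime.fromisoformat(g["last_used"]) if g["last_used"] else datetime.min

grants.sort(key=last_used)  # oldest activity first: the long tail surfaces on top
print(f"{len(grants)} active OAuth grants")
for g in grants:
    if now - last_used(g) > STALE_AFTER:
        print(f"REVIEW: {g['app']} "
              f"(owner: {g['owner'] or 'NO OWNER'}, "
              f"last used: {g['last_used'] or 'never'})")
```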

Spring '26 OAuth restrictions are forcing a partial reckoning. But most environments still have a long tail of legacy grants that nobody owns. New AI agents inherit and extend that tail.

What to do instead. Audit and revoke. Implement Spring '26 restrictions. Document the AI agents you currently have, with named owners and access scope. Then consider new ones.

Signal 5 — Your CEO maintains a parallel forecast in a spreadsheet

The trust test. This is the one that usually surprises founders.

If your CEO doesn't trust the CRM-generated forecast — meaning they keep their own version somewhere, with their own assumptions, that they actually plan against — then no AI forecasting tool you buy will fix the trust problem. The CEO's parallel model exists because of foundational issues: data quality, methodology, pipeline hygiene, stage discipline. The AI tool you buy will produce yet another number, and the CEO will continue to use the spreadsheet.

This is the most expensive failure mode because it's the most invisible. The AI tool gets renewed annually. The CEO never says they don't trust it; they just don't use it. The forecasting AI vendor reports "deployed" as a success metric. The buyer reports "we have AI forecasting" in board meetings. The forecast accuracy gap stays where it was.

What to do instead. Find out why the CEO doesn't trust the number. The reason is almost always a specific foundational gap — typically pipeline bloat, inconsistent stage exit criteria, or a methodology that nobody applies consistently. Fix that, and the AI forecast becomes useful. Buy more AI before fixing it, and the gap widens.

The decision rule

If three or more of the signals above apply, the foundation isn't ready for more AI. The investment will not produce ROI in the near term, and it's likely to make existing problems harder to diagnose.
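The rule itself reduces to an honest count. A trivial sketch, with the five signals as booleans you fill in yourself:

```python
signals = {
    "can't list AI tools and their outcomes": True,   # example answers;
    "critical field completeness below 70%": True,    # replace with yours
    "no shared MQL definition": False,
    "unknown number of connected-app grants": True,
    "CEO keeps a parallel spreadsheet forecast": False,
}

matched = sum(signals.values())
if matched >= 3:
    print(f"{matched}/5 signals: pause new AI spend and fix the foundation.")
else:
    print(f"{matched}/5 signals: foundation is workable; evaluate tools on ROI.")
```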

The right move is the unglamorous one: a foundation fix. Three to six months of work on data, process, integration, and governance. Then re-evaluate the AI investment from a sound base.

This is rarely what executives want to hear. It's almost always what they need to hear.

The companies seeing the largest AI ROI in 2026 are the ones that did the boring work first. The ones still chasing AI tools are the ones whose forecasts still miss.

What about the AI you've already bought?

If you've already invested heavily in AI and are seeing weak ROI, you have three options:

  1. Stop and audit. Diagnose which of the five signals apply. Score the foundation. Then decide which of the existing AI tools is worth keeping, and what to fix to make it productive.
  2. Consolidate. If two AI tools are doing overlapping work, kill one. The savings fund the foundation work.
  3. Renegotiate. Most AI tool contracts leave room to consolidate seats or modules at renewal. The next renewal date is your leverage point.

None of these are easy conversations. All of them are easier than another year of unmeasured AI spend on a broken foundation.


Find out where you stand.

The free 15-question assessment scores your foundation across the six dimensions in under four minutes. If your Readiness Index is below 65, your next move probably isn't another AI tool — and the report will tell you what is.

Take the assessment
