AI on broken foundations: why your AI tools aren't producing ROI

The dominant RevOps problem of 2026 isn't AI strategy. It's whether the data, processes, and governance underneath the AI can be trusted at all.

Walk into any mid-market B2B revenue meeting in 2026 and you'll hear the same conversation. The CRO is asking why the AI forecasting tool is producing numbers nobody believes. The marketing leader is wondering why the lead-scoring AI is routing accounts to the wrong reps. The CFO is looking at $200,000 in annual AI tool spend and asking what, specifically, the return is.

The answer is almost never "we picked the wrong AI." The answer is that the AI is operating on a foundation that was never designed to support it.

This is the AI-on-broken-foundations problem. It's the dominant RevOps challenge of 2026, and it's why most of the AI investments made between 2024 and now are quietly underperforming.

The pattern

By early 2026, roughly three out of four RevOps teams had embedded AI into their go-to-market stack. That's a remarkable adoption curve for a technology category that barely existed three years ago. But adoption isn't the same as value. And the companies that bought aggressively are now confronting a hard question: what was the ROI?

Three patterns recur across companies asking that question, and they show up in roughly this order.

Pattern 1: AI deployed before data hygiene

The most common version. A company buys Agentforce, a forecasting copilot, or an AI SDR. They expect transformation. What they actually get is outputs — but the outputs are wrong in ways that are difficult to detect until something goes badly wrong, like a forecast missed by 20 percentage points or a high-value account routed to the wrong rep.

Look under the hood and the cause is always the same. The AI is querying a CRM where 40% or more of fields on Account, Contact, and Opportunity are missing or stale. Pipeline stage definitions vary by rep. The lead-scoring model is trained on data with a 7% duplicate rate. When the AI produces a recommendation, it's not wrong — it's correctly responding to the data it was given. The data is just bad.

And here's the worst part: the AI is systematically wrong in ways that are invisible. A human rep looking at a stale account record knows it's stale and discounts it. An AI model treats it as ground truth.

Pattern 2: Tool sprawl masquerading as transformation

The second pattern is harder to see because it looks like progress.

The average B2B sales rep now juggles 7 to 10 tools. Each new AI-native platform — for call recording, sequencing, scoring, scheduling, summarization, deal review — adds another data silo. Each requires another integration to maintain. Each becomes another source of conflicting truth. Tool costs are now 15 to 20% of an average AE's on-target earnings.

The real cost isn't financial. It's context-switching. Reps now spend their days hopping between tools, each of which has a different model of the customer. Marketing's record says one thing. The sales record says another. The CS tool says a third. The AI in each tool is trained on its own slice and produces recommendations that don't reconcile across the stack.

Companies that try to solve this by buying yet another tool — usually a "single source of truth" platform — typically end up with the same fragmentation distributed across more platforms.

Pattern 3: Governance debt from years of unchecked admin sprawl

The third pattern is the one that nobody wants to talk about until something goes wrong.

The Salesforce-Salesloft-Drift breach of 2025 exposed a category-wide problem. Revenue systems have accumulated a decade of connected apps, OAuth grants, profile permissions, and integration users — most of them ungoverned, undocumented, and largely forgotten. Every new AI agent that gets deployed inherits that debt. Every API call it makes runs through layers of permissions that haven't been audited in years.

Spring '26 OAuth restrictions are forcing a partial reckoning. But most companies are unprepared, and the AI agents now operating inside those environments have access to data and capabilities nobody has formally accounted for.

Why traditional audits miss this

The instinct, when a company hits this wall, is to bring in a consultancy to do an audit. And that's where the second wave of frustration starts.

The traditional RevOps audit produces a 60-page slide deck with a maturity model, a process map, and a recommended tech stack. It's broad. It's diagnostic in the most general sense. It tells you that you're at "Maturity Level 2 of 5" and that you should "improve data quality." Six months later, the deck is in a shared drive and nothing has changed.

The audit isn't useless. It just isn't structured to answer the actual question. The question isn't "is our RevOps mature?" The question is "is our foundation positioned to support the AI we have already deployed?"

That's a more specific question. It needs a more specific answer.

What the right diagnostic looks like

An AI-readiness audit looks different from a traditional RevOps audit in three ways.

It's quantified. Not a maturity level. A number, 0–100, with a published rubric. Anyone can audit the math. The Readiness Index decomposes cleanly into the underlying drivers, so the conversation moves immediately from "where are we" to "where to act."

It's AI-specific. The criteria are weighted around what AI actually needs to function: clean data, clear system of record, standardized process, governed access, trustworthy forecasting. Generic RevOps maturity is a different question.

It produces a roadmap, not a deck. The output is a 90-day remediation plan with effort and impact estimates per item. The buyer can scope, fund, and execute it. The cost-of-inaction analysis converts the gaps into dollar terms the CFO can defend.
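To make the "quantified, decomposable" idea concrete, here is a minimal sketch of how a 0–100 index built from weighted drivers might work. The driver names and weights below are illustrative assumptions, not the actual published rubric:

```python
# Illustrative only: a 0-100 readiness index as a weighted sum of
# driver scores. These driver names and weights are hypothetical,
# not the published rubric referenced above.
DRIVERS = {
    "data_quality": 0.30,
    "system_of_record": 0.20,
    "process_standardization": 0.20,
    "access_governance": 0.15,
    "forecast_trust": 0.15,
}

def readiness_index(scores: dict[str, float]) -> float:
    """Each driver is scored 0-100; returns the weighted 0-100 index."""
    assert abs(sum(DRIVERS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return round(sum(DRIVERS[d] * scores[d] for d in DRIVERS), 1)

example = {
    "data_quality": 55,
    "system_of_record": 70,
    "process_standardization": 40,
    "access_governance": 30,
    "forecast_trust": 60,
}
print(readiness_index(example))  # 52.0
```

Because the index is a plain weighted sum, anyone can audit the math, and each driver's contribution shows exactly where to act.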

The economics

For a typical $20M ARR B2B SaaS company, the math of poor foundations is concrete:

  • Annual revenue tech spend: $400K to $800K (2 to 4% of ARR).
  • AI tooling subset of that: $80K to $200K.
  • Forecast accuracy gap, manual versus AI-assisted: 18 percentage points.
  • Revenue at risk from forecast error on a $20M plan: $1.8M to $3.6M.
  • Wasted AI spend from low adoption (typical 30 to 50% gap): $25K to $100K annually.
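The revenue-at-risk line is simple arithmetic on the figures above. A sketch, where the $20M plan and 18-point gap come from the list; the assumption that between half and all of the gap shows up as misallocated plan is mine, for illustration:

```python
# Revenue at risk from forecast error: plan size times the error band.
ARR_PLAN = 20_000_000   # the $20M plan from the example above
ACCURACY_GAP = 0.18     # the 18-percentage-point gap from the example above

# Illustrative assumption: half to all of the gap translates into
# misallocated plan. This is not a forecasting model.
low = ARR_PLAN * ACCURACY_GAP * 0.5
high = ARR_PLAN * ACCURACY_GAP
print(f"${low / 1e6:.1f}M to ${high / 1e6:.1f}M at risk")
```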

The AI investment is real. The ROI is gated entirely on foundation quality. Fixing the foundation is rarely glamorous work, but it is — measurably — the highest-leverage thing most revenue teams can do in the next 12 months.

What to do this quarter

Three concrete moves that don't require an external engagement to start:

1. Inventory your AI tools and demand outcomes. List every AI tool currently being paid for. For each, write the specific revenue or cost outcome it's producing. Tools that can't justify their line item should be flagged for sunset. This single exercise typically surfaces $25,000 to $100,000 of annual savings.

2. Run a field completeness scan on your CRM. Pull a report of records created in the last 90 days. Measure completeness on the 10 most-used fields. If completeness on any critical field is below 70%, that field is producing noise in every AI model that uses it. Fix the validation rule before fixing anything else.
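The completeness scan in step 2 doesn't require a tool. It can run against any export of the last 90 days of records. A sketch, assuming rows come in as dicts (e.g. from `csv.DictReader` over a CRM export) where empty strings mean missing values; the field names are hypothetical:

```python
def field_completeness(rows, fields):
    """Return the share of rows (0-1) where each field is non-empty."""
    rows = list(rows)
    if not rows:
        return {}
    return {
        field: sum(1 for r in rows if (r.get(field) or "").strip()) / len(rows)
        for field in fields
    }

# Tiny inline sample standing in for a real 90-day export.
sample = [
    {"Industry": "SaaS",   "AnnualRevenue": "5000000"},
    {"Industry": "",       "AnnualRevenue": "1200000"},
    {"Industry": "Retail", "AnnualRevenue": ""},
    {"Industry": "",       "AnnualRevenue": ""},
]
scores = field_completeness(sample, ["Industry", "AnnualRevenue"])
noisy = [f for f, pct in scores.items() if pct < 0.70]  # below the 70% bar
```

Here both fields land at 50% completeness, so both would be flagged as producing noise in any model that consumes them.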

3. Audit your connected apps and OAuth grants. List every connected app with its last-used timestamp. Revoke anything older than 12 months and unused. This is governance hygiene that has been deferred for years and is now urgent.
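Step 3 also starts with an export: a list of connected apps with last-used timestamps, however your platform surfaces them. A sketch of the filtering step, where the input shape (dicts with `name` and an ISO-date `last_used`, `None` if never recorded) is an assumption for illustration:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=365)  # the 12-month bar from step 3

def stale_grants(apps, now):
    """Return names of apps unused for over 12 months (or never used)."""
    stale = []
    for app in apps:
        last = app.get("last_used")
        used = datetime.fromisoformat(last) if last else None
        if used is None or now - used > STALE_AFTER:
            stale.append(app["name"])
    return stale

# Hypothetical inventory standing in for a real export.
inventory = [
    {"name": "legacy-zapier-sync", "last_used": "2023-02-11"},
    {"name": "forecast-copilot",   "last_used": "2026-01-30"},
    {"name": "old-data-loader",    "last_used": None},
]
print(stale_grants(inventory, now=datetime(2026, 2, 1)))
# ['legacy-zapier-sync', 'old-data-loader']
```

Anything the filter surfaces is a revocation candidate; anything with no recorded use at all should be treated as the highest-priority review item.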

None of this is exotic. None of it requires new tools. All of it produces measurable improvement in AI output quality within the same quarter.


Score your stack.

If you want a quantified view of where your foundation stands, the free 15-question assessment produces a Readiness Index in under four minutes. Then you'll know whether you're a candidate for an audit — or whether you can just work the checklist above.

Take the assessment
