
Happy Triple Threat Thursday.

Here’s one Signal to notice, one Spark to try, and one Shift to consider.

This week's theme: There is no AI strategy. There are three GPTs the team built last quarter.

The custom GPTs and Claude skills running across mid-market companies right now were built quickly, by smart people, for real reasons. None of that makes them a strategy. The marketing director built one. The COO built another. Operations built a third. The team is using them. The team is also rewriting most of what they produce. And every time a new model ships, the workflows have to be retuned by the same people whose job is something else.

📡 Signal — What’s Changing

Where Is the Hidden Cost of AI Showing Up Inside SMBs?

A 40-person professional services firm spent eight months running their own AI workflows. The marketing director built a custom GPT to draft client proposals. The ops lead built a Claude skill for project scoping. The COO built a third workflow for client research and competitive analysis. By month four there were three different AI tools, two prompt libraries, and no owner.

Nobody called it a build. There was no proposal, no project code, no kickoff meeting. There was just a slow accumulation of AI work that landed on the existing team's calendars. By the time the CEO looked at it, the OpenAI bill was $1,800 a month, the COO was spending three hours a week on prompt tweaks, and proposal quality was inconsistent enough that clients were pushing back.

That is the hidden cost of AI at SMB scale. It is not one big invoice. It is five smaller costs that nobody adds up.

The first is direct labor. The hours the marketing lead, the ops person, and the COO spend tweaking prompts, fixing broken outputs, and retraining the team. At a fully loaded $110 an hour, ten hours a week from each of two people for six months is roughly $57,000 of work nobody itemized.
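The arithmetic behind that figure, spelled out so you can rerun it with your own rate (a minimal sketch; the 26-week count for six months is an approximation):

```python
# Back-of-envelope labor math from the paragraph above.
rate = 110           # fully loaded cost, $/hour
people = 2           # the marketing lead and the COO in this example
hours_per_week = 10  # maintenance hours per person, per week
weeks = 26           # roughly six months

unbilled_labor = rate * people * hours_per_week * weeks
print(f"${unbilled_labor:,}")  # $57,200 -- the "roughly $57,000" above
```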

The second is the OpenAI, Anthropic, and infrastructure bills. The tokens, the vector database, the hosting. They land in expense reports, not budgets. They scale quietly with usage.

The third is integration drag. Connecting AI to the CRM, the billing system, the support stack. Each new integration roughly doubles the maintenance load because there is no platform team to absorb it.

The fourth is opportunity cost. Every hour the COO spends on prompt fixes is an hour not spent on operations. Every hour the marketing lead spends on workflow maintenance is an hour not spent on campaigns. The roadmap pays the price even when the budget does not.

The fifth is the probability of pivoting anyway. MIT's NANDA Initiative analyzed 300 enterprise AI deployments in 2025. Internal builds reached production 33 percent of the time. Vendor implementations succeeded 67 percent of the time. Two out of three internal projects end with a vendor purchase on top of everything already spent.

Those five costs together are usually larger than what the vendor would have charged from the start. Most teams miss that because none of the five ever appears as a single line item.
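One way to see why: price the pivot probability like any other risk. A rough expected-cost sketch, where the dollar figures are illustrative placeholders and only the 33 percent success rate comes from the MIT data:

```python
# Expected-cost framing of the build-vs-buy odds. The dollar amounts are
# hypothetical; the 33% internal success rate is from the MIT NANDA study.
p_build_success = 0.33
internal_build_cost = 60_000  # placeholder: labor + tooling + integration
vendor_cost = 40_000          # placeholder: vendor contract for the same job

# Two out of three builds end with a vendor purchase on top of the build.
expected_build_path = internal_build_cost + (1 - p_build_success) * vendor_cost
print(f"${expected_build_path:,.0f} expected vs. ${vendor_cost:,} to buy outright")
# -> $86,800 expected vs. $40,000 to buy outright
```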

Why it matters now:

Q2 is when most mid-market companies finalize AI capital plans for the back half of the year. The proposals on the table count tooling. They do not count the four other costs. The decision gets made on a number that captures one fifth of the actual commitment.

What to do this week:

Pull every AI tool, custom GPT, Claude skill, and ChatGPT workflow in active use across the team. Tag who owns each one and how many hours per week they spend maintaining it. If you find more than three workflows owned by people whose job is something else, the experiment phase is over. The next decision is which ones become real systems and which ones get retired before they consume another quarter.

⚡ Spark — What to Try This Week

How can operators add up the real cost of their AI experiments in under ten minutes?

Run the hidden cost audit before the next AI tool gets added to the stack.

The friction this solves: the five costs above never live in the same conversation. Direct labor is in the COO's calendar. Tooling is in the marketing director's expense report. Integration drag is in IT's queue. Opportunity cost is invisible until something else slips. The pivot probability is a footnote nobody includes. The audit puts all five on one page.

Why this keeps repeating: the costs do not feel like costs while they are happening. They feel like work the team should be doing anyway. The reframe only happens once somebody adds them up.

The audit, in order:

  1. List every AI workflow currently running. Custom GPTs, Claude skills, ChatGPT prompts the team uses regularly, AI features inside tools you already pay for, internal automations that route data through AI. Include the casual ones. Especially the casual ones.

  2. Assign an owner to each. Who built it. Who maintains it. Who fixes it when it breaks. If the answer is "nobody really" or "we all do," that is the first finding.

  3. Calculate hours per week spent maintaining each one. Honest hours, not aspirational ones.

  4. Add the AI tooling line. OpenAI, Anthropic, vector databases, hosting, AI add-ons inside other tools. Pull the actual invoices from the last 90 days.

  5. Add the opportunity cost. Every hour spent maintaining an AI workflow is an hour not spent on the work that earns revenue. Price it at the same fully loaded rate.

  6. Compare the total against a specialized vendor in the same category. Most operators are surprised by how close the numbers are. Most are also surprised by how much capacity comes back.
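For operators who would rather script it than eyeball it, here is a minimal sketch of the same six steps. Every input below is a placeholder; swap in your own workflows, honest hours, and 90-day invoices:

```python
# Minimal sketch of the hidden cost audit. All inputs are placeholders.
FULLY_LOADED_RATE = 110  # $/hour, the same rate used for direct labor
WEEKS = 13               # roughly 90 days

workflows = [
    # (workflow, owner, maintenance hours per week) -- steps 1 through 3
    ("proposal GPT",      "marketing director", 4),
    ("scoping skill",     "ops lead",           3),
    ("research workflow", "COO",                3),
]

tooling_90d = 5_400  # step 4: OpenAI, Anthropic, vector DB, hosting invoices

weekly_hours = sum(hours for _, _, hours in workflows)
labor_90d = weekly_hours * FULLY_LOADED_RATE * WEEKS
opportunity_90d = labor_90d  # step 5: the same hours, priced as displaced revenue work

total_90d = labor_90d + opportunity_90d + tooling_90d
print(f"Hidden 90-day cost: ${total_90d:,}")  # step 6: compare to a vendor quote
```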

To make the math easier, Leadway built a calculator that runs the audit in two minutes. Sample inputs are pre-filled in italic gray. Type over them with your own numbers. Export the result as a PDF before the next leadership meeting.

Why it works:

AI experiments at SMB scale rarely fail because the technology is wrong. They become expensive because the labor was never named. The audit makes the labor visible, which is the only way to decide whether it is worth keeping.

Make the decision once, in writing, with the team in the room.

Note: Everything you enter is processed in real time and never stored. The data is not visible to anyone, including this newsletter. Put in what is actually true and the output will be worth acting on.

🔄 Shift — How to Rethink It

Is your AI workflow producing quality, or producing slop on a schedule?

Default belief: Once the AI workflow is running, the team is more efficient.

Flip: Most internal AI workflows produce slop the team has to rewrite, on a stack that breaks every time a new model ships.

The hidden cost of AI is not only the hours it takes. It is what those hours produce. A workflow built without best practices generates output that needs heavy editing. The marketing director's proposal drafts come back generic. The customer service bot's responses get rewritten by humans before they go out. The research GPT pulls plausible-sounding facts that turn out to be wrong. The team's net efficiency is often zero or negative because every AI output gets a human pass anyway. The team is doing the work twice, once with the AI and once to clean up after it.

Why it matters:

Quality issues compound on a moving target. OpenAI ships a new model every few months. Anthropic does the same. Each upgrade subtly changes how prompts behave. Workflows built without versioning, evaluation, or fallback logic break silently. Outputs that were mediocre at GPT-4 become unpredictable at GPT-5. The team retunes the prompts again, on top of everything else they are doing, for a workflow whose original purpose they only half-remember. This is not a one-time tax. It is a recurring tax that scales with the pace of model releases.
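For teams that keep a workflow in-house anyway, here is a minimal sketch of the versioning-and-evaluation habit that paragraph describes. The model snapshot, golden case, and check are all hypothetical stand-ins:

```python
# Minimal pin-and-regression-test habit for an internal AI workflow.
# The model snapshot and golden case below are hypothetical examples.
PINNED_MODEL = "gpt-4o-2024-08-06"  # pin an exact snapshot, never a floating alias

GOLDEN_CASES = [
    # (prompt, check that every acceptable output must pass)
    ("Draft a two-paragraph proposal intro for a 40-person services firm",
     lambda output: "proposal" in output.lower() and len(output.split()) < 250),
]

def regression_test(generate):
    """Run the golden cases through `generate` (your model-calling function)
    and return the prompts that failed. Run this before any model upgrade."""
    return [prompt for prompt, passes in GOLDEN_CASES if not passes(generate(prompt))]
```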

A vendor solving one specific problem for hundreds of companies has built the evaluation framework, the versioning, the fallback logic, and the quality gates that keep output consistent across model upgrades. That infrastructure does not exist on day one of an internal experiment. It rarely gets built at all because the team does not know to ask for it.

Three things buying right gets you that experimenting does not:

  1. Output you do not have to rewrite. Vendors who have shipped this before have already learned which prompts produce reliable output and which produce slop. Your team is learning that in real time, on live work, in front of customers.

  2. Durability across model changes. Vendors absorb model upgrades in the background and run regression tests so output stays consistent. Internal workflows have to be retuned by the team, every time, on top of their actual jobs.

  3. Compound learning. Vendors get better as their other customers stress-test the system. You inherit those improvements. Internal experiments only get better if your one team has time to maintain them, which they almost never do.

Build only when no vendor can plausibly deliver the business outcome. That bar is higher than most teams realize.

📚 Worth A Look

What should you be reading about AI build vs. buy decisions this week?

A full breakdown of where mid-market AI budgets actually go, with the MIT GenAI Divide finding embedded in the data: vendor-led deployments succeed at 67 percent, internal builds at 33 percent.

The clearest line in the debate. Every hour your team spends on AI infrastructure is an hour not spent on the work customers pay for.

Names the cost categories most teams forget. Strategy, data prep, integration, deployment, monitoring, governance. API fees are one line among many.

📈 TL;DR

The AI workflows running across your team have five hidden operational costs (labor, tooling, integration drag, opportunity cost, and the 67% probability of buying a vendor anyway) and a quality cost most operators feel without ever naming. The output requires rework, the workflows break with every model upgrade, and the team ends up doing the work twice. The vendor invoice you avoided is almost always cheaper than what is happening now.

📈 One Question

If you stopped every AI experiment running on your team this Friday, how many hours would your COO, marketing lead, or IT person get back, and how much rework would disappear with them?

⏭️ What’s Next

Run your numbers. The Leadway hidden cost calculator stacks all five operational costs onto one page in under two minutes, with a downloadable PDF for your next leadership meeting.

Next Thursday: once you know buying is the right call, how do you vet an AI implementer who actually understands your business? Four standards to filter out the wrong vendors, and the questions to ask in the first 20 minutes of a call.

Thanks for reading Triple Threat. See you next Thursday with another Signal, Spark, and Shift.

— Alexandria Ohlinger

p.s. If this helped you think sharper or move faster, share it with someone who builds the way you do. And if you want more practical insight between issues, connect with me on LinkedIn or schedule a strategy session.

