The following is a guest post from Daniel Schmeltz, managing director with Alvarez & Marsal’s Corporate Performance Improvement practice. Opinions are the author’s own.
Companies are investing heavily in artificial intelligence, yet few can point to sustained financial returns. Despite widespread experimentation and adoption, AI has rarely translated into measurable improvements in EBITDA, cash flow or return on invested capital — a concern now surfacing even among global executives as the focus shifts from AI adoption to AI ROI.
The problem is not the technology. It is that AI is being introduced without the operating model redesign required to turn capability into value. Until AI is embedded into how decisions are made, owned and governed, it will continue to generate visible activity but little lasting impact.
The gap between investment and impact is visible in enterprise data. A recent MIT study on AI adoption found that while global investment has surged, most organizations have yet to see measurable financial outcomes. That finding may oversimplify the value some organizations have captured, but it illustrates the big-picture reality: to date, AI investments have delivered little true financial benefit.
The disconnect is clear in how these initiatives unfold. Multiple pilots launched at once, spread across functions and business units, demonstrate activity, but rarely translate to scale. Few are tied to a financial owner, a baseline or a measurable economic delta, making success impossible to determine.
This pattern is driven by urgency rather than design; leaders feel pressure to demonstrate progress and avoid falling behind competitors, so entire swaths of the organization are told to bring AI online without a clear definition of success.
This results in AI leading operating models rather than supporting them. AI is deployed “in HR” or “for finance,” and the vagueness of these mandates leaves companies at the end of a pilot asking: “What now?” AI is capable of making hiring decisions or creating budgets, but who owns that decision — a manager or a machine? And how are controls maintained when decisions require more sensitive information?
Operating models are defined by who makes decisions, how work flows and where accountability sits. Most AI deployments ignore this, treating the technology as a simple automation tool.
Instead, it’s vital that companies think of AI not as software, but as a fleet of eager, inexperienced interns. They have infinite energy and can process data at scale, but they lack judgment. Successful operating models don't replace humans with these "digital interns"; they elevate humans to editors who curate and validate the interns' work. The result is technical capability paired with operational authority, thereby sustaining impact.
Crucially, this shift unlocks more than just efficiency; it expands customer value. When "digital interns" absorb repetitive, low-value work, humans are freed to focus on higher-order creative and judgment-based tasks. In this model, AI becomes a true thought partner — helping teams design better offerings and personalize experiences — allowing the organization to create entirely new sources of revenue rather than simply running the same operating model more cheaply.
Evidence from early enterprise deployments points to a more effective approach. The strongest results come from human-led, AI-augmented models. Rather than substituting an AI model for a human decision-maker (as financial institutions do when automating credit and risk decisions with AI), AI should support decision preparation: assembling data, surfacing patterns and drafting options while humans retain the decision itself.
This distinction matters in high-stakes decisions — what we might call the "kitchen cabinet test." If AI generates an image of a kitchen cabinet that is slightly wrong, the cost is zero. If it miscalculates a credit risk covenant, the cost is existential. AI excels at pattern recognition, synthesis and drafting, but humans must retain responsibility for judgment in high-stakes areas. Over time, as confidence grows and performance stabilizes, human intervention can narrow to exceptions rather than remain constant. In practice, enterprise AI programs that deliver measurable financial returns tend to follow a consistent pattern:
- Anchor AI initiatives to financial value (cost, revenue, cash, not function). This means applying a "cash flow litmus test": If a pilot cannot be tied to a specific financial line item like days sales outstanding or inventory turns, it likely belongs in an R&D lab, not the operating budget.
- Redesign the end-to-end human-AI workflow before automating decisions.
- Invest in change management and training that builds familiarity through use, rather than relying on top-down mandates.
- Expand governance and automation only after trust and control are established.
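As a rough illustration of the "cash flow litmus test" in the first point, the gate can be expressed as simple arithmetic. The sketch below is hypothetical: the dollar figures, the one-day improvement threshold and the function names are illustrative assumptions, not a prescribed methodology; days sales outstanding (DSO) itself is the standard working-capital metric.

```python
# Hypothetical sketch of the "cash flow litmus test": a pilot earns a place
# in the operating budget only if its projected impact maps to a specific
# financial line item, here days sales outstanding (DSO).

def days_sales_outstanding(receivables: float, revenue: float,
                           period_days: int = 365) -> float:
    """DSO = (accounts receivable / revenue) * days in period."""
    return receivables / revenue * period_days

def passes_litmus_test(baseline_dso: float, projected_dso: float,
                       min_improvement_days: float = 1.0) -> bool:
    """Pass only if the pilot promises a measurable DSO reduction
    (threshold is an illustrative assumption)."""
    return (baseline_dso - projected_dso) >= min_improvement_days

# Illustrative figures: an invoicing pilot projected to cut receivables
# from $12M to $11M on $90M of annual revenue.
baseline = days_sales_outstanding(12_000_000, 90_000_000)   # ~48.7 days
projected = days_sales_outstanding(11_000_000, 90_000_000)  # ~44.6 days
print(passes_litmus_test(baseline, projected))  # True: tied to a cash line item
```

A pilot that cannot name its metric never reaches this calculation, which is the point of the test.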
This sequencing matters. Programs that automate decisions too early introduce risk and erode trust. Those that start with augmentation build credibility, making subsequent expansion possible, as seen in areas such as software development, pricing analysis and high-volume transaction processing.
Another consistent finding is that impact improves when AI initiatives are aligned to value rather than to organizational silos. More successful efforts start with outcomes, such as looking at AI use that specifically addresses cost to serve or cycle time. When use cases are tied to economic levers, prioritization sharpens and trade-offs become explicit.
Change management also shapes outcomes. Comfort with AI is earned through repeated, low-risk use in real work, not through policy, training decks or executive mandates. When early adopters demonstrate tangible value in their day-to-day roles, skepticism fades and familiarity grows. Consistency, not intensity, builds lasting operational muscle.
Taken together, these patterns point to a clear conclusion: It remains too early for AI to run enterprise operating models end-to-end. Rather, AI should be a critical part of reshaping how work gets done. The organizations making progress are deliberately integrating AI across people, processes and technology, with humans firmly responsible for decisions that matter.
The companies that ultimately win with AI will not be those that deploy the most tools, but those that treat AI like any other transformation investment — anchored to value, governed with discipline and embedded into how decisions actually get made. AI can accelerate operating models, but it cannot compensate for the absence of one.