AI ROI is real, but not where you'd expect

Everyone wants to know if AI is worth it. Three years into the generative AI era, we finally have real data.

Wharton's latest study of 800+ enterprise decision-makers reveals which companies see returns, which stay stuck in pilot purgatory, and why the gap between them keeps widening.

The truth? ROI is happening. But not where you'd expect, and not for the reasons most vendors claim.

Most companies see positive ROI because they stopped chasing moonshots

Nearly three-quarters of enterprises already see positive returns on their AI investments (page 45). The companies winning aren't building sci-fi applications. They're automating the boring stuff. Data analysis. Document summarization. Meeting notes.

What this actually means is simple. ROI shows up fastest in high-frequency, repeatable workflows where you can measure before-and-after productivity. Not in experimental projects that might ship next quarter.

The implication for your team? Stop waiting for the perfect use case. Start with the tasks your team does 50 times a week, not once a quarter.
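To make that concrete, here's a minimal back-of-envelope sketch of before-and-after measurement. Every number in it (minutes per task, task frequency, hourly cost, tool cost) is a hypothetical placeholder, not a figure from the Wharton report; swap in your own measurements.

    # Back-of-envelope ROI for automating one high-frequency task.
    # All numbers below are hypothetical placeholders, not survey data.

    minutes_before = 20        # average minutes per task before AI assistance
    minutes_after = 5          # average minutes per task after AI assistance
    tasks_per_week = 50        # how often the team performs the task
    loaded_hourly_cost = 90.0  # fully loaded cost of an employee-hour, in dollars
    monthly_tool_cost = 400.0  # what the AI tooling costs per month

    weeks_per_month = 4.33
    minutes_saved = (minutes_before - minutes_after) * tasks_per_week * weeks_per_month
    monthly_savings = (minutes_saved / 60) * loaded_hourly_cost

    roi = (monthly_savings - monthly_tool_cost) / monthly_tool_cost
    print(f"Monthly savings: ${monthly_savings:,.0f}")
    print(f"Monthly ROI on tooling spend: {roi:.0%}")

The point isn't the specific figures. It's that a high-frequency task gives you a denominator you can actually measure, which a once-a-quarter moonshot never will.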

AI budgets are growing, but some companies are cutting elsewhere to fund them

AI spending isn't free money anymore. While 88% of leaders expect budget increases in the next year (page 49), 11% are reallocating from legacy IT and outside services to fund AI programs (page 51).

This signals a fundamental shift. AI budgets now compete with existing systems, which means they need to prove their value faster.

For engineering teams, this creates pressure to show measurable impact within quarters, not years. The era of open-ended AI experiments is over.

Enterprises are building custom solutions, not just buying tools

Companies aren't just subscribing to ChatGPT Enterprise. According to IT decision-makers, roughly 30% of AI tech budgets now go to internal research and development (page 42).

The implications are significant. Off-the-shelf tools won't differentiate you. Generic AI implementations won't deliver competitive advantage.

Teams need infrastructure that lets them build proprietary workflows without rebuilding auth, security, and integrations from scratch every time. Without that foundation, you're stuck recreating plumbing instead of shipping features.

Leaders worry about skill loss even as AI boosts productivity

Here's the paradox. While 89% agree that AI enhances employee skills, 43% see a risk of declining skill proficiency (page 60). AI makes teams more productive, but leaders fear employees are losing fundamental skills.

This isn't about resistance to change. It's about junior developers who never learned to code without Copilot. Or analysts who can't build models without Claude.

The real risk isn't AI replacing jobs. It's creating a workforce that can't function when the AI fails or produces garbage output. And it will.

Think about what happens when your team encounters edge cases the model wasn't trained on. Or when the API goes down during a critical demo. If your people can't work through problems manually, you're building on sand.

The capability gap is widening as training investments shrink

Enterprises face a choice. Train existing employees or hire new talent with AI skills. But they're underinvesting in both.

Investment in training programs has dropped 8 percentage points (page 68), while confidence in training as the path to fluency fell 14 points. Meanwhile, recruiting talent with advanced AI skills remains a top challenge, cited by 49% of leaders (page 59).

This matters for your infrastructure decisions more than you might think. You can't assume your team will magically become AI experts. Your tools need to reduce complexity, not add to it.

The winners will be platforms that let developers ship AI features without becoming AI specialists. That means handling auth, security policies, and workflow orchestration so teams can focus on building product, not plumbing.

What this means for you

Three years in, the data is clear. Companies that see ROI aren't the ones with the biggest budgets or the flashiest use cases.

They're the ones that made it easy for their teams to automate high-frequency work. They built infrastructure that reduced friction instead of adding complexity. They focused on shipping features, not managing middleware.

If you're still in pilot mode, the gap is widening. The companies pulling ahead aren't waiting for perfect solutions. They're shipping fast, measuring impact, and iterating.

Want to see how Civic Nexus helps teams build custom AI workflows without rebuilding infrastructure? Check out our latest integrations that handle the complexity so your developers can focus on what matters.