Cincinnati Bell Technology Solutions dropped Forge AI on Monday — a platform that's less about the sexy parts of artificial intelligence and more about the unglamorous reality most enterprises face: their infrastructure isn't remotely ready for the models they want to run.

The company's pitch is straightforward. Enterprises have spent the last eighteen months getting excited about generative AI, large language models, and automation. Then they hit the wall: their compute doesn't scale, their networking wasn't built for the bandwidth AI demands, and their cloud strategy is a patchwork of decisions made before anyone cared about GPU clusters.

Forge AI isn't trying to be another AI tool. It's infrastructure — the boring, critical layer that determines whether your AI initiative gets stuck in proof-of-concept hell or actually makes it to production. CBTS is betting there's a market in being the company that helps enterprises get their house in order before they try to redecorate.

The platform bundles compute, networking, security, and managed services into what CBTS calls an "AI-ready infrastructure." It's designed to work across on-premises data centers, private cloud, and public cloud environments — because most large organizations aren't purely anything anymore. According to the company's announcement, Forge AI aims to reduce deployment timelines and eliminate the "infrastructure silos" that typically slow AI projects to a crawl.

What CBTS Is Actually Selling

Forge AI isn't a single product. It's a bundle — compute resources optimized for AI workloads, networking architecture built to handle the data throughput machine learning demands, security layers that don't break when you scale horizontally, and managed services to keep it running.

The compute side focuses on GPU-accelerated infrastructure, which is table stakes for training or running large models. But CBTS is emphasizing flexibility: customers can provision resources on-demand, scale up during model training, then scale back down without being locked into long-term commitments on hardware they won't always need.
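The economics behind that flexibility claim are easy to sketch. The back-of-the-envelope comparison below uses hypothetical rates (they are not CBTS pricing) to show why bursty training workloads favor on-demand provisioning while near-continuous use favors committed capacity:

```python
# Hypothetical GPU pricing to illustrate the on-demand vs. committed
# trade-off. Rates are illustrative assumptions, not CBTS figures.

def monthly_cost(hours_used: float, on_demand_rate: float = 4.0,
                 reserved_rate: float = 1.8, hours_in_month: int = 730) -> dict:
    """Compare paying per GPU-hour on demand vs. reserving a GPU all month."""
    return {
        "on_demand": hours_used * on_demand_rate,
        # Reserved capacity is paid for whether it is used or not.
        "reserved": hours_in_month * reserved_rate,
    }

# A team that trains in bursts (~120 GPU-hours/month) vs. one running
# near-continuously (~600 GPU-hours/month):
print(monthly_cost(120))  # on-demand cheaper at low utilization
print(monthly_cost(600))  # reserved cheaper at high utilization
```

The crossover point, not the absolute numbers, is the argument: enterprises with spiky training schedules overpay badly on fixed capacity.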

The networking piece is where things get more interesting. AI workloads are bandwidth hogs. Training a large language model involves moving terabytes of data between storage, compute nodes, and sometimes multiple data centers. Most enterprise networks weren't designed for that kind of sustained throughput. Forge AI includes what CBTS describes as "high-performance networking fabric" — essentially low-latency, high-bandwidth connections that won't bottleneck when you're shuffling training data around.

Security and compliance are baked in, which matters more than it sounds. Enterprises can't just spin up AI experiments in the public cloud if they're dealing with regulated data. Healthcare, financial services, government contractors — they all need infrastructure that meets specific compliance standards while still delivering the performance AI demands. Forge AI is positioning itself as the answer for organizations that can't afford to choose between compliance and capability.

The Hybrid Cloud Problem No One Wants to Talk About

Here's the thing CBTS is implicitly acknowledging: most enterprises are stuck in hybrid hell. They've got legacy systems on-premises that can't be moved. They've got some workloads in AWS or Azure. They've got a private cloud initiative from three years ago that never quite finished. And now they want to add AI to the mix.

Forge AI is designed to span that mess. The platform integrates with existing cloud environments — AWS, Azure, Google Cloud — while also supporting on-premises deployments. The value proposition is continuity: you don't have to rip out your existing infrastructure or force a cloud migration just to get AI capabilities.

That's a harder problem than it sounds. Most AI infrastructure solutions are optimized for one environment or the other. If you're all-in on AWS, you use SageMaker and call it a day. If you're on-prem, you build something custom. But if you're like most Fortune 500 companies — split across multiple environments with data gravity issues and compliance constraints — you need something that works everywhere.
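Architecturally, "works everywhere" means putting one provisioning interface in front of environment-specific backends. The sketch below is a minimal illustration of that pattern; the class and method names are hypothetical, not the Forge AI API:

```python
# Minimal sketch of a hybrid-cloud abstraction layer: one provisioning
# interface, multiple environment backends. Names are hypothetical.

from abc import ABC, abstractmethod

class GpuBackend(ABC):
    @abstractmethod
    def provision(self, gpus: int) -> str: ...

class AwsBackend(GpuBackend):
    def provision(self, gpus: int) -> str:
        return f"aws: requested {gpus} GPU instance(s)"

class OnPremBackend(GpuBackend):
    def provision(self, gpus: int) -> str:
        return f"on-prem: scheduled {gpus} GPU(s) on local cluster"

def deploy(backend: GpuBackend, gpus: int) -> str:
    # Callers never touch provider-specific APIs directly; swapping
    # environments means swapping the backend, not the workload code.
    return backend.provision(gpus)

print(deploy(AwsBackend(), 8))
print(deploy(OnPremBackend(), 8))
```

The hard part in practice isn't the interface; it's data gravity, identity, and compliance behaving consistently behind it.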

| Infrastructure Layer | Traditional Enterprise Setup | Forge AI Approach |
| --- | --- | --- |
| Compute | Fixed capacity, CPU-optimized, long procurement cycles | GPU-accelerated, on-demand scaling, flexible provisioning |
| Networking | Standard enterprise bandwidth, not optimized for data-heavy workloads | High-performance fabric, low-latency connections for AI throughput |
| Cloud Strategy | Fragmented across on-prem, private cloud, public cloud | Unified platform spanning hybrid and multi-cloud environments |
| Security/Compliance | Bolted on after deployment, often blocks AI experimentation | Integrated from the start, designed for regulated industries |

CBTS isn't the first company to notice this gap. Managed service providers, hyperscalers, and infrastructure vendors have all launched AI-focused offerings over the past two years. But most of them are either too tied to a single cloud platform or too focused on cutting-edge AI capabilities rather than the unglamorous work of getting existing infrastructure ready.

Managed Services as the Real Product

The other bet CBTS is making: enterprises don't want to run this themselves. Forge AI includes managed services — monitoring, optimization, troubleshooting, scaling decisions. That's probably the actual product here. The infrastructure is a means to an end. What enterprises are buying is someone else dealing with the operational complexity.

Who This Is Really For

Forge AI isn't aimed at startups or tech-native companies. Those organizations either build their own infrastructure or go all-in on a hyperscaler. This is for the mid-market and enterprise accounts that have been around long enough to accumulate technical debt, compliance requirements, and a healthy skepticism about ripping everything out for the latest trend.

Healthcare systems that want to deploy clinical AI tools but can't move patient data to the public cloud. Financial services firms that need to run fraud detection models without violating data residency rules. Manufacturing companies that want predictive maintenance algorithms running close to the factory floor. These are the use cases CBTS is targeting.

It's also for organizations that tried to stand up AI infrastructure internally and realized it's harder than it looks. Hiring the talent to design, deploy, and maintain GPU clusters, high-performance networking, and hybrid cloud orchestration is expensive and slow. Buying it as a managed service is faster — and potentially cheaper if you're not operating at hyperscale.

The competitive landscape here is messy. CBTS is competing with the hyperscalers' AI infrastructure offerings, with infrastructure-focused managed service providers like Rackspace or Lumen, and with systems integrators that will build custom solutions. The differentiation is supposed to be the combination of flexibility, hybrid support, and managed services — but that's a positioning claim, not a technical moat.

CBTS has been in the managed services and infrastructure game for years, operating as the technology arm of what was formerly Cincinnati Bell. The company provides IT infrastructure, cloud services, and managed solutions across a client base that skews toward mid-market and enterprise. Forge AI represents a bet that its existing relationships and operational capabilities can translate into the AI infrastructure market. Whether that works depends on execution — and whether enterprises believe CBTS can deliver on the "AI-ready infrastructure" promise better than the alternatives.

The Pricing and Procurement Question

CBTS hasn't disclosed pricing models publicly, which is standard for enterprise infrastructure deals. These contracts are typically negotiated based on compute requirements, networking bandwidth, storage needs, and the level of managed services required. But the structure matters. If Forge AI is priced like traditional managed infrastructure — fixed monthly fees with capacity tiers — it's competing on service and integration. If it's consumption-based with on-demand scaling, it's competing more directly with hyperscaler offerings.

The other procurement angle: enterprises that already work with CBTS for networking or data center services have an easier path to adopting Forge AI. Existing relationships lower the barrier. New customers have to be convinced CBTS can deliver what AWS, Microsoft, or Google can't — or won't — provide.

Where the Infrastructure Story Gets Complicated

The challenge CBTS faces is that AI infrastructure is moving fast, and not always in predictable directions. GPU architectures are evolving. Networking standards are shifting as AI workloads push bandwidth requirements higher. Open-source frameworks and model optimization techniques are changing what "AI-ready" even means.

A platform that's optimized for today's AI workloads might need significant retooling in 18 months. CBTS is betting that its managed services model — where it handles upgrades, optimization, and architecture changes — insulates customers from that complexity. But it also means CBTS has to stay ahead of the curve itself, which is expensive and operationally demanding.

There's also the question of whether "AI-ready infrastructure" is a category that lasts. Right now, enterprises are treating AI deployment as a distinct problem that requires specialized infrastructure. But as AI capabilities get embedded into more standard enterprise software — ERP systems, CRM platforms, productivity tools — the infrastructure question might become less specialized. If running AI becomes as routine as running a database, the premium for "AI-ready" infrastructure shrinks.

CBTS is betting that day is far enough off that there's a substantial market in the meantime. Probably a correct bet for the next few years. Less clear what the market looks like in five.

The Lock-In Risk Enterprises Should Consider

One thing the press release doesn't emphasize: portability. If you build your AI infrastructure on Forge AI, how hard is it to move later? Enterprises have learned the hard way that managed infrastructure deals can create dependency. CBTS presumably designed Forge AI to integrate with existing environments and support multi-cloud architectures, which should reduce lock-in risk. But the managed services component — where CBTS handles operations, monitoring, and optimization — creates operational dependency even if the infrastructure itself is portable.

Smart enterprises will negotiate exit clauses and data portability guarantees upfront. Less sophisticated ones might find themselves locked into a platform that becomes expensive to leave once the initial contract term ends.

What Success Looks Like From Here

For Forge AI to matter, CBTS needs to land a few marquee enterprise customers — ideally in different verticals — that can serve as reference accounts. Healthcare, financial services, and manufacturing would be the logical targets given the compliance and hybrid infrastructure requirements in those sectors.

The company also needs to prove it can deliver on the "faster deployment" promise. If Forge AI takes just as long to get running as building infrastructure internally or working with a hyperscaler, the value proposition collapses. Speed to production is the whole point.

| Success Metric | What to Watch For |
| --- | --- |
| Customer Announcements | Enterprise logos in regulated industries adopting Forge AI within 6-12 months |
| Deployment Speed | Time-to-production benchmarks vs. traditional infrastructure buildouts |
| Multi-Cloud Integration | Evidence of Forge AI operating seamlessly across AWS, Azure, GCP environments |
| Partner Ecosystem | Integrations with major AI model providers, MLOps platforms, data management tools |
| Scaling Examples | Case studies showing on-demand scaling and cost efficiency vs. fixed infrastructure |

The partner ecosystem will matter too. Forge AI can't exist in isolation. It needs to integrate cleanly with the AI model providers enterprises are using, the MLOps platforms they're adopting, and the data management tools they've already deployed. CBTS will need to show it can play nicely with a broad ecosystem rather than forcing customers into a narrow stack.

And CBTS needs to articulate a clear cost-performance story. Enterprises evaluating Forge AI will compare it against building internally, using a hyperscaler, or working with a competitor. The pitch has to be specific: here's how much faster you'll deploy, here's how much operational overhead you'll avoid, here's the cost delta under realistic usage scenarios.

The Broader Infrastructure Conversation

Forge AI is entering a market that's simultaneously crowded and underserved. Crowded because every major infrastructure player has launched something AI-related in the past two years. Underserved because most enterprises still haven't figured out the basics — and the solutions available are either too generic or too hyperscaler-specific.

There's a real need for infrastructure that works across hybrid environments, supports compliance requirements, and doesn't require hiring a specialized team to operate. Whether CBTS can own that niche depends on execution, pricing, and how fast the market evolves.

The other factor: AI infrastructure is getting cheaper and easier over time. Models are becoming more efficient. Cloud providers are commoditizing GPU access. Open-source tools are lowering the barrier to entry. The window for premium-priced AI infrastructure services might be narrower than CBTS expects.

But for now, enterprises are still struggling with the basics. They've got the AI ambition and the budget. What they don't have is infrastructure that makes deployment straightforward. That's the gap CBTS is trying to fill.

Whether Forge AI becomes the standard solution or just another option in an increasingly crowded market will depend on the next 12-18 months. Customer traction, deployment speed, and cost-performance benchmarks will tell the story. Until then, it's a bet on infrastructure mattering more than the models themselves — which, for most enterprises, is probably still true.

What to Watch Next

First, customer announcements. If CBTS lands a recognizable enterprise name in healthcare or financial services, it validates the compliance and hybrid infrastructure pitch. If the early adopters are smaller mid-market accounts, the story is different.

Second, competitive response. If AWS or Microsoft sees Forge AI as a threat to its enterprise AI infrastructure business, expect pricing changes, new features, or tighter managed services offerings in response. If they ignore it, that tells you how seriously they view the competitive threat.

Third, the technical benchmarks. CBTS claims Forge AI reduces deployment timelines and operational overhead. Those claims are testable. If the company can publish specific performance data — time-to-production comparisons, cost-per-workload benchmarks, scaling efficiency metrics — it strengthens the case. If they stay vague, skepticism is warranted.

And finally, the roadmap. AI infrastructure needs aren't static. Model architectures change. Networking requirements evolve. Compliance standards tighten. CBTS will need to show it can keep Forge AI current without forcing customers through disruptive upgrades every 18 months. That's a managed services capability question as much as a technology one — and it's where operational execution will determine whether Forge AI becomes a lasting platform or just a well-timed product launch.
