TLDR
- AMD and Celestica announced a collaboration to develop the Helios rack-scale AI platform
- Celestica will handle R&D, design, and manufacturing of scale-up networking switches for the platform
- The switches will connect next-gen AMD Instinct MI450 Series GPUs for large-scale AI clusters
- AMD Helios is built on the Open Compute Project Open Rack Wide form factor and uses Ultra Accelerator Link over Ethernet for scale-up connectivity
- AMD Helios is targeted for customer availability in late 2026; AMD stock rose ~1% premarket Monday
AMD and Celestica (CLS) have teamed up to bring a new rack-scale AI platform to market. Called Helios, the platform is designed for large-scale AI training and inference workloads across cloud, enterprise, and research environments.
> "$AMD and Celestica Announce Collaboration to Advance the Next Era of AI with Helios Rack-Scale AI Platform. Celestica will undertake the R&D, design, and manufacturing of scale-up networking switches in the $AMD Helios rack-scale AI architecture, based on the Open Compute Project…" — Daniel Romero (@HyperTechInvest), March 16, 2026
Celestica will take on the R&D, design, and manufacturing of scale-up networking switches for the AMD Helios architecture. These switches are built around the Open Compute Project Open Rack Wide form factor, an open standard that's gaining traction in hyperscale data centers.
The networking silicon inside those switches is engineered to enable high-speed interconnect between AMD’s next-generation Instinct MI450 Series GPUs. It uses Ultra Accelerator Link over Ethernet for scale-up connectivity, a key part of how the system keeps GPU clusters talking to each other at speed.
In a rack-scale AI architecture, the entire rack, not just an individual server, acts as the core compute unit: GPUs, high-speed networking, and liquid cooling are integrated into a single system. The design is built to train large language models efficiently at scale.
“Helios represents a new blueprint for AI infrastructure,” said Forrest Norrod, AMD’s executive vice president and general manager of Data Center Solutions. He said it enables customers to deploy AI with the performance, efficiency, and flexibility needed for next-generation workloads.
Steven Dorwart, senior vice president at Celestica, said deploying AI at scale requires infrastructure that can be delivered quickly and consistently. Celestica’s role in Helios leans on its existing strengths in data center design, engineering, and supply chain.
AMD’s Broader Momentum
AMD carries a market cap of around $315 billion and has posted a 92% return over the past year as demand for AI infrastructure has grown. The Helios announcement comes as AMD continues to push deeper into the data center GPU market.
UBS has a Buy rating on AMD with a price target of $310, citing revenue growth prospects through 2027. The bank has flagged potential for AMD to land a third major hyperscaler customer for its data center business, with Microsoft floated as a likely candidate.
Wolfe Research also holds an Outperform rating on AMD, pointing to the company’s server momentum and its AI accelerator roadmap as key drivers.
Recent Deals and Partnerships
Beyond Helios, AMD recently signed a multi-year licensing agreement with Adeia Inc., covering access to Adeia’s semiconductor IP portfolio and ending all litigation between the two companies.
Avalon GloboCare has also been accepted into AMD’s AI Developer Program, giving it access to AMD’s tools and resources for AI development.
AMD stock rose around 1% premarket on Monday following the Helios announcement. Celestica climbed about 3% in the same session.
AMD Helios is scheduled to be available to customers in late 2026.