TLDR
- OpenAI partners with Broadcom to launch its first custom AI chip in 2026.
- Proprietary AI chip to cut Nvidia reliance and boost OpenAI’s compute power.
- Broadcom inks $10B deal with OpenAI for exclusive AI chip collaboration.
- OpenAI shifts strategy with in-house AI chip for GPT-5 scale demands.
- Custom AI chip marks OpenAI’s push for self-sufficiency in infrastructure.
OpenAI will introduce its first proprietary AI chip next year in collaboration with Broadcom. This move marks a major shift in OpenAI’s infrastructure strategy, aiming to manage rising computing demands. The AI chip will be used internally to support OpenAI’s model operations.
Partnership with Broadcom Signals Strategic Shift
OpenAI and Broadcom have finalized a partnership to develop a custom AI chip, targeting deployment in 2026. The chip will power OpenAI’s infrastructure exclusively, enhancing performance and reducing external hardware dependency. The partnership strengthens OpenAI’s computing capabilities while loosening its reliance on traditional chip suppliers.
OpenAI is set to produce its own AI chips for the first time next year with a chip designed with Broadcom $AVGO
Broadcom CEO Hock Tan today referred to a mystery new customer committing to $10 Billion in orders – Financial Times
— Evan (@StockMKTNewz) September 5, 2025
Broadcom’s CEO confirmed a significant deal with an unnamed fourth customer, now confirmed as OpenAI. This customer committed to over $10 billion in AI infrastructure orders last quarter. The chip production plan follows internal development timelines and strategic collaboration between both firms.
Broadcom has started ramping up production for this custom AI chip, with mass shipments expected next year. The AI chip project follows months of internal prototyping and testing. This move aims to ensure compute availability for future model releases.
Reducing Reliance on Nvidia Hardware
OpenAI has historically depended on Nvidia chips to support its AI model training. However, increasing infrastructure demands prompted OpenAI to diversify its silicon supply chain. By designing an in-house AI chip, OpenAI gains more control over performance, availability, and costs.
The company previously tested AMD chips in parallel with Nvidia’s GPUs. Still, rapid model development and wider user adoption required a more scalable and predictable solution. The AI chip project provides a stable base for scaling future workloads.
Reports last year revealed OpenAI’s initial efforts with Broadcom and Taiwan Semiconductor Manufacturing Company. At the time, the production schedule remained uncertain. Now, with the new chip slated for internal use, OpenAI will prioritize stability and efficiency across its platforms.
Context and Industry Implications
OpenAI’s move follows similar strategies by Google, Amazon, and Meta, which built their own AI chips to manage workload growth. As AI models become more advanced, infrastructure requirements have surged globally. Tech firms have turned to custom silicon to maintain performance while controlling costs.
With the AI chip market expanding, companies seek alternatives to mitigate reliance on any single supplier. This shift reflects increasing competition and innovation in the AI hardware landscape. OpenAI’s entry into proprietary chip design signals rising demand for optimized, model-specific compute hardware.
Broadcom’s growing portfolio of AI chip clients highlights strong momentum in this sector. Its shares rose sharply following the deal, driven by high-volume AI chip orders. Analysts expect Broadcom’s custom silicon business to grow faster than competitors in the next fiscal cycle.
Internal Use and Long-Term Plans
OpenAI does not plan to offer the AI chip to external buyers. The chip will serve internal training, inference, and model deployment needs. This focus allows OpenAI to tailor its infrastructure closely to evolving model complexity.
According to sources, OpenAI aims to double its compute fleet within five months. The AI chip will play a central role in achieving that expansion. With demand increasing from GPT-5 and other advanced systems, compute capacity remains a key priority.
The launch of this AI chip could streamline OpenAI’s infrastructure roadmap. It also reduces external dependencies while preparing the organization for the next wave of generative models. The project marks a new chapter in OpenAI’s technological self-sufficiency.