TLDR
- TPUs co-designed by Broadcom and Google are gaining market share as a cheaper alternative to Nvidia GPUs, with UBS forecasting 3.7 million TPU shipments in 2026
- TPUs cost $10,500-$15,000 per unit versus $40,000-$50,000 for Nvidia’s Blackwell chips, making them a cost-effective option for AI inference tasks
- Nvidia bought a nonexclusive license for Groq’s inference technology for $20 billion to compete in the inference market
- Cisco launched its Silicon One G300 networking chip to compete in the $600 billion AI infrastructure market
- Broadcom expects to generate $60 billion in AI revenue for 2026, rising to $106 billion in 2027
Nvidia’s dominance in the AI chip market faces new challenges as Broadcom’s processors and Cisco’s networking technology enter the competition. The shift comes as companies search for more cost-effective ways to run AI systems.
Broadcom has partnered with Google to design Tensor Processing Units that offer a cheaper alternative to Nvidia’s graphics processing units. UBS analyst Timothy Arcuri forecasts Broadcom will ship around 3.7 million TPUs in 2026, rising to more than five million in 2027.
The price difference between the two chip types is substantial. TPUs sell for between $10,500 and $15,000 per unit. Nvidia’s latest Blackwell chips cost between $40,000 and $50,000 per unit.
TPUs work especially well for AI inference, which is the process of generating answers from AI models. However, Nvidia maintains an advantage in training AI models. According to benchmarks, training a model takes 35-50 days on Nvidia GPUs but around three months on TPUs.
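The price and training-time figures above imply a rough trade-off that can be checked with back-of-envelope arithmetic. The sketch below uses only the per-unit prices and durations cited in the article; cluster sizes and total training costs are not given, so it sticks to per-chip ratios rather than inventing deployment figures.

```python
# Back-of-envelope comparison using the figures cited above.
# Prices and durations are the ranges from the article; midpoints
# are an assumption used purely for illustration.

TPU_PRICE = (10_500, 15_000)        # USD per unit
BLACKWELL_PRICE = (40_000, 50_000)  # USD per unit

GPU_TRAIN_DAYS = (35, 50)           # training duration on Nvidia GPUs
TPU_TRAIN_DAYS = (90, 90)           # "around three months" on TPUs

def midpoint(lo_hi):
    lo, hi = lo_hi
    return (lo + hi) / 2

price_ratio = midpoint(BLACKWELL_PRICE) / midpoint(TPU_PRICE)
time_ratio = midpoint(TPU_TRAIN_DAYS) / midpoint(GPU_TRAIN_DAYS)

print(f"Blackwell costs ~{price_ratio:.1f}x more per chip")   # ~3.5x
print(f"TPU training takes ~{time_ratio:.1f}x longer")        # ~2.1x
```

On these midpoints a Blackwell chip costs roughly 3.5x a TPU while TPU training runs roughly 2.1x longer, which is consistent with the article's framing: TPUs win on inference economics, Nvidia keeps the edge on training speed.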
AI start-up Anthropic has placed two major orders for TPUs totaling $21 billion. Meta Platforms is also in talks to use the processors, according to The Wall Street Journal. This marks a shift as TPU sales expand to external clients beyond Google.
Broadcom expects to generate around $60 billion in AI revenue in 2026, a figure projected to reach $106 billion in 2027. Nvidia is expected to generate around $300 billion in data center sales for its fiscal year 2027, which ends in January.
Nvidia Responds with Groq Technology
Nvidia has taken steps to strengthen its position in the inference market. The company recently purchased a nonexclusive license for technology from AI hardware startup Groq. The deal cost Nvidia $20 billion, including compensation packages for Groq employees who joined the company.
Groq specializes in inference hardware, which could help Nvidia compete more effectively with TPUs. Mizuho analysts estimate that between 20% and 40% of AI workloads currently focus on inference. That share is expected to grow to between 60% and 80% over the next five years.
Cisco Enters the Competition
Cisco Systems launched a new chip targeting the AI infrastructure market. The Silicon One G300 switch chip will help AI training and delivery systems communicate across hundreds of thousands of connections. The chip is expected to go on sale in the second half of 2026.
Taiwan Semiconductor Manufacturing Company will produce the chip using 3-nanometer technology. The chip includes new features designed to prevent networks from slowing down during large data traffic spikes. Martin Lund, executive vice president of Cisco’s common hardware group, said the chip can reroute data around problems automatically within microseconds.
Cisco expects the chip to make some AI computing jobs finish 28% faster. The company focuses on improving total network efficiency rather than individual component speed. Lund explained that network problems occur regularly when dealing with tens or hundreds of thousands of connections.
The networking field has become a key battleground in AI infrastructure. When Nvidia unveiled its newest systems last month, one of the six chips was a networking chip competing with Cisco’s products. Broadcom also competes in this space with its Tomahawk series of chips.
The three companies are vying for market share in the $600 billion AI infrastructure spending boom. Each company targets different aspects of AI computing, from training to inference to networking.