TLDR
- Nvidia will supply 1 million GPUs to Amazon Web Services (AWS) by the end of 2027.
- Shipments begin this year and run through 2027.
- The deal includes networking gear, Groq inference chips, and next-gen Blackwell and Rubin chips.
- AWS will use a combination of seven Nvidia chips to power AI inference workloads.
- Both NVDA and AMZN edged higher in after-hours trading following the announcement.
Nvidia’s deal with Amazon Web Services is one of the largest single-customer chip agreements the company has announced. And the details keep getting more interesting the deeper you dig.
🚨 $NVDA × $AMZN — Massive GPU Deal Just Dropped
At GTC 2026, Nvidia and AWS officially announced a deal to deploy 1M+ Nvidia GPUs — including Blackwell & Rubin architectures — across AWS global regions starting this year.
Nvidia VP Ian Buck confirmed deliveries run through…
— Invest Alpha Pro (@InvestAlphaPro) March 19, 2026
Nvidia VP Ian Buck confirmed to Reuters that the 1 million GPU shipments begin this year and run through 2027. That timeline lines up directly with CEO Jensen Huang’s projection of a $1 trillion market opportunity for Nvidia’s Blackwell and Rubin chip families over the same period.
The deal goes well beyond raw GPU count. AWS is buying a broad stack of Nvidia hardware, including Spectrum-X and ConnectX networking equipment. That’s worth noting because AWS has historically used its own custom-built networking gear. Adding Nvidia’s networking products to its data centers marks a meaningful shift.
AWS Goes All-In on Nvidia Inference
AI inference, the stage where a trained model generates responses and completes tasks rather than learns from data, sits at the heart of the deal’s architecture. AWS plans to use seven different Nvidia chips to handle inference workloads.
Buck put it plainly: “Inference is hard. It’s wickedly hard. To be the best at inference, it is not a one chip pony. We actually use all seven chips.”
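To make Buck’s point concrete, here is a short, purely hypothetical Python sketch of what per-workload dispatch across a mixed accelerator fleet can look like. The chip classes, request fields, and thresholds are placeholders of our own, not anything Nvidia or AWS has published.

```python
# Purely illustrative: dispatching inference requests across a mixed
# accelerator fleet. Chip classes, fields, and thresholds are hypothetical
# placeholders, not Nvidia's or AWS's actual scheduling logic.
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    model_size_b: int     # model size in billions of parameters
    max_latency_ms: int   # latency budget for the response

def pick_accelerator(req: InferenceRequest) -> str:
    """Map a request profile to one of several accelerator classes."""
    if req.max_latency_ms < 50:
        return "low-latency-asic"   # e.g. a Groq-style inference chip
    if req.model_size_b > 400:
        return "rack-scale-gpu"     # e.g. a Blackwell/Rubin-class system
    return "general-gpu"            # everything else on commodity capacity

for req in (InferenceRequest(70, 30), InferenceRequest(600, 500)):
    print(req, "->", pick_accelerator(req))
```

The point is the shape of the decision, not the specific cutoffs: latency-sensitive traffic, very large models, and everything else can each land on different silicon, which is why a seven-chip mix is plausible for inference at AWS scale.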
The Groq chips, released by Nvidia this week following its $17 billion licensing deal with the AI chip startup, are part of that inference stack. They work alongside six other Nvidia chips to deliver what the company describes as best-in-class inference performance.
AWS is also set to deploy Nvidia’s Blackwell chips and is expected to adopt the future Rubin architecture as it becomes available. Nvidia and Amazon have not disclosed the financial value of the agreement.
Both stocks edged modestly higher in after-hours trading Thursday following the news, after NVDA closed the regular session down roughly 1% and AMZN slipped about 0.5%.
Amazon Still Builds Its Own Chips Too
Amazon develops its own AI chips, including its Trainium2 processor. Despite that, the company is still turning to Nvidia for the most demanding workloads. The two approaches appear to be complementary rather than competing.
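A minimal boto3 sketch of that complementary pattern, under stated assumptions: the Trn2 and P5 instance families are real AWS offerings (Trainium2 and Nvidia H100 GPUs, respectively), but the AMI placeholder and the idea of splitting workloads this way are illustrative, not anything Amazon has described for this deal.

```python
# Hedged sketch: provisioning both kinds of capacity side by side.
# trn2.48xlarge (Trainium2) and p5.48xlarge (Nvidia H100) are real AWS
# instance types; the AMI ID is a placeholder and the workload split is
# a hypothetical illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch(instance_type: str, ami_id: str):
    """Start a single instance of the given type."""
    return ec2.run_instances(
        ImageId=ami_id,             # placeholder, not a real image ID
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )

# In-house training on custom silicon; demanding inference on Nvidia GPUs.
launch("trn2.48xlarge", "ami-xxxxxxxx")
launch("p5.48xlarge", "ami-xxxxxxxx")
```

The design choice the sketch points at is the one described above: the same account can route training to Trainium and heavy inference to Nvidia silicon, with neither replacing the other.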
The deal reflects continued heavy investment in AI infrastructure by major cloud providers. AWS is not replacing its custom systems — it is layering Nvidia hardware on top of them for specific high-demand use cases.
The Nvidia-AWS agreement was first announced this week without specific timing. Buck’s comments to Reuters on Thursday provided the clearest picture yet: shipments starting this year and running to the end of 2027, covering a broad mix of Nvidia products across compute, networking, and inference.