TL;DR
- Uber is expanding its cloud partnership with AWS, using Amazon’s custom Graviton4 and Trainium3 chips.
- Graviton4 chips power Uber’s Trip Serving Zones, helping match riders and drivers faster during demand spikes.
- Trainium3 is being piloted to train AI models that handle driver matching, arrival times, and delivery recommendations.
- The deal aims to cut energy costs and reduce latency across millions of daily trips and deliveries.
- Amazon is using the deal to showcase its custom chip lineup to enterprise customers amid surging AI demand.
Uber is deepening its cloud relationship with Amazon Web Services, putting AWS custom silicon at the center of its real-time infrastructure and AI ambitions.
$UBER is expanding its $AMZN AWS partnership to power more of its ride, delivery & AI infrastructure.
This move deepens what is already a sweeping Uber-Amazon relationship spanning cloud compute, autonomous vehicles, and AI infrastructure.
Uber and Amazon’s relationship has…
— Yeboah Walee (@YeboahWalee) April 7, 2026
The expanded partnership puts two of Amazon’s custom chips to work inside Uber’s global operations. Graviton4 handles the compute-heavy lifting behind Trip Serving Zones — the system that decides, in milliseconds, which driver gets which ride. Trainium3 is being piloted for AI model training, fed by data from billions of past trips and deliveries.
Uber processes a staggering volume of decisions every second. Which driver is closest? What’s the fastest route? How long will it take? Getting those calls right at scale — across rush hours, rainstorms, and stadium events — is the core engineering problem Uber is paying to solve.
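To make the "which driver is closest?" decision concrete, here is a toy greedy matcher: compute the great-circle distance from the rider to each available driver and pick the nearest. All names and data here are hypothetical, and Uber's actual Trip Serving Zones logic is far more sophisticated (batched global optimization, predicted ETAs, supply positioning); this is only a minimal sketch of the baseline problem.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_rider(rider, drivers):
    """Greedy baseline: pick the available driver closest to the rider."""
    available = [d for d in drivers if d["available"]]
    if not available:
        return None
    return min(
        available,
        key=lambda d: haversine_km(rider["lat"], rider["lon"], d["lat"], d["lon"]),
    )

# Hypothetical supply snapshot around San Francisco.
drivers = [
    {"id": "d1", "lat": 37.770, "lon": -122.420, "available": True},
    {"id": "d2", "lat": 37.785, "lon": -122.405, "available": True},
    {"id": "d3", "lat": 37.700, "lon": -122.450, "available": False},
]
rider = {"lat": 37.780, "lon": -122.410}
best = match_rider(rider, drivers)
```

A production system would not scan every driver per request; spatial indexing (geohash buckets, zone sharding) keeps each lookup to a handful of candidates, which is exactly the kind of latency-sensitive compute the article says Graviton4 now hosts.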
“Uber operates at a scale where milliseconds matter,” said Kamran Zargahi, Uber’s VP of Engineering. “Moving more Trip Serving workloads to AWS gives us the flexibility to match riders and drivers faster and handle delivery demand spikes without disruption.”
By running Trip Serving Zones on Graviton4, Uber says it can scale faster during demand spikes while also lowering energy consumption and cutting costs. That’s a rare combination: faster scaling, lower energy use, and lower spend usually trade off against one another.
AI Models Built on Billions of Trips
The Trainium3 pilot is where things get more forward-looking. Uber’s AI models crunch data from billions of rides to calculate arrival times, rank couriers, and personalize the in-app experience. Training those models at scale is expensive. Trainium is Amazon’s answer to that cost problem.
“By starting to pilot some of our AI models on Trainium, we’re building a technology foundation that will make every Uber experience smarter,” Zargahi said.
The models trained on Trainium are designed to improve match speed, arrival time accuracy, and delivery recommendations — the metrics that directly affect whether a rider books again or a restaurant stays on the platform.
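At its core, arrival-time prediction is a supervised regression problem: learn a mapping from trip features to observed durations. The sketch below fits a toy linear model with plain gradient descent on synthetic data; the feature names, coefficients, and data are invented for illustration, and Uber's actual models are far larger and trained on real trip histories (which is what makes accelerators like Trainium relevant).

```python
import random

random.seed(0)

# Synthetic training data: duration grows with distance and a traffic factor.
# (Illustrative only; real ETA models use far richer features.)
data = []
for _ in range(500):
    dist_km = random.uniform(1.0, 20.0)
    traffic = random.uniform(0.0, 1.0)  # 0 = free-flowing, 1 = heavy
    minutes = 2.0 + 2.5 * dist_km + 8.0 * traffic + random.gauss(0, 0.5)
    data.append((dist_km, traffic, minutes))

# Fit minutes ~ w0 + w1*dist_km + w2*traffic by batch gradient descent
# on mean squared error.
w = [0.0, 0.0, 0.0]
lr = 0.01
for _ in range(5000):
    g = [0.0, 0.0, 0.0]
    for dist_km, traffic, minutes in data:
        err = (w[0] + w[1] * dist_km + w[2] * traffic) - minutes
        g[0] += err
        g[1] += err * dist_km
        g[2] += err * traffic
    n = len(data)
    for i in range(3):
        w[i] -= lr * g[i] / n

# Predicted minutes for a 10 km trip in moderate traffic.
eta = w[0] + w[1] * 10.0 + w[2] * 0.5
```

The training loop — repeated passes over historical examples, adjusting weights to shrink prediction error — is the workload class Trainium targets; at Uber's scale the same loop runs over billions of trips with deep models rather than three weights.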
For Amazon, the deal is as much about marketing as infrastructure. AWS is in an aggressive push to win enterprise AI workloads away from rivals, and landing Uber — one of the most demanding real-time platforms in the world — is a useful proof point.
“We’re helping Uber deliver the reliability hundreds of millions of people count on today — and the AI-powered experiences that will define ride-sharing and on-demand delivery tomorrow,” said Rich Geraffo, VP and Managing Director of North America at AWS.
Why Custom Chips?
Off-the-shelf processors from Intel or AMD aren’t optimized for the specific mix of workloads Uber runs. Amazon designed Graviton for general-purpose compute efficiency and Trainium specifically for AI training — making them a tailored fit for what Uber needs.
Uber is also working to personalize user experiences and accelerate ride-matching to stay competitive in a market where margins are thin and switching costs are low.
The partnership announcement comes as both companies face broader market pressure, with UBER down 0.48% and AMZN down 1.18% on Tuesday.