TL;DR
- Nvidia is developing a new inference computing platform to help OpenAI and others run AI models faster.
- The platform will feature a chip made by startup Groq and is set to be revealed at Nvidia’s GTC conference in San Jose next month.
- OpenAI has been unhappy with how fast Nvidia’s current hardware handles certain tasks, including software development queries.
- Nvidia signed a $20 billion licensing deal with Groq, which ended OpenAI’s separate talks with the chipmaker.
- In September, Nvidia committed up to $100 billion to OpenAI in a deal that gave Nvidia a stake in the company.
Nvidia is building a new processor aimed at making AI inference faster and more efficient, according to a Wall Street Journal report published Friday.
Inference computing is the process that lets AI models like ChatGPT respond to user queries. It’s different from training, which is where Nvidia has long dominated.
The new platform is expected to be revealed at Nvidia’s GTC developer conference in San Jose next month. The system will include a chip built by startup Groq.
Reuters said it could not immediately confirm the report, and Nvidia declined to comment. OpenAI also did not respond to a request for comment.
The timing matters. Reuters reported earlier this month that OpenAI has been unhappy with how quickly Nvidia’s existing hardware handles certain workloads — specifically software development tasks and AI-to-AI communication.
OpenAI wants hardware that could eventually handle around 10% of its inference computing needs. That’s a slice of the pie Nvidia clearly doesn’t want to lose.
OpenAI’s Search for Faster Chips
Before Nvidia moved in, OpenAI had been in talks with two chip startups — Cerebras and Groq — to source faster inference chips.
Those conversations didn’t last. Nvidia signed a $20 billion licensing deal with Groq, which effectively shut the door on OpenAI’s discussions with the startup.
That’s a pointed move. By locking in Groq, Nvidia kept a key alternative out of OpenAI’s hands while simultaneously incorporating Groq’s chip technology into its own new platform.
Nvidia’s Broader Bet on OpenAI
The relationship between Nvidia and OpenAI runs deeper than just chip supply.
Back in September, Nvidia said it planned to invest as much as $100 billion into OpenAI. The deal gave Nvidia an equity stake in the AI company and gave OpenAI the capital to buy more advanced chips.
So Nvidia is both a supplier and an investor — a position that gives it strong incentive to keep OpenAI’s hardware needs in-house.
NVDA stock was down 4.16% on February 27, the day before this report surfaced.
The new inference platform, if confirmed at GTC next month, would represent Nvidia’s direct response to growing pressure from customers who need faster, more specialized AI processing.
Groq’s chip being included in the platform suggests Nvidia is willing to partner with startups rather than purely compete with them — at least when it keeps rivals out of reach of its biggest customers.
Nvidia is expected to make the formal announcement at the GTC developer conference in San Jose next month.