TLDR
- Meta revealed a roadmap of four in-house AI chips under its MTIA program
- The first chip, MTIA 300, is already live powering ranking and recommendation systems
- The remaining three chips will roll out through 2027, with the final two focused on AI inference
- Meta plans six-month release intervals to keep pace with rapid data center expansion
- Capital spending is projected at $115–$135 billion in 2026, with Broadcom and TSMC involved in production
Meta unveiled its roadmap of four new in-house AI chips on Wednesday, as the company pushes to expand its infrastructure at pace with surging AI demand.
📢 JUST IN: $META Meta to Deploy New MTIA Chips for GenAI Inference Through 2027
👉 Key Highlights:
➤ Meta expanding custom MTIA AI chip roadmap with 4 new generations.
➤ MTIA 300 already in production for ranking and recommendations… pic.twitter.com/fX7jqA3MTm
— Hardik Shah (@AIStockSavvy) March 11, 2026
The chips are part of the Meta Training and Inference Accelerator (MTIA) program. The first chip, the MTIA 300, is already deployed and powers Meta’s ranking and recommendation systems across its platforms.
The remaining three chips — the MTIA 400, 450, and 500 — will be released across the rest of 2026 and into 2027. The final two are designed specifically for inference workloads.
“We see inference demand exploding at the moment and that’s what we’re currently focused on,” said Yee Jiun Song, Meta’s VP of engineering.
Inference is the process by which an AI model responds to user queries — the part users actually experience. It’s a different, and increasingly critical, workload compared to training large models from scratch.
Meta has had some wins with inference chips before, but training chips have proven harder: the company has long aimed to build a generative AI training chip and has yet to fully succeed.
Starting with the MTIA 400, Meta has designed an entire server system around the chip, roughly the size of several server racks, with liquid cooling included. That’s a step up from designing a processor in isolation.
Meta plans to ship new chips every six months, driven by the speed at which it’s adding data centers. Song put it plainly: “That is the reality of how quickly our infrastructure is being built out.”
Why Meta Is Building Its Own Chips
Custom chips let Meta optimize for its own workloads rather than relying entirely on general-purpose processors. The payoff? Lower energy use and better cost efficiency at scale.
That said, Meta isn’t going fully DIY. The company contracts Broadcom (AVGO) to help design certain elements, and uses Taiwan Semiconductor Manufacturing Co (TSMC) to fabricate the final processors.
In February, Meta also signed large deals with Nvidia (NVDA) and AMD (AMD) to purchase tens of billions of dollars worth of chips — so off-the-shelf hardware remains part of the mix.
Meta’s Spending Plans
Meta said in January that it expects capital expenditure of between $115 billion and $135 billion in 2026. That’s a substantial commitment to infrastructure and underlines why in-house chip design matters — at that spending level, even marginal efficiency gains translate to real money.
The six-month cadence for new chip releases reflects both the pace of Meta’s build-out and the urgency it sees around AI infrastructure. Song confirmed the rollout schedule is tied directly to how fast the company is expanding its data center footprint.
The MTIA 450 and 500 — the final two chips in this current roadmap — are slated for 2027 and are squarely aimed at inference, the workload Meta says is seeing the most rapid growth right now.
Meta stock (META) was up 0.17% on Wednesday following the announcement.