TL;DR:
- Meta deploys its MTIA 300 chip to power recommendation systems across Facebook and Instagram more efficiently.
- MTIA 400-500 chips aim at generative AI inference, moving beyond smaller training workloads.
- Meta expands U.S. data centers and invests in AI infrastructure to support chip deployment.
- Meta’s custom chips reflect a wider industry move toward specialized AI hardware by tech giants.
Meta Platforms Inc. (NASDAQ: META) saw its stock tick slightly higher this week following the rollout of its first in-house AI chip, the MTIA 300, as the company moves to bolster its artificial intelligence capabilities while reducing reliance on third-party hardware.
The MTIA 300 is the first in a planned series of four custom-designed AI chips in Meta’s MTIA family. Meta executives say the chip is now fully operational, helping power ranking and recommendation systems across Facebook and Instagram. Future chips in the series (MTIA 400, 450, and 500) are slated to debut roughly every six months, with MTIA 400 already tested and MTIA 450 and 500 scheduled for deployment in 2027.
MTIA 300 Powers Key Platforms
According to Yee Jiun Song, Meta’s vice president of engineering, MTIA 300 enables the company to train smaller AI models that optimize user experience on its platforms. “These chips give Meta more diversity in silicon supply and help insulate the company from price volatility,” Song told CNBC.
By focusing on specialized AI workloads rather than large-scale model training, the MTIA 300 supports the company’s ranking and recommendation algorithms without requiring the massive GPU clusters typically used for training giant models.
The MTIA series represents a “crawl, walk, run” approach, reflecting a cautious, staged strategy that builds on Meta’s previous experience with custom silicon. A failed earlier chip project taught executives the importance of incremental deployment, balancing ambition with operational risk.
Future Chips Target Generative AI
While MTIA 300 focuses on training smaller models, the next-generation chips will target generative AI inference tasks. MTIA 400 has completed testing and will be deployed in racks of 72 chips each. The MTIA 450 and 500 chips, expected to launch in 2027, will further expand Meta’s AI capabilities.
Meta plans to deploy four new generations of its in-house artificial intelligence chips by the end of 2027 as the company turns to custom silicon to help power its rapidly expanding AI workloads https://t.co/gNRWjR09kg
— Bloomberg (@business) March 11, 2026
These in-house chips are manufactured by Taiwan Semiconductor Manufacturing Co. (TSMC) and include high-bandwidth memory, which Meta says it has secured despite ongoing industry shortages.
This phased rollout highlights Meta’s two-track approach: it will continue to invest heavily in Nvidia and AMD GPUs for large-scale model training, while its own silicon focuses on efficient inference workloads. This strategy allows Meta to control costs and reduce dependence on external suppliers, which historically have been a major expense.
Expanding Data Center Footprint
Meta’s chip initiative aligns with its broader data center expansion in the U.S. The company operates or plans 30 data centers, 26 of them domestic, and the majority of MTIA engineering work is based in the United States.
Alongside custom chips, Meta has signed multi-year deals to purchase millions of Nvidia GPUs and up to 6 gigawatts of AMD GPUs, ensuring ample hardware for large-scale AI workloads.
The company anticipates total expenses of up to $119 billion in 2025, with AI infrastructure consuming a significant share. Meta executives view these investments as essential to reducing the billions spent annually on third-party hardware, sometimes described as “renting the keys to our AI future” from vendors like Nvidia.