Microsoft Launches Maia 200 AI Chip To Boost AI Inference

The chip brings increased memory, data flow, and compute capabilities to Microsoft’s cloud infrastructure and aids the company’s push to support advanced AI workloads.
The Microsoft stand during the first day of Mobile World Congress 2016 in Barcelona, Feb. 22, 2016. (Photo by Joan Cros/NurPhoto via Getty Images)
Shivani Kumaresan·Stocktwits
Updated Jan 26, 2026   |   12:10 PM EST
  • The chip is built on Taiwan Semiconductor Manufacturing Co.’s (TSM) advanced 3-nanometer manufacturing process with high-speed FP8 and FP4 tensor cores.
  • In internal testing, Maia 200 showed roughly three times the FP4 inferencing power compared with Amazon’s latest Trainium hardware.
  • Microsoft plans to use Maia 200 with a range of AI workloads, including OpenAI’s newest GPT-5.2 models.

Microsoft Corp. (MSFT) introduced a new AI accelerator, Maia 200, on Monday, designed to speed up artificial intelligence inference and make it more cost-effective.

The chip delivers increased memory, data flow, and compute capabilities to Microsoft’s cloud infrastructure and supports the company’s push to run advanced AI workloads.

Next-Gen AI Silicon

Built on Taiwan Semiconductor Manufacturing Co.’s (TSM) advanced 3-nanometer manufacturing process and equipped with high-speed FP8 and FP4 tensor cores, Maia 200 delivers considerable improvements in AI token generation and model serving.

Following the announcement, Microsoft’s stock traded over 1% higher on Monday mid-morning. On Stocktwits, retail sentiment around the stock improved to ‘extremely bullish’ from ‘bullish’ territory the previous day amid ‘high’ message volume levels. 

Maia 200 is already live in Microsoft’s central U.S. data center near Des Moines, Iowa, with subsequent rollout planned for the West 3 region near Phoenix, Arizona, and additional sites globally. 

Performance And Efficiency

In internal testing, Maia 200 showed roughly three times the FP4 inferencing power compared with Amazon’s latest Trainium hardware and higher FP8 throughput than Google’s TPU v7 series. According to Microsoft, this makes it the most capable and efficient inference chip the company has deployed, with about 30% more performance per dollar than current in-house hardware.

Microsoft plans to use Maia 200 with a range of AI workloads, including OpenAI’s newest GPT-5.2 models, to cut inference costs and boost throughput. Maia 200 units will also be used by Microsoft’s superintelligence team to generate data to help improve future AI models, the company’s cloud and AI chief, Scott Guthrie, said in a blog post.

The tech giant is scheduled to report second-quarter (Q2) fiscal 2026 earnings on Wednesday. Analysts expect revenue of $80.278 billion and earnings per share (EPS) of $3.91, according to Fiscal AI data.

MSFT stock has gained over 8% in the last 12 months. 

Also See: AppLovin Stock Gets A Needham Upgrade – Why Does The Firm See A 33% Upside Potential For The Stock?

For updates and corrections, email newsroom[at]stocktwits[dot]com.
