QCOM To Take On AMD, Nvidia With New Data Center AI Chips

Qualcomm said the new chips are designed to make generative AI more accessible and cost-effective for clients, potentially spelling competition for Nvidia and AMD.
In Creteil, France, on September 26, 2025, the Qualcomm logo appears on a smartphone reflecting an abstract silver illustration. (Photo by Samuel Boivin/NurPhoto via Getty Images)
Prabhjote Gill·Stocktwits
Updated Oct 27, 2025   |   11:26 AM GMT-04
  • Qualcomm launched two data center solutions, the AI200 and AI250, aimed at accelerating large language models and multimodal models in enterprise and cloud environments. 
  • The company said the new chips are designed to make generative AI more accessible and cost-effective for clients.
  • Its entry into the AI accelerator market could spell competition for Nvidia and AMD.

Qualcomm (QCOM) on Monday unveiled its latest AI accelerator chips, positioning itself alongside Nvidia (NVDA) and Advanced Micro Devices (AMD) in the generative AI market. 

Qualcomm’s stock surged as much as 19% in morning trade, hitting its highest level in more than a year, and was among the top trending tickers on Stocktwits. Retail sentiment around the shares improved to ‘bullish’ from ‘neutral’ territory, while chatter remained at ‘normal’ levels over the past day. 

Meanwhile, Nvidia’s stock rose just 1.9% but saw retail sentiment improve to ‘bullish’ from ‘neutral’ territory over the past day. In contrast, AMD’s stock edged 0.08% lower with retail sentiment trending in ‘bearish’ territory.

The Cloud AI Race Against AMD, Nvidia

Qualcomm, traditionally known for semiconductors powering wireless connectivity and mobile devices, is now targeting generative AI with its new chips, aiming to make large-scale AI workloads more accessible and cost-effective. The move positions Qualcomm as a competitor to industry leaders Nvidia and AMD.

Nvidia dominates the AI accelerator market, with GPUs capturing more than 90% of the sector. Its chips have been central to training OpenAI’s GPT models, which power ChatGPT.

However, companies are increasingly exploring alternatives. Earlier this month, OpenAI announced plans to purchase chips from AMD, the second-largest GPU maker, and may even take a stake in the company. Meanwhile, tech giants including Google (GOOG/GOOGL), Amazon (AMZN), and Microsoft (MSFT) are developing their own AI accelerators for cloud services.

Qualcomm’s New AI Chips

The company launched two data center solutions, the AI200 and AI250, aimed at accelerating large language models (LLMs) and multimodal models (LMMs) in enterprise and cloud environments. 

The AI200 and AI250 are rack-level accelerator cards designed for deployment in data center servers. The AI200 supports 768 GB of LPDDR memory per card, more than comparable offerings from Nvidia and AMD, enabling large AI models to run efficiently.

AI200 emphasizes a low total cost of ownership, while AI250 prioritizes energy efficiency to reduce power consumption. Both cards deliver high-performance AI inference, allowing data centers to process generative AI tasks rapidly and at scale.

Qualcomm said both solutions support major AI software frameworks, PCIe scaling, liquid cooling, Ethernet networking, and confidential computing features to ensure secure AI processing. 

Read also: Ethereum Outperforms Bitcoin As Retail Sentiment Improves Ahead Of Fed Rate Decision, Trump-Xi Meeting This Week

For updates and corrections, email newsroom[at]stocktwits[dot]com.
