Microsoft has unveiled its latest foray into artificial intelligence (AI) with the introduction of custom cloud computing chips at Ignite, its annual developer conference. The spotlight is on the Azure Maia 100, the company’s bespoke chip for its Azure cloud service, tailored for generative AI tasks. Boasting 105 billion transistors, it stands out as “one of the largest chips on 5-nanometer process technology,” according to Microsoft. The Maia 100 is the first in a series of Maia accelerators for AI, marking Microsoft’s entry into custom silicon for cloud and AI workloads.
The announcement doesn’t stop there. Microsoft has also revealed the Azure Cobalt 100, its first in-house microprocessor for cloud computing, based on the ARM architecture. With 128 computing cores on a single die, the Cobalt 100 achieves a 40% reduction in power consumption compared to other ARM-based chips used by Azure. Both the Maia 100 and Cobalt 100 are supported by 200 gigabit-per-second networking and deliver a data throughput of 12.5 gigabytes per second.
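A quick way to keep the two networking figures straight is to note that a gigabit is one-eighth of a gigabyte: a raw 200 Gb/s link rate converts to 25 GB/s, so the 12.5 GB/s figure Microsoft quotes is an effective data-throughput number, not a straight unit conversion of the link speed. A minimal sketch of the arithmetic (the function name is illustrative, not from any Microsoft or Azure API):

```python
BITS_PER_BYTE = 8

def gbits_to_gbytes(gbits_per_s: float) -> float:
    """Convert a gigabit-per-second rate to gigabytes per second."""
    return gbits_per_s / BITS_PER_BYTE

# Raw line rate of the 200 Gb/s links quoted in the announcement:
print(gbits_to_gbytes(200))  # 25.0 GB/s raw

# The quoted 12.5 GB/s effective throughput corresponds to 100 Gb/s:
print(gbits_to_gbytes(100))  # 12.5 GB/s
```

The gap between the 25 GB/s line rate and the 12.5 GB/s quoted throughput is the usual difference between a link's raw speed and the data rate actually delivered to a workload.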
“These new chips are not meant to replace partnerships but to complement them,” Microsoft emphasizes. The company continues to collaborate with Nvidia and AMD on chips for Azure, with plans to integrate Nvidia’s latest “Hopper” GPU, the H200, and AMD’s MI300 in the future.
Notably, Microsoft’s custom chips extend beyond powering its own services. They play a crucial role in products such as GitHub Copilot and support generative AI from OpenAI, a company in which Microsoft has invested $11 billion. The collaboration between Microsoft and OpenAI aims to advance the adoption of generative AI in enterprises, a sector where Microsoft sees significant growth.
At the OpenAI developer conference, Microsoft CEO Satya Nadella committed to building “the best compute” for OpenAI, highlighting the strategic importance of these custom chips in advancing AI capabilities. The Maia 100, in particular, is set to power some of the largest AI workloads on Azure, including those related to OpenAI’s language models.
This move into custom silicon signifies Microsoft’s commitment to innovation in the AI space. The company has a history in silicon development, having collaborated on chips for Xbox more than two decades ago. The recent introduction of the Maia and Cobalt chips is part of Microsoft’s broader effort to optimize its hardware stack for the era of AI.
Rani Borkar, Corporate Vice President for Azure Hardware Systems and Infrastructure, emphasizes the significance of these chips, stating that Microsoft is “rethinking the cloud infrastructure for the era of AI.” The Maia 100, manufactured on a 5-nanometer TSMC process, represents Microsoft’s first complete liquid-cooled server processor. This innovation aims to enable higher server density at greater efficiency, fitting seamlessly into existing data center footprints.
Microsoft’s Maia and Cobalt chips mark the start of a series rather than a one-time development, with future iterations already planned. While exact specifications and benchmarks have yet to be disclosed, Microsoft is positioning itself to compete with industry leaders and offer customers diverse infrastructure choices.
The introduction of these custom chips aligns with Microsoft’s broader strategy to integrate hardware and software, providing end-to-end solutions optimized for AI workloads. As the company co-engineers chips for its cloud services, it emphasizes the importance of flexibility, performance, and efficiency, signaling a new era in the intersection of silicon and cloud computing.