OpenAI Partners with Broadcom, TSMC to Create Custom AI Chips

According to sources who spoke to Reuters, OpenAI is working with Broadcom and TSMC to build its first in-house chip designed to support its artificial intelligence systems. The company is also adding AMD chips alongside Nvidia's to meet its growing infrastructure needs.

Exploring New Chip Supply Avenues

OpenAI, the creator of ChatGPT, has explored a range of options to diversify its chip supply and reduce costs. These included evaluating in-house production and weighing capital-intensive plans to build its own chip-manufacturing facilities, known as "foundries." Given the high costs and long timelines involved, OpenAI has shelved those foundry ambitions for now, opting instead to concentrate on designing chips internally, according to sources who asked not to be named.

Strategic Partnerships and Mixed Sourcing Approach

For the first time, OpenAI's strategy of mixing internal chip design with external suppliers and industry alliances is being revealed. The approach resembles that of tech giants such as Amazon, Meta, Google, and Microsoft, which likewise combine in-house designs with outside partnerships to manage costs and secure steady chip supplies. As one of the largest buyers of chips, OpenAI's decision to source from multiple suppliers while developing its own custom chip could ripple across the tech sector.

Following the news, Broadcom's stock surged more than 4.5% on Tuesday, while AMD's stock, extending its gains from the morning session, finished 3.7% higher.

When approached, OpenAI, AMD, and TSMC declined to comment. Broadcom also did not immediately provide a response to requests for comments.

Addressing Growing Computational Demands

OpenAI, instrumental in advancing generative AI, requires vast computing resources to operate its AI systems. As a major customer for Nvidia’s GPUs, OpenAI uses AI chips for both training—where the AI system learns from data—and inference, which involves making predictions or decisions based on new inputs. Reuters previously reported on OpenAI’s chip development plans, while The Information highlighted its discussions with Broadcom and others.

Over recent months, OpenAI has collaborated with Broadcom to build an AI chip focused primarily on inference tasks. While training chips currently see higher demand, industry analysts suggest that inference chip requirements may surpass them as AI applications expand.

Broadcom’s role includes aiding tech companies like Google in refining chip designs and supplying key components to facilitate fast data transfer—a critical factor in AI systems requiring thousands of interconnected chips.

Custom Chip Plans and Talent Acquisition

OpenAI is still weighing whether to develop or acquire other components for its chip design and may pursue additional partnerships, according to two sources. The company has assembled a team of roughly 20 engineers, including former Google engineers Thomas Norrie and Richard Ho, who have previous experience developing Google's Tensor Processing Units (TPUs).

Through Broadcom, OpenAI has secured a manufacturing agreement with Taiwan Semiconductor Manufacturing Company (TSMC) to produce its first custom chip in 2026. However, sources note that this timeline may be subject to change.

Market Competition and AMD’s Entry

Nvidia GPUs currently hold more than 80% of the AI chip market. However, shortages and rising prices have pushed major customers such as Microsoft, Meta, and OpenAI to explore alternatives, both in-house and from other suppliers. OpenAI's plan to run AMD chips through Microsoft's Azure underscores AMD's push to win market share from Nvidia with its new MI300X chips. AMD projects $4.5 billion in AI chip revenue for 2024, with sales of the MI300X having begun in late 2023.

Cost Management and Supplier Relationships

Training AI models and operating services like ChatGPT is expensive. OpenAI expects a $5 billion loss this year on $3.7 billion in revenue, with computing costs, covering hardware, electricity, and cloud services, as its largest expense. In response, the company is working to optimize how it uses existing capacity and to diversify its supplier base. OpenAI has also been careful about hiring talent away from Nvidia, sources said, aiming to stay on good terms with the chipmaker and preserve access to its latest Blackwell chips.
