OpenAI Puts the Squeeze on TSMC with Massive AI Chip Deals

In recent weeks, OpenAI has inked massive deals with AMD and Broadcom to build enormous numbers of AI chips. While the staggering financial commitment has drawn much attention, the broader implications for the chip industry and its central bottleneck are even more critical to understand.

The core of these deals involves building unprecedented computing power. OpenAI's partnership with AMD will see the chip giant produce 6 gigawatts worth of GPUs, with the first deployments starting in late 2026. AMD expects this to generate tens of billions in future revenue.

Simultaneously, Broadcom will collaborate with OpenAI to build 10 gigawatts worth of custom AI accelerators and high-speed Ethernet systems. These networking components are vital for connecting the vast arrays of systems in OpenAI's planned data centers. This deployment is also scheduled to begin in the latter half of 2026.

According to Phil Burr, head of product at optical processor company Lumai, the term "designing" is a bit misleading when it comes to the Broadcom chips. He explains that Broadcom will essentially assemble a series of pre-designed intellectual property blocks to meet OpenAI's specific requirements. These custom accelerators are tailored for inference, the process of running already-trained AI models, which can lead to significant power savings or performance gains, but only for OpenAI's own workloads.

Burr also clarified why these deals are measured in gigawatts rather than a simple chip count: the final number of chips required is often not yet known. A rough estimate can be made by dividing the overall power goal by the power draw of a specific chip, then roughly halving the result to account for cooling, which typically consumes about one watt for every watt the chip draws.

For OpenAI, the benefits are multi-faceted. Building custom chips is cheaper than buying from Nvidia, whose products carry high margins.
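Burr's back-of-envelope conversion from a gigawatt figure to a chip count can be sketched as follows. The 1 kW per-accelerator draw used in the example is purely an illustrative assumption, not a figure from the article.

```python
def estimate_chip_count(power_budget_watts: float, chip_draw_watts: float) -> int:
    """Rough chip-count estimate for a given power budget.

    Half the budget is reserved for cooling, following the rule of
    thumb of about one watt of cooling per watt the chip consumes.
    """
    usable_watts = power_budget_watts / 2  # the other half goes to cooling
    return int(usable_watts // chip_draw_watts)

# Example: a 6 GW deployment of hypothetical 1 kW accelerators
chips = estimate_chip_count(6e9, 1_000)
print(f"{chips:,} chips")  # 3,000,000 chips
```

This is only an order-of-magnitude sketch; real deployments also budget power for networking, storage, and facility overhead, which would lower the count further.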
Tailored silicon also offers speed and performance advantages. Crucially, it provides diversity in supply, moving the company away from total reliance on a single provider.

However, this diversity is an illusion at the manufacturing level. No matter whose logo is on the chip, nearly all advanced AI silicon comes from the same place: Taiwan Semiconductor Manufacturing Company, or TSMC.

Gil Luria, Managing Director at DA Davidson, calls TSMC the greatest single point of failure for the entire global economy. A catastrophic disruption in Taiwan would not only halt AI progress but also impact mobile phone and global car sales.

TSMC's dominance stems from its mastery of extreme ultraviolet lithography and its incredibly high manufacturing yields, meaning more chips emerge from its fabs working correctly. This expertise, built over decades, creates a powerful lock-in effect. Companies like Apple and Nvidia design their chips specifically for TSMC's processes, making it difficult to simply shift production elsewhere.

This reliance makes TSMC a critical bottleneck. The company's capacity is famously tight, and any minor disruption can cause major delays. For instance, after a US export ban on AI chips to China was lifted, Nvidia reportedly faced a nine-month wait for production slots at TSMC.

TSMC is racing to expand, building new advanced fabs in Taiwan and a massive complex in Arizona. The US facility has grown in scope and is now planned to be only one process generation behind the leading-edge fabs in Taiwan. Despite these efforts, it will take many years for the world to reduce its dependence on TSMC. For now, the entire tech industry's future hinges on the stability of a single company on a single island, and OpenAI's blockbuster deals have just tightened the vise.


