OpenAI Teams with Broadcom to Build Custom AI Chips

OpenAI has signed a multi-billion-dollar deal with Broadcom to design custom AI chips for its data centers, adding to existing Nvidia and AMD commitments as the company prepares for a projected 250 GW of compute demand.


OpenAI is moving closer to producing its own custom AI processors. The company has signed a multi-billion-dollar agreement with Broadcom to design bespoke chips that it plans to deploy across its data centers as its compute needs explode.

A major bet on bespoke silicon

According to reports, OpenAI and Broadcom began collaborating around 18 months ago and have now formalized a deal aimed at supplying roughly 10 gigawatts of chip capacity. The agreement — described by the Wall Street Journal as worth several billion dollars — would fund custom hardware design and long-term supply for OpenAI’s growing infrastructure.

Timeline, scale and what's coming to data centers

Broadcom is expected to start installing hardware racks in the second half of 2026, with chip design and production possibly continuing through the end of 2029. That staged rollout mirrors how cloud and AI infrastructure projects ramp: early deployments first, full-scale capacity later.

  • Deal scope: reported multi-billion-dollar contract for about 10 GW of chips.
  • Deployment: rack installations begin H2 2026.
  • Design/production: likely to extend toward 2029.

How this fits with Nvidia and AMD partnerships

OpenAI has been securing capacity from multiple suppliers. Nvidia has reportedly committed a large-scale infrastructure investment linked to roughly 10 GW of compute, while AMD is said to have agreed to supply about 6 GW. The AMD deal is reportedly worth tens of billions of dollars and may also position OpenAI to take a meaningful equity stake in the company.

Why multiple partners? Relying on several vendors reduces supply risk, diversifies architecture choices, and helps OpenAI scale faster than any single supplier could support on its own.

Ambitious demand: 250 GW over eight years

CEO Sam Altman has told staff that OpenAI expects to need roughly 250 gigawatts of compute capacity over the next eight years. To put that in perspective, 250 GW is about one-fifth of the entire U.S. electrical generation capacity (~1,200 GW). Building that much AI infrastructure would likely cost on the order of $10 trillion, far beyond the resources of any single hardware partner.
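The back-of-envelope arithmetic above can be checked directly. The sketch below uses only the figures cited in this article (250 GW target, ~1,200 GW U.S. generation capacity, ~$10 trillion build-out); the variable names are illustrative, not from any official source.

```python
# Rough check of the capacity and cost figures cited above.
us_generation_gw = 1200   # approximate U.S. electrical generation capacity
openai_target_gw = 250    # reported eight-year compute target
total_cost_usd = 10e12    # rough estimate of the full build-out cost

# Share of U.S. generation capacity the target represents
share = openai_target_gw / us_generation_gw
print(f"{share:.0%} of U.S. generation capacity")  # about one-fifth

# Implied cost per gigawatt under the ~$10 trillion estimate
cost_per_gw = total_cost_usd / openai_target_gw
print(f"${cost_per_gw / 1e9:.0f} billion per GW")
```

At these numbers the target works out to roughly 21% of U.S. generation capacity and about $40 billion per gigawatt, which illustrates why no single hardware partner could fund the expansion.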

Even with large vendor commitments and rising revenues (media reports put OpenAI's current-year revenue at about $13 billion), the company will need innovative financing and new revenue models to fund such an enormous expansion.

What to watch next

Keep an eye on deployment milestones from Broadcom (rack rollouts in 2026) and any further disclosure about capacity deliveries from Nvidia and AMD. The coming years will show whether custom AI silicon and multi-vendor supply strategies can keep pace with escalating demand for generative AI services.
