OpenAI’s eyes may have been too big for its belly, as the generative AI company downsizes its ambitious $7 trillion chip foundry plans, opting instead to partner with TSMC, AMD, and Broadcom for future chip production.
Before investors started throwing around terms like return on investment, yield, and risk tolerance, OpenAI toyed with the idea of building its own global foundry network, projected to cost up to $7 trillion, emboldened by the billions the company was vacuuming up with relative ease.
By building out its own, admittedly expensive, foundry, OpenAI would be in the driver’s seat when it came to how quickly and efficiently the company could scale its large language models and deliver products and services to the government, manufacturing, education, and research sectors.
According to a new Reuters report, OpenAI now seems to be taking a more pragmatic approach to building the custom chips intended to power its server-side AI workloads.
More importantly, OpenAI would be diversifying its chip supply chain to reduce overall costs, and positioning itself to compete soon rather than letting NVIDIA soak up market share and lucrative contracts while OpenAI waited for its massive foundry ambitions to become reality.
With mounting pressure from investors demanding that AI hype men start explaining the tangible returns on the billions already sunk into generative technologies, OpenAI is taking a more financially solvent approach to producing chips for its AI platform and services.
OpenAI’s new plan lets its partners handle the manufacturing and design work on custom chips, following the likes of well-run companies such as Apple, Microsoft, Amazon, and Google.
Apple has already reserved its own TSMC chip capacity for 2026, which is when the Taiwan Semiconductor Manufacturing Company plans to have its more efficient and faster A16 process node in place.
OpenAI is looking to leverage TSMC’s expected A16 process node to build faster, lower-power chips for future model training at scale, while using its Broadcom partnership to craft specialized chips for inference, and leaning on its AMD deal to diversify its chip supply and reduce overall costs.
OpenAI’s Broadcom-produced chips would handle real-time AI computations, focusing on the response times of AI models rather than on training. OpenAI believes demand for AI inference will eventually outpace demand for training, and that its Broadcom partnership will help meet that demand.
In the meantime, OpenAI’s roughly 20-person chip team is looking to pick up customers NVIDIA is losing to rising chip manufacturing costs and shortage-driven delays.
OpenAI is expected to post a $5 billion loss from operating its AI models and training against just $3.7 billion in revenue.
Abandoning a $7 trillion upfront cash sink and going the partnership route makes a lot more sense, and it offers at least the veneer of financial responsibility from a company historically known for setting money on fire.


