OpenAI Chooses Independent Path for AI Chips, Leaning on Nvidia & AMD While Forging Its Own

OpenAI has made a definitive statement: it will not incorporate Google's artificial intelligence chips, specifically its Tensor Processing Units (TPUs), into its products. This clarification comes amid swirling reports suggesting a deeper integration with Google's hardware infrastructure. While OpenAI confirmed early testing of Google TPUs, a spokesperson made clear there are no concrete plans for large-scale implementation.

Despite the fierce competition in the AI domain, OpenAI continues to use Google Cloud for general cloud infrastructure. This intriguing dynamic, in which a direct competitor leverages Google's general cloud resources but shuns its specialized AI hardware, continues to capture significant attention within the tech industry.

Historically, Google kept its powerful TPU chips primarily for internal projects. However, the company has recently expanded access, making these chips available to external clients. Notable companies now benefiting from Google's AI chips include Apple, Anthropic, and Safe Superintelligence. Crucially, OpenAI has confirmed it is not among these external users.

Instead, OpenAI's current operations are powered by chips manufactured by Nvidia and AMD. Beyond these established partnerships, OpenAI is aggressively pursuing the development of its own custom AI chip. This ambitious in-house project is expected to reach the "tape-out" phase – the point where the chip design is finalized and sent for manufacturing – by the end of the year. This persistent effort underscores OpenAI's commitment to developing proprietary hardware and solidifying its independent technological foundation.
