How Sam Altman’s OpenAI seems to be going all out to ‘prove’ Nvidia is not leaving us

Amid speculation about Nvidia’s role in the company’s future, Sam Altman-led OpenAI has said that its partnership with Nvidia remains strong. In a recent LinkedIn post, OpenAI’s head of compute infrastructure, Sachin Katti, said Nvidia remains the AI company’s “most important partner for both training and inference”, describing the partnership as “foundational” rather than a typical supplier arrangement. According to the post, OpenAI’s entire compute fleet currently runs on Nvidia GPUs. “This is not a vendor relationship,” Katti said in the post, adding that OpenAI and Nvidia work together through “deep, ongoing co-design.” The executive noted that OpenAI’s frontier AI models are built through multi-year collaboration on both hardware and model engineering.


The post also outlined how quickly OpenAI’s computing needs have grown in recent years. Katti said that the AI company has scaled its available compute from 0.2 gigawatts in 2023 to 0.6 gigawatts in 2024, and to about 1.9 gigawatts in 2025.

Here’s the full post shared by OpenAI’s head of compute infrastructure on LinkedIn

Our partnership with Nvidia is foundational. Nvidia is our most important partner for both training and inference, and our entire compute fleet runs on Nvidia GPUs. This is not a vendor relationship. It is deep, ongoing co-design. We build systems together, and our frontier models are the product of multi-year hardware and model engineering done side by side.

We scaled available compute from 0.2 GW in 2023 to 0.6 GW in 2024 to roughly 1.9 GW in 2025, and that pace is accelerating. Inference demand is growing exponentially with more users, more agents, and more always-on workloads. Nvidia continues to set the bar for performance, efficiency, and reliability for both training and inference.

The demand curve is unmistakable. The world needs orders of magnitude more compute.

That’s why we are anchoring on Nvidia as the core of our training and inference stack, while deliberately expanding the ecosystem around it through partnerships with Cerebras, AMD and Broadcom. This approach lets us move faster, deploy more broadly, and support the explosion of real-world use cases without sacrificing performance or reliability. The outcome is simple and durable: infrastructure that can carry frontier capability all the way into production, at global scale.
