How Sam Altman’s OpenAI seems to be going all out to ‘prove’ Nvidia is not leaving us



Amid speculation about Nvidia’s role in the company’s future, Sam Altman-led OpenAI has said that its partnership with Nvidia remains strong. In a recent LinkedIn post, OpenAI’s head of compute infrastructure Sachin Katti said Nvidia remains the AI company’s “most important partner for both training and inference”, describing the partnership as “foundational” rather than a typical supplier arrangement. According to the post, OpenAI’s entire compute fleet currently runs on Nvidia GPUs.

“This is not a vendor relationship,” Katti said in the post, adding that OpenAI and Nvidia work together through “deep, ongoing co-design.” The executive noted that OpenAI’s frontier AI models are built through multi-year collaboration on both hardware and model engineering.


The post also outlined how quickly OpenAI’s computing needs have grown in recent years. Katti said that the AI company has scaled its available compute from 0.2 gigawatts in 2023 to 0.6 gigawatts in 2024, and to about 1.9 gigawatts in 2025.
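For a sense of scale, those figures imply roughly threefold growth each year. The short sketch below simply recomputes the year-over-year multiples from the numbers quoted in the post; the script and its variable names are illustrative, not anything published by OpenAI.

```python
# Year-over-year growth in OpenAI's available compute, using the figures
# quoted in Katti's post: 0.2 GW (2023), 0.6 GW (2024), ~1.9 GW (2025).
compute_gw = {2023: 0.2, 2024: 0.6, 2025: 1.9}

years = sorted(compute_gw)
for prev, curr in zip(years, years[1:]):
    factor = compute_gw[curr] / compute_gw[prev]
    print(f"{prev} -> {curr}: {factor:.1f}x")

# Output:
# 2023 -> 2024: 3.0x
# 2024 -> 2025: 3.2x
```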

Here’s the full post shared by OpenAI’s head of compute infrastructure on LinkedIn

Our partnership with Nvidia is foundational. Nvidia is our most important partner for both training and inference, and our entire compute fleet runs on Nvidia GPUs. This is not a vendor relationship. It is deep, ongoing co-design. We build systems together, and our frontier models are the product of multi-year hardware and model engineering done side by side.

We scaled available compute from 0.2 GW in 2023 to 0.6 GW in 2024 to roughly 1.9 GW in 2025, and that pace is accelerating. Inference demand is growing exponentially with more users, more agents, and more always-on workloads. Nvidia continues to set the bar for performance, efficiency, and reliability for both training and inference.

The demand curve is unmistakable. The world needs orders of magnitude more compute.

That’s why we are anchoring on Nvidia as the core of our training and inference stack, while deliberately expanding the ecosystem around it through partnerships with Cerebras, AMD and Broadcom. This approach lets us move faster, deploy more broadly, and support the explosion of real-world use cases without sacrificing performance or reliability. The outcome is simple and durable: infrastructure that can carry frontier capability all the way into production, at global scale.


