Months after CEO Sam Altman called China's DeepSeek R1 model 'impressive', OpenAI sends memo to US government saying Chinese company is 'cheating', it is using …

DeepSeek's R1 model debuted in January last year. After its launch, OpenAI CEO Sam Altman shared a post on X calling the AI model "impressive", adding that "it's legit invigorating to have a new competitor". A year later, the ChatGPT maker is accusing the Chinese AI company of cheating. In a letter to the US House Select Committee on China, OpenAI accused DeepSeek of "free-riding" on the capabilities developed by OpenAI and other US frontier labs, replicating their models and using their outputs for its own training. The AI company said it has observed accounts associated with DeepSeek employees developing methods to circumvent OpenAI's access restrictions and access its models through obfuscated third-party routers and other means that mask their source. OpenAI alleges that DeepSeek is copying AI models through distillation – a technique that trains a smaller model on the outputs of a more advanced system. Here is the memo it sent to US lawmakers:
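Distillation itself is a standard, well-documented machine learning technique. As a rough illustration of the underlying idea – not of OpenAI's or DeepSeek's actual pipelines – here is a minimal, self-contained PyTorch sketch with toy models and random stand-in data: a small "student" network is trained to match the softened output distribution of a larger "teacher".

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "teacher" (larger) and "student" (smaller) networks, for illustration only.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Linear(16, 10)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution

for step in range(200):
    x = torch.randn(32, 16)              # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)      # the "outputs of the more advanced system"
    student_logits = student(x)
    # Train the student to match the teacher's softened output distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At frontier scale, the toy teacher above is replaced by (prompt, response) pairs harvested from a powerful model's API – the access pattern OpenAI says it is policing.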

Memo that OpenAI sent to US lawmakers

OpenAI believes the best future is one in which we move forward with democratic AI – AI that is shaped by the principles America has always stood for. In advancing democratic AI, America is competing with a Chinese Communist Party (CCP) determined to become the global leader in AI by 2030. That's one reason why the release of DeepSeek's R1 model at the Lunar New Year one year ago was so noteworthy: as a gauge of the state of this competition.

In March 2025, in response to this Committee's request, OpenAI provided an assessment of DeepSeek's distillation techniques. In the year since, OpenAI has taken steps to protect and harden our models against distillation. Ahead of DeepSeek's expected Lunar New Year release of a new, more powerful model, we are providing the Committee with an updated assessment of its evolving distillation tactics, as well as other gauges of CCP progress toward its 2030 goal, and OpenAI's efforts on these fronts to advance American-led, democratic AI.

Distillation: DeepSeek's next model (whatever its form) should be understood in the context of its ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs. In this memo, we detail:

• Activity we've observed on our platform that is indicative of ongoing attempts by DeepSeek to distill frontier models of OpenAI and other US frontier labs, including through new, obfuscated methods.
• How DeepSeek reflects CCP censorship and control of information in its responses, including examples.
• How China is providing significant state support for its frontier labs and the underlying energy needed to scale compute.
• Steps we have taken in the last year to address adversarial distillation, and areas where further partnership with the US government would be beneficial.

Energy: The scarcest resource in AI is compute, i.e., power plus chips. Sustaining the American advantage on AI increasingly depends on whether we can reliably generate and deliver power at scale in order to fulfill our compute needs. New data on the scale of China's recent gains in energy generation underscores our "electron gap":

• In 2025, China added 543 GW of new power capacity – 10X the amount of electricity added by the US, and over 100 GW more than it added in 2024.[1]
• In 2024, as we recently highlighted, China added 429 GW of new power capacity – an amount that was more than one-third of the US grid and more than half of global electricity growth. The United States added 51 GW.
• Since 2021, i.e., in less than five years, China has added more grid capacity than the US has ever built.[2]

OpenAI believes that infrastructure is destiny: chip development, power generation, transmission, and data center capacity will determine which countries can train and deploy frontier systems. This is why we're investing through our Stargate Project to expand US AI infrastructure to 10 GW by 2029, and just one year in, we're already over halfway toward that goal.

Compute: Investment in compute powers research and step-change gains in model capability. For OpenAI, looking at the past three years, our ability to innovate and serve more people – the vast majority of our over 800 million regular users use our technology for free – has tracked with available compute:

• OpenAI's available compute grew 9.5X from 2023 to 2025 (3X year-over-year): 0.2 GW in 2023, 0.6 GW in 2024, and ~1.9 GW in 2025. Revenue followed the same trajectory, growing 3X YoY.
• At the same time, limited compute continues to delay our ability to deliver new, in-demand features.

This is unprecedented growth at this scale, and we believe more available compute during this period would have driven even greater innovation and faster adoption. As our Stargate efforts show, we are committed to expanding compute to drive further progress. But we are equally focused on ensuring a level playing field, one where the People's Republic of China (PRC) can't advance autocratic AI by appropriating and repackaging American innovation.

OpenAI believes the best defense is offense: the best way to ward off a fast-oncoming PRC making headway around the world for autocratic AI is continued investment in American AI leadership and global adoption of responsibly developed, democratic AI. We continue to lead in responsible frontier model innovation, invest across the full AI stack to train and deploy our systems safely, and make powerful AI tools available for free. Adoption of our latest agentic coding model, GPT-5.3-Codex, is up 60% week-over-week. Adoption of ChatGPT is at ~10% monthly growth.

We stand ready to provide a closed-door briefing to the Committee upon request.

What we see from DeepSeek and other Chinese LLM Providers

Adversarial Distillation Attempts

The majority of adversarial distillation activity we've observed on our platform appears to originate from China, and occasionally from Russia. We have observed usage patterns from several major Chinese LLM providers and some university research lab usage that are consistent with, and would be highly beneficial for, creating competitor models through distillation.

Over the past year, we have seen evolving but persistent methods of distillation against our models. We believe these approaches to distillation are changing in part because we have added new methods to protect our models.

Specifically, our review indicates that DeepSeek has continued to pursue activities consistent with adversarial distillation targeting OpenAI and other US frontier labs. We have observed accounts associated with DeepSeek employees developing methods to circumvent OpenAI's access restrictions and access models through obfuscated third-party routers and other ways that mask their source. We also know that DeepSeek employees developed code to access US AI models and obtain outputs for distillation in programmatic ways. We believe that DeepSeek also uses third-party routers to access frontier models from other US labs.

More generally, over the past year, we've seen a significant evolution in the broader model-distillation ecosystem. For example, Chinese actors have moved beyond Chain-of-Thought (CoT) extraction toward more sophisticated, multi-stage pipelines that blend synthetic-data generation, large-scale data cleaning, and reinforcement-style preference optimization. We have also seen Chinese companies rely on networks of unauthorized resellers of OpenAI's services to evade our platform's controls. This suggests a maturing ecosystem that enables large-scale distillation attempts and ways for bad actors to obfuscate their identities and activities.

It's important to note that there are legitimate use cases for distillation as a technique used to train smaller models using outputs from more advanced systems, and OpenAI provides responsible distillation pathways for developers.
However, we do not allow our outputs to be used to create imitation frontier AI models that replicate our capabilities.

Furthermore, when capabilities are copied through adversarial distillation without the corresponding safety governance and mitigations, the result is cheaper-to-scale systems, where subtle deficiencies may only become obvious after deployment, when failures are hardest to contain. We continue to see signs that DeepSeek models lack meaningful guardrails against dangerous outputs in high-risk domains like chemistry and biology, or offer limited protections for copyrighted material. Despite signing China's voluntary "Artificial Intelligence Safety Commitments," DeepSeek still has not published a clear safety framework or evidence of robust testing and independent red-teaming, leaving limited visibility into jailbreak resistance, misuse potential, and other high-impact failure modes.

Continued Model Censorship and Pro-CCP Bias

Our review of DeepSeek's behavior indicates that there are overlapping forms of censorship at work, and DeepSeek's pro-CCP bias appears to be more severe in recent model releases. This censorship is applied not only within the PRC, but also to global users of DeepSeek's product.

Censorship is likely being implemented in multiple ways, including both model post-training and the use of monitoring tools or classifiers that review responses for alignment with CCP policy and, where required, delete them after they have been presented to the user. On topics sensitive to the CCP, such as Tiananmen Square or Taiwan independence, DeepSeek frequently issues outright refusals. That form of censorship is trained into the model's weights and limits what the model will say regardless of where it is deployed. The model will avoid negative or critical language about the CCP, use positive language about the PRC's efforts and achievements, and use negative language when discussing the US or the West.

Beyond outright refusals, DeepSeek often shows framing and terminology bias, offering more confident, detailed responses for PRC-aligned narratives while deflecting, hedging, or re-centering on Western faults when prompts invite criticism of the CCP.

There also appears to be a dynamic layer of censorship in the DeepSeek online chat interface that operates beyond the base model itself. In certain cases, DeepSeek's chat initially generates a substantive response to politically sensitive prompts, then freezes, deletes the output, and issues a refusal. This pattern suggests that a secondary monitoring or classification system may be reviewing generated content in real time and suppressing responses that conflict with CCP political requirements. Topics that have been observed in this category include human rights in Xinjiang; sovereignty in the South China Sea; and the role of Winnie the Pooh in online discourse in China. Below is a sample screenshot of content before it was deleted and after:

[Screenshot not reproduced here.]

On some occasions, DeepSeek refuses to give an answer that it deems "harmful". When asked why the question is harmful, it has been observed to explain its "safety principles", then delete them. Below are three sample snapshots of a conversation in which DeepSeek was asked about the Falun Gong, refused to answer, and was asked to explain. The response self-deleted immediately after the completion of the word, "Falun":

[Snapshots not reproduced here.]
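The "respond, then retract" behavior the memo describes maps onto a simple architecture: a generator streams tokens to the user while a separate moderation layer scores the accumulating text and withdraws anything that trips a policy filter. The following is a hypothetical Python sketch of that pattern; the blocklist, function names, and refusal text are illustrative assumptions, not anyone's actual system.

```python
# Hypothetical "respond, then retract" moderation layer: tokens stream to the
# user while a secondary check scores the accumulating text; if the text trips
# a policy filter, the partially shown output is withdrawn and replaced with a
# refusal. The blocklist stands in for a real classifier.

BLOCKED_TERMS = {"sensitive-topic-a", "sensitive-topic-b"}  # placeholder policy

def policy_flag(text: str) -> bool:
    """Stand-in for a secondary monitoring classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def stream_with_monitor(token_stream) -> str:
    shown: list[str] = []
    for token in token_stream:
        shown.append(token)
        print(token, end="", flush=True)  # user briefly sees partial output
        if policy_flag("".join(shown)):
            # Retract: in a chat UI this would delete the rendered message.
            print("\n[response withdrawn]")
            return "Sorry, I can't discuss that."
    print()
    return "".join(shown)

# Example: the first stream survives; the second is retracted mid-stream.
stream_with_monitor(["The ", "weather ", "is ", "mild."])
stream_with_monitor(["Regarding ", "sensitive-topic-a ", "..."])
```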
State Support for the AI Stack

DeepSeek operates within a PRC ecosystem that provides structural, state-backed support across the AI stack. At the Fourth Plenum in October 2025, the CCP elevated AI as central to national modernization, reinforcing a strategy backed by large subsidies and coordinated support for national champions. Through industrial policy and preferential procurement, this ecosystem enables DeepSeek to scale rapidly and export low-cost AI infrastructure and CCP-aligned model ecosystems abroad – embedding Chinese technical standards and content norms in emerging markets.

Amendments to the Cybersecurity Law effective January 1, 2026 further formalize state support for foundational AI research, algorithmic development, and expanded computing and training data infrastructure. The same state-enabled scale that drives commercial competitiveness also strengthens the ability of Chinese AI labs to support the People's Liberation Army (PLA) and the PRC's public security systems through advanced analytics and decision-support capabilities.

What OpenAI Is Doing

Adversarial distillation poses serious cost, safety, commercial, and strategic risks to the US. It lowers the barrier to training advanced models by letting adversaries reuse frontier model outputs instead of investing in their own R&D and compute; it can strip away alignment and abuse safeguards, resulting in less secure systems; and by blending outputs from multiple US LLMs, adversaries could replicate and even combine frontier capabilities in ways that surpass any single teacher model.

Adding More Protections

We continue to invest in stronger guardrails and detection systems to prevent unauthorized distillation and misuse of model outputs. When we detect circumvention, we take action and reinforce our defenses. As distillation techniques have evolved into more sophisticated, multi-stage pipelines, our approach has shifted from focusing on isolated CoT extraction to addressing the full distillation lifecycle and preventing recidivism across accounts and infrastructure.

We proactively remove users who appear to be attempting to distill our models in order to develop models that compete with OpenAI's. Detection relies on layered methods, including heuristics, machine learning, and manual review. To identify reinforcement learning–style grading behavior, we deploy offline and real-time classifiers that monitor ranking and scoring patterns. To flag synthetic data generation, we assess scale and prompt diversity across accounts. To prevent CoT extraction, our models are trained not to reveal reasoning traces, and additional input and output classifiers monitor for likely leakage, including in jailbreak scenarios. We also leverage human and automated investigators to identify activity linked to known adversarial actors.

When we identify distillation that violates our terms of service, we take enforcement measures including banning accounts. This multi-layered strategy enables us to adapt as techniques evolve and to disrupt activity across the full account lifecycle, rather than reacting to isolated tactics.
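One of the signals described above – flagging accounts whose request volume and prompt diversity look like synthetic-data generation at scale – can be illustrated with a toy heuristic. This is a hedged sketch under assumed thresholds; the function names and numbers are invented for illustration and are not OpenAI's detection system.

```python
# Toy heuristic for one distillation signal: extreme request volume combined
# with unusually diverse prompts (bulk synthetic-data harvesting tends to use
# mostly unique prompts, unlike template-heavy production apps).
# All thresholds are illustrative assumptions.

def prompt_diversity(prompts: list[str]) -> float:
    """Share of unique prompts in a sample (1.0 = every prompt is different)."""
    return len(set(prompts)) / len(prompts) if prompts else 0.0

def looks_like_distillation(daily_requests: int, sampled_prompts: list[str],
                            volume_threshold: int = 100_000,
                            diversity_threshold: float = 0.8) -> bool:
    """Flag accounts that exceed both the volume and diversity thresholds."""
    return (daily_requests > volume_threshold
            and prompt_diversity(sampled_prompts) > diversity_threshold)

# Example with made-up numbers: a bulk account issuing 250k unrelated prompts
# per day trips both signals and would be queued for closer review.
sample = ["explain CRISPR", "translate this haiku",
          "prove the chain rule", "refactor this SQL"]
print(looks_like_distillation(250_000, sample))  # True
```

A production system would layer signals like this with classifiers and manual review, as the memo notes; a single heuristic is easy to evade on its own.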
Protecting American Technology

Protecting American technology across the entire AI stack and preventing diversion to the CCP ecosystem is essential to sustaining US leadership. The Trump Administration's updated approach to the export of chips to China reflected a clear "domestic-first" logic. We commend the efforts of US chip suppliers to invest in increasing capacity for US AI labs, and their array of partnerships with such labs to support the further buildout of US AI infrastructure.

As we continue to design and develop our own silicon, our focus is also on protecting American technology. For us, this means securely deploying our own chips in the US and in high-trust environments with strong security measures – consistent with the approach of our OpenAI for Countries initiative to promote adoption of democratic AI by US allies and partner countries around the world.

Advocating for an Ecosystem Approach

OpenAI supports an "ecosystem security" approach to frontier model distillation protection: it is not enough for any one lab to harden its protection, because adversaries will simply default to the least protected provider. While strengthening OpenAI's safeguards improves protection on our own platforms, durable risk reduction requires protections across frontier labs, API routers, distributors, and infrastructure providers. A more consistent industry approach reduces the "path of least resistance" that determined adversaries will otherwise exploit.

Specific areas where US government policy may be helpful include:

• Expanding the shared operating picture through intelligence and information sharing.
• Working with industry to establish norms and best practices on distillation defenses.
• Addressing API router loopholes.
• Restricting adversary access to US compute, cloud, payment, and web infrastructure.
• Encouraging allies to adopt comparable standards.

Continuing to Invest in Frontier Models and a Broader Safety Ecosystem

OpenAI continues to lead in frontier model innovation, iterating and scaling capabilities in a responsible manner. We deploy models with safety features that balance innovation with risk mitigation, and we invest across the full AI stack to train and deploy systems safely.

Our release of GPT-5.3-Codex, our most capable agentic coding model to date, reflects this approach. In tandem with the release, we are strengthening the cybersecurity ecosystem and deploying our most comprehensive cybersecurity safety stack yet, including dual-use safety training, automated monitoring and detection, and trusted access mechanisms for advanced cyber capabilities.

Recent growth signals continued momentum: Codex is up 60% week-over-week to 1.3 million weekly active users, and ChatGPT has returned to ~10% monthly growth. Sustained investment in frontier research is essential to maintaining US advantage. Democratic AI must be broadly accessible, and OpenAI is committed to delivering powerful tools that help individuals, businesses, researchers, and governments innovate responsibly.

What more we can do together

Increase awareness of PRC free-riding that undermines American competitiveness

As stated above, we assess that Chinese LLMs are actively cutting corners when it comes to safely training and deploying new models. There is an opportunity to impose a reputational cost on Chinese LLMs – and the CCP – by increasing awareness of how these labs are operating:

• Circumvention of safeguards.
• Access to US model outputs.
• State-backed compute accumulation.
• Regulatory asymmetries.
• Lax pre-deployment safety measures.

Revealing how PRC labs operate is essential to informing the public and shaping policy responses.

Seize the Generational Opportunity to Invest in the Entire AI Stack

The US must invest across the entire AI stack – from supercomputers and data centers to advanced semiconductor manufacturing, power generation and transmission, and the talent pipelines that sustain long-term leadership. It should also strengthen public data resources and build secure, scalable pathways for government adoption so democratic institutions can use frontier AI responsibly and effectively. Sustained, strategic investment in compute, infrastructure, and model innovation will determine whether democratic AI prevails in this competition.

Global AI Competition: A technological wave with geopolitical consequences

At its core, this is a contest between democratic and autocratic models of AI. We believe AI should be built on democratic rails: advancing personal freedom, expanding economic opportunity, and embedding safeguards that prevent its use as a tool of state control.

Democratic AI means broad access to advanced tools, user agency and literacy, a balance between safety and innovation, and transparency and accountability. Autocratic systems, by contrast, are developed within regimes that mandate censorship, alignment with state narratives, and cooperation with government authorities.

As the 2025 National Security Strategy makes clear, technology leadership, resilient supply chains, energy capacity, and industrial strength underpin American security and prosperity. AI has the potential to accelerate discovery, planning, logistics, forecasting, and decision-making across military, economic, and diplomatic domains. The competition spans compute, semiconductors, power generation, talent, model innovation, and government adoption. The advantages will compound for those who integrate the technology, and the central question is whether democracies will continue to build and scale ahead of centralized competitors.

On the eve of the AI Summit in India, the world's largest democracy, we remain confident that open markets, world-class talent, and responsible innovation will sustain democratic leadership. We are committed to working closely with the US government and like-minded partners to protect national security, strengthen technological leadership, and ensure that the future of AI reflects democratic principles.



