OpenAI CEO Sam Altman makes it clear to employees at Townhall: You do not get to choose how…


OpenAI CEO Sam Altman asserted that employees have no say in US military operations, even after the company signed a Pentagon deal for classified AI deployment. The deal followed rival Anthropic’s blacklisting, sparking internal and external criticism over its timing and optics. Altman emphasized that the Pentagon respects OpenAI’s safety measures but retains operational control, while acknowledging that competitors might offer fewer restrictions.

OpenAI CEO Sam Altman had a blunt message for his employees this week: when it comes to US military operations, the company simply does not get a vote. “So maybe you think the Iran strike was good and the Venezuela invasion was bad,” Altman told staff at an all-hands meeting on Tuesday, according to a partial transcript reviewed by CNBC. “You don’t get to weigh in on that.”

The meeting came four days after Altman announced, late on a Friday evening, that OpenAI had struck a deal with the Pentagon to deploy its AI models on classified networks—a deal that landed just hours after rival Anthropic was formally blacklisted by the Department of Defense and hours before the US and Israel launched strikes on Iran.

OpenAI’s Pentagon deal drew immediate backlash—inside and outside the company

The timing could not have been more loaded. Anthropic had just been designated a “supply chain risk to national security” by Defence Secretary Pete Hegseth—an unprecedented label for an American company—after it refused to drop guardrails against AI being used for mass domestic surveillance of Americans or fully autonomous weapons. OpenAI stepped in almost immediately, announcing its own classified deployment deal before the dust had even settled.

The optics weren’t great. Altman himself admitted as much. “We shouldn’t have rushed to get this out on Friday,” he said in a post on X over the weekend. “The issues are super complex, and demand clear communication.” In the all-hands, he acknowledged it looked “opportunistic and sloppy,” according to WSJ reporting on the meeting.

The backlash was real. Some OpenAI employees publicly criticised the move. Dozens had, just days earlier, signed an open letter standing in solidarity with Anthropic’s red lines. The AI safety community was alarmed. And critics pointed out that OpenAI’s contract language, while it included prohibitions on domestic surveillance and autonomous weapons in principle, ultimately deferred to the legal framework—a framework that, post-Snowden, many argue has already been stretched to accommodate mass surveillance programs like PRISM.

Altman says Pentagon respects OpenAI’s safety stack—but operational calls belong to Hegseth

Still, Altman drew clear lines at Tuesday’s meeting. He told employees that the Pentagon respects OpenAI’s technical expertise, wants input on where its models are a good fit, and has agreed to let the company build and maintain the safety stack it deems appropriate, according to a person familiar with the matter who spoke to CNBC on condition of anonymity. Cleared OpenAI engineers will be embedded with government teams, and safety researchers will remain in the loop.

But Altman was equally clear that day-to-day military decisions are not OpenAI’s to make. Secretary Pete Hegseth runs those calls—not Sam Altman.

He also addressed a competitive reality that few at the company wanted to hear. “I believe we will hopefully have the best models that will encourage the government to be willing to work with us, even if our safety stack annoys them,” Altman said. “But there will be at least one other actor, which I assume will be xAI, which effectively will say ‘We’ll do whatever you want.’”

Altman now eyeing NATO classified networks as OpenAI doubles down on defence

And OpenAI is already looking beyond the Pentagon. The WSJ reported that Altman told staff the company is now exploring a contract to deploy on all NATO classified networks—a move that would make OpenAI a foundational AI provider for the Western military alliance. Apple received NATO clearance for its consumer devices just last month, but a full classified deployment of frontier AI models would be a different proposition entirely.

Meanwhile, Anthropic’s Claude was reportedly used in the Iran strikes over the weekend and in the January operation that resulted in the capture of ousted Venezuelan leader Nicolás Maduro—suggesting the classified handoff from Anthropic to OpenAI and xAI is still very much a work in progress.

Altman has said he reiterated to the Pentagon that Anthropic should not be labelled a supply chain risk, and that the same deal terms should be made available to all AI companies. Whether that offer leads anywhere—or whether the standoff ends in court—remains to be seen.


