$340 billion Anthropic, which wiped trillions from stock markets worldwide, has the source code of its most important tool leaked on the internet


AI firm Anthropic suffered a significant source code leak of its Claude Code agent, exposing unreleased features like an always-on AI and a pet system. This marks the third such incident for the company, which is reportedly preparing for a massive $380 billion IPO. The leak offers a rare glimpse into the company’s development practices.

Anthropic, the AI company whose product updates have repeatedly sent global stock markets into a spin, is now dealing with an embarrassing leak of its own making. The full source code of Claude Code, its flagship AI coding agent, accidentally made its way onto the public internet via an npm package that shipped with a source map file it shouldn't have. The leak exposed roughly 2,200 files and 30MB of TypeScript. It reportedly wasn't the first time, either: according to engineers who dug through the code, this is at least the third time Anthropic has made this exact mistake.

Hidden features, a pet system, and a daemon that never sleeps

Developers who got their hands on the dump found more than just clean engineering. Buried inside were several unreleased features that Anthropic had been quietly building behind compile-time feature flags. One, codenamed Kairos, appears to be an always-on background agent with memory consolidation, essentially a version of Claude that never fully switches off. Another is a full companion pet system called Buddy, complete with 18 species, rarity tiers, shiny variants, and stat distributions. There is also an Undercover Mode, described as auto-activating for Anthropic employees on public repos, which strips AI attribution from commits with no visible off switch.

Coordinator Mode turns Claude into an orchestrator managing parallel worker agents. Auto Mode uses an AI classifier to silently approve tool permissions, removing the usual confirmation prompts.
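For readers unfamiliar with the pattern, a compile-time feature flag is typically a build-time constant that a bundler substitutes and dead-code-eliminates, so disabled features never ship in the bundle. The sketch below is purely illustrative; the flag name `__FEATURE_KAIROS__` and the bundler setup are assumptions, not details from the leaked code:

```typescript
// A bundler such as esbuild can replace __FEATURE_KAIROS__ at build time
// (e.g. via its `define` option); the dead branch is then stripped from the
// shipped bundle. When the flag is never defined, the feature stays off.
declare const __FEATURE_KAIROS__: boolean;

function maybeStartKairos(): string {
  // The typeof guard keeps this safe to run even when no bundler has
  // substituted the constant.
  if (typeof __FEATURE_KAIROS__ !== "undefined" && __FEATURE_KAIROS__) {
    return "kairos: running";
  }
  return "kairos: disabled";
}
```

Gating features this way keeps unreleased code out of published artifacts, which is exactly what a leaked source map defeats: the un-stripped branches become readable again.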

The code works. The architecture, less so.

Beyond the hidden features, the leak gave outsiders a rare look at how a well-funded AI product actually gets built under pressure. The findings were mixed. The main user interface is a single React component, 5,005 lines long, containing 68 state hooks, 43 effects, and JSX nesting that goes 22 levels deep. Engineers reading it noted a TODO comment sitting next to a disabled lint rule on line 4114. The entry point file, main.tsx, runs to 4,683 lines and handles everything from OAuth login to mobile device management. Sixty-one separate files contain explicit comments about circular dependency workarounds. A type name used over 1,000 times across the codebase reads: AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS.

One standout detail: the word "duck" is encoded in hexadecimal, as String.fromCharCode(0x64, 0x75, 0x63, 0x6b), because the string apparently collides with an internal model codename that Anthropic's CI pipeline scans for. Rather than add a regex exception, every animal species in the pet system got hex-encoded.
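The trick is trivial to reproduce: building the string from character codes means the literal never appears in source, so a text-based CI scan for the codename finds nothing. A minimal reproduction:

```typescript
// 0x64 = 'd', 0x75 = 'u', 0x63 = 'c', 0x6b = 'k'. The literal "duck" never
// appears in the source file, so a grep-style CI scan for it comes up empty.
const species: string = String.fromCharCode(0x64, 0x75, 0x63, 0x6b);
console.log(species); // "duck"
```

The same evasion works against any scanner that matches raw string literals rather than inspecting runtime values, which is why the reported fix (hex-encoding every species name) worked at all.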

A second leak—and a broader security warning

This latest incident is not isolated. Fortune reported that a separate, earlier leak this week exposed nearly 3,000 files, including a draft blog post revealing a powerful upcoming model referred to internally as both "Mythos" and "Capybara." Security researchers who reviewed the Claude Code leak also warned that it potentially allows competitors to reverse-engineer its agentic harness and that, even without proper access keys, certain internal Anthropic systems may remain reachable, raising concerns about nation-state exploitation of the company's most capable models.

Anthropic confirmed the incident but sought to limit the damage. A company spokesperson told Fortune no sensitive customer data or credentials were exposed, describing the incident as a release packaging issue caused by human error rather than a security breach, and adding that the company is rolling out measures to prevent a recurrence.

Anthropic is eyeing a $380 billion IPO later this year

The timing is uncomfortable. Bloomberg reported this week that Anthropic is in early discussions with Goldman Sachs, JPMorgan, and Morgan Stanley about a potential October IPO, with a valuation hovering around $380 billion. The company has already rattled markets this year: its Cowork and Claude Code Security updates wiped billions from software and cybersecurity stocks in a matter of weeks. Leaking your own source code, for the third time, is not ideal pre-IPO optics.


