Disclaimer: This post is an independent analysis and opinion based on publicly available information and news reports as of 4 April 2026. It does not constitute legal, financial or professional advice. Speculative elements are clearly presented as opinion or theory. DataBooth is not affiliated with Anthropic or any other party mentioned.

Executive Summary

On 31 March 2026, Anthropic accidentally published the full source code for Claude Code, its flagship terminal-based AI coding agent, through a misconfigured npm package. This exposed the ~512k-line agent “harness” (the orchestration layer handling memory, tools, permissions, and unreleased features), though no model weights or customer data were involved.

From Australia, where it was already 1 April when this surfaced, I initially assumed it was an April Fools’ prank. It wasn’t.

The same day, Anthropic’s CEO Dario Amodei met Prime Minister Anthony Albanese in Canberra to sign a Memorandum of Understanding (MOU) with the Australian government on AI safety research, economic impact tracking, and collaboration with the national AI Safety Institute. This non-binding agreement also includes $3 million in Claude API credits for Australian research institutions and exploration of data centre investments.

For Australian enterprises heavily invested in Claude Code, the coincidence of the leak and the government partnership heightens concerns around vendor reliability, supply-chain security, and the protection of paid-for proprietary capabilities. While Anthropic attributes the leak to “human error,” its repetition (the second such incident in roughly a year) invites questions about internal controls. This post analyses the incident, associated risks, potential legal angles, and implications for local organisations.

What Happened: The Facts of the 31 March 2026 Leak

Early on 31 March 2026 (US Eastern Time), Anthropic released version 2.1.88 of the @anthropic-ai/claude-code npm package. It inadvertently included a 59.8 MB JavaScript source map file containing references to the complete original TypeScript codebase. Security researcher Chaofan Shou (@Fried_rice) discovered and publicly shared a link to a full src.zip archive hosted on Anthropic’s own public Cloudflare R2 bucket. The code spread rapidly, with mirrors and forks appearing on GitHub within hours. A clean-room Python port quickly gained significant traction.

Anthropic confirmed the incident the same day, removed the version, and issued DMCA takedown notices (initially affecting over 8,100 repositories, later scaled back for many legitimate forks). The leak was confined to the client-side agent harness.

Why It Happened and the Technical “How”

Anthropic has described the cause as “human error” in the release packaging process. Claude Code is built with Bun (the JavaScript runtime acquired in late 2025), which generates detailed source maps by default. These were not excluded through standard configuration or automated checks before publication.
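To make the mechanism concrete, here is a minimal, invented demo (file names and contents are illustrative, not Anthropic's actual build output) of how a source map's `sources` and `sourcesContent` fields point anyone who obtains the file back at the original code:

```shell
# Illustrative only: how a JavaScript source map can expose original sources.
workdir=$(mktemp -d)
cd "$workdir"

# The shipped, minified artefact carries a pointer to its map...
cat > cli.js <<'EOF'
console.log("hello");
//# sourceMappingURL=cli.js.map
EOF

# ...and the map names the original files (and can embed their full text).
cat > cli.js.map <<'EOF'
{"version":3,"sources":["src/index.ts"],"sourcesContent":["// original TypeScript, comments and all\nconsole.log(\"hello\");"]}
EOF

# Anyone who obtains the .map can read the original source paths directly:
leaked_paths=$(grep -o '"sources":\[[^]]*\]' cli.js.map)
echo "$leaked_paths"
```

In the real incident the scale was larger (a 59.8 MB map referencing a full archive), but the principle is the same: shipping the map is equivalent to shipping a directory listing of, and a route to, the original sources.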

The required fixes are well-established industry practices and relatively straightforward. The fact that this was the second similar packaging failure in about a year points to potential gaps in process oversight. Appendix B provides a step-by-step technical breakdown.

What the Leaked Code Revealed

The exposed material centres on the “harness” – the framework that enables Claude Code to function as a capable, agentic coding tool. It includes advanced memory management, prompt handling, permission controls, and numerous feature flags.

Unreleased or previously hidden elements now in the public domain include:

  • KAIROS: An always-on proactive background agent with capabilities such as GitHub webhook integration.
  • BUDDY: A companion-style interface with animations and engagement features.
  • Additional elements such as multi-agent orchestration and latency-optimisation techniques.

Analyses indicate that this orchestration layer, rather than the underlying models, provides much of Claude Code’s competitive differentiation. Its availability offers a detailed blueprint for competitors and open-source efforts.

The Full Spectrum of Risks to Organisations

The leak introduces immediate operational threats and longer-term strategic challenges for users of agentic AI tools.

  1. Supply-Chain and Malware Risks
    Malicious forks incorporating information stealers emerged quickly. Organisations using unverified versions risk credential or system compromise.

  2. Competitive Intelligence and IP Loss
    Detailed insights into architecture, tools, and planned features reduce the exclusivity that enterprise customers pay for.

  3. Amplified Vulnerability Exploitation
    Visibility into execution logic facilitates more targeted attacks.

  4. Broader AI Agent Ecosystem Risks
    Tools with terminal, file, and version-control access become high-value targets, exposing weaknesses in software supply chains.

  5. Legal, Compliance, and Reputational Risks
    Broad initial DMCA actions attracted criticism. Organisations subject to standards such as SOC 2, ISO 27001, or the Australian Privacy Principles must reassess vendor risks. Repeated incidents may undermine confidence in operational maturity. Appendix C outlines the basic controls that could have prevented the issue and their low implementation effort.

  6. AI Safety and Misuse
    Exposed mechanics could support advanced experimentation or prompting techniques.

  7. Operational and Financial Impact
    Additional security measures, potential migration costs, and reduced differentiation.

Enterprise Lawsuit Potential: Reframing “Human Error” as Willful Omission

As of 4 April 2026, no lawsuits have been publicly filed. However, the repeated nature of the failure supports a theory that omitting trivial, industry-standard controls (such as disabling source maps in production, explicit file whitelists, and pre-publish validation) after a prior incident may constitute gross negligence or willful conduct by omission.

Enterprise contracts often limit liability for indirect losses, but such limits may not apply in cases of gross negligence or willful misconduct. This framing could support claims of breach of implied good faith, diminution of product value, or failure to take reasonable steps to protect trade secrets. Quantifiable harm might include extra security expenditure or loss of exclusivity. Australian enterprises should review contract clauses on liability, governing law, and remedies. Coordinated engagement could yield credits or enhanced service-level agreements more readily than litigation.

Anthropic’s Response – and Why No Apology?

Anthropic’s statements have been factual and contained: confirmation of the packaging issue, emphasis on no customer data exposure, attribution to human error, and a commitment to preventive measures. No formal apology or detailed acknowledgment of impact has been issued.

Some observers have noted that the narrow framing, repeatedly stressing that “no sensitive customer data or credentials were involved” and that this was “not a security breach”, may downplay the significance of exposing core proprietary implementation details and the loss of exclusivity for paying enterprise customers. This approach could reflect a response shaped more by legal risk considerations than by a broader focus on engineering accountability and customer trust.

Possible motivations include limiting legal exposure and prioritising technical remediation. While pragmatic, this may appear insufficient to customers expecting robust protection of paid proprietary features. Demonstrable implementation of stronger release processes would likely rebuild trust more effectively.

Timing with the Australian Government MOU: Added Context for Local Organisations

The leak occurred on the same day that Anthropic signed a Memorandum of Understanding with the Australian government. The non-binding MOU focuses on AI safety research, sharing findings on model capabilities and risks, participation in joint safety evaluations, collaboration with Australian universities and the AI Safety Institute, and tracking AI’s economic impact (including through Anthropic’s Economic Index data). It supports goals under Australia’s National AI Plan and includes $3 million in Claude API credits for research institutions working in areas such as clinical genomics, precision medicine, and computing education.

Anthropic is also exploring data centre and energy infrastructure investments in Australia, aligning with the government’s framework for such developments, and plans to open a Sydney office. CEO Dario Amodei met Prime Minister Anthony Albanese in Canberra to formalise the agreement.

For Australian organisations, this timing adds nuance. The government partnership signals growing official confidence in Anthropic as a responsible AI partner, yet the simultaneous leak of Claude Code’s core harness raises questions about consistency in operational controls and supply-chain hygiene. Enterprises using Claude Code – or considering deeper integration under public-sector initiatives – should weigh the MOU’s emphasis on safety against the vendor’s recent release practices. The agreement is not legally binding and does not directly address enterprise tooling security, but it may influence future procurement or compliance expectations within the Australian Public Service and related sectors.

What Australian Organisations Should Do Now

  1. Audit Claude Code installations and avoid unverified downloads or forks.
  2. Request detailed incident documentation and evidence of improved controls from Anthropic.
  3. Review enterprise agreements for liability, remedies, and alignment with the government MOU.
  4. Evaluate open-source alternatives alongside continued use of proprietary tools.
  5. Apply equivalent rigour to AI agent supply chains as to core data and infrastructure pipelines, particularly given national AI safety priorities.
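As a sketch of step 1, an audit can be as simple as comparing each installed version against the known-affected release. The check below is hypothetical; in practice you would collect versions across your fleet with your own management tooling or `npm ls`, and 2.1.88 is the version this post reports as containing the source map artefact:

```shell
# Hypothetical audit sketch: flag installs of the affected release.
affected_versions="2.1.88"
installed_version="2.1.88"   # in practice, collected per machine

if printf '%s\n' "$affected_versions" | grep -qx "$installed_version"; then
  verdict="ACTION: $installed_version is a known-affected release; upgrade and verify provenance"
else
  verdict="OK: $installed_version is not in the known-affected list"
fi
echo "$verdict"
```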

Conclusion

The Claude Code leak serves as more than a packaging oversight. It highlights how the orchestration layer of AI coding agents has become a critical asset – and how readily it can be exposed when basic safeguards are overlooked. The coincidence with the Australian government MOU underscores both the opportunities and the governance challenges of deepening partnerships with frontier AI providers.

The “human error” explanation remains convenient, but a second preventable incident shifts focus toward organisational accountability. For Australian enterprises and public-sector entities navigating the National AI Plan, this event reinforces the need for robust vendor due diligence. Proprietary advantage depends not only on model capability but on the vendor’s ability to protect its implementation through disciplined processes.

The downplaying of the incident – the repeated emphasis that no customer data was leaked, and its characterisation as something other than a security breach – is itself notable. Ultimately, if you can’t keep your own house in order with something as basic as release packaging controls, why should others trust you to assist them with mission-critical AI implementations?


Appendix A: Timeline of Events (31 March – 4 April 2026)
  • 31 March (early ET): Version 2.1.88 published with the source map artefact; MOU signed in Canberra between Anthropic and the Australian government.
  • ~4:23 am ET: Discovery and public disclosure via X, with link to the R2 bucket archive.
  • Same day: Code mirrored widely; clean-room ports gain traction on GitHub.
  • 31 March – 1 April: DMCA notices issued (initially broad), then partially retracted.
  • 1–3 April: Reports of trojanised forks; enterprise and government-related discussions intensify.
  • 4 April: No lawsuits reported; analysis of implications continues.

Appendix B: Technical Breakdown of the Leak Mechanism

  1. The Bun build process produced a large source map file with full original source references.
  2. Standard packaging configuration did not exclude debug artefacts.
  3. The package was published to the public npm registry.
  4. The map referenced a ZIP archive on Anthropic’s public cloud storage.
  5. Readable TypeScript with comments and internal details became accessible.

Recommended fixes include disabling source maps for production, explicit file whitelists, and automated validation in continuous integration.
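A minimal sketch of that CI validation step, assuming an npm-style workflow: in a real pipeline the file list would come from `npm pack --dry-run`, but it is simulated here so the gate logic itself is visible. File names are invented.

```shell
# Simulated pre-publish gate: fail the build if debug or source artefacts
# appear in the list of files staged for publication.
staged_files="package.json
README.md
dist/cli.js
dist/cli.js.map"

# Block anything that looks like a debug artefact or raw source.
blocked=$(printf '%s\n' "$staged_files" | grep -E '\.(map|zip|ts)$' || true)

if [ -n "$blocked" ]; then
  gate_status="BLOCKED"
  echo "BLOCKED: refusing to publish, found debug/source artefacts:"
  echo "$blocked"
else
  gate_status="OK"
  echo "OK: publish list is clean"
fi
```

Wired into CI as a required step before `npm publish`, a check of this shape would have stopped the 2.1.88 release at the gate rather than in public.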

Appendix C: Controls That Should Have Been in Place

Each control is listed with its implementation effort, why it prevents recurrence, and its relevance to the repeated incidents:

  • Disable source maps in production builds – Effort: 5 minutes (a configuration change). Prevents generation of the revealing artefact; a basic step overlooked twice.
  • Explicit “files” whitelist in package.json plus a hardened .npmignore – Effort: under 10 minutes. Ensures only intended assets are published; standard industry practice.
  • Pre-publish CI validation (e.g., a dry-run pack and scan) – Effort: 15–30 minutes. Catches issues before public release; the quick “gate” that was missing.
  • Additional layered checks (signing, post-publish inspection) – Effort: 1–2 engineer days. Provides defence in depth; demonstrates reasonable care.
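To show why the whitelist control works, here is a hedged simulation of its behaviour (file names invented; real npm applies the `files` field of package.json, this loop only mimics the effect): even with a stray `.map` file sitting on disk, only whitelisted paths ever reach the publish list.

```shell
# Illustrative whitelist behaviour: stray artefacts on disk never get staged.
workdir=$(mktemp -d)
cd "$workdir"
mkdir dist
: > dist/cli.js
: > dist/cli.js.map   # the kind of artefact behind the leak

# Stand-in for the "files" field in package.json.
whitelist="dist/cli.js"

publish_list=""
for f in dist/*; do
  case " $whitelist " in
    *" $f "*) publish_list="$publish_list$f " ;;
  esac
done
echo "would publish: $publish_list"
```

The design point is that a whitelist fails closed: new build artefacts are excluded by default, whereas an ignore-list (like .npmignore alone) fails open and must be updated every time the build produces something new.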

Appendix D: Glossary of Key Terms (for Less Technical Readers)

  • Agentic AI / AI Agent: An AI tool that goes beyond answering questions to autonomously perform multi-step tasks, such as reading files, editing code, running commands, and managing projects.
  • CLI (Command-Line Interface): A text-based interface for interacting with software, commonly used by developers in terminal windows.
  • Harness (Agent Harness): The supporting framework around the core AI model that manages memory, tools, permissions, and workflows for reliable task execution.
  • npm Package: A reusable bundle of code published to a public registry for easy installation by other developers.
  • Source Map: A file that links compressed code back to the original readable source for debugging; here, it unintentionally exposed the full codebase.
  • Source Code: Human-readable programming instructions; leaking it reveals the software’s inner workings like a detailed blueprint.
  • Clean-Room Port / Rewrite: A recreation of software functionality by studying concepts but writing new code to avoid direct copying.
  • DMCA Takedown: A legal request under US law to remove allegedly copyright-infringing material from online platforms.
  • Feature Flags: Internal switches that allow selective activation of new or experimental capabilities without a full release.
  • Supply-Chain Attack: Compromising widely used software components or tools to indirectly affect many organisations.
  • Trade Secret: Confidential information providing competitive advantage, protected by reasonable secrecy efforts.
  • Gross Negligence: Failure to exercise even basic care that a reasonable organisation would apply.
  • Willful Omission: Repeatedly choosing not to implement straightforward fixes, potentially carrying stronger legal weight than simple accident.
  • Enterprise Customers: Large organisations purchasing software at scale with custom contracts, support, and compliance needs.
  • MOU (Memorandum of Understanding): A non-binding agreement outlining intended cooperation between parties.
  • SOC 2 / ISO 27001: International standards demonstrating strong controls for data security and information management.
  • Australian Privacy Principles: Core rules under Australian law governing the handling of personal information.

Appendix E: Key Sources, Further Reading, and Comprehensive Bibliography

Key source categories covered below include:

  • Anthropic official statements and MOU details (31 March 2026).
  • Coverage from Reuters, Sydney Morning Herald, The Guardian, Axios, Zscaler ThreatLabz, and Australian government publications.
  • Technical and legal commentary on GitHub activity and SaaS liability.

All references below are drawn directly from primary sources and contemporaneous reporting (31 March – 4 April 2026). Hyperlinks are current and verified as of 4 April 2026.
  1. Anthropic Official Statement on Claude Code Release Incident (31 March 2026)
    https://www.anthropic.com/news/claude-code-release-update

  2. “Anthropic Accidentally Leaks Claude Code Source in npm Package” – Axios (1 April 2026)
    https://www.axios.com/2026/03/31/anthropic-leaked-source-code-ai

  3. “Claude Code Source Map Leak Exposes Agent Harness” – The Verge (31 March 2026)
    https://www.theverge.com/ai-artificial-intelligence/904776/anthropic-claude-source-code-leak

  4. “Security Researcher Exposes Anthropic Claude Code Full Source via R2 Bucket” – BleepingComputer (31 March 2026)
    https://www.bleepingcomputer.com/news/artificial-intelligence/claude-code-source-code-accidentally-leaked-in-npm-package/

  5. Zscaler ThreatLabz Report: “Trojanised Claude Code Forks Spread Vidar Stealer” (2 April 2026)
    https://www.zscaler.com/blogs/security-research/anthropic-claude-code-leak

  6. “Anthropic and Australian Government Sign AI Safety MOU” – Sydney Morning Herald (31 March 2026)
    https://www.smh.com.au/technology/albanese-government-reaches-deal-with-550b-ai-giant-in-legal-battle-with-trump-20260401-p5zkht.html

  7. “Dario Amodei Meets Albanese: $3m API Credits and Sydney Office in New AI Deal” – Reuters (31 March 2026)
    https://www.reuters.com/world/asia-pacific/anthropic-sign-deal-with-australia-ai-safety-economic-data-tracking-2026-03-31/

  8. “Australian National AI Plan Advances with Anthropic Partnership” – Department of Industry, Science and Resources (Australian Government) (31 March 2026)
    https://www.industry.gov.au/news/australian-government-has-signed-memorandum-understanding-mou-global-ai-innovator-anthropic

  9. VentureBeat Analysis: “Enterprise Trust at Risk After Claude Code Leak” (2 April 2026)
    https://venturebeat.com/ai/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know

  10. InfoWorld: “Claude Code Leak Triggers Governance Reviews for Enterprise AI Buyers” (3 April 2026)
    https://www.infoworld.com/article/4154023/claude-code-leak-puts-enterprise-trust-at-risk-as-security-governance-concerns-mount.html

  11. Legal Commentary: “Gross Negligence and SaaS Liability Caps – Lessons from the Anthropic Leak” – The Innovation Attorney (Substack, 3 April 2026)
    https://theinnovationattorney.substack.com/p/the-claude-source-code-leak

  12. Engineer’s Codex: “Bun Source Maps and the 5-Minute CI Fix Anthropic Missed Twice” (1 April 2026)
    https://read.engineerscodex.com/p/diving-into-claude-codes-source-code

  13. GitHub Clean-Room Port Activity (instructkr/claw-code and forks, as of 4 April 2026)
    https://github.com/instructkr/claw-code (and related forks)

  14. “Entire Claude Code CLI Source Code Leaks Thanks to Exposed Map File” – Ars Technica (31 March 2026)
    https://arstechnica.com/ai/2026/03/entire-claude-code-cli-source-code-leaks-thanks-to-exposed-map-file/

This post incorporates the latest publicly available information as of 4 April 2026.