Claude Code Leak: What the New Features Mean for Work in 2026


The Claude Code source leak exposed roughly 512,000 lines of internal code, giving us a detailed look at features Anthropic is building into its coding agent. The most significant for business users are persistent memory, multi-agent orchestration, and continuous 24/7 background operation. These features signal that Claude is shifting from a chat tool you prompt into an autonomous agent that remembers your business, coordinates complex tasks, and keeps working while you step away.

At TJ Digital, we’ve helped over 50 businesses build AI-powered marketing systems, and the single biggest factor in getting quality output from AI is giving it enough context. We spend a significant amount of time building, organizing, and maintaining the brand knowledge that makes Claude effective for each client. If these leaked features ship as expected, they could eliminate a huge chunk of that manual work.

What happened with the Claude Code leak?


On March 31, 2026, Anthropic accidentally published a large portion of Claude Code’s internal source code through an npm packaging error in version 2.1.88. The leak exposed around 1,900 TypeScript files. Anthropic confirmed that no customer data or credentials were included and characterized it as human error, not a security breach.

The strategic value of the leak is that it revealed Anthropic’s agent roadmap. Claude Code’s competitive advantage lives in its “harness” layer, the memory systems, permission models, orchestration logic, and safety checks that make the model behave like a useful agent. That blueprint is now visible.

Security researchers also flagged real risks from the incident. Threat actors quickly created fake GitHub repositories mimicking the leaked code to distribute malware. And having readable source code makes it cheaper for attackers to study permission models and find potential bypasses.

I hope this isn’t too harmful to Anthropic long-term, because the features revealed in this leak are exactly what business users need.

How does Claude Code’s persistent memory work?

Claude Code’s memory system uses two complementary mechanisms that load at the start of every session. This is the feature I’m most excited about, because it solves the biggest bottleneck in getting AI to do real work for your business.

The first is CLAUDE.md, a set of markdown instruction files that you write and maintain. They provide durable guidance for things like brand voice rules, approval steps, file naming conventions, and workflow standards. You can scope them at the project level, user level, or even organization-wide through managed policy locations.
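To make this concrete, here's what a minimal project-level CLAUDE.md might look like. This is our own illustrative example for a hypothetical brand, not a file from the leak or from Anthropic's docs:

```markdown
# CLAUDE.md — project-level instructions (illustrative example)

## Brand voice
- Write in plain, confident English; avoid hype words like "revolutionary".
- Always refer to the client as "Acme Outdoors", never just "Acme".

## Approvals
- Save draft posts to drafts/ — never publish directly.
- Flag any claim that needs legal review with [LEGAL].

## File conventions
- Reports: reports/YYYY-MM-DD-topic.md
```

Because it's just markdown, anyone on the team can read and update it, and the same rules load at the start of every session.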

The second is auto memory, where Claude writes notes to itself based on corrections and preferences it picks up during work. If you keep telling Claude to format reports a certain way or use a specific tool, it remembers. These notes are stored as plain markdown files you can read, edit, or delete.

A few implementation details matter here. Auto memory loads the first 200 lines or 25KB at session start. Longer details get moved into separate topic files that Claude loads on demand. Anthropic is also clear that memory is treated as context, not enforced configuration. There’s no guarantee of strict compliance, especially for vague instructions. That’s why teams often pair memory with deterministic controls like permission hooks and allowlists.
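The load budget described above can be sketched in a few lines. This is our own illustration of the reported 200-line / 25KB limits, not Anthropic's actual implementation, and we're assuming "whichever limit hits first" behavior:

```python
def load_auto_memory(text: str, max_lines: int = 200, max_bytes: int = 25_000) -> str:
    """Return the portion of a memory file that fits the session-start budget.

    Illustrative sketch: keep whole lines until either the line limit or the
    byte limit (whichever comes first) would be exceeded.
    """
    loaded, used = [], 0
    for line in text.splitlines()[:max_lines]:
        size = len(line.encode("utf-8")) + 1  # +1 for the newline
        if used + size > max_bytes:
            break
        loaded.append(line)
        used += size
    return "\n".join(loaded)

# Anything past the budget would live in separate topic files loaded on demand.
```

The practical takeaway is the same either way: keep the top of your memory files tight, and push long-tail detail into topic files.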

For anyone running AI-powered business workflows, this architecture is a practical split that mirrors what we already do manually. Explicit instructions (what you always want the agent to do) go in CLAUDE.md. Implicit learnings (what the agent picks up over time) accumulate in auto memory.

What is Claude’s “dreaming” feature?

The leaked source code references an unreleased system often labeled Kairos that includes a background memory process called “dreaming” or “AutoDream.” Anthropic hasn’t publicly confirmed this as a shipping feature, so treat it as directional.

The consistent description across multiple reports is that dreaming is a form of memory consolidation during idle time. Rather than just storing notes and loading them later, the system would actively reorganize, re-summarize, and clean up its memory files while you’re not using it. Think of it as the difference between dumping notes into a folder and having someone organize that folder overnight.

If this ships, the workflow impact goes beyond “Claude remembers more.” It means Claude maintains and curates its own knowledge base continuously. Teams would spend less time re-teaching context at the start of each session, because the system is explicitly designed to handle context compaction, which is when long conversations cause early instructions to get lost.

At my agency, we already spend a lot of time building, organizing, and maintaining this kind of context for each brand. Giving Claude all the information it needs about a client’s business, voice, and industry is the major unlock that makes it effective. If memory consolidation could happen automatically in the background, that changes the economics of AI-assisted work significantly.

How will Claude Code manage multiple agents?

Claude Code supports three layers of orchestration, each solving a different problem.

  • Subagents are specialized assistants that run in their own context window with a custom system prompt and independent permissions. The main session delegates tasks to them when the work matches their specialty, and they return results.
  • Agent teams are an experimental feature where one session acts as team lead, spawning and coordinating multiple teammate sessions. Teammates communicate directly and coordinate via a shared task list. The docs note that task claiming uses file locking to prevent race conditions, and there are limitations: no nested teams, and the lead session is fixed.
  • Multi-agent CI services run on Anthropic’s infrastructure. Claude Code’s managed Code Review feature uses a fleet of specialized agents that analyze code diffs in parallel, followed by a verification step to reduce false positives.
| Orchestration Layer | Best For | Runs On |
|---|---|---|
| Subagents | Delegating specialized subtasks within a session | Your machine |
| Agent teams | Coordinating multiple independent sessions on a shared goal | Your machine or remote |
| Multi-agent CI | Automated code review on every PR | Anthropic's cloud |
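The file-locking detail for task claiming maps to a familiar coordination pattern. Here's a minimal sketch of the idea — our own illustration using exclusive-create lock files, not whatever mechanism Claude Code actually uses:

```python
import os

def claim_task(task_id: str, agent: str, lock_dir: str = "locks") -> bool:
    """Atomically claim a shared task by creating a lock file.

    os.O_CREAT | os.O_EXCL fails if the file already exists, so only one
    agent can win the race for a given task. Illustrative sketch only.
    """
    os.makedirs(lock_dir, exist_ok=True)
    path = os.path.join(lock_dir, f"{task_id}.lock")
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # another teammate already claimed this task
    with os.fdopen(fd, "w") as f:
        f.write(agent)  # record who claimed it
    return True
```

Whoever creates the lock file first owns the task; everyone else moves on to the next item on the shared list.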

The pattern here mirrors how engineering teams already work: triage, parallel deep dives, consolidation, and a QA gate. For business workflows, this means you could set up a lead agent that coordinates a content audit across multiple brand projects simultaneously, or an analytics agent that pulls data from several sources in parallel before synthesizing a report.

Can Claude Code run continuously in the background?

Yes, and increasingly so. Claude Code now has several continuous operation features either shipped or in development.

  • PR monitoring on desktop lets Claude watch GitHub pull request status in the background, auto-fix CI failures, and auto-merge once all checks pass. You can move on to other work while Claude keeps watching.
  • Event-driven PR monitoring on the web subscribes to PR activity through GitHub App webhooks and pushes fixes when the path forward is clear. The trigger is GitHub activity, not a user prompt.
  • Cloud scheduled tasks run on Anthropic’s infrastructure without requiring your machine to be on or a session to be open. They persist across restarts and run autonomously without permission prompts.
  • Channels are MCP servers that push events like messages, alerts, and webhooks into a running Claude Code session. For an always-on setup, you run Claude in a background process or persistent terminal, and it reacts to events while you’re away.
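Stripped of the Claude-specific pieces, an always-on, event-driven setup is just a consumer reacting to pushed events. Here's a generic sketch of that pattern — the event names and handlers are hypothetical examples of what Channels could push, not a real Claude Code API:

```python
import queue

def run_agent_loop(events: queue.Queue, handlers: dict) -> list:
    """Drain pushed events (webhooks, alerts, messages) and dispatch each to
    a handler, collecting the actions taken. A None event stops the loop.

    Illustrative sketch of the event-driven pattern; a real always-on setup
    would block in a persistent process instead of draining a queue.
    """
    actions = []
    while True:
        event = events.get()
        if event is None:
            break
        kind, payload = event
        handler = handlers.get(kind)
        if handler:
            actions.append(handler(payload))
    return actions

# Hypothetical handlers for the kinds of events a channel could push:
handlers = {
    "pr_failed_ci": lambda p: f"investigate CI failure on PR #{p}",
    "spend_alert":  lambda p: f"pause campaign {p} pending review",
}
```

The point is that the trigger is inbound activity, not a human typing a prompt — which is also why the security section below stresses treating inbound content as untrusted.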

If the leaked Kairos daemon becomes a real product, it would represent the final step: continuous operation even without an open terminal window, with proactive checks that surface things you need to see. That would converge scheduling, event channels, and PR monitoring into a single experience where Claude functions as an always-available autonomous agent.

How does Claude Code compare to OpenAI’s agent tools?

The two companies are building from different starting points.

Claude Code is an opinionated, integrated product. Memory, orchestration, and continuous operation are built in. If you want a ready-to-run autonomous workflow, especially around GitHub lifecycle automation and built-in memory conventions, Claude Code is more end-to-end right now.

OpenAI’s approach is more modular. Their Agents SDK, Responses API, and Codex CLI are designed as building blocks for developers who want to create their own agent systems with custom UI, orchestration, and observability. If you’re building a custom agentic application from scratch, OpenAI’s platform is more directly designed for that.

There’s also a philosophical difference in safety architecture. Claude Code’s Auto mode uses a separate classifier model to review and approve actions before execution. OpenAI’s Codex relies on OS-enforced sandbox boundaries plus approval policies. Both reduce the constant permission prompts that make agent tools frustrating, but through different enforcement layers.

| Feature | Claude Code | OpenAI |
|---|---|---|
| Memory | Built-in (CLAUDE.md + auto memory) | SDK sessions + guidance docs |
| Orchestration | Subagents, agent teams, multi-agent CI | Agents SDK with handoffs and traces |
| Background ops | PR monitoring, cloud tasks, channels | API-level background mode |
| Safety model | AI classifier reviews actions | OS-enforced sandbox boundaries |
| Best for | Ready-to-run autonomous workflows | Custom agent application development |

For most business users who aren’t building their own developer tools, Claude Code’s integrated approach is the more practical choice.

What do these features mean for marketing and business automation?

Claude Code is designed for coding, but Anthropic explicitly supports adapting it into other types of agents through output styles. That matters for marketing and brand workflows because modern digital marketing is high-context work. Brand voice, compliance constraints, campaign consistency, and multi-system coordination all require deep familiarity with the business.

Planning mode becomes a quality gate. You can use it to gather requirements and produce a structured campaign plan before anything gets generated or published. Anthropic also supports a model strategy that uses Opus for planning and Sonnet for execution, treating strategy and implementation as distinct phases.

Memory solves brand consistency, which is fundamentally a context problem. Persisting brand rules in CLAUDE.md means they survive across sessions and don’t get lost when conversations get long. Auto memory captures operational preferences like report formatting, preferred metrics, and tool choices.

Continuous operation translates directly to marketing automation. Scheduled cloud tasks can run recurring audits or daily performance checks without requiring a machine to stay on. Channels can push real-time alerts for things like campaign spend spikes or conversion drops into a running session so Claude can react while you’re away.

If dreaming ships broadly, the likely impact for marketing teams is always-on brand operations: continuous consolidation of brand and process memory, plus proactive surfacing of issues and opportunities.

Should you start preparing for these features now?

Yes, and the best preparation looks the same whether these specific features ship next month or next year.

Start documenting your brand knowledge, processes, and standards in a structured format. If you’re already using Claude projects, you’re ahead of most businesses. The companies that will benefit most from persistent memory are the ones that have already invested in organizing their context, because the system needs good inputs to produce good outputs.

If you haven’t started building AI into your workflows yet, now is the time. The gap between businesses using AI effectively and those that aren’t is growing fast, and these features will widen it further.

Will these leaked features actually ship?

Anthropic’s own leadership has emphasized that many internal experiments never make it to production. Multiple outlets covering the leak made the same point. Treat these features as directional, not guaranteed product commitments.

That said, several of the leaked capabilities (persistent memory via CLAUDE.md, subagents, cloud scheduled tasks, PR monitoring) are already documented in Anthropic’s public Claude Code docs. The unreleased pieces, primarily Kairos and dreaming, represent the next logical step from features that already exist. The trajectory is clear even if the exact timeline isn’t.

How should businesses protect themselves from security risks tied to the leak?

The Claude Code leak wasn’t a data breach, but it created secondary risks worth knowing about.

Fake repositories appeared on GitHub almost immediately after the leak, designed to trick developers into downloading malware disguised as leaked Claude Code files. If you or anyone on your team downloaded anything related to the leak, verify the source carefully.

More broadly, the incident highlights a reality of always-on AI agents. As Claude Code becomes more event-driven, with channels, PR auto-fix, and background workflows, it increases the need to treat inbound content as untrusted. PR comments, webhook payloads, and chat messages can all become vectors for prompt injection if the system isn’t properly sandboxed.

Anthropic builds sender gating and remote permission relay into Channels specifically to address this. But any business running autonomous agents needs to think carefully about what those agents are allowed to do without human approval.

We help businesses build AI-powered marketing systems that take advantage of these capabilities as they roll out. Contact TJ Digital for a free digital marketing audit.