Industry
Distribution, Not Capability: What Claude Code's Web Launch Revealed About AI Coding Adoption
CNaught Team
May 11, 2026
Year-over-year growth in AI-authored commits over the last year

When we started analyzing a year of AI coding agent activity on public GitHub, we expected to be writing about model capability. We ended up writing about distribution.

The biggest agent launches of 2025 fit the capability narrative neatly. The Pragmatic Engineer's industry survey and JetBrains' 2026 developer survey both documented Claude Code overtaking GitHub Copilot and Cursor in the months following Anthropic's October launches. Anthropic's frontier model improvements, culminating in Opus 4.5's November release and its record 80.9% SWE-bench Verified score, drew wide attention across industry benchmarks. The story everyone told was about better models winning more developers.

The data tells a different story. The biggest event in this category last year wasn't a model release. It was an installation experience.

It Wasn't a Better Model, It Was a New Surface

Biggest single-week inflection · absolute commits added
Commits added by Claude Code in a single week, peaking in the week ending Nov 11, 2025

The rate-limiter was distribution. Claude Code became a web product. It stopped requiring a technical installation process. That onboarding friction was what had kept usage contained, and removing it produced the step-change.

The Market Grew While Competitors Held Steady

The natural hypothesis in a launch this large is cannibalization: Claude Code grew because Copilot, Cursor, and the other 30+ tools shrank. The public commit data suggests this didn't happen.

Competing agents' absolute activity in the two weeks after Claude Code's launch stayed on its pre-launch trajectory. Much of the discourse assuming otherwise relies on market-share framing, which answers a different question from "did anyone lose volume." On the volume question, the answer is no.


A caveat: we're only seeing public repositories. If substantial substitution is happening inside private enterprise codebases, we can't see it. But the visible record throws a wrench in the cannibalization story.

The category expanded, and Claude Code captured the new demand.

Where did the volume come from, then? The total AI coding agent market, all 30+ agents combined, jumped sharply above its pre-launch trend in the two weeks after the web launch.

The market added new demand, and Claude Code captured almost all of it. New users showed up on Claude Code who weren't using a different agent the week before.

This is a familiar shape for SaaS launches: make the product easier to start, attract people who couldn't or wouldn't have made it through the previous onboarding, expand the category. It's just rarely visible at this resolution in real time.

What This Means for Emissions Accounting

The launch dynamics are interesting in their own right, but for sustainability teams there's a specific implication worth flagging.

If you're building a Scope 3 view of your company's AI coding tool usage and considering vendor consolidation as part of an emissions reduction strategy, the public data doesn't support that lever. Each major launch over the past year — Claude Code on web in October, Cursor 2.0, GPT-5.1, Gemini 3 Pro — has expanded the category rather than redistributing it. A developer who switches from Tool A to Tool B in a quarter where Tool C also launched isn't reducing your total AI code footprint. They're moving share within a market that just got bigger.

Fastest-growing tools (90-day growth rate)

OpenAI Codex 22.5×
Roo Code 9.9×
Kilo Code 3.7×
OpenHands 2.2×
Copilot 2.1×
Last 90 days vs. prior 90 days · tools with fewer than 100 prior-period commits excluded.
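As a rough sketch of how a table like this can be computed (the function name, data shape, and threshold handling here are our illustration, not the exact pipeline code):

```python
from collections import Counter
from datetime import date, timedelta

def growth_table(commits, today, min_prior=100):
    """commits: iterable of (tool_name, commit_date) pairs.
    Returns {tool: growth multiple} comparing the last 90 days to the
    prior 90 days, excluding tools with fewer than `min_prior` commits
    in the prior window (small denominators make noisy multiples)."""
    recent_start = today - timedelta(days=90)
    prior_start = today - timedelta(days=180)
    recent, prior = Counter(), Counter()
    for tool, d in commits:
        if recent_start <= d < today:
            recent[tool] += 1
        elif prior_start <= d < recent_start:
            prior[tool] += 1
    return {
        tool: round(recent[tool] / prior[tool], 1)
        for tool in prior
        if prior[tool] >= min_prior
    }
```

The prior-period floor is the important design choice: without it, a tool going from 3 commits to 60 would top the table at 20× while telling you nothing.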

The practical takeaway: track raw inputs (tokens, model, provider, time) rather than tool selection. That data stays meaningful even as the market reshuffles. We walk through this in our methodology piece.
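A minimal sketch of what raw-input logging could look like (the field names and CSV layout are illustrative, not Carbonlog's actual schema):

```python
import csv
import os
import time
from dataclasses import asdict, dataclass, field

@dataclass
class UsageRecord:
    # Raw inputs that stay meaningful even as the tool market reshuffles
    provider: str          # e.g. "anthropic"
    model: str             # e.g. "claude-opus-4-5"
    input_tokens: int
    output_tokens: int
    timestamp: float = field(default_factory=time.time)

def append_record(path, record):
    """Append one usage record to a CSV log, writing a header row
    only when the file is new or empty."""
    row = asdict(record)
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if new_file:
            writer.writeheader()
        writer.writerow(row)
```

Because the log records tokens, model, and provider rather than a tool name, the same rows remain usable for emissions estimates after a vendor switch.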

What We Can't Tell You

Public Repositories Only

Per GitHub's Octoverse reporting, roughly 80% of developer contribution volume happens in private repos. If a substantial amount of enterprise usage shifted from one tool to another inside private codebases over this period, the public commit record won't show it.

A Two-Week Clean Window, Not a Six-Month Verdict

Nine days after Claude Code's web launch, Cursor shipped version 2.0. Nine days after that, OpenAI released GPT-5.1. Within the same four-week stretch: Gemini 3 Pro, GPT-5.1-Codex-Max, Claude Opus 4.5. The post-launch period is the single most event-dense stretch in the year of data we have. The "no cannibalization" finding is about the first two weeks, not the next six months.

Get Started

Track CO2 and energy consumption per Claude Code session, free and open source.

Try Carbonlog

If you're building Scope 3 measurement for AI-driven code at your organization, we can help.

Talk to us about your AI emissions

Methodology

Commit counts come from CNaught's open-source pipeline, which queries the GitHub Search API and attributes commits to AI agents that sign their own author identity. The dataset covers public repositories only and inherits roughly 5% sampling error on large result sets. Full methodology, source code, and historical data are available in our open-source repository.
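For readers who want a feel for the attribution step, here is a sketch of matching commit author identities to agent labels. The patterns below are illustrative placeholders; the real list, and the handling of edge cases, live in the open-source repository.

```python
import re

# Illustrative author-identity patterns only; not the pipeline's real list.
AGENT_PATTERNS = {
    "claude-code": re.compile(r"claude", re.IGNORECASE),
    "copilot": re.compile(r"copilot", re.IGNORECASE),
    "openai-codex": re.compile(r"codex", re.IGNORECASE),
}

def attribute(author_name):
    """Map a commit author name to an agent label, or None when the
    name matches no known agent (i.e., a human-authored commit)."""
    for agent, pattern in AGENT_PATTERNS.items():
        if pattern.search(author_name):
            return agent
    return None
```

The key property of this approach is that it only counts agents that sign their own author identity, which is why it undercounts AI-assisted commits made under a human's name.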