"Will AI replace my job?" isn't the most urgent question anymore. The real one is: "We're using the same AI — so why are some people so much better at it?" Anthropic's March report analyzing 1 million Claude conversations answers this with data. Users with 6+ months of experience have 10% higher conversation success rates, and this gap holds even after controlling for task, model, and country. Axios called this "America's next class war: AI fluency."
What is this?
The Anthropic Economic Index uses a privacy-preserving system (Clio) to analyze real Claude usage data and track AI's economic impact. The March report "Learning Curves" used a sample of 1 million conversations from February 5-12, 2026.
The report's key findings break into two categories.
Finding 1: Usage patterns are shifting
The share of the top 10 tasks on Claude.ai dropped from 24% (November 2025) to 19% (February 2026). Coding is migrating from Claude.ai to the API, while personal use (sports scores, product comparisons) grew from 35% to 42%. Average task value dipped slightly, from $49.3/hr to $47.9/hr, reflecting an influx of more mainstream users.
Finding 2: Proficiency determines outcomes
Users with 6+ months of experience: 73.1% success rate vs new users at 66.7%. A 6.4 percentage point gap. Apply O*NET task fixed effects (same-task comparison) and it's still 3pp. Control for model, country, and language too — still 4pp.
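To make "apply task fixed effects and control for model and country" concrete, here's a minimal sketch of that kind of same-task comparison in Python. The toy DataFrame, its column names, and the linear probability model are illustrative assumptions on my part; Anthropic's actual Clio pipeline and schema are not public.

```python
# Hypothetical sketch of a "same-task" comparison of success rates.
# Column names (success, experienced, task, model, country) are invented for
# illustration and are not Anthropic's actual schema.
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for per-conversation outcomes.
df = pd.DataFrame({
    "success":     [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1],
    "experienced": [1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1],  # 6+ months of use
    "task":        ["coding", "coding", "writing", "writing", "coding", "writing",
                    "coding", "writing", "coding", "coding", "writing", "writing"],
    "model":       ["opus", "sonnet", "opus", "sonnet", "sonnet", "opus",
                    "opus", "sonnet", "sonnet", "opus", "sonnet", "opus"],
    "country":     ["US", "US", "KR", "KR", "US", "US",
                    "KR", "US", "US", "KR", "KR", "US"],
})

# Raw gap: difference in mean success rate between experienced and new users.
raw_gap = df.groupby("experienced")["success"].mean().diff().iloc[-1]

# Linear probability model with task / model / country fixed effects:
# the coefficient on `experienced` is the gap that survives the controls.
fit = smf.ols("success ~ experienced + C(task) + C(model) + C(country)", data=df).fit()
print(f"raw gap: {raw_gap:.3f}, adjusted gap: {fit.params['experienced']:.3f}")
```

The coefficient on `experienced` is the success-rate gap that remains after the controls; in the report's data, that adjusted gap stays in the 3-4pp range.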
The difference between experienced and new users wasn't about what they did — it was about how they did it.
Peter McCrory (Anthropic's head of economic research) put it this way: "People who started using it like a Google search bar completely shift paradigms after 6 months." Economist Ashish Kulkarni shared his own 2-year usage data and confirmed the same pattern — early use was mostly brainstorming, but over time it diversified into coding, data analysis, and research, with conversation length (turn count) steadily increasing.
What's actually changing?
Comparing the previous report (November 2025 data) to this one reveals that the nature of AI adoption is shifting.
| Metric | Previous report (Nov 2025) | This report (Feb 2026) |
|---|---|---|
| Top 10 task concentration (Claude.ai) | 24% | 19% (diversifying) |
| Personal usage share | 35% | 42% (going mainstream) |
| Average task value | $49.3/hr | $47.9/hr |
| Input education level | 12.2 years | 11.9 years |
| API coding task share | Baseline | +14% (accelerating API migration) |
| Top 20 country usage concentration | 45% | 48% (gap widening) |
| US top 5 state concentration | 30% | 24% (converging) |
The most notable change is in model selection patterns: paid users strategically choose models based on task complexity.
Model selection varies by task
Overall Opus usage among paid Claude.ai users is 51%. But software dev tasks hit 55% (+4.4pp), while tutoring drops to 45% (-6.5pp). For every $10 increase in task hourly wage, Opus usage rises 1.5pp on Claude.ai and 2.8pp on API. In other words, knowing which model to use when is itself a proficiency signal.
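As a back-of-the-envelope illustration of those slopes (not the report's actual model), the sketch below extrapolates Opus share from task hourly wage. The 51% baseline and the roughly $48/hr reference point are taken from figures quoted in this post; applying the same baseline to both surfaces and treating the relationship as linear are simplifications for illustration only.

```python
# Illustrative extrapolation of Opus usage share from task hourly wage.
# Assumptions: 51% baseline share (Claude.ai paid users, quoted above), a ~$48/hr
# reference wage (average task value quoted above), and a linear relationship.
BASELINE_OPUS_SHARE = 0.51
REFERENCE_WAGE = 48.0
SLOPE_PER_10_USD = {"claude_ai": 0.015, "api": 0.028}  # +1.5pp / +2.8pp per $10

def predicted_opus_share(task_wage: float, surface: str) -> float:
    """Extrapolate Opus share for a task, given its hourly wage and the surface."""
    slope = SLOPE_PER_10_USD[surface]
    return BASELINE_OPUS_SHARE + slope * (task_wage - REFERENCE_WAGE) / 10.0

# Compare a $75/hr task with a $25/hr task on each surface.
for surface in ("claude_ai", "api"):
    print(surface,
          round(predicted_opus_share(75, surface), 3),
          round(predicted_opus_share(25, surface), 3))
```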
Two fast-growing API workflows also deserve attention.
SaaStr's Jason Lemkin nailed the key insight in his analysis of the previous report: "When a senior engineer gets a 12x speed boost from AI, a junior gets 9x. AI gives the biggest leverage to those who are already the best." This report's learning-curve data statistically backs that observation.
The core message: access to AI tools is becoming equal, but the value extracted from AI is becoming unequal in proportion to usage experience. Anthropic interprets this as evidence that the channel of "skill-biased technological change" is already active.
Connection to our previous post
"AI covers 94% of tasks but actual adoption is 33%" explored the gap between AI capability and adoption. This is the sequel: even among those who adopted AI, a proficiency gap exists and is widening.
How to shorten your AI learning curve
Here are practical ways to apply the behavioral patterns of experienced users identified in the report's data.
- Shift from 'directive' to 'iterative': New users say "write this email for me" and accept the result. Experienced users say "draft it → change the tone here → add data to this section," refining AI output through multiple passes. The report shows experienced users take a directive approach 8.7pp less often and an iterative approach 3.6pp more often. Starting today, don't accept AI output on the first try; do at least 2-3 rounds of iteration (a minimal API sketch of this loop follows the list).
- Gradually increase task complexity: Report data shows the education level implied by prompts increases by about 1 year for every year of Claude usage. Start with search replacement (sports scores, weather), then expand to analysis (research synthesis, data interpretation), then creation (planning docs, strategy). Note: an "imagination ceiling" tends to hit around year one, so consciously seek out new use cases.
- Match models to tasks: If you're a paid user, use Opus for complex coding and analysis tasks and Sonnet for simple queries and drafts. The report shows API users increase Opus usage by 2.8pp per $10 in task value, twice as sensitive as Claude.ai's 1.5pp, meaning they switch models more strategically.
- Shift from personal to professional usage: Experienced users do 7pp more work-related tasks and 10% less personal use, a sign they've moved AI from "a thing I ask random questions" to "a part of my work process." Try incorporating AI into real work at least 3 times per week: meeting-notes analysis, market research, code reviews.
- Plug AI directly into your workflow: The migration of coding tasks from Claude.ai to the API and Claude Code (+14%) shows experienced users pull AI out of the chat interface and wire it into their workflows. Economist Ashish Kulkarni reported that 25% of his usage moved to Claude Code. Even non-developers can automate repetitive tasks using Claude's Projects feature or integration tools like Make and Zapier.
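Here is a minimal sketch of the iterate-and-match-the-model habits above using the Anthropic Python SDK. The model IDs, the complexity flag, and the revision prompts are placeholders of mine, not recommendations from the report; check current model names in Anthropic's documentation before running.

```python
# Minimal sketch: pick a model by task complexity, then refine the draft over
# several turns instead of accepting the first output. Requires the `anthropic`
# package and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

def pick_model(task_is_complex: bool) -> str:
    # Placeholder model IDs: heavier analysis/coding work to Opus, quick drafts to Sonnet.
    return "claude-opus-4-20250514" if task_is_complex else "claude-sonnet-4-20250514"

def iterate(prompt: str, revisions: list[str], task_is_complex: bool = False) -> str:
    """Get a first draft, then apply targeted revision instructions turn by turn."""
    model = pick_model(task_is_complex)
    messages = [{"role": "user", "content": prompt}]
    reply = client.messages.create(model=model, max_tokens=1024, messages=messages)
    draft = reply.content[0].text

    for instruction in revisions:
        # Feed the previous draft back in and ask for a specific change.
        messages += [{"role": "assistant", "content": draft},
                     {"role": "user", "content": instruction}]
        reply = client.messages.create(model=model, max_tokens=1024, messages=messages)
        draft = reply.content[0].text
    return draft

final = iterate(
    "Draft a short email declining a vendor meeting.",
    revisions=["Make the tone warmer.",
               "Add a sentence proposing a call next quarter."],
)
print(final)
```

The point of the loop is the habit itself: keep the conversation going with targeted revision instructions rather than settling for the first draft.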