This is not another of those ‘AI is killing jobs’ reports. Anthropic, in a new piece of research, seems to have asked the deeper questions this time. Its latest labour-market study asks what happens when we stop guessing which jobs AI could affect. What if we, instead, start measuring where it is actually showing up inside real work? To do that, Anthropic has introduced a completely new metric for measuring AI’s job impact.
The subject here is a new labour-market paper that Anthropic published on March 5, 2026. Titled “Labour market impacts of AI”, the report does not say unemployment has exploded. In fact, it shines quite a bright light on the opposite side of things. And this makes it particularly useful for college students, freshers, and anyone trying to stay relevant in today’s job economy. Why? Because it shows where AI is actually entering work. In short, the real job impact of AI, not the hype.
Anthropic’s New Research
Most AI-and-jobs research starts with a fairly simple idea: if a model can theoretically do a task faster, then the occupation containing that task is “exposed.” That sounds reasonable until real life gets in the way. A task can be technically possible for AI and still not be used in actual workplaces because the process is messy, the company is slow, the risk is high, the software stack is missing, or a human still needs to sign off on everything. Anthropic’s paper is built around that gap between theory and reality.
That is why this is not really a paper saying, “AI is taking jobs now.” It is a paper saying, “Let’s stop guessing based only on capability and start tracking real usage inside actual work.” Think of it like the difference between owning a gym membership and actually showing up at 6 a.m. every day. The capability exists in both cases. The impact is only real in one of them. Anthropic is trying to measure the showing-up part.
Interestingly enough, it has come up with a completely new way to do this. Anthropic calls this method of tracking actual professional usage of AI, rather than just theoretical AI capability, “observed exposure.” But what does that mean? Let us explore.
The Core Idea: What “Observed Exposure” Actually Means
The heart of the paper is a new metric called Observed Exposure. In simple terms, it measures not just whether AI could help with a task, but whether it is actually helping or not. Anthropic measures this using three things:
- O*NET task data for around 800 occupations
- prior estimates of whether LLMs can theoretically speed up those tasks
- real usage data from Claude.
Combining these three inputs, the observed-exposure metric gives more weight to work-related and automated usage than to casual or purely assistive usage.
That matters because not all AI use is equal. A marketer using Claude to brainstorm five headline options is not the same as a support team plugging AI into a workflow that answers customer queries at scale. One is assistance; the other borders on replacing human labour. You would rather be on the former’s end. The latter, not so much.
Anthropic explicitly tries to capture that distinction by giving full weight to automated implementations and only half weight to augmentative use. That makes the metric much more grounded than the completely absurd version (in my opinion) of “AI can touch this job, therefore this job is doomed.”
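To make the weighting idea concrete, here is a minimal sketch of how an observed-exposure-style score could be computed. The weights (full for automated use, half for augmentative use) follow the paper’s description, but the function, the task structure, and the usage shares below are illustrative assumptions, not Anthropic’s actual implementation.

```python
# Illustrative sketch of an "observed exposure"-style score.
# Assumption: automated usage gets full weight, augmentative usage half weight,
# mirroring the weighting described in the paper. The task data is made up.

def observed_exposure(tasks):
    """tasks: list of dicts with per-task usage shares in [0, 1]."""
    if not tasks:
        return 0.0
    score = 0.0
    for t in tasks:
        # Full weight for automated use, half for augmentative use,
        # capped at 1 so a task never counts more than once.
        weighted = min(1.0, t["automated_share"] + 0.5 * t["augmentative_share"])
        score += weighted
    return score / len(tasks)  # average over the occupation's tasks

# Hypothetical occupation with three O*NET-style tasks
support_rep = [
    {"automated_share": 0.6, "augmentative_share": 0.2},  # answering common queries
    {"automated_share": 0.0, "augmentative_share": 0.8},  # drafting replies
    {"automated_share": 0.0, "augmentative_share": 0.0},  # phone escalations
]
print(round(observed_exposure(support_rep), 2))  # 0.37
```

Note how the fully automated task dominates the score even though the augmentative task sees heavier raw usage; that is exactly the distinction the half-weighting is designed to capture.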
Let’s have a look at this graph by Anthropic for more clarity.

Now let’s break this down:
- Blue area/line shows theoretical AI coverage: the share of tasks in each job category that AI could potentially handle based on its current capability.
- Red area/line shows observed AI coverage: the share of tasks where AI is actually being used in practice.
- The labels around the circle are different occupational categories: Management, Legal, Sales, Healthcare support, Construction, etc.
- The scale from 0.2 to 1.0 represents the level of coverage. 1.0 means 100% AI exposure or usage in that category, while a value closer to 0 means lower exposure.
The graph makes one thing very clear: AI is being used far less than it could be. In many categories, the blue line for theoretical AI coverage sits much farther out than the red line for observed AI coverage, showing a clear gap between capability and actual use. This is especially visible in fields like Business & Finance, Legal, Management, and Computer & Math. In fact, Computer & Math is one of the clearest examples on the chart, where theoretical capability reaches 94% of tasks, but observed Claude coverage is only 33%. So while AI already appears highly capable on paper, real-world adoption is still slower, more uneven, and far less widespread than the hype often suggests.
The Biggest Takeaways
With its stark counterpoints to some of the most common beliefs about AI and jobs, Anthropic’s report offers some extremely insightful takeaways.
1. The most exposed jobs are exactly where AI is already useful
The first big takeaway is not shocking, but it is important. The jobs with the highest observed exposure are the ones where generative AI already feels naturally useful: screen-based, language-heavy, repeatable work. Anthropic’s most exposed occupations include Computer Programmers at 75% coverage, followed by roles like Customer Service Representatives and Data Entry Keyers at 67% coverage. In simple terms, if a job involves coding, responding, entering, organising, summarising, or processing information on a computer all day, AI is already there, and mind you, it is there to stay.
2. A huge part of the economy still remains untouched
Now for the other side of the story. Around 30% of workers show zero coverage in Anthropic’s framework because their tasks barely appear in the data at all. That group includes cooks, motorcycle mechanics, lifeguards, bartenders, dishwashers, and dressing-room attendants. This matters because it kills the lazy idea that AI is sweeping across every profession with the same force. It is not.
Check out the 5% rule to know more about such professions.
3. Higher AI exposure is linked to weaker long-term job growth
This is where the paper starts getting more serious. Anthropic compares its observed-exposure metric with BLS employment projections for 2024 to 2034 and finds that more exposed occupations are projected to grow less. Specifically, for every 10-percentage-point increase in observed exposure, projected employment growth drops by 0.6 percentage points. That is not a collapse. But it is exactly the kind of signal you would expect if employers slowly begin needing fewer people in certain roles over time.
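As a back-of-the-envelope illustration, the reported relationship can be applied like this. The 0.6-point-per-10-points coefficient is from the paper; the baseline growth rate and the occupation are hypothetical examples, not BLS figures.

```python
# Illustrative use of the paper's reported relationship:
# each 10-percentage-point rise in observed exposure is associated with
# a 0.6-percentage-point lower projected employment growth (2024-2034).
# The baseline growth rate below is a made-up example, not BLS data.

def adjusted_growth(baseline_growth_pct, exposure_pct):
    return baseline_growth_pct - 0.6 * (exposure_pct / 10)

# A hypothetical occupation projected to grow 4% with zero exposure:
print(adjusted_growth(4.0, 0))   # 4.0
print(adjusted_growth(4.0, 50))  # 1.0, i.e. 4.0 - 0.6 * 5
```

Even at 50% observed exposure, the association trims projected growth rather than erasing it, which matches the paper’s framing of a slow drag rather than a collapse.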
4. The most exposed workers are not who many people assume
I found this to be one of the most interesting findings in the paper. The workers in the highest-exposure group are more likely to be older, female, more educated, and higher paid. They also earn 47% more on average than the unexposed group, while workers with graduate degrees are much more concentrated in the exposed bucket. That is a useful correction to the lazy narrative that AI risk is mainly about low-skill work. At least for now, the pressure seems to be heavier on white-collar knowledge work.

5. There is still no clear unemployment shock
This is the headline-friendly part. Anthropic finds no systematic increase in unemployment for highly exposed workers since late 2022. It compares unemployment trends between workers in the top quartile of exposure and those in the unexposed group, and the post-ChatGPT difference is small and statistically insignificant. In plain English: the broad unemployment spike that people keep predicting as the real job impact of AI is not clearly visible here, at least not yet.
6. Younger workers may be facing the earliest pressure
This may be the most important finding in the whole paper. Anthropic finds suggestive evidence that hiring into highly exposed occupations has slowed for workers aged 22 to 25. The paper estimates that job-finding rates for young workers entering exposed roles fell by around 14% compared with 2022, although the result is only barely statistically significant. So this is not a slam-dunk conclusion. But it is a serious signal, as this is exactly how disruption often starts in real life. Companies do not always begin by firing senior staff. Sometimes they simply stop hiring as many juniors.

Quick Summary
- The most exposed jobs are exactly where AI is already useful
- A huge part of the economy still remains untouched
- Higher AI exposure is linked to weaker long-term job growth
- The most exposed workers are not who many people assume
- There is still no clear unemployment shock
- Younger workers may be facing the earliest pressure
Why This Matters More Than the Usual AI Jobs Debate
This paper matters because it shifts the conversation from capability theatre to labour-market reality. For the past few years, too much of the AI-jobs debate has sounded like this: “Look what the model can do in a demo, so these jobs must be at risk.” But anyone who has worked in a real company knows that demos do not automatically turn into business transformation. Humans keep checking outputs because mistakes are expensive. Anthropic’s framework acknowledges that work is messy and that job disruption comes from deployment, not just model benchmarks. Hence, the job impact of AI is not quite what it is often portrayed to be.
It also gives readers a more practical lens. If you are wondering whether AI will affect your role, don’t ask:
“Can ChatGPT do a few parts of my job?”
Instead, the better question is:
“How much of my day involves repeatable digital tasks that can be standardised, automated, and plugged into a workflow?”
A financial analyst building repetitive reports, a support executive handling common customer queries, or a junior employee doing structured documentation work should probably pay closer attention than someone whose value depends on physical presence, trust-based judgment, negotiation, or highly contextual decision-making. That is a far more useful takeaway than generic fearmongering.
Limits and What the Paper Cannot Yet Prove
Now, to keep this grounded, the paper has real limits. The most obvious one is that Anthropic is using Claude-related usage data, which is informative but not the entire economy. People use multiple AI tools, many firms use internal systems, and plenty of adoption never touches Anthropic’s platform. So this is best read as a serious early framework, not a full census of AI work.
The second limitation is timing. Unemployment is a blunt and lagging signal. A company can slow hiring, cut junior openings, ask one person to do the work of two with AI help, or quietly stop replacing departing employees long before that shows up in unemployment data. In real life, job disruption often begins as a whisper, not a headline. Fewer graduate hires. Smaller teams. Lower starting salaries. More output is expected from the same headcount. By the time unemployment clearly spikes, the transition is already well underway. Anthropic itself hints at this by flagging the younger-worker hiring slowdown as a key area for future study.
There is also the methodological issue. The paper makes judgment calls about how much automation should count relative to augmentation, what threshold qualifies as significant use, and how to handle rare or semantically similar tasks. These choices are reasonable, but they are still assumptions: they model the real world closely without necessarily capturing it in its true form. So, take the exact numbers with a pinch of salt.
Conclusion
So what do we really conclude from this report? Not that AI has already flattened the labour market. Not that everyone should panic. And definitely not that unemployment data has confirmed an AI job apocalypse. The real message is sharper: the impact of AI on jobs is becoming measurable in a more credible way, and the early signs are showing up first in slower projected growth and weaker entry-level hiring, not in mass unemployment.
That is why this paper matters. It treats labour-market change the way it usually happens in the real world: gradually, unevenly, and often quietly at first. If you are already employed, the pressure may show up as higher productivity expectations before it shows up as replacement. If you are just entering the workforce, the impact of AI may show up as fewer chances to get your foot in the door. And if you are a business leader, this paper is a reminder that adoption is no longer theoretical. It is already concentrated in jobs where work is digital, structured, and easy to break into repeatable tasks.