
AI is changing jobs. Here is where it is happening, where it is not, and what it means for law.

New Anthropic research shows where AI is actually changing work, not just where it theoretically could. The legal sector faces real exposure in document-heavy tasks, even as courtroom work remains protected.

Jimmy Skowronski · 5 min read · 6 March 2026

Based on Anthropic Research, published 5 March 2026

Anthropic published new research this week measuring where AI is actually changing work, not just where it theoretically could. The findings are more nuanced than the usual headlines suggest. Some industries are already feeling real pressure. Others look protected today but may not stay that way. And the legal sector sits in a complicated middle ground.

This article breaks the research down across three areas: the roles most affected now, the gap between what AI can do and what it is actually doing, and what it all means specifically for law firms.


Where AI has gone deepest: tech and office roles

The most AI-exposed jobs right now are computer programmers, customer service representatives, and data entry workers. Programmers top the list with 75% of their tasks now covered by AI in real working environments.

These roles share a common profile. The work is structured, digital, and built around language. Code can be generated and reviewed by AI. Customer queries can be triaged automatically. Data can be read and entered without a human in the loop. Large language models are well suited to exactly this kind of task.

The employment picture is already shifting. The US Bureau of Labor Statistics projects slower employment growth through 2034 for roles with higher AI exposure. This is not a wave of redundancies today. It is a gradual narrowing of where new jobs will be created.

The clearest early signal is in graduate hiring. Workers aged 22 to 25 are around 14% less likely to be hired into the most AI-exposed roles compared to 2022. Firms are not laying people off. They are simply hiring fewer people into these positions in the first place.


Where AI has not yet arrived: the gap between theory and reality

The most striking finding in the research is not which jobs AI cannot do. It is how large the gap is between what AI could theoretically handle and what it is actually doing at work right now.

Figure 2: Theoretical capability vs observed exposure by occupational category. The blue area shows the share of tasks LLMs could theoretically perform. The red area shows actual observed usage. The gap between them is large across every sector.

Even in computer and mathematics roles, where 94% of tasks are theoretically feasible for AI, actual observed coverage sits at just 33%. That is a massive gap between potential and practice. And it exists across every sector the researchers looked at.

Why? The researchers are straightforward about it. Some tasks are held back by legal or regulatory constraints. Others need software integrations that have not been built. Some require human sign-off that organisations are not yet willing to remove. And in many cases, adoption simply has not spread through organisations fast enough.

The researchers give a clear example. Authorising drug refills and providing prescription information to pharmacies is something an AI could theoretically do. But it has not been observed doing so in practice, almost certainly because of the liability and regulatory environment around pharmaceutical decisions.

This pattern runs through professional services broadly. The technology is often ready before the governance, the regulation, and the organisational appetite are. That means the jobs and sectors that look safe today are not necessarily safe. Many are simply earlier in the adoption curve.


What this means for law firms

Law is in a genuinely complicated position. The research is clear that representing clients in court remains beyond what AI can do. Advocacy, judgment, client relationships, and courtroom presence are not going anywhere soon.

But the bulk of legal work is not done in court. Document review, contract drafting, legal research, due diligence, and first-draft advice are all heavily language-based, structured, and repetitive. Those are exactly the kinds of tasks that AI is already handling in other sectors, and the gap between what AI could theoretically do in law and what it is currently doing is narrowing.

The workforce profile that should concern law firms

The research identifies a clear demographic profile for the workers most at risk: educated, well-paid, and producing knowledge work as text. Workers in the most exposed occupations earn on average 47% more than those with no AI exposure and are nearly four times as likely to hold a graduate degree.

That profile describes a significant portion of a law firm's workforce. The risk is not abstract.

Junior hiring is the most immediate pressure point

The broader trend of slowing graduate hiring in AI-exposed roles is the one that should most concern legal businesses right now. The traditional model of building a firm's expertise through large cohorts of junior associates and trainees depends on a particular kind of work existing for those people to do. If AI absorbs more of that work, the pipeline changes.

The research does not specifically measure this in law. But the direction is consistent with what the data shows across other knowledge-work sectors.

Governance is not optional

The legal sector carries professional obligations around accuracy, privilege, and accountability that do not apply in the same way to other industries. AI tools used without proper oversight create real professional and liability risks.

This means the question for law firms is not simply whether to adopt AI. It is how to do it in a way that meets regulatory requirements, protects client privilege, and maintains professional standards. That is a governance challenge as much as a technology one, and most firms are not yet treating it that way.

The honest read

Law is not immune to what this research describes. It is earlier in the same curve that has already started reshaping tech and office work. The gap between theoretical capability and actual deployment in legal settings is still wide. But as the research makes clear, that gap is the story to watch, not the current state.


Frequently asked questions

Is AI actually replacing jobs right now?

Not in large numbers, based on current data. The research finds no significant increase in unemployment for workers in the most AI-exposed roles since late 2022. The more visible effect is in hiring, particularly for workers aged 22 to 25, where job entry rates into exposed roles have dropped by around 14%.

Which jobs are most exposed to AI today?

Computer programmers are at the top with 75% task coverage, followed by customer service representatives and data entry workers. These roles share structured, digital, language-based work that suits large language models well.

Why are some sectors still largely untouched by AI?

The gap between what AI can theoretically do and what it is actually doing at work is large. Regulatory constraints, missing software integrations, liability concerns, and slow organisational adoption all hold back deployment even when the technology is capable. Jobs that appear safe today may simply be earlier in the adoption curve.

Are lawyers at risk from AI?

Courtroom advocacy and client-facing judgment are not currently at risk. But document-heavy legal work such as research, drafting, and review is directly in the zone where AI is already performing in other sectors. Law firms that assume the courtroom exemption covers their whole workforce are reading the research too narrowly.

What should law firms be doing now?

Three things. First, take the junior hiring question seriously and think about what the pipeline looks like in five years. Second, build governance frameworks before regulators or tribunals force the issue. Third, distinguish between the AI tools that genuinely meet professional standards and those that create liability. Adoption without governance is the risk, not adoption itself.

Have a project in mind?

You tell us the problem. We design the solution, set the price, and deliver the outcome. “I’m not sure what I need” is a perfectly good starting point.

Or start with an assessment

Fixed price. No obligation. Plain English, always.

Let’s start a conversation

No commitment. We’ll come back to you within one working day, and there’s no sales follow-up unless you ask for it.

Protected by reCAPTCHA — Privacy & Terms