Anthropic published new research this week measuring where AI is actually changing work, not just where it theoretically could. The findings are more nuanced than the usual headlines suggest. Some industries are already feeling real pressure. Others look protected today but may not stay that way. And the legal sector sits in a complicated middle ground.
This article breaks the research down across three areas: the roles most affected now, the gap between what AI can do and what it is actually doing, and what it all means specifically for law firms.
Based on Anthropic research published 5 March 2026.
Where AI has gone deepest: tech and office roles
The most AI-exposed jobs right now are computer programmers, customer service representatives, and data entry workers. Programmers top the list with 75% of their tasks now covered by AI in real working environments.
These roles share a common profile. The work is structured, digital, and built around language. Code can be generated and reviewed by AI. Customer queries can be triaged automatically. Data can be read and entered without a human in the loop. Large language models are well suited to exactly this kind of task.
The employment picture is already shifting. The US Bureau of Labor Statistics projects slower growth through 2034 for roles with higher AI exposure. This is not a wave of redundancies today. It is a gradual narrowing of where new jobs will be created.
The clearest early signal is in graduate hiring. Workers aged 22 to 25 are around 14% less likely to be hired into the most AI-exposed roles than they were in 2022. Firms are not laying people off. They are simply hiring fewer people into these positions in the first place.
Where AI has not yet arrived: the gap between theory and reality
The most striking finding in the research is not which jobs AI cannot do. It is how large the gap is between what AI could theoretically handle and what it is actually doing at work right now.
Even in computer and mathematics roles, where 94% of tasks are theoretically feasible for AI, actual observed coverage sits at just 33%. That is a massive gap between potential and practice, and it exists across every sector the researchers looked at.
Why? The researchers are straightforward about it. Some tasks are held back by legal or regulatory constraints. Others need software integrations that have not been built. Some require human sign-off that organisations are not yet willing to remove. And in many cases, adoption simply has not spread through organisations fast enough.
The researchers give a clear example. Authorising drug refills and providing prescription information to pharmacies is something an AI could theoretically do. But it has not been observed doing so in practice, almost certainly because of the liability and regulatory environment around pharmaceutical decisions.
This pattern runs through professional services broadly. The technology is often ready before the governance, the regulation, and the organisational appetite are. That means the jobs and sectors that look safe today are not necessarily safe. Many are simply earlier in the adoption curve.
What this means for law firms
Law is in a genuinely complicated position. The research is clear that representing clients in court remains beyond what AI can do. Advocacy, judgment, client relationships, and courtroom presence are not going anywhere soon.
But the bulk of legal work is not done in court. Document review, contract drafting, legal research, due diligence, and first-draft advice are all heavily language-based, structured, and repetitive. Those are exactly the kinds of tasks that AI is already handling in other sectors, and the gap between what AI could theoretically do in law and what it is currently doing is narrowing.
The workforce profile that should concern law firms
The research identifies a clear demographic profile for the workers most at risk: educated, well-paid, and producing knowledge work as text. Workers in the most exposed occupations earn on average 47% more than those with no AI exposure and are nearly four times as likely to hold a graduate degree.
That profile describes a significant portion of a law firm's workforce. The risk is not abstract.
Junior hiring is the most immediate pressure point
The broader trend of slowing graduate hiring in AI-exposed roles is the one that should most concern legal businesses right now. The traditional model of building a firm's expertise through large cohorts of junior associates and trainees depends on a particular kind of work existing for those people to do. If AI absorbs more of that work, the pipeline changes.
The research does not specifically measure this in law. But the direction is consistent with what the data shows across other knowledge-work sectors.
Governance is not optional
The legal sector carries professional obligations around accuracy, privilege, and accountability that do not apply in the same way to other industries. AI tools used without proper oversight create real professional and liability risks.
This means the question for law firms is not simply whether to adopt AI. It is how to do it in a way that meets regulatory requirements, protects client privilege, and maintains professional standards. That is a governance challenge as much as a technology one, and most firms are not yet treating it that way.
The honest read
Law is not immune to what this research describes. It is earlier in the same curve that has already started reshaping tech and office work. The gap between theoretical capability and actual deployment in legal settings is still wide. But as the research makes clear, that gap is the story to watch, not the current state.
The full article is available at https://www.anthropic.com/research/labor-market-impacts