
AI Tools and Client Data: A Technical Note for Legal Practitioners

A technical follow-up to UKUT 81 (IAC). Explains what open source and public domain mean in their correct technical and legal contexts, sets out the consumer vs enterprise distinction that matters for compliance, covers UK GDPR obligations, legal privilege risk, and how ISO 42001 governance strengthens a firm's position. Co-authored by Naz Keceli and Khiliad Ltd.

Naz Keceli, CEO & Co-Founder · 10 min read · 14 March 2026

In [2026] UKUT 81 (IAC), the Upper Tribunal addressed the use of AI tools by legal practitioners and set out important guidance on supervision, responsibility, and the handling of client data. The judgment's core principles are clear and maintain the status quo: lawyers bear full responsibility for everything they submit, supervision of AI-generated work is not optional, and client data must be handled with appropriate care.

Paragraph 21 of the judgment refers to specific AI products and uses technical terms to distinguish between them. Those terms carry precise meanings in the software industry and in data protection law that are worth setting out clearly, so that practitioners can apply the Tribunal's guidance accurately. This note does that.

It covers four areas: what the terms "open source" and "public domain" mean in their relevant technical and legal contexts; what distinction between AI tools is relevant for compliance purposes; what obligations arise under UK GDPR when AI tools process client data; and what the risk to legal privilege actually is and how it is managed.

"We also observe that to put client letters and decision letters from the Home Office into an open source AI tool, such as ChatGPT, is to place this information on the internet in the public domain… Closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks."

— [2026] UKUT 81 (IAC), para. 21


Part 1: What the Technical Terms Mean

Open source software

In the software industry, "open source" refers to software whose underlying code and, in the context of AI models, whose model weights are publicly available for anyone to inspect, modify, and redistribute. The Open Source Initiative has maintained a formal definition of this term since 1998.

Models commonly described as open source include Meta's Llama models, Mistral's base models, and DeepSeek's models: their weights are publicly accessible for inspection and reuse, although some of their licences (Llama's in particular) impose restrictions that do not strictly satisfy the OSI definition.

ChatGPT is a proprietary product owned and operated by OpenAI. Its code, model architecture, training data, and weights are not publicly disclosed. Microsoft Copilot is also a proprietary product built on OpenAI's models under a commercial licence. Both products are closed source in the technical sense.

The compliance-relevant distinction between AI tools is not between open and closed source software. It is between consumer-tier access and enterprise-licensed deployment, which is addressed in Part 2.

Public domain

In law, "the public domain" describes information that is freely available to the general public, carries no confidentiality protection, and may be reproduced or used without restriction. Information that enters the public domain in this legal sense is no longer protected by confidentiality obligations, and legal privilege over it is generally lost permanently.

Uploading a document to an AI tool does not place it in the public domain in this sense. The document is transmitted to the provider's servers for processing. It is not published, not indexed by search engines, and not accessible to other users of the service.

The genuine data protection concerns associated with uploading client material to AI tools are different: the content may be retained by the provider for a period after the session ends, it may in some account configurations be used to improve the provider's models, and it passes outside the direct control of the organisation that uploaded it. These are real and serious compliance concerns under UK GDPR. But they are distinct from publication, and the legal analysis and available mitigations differ accordingly.

Placing information in the public domain destroys privilege irreversibly. Transmitting data to a third-party processor under a contractual arrangement that includes appropriate confidentiality obligations is a different risk category, addressed in Parts 3 and 4.


Part 2: The Distinction That Matters for Compliance

The compliance-relevant distinction between AI tools is not between specific named products. It is between consumer-tier access and enterprise-licensed deployment. This distinction applies across all major AI providers and determines whether the protections required for regulated professional use are in place.

Consumer accounts are products accessed by individuals under standard consumer terms of service. They are designed for personal use. They typically do not include a Data Processing Agreement, do not offer configurable data retention, and may use interaction data to improve the provider's models. Examples include ChatGPT Free and Plus, Claude Free, Claude Pro, and Copilot Free.

Enterprise deployments are products accessed by organisations under commercial terms that include a formal Data Processing Agreement, a contractual prohibition on training the provider's models on customer data, configurable data retention including zero-retention options, data residency controls, and a certification framework. Examples include ChatGPT Enterprise, Claude Enterprise, and Microsoft 365 Copilot under a Business or Enterprise subscription.

The same product name can sit in either category depending on which tier is in use. Microsoft markets consumer and enterprise products under the same "Copilot" name. Anthropic markets consumer and enterprise products under the same "Claude" name. Brand name alone does not determine which category a product belongs to.

| Characteristic | Consumer account (any product) | Enterprise deployment (any product) |
| --- | --- | --- |
| Training on your data | Often yes, by default or opt-out | No, contractually prohibited |
| Data Processing Agreement | None, or basic consumer terms | Signed DPA, required by UK GDPR Article 28 |
| Data retention | Provider-defined, may be extended | Configurable, including zero-retention option |
| UK / EU data residency | Not guaranteed | Contractually available |
| Security certifications | Not applicable to your use | SOC 2 Type II, ISO 27001/27017/27018, CSA STAR |
| Lawful basis for UK GDPR | Unclear, consumer terms only | Processor relationship with documented instructions |
| Legal privilege risk | Elevated, no contractual confidentiality obligations | Managed, confidentiality obligations in DPA |

Note for UK firms using Microsoft 365 Copilot: A specific issue arose in January 2026 when Anthropic's Claude models were integrated into Microsoft 365 Copilot as a subprocessor. Claude is explicitly excluded from Microsoft's EU Data Boundary commitments. When M365 Copilot routes a request through Claude, that data is processed on infrastructure in the United States and is not covered by Microsoft's standard UK data residency guarantees. UK firms using M365 Copilot should verify which AI models are active in their tenant and whether their data residency obligations are met.


Part 3: UK GDPR Obligations When Using AI Tools

Client correspondence almost always contains personal data. Processing it through any AI tool engages UK GDPR, and the obligations are the same regardless of which product is used. The following five questions set out what compliance requires.

1. Lawful basis for processing

Before processing personal data through any AI tool, a lawful basis under Article 6 UK GDPR must be established and documented. For most law firm use cases involving client correspondence, this will be legitimate interests or performance of a contract. The basis must be articulated before processing begins.

2. Data Processing Agreement

Where an AI provider processes personal data on behalf of a law firm, it is acting as a data processor within the meaning of UK GDPR. Article 28 requires a written Data Processing Agreement (DPA) to be in place. This is a legal requirement, not a best practice recommendation. Consumer AI tools do not offer a DPA. Processing client personal data through a consumer AI tool without a DPA in place constitutes a breach of UK GDPR, regardless of which product is used.

3. Data Protection Impact Assessment

A DPIA is mandatory under Article 35 UK GDPR when processing is likely to result in a high risk to the rights and freedoms of individuals. Systematic processing of client data through AI tools, particularly where that data includes special category information such as health data, immigration status, or information about legal proceedings, is likely to meet this threshold. A DPIA must be completed before deployment, not after.

4. International data transfers

Transfers of personal data outside the UK require an adequacy decision, appropriate safeguards such as UK International Data Transfer Agreements, or an applicable derogation. Most major enterprise AI providers address this through Standard Contractual Clauses or the UK IDTA, but this must be verified for the specific contract and licence tier in use. It cannot be assumed from general product descriptions or marketing materials.

5. Records of Processing Activities

Under Article 30 UK GDPR, data controllers must maintain records of processing activities. Introducing any AI tool that processes client personal data creates a new processing activity that must be documented: the purposes of processing, categories of data and data subjects, recipients, international transfers, and retention periods. This obligation applies to each specific tool and tier in use.
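To make the record-keeping obligation concrete, the sketch below shows what a single Article 30 record for one AI tool might look like, expressed as a Python data structure with a simple completeness check. The field names and example values are illustrative assumptions, not a prescribed format: UK GDPR specifies what a record must contain, not how it is laid out.

```python
# Illustrative only: a minimal Article 30 record for one AI processing
# activity, with a check that every required element is documented.
# Field names and values are hypothetical, not a prescribed format.
ARTICLE_30_FIELDS = [
    "processing_activity", "purposes", "data_categories",
    "data_subject_categories", "recipients",
    "international_transfers", "retention_period",
]

ropa_entry = {
    "processing_activity": "Document summarisation via enterprise AI tool",
    "purposes": "Summarising client correspondence for case preparation",
    "data_categories": "Names, contact details, immigration status (special category)",
    "data_subject_categories": "Clients, opposing parties, Home Office caseworkers",
    "recipients": "AI provider acting as processor under a signed DPA",
    "international_transfers": "None; UK data residency contractually confirmed",
    "retention_period": "Zero retention at provider; outputs held per firm file policy",
}

def missing_fields(entry: dict) -> list[str]:
    """Return any Article 30 elements not yet documented for this activity."""
    return [f for f in ARTICLE_30_FIELDS if not entry.get(f)]

print("Missing fields:", missing_fields(ropa_entry))
```

A record like this would be repeated for each tool and tier in use, and reviewed whenever the provider's terms or the firm's use of the tool changes.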

On the Tribunal's suggestion that uploading client data to ChatGPT should be reported to the ICO: the notification obligation under Article 33 UK GDPR arises specifically in the event of a personal data breach. Using a cloud-based AI processor does not by itself trigger that obligation. The more immediate requirements are those set out above, which must be satisfied before processing begins.


Part 4: Legal Professional Privilege

Legal professional privilege attaches to confidential communications between lawyer and client made for the purpose of giving or receiving legal advice, or in connection with litigation. Privilege can be waived by voluntarily disclosing privileged material to a third party in circumstances inconsistent with maintaining confidentiality.

The question for any AI tool is whether transmitting client documents to the provider constitutes a disclosure that undermines the confidentiality on which privilege depends. The answer turns on the contractual framework governing that transmission.

Where an AI provider acts as a data processor under a properly drafted agreement that includes binding confidentiality obligations, the position is broadly analogous to disclosure to any other outsourced service provider: a cloud hosting company, a document management platform, or a translation service. Courts have generally not treated disclosure to service providers under confidentiality obligations as privilege-waiving, where the contractual protections are adequate and the disclosure was made for a legitimate purpose in the course of legal work.

Where no such agreement is in place and the provider's terms reserve rights to use or retain the data, the risk is substantially higher. There is no settled authority in England and Wales directly on AI tools and privilege waiver. The Tribunal's guidance in UKUT 81 reflects the direction of travel, and it would be prudent to treat the question as open until it is resolved.

"Before any privileged material is uploaded to any AI tool, the firm must be able to identify the contractual basis on which the provider handles that data, confirm that it includes confidentiality obligations of sufficient substance, and be satisfied that the arrangement is consistent with the duty to maintain privilege. This assessment should be made once, at firm level, as part of a tool approval process. It should not be left to individual fee-earners to determine at the point of use."


Part 5: How Proper AI Governance Strengthens a Firm's Position

Compliance with UK GDPR and the professional conduct rules sets a floor. Firms that go further and adopt a structured AI governance framework are better placed to demonstrate responsible practice to regulators, courts, insurers, and clients. ISO/IEC 42001 provides that structure.

What ISO 42001 is

ISO/IEC 42001:2023 is the first international standard for AI management systems, published by the International Organization for Standardization in December 2023. It applies to any organisation that develops, deploys, or uses AI, regardless of size or sector. For law firms, the relevant category is organisations using AI in professional practice.

The standard uses the same plan-do-check-act methodology as ISO 27001, the information security management standard that many firms and their enterprise AI providers already hold. This means firms with existing ISO 27001 compliance can build on controls already in place rather than starting from scratch.

What it requires in practice

ISO 42001 requires firms to maintain an inventory of all AI systems in use, document the purposes and risks associated with each, assign clear accountability for oversight, and establish a process for continuous review as tools and provider terms change. It covers the same ground as the compliance obligations in Parts 2, 3, and 4 of this note but organises them into an auditable management system rather than a one-time assessment.

Critically, it requires firms to address human oversight explicitly: who reviews AI-generated work, what the verification process is, and where supervisory responsibility sits. This maps directly onto what the Tribunal required in UKUT 81 and what the SRA expects under its existing supervision obligations.

Why it matters for firms facing regulatory scrutiny

A firm that can demonstrate ISO 42001 certification, or documented progress toward it, is in a materially stronger position if its AI use is questioned by a regulator, an insurer, or a court. The certification provides independent, third-party verification that the firm has done the work: it has identified its AI systems, assessed the risks, allocated accountability, and built a process for ongoing review.

It also answers the question the Tribunal implicitly raised in UKUT 81: not just whether a firm used the right tool on a particular occasion, but whether it has a governance system in place that makes responsible AI use the default rather than the exception.

Firms working toward ISO 42001 certification should start by mapping every AI tool currently in use, including any consumer-tier accounts held by individual fee-earners, against the compliance framework in Parts 2 and 3 of this note. That gap analysis is both the first step toward certification and the most immediate risk reduction measure available.
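The gap analysis described above can be sketched in code. The example below is a hypothetical illustration, not a certification tool: it maps each AI tool in a firm's inventory against the enterprise-deployment criteria from Part 2 and flags the gaps. The tool names, criteria, and field names are all assumptions made for the example.

```python
# Hypothetical gap-analysis sketch: check each AI tool in use against
# the Part 2 criteria (tier, DPA, training prohibition, retention control).
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    tier: str                  # "consumer" or "enterprise"
    has_dpa: bool              # signed Data Processing Agreement in place
    training_prohibited: bool  # provider contractually barred from training on data
    retention_configurable: bool

def compliance_gaps(tool: AITool) -> list[str]:
    """Flag the Part 2 criteria this tool fails to meet."""
    gaps = []
    if tool.tier == "consumer":
        gaps.append("consumer-tier account: no enterprise protections")
    if not tool.has_dpa:
        gaps.append("no DPA (UK GDPR Article 28 requirement)")
    if not tool.training_prohibited:
        gaps.append("provider may train models on client data")
    if not tool.retention_configurable:
        gaps.append("data retention not under firm control")
    return gaps

# Example inventory: one unapproved consumer account held by a fee-earner,
# one firm-licensed enterprise deployment.
inventory = [
    AITool("Consumer chatbot (individual account)", "consumer", False, False, False),
    AITool("Enterprise AI assistant (firm licence)", "enterprise", True, True, True),
]
for tool in inventory:
    print(tool.name, "->", compliance_gaps(tool) or "no gaps identified")
```

The point of the exercise is the inventory itself: every tool, including individually held consumer accounts, must appear in it before the gaps can be assessed.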


About Khiliad Legal

Khiliad Legal provides AI and ISO 42001 governance assessment and guidance for law firms. It maps your tools and workflows against UK GDPR, SRA obligations, and ISO 42001, identifies gaps, assigns accountability, and maintains the documentation trail that regulators and insurers require.

To find out more, visit khiliad-legal.com.


Case reference: R (on the application of Munir) v SSHD (AI hallucinations; supervision; Hamid) [2026] UKUT 81 (IAC), promulgated 17 November 2025.

This note is technical commentary intended to assist practitioners applying the judgment and does not constitute legal advice. Data protection terms referenced are those publicly available as at March 2026 and are subject to change. Practitioners should verify current terms directly with providers before making compliance decisions.
