Responsible AI Is a Procurement Problem, Not a Philosophy Problem
Every serious company now has an AI principles document. Most of them say roughly the same things: human oversight, transparency, fairness, privacy, accountability. The documents are fine. The documents are also, by themselves, doing almost none of the work.
The last twelve months made this obvious. In case after case where an AI deployment went wrong in 2025, the principles document was in place. The missing piece was upstream: in the vendor contract, in the data-processing addendum, in the procurement checklist that the AI tool sailed through because nobody had updated it for what AI vendors actually do differently.
Responsible AI is a procurement problem. If your contracts don’t make it concrete, your principles won’t make it true.
What the gap actually looks like
Classic SaaS procurement evolved over twenty years to handle a specific class of risk: a vendor stores your data, occasionally touches it to provide a service, and is liable for security breaches. The contracts and DPAs reflect that.
AI vendors break several assumptions of that model simultaneously:
- They send your data to third-party model providers (and those providers may be in other jurisdictions).
- They may train on your data unless you actively opt out.
- They produce outputs that can affect real-world decisions and can be wrong in non-obvious ways.
- They expose tools and actions, not just content, to autonomous agents.
- Their behaviour can change materially with a silent model update.
A procurement process calibrated for “this vendor hosts our CRM” is not calibrated for “this vendor’s agent can approve expense reports.”
The clauses that actually matter in 2026
The following are the contractual hooks we now recommend buyers insist on. None of them are exotic. Most serious AI vendors will accept them — and the ones who won’t are telling you something useful.
1. Data retention and training
“Vendor will not use Customer Data, prompts, completions, or any derivative thereof to train, fine-tune, or evaluate any model, whether operated by Vendor or by a sub-processor, except with Customer’s prior written consent on a per-purpose basis.”
The default should be no training. If the vendor wants training data, they should ask for it explicitly, for a named purpose, and preferably pay for it. “Opt out via a setting in your account” is not an adequate control; the setting changes, the default flips, and nobody notices.
2. Sub-processor transparency
“Vendor will maintain and publish a current list of sub-processors, including model providers, inference providers, and data-storage providers. Vendor will provide Customer with 30 days’ notice of any addition or material change and a right to terminate without penalty if the change is unacceptable.”
The sub-processor that matters most in an AI product is the model provider. “We’re built on GPT/Claude/open-weights-model-X” is information you need, not a trade secret you should let the vendor hide.
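You can also make this clause operational on your side by snapshotting the published list and diffing it on a schedule, so a quiet change surfaces before the 30-day window closes. A minimal sketch, assuming the vendor publishes the list as JSON somewhere you can fetch it (the field names here are assumptions, not a standard):

```python
import json
from pathlib import Path

SNAPSHOT = Path("subprocessors_last_seen.json")  # our last known copy

def diff_subprocessors(current: list[dict]) -> dict:
    """Compare the vendor's published sub-processor list against our snapshot."""
    previous = json.loads(SNAPSHOT.read_text()) if SNAPSHOT.exists() else []
    prev_names = {p["name"] for p in previous}
    curr_names = {p["name"] for p in current}
    changes = {
        "added": sorted(curr_names - prev_names),    # starts the 30-day notice clock
        "removed": sorted(prev_names - curr_names),
    }
    SNAPSHOT.write_text(json.dumps(current, indent=2))
    return changes

# Example: the vendor's page now lists a new inference provider.
published = [
    {"name": "ModelCo", "role": "model provider", "region": "US"},
    {"name": "InferFast", "role": "inference provider", "region": "EU"},
]
print(diff_subprocessors(published))
```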
3. Model provenance and versioning
“Vendor will disclose the specific model or model family used to produce outputs for Customer, and will provide at least 14 days’ notice before changing the underlying model or making material changes to system prompts affecting Customer’s use case.”
Silent model swaps invalidate evaluations the customer has already run. They should be contractually uncomfortable.
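The clause is also easy to back up with a guard in your own integration. A minimal sketch, assuming the vendor reports a model identifier in response metadata (the field name and version string are hypothetical):

```python
EXPECTED_MODEL = "vendor-model-2026-01"  # the version your evaluations were run against

def check_model_version(response_metadata: dict) -> None:
    """Fail loudly if the vendor swapped the model without a notice on file."""
    reported = response_metadata.get("model", "<unreported>")
    if reported != EXPECTED_MODEL:
        raise RuntimeError(
            f"Model changed under us: expected {EXPECTED_MODEL!r}, got {reported!r}. "
            "Check for a vendor notice and re-run the evaluation suite before resuming."
        )

check_model_version({"model": "vendor-model-2026-01"})  # passes today; screams on a silent swap
```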
4. Prompt-injection and output liability
“Vendor acknowledges that its product is susceptible to prompt-injection and related adversarial inputs. Vendor will maintain commercially reasonable mitigations and will indemnify Customer against direct losses caused by outputs that Vendor’s product produced in response to such inputs, up to [cap].”
This is the clause most vendors resist. That resistance is informative. Shared responsibility with an explicit cap is a reasonable middle ground.
5. Incident disclosure with AI-specific triggers
“Vendor will notify Customer within 72 hours of any (a) data exposure, (b) sustained degradation of output accuracy materially affecting Customer’s use case, (c) confirmed prompt-injection incident affecting Customer’s tenant, or (d) material change to model behaviour, regardless of source.”
Classic incident clauses cover (a). AI-specific incidents are (b), (c), (d). Most boilerplate DPAs don’t touch them.
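Trigger (b) only bites if you can detect sustained degradation yourself. One workable approach, sketched below with illustrative thresholds: score a fixed golden set on a schedule and alert when accuracy stays below your acceptance-test baseline for several consecutive runs.

```python
BASELINE = 0.92      # accuracy measured during acceptance testing
TOLERANCE = 0.05     # what "materially affecting" means for this use case
SUSTAINED_RUNS = 3   # how many consecutive bad runs count as "sustained"

def is_sustained_degradation(recent_scores: list[float]) -> bool:
    """True if the last SUSTAINED_RUNS golden-set scores all fall below the floor."""
    floor = BASELINE - TOLERANCE
    tail = recent_scores[-SUSTAINED_RUNS:]
    return len(tail) == SUSTAINED_RUNS and all(score < floor for score in tail)

print(is_sustained_degradation([0.93, 0.91, 0.86, 0.85, 0.84]))  # True: time to invoke the clause
```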
6. Human-in-the-loop thresholds
“For actions taken by Vendor’s product on Customer’s behalf that [move funds, send external communication, modify records, or access third-party systems], Vendor will provide configurable thresholds requiring human confirmation, defaulting to confirmation required.”
The default matters. “You can turn on human approval in settings” is different from “human approval is on by default and you can turn it off for specific well-understood cases.”
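What that default looks like in practice is a policy where every agent action requires confirmation unless an explicit, reviewed exception says otherwise. A minimal sketch (action and context names are illustrative, not any vendor's API):

```python
NEVER_AUTO = {"move_funds", "send_external_communication"}  # no exceptions allowed

AUTO_APPROVED = {
    ("modify_records", "update_own_draft"),  # a narrow, well-understood case
}

def requires_human_confirmation(action: str, context: str) -> bool:
    if action in NEVER_AUTO:
        return True
    if (action, context) in AUTO_APPROVED:
        return False
    return True  # the default: a human confirms, including actions nobody anticipated

print(requires_human_confirmation("move_funds", "pay_invoice"))            # True
print(requires_human_confirmation("modify_records", "update_own_draft"))   # False
print(requires_human_confirmation("access_third_party", "read_calendar"))  # True, by default
```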
7. Auditability
“Vendor will maintain logs of inputs, outputs, and tool invocations attributable to Customer’s tenant for [period], and will make them available to Customer on reasonable request and on incident.”
If the vendor cannot reconstruct what the agent did on your behalf in a specific 30-minute window, you cannot defend yourself when something goes wrong.
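It helps to know what a useful log record looks like before you negotiate the clause. A sketch of the minimum that lets you reconstruct that 30-minute window (field names are illustrative):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AgentAuditRecord:
    tenant_id: str          # attribution: whose tenant, whose agent
    timestamp: str          # ISO 8601 in UTC, so windows line up across systems
    input_hash: str         # or the full input, depending on sensitivity
    output_hash: str
    tool_invocations: list  # every action taken, with its arguments
    model_id: str           # ties the record back to clause 3

record = AgentAuditRecord(
    tenant_id="acme-prod",
    timestamp=datetime.now(timezone.utc).isoformat(),
    input_hash="sha256:aaaa",
    output_hash="sha256:bbbb",
    tool_invocations=[{"tool": "crm.update_record", "args": {"id": "C-1042"}}],
    model_id="vendor-model-2026-01",
)
print(json.dumps(asdict(record)))  # one JSON line per event, easy to hand over on incident
```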
8. Exit and data portability
“On termination, Vendor will provide Customer with all Customer Data and logs in a machine-readable format within 30 days, and will certify deletion of all remaining copies, including those held by sub-processors, within a further 30 days.”
AI vendors in a fast-moving market go out of business or get acquired. Your exit clause is not a theoretical concern.
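When the export actually arrives, verify it before you countersign the deletion certificate. A minimal check, assuming the vendor ships a checksum manifest alongside the dump (the manifest format here is an assumption, not a standard):

```python
import hashlib
import json
from pathlib import Path

def verify_export(export_dir: Path) -> list[str]:
    """Return problems found in the vendor's export; empty means it checks out."""
    manifest = json.loads((export_dir / "manifest.json").read_text())
    problems = []
    for entry in manifest["files"]:
        path = export_dir / entry["name"]
        if not path.exists():
            problems.append(f"missing: {entry['name']}")
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest != entry["sha256"]:
            problems.append(f"checksum mismatch: {entry['name']}")
    return problems  # only accept the deletion certificate once this comes back empty
```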
Questions to ask every AI vendor
Before contract negotiation, run a short intake interview. You are looking for fluent, specific answers. Hedging is data.
- Which model(s) power this product today, and who operates them?
- Do you train on customer inputs or outputs by default? Under what conditions would you?
- How do you notify customers about model upgrades? What changed in the last one?
- What is your prompt-injection mitigation stack? What’s the residual risk?
- Walk me through your worst incident in the last 12 months. What did you change?
- If I wanted to leave in 18 months, what exactly would that look like?
- Who sees my data on your side, and under what authorisation?
- What’s your policy on sub-processor changes and customer notification?
- What’s in your logs, for how long, and how do I get them in an incident?
A vendor that can answer all nine without looking at Slack is probably safe to buy from. A vendor that treats several of them as pricing-sensitive or off-limits is telling you where their governance maturity actually sits.
The reframe
“Responsible AI” as a philosophy question — what should AI do? — is interesting and mostly intractable. “Responsible AI” as a procurement question — what does this contract commit this vendor to, on paper, today? — is tractable and cumulative. Every contract you update makes the next one easier.
If you’re working through an AI procurement now and want a second pair of eyes on the terms, that is a conversation we enjoy. It is usually cheaper than the incident it prevents.