AI & GenAI Software Procurement · Pillar Guide · 2026

AI and GenAI Software Procurement: The Enterprise Negotiation Guide

Artificial intelligence and generative AI have become the fastest-growing categories of enterprise software spend. OpenAI Enterprise, Microsoft Copilot, Google Gemini, AWS Bedrock, Anthropic Claude, and dozens of specialised AI platforms now sit in every enterprise software portfolio — often acquired through rapid adoption pilots that bypassed normal procurement controls. This guide covers everything enterprise procurement and legal teams need to know about AI software contracting: consumption models, data rights, IP ownership, SLA standards, vendor lock-in risks, and the 15 negotiation tactics that protect your organisation. For help finding specialist advisors, see our Best GenAI Negotiation Consulting Firms ranking.

Editorial Disclosure: Rankings and recommendations are produced independently by enterprise software licensing practitioners. Full disclosure →
  • $100B+ · Enterprise AI Spend in 2026
  • Token · Primary AI Consumption Metric
  • IP/Data · Most Critical Contract Issue
  • 15 Tactics · Covered in This Guide

The AI Procurement Landscape in 2026

Enterprise AI procurement has matured dramatically since the initial ChatGPT wave of 2023. What began as shadow IT experimentation has become a governed procurement category in most Fortune 1000 organisations — though the governance frameworks are often still catching up with the rate of adoption. Three tiers of AI vendor have emerged:

  • Foundation model providers: OpenAI, Anthropic, Google DeepMind, and Meta. These organisations develop and operate the large language models (LLMs) that power most enterprise AI applications. Enterprise procurement typically involves API access agreements, volume commitments, and enterprise-grade terms for data handling and privacy.
  • Platform providers with embedded AI: Microsoft (Copilot for M365, Azure OpenAI), Google (Gemini for Workspace, Vertex AI), Salesforce (Einstein/Agentforce), ServiceNow (Now Intelligence), SAP (Joule). These vendors embed AI capabilities into platforms enterprises already use, billing AI as an add-on or uplift to existing licences.
  • Specialised AI application vendors: Vertical AI applications (legal, finance, healthcare, code), AI infrastructure (vector databases, orchestration, fine-tuning services), and AI agents (RPA with AI, autonomous agents). This category is the most fragmented and requires the most individual contract scrutiny.

Each tier presents distinct procurement challenges. Foundation model API contracts require deep attention to data rights and token pricing. Platform-embedded AI requires careful analysis of whether the AI add-on value justifies the premium over the base platform. Specialised AI vendors often have weaker contract infrastructure and less established enterprise terms — creating both negotiation opportunity and risk.

2026 Market Context

AI pricing has dropped significantly since 2023 as model efficiency has improved. GPT-4-class capabilities that cost $30/million tokens in early 2023 are available for under $5/million tokens in 2026, with frontier models replacing them at the old price points. This deflationary dynamic means that multi-year AI contracts signed in 2023–2024 at then-current pricing are now significantly above market. If you signed multi-year AI contracts more than 18 months ago, benchmark them against current pricing immediately.

AI Pricing Models: Tokens, Seats, and Commits

Understanding AI pricing models is the foundation of AI procurement. Unlike traditional software licences (per-user seats, per-processor), AI pricing introduces consumption models that are difficult to forecast and can scale unexpectedly with usage.

Token-Based Pricing (API Access)

Most foundation model APIs charge per token — the basic unit of text processed by the model. Tokens are approximately 4 characters or 0.75 words in English. A 1,000-word document contains roughly 1,300 tokens. Token pricing is typically split between input tokens (text sent to the model) and output tokens (text generated by the model), with output tokens usually priced higher — often 3–5x the input token price.
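As a rough sanity check on these numbers, per-call cost can be estimated directly from word counts. The sketch below uses illustrative per-million-token prices (output at 5x input), not any vendor's actual rates:

```python
# Rough per-call cost estimator for a token-priced LLM API.
# Prices are illustrative assumptions, not any vendor's actual rates.

WORDS_PER_TOKEN = 0.75  # ~4 characters or ~0.75 English words per token

def estimate_call_cost(input_words, output_words,
                       input_price_per_m=3.00,     # $ per million input tokens (assumed)
                       output_price_per_m=15.00):  # $ per million output tokens (assumed, 5x input)
    """Estimate the dollar cost of a single API call from word counts."""
    input_tokens = input_words / WORDS_PER_TOKEN
    output_tokens = output_words / WORDS_PER_TOKEN
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# A 1,000-word document summarised into a 150-word answer:
cost = estimate_call_cost(input_words=1_000, output_words=150)
print(f"~${cost:.4f} per call; ~${cost * 100_000:,.0f} for 100k calls/month")
```

Multiply per-call estimates by realistic monthly call volumes before comparing pricing models; small per-call figures become material at production scale.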

Token pricing varies by model tier. Smaller, faster models (e.g., GPT-4o-mini, Claude Haiku) cost significantly less per token than frontier models (GPT-4o, Claude Opus, Gemini Ultra). AI application architects have strong reasons to route queries to the most cost-efficient model capable of handling the task, but procurement must ensure contracts allow model substitution without triggering new commercial terms.

Per-Seat Pricing (Embedded AI Add-Ons)

Microsoft Copilot for M365, Google Gemini for Workspace, and Salesforce Einstein add-ons are priced per user per month on top of the base platform licence. This pricing model is more predictable than token consumption but introduces its own risks: organisations often deploy AI add-ons across their entire user base without assessing actual adoption, leading to significant shelfware. The per-seat AI premium typically adds 20–50% to the base platform cost for the users to whom it is assigned.

Committed Spend Models

AWS Bedrock, Azure OpenAI Service, and Google Vertex AI offer volume commitment models analogous to cloud reserved instance or committed use discounts. You commit to a minimum monthly or annual AI spend in exchange for a discount on per-token or per-API-call pricing. Committed spend models typically offer 10–30% discounts versus pay-as-you-go rates for meaningful volume commitments. The risk is committing to a model that becomes obsolete or oversized before the commitment period expires.
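One way to reason about commitment sizing: under a simple model where you pay the greater of the commit or your discounted consumption, the break-even utilisation falls out directly from the discount level. A sketch, with illustrative discount tiers:

```python
# Break-even utilisation for a committed-spend discount, under a simple model:
# with a commit you pay max(commit, usage * (1 - discount)), while
# pay-as-you-go pays usage at list price. The commit wins once list-price
# usage exceeds the commit amount, i.e. once discounted consumption reaches
# (1 - discount) of the commit. Discount tiers below are illustrative.

def breakeven_utilisation(discount: float) -> float:
    """Fraction of the commit you must consume (at discounted rates)
    for the commitment to beat pay-as-you-go."""
    return 1.0 - discount

for d in (0.10, 0.20, 0.30):
    print(f"{d:.0%} discount -> break-even at {breakeven_utilisation(d):.0%} of commit")
```

The larger the discount, the more forecast error the commit can absorb: a 30% discount still beats pay-as-you-go at 70% utilisation, while a 10% discount leaves almost no headroom for over-commitment.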

Platform Subscription Models

Some AI platforms charge a flat platform subscription fee that includes a token or usage allowance, with overage charges for consumption above the included allocation. This model — common in specialised AI applications — offers predictability within the included allowance but reintroduces consumption risk once usage exceeds it. Ensure you understand the included allocation and overage rates before committing, as overage pricing is typically far higher than the per-token rate implied by the base subscription.

Pricing Model | Primary Vendors | Forecast Difficulty | Negotiation Lever
Per-token (API) | OpenAI, Anthropic, Cohere, AI21 | HARD | Volume commits, model tier routing
Per-seat add-on | Microsoft Copilot, Google Gemini, Salesforce Einstein | EASY | Phased rollout, adoption gating
Cloud committed spend | AWS Bedrock, Azure OpenAI, Google Vertex AI | MODERATE | Commit level, term length, model flexibility
Platform subscription + overage | Specialised AI apps, vertical AI | MODERATE | Included allocation, overage cap
Usage-based (API calls) | Computer vision APIs, speech APIs, search AI | HARD | Volume tiers, monthly caps

Token Pricing in Depth: What You Need to Know

Token pricing is the most critical — and most misunderstood — dimension of AI API procurement. Before signing any token-priced AI agreement, ensure you understand the following.

Input vs. Output Token Cost Asymmetry

Output tokens are consistently priced 3–5x higher than input tokens across all major providers. This matters because the ratio of input to output tokens varies significantly by use case. A document summarisation workflow (long input, short output) has a very different cost profile than a content generation workflow (short prompt, long output). Calculate your expected input/output ratio for each AI use case before estimating costs from the advertised per-token price.
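To illustrate the asymmetry, here are two hypothetical workloads with the same total monthly token volume but opposite input/output mixes, priced at assumed rates (output billed at 5x input):

```python
# Two hypothetical workloads with identical total token volume (100M/month)
# but opposite input/output mixes. Prices are illustrative assumptions, with
# output billed at 5x the input rate.
IN_PRICE, OUT_PRICE = 3.00, 15.00  # $ per million tokens (assumed)

def monthly_cost(input_m_tokens, output_m_tokens):
    """Monthly cost given input/output volumes in millions of tokens."""
    return input_m_tokens * IN_PRICE + output_m_tokens * OUT_PRICE

summarisation = monthly_cost(input_m_tokens=90, output_m_tokens=10)  # long in, short out
generation = monthly_cost(input_m_tokens=10, output_m_tokens=90)     # short in, long out

print(summarisation)  # 420.0
print(generation)     # 1380.0 -- over 3x the cost for the same 100M tokens
```

The same 100 million tokens cost more than three times as much when the mix tilts toward output, which is why a single blended per-token figure is a poor basis for budgeting.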

Context Window Costs

Modern LLMs support context windows of 128K–1M tokens. Longer context windows allow you to include more background information in each API call — but every token in the context window is billed as an input token on every API call. A workflow that repeatedly passes a 50,000-token context document with each API call is consuming 50,000 input tokens per call regardless of the user's actual query length. Context window consumption can drive token costs 10–50x higher than naive per-query estimates suggest.
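A quick sketch of that multiplier effect, using an assumed input price:

```python
# Effect of resending a large context document with every call.
# The input price is an illustrative assumption.
IN_PRICE_PER_M = 3.00  # $ per million input tokens (assumed)

context_tokens = 50_000  # reference document resent with each API call
query_tokens = 1_000     # the user's actual question

naive_cost = query_tokens * IN_PRICE_PER_M / 1_000_000
actual_cost = (context_tokens + query_tokens) * IN_PRICE_PER_M / 1_000_000

print(f"naive per-call input cost:  ${naive_cost:.4f}")
print(f"actual per-call input cost: ${actual_cost:.4f} "
      f"({actual_cost / naive_cost:.0f}x the naive estimate)")
```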

Prompt Caching

Several providers (Anthropic, OpenAI) offer prompt caching mechanisms that reduce input token costs for repeated context elements — typically at a 50–90% discount on cached input tokens. If your AI application uses the same system prompt or context document across many queries, prompt caching can substantially reduce token costs. Negotiate for prompt caching inclusion in enterprise agreements and confirm which model tiers support it.
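The potential saving can be estimated before negotiation. The sketch below assumes a 90% cached-token discount and illustrative prices and volumes; confirm your vendor's actual caching terms and eligible model tiers:

```python
# Estimated monthly input-token cost with and without prompt caching.
# The 90% cached-token discount, prices, and volumes are all assumptions.
IN_PRICE_PER_M = 3.00   # $ per million input tokens (assumed)
CACHE_DISCOUNT = 0.90   # cached input tokens billed at 10% of list (assumed)

calls_per_month = 500_000
shared_context_tokens = 8_000   # system prompt + reference doc reused per call
unique_tokens_per_call = 500    # user-specific portion of each prompt

def monthly_input_cost(cached: bool) -> float:
    shared_rate = IN_PRICE_PER_M * (1 - CACHE_DISCOUNT) if cached else IN_PRICE_PER_M
    shared = calls_per_month * shared_context_tokens * shared_rate
    unique = calls_per_month * unique_tokens_per_call * IN_PRICE_PER_M
    return (shared + unique) / 1_000_000

print(monthly_input_cost(cached=False))  # 12750.0
print(monthly_input_cost(cached=True))   # ~1950 -- the shared context dominates
```

When a shared context dominates input volume, as here, caching changes the economics of the whole use case, which is why it belongs in the commercial negotiation rather than being treated as a technical detail.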

Model Pricing Trajectory

AI model pricing has followed a consistent deflationary pattern: frontier model prices at launch, then price reductions of 50–80% within 12–18 months as efficiency improves and competition increases. OpenAI's GPT-3.5 pricing dropped from $0.020/1K tokens in 2023 to $0.002/1K tokens by late 2024 — a 90% reduction. Fixed pricing in a multi-year AI contract therefore carries asymmetric risk: if the market keeps falling you overpay, and the locked-in rate only pays off in the unlikely event that pricing flattens. Use price-most-favoured-customer clauses to protect against paying above the market rate.

Data Rights and Training Restrictions

Data rights are the most commercially and legally sensitive aspect of AI vendor contracts. The fundamental question: when you send data to an AI vendor's API, what rights does that vendor acquire over your data?

Training Data Rights: The Core Issue

The default terms of many consumer AI services allow the provider to use submitted data to train and improve their models. Enterprise contracts should — and typically do — include an explicit, unconditional prohibition on the provider using your data for model training; verify this clause is present in any enterprise AI agreement. Opt-out mechanisms that require affirmative action, and prohibitions with exceptions (e.g., "unless anonymised"), are insufficient — require a categorical prohibition with no exceptions.

Ask specifically about fine-tuned model training. If you use the vendor's fine-tuning service to adapt a model to your specific use case, understand who owns the resulting fine-tuned model weights and whether the vendor retains any rights to the adaptations or training data used in fine-tuning.

Data Retention and Deletion

AI API vendors typically retain your input and output data for a period after each API call — nominally for abuse detection and service improvement. Enterprise agreements should specify a maximum retention period (typically 0–30 days for enterprise, vs. 30–90 days for consumer) and include a right to request deletion of retained data on reasonable notice. For regulated industries (financial services, healthcare, legal), retention periods may need to be zero — data is processed and immediately discarded without logging.

Data Residency and Sovereignty

AI API calls are processed in the vendor's cloud infrastructure, which may be located in any jurisdiction unless contractually restricted. For GDPR compliance, EU data subjects' personal data must remain within the EU or be transferred under appropriate mechanisms (SCCs, adequacy decisions). For certain regulated sectors (banking, healthcare, government), there may be absolute requirements for processing within specific national boundaries.

Major providers offer data residency options at additional cost. Azure OpenAI Service supports EU-only data residency through Azure EU data boundaries. OpenAI Enterprise offers data processing agreements with EU processing options. Google Vertex AI supports regional data processing configurations. Negotiate data residency requirements upfront rather than discovering they require a premium add-on after contract signature.

IP Ownership in AI Contracts

AI-generated outputs raise novel IP ownership questions that standard software licence agreements have not historically needed to address. Enterprise procurement teams must ensure their AI contracts explicitly address output ownership.

AI Output Ownership

The standard position of major AI vendors is that the enterprise customer owns the outputs generated by the AI in response to their prompts. This is generally the correct starting point — but verify it is stated explicitly rather than implied. The ownership clause should be clear that the vendor has no licence to use your outputs for any purpose (including training) without your consent.

AI-Assisted Work Product: Third-Party IP Risk

AI models may reproduce training data verbatim in outputs — including material subject to third-party copyright. Major AI providers include indemnification clauses that protect enterprise customers from third-party IP claims arising from AI-generated outputs, subject to limitations. Understand the scope and limits of any IP indemnification before relying on it — most have carve-outs for customer-provided prompts that direct the model to reproduce specific content.

Model Fine-Tuning IP

When you fine-tune an AI model on your proprietary data, the resulting model weights may be considered joint IP between you and the vendor, depending on contract terms. Negotiate explicitly that: (1) you own any fine-tuned model you create using your data, (2) the vendor has no licence to the fine-tuned model weights beyond providing the service, and (3) on termination, you receive the fine-tuned model weights in a portable format.

AI SLA Standards and Uptime Requirements

AI platform SLAs are less mature than traditional enterprise software SLAs. Many AI APIs launched with consumer-grade availability expectations and are still building out enterprise-grade reliability infrastructure. Before accepting any AI vendor's standard SLA, benchmark it against your application's actual requirements.

SLA Metric | Typical Consumer/Startup AI | Enterprise AI Standard | Mission-Critical Requirement
Monthly uptime | 99.0% (~8h downtime/month) | 99.9% (~44 min/month) | 99.95%+ (~22 min/month)
API latency (P95) | Not specified | Under 5 seconds | Under 2 seconds
Throughput guarantee | Best effort; rate limited | Reserved capacity tier | Dedicated capacity allocation
Credit for downtime | None, or token credits only | Service credit percentage | Service credits + financial remedy
Incident notification | Status page only | Direct notification + root cause | Real-time + SLA report

Throughput and Rate Limiting

Most AI APIs enforce rate limits — maximum tokens per minute (TPM) or requests per minute (RPM). Consumer-tier rate limits are typically 10,000–100,000 TPM. Enterprise agreements include higher or dedicated rate limits that guarantee your application access to model capacity even during peak demand periods. Throughput guarantees are particularly critical for customer-facing AI applications where rate limiting directly impacts user experience.
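On the application side, a client-side throttle that tracks tokens consumed over a rolling 60-second window is a common way to stay under a contracted TPM ceiling rather than relying on server-side rejections. A minimal sketch (it assumes each request's token count is estimated up front and never exceeds the limit itself):

```python
# Minimal client-side tokens-per-minute throttle: blocks a request until it
# can be sent without pushing the rolling 60-second total over the contracted
# TPM ceiling. Assumes per-request token counts are estimated up front and
# are never larger than the limit itself.
import time
from collections import deque

class TpmThrottle:
    def __init__(self, tpm_limit: int):
        self.tpm_limit = tpm_limit
        self.window = deque()  # (timestamp, tokens) spent in the last 60s
        self.used = 0

    def acquire(self, tokens: int) -> None:
        """Block until `tokens` can be spent within the rolling-minute limit."""
        while True:
            now = time.monotonic()
            # Drop spend that has aged out of the 60-second window.
            while self.window and now - self.window[0][0] > 60:
                _, expired = self.window.popleft()
                self.used -= expired
            if self.used + tokens <= self.tpm_limit:
                self.window.append((now, tokens))
                self.used += tokens
                return
            time.sleep(0.1)  # wait for older spend to age out

throttle = TpmThrottle(tpm_limit=100_000)
throttle.acquire(1_500)  # call before each API request with its token estimate
```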

Negotiate for dedicated capacity allocation — a reserved computational resource tier that ensures your workload is not competing with other customers for model access. This is more expensive than shared capacity but provides meaningful SLA guarantees. For high-stakes applications (legal review, financial analysis, healthcare), dedicated capacity is non-negotiable.

AI Vendor Lock-In: Risk Assessment

AI vendor lock-in is a more complex risk than traditional software lock-in because it manifests at multiple levels: model-specific prompt engineering, fine-tuned model weights, proprietary embedding formats, and data stored in vendor-specific vector databases.

Prompt Engineering Lock-In

Prompts engineered for one model (e.g., GPT-4) may not perform identically on a different model (e.g., Claude 3 Opus, Gemini Ultra). Organisations that invest heavily in prompt engineering for a specific model build tacit switching costs. Mitigate this by testing prompts across multiple models periodically and maintaining model-agnostic prompt design principles where performance requirements allow.

Fine-Tuned Model Lock-In

Fine-tuned models trained on vendor infrastructure may not be portable. If you fine-tune OpenAI's GPT-4 and later decide to switch to Anthropic Claude, your fine-tuning investment cannot be transferred — you would need to rebuild fine-tuning on the new platform. Negotiate for model weight portability rights in any fine-tuning service agreement.

Embedding and Vector Database Lock-In

Many enterprise AI RAG (Retrieval Augmented Generation) architectures use the AI vendor's embedding models to generate vector representations of enterprise data stored in vector databases. Embeddings from different models are not interchangeable — switching the embedding model requires re-indexing all stored data. This creates switching costs proportional to the volume of data in your vector index. Mitigate by using open embedding models (Sentence Transformers, Instructor) or negotiating embedding model portability.

Privacy, GDPR, and Compliance Requirements

AI vendor contracts must satisfy a growing body of AI-specific regulation as well as existing data protection frameworks.

GDPR Requirements for AI Processing

Under GDPR, AI vendors processing EU personal data act as data processors and must sign a Data Processing Agreement (DPA) with the enterprise customer. The DPA must specify: processing purposes, data categories, technical and organisational security measures, sub-processor lists, and audit rights. Standard enterprise AI DPAs from major vendors are available but must be reviewed against your specific data flows — cookie-cutter DPAs may not accurately describe the processing that actually occurs.

EU AI Act Compliance

The EU AI Act creates obligations for both AI providers and enterprises deploying AI systems in the EU. High-risk AI applications (HR screening, credit scoring, biometric identification) face additional requirements for transparency, human oversight, and technical documentation. Ensure your AI contracts include warranties from vendors that their AI systems meet EU AI Act requirements for your intended use case, and include provisions for ongoing compliance updates as implementation guidance evolves.

Sector-Specific Compliance

Financial services (FCA, SEC, FINRA), healthcare (HIPAA, NHS DSP Toolkit), and government (IL2/IL4/IL6 for UK government) all have sector-specific AI compliance requirements that exceed standard enterprise data protection. AI vendor contracts for regulated sector use cases must include specific compliance attestations, audit access rights, and incident reporting obligations tailored to the relevant regulatory framework.

Vendor-by-Vendor Comparison: Key Contract Issues

Vendor | Pricing Model | Data Training Default | Key Contract Concern | Negotiability
OpenAI Enterprise | Token-based; volume commits | Opt-out in enterprise | Model deprecation timeline; version lock | MODERATE
Microsoft Copilot (M365) | Per-seat add-on ($30/user/month) | No training on M365 data | ROI measurement; adoption requirements | MODERATE
Azure OpenAI Service | Token + provisioned throughput | No training (enterprise) | PTU (provisioned throughput) pricing; region availability | HIGH
Google Gemini for Workspace | Per-seat add-on | No training (enterprise) | Feature parity with M365 Copilot | MODERATE
Google Vertex AI | Token-based + committed use | No training (enterprise) | CUD discount levels; model access | HIGH
AWS Bedrock | Token-based; on-demand + commits | Isolated by default | Model availability; cross-AWS spending leverage | HIGH
Anthropic Claude Enterprise | Token-based; volume commits | No training (enterprise) | Context window pricing; enterprise SLA maturity | MODERATE
Salesforce Einstein/Agentforce | Per-seat; usage credit model | No training (enterprise) | Credit consumption rates; agent action pricing | MODERATE

15 AI Procurement Negotiation Tactics

Tactic 1: Require a Consumption Proof of Concept Before Committing

Never sign a committed AI spend agreement without a validated consumption model. Run a 30-day paid POC at your expected production volume and measure actual token consumption across your use cases. AI consumption is notoriously difficult to forecast theoretically — empirical measurement is the only reliable basis for volume commitments. Use POC data to negotiate the right commitment tier: not so low that you forfeit discount levels, not so high that you are locked into overages.

Tactic 2: Negotiate Price-Most-Favoured-Customer Clauses

AI pricing is deflating rapidly. Insert a Most Favoured Customer (MFC) clause that guarantees you automatically receive the benefit of any publicly available pricing reductions during your contract term. Without this clause, you may pay 2023 pricing for AI capabilities that the market is delivering at 50% of that cost by 2025. See our guide on negotiating most-favoured-customer clauses for implementation details.

Tactic 3: Demand Model Version Continuity

AI models are deprecated and replaced on 12–24 month cycles. If your AI application is built on GPT-4-turbo and that model is deprecated, your application may need re-testing and re-optimisation on the successor model. Negotiate a minimum model availability period — typically 12 months post-announcement of deprecation — so you have adequate time to adapt. Also negotiate access to the previous model version during the migration window.

Tactic 4: Negotiate Absolute Training Data Prohibitions

Do not accept language that says "we will not use your data for training unless required by law" or "unless anonymised." These qualifications undermine the protection. Require categorical language: "Provider will not use Customer Data, including Inputs and Outputs, for any purpose including training, fine-tuning, evaluating, or improving AI models." Have legal counsel review the clause for any conditions or exceptions that could be exploited.

Tactic 5: Secure Fine-Tuned Model Portability

If you intend to fine-tune AI models, negotiate portability of the resulting model weights before you make any fine-tuning investment. The clause should specify: you own the fine-tuned model, you can download the model weights in a standard format on request, and the vendor's licence to host and serve the fine-tuned model is limited to providing the service to you. Do not invest in fine-tuning without this protection.

Tactic 6: Require Reserved Throughput, Not Best-Effort

Best-effort API throughput is insufficient for any customer-facing or operationally critical AI application. Negotiate for reserved or provisioned throughput — a contractually guaranteed rate limit that ensures your application has access to model capacity even during peak periods. For Azure OpenAI, this is Provisioned Throughput Units (PTUs). For OpenAI, it is the Enterprise tier's rate limit commitments. Budget for the premium — typically 50–100% more than on-demand pricing — and treat it as an SLA cost.

Tactic 7: Play Cloud AI Against Direct Provider

Most major AI models are available both directly from the foundation model provider (OpenAI, Anthropic) and through hyperscaler cloud AI services (AWS Bedrock, Azure OpenAI, Google Vertex AI). If you have an existing AWS EDP, Azure MACC, or GCP committed spend, using AI through your cloud provider allows the spend to count against your existing cloud commitment — often delivering a 15–25% effective discount through commitment drawdown. Compare direct provider pricing against cloud-hosted pricing net of commitment drawdown before selecting a commercial pathway.
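The comparison is simple arithmetic once you assign a value to commitment drawdown. The figures below are illustrative assumptions, not quoted rates:

```python
# Direct-provider pricing vs cloud-hosted pricing net of commitment drawdown.
# All figures are illustrative assumptions.

def effective_cloud_cost(list_cost: float, drawdown_value: float) -> float:
    """AI spend that draws down an existing cloud commitment is effectively
    discounted, because it consumes commit dollars you would otherwise have
    to spend elsewhere (or forfeit)."""
    return list_cost * (1 - drawdown_value)

direct = 100_000                                 # $/yr direct from the model provider
via_cloud = effective_cloud_cost(110_000, 0.20)  # 10% higher list, 20% drawdown value

print(direct, round(via_cloud))  # cloud route cheaper despite the higher list price
```

In this example the cloud route wins even with a 10% higher list price; the conclusion flips if your cloud commitment is already fully consumed, in which case the drawdown value is zero.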

Tactic 8: Negotiate AI-Specific Data Processing Agreements

Standard DPAs from AI vendors are written to minimise vendor obligations and maximise operational flexibility. For enterprise deployments, negotiate AI-specific DPA provisions covering: zero-retention option for sensitive use cases, sub-processor list with advance notification of changes, audit rights for AI processing, and specific guarantees around model training restrictions. Do not accept a DPA that was written for a different use case category than yours.

Tactic 9: Build in Competitive Substitution Rights

AI capability is advancing rapidly. A model that is state-of-the-art today may be outperformed by a competitor model within 12 months. Negotiate the right to substitute a different AI model provider within your agreement's commercial framework — for example, the right to shift a portion of your committed spend from GPT-4 class models to a competitor model if the competitor demonstrates superior performance on a defined benchmark for your use case. Few vendors will agree to open-ended substitution rights, but a performance-benchmarked substitution mechanism is achievable.

Tactic 10: Require Explainability and Audit Trail Provisions

For AI systems used in regulated decisions (credit scoring, medical diagnosis, HR screening), require vendors to provide explainability features — the ability to document why the AI produced a specific output. This is both a regulatory requirement under GDPR Article 22 (automated decision-making) and the EU AI Act's high-risk system requirements, and a sensible governance requirement for any consequential AI use. Vendors who cannot contractually commit to explainability capabilities for regulated use cases should not be deployed for those use cases.

Tactic 11: Include AI Incident Response Obligations

AI models can produce harmful, biased, inaccurate, or confidential outputs unexpectedly. Your AI contract should specify vendor obligations for: notifying you of known model failures or harmful output patterns that could affect your application, providing temporary model rollback capability when a new model version causes degraded output quality, and participating in joint incident investigation when AI outputs cause third-party harm. These provisions are not standard in AI vendor contracts and must be negotiated explicitly.

Tactic 12: Negotiate Overage Caps and Spend Alerts

AI token consumption can spike unexpectedly — through application bugs that generate infinite loops, through prompt injection attacks that inflate context windows, or through legitimate usage spikes. Negotiate hard monthly spend caps that automatically throttle consumption when reached, plus real-time spend alerting at 70% and 90% of monthly budget thresholds. These controls prevent catastrophic budget overruns from AI consumption anomalies.
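The threshold logic itself is trivial; the real work is wiring it to your billing telemetry and throttling mechanism. A minimal sketch of the guardrail described above:

```python
# Spend-guardrail sketch: alerts at 70% and 90% of monthly budget, hard
# throttle at 100%. Wiring the returned action to real alerting and
# throttling infrastructure is left out; thresholds mirror the tactic above.

def check_spend(month_to_date: float, budget: float) -> str:
    """Return the guardrail action for the current month-to-date spend."""
    if month_to_date >= budget:
        return "throttle"   # stop non-critical consumption immediately
    if month_to_date >= 0.9 * budget:
        return "alert-90"
    if month_to_date >= 0.7 * budget:
        return "alert-70"
    return "ok"

print(check_spend(7_200, 10_000))   # alert-70
print(check_spend(9_500, 10_000))   # alert-90
print(check_spend(10_400, 10_000))  # throttle
```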

Tactic 13: Secure Contractual Right to Benchmark

Include a benchmarking right that allows you to compare your AI vendor pricing against published market rates and, if your pricing is above the defined benchmark threshold, to trigger a commercial renegotiation. Link this to a price protection mechanism that commits the vendor to match market pricing within 90 days of benchmarking results being presented. See our guide on benchmarking clauses in software contracts for implementation language.

Tactic 14: Require Security Attestations for Your AI Use Case

AI vendor security attestations (SOC 2 Type II, ISO 27001, FedRAMP for government) may not cover the specific AI service components you are using. Request a service-specific security assessment and, for sensitive use cases, consider a vendor security questionnaire that addresses AI-specific risks: model inference infrastructure isolation, prompt injection attack controls, output filtering mechanisms, and security of fine-tuned model storage. Generic SOC 2 attestations do not address AI-specific attack surfaces.

Tactic 15: Include Exit Provisions and Data Return Rights

Exit provisions are particularly important in AI contracts because of the data portability and model portability risks described above. Ensure your contract includes: right to receive all customer data (inputs, outputs, fine-tuning datasets) in a machine-readable format within 30 days of termination; right to receive fine-tuned model weights in a portable format; vendor's obligation to delete all customer data within 60 days of termination, with written confirmation; and a 90-day post-termination grace period during which the service remains available for data export. For a comprehensive checklist, see our software contract negotiation checklist which covers AI-specific provisions alongside standard commercial terms.

Procuring or renewing AI platforms in 2026?

Our recommended firms have specialist expertise in AI contract negotiation, data rights, and token pricing benchmarks.

Get Expert Help →

AI Procurement Article Cluster

This pillar guide is supported by a cluster of deep-dive articles on specific AI procurement topics. Each sub-article provides the detailed analysis needed for specific AI vendor negotiations or contract issues:

How to Negotiate Enterprise AI Platform Contracts

Tactics for negotiating OpenAI, Azure OpenAI, AWS Bedrock, and Google Vertex AI enterprise agreements.

OpenAI Enterprise Licensing Guide

Complete guide to OpenAI Enterprise terms, token pricing, data protections, and negotiation tactics.

Microsoft Copilot vs Google Gemini: Enterprise TCO

Side-by-side cost comparison of Microsoft M365 Copilot and Google Gemini for Workspace.

AI Token Pricing: Understanding Consumption Models

Deep dive into input/output token pricing, context window costs, and volume commit strategies.

Data Privacy Clauses in AI Vendor Contracts

Model contract language for AI data processing agreements, training restrictions, and GDPR compliance.

AI Vendor Lock-In: Maintaining Portability

Strategies for prompt portability, embedding migration, fine-tuned model portability, and multi-vendor architecture.

AWS Bedrock vs Azure OpenAI: Licensing Comparison

Commercial and technical comparison of the two leading enterprise AI cloud services.

Google Vertex AI Pricing and Negotiation Guide

Vertex AI pricing models, CUD discounts, and negotiation strategies for Google Cloud AI.

Enterprise AI Governance: Contract Requirements

EU AI Act, GDPR, sector-specific compliance, and audit rights for enterprise AI deployments.

How to Benchmark AI Platform Pricing

Methodology for benchmarking AI token and seat pricing against market rates across major providers.

Anthropic Claude Enterprise: Licensing Guide

Claude Enterprise terms, volume pricing, data handling, and negotiation strategies.

AI SLA Requirements: Uptime and Performance

What SLA standards to demand from AI vendors and how to enforce them contractually.

Building vs Buying AI: Enterprise TCO Analysis

Framework for assessing build vs buy decisions for AI capabilities across different enterprise use cases.

Negotiating AI Training Data Rights

Model contract language for training data rights, IP ownership, and fine-tuning agreements.

Frequently Asked Questions

Is AI software procurement significantly different from traditional software procurement?
Yes — in several important ways. AI introduces consumption-based pricing that is inherently more difficult to forecast than per-seat licences. AI contracts raise novel data rights questions (training restrictions, output ownership) that traditional software agreements did not address. AI vendor SLAs are typically less mature than established enterprise software vendors. And AI creates new compliance obligations under the EU AI Act and sector-specific AI regulations that standard software procurement processes do not cover. Standard procurement playbooks require significant AI-specific adaptations.
How do I estimate AI token consumption for budgeting purposes?
Run a 30-day production-scale POC and measure actual token consumption. Theoretical estimates based on word counts and average conversation lengths consistently underestimate real consumption by 2–5x because they fail to account for context window usage, system prompt tokens, chain-of-thought reasoning tokens, and retry consumption. If a POC is not feasible before contract signing, negotiate a short-term pilot period with no commitment and measure before committing to volume.
Can I negotiate OpenAI's standard Enterprise Agreement terms?
Yes — OpenAI negotiates its Enterprise Agreement for customers with significant volume commitments (typically $500K+ annual). Negotiable terms include data retention, training restrictions (standard on enterprise but worth confirming), pricing and volume tiers, SLA uptime levels, rate limits, model version access and deprecation timelines, and DPA provisions. OpenAI's standard EA is a starting point, not a take-it-or-leave-it offer for enterprise customers.
What is the difference between Azure OpenAI and OpenAI direct for enterprise procurement?
Azure OpenAI runs the same underlying OpenAI models but within Microsoft's Azure infrastructure, covered by Microsoft's standard enterprise terms and DPA. This provides the security, compliance, and enterprise commercial framework that Azure enterprise customers are already accustomed to. Key advantages of Azure OpenAI include: it counts against MACC spend commitments, it benefits from Azure enterprise data residency and compliance features, and Microsoft's enterprise sales team provides procurement support. Direct OpenAI Enterprise provides access to OpenAI's latest models (sometimes ahead of Azure availability) with OpenAI's own enterprise terms. Many enterprises use both: Azure OpenAI for production workloads requiring Azure compliance integration, OpenAI direct for access to newest models during development.
How does the EU AI Act affect enterprise AI procurement?
The EU AI Act creates specific obligations for "deployers" of AI systems in the EU — which includes enterprises using AI platforms for business purposes. For high-risk AI applications (listed in Annex III), enterprises must conduct fundamental rights impact assessments, maintain use logs, implement human oversight, and ensure AI system transparency to affected individuals. Procurement contracts must include vendor warranties that their AI systems include the technical capabilities needed to meet these deployer obligations — including explainability APIs, audit logs, and human override mechanisms. The Act is being phased in through 2026–2027; procurement teams should require AI vendors to provide an EU AI Act compliance roadmap as part of enterprise agreements.

Need Help with AI Contract Negotiation?

Get matched with a specialist AI procurement firm that has experience with OpenAI, Microsoft Copilot, AWS Bedrock, and other enterprise AI platforms.