Google Cloud Negotiation — BigQuery

BigQuery Pricing Optimization: Slots vs On-Demand

Master the BigQuery pricing model, determine the break-even point for slots vs on-demand, apply 8 query cost reduction techniques, and negotiate BigQuery spend-based CUDs as part of your GCP enterprise agreement.

Editorial Note: BigQuery pricing reflects Google Cloud rates as of Q1 2026 in the US multi-region. Pricing varies by region and edition. This is a sub-page in our Google Cloud Contract Negotiation guide. See also our Cloud Cost Optimization guide and FinOps Enterprise guide for broader cloud cost management strategies.
  • $6.25: on-demand price per TB scanned (US multi-region)
  • $0.04: autoscale slot price per slot-hour (Standard Edition)
  • ~1,168 TB/mo: typical slot break-even (500 slots, Standard 1-yr commit)
  • 8 tactics: query cost reduction strategies

BigQuery Pricing Models: On-Demand, Autoscale, and Reservations

BigQuery's pricing architecture has evolved significantly with the 2023–2024 introduction of BigQuery Editions. Understanding the three available pricing models — and when each is appropriate — is the starting point for cost optimization.

On-Demand Pricing

The original and simplest BigQuery pricing model: you pay per byte scanned by each query. As of Q1 2026, on-demand pricing is $6.25 per TB in US regions (multi-region). The first 1 TB scanned per month is free. On-demand pricing has a key advantage: zero fixed costs. You pay nothing when no queries run. The disadvantage: unpredictable costs at scale, and no ceiling on a single runaway query that accidentally scans petabytes of data.
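
As a minimal sketch of this math (the helper name is hypothetical; the $6.25/TB rate and 1 TB free tier are the figures quoted above):

```python
# Hypothetical helper: monthly on-demand analysis cost, US multi-region.
ON_DEMAND_PER_TB = 6.25  # USD per TB scanned (Q1 2026, US multi-region)
FREE_TIER_TB = 1.0       # first 1 TB scanned per month is free

def monthly_on_demand_cost(tb_scanned: float) -> float:
    """Cost of a month's scans after subtracting the free tier."""
    billable_tb = max(0.0, tb_scanned - FREE_TIER_TB)
    return billable_tb * ON_DEMAND_PER_TB

print(monthly_on_demand_cost(0.8))    # 0.0: inside the free tier
print(monthly_on_demand_cost(101.0))  # 625.0: 100 billable TB
```

Note the lack of any ceiling: a single query that scans 100 TB bills $625 regardless of intent.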

On-demand is ideal for: intermittent workloads, exploration and development environments, ad-hoc analytics with low query volume, and organizations early in their BigQuery journey where usage patterns aren't yet established.

Autoscale with BigQuery Editions

The 2023 Editions model introduced autoscaling as the primary mechanism for managed-cost BigQuery. In autoscale mode, Google dynamically allocates slots as needed up to a configured maximum, and you pay only for the slots actually used. Pricing is per slot-hour, billed in per-second increments. This eliminates the "waste" of unused reserved capacity while providing a cost ceiling.

Autoscale is the recommended default for most enterprise workloads — it combines cost efficiency (pay for what you use) with capacity assurance (slots scale up for peak demand).

Slot Reservations (Commitments)

For predictable, high-volume workloads, purchasing slot reservations (1-year or 3-year commitments) provides the lowest per-slot cost. Purchased slots are always available — no autoscaling delay — and provide price certainty. The tradeoff: you pay for reserved capacity whether or not it's fully utilized. Slot reservations make economic sense when your sustained slot utilization exceeds 60–70% of reserved capacity.

Understanding BigQuery Slots

A BigQuery slot is a unit of computational capacity. Google's documentation describes slots as "a virtual CPU used by BigQuery to execute SQL queries." In practice, slots represent query processing parallelism — more slots means queries process more data simultaneously and complete faster. Slot consumption is a function of both query complexity and data volume.

Key slot mechanics to understand for cost optimization:

  • Slot sharing: Multiple concurrent queries share a pool of slots. If you have 1,000 slots and run 10 queries simultaneously, each query gets approximately 100 slots (subject to fair scheduling)
  • Slot autoscaling units: The minimum autoscale increment is 100 slots. You cannot purchase or autoscale in smaller increments
  • Slot idle time: Reserved slots that are not actively processing queries are "idle" but still billed. Autoscale mode eliminates idle slot cost
  • Baseline + autoscale: A common optimization pattern is purchasing a small baseline reservation (e.g., 200 slots) for steady-state workloads, with autoscale enabled for peak capacity — combining cost certainty on the baseline with flexible peak coverage

The 60% Rule

The break-even point for slot reservations vs pay-as-you-go pricing is approximately 60–70% slot utilization. If your average slot utilization across a month exceeds roughly 60% of a candidate reservation, reserved slots are cheaper; below that threshold, autoscale costs less. Monitor utilization using BigQuery's INFORMATION_SCHEMA.JOBS_TIMELINE view before committing to reservations.
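
The rule can be applied with a small utilization check (the helper and example inputs are hypothetical; total slot-milliseconds would come from summing `period_slot_ms` in INFORMATION_SCHEMA.JOBS_TIMELINE over a month):

```python
# 60% rule check: average utilization of a candidate reservation.
def avg_slot_utilization(total_slot_ms: float, reserved_slots: int,
                         hours_in_month: float = 730.0) -> float:
    """Fraction of the reservation's slot-hours actually consumed."""
    slot_hours_used = total_slot_ms / (1000 * 3600)        # ms -> slot-hours
    slot_hours_available = reserved_slots * hours_in_month
    return slot_hours_used / slot_hours_available

# Example: ~102,200 slot-hours of measured work against a 200-slot reservation
util = avg_slot_utilization(367_920_000_000, 200)
print(f"{util:.0%}")  # 70%: above the ~60% threshold, so reserving is justified
```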

On-Demand vs Slots: Break-Even Analysis

The break-even analysis between on-demand and slot reservations depends on your specific workload. The following model provides the framework; apply your actual byte-scan rate and query patterns.

Break-Even Formula
Monthly On-Demand Cost = (TB scanned/month) × $6.25
Monthly Slot Reservation Cost = (Slots purchased) × (Hours/month) × ($/slot-hour)

Standard Edition 1-year commitment: ~$0.02/slot-hour
Standard Edition autoscale (pay-as-you-go): $0.04/slot-hour

Example: 500 slots × 730 hours/month × $0.02/slot-hour = $7,300/month
On-demand break-even: $7,300 ÷ $6.25 per TB = 1,168 TB/month

This means: if your BigQuery workloads scan more than 1,168 TB/month consistently, 500 reserved Standard Edition slots at 1-year commitment rates are cheaper than on-demand pricing. Below 1,168 TB/month, autoscale on-demand is more economical.
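
The same math generalizes to any reservation size (hypothetical helper; rates as assumed above: $0.02/slot-hour Standard 1-yr commit, $6.25/TB on-demand):

```python
# Break-even calculator for the reservation-vs-on-demand comparison above.
def slot_breakeven_tb(slots: int, slot_hour_rate: float = 0.02,
                      hours_per_month: float = 730.0,
                      on_demand_per_tb: float = 6.25) -> float:
    """TB scanned per month above which the reservation is cheaper."""
    monthly_slot_cost = slots * hours_per_month * slot_hour_rate
    return monthly_slot_cost / on_demand_per_tb

for slots in (100, 500, 1000, 2000):
    print(slots, round(slot_breakeven_tb(slots)))  # 234, 1168, 2336, 4672 TB/mo
```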

| Reserved Slots (Standard, 1-yr) | Monthly Slot Cost | On-Demand Break-Even (TB/mo) | Best For |
|---|---|---|---|
| 100 slots | ~$1,460/mo | 234 TB/month | Small workloads with predictable usage |
| 500 slots | ~$7,300/mo | 1,168 TB/month | Mid-size analytics teams with regular workloads |
| 1,000 slots | ~$14,600/mo | 2,336 TB/month | Large-scale analytics, BI platforms, ML feature stores |
| 2,000 slots | ~$29,200/mo | 4,672 TB/month | Enterprise data platforms, real-time analytics |

Note: These calculations use Standard Edition at 1-year commitment rates (~$0.02/slot-hour). Enterprise and Enterprise Plus Editions have higher slot rates but include additional features. 3-year commitments provide ~25–30% additional discount on slot rates.

BigQuery Editions: Standard, Enterprise, Enterprise Plus

Google introduced the BigQuery Editions pricing model in 2023, creating three tiers that align compute pricing with feature requirements rather than a single flat compute rate. Understanding which edition your workloads actually need prevents over-purchasing premium capacity.

| Edition | Slot Price (Autoscale) | Slot Price (1-yr Commit) | Key Features Added | Best For |
|---|---|---|---|---|
| Standard | $0.04/slot-hr | ~$0.02/slot-hr | Base analytics; no CMEK, no BI Engine | Ad-hoc analytics, dev/test, non-sensitive data |
| Enterprise | $0.06/slot-hr | ~$0.04/slot-hr | CMEK, BI Engine reservation, materialized view auto-refresh, column-level security | Regulated industries, BI platforms, production analytics |
| Enterprise Plus | $0.10/slot-hr | ~$0.07/slot-hr | All Enterprise features plus continuous queries, cross-region replication, 99.99% SLA | Mission-critical analytics, real-time streaming, global data platforms |

The most common optimization opportunity: organizations using Enterprise Plus for all workloads when only a subset (real-time streaming, compliance-critical pipelines) genuinely require Enterprise Plus features. Segment workloads by edition requirement and create separate reservations by edition — mixing workloads across editions in a single reservation is not possible, but maintaining parallel reservations of different editions is standard practice.

Storage Pricing: Active, Long-Term, Physical

BigQuery storage costs are a frequently underestimated component of total BigQuery spend. For data-heavy organizations, storage can represent 20–40% of BigQuery costs.

BigQuery offers three storage models:

  • Active storage: $0.02/GB/month for data modified in the last 90 days. All tables default to active storage
  • Long-term storage: $0.01/GB/month for data not modified in 90+ days. Automatically applied — no manual action required
  • Physical storage billing: An opt-in model where you pay for compressed, physical storage rather than logical storage. For heavily compressed data, physical billing can reduce storage costs by 30–60%

Physical storage billing is one of the most underutilized BigQuery cost optimizations. Enterprise datasets with high compression ratios (JSON, logs, transactional data) regularly see 40–50% storage cost reductions when switching from logical to physical billing. The trade-off: physical billing exposes compression ratios rather than hiding them in logical pricing, and for some organizations that transparency is actually preferred for FinOps reporting purposes. Switch at the dataset level by setting the dataset's `storage_billing_model` option to `PHYSICAL` (for example, via `ALTER SCHEMA ... SET OPTIONS` or the GCP console); the setting applies to every table in the dataset.

8 Query Cost Reduction Techniques

Technique 01
Select Only Required Columns (Avoid SELECT *)
BigQuery charges for bytes scanned, and SELECT * scans every column in every queried table. In a 10-column table where you need 2 columns, SELECT * scans roughly 5× the bytes of SELECT col1, col2 (assuming columns of similar size). Enforce column selection discipline across your analytics team. For BI tools that generate SQL automatically, review their query generation settings; many tools allow column selection optimization that dramatically reduces BigQuery scan volume. A single policy change requiring explicit column selection across your analyst population can reduce on-demand query costs 30–60%.
Technique 02
Implement Partition Pruning on All Large Tables
BigQuery's partitioning feature creates sub-table segments by a time column (ingestion time, date, timestamp). When a query includes a WHERE clause filtering on the partition column, BigQuery only scans the relevant partitions rather than the entire table. For time-series data (logs, transactions, events), partitioning is the single most impactful cost optimization — reducing scan volume from 100% of the table to 1–5% for typical time-bounded queries. Always add PARTITION BY to new table creation for any table expected to grow beyond 1 GB.
Technique 03
Use Clustering to Reduce Scan Volume Beyond Partitioning
Clustering organizes table data by the values of specific columns (up to 4 cluster columns). When a query filters on clustered columns, BigQuery skips blocks that don't match the filter — further reducing bytes scanned beyond what partitioning provides. The combination of partitioning + clustering is the most cost-effective table structure for analytics workloads: partition on date, cluster on the most frequently filtered dimensions (user_id, region, product_category). Well-clustered tables typically reduce bytes scanned 20–40% compared to partitioned-only tables.
Technique 04
Implement Materialized Views for Frequently Run Aggregations
Materialized views pre-compute and store query results. When a subsequent query can be served from the materialized view, BigQuery reads the pre-computed result (significantly smaller than the source table) rather than recomputing from raw data. For aggregation queries that run repeatedly on large tables (daily revenue summaries, user cohort metrics, product performance dashboards), materialized views reduce scan volume by 80–95%. Enterprise Edition includes materialized view auto-refresh — a key reason to evaluate Enterprise over Standard for production analytics workloads.
Technique 05
Enable Query Result Caching
BigQuery automatically caches query results for 24 hours. Subsequent identical queries (same SQL, same table, table not modified) are served from cache at no charge. Dashboard queries that refresh hourly often re-query the same data; if the underlying table hasn't changed, cached results serve the refresh for free. Ensure your BI tool or query framework doesn't bypass cache by appending random parameters or timestamps to queries. Review BigQuery's job metadata (the `cache_hit` column in INFORMATION_SCHEMA.JOBS, or `statistics.query.cacheHit` in the Jobs API) to understand current cache hit rates across your workload.
Technique 06
Set Custom Quotas and Cost Controls by Team
BigQuery's custom quota feature allows you to set daily scan limits per user or per project. Without quotas, a single analyst with an unoptimized query on a 100 TB table can generate $625 in charges in minutes. Implement: (1) daily per-user scan quotas ($100–$500 depending on role), (2) project-level monthly spend alerts via GCP Budget alerts, (3) a query cost preview requirement in your BI platform (most modern BI tools show estimated query cost before execution). Quotas prevent accidents without blocking legitimate work — essential for organizations with non-technical SQL users. See our Cloud Cost Governance guide for broader quota policy frameworks.
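
The quota sizing arithmetic is simple enough to sanity-check (hypothetical helper; on-demand rate as quoted above):

```python
# Translating a daily spend budget into a per-user scan quota at $6.25/TB.
ON_DEMAND_PER_TB = 6.25

def daily_quota_tb(daily_budget_usd: float) -> float:
    """Max TB/day a user may scan under a given dollar budget."""
    return daily_budget_usd / ON_DEMAND_PER_TB

print(daily_quota_tb(500))     # 80.0 TB/day for a $500 budget
print(100 * ON_DEMAND_PER_TB)  # 625.0: one full scan of a 100 TB table
```
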
Technique 07
Switch Applicable Tables to Physical Storage Billing
As described in the storage pricing section, physical storage billing provides 30–60% storage cost reduction for tables with high compression ratios. Identify your largest tables (top 10 by logical storage size) and calculate the estimated physical size using BigQuery's storage metadata (the INFORMATION_SCHEMA.TABLE_STORAGE view reports both logical and physical bytes). Datasets with compression ratios above 2:1 are good candidates for physical billing migration. The switch is a dataset-level setting with no data movement and takes effect on the next billing cycle. For a 1 PB BigQuery dataset with 3:1 average compression, physical billing saves ~$7,000/month in storage costs alone.
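
The 1 PB figure checks out under assumed US active rates of $0.02/GB logical and $0.04/GB physical (physical billing pays the higher rate on the smaller compressed footprint):

```python
# Sanity check of the 1 PB / 3:1 compression example above.
PB_IN_GB = 1_048_576

logical_cost = PB_IN_GB * 0.02           # ~$20,972/mo on logical billing
physical_cost = (PB_IN_GB / 3) * 0.04    # ~$13,981/mo on physical billing
savings = logical_cost - physical_cost
print(round(savings))  # ~$6,991/mo, consistent with the ~$7,000 figure
```

Note that below a 2:1 compression ratio the doubled physical rate erases the benefit, which is why the technique targets highly compressible data.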
Technique 08
Negotiate BigQuery Spend-Based CUDs as Part of GCP Committed Spend
BigQuery is one of the services eligible for GCP spend-based CUDs. If BigQuery represents $500k+ of your annual GCP spend, negotiate a BigQuery-specific spend-based CUD as part of your GCP committed spend agreement. A 1-year BigQuery spend CUD typically provides 15–20% discount on BigQuery analysis charges (on-demand and autoscale slot charges). A 3-year CUD provides 25–30% discount. Bundle this negotiation with your broader GCP commit renewal — see our GCP CUD Negotiation guide for the complete spend-based CUD negotiation framework. For organizations where BigQuery is a significant cost driver, the CUD negotiation alone can return $100k–$500k annually.

Negotiating BigQuery Committed Spend CUDs

BigQuery's spend-based CUD is one of the most valuable but least-understood commitments available to GCP enterprise buyers. Unlike resource-based CUDs (which commit to specific vCPU or memory capacity), BigQuery spend-based CUDs commit to a dollar amount of BigQuery usage and receive a flat percentage discount on all BigQuery charges within that commitment.

BigQuery CUD Structure

You commit to a minimum monthly spend (e.g., $40,000/month = $480,000/year) and receive a discount percentage on all BigQuery charges that count toward your commitment. Both on-demand (per-TB) charges and autoscale slot charges are eligible. Storage charges are not included in BigQuery spend-based CUDs.

Published BigQuery CUD discounts: approximately 15% for 1-year, 25% for 3-year commitments. Negotiated enterprise rates for large commitments: 20–25% for 1-year, 30–35% for 3-year. The negotiation path: as part of your GCP committed spend agreement renewal, request that BigQuery spend CUDs be included at enterprise-negotiated rates rather than standard published rates. Google's deal desk has authority to improve CUD discount percentages when they're bundled with a larger committed spend agreement.
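
As a quick sketch of what those percentages are worth in dollars (hypothetical helper; the commit and discount values are illustrative):

```python
# Annual savings from a BigQuery spend-based CUD (illustrative math).
def annual_cud_savings(monthly_commit_usd: float, discount: float) -> float:
    """Dollars saved per year on committed BigQuery analysis spend."""
    return monthly_commit_usd * 12 * discount

# $40k/month commit ($480k/year) at a negotiated 20% 1-year rate
print(annual_cud_savings(40_000, 0.20))  # 96000.0
```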

Stacking CUDs with Slot Reservations

A commonly overlooked optimization: spend-based CUDs and slot reservations can be used simultaneously. Your slot reservation provides capacity assurance; the spend-based CUD provides a percentage discount on the slot charges. For example: you purchase 1,000 Enterprise Edition slots at 1-year reservation rates, then apply a BigQuery spend-based CUD that provides 20% discount on those reservation charges. The combined economics produce lower effective costs than either mechanism alone. Structure slot reservations first (to establish baseline cost), then layer CUD discounts on top. See our GCP CUD Negotiation guide for the full stacking framework.
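
The stacking arithmetic from the example, sketched (assumed rates: ~$0.04/slot-hour Enterprise 1-yr commit, 20% CUD discount; helper name is hypothetical):

```python
# Reservation cost with a spend-based CUD layered on top (illustrative).
def stacked_monthly_cost(slots: int, commit_rate: float,
                         cud_discount: float, hours: float = 730.0) -> float:
    """Monthly slot cost after applying the CUD percentage discount."""
    base = slots * hours * commit_rate   # reservation charges
    return base * (1 - cud_discount)     # CUD discount on those charges

print(round(1000 * 730 * 0.04))                       # 29200 undiscounted
print(round(stacked_monthly_cost(1000, 0.04, 0.20)))  # 23360 after the 20% CUD
```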

Frequently Asked Questions

How do I monitor BigQuery slot utilization to decide whether to purchase reservations?
Use the BigQuery INFORMATION_SCHEMA.JOBS_TIMELINE view to analyze historical slot utilization. Key query: SELECT TIMESTAMP_TRUNC(period_start, HOUR) AS hour, SUM(period_slot_ms) / (1000 * 3600) AS avg_slots FROM `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE GROUP BY 1 ORDER BY 1. Run this analysis over 60–90 days of production data to identify your p75 and p90 slot utilization. If p75 utilization exceeds 60% of a candidate reservation size, reservations are economically justified. The slot estimator in the GCP console also provides utilization analysis with reservation recommendations.
Can BigQuery on-demand and slot reservations run in the same project simultaneously?
Yes, and this is a common optimization pattern. You can configure a "baseline" slot reservation for high-priority, predictable workloads, while ad-hoc or development queries run on on-demand pricing. BigQuery's workload management capabilities (via reservations and assignments) let you route specific projects or jobs to specific reservation pools. This allows cost-optimized pricing for production workloads while maintaining on-demand flexibility for exploratory queries.
What's the difference between BigQuery Omni and standard BigQuery pricing?
BigQuery Omni extends BigQuery's analytics capabilities to data stored in AWS S3 or Azure Blob Storage — allowing SQL queries on non-GCP data without migrating it to GCP. BigQuery Omni pricing is higher than standard BigQuery pricing (approximately $8/TB on-demand for cross-cloud queries) and slot reservations for Omni are priced separately from GCP-based reservations. If you have significant data in AWS or Azure that you want to query without full migration, Omni enables this at a premium — factor the additional cost into cloud migration TCO analysis.
How does BigQuery pricing compare to AWS Redshift and Azure Synapse for enterprise workloads?
At scale, BigQuery Enterprise typically compares favorably to Redshift Serverless and Azure Synapse on a per-TB-scanned basis. Redshift requires cluster sizing (fixed provisioned cost) or Serverless RPU charges ($0.36/RPU-hour); Azure Synapse charges per DWU-hour or per TB scanned in serverless pools. For workloads with unpredictable scan volumes, BigQuery's autoscale model provides better cost efficiency than Redshift's fixed provisioned model. For ultra-high-volume, always-on analytics, Redshift provisioned with Reserved Instances can be 20–30% cheaper than BigQuery Enterprise reservations. The comparison is workload-dependent — request a formal workload cost model from your cloud negotiation advisor before making platform decisions based on pricing alone.

Cut Your BigQuery Bill by 20–40%

Our advisors model BigQuery slot economics, identify edition over-purchasing, and negotiate spend-based CUDs as part of GCP enterprise agreements. Most engagements pay for themselves in the first month.