BigQuery Cost Optimisation: Slots, Partitions, Queries

BigQuery is brilliant until your team learns to write SELECT *. Here's how to cut a runaway BigQuery bill by 60-90% without changing your data model.

By Andrii Votiakov on 2026-03-31

BigQuery is the line item that scales with how much your data team experiments. It rewards good schema design and punishes lazy queries. The good news: most of the optimisation is mechanical, and there are usually two or three queries doing 80% of the damage. If you also run Snowflake, the same principles apply — see Snowflake cost optimisation for the warehouse equivalent.

Quick answer

BigQuery on-demand pricing charges per byte scanned (~$6.25/TiB after the first 1 TiB free each month). The biggest savings come from partitioning and clustering the largest tables, switching to slot-based pricing (Editions) once you're scanning tens of TB a day, killing SELECT *, and caching repeated queries. Typical audits cut the BigQuery bill by 60-90%.

On-demand vs Editions: the threshold

  • On-demand (default): ~$6.25/TiB scanned. Good for low-volume usage and ad-hoc analysis.
  • Editions (slot-based, formerly Flat-Rate): you reserve compute capacity ($/slot-hour). Standard, Enterprise, Enterprise Plus tiers.

Rough rule for when to switch: if you're spending more than ~$8-10k/month on on-demand BigQuery, slot-based usually comes out cheaper, especially with auto-scaling and 1-year/3-year commitments.

Editions also buy you workload isolation: your dashboard queries don't get crushed by an ELT job. That's worth a lot operationally even at break-even cost.
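
You can sanity-check that threshold against your own workload before migrating. A rough sketch using INFORMATION_SCHEMA.JOBS, assuming ~$6.25/TiB on-demand and ~$0.04/slot-hour for Editions Standard (substitute your region's list prices; autoscaling also bills in minimum increments, so the real slot figure lands somewhat higher):

SELECT
  -- what last month's queries cost on-demand (assumed ~$6.25/TiB list price)
  SUM(total_bytes_billed) / POW(2, 40) * 6.25 AS on_demand_usd,
  -- what the same slot time would cost on Editions Standard (assumed ~$0.04/slot-hour)
  SUM(total_slot_ms) / 1000 / 3600 * 0.04 AS editions_standard_usd
FROM `region-eu`.INFORMATION_SCHEMA.JOBS
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY';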

The mechanical fixes

1. Partition every table over 1 GB

Partitioning by _PARTITIONTIME or a date column means a query filtering by date scans only the relevant partitions, not the whole table.

-- One-time rebuild as a day-partitioned, clustered copy (swap it in for the original afterwards)
CREATE TABLE events_partitioned
PARTITION BY DATE(event_timestamp)
CLUSTER BY user_id, event_type
AS SELECT * FROM events;

A single 2 TB events table queried daily for "last 7 days" goes from scanning 2 TB per query to ~28 GB. 70x cost reduction on that one query pattern.
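
Once the table is partitioned, force queries to take advantage of it. This guardrail is standard BigQuery DDL (events_partitioned is the table created above):

-- Reject any query on this table that doesn't filter on the partition column
ALTER TABLE events_partitioned
SET OPTIONS (require_partition_filter = TRUE);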

2. Cluster on common filter columns

Clustering co-locates rows with the same column values, letting BigQuery skip blocks. Typical winners: user_id, tenant_id, event_type, country_code.

Use INFORMATION_SCHEMA.JOBS to find the columns most often filtered:

SELECT
  -- crude heuristic: grab the first column after WHERE, case-insensitively
  REGEXP_EXTRACT(query, r"(?i)WHERE\s+(\w+)") AS filter_column,
  COUNT(*) AS query_count,
  SUM(total_bytes_billed) / POW(10, 12) AS tb_billed
FROM `region-eu`.INFORMATION_SCHEMA.JOBS
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND statement_type = 'SELECT'
GROUP BY filter_column
ORDER BY tb_billed DESC
LIMIT 20;

Cluster on the top three or four.

3. Kill SELECT *

SELECT * from a wide table scans every column. A 100-column table where you actually need 5 columns is paying 20x what it should.

Find offenders:

SELECT user_email, query, total_bytes_billed
FROM `region-eu`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
-- the regex also catches "select *" and odd whitespace, which LIKE '%SELECT *%' misses
WHERE REGEXP_CONTAINS(query, r'(?i)SELECT\s+\*')
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
ORDER BY total_bytes_billed DESC
LIMIT 50;

Send the worst offenders back to whoever wrote them.
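
Where someone genuinely needs "almost everything", SELECT * EXCEPT is a decent compromise: you still pay for every column you keep, but dropping a few wide ones (raw payloads, debug blobs) often cuts most of the bytes. Column names here are hypothetical:

-- Scans every column except the two wide ones, which is where most of the bytes live
SELECT * EXCEPT (raw_payload, debug_context)
FROM dataset.events
WHERE DATE(event_timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY);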

4. Materialised views and BI Engine

For dashboards hitting the same aggregations all day, materialised views pre-compute the result and refresh incrementally. Often cuts dashboard query cost by 95%.
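
A minimal sketch of the pattern, with assumed table and column names: a daily rollup that dashboards query instead of the raw table.

-- BigQuery refreshes this incrementally; point dashboards here, not at raw events
CREATE MATERIALIZED VIEW dataset.daily_event_counts AS
SELECT
  DATE(event_timestamp) AS event_date,
  event_type,
  COUNT(*) AS events
FROM dataset.events
GROUP BY event_date, event_type;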

BI Engine (in-memory cache) is even better for Looker/Looker Studio dashboards with predictable patterns: about $0.0416/GiB-hour for the reserved memory, and queries served from it incur no scan cost.

5. Approximate aggregates where they fit

APPROX_COUNT_DISTINCT() is dramatically cheaper than COUNT(DISTINCT ...) on big columns: the bytes scanned are the same, but the slot time and shuffle drop sharply, which is where the money goes under Editions (and where latency goes everywhere). Same for APPROX_QUANTILES() and APPROX_TOP_COUNT(). For most analytics use cases, ±1% accuracy is fine.
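
Side by side, assuming a user_id column (in practice you'd run only the approximate version):

SELECT
  APPROX_COUNT_DISTINCT(user_id) AS approx_users,  -- HyperLogLog++ sketch, tiny memory footprint
  COUNT(DISTINCT user_id) AS exact_users           -- exact, but shuffles every distinct value
FROM dataset.events;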

6. Cache queries

BigQuery caches query results for 24 hours by default, but only for identical, deterministic queries (anything calling CURRENT_TIMESTAMP() or RAND() never caches). Tools like dbt and Looker Studio sometimes append timestamps that bust the cache. Check your tools and disable cache-busting where it's accidental.
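
Before hunting for cache-busters, measure how often you actually hit the cache; cache_hit is a standard column in the JOBS view:

SELECT
  COUNTIF(cache_hit) / COUNT(*) AS cache_hit_rate
FROM `region-eu`.INFORMATION_SCHEMA.JOBS
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY';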

7. Cost controls

Set custom per-user and per-project quotas to prevent runaway queries.

bq query --maximum_bytes_billed=1000000000 \
  "SELECT * FROM dataset.events"
# Fails with an error (and bills nothing) if the query would scan > 1 GB

In the Cloud Console, set Editions reservation limits and project-level daily quotas.

What gets you the biggest one-time wins

When I audit a BigQuery account, the usual hit list:

  1. One unpartitioned monster table scanned daily — partition it, instant 50-90% cut on dependent queries.
  2. A dashboard refreshing every minute that no one looks at — drop refresh interval or kill the dashboard.
  3. A scheduled query running at 3 AM that's doing the wrong filter: WHERE created_at > '2020-01-01' on a 5 TB table when you only need yesterday (see the query after this list).
  4. Data engineers running SELECT * in BigQuery UI to "explore" 4 TB tables. Often a single afternoon costs hundreds.
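
For the scheduled-query offenders in item 3, rank them by scan volume. A sketch that assumes the data_source_id=scheduled_query label the scheduler normally attaches to its jobs:

SELECT
  query,
  SUM(total_bytes_billed) / POW(10, 12) AS tb_billed
FROM `region-eu`.INFORMATION_SCHEMA.JOBS
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  -- scheduled-query jobs carry this label (assumed; check your own job labels)
  AND EXISTS (
    SELECT 1 FROM UNNEST(labels)
    WHERE key = 'data_source_id' AND value = 'scheduled_query')
GROUP BY query
ORDER BY tb_billed DESC
LIMIT 20;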

Slots, scaling, and reservations

If you commit to Editions:

  • Auto-scaling: pay for what you use within configured min/max. Default for Standard tier.
  • Baseline + flex: reserve a baseline (cheaper per slot), allow burst above it.
  • Per-project assignment: route ELT jobs to one reservation, dashboards to another, so they don't fight (sketch below).
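
A sketch of that routing in reservation DDL. The names admin-project, elt, and analytics-prod are hypothetical, and the exact OPTIONS vary by edition, so check the current DDL reference:

-- Enterprise reservation: 100 baseline slots, bursting to 400 via autoscaling
CREATE RESERVATION `admin-project.region-eu.elt`
OPTIONS (
  edition = 'ENTERPRISE',
  slot_capacity = 100,
  autoscale_max_slots = 400);

-- Pin the ELT project's queries to that reservation
CREATE ASSIGNMENT `admin-project.region-eu.elt.elt_assignment`
OPTIONS (
  assignee = 'projects/analytics-prod',
  job_type = 'QUERY');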

1-year commitment buys you ~20% off slot pricing; 3-year ~40% off. Same caveats as other commitment discounts: don't 3-year-commit if your usage is volatile.

Realistic numbers

Recent data-heavy client (~$22k/month BigQuery on-demand). Monthly savings:

  • Partitioning + clustering on 4 monster tables: $8,400/month
  • Switching to Editions Standard with auto-scaling: $3,200/month net saving (after migration)
  • Dropped 11 unused dashboards/scheduled queries: $1,800/month
  • Materialised views for Looker workspace: $1,400/month
  • Cost guardrails (per-user max bytes billed): prevented future bills, hard to quantify

Final: $7,200/month, ~67% reduction. Plus much faster queries, thanks to clustering and BI Engine.


For a broader view of GCP cost levers beyond BigQuery, the GCP cost optimisation playbook covers Compute, GKE, and storage alongside analytics.

If your BigQuery bill has crept up and you'd like it diagnosed and fixed on a pay-for-savings basis, book a call.