Cloud Cost Integration After Acquisition: A 90-Day Playbook

Two AWS accounts, two cloud bills, two stacks after an acquisition. Here's the 90-day sequence to consolidate without breaking anything or overpaying.

By Andrii Votiakov on 2026-04-14

Acquisitions are celebrated in announcements and then quietly painful in the months that follow. One of the first concrete problems the engineering team hits is: you now have two cloud setups, two billing accounts, two observability stacks, and two sets of commitments — some of them conflicting. Nobody planned for this. The acquiree kept building on their own cloud account right up until close. Now it's your problem.

I've worked through this scenario several times. The pattern is almost always the same: if you don't run a deliberate 90-day integration sequence, you end up paying for both setups indefinitely while the migration "gets planned."

Quick answer

90 days is enough to get the billing unified, the obvious duplicate spend eliminated, and a realistic migration plan in place. The sequence: Days 1-15, get visibility and park the accounts in your AWS Organisation. Days 16-45, eliminate zombie spend and renegotiate commitments. Days 46-75, migrate the portable workloads. Days 76-90, converge observability and tagging. Don't try to fully migrate everything — pick what's cheap to move, leave the rest.

Days 1-15: Visibility and account consolidation

Pull both accounts into AWS Organisations

If you're on AWS, the first technical step is inviting the acquired company's management account into your AWS Organisation. This is non-destructive: it doesn't change their resources, IAM, or network. What it does:

  • Gives you a consolidated billing view in your management (payer) account
  • Makes their Reserved Instances and Savings Plans visible alongside yours
  • Allows you to start applying SCPs (Service Control Policies) for guardrails
  • Enables you to share RI/SP discounts across accounts (more on this below)
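
If you'd rather script the invitation than click through the console, the Organizations API makes it a short call. A minimal sketch with boto3; the account ID is a placeholder, and the acquiree still has to accept the handshake from their side:

```python
# Sketch: invite the acquired company's management account into your
# AWS Organisation. Run from the acquiring org's management account.
# The account ID is a placeholder; the invitee must accept the handshake.
import boto3

org = boto3.client("organizations")

handshake = org.invite_account_to_organization(
    Target={"Id": "222222222222", "Type": "ACCOUNT"},  # acquiree's account ID
    Notes="Post-acquisition consolidation: joining the acquiring Organisation",
)
print("Handshake ID:", handshake["Handshake"]["Id"])
print("State:", handshake["Handshake"]["State"])  # REQUESTED until they accept
```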

If they're on a different cloud entirely (GCP or Azure), you still want to get read-only billing access as soon as possible. Export their billing data (Cloud Billing export to BigQuery on GCP, Cost Management exports on Azure) and load it alongside your own.

Map the two bills side by side

Group each bill by service, region, and team/environment. What you're looking for in week one:

  • Overlapping services (two separate Datadog accounts, two Snowflake contracts, two GitHub organisations)
  • Zombie spend (the acquired company almost certainly has dev/staging infra that nobody's touched since the deal was signed)
  • Commitment conflicts (they may have 1-year RIs that expire in 8 months; buying more on your side now would be premature)
  • Region mismatches (they're in us-west-2, you're in eu-west-1 — data gravity matters for migration sequencing)

Don't action anything yet. Just map. A single spreadsheet with both bills, top 20 services each, is the output of this phase.
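
Once both accounts sit under one payer, Cost Explorer can produce the raw material for that spreadsheet programmatically. A sketch that pulls one month of spend per service for each linked account; the account IDs and date range are placeholders:

```python
# Sketch: one month of spend, grouped by service, per linked account.
# Account IDs and the date range are placeholders.
import boto3

ce = boto3.client("ce")  # Cost Explorer

ACCOUNTS = {"acquirer": "111111111111", "acquiree": "222222222222"}

for label, account_id in ACCOUNTS.items():
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2026-03-01", "End": "2026-04-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        Filter={"Dimensions": {"Key": "LINKED_ACCOUNT", "Values": [account_id]}},
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    groups = resp["ResultsByTime"][0]["Groups"]
    # Top 20 services by spend, matching the week-one spreadsheet output
    top = sorted(
        groups,
        key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]),
        reverse=True,
    )[:20]
    print(f"--- {label} ---")
    for g in top:
        amount = float(g["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{g['Keys'][0]:<50} ${amount:,.2f}")
```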

Don't buy new commitments

This is important. Freeze new RI/SP/CUD purchases until you've right-sized both accounts post-consolidation. Buying 1-year compute commitments on infrastructure you'll be migrating in 45 days is a common mistake that locks in waste.
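
One way to make the freeze stick is an SCP that denies new commitment purchases across both accounts until right-sizing is done. A sketch, assuming an OU ID for wherever you've parked the accounts; the action list covers the common purchase APIs but isn't exhaustive:

```python
# Sketch: an SCP denying new RI/Savings Plan purchases, attached to the OU
# holding both accounts. The OU ID is a placeholder, and the action list
# covers common commitment-purchase APIs but isn't exhaustive.
import json
import boto3

org = boto3.client("organizations")

freeze_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "FreezeCommitmentPurchases",
            "Effect": "Deny",
            "Action": [
                "ec2:PurchaseReservedInstancesOffering",
                "savingsplans:CreateSavingsPlan",
                "rds:PurchaseReservedDBInstancesOffering",
                "elasticache:PurchaseReservedCacheNodesOffering",
                "redshift:PurchaseReservedNodeOffering",
            ],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="freeze-commitment-purchases",
    Description="No new RIs/SPs until post-acquisition right-sizing is done",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(freeze_policy),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # placeholder: OU containing both accounts
)
```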

Days 16-45: Kill duplicates and reclaim commitment value

Eliminate duplicate SaaS and tooling

This is often the fastest-payback work of the entire 90 days. After an acquisition you typically have:

  • Two separate Datadog accounts with 80% overlapping instrumentation
  • Two GitHub Enterprise contracts (or one Enterprise, one Team)
  • Two PagerDuty accounts
  • Two Slack workspaces (temporarily)
  • Sometimes two separate AWS Support contracts at Business or Enterprise tier

Each of these has a consolidation path. Datadog: merge organisations, drop the smaller account. GitHub: migrate repos to the acquiring org, cancel the acquiree's contract. AWS Support: one account gets Enterprise, the rest can drop to Business or Developer.

Datadog alone frequently saves $3,000-8,000/month post-consolidation for mid-size engineering teams. See /blog/datadog-cost-optimisation for the mechanics.

Transfer existing RIs or let them expire

Within an AWS Organisation, RI and Savings Plan discounts are shared across member accounts by default, so once the acquired account is under your payer, their commitments can already discount your usage. If the acquired company has 1-year RIs covering EC2 instance types you also use, check whether you benefit from their existing commitments without doubling up.

Specifically:

  • Standard RIs can be sold on the AWS Reserved Instance Marketplace (you won't get face value, but you'll recoup something)
  • Convertible RIs can be exchanged for different instance families within the same account, which is useful if their existing RIs are for instance types you don't run
  • Compute Savings Plans automatically apply across instance families — if they have unexpired CSPs, pulling their account into your Org may let those discount your workloads

Check the expiry dates carefully. An RI with 3 months left isn't worth migrating workloads to match. One with 18 months is worth planning around.
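
A quick inventory of their active RIs with time remaining is the input for this decision. A sketch run against the acquired account's credentials, listing active RIs sorted by expiry (region is a placeholder):

```python
# Sketch: list the acquired account's active RIs with time remaining,
# so you can decide what to plan around, sell, or let lapse.
# Assumes credentials for the acquired account; region is a placeholder.
from datetime import datetime, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # their main region

resp = ec2.describe_reserved_instances(
    Filters=[{"Name": "state", "Values": ["active"]}]
)
now = datetime.now(timezone.utc)

for ri in sorted(resp["ReservedInstances"], key=lambda r: r["End"]):
    months_left = (ri["End"] - now).days / 30
    print(
        f"{ri['InstanceCount']}x {ri['InstanceType']:<14} "
        f"{ri['OfferingClass']:<12} {months_left:4.1f} months left"
    )
```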

Renegotiate EDP if you're above $1M/month combined

An AWS Enterprise Discount Programme (EDP) is a multi-year spend commitment for a percentage discount. If your acquisition pushes combined AWS spend above $1M/month, you now have more leverage than you did pre-acquisition.

Reach out to your AWS account manager immediately after the acquisition closes. The negotiating window is typically 60-90 days post-close. Combined spend means a bigger commitment, which means a better discount tier — typically 10-18% off the rack rate across all services.

Don't leave this until month six. AWS renegotiates EDPs at renewal time; showing up mid-contract with "we just acquired a company, here's our new combined spend" is a legitimate opening.

Days 46-75: Migrate the portable workloads

Pick what's cheap to move first

Not everything is worth migrating. The calculus is:

  • Cost of running both: what does the duplicated infrastructure cost per month?
  • Cost of migrating: engineering time, risk, testing
  • Payback period: the one-off migration cost divided by the monthly duplicate spend it eliminates (a quick sketch follows below)

Stateless services (Lambda functions, container workloads, static frontends) migrate cheaply. Data-heavy services (databases with large datasets, data warehouses, ML training pipelines) migrate expensively and slowly.
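
The arithmetic is simple enough to sanity-check per service before anyone writes a migration ticket. A sketch with illustrative numbers; all figures are placeholders, not data from a real engagement:

```python
# Sketch: payback period per candidate migration. All figures below are
# illustrative placeholders.

def payback_months(duplicate_cost_per_month: float,
                   migration_cost_one_off: float) -> float:
    """Months until the one-off migration cost is repaid by the
    duplicate spend it eliminates."""
    return migration_cost_one_off / duplicate_cost_per_month

# Stateless API service: cheap to move, fast payback
print(payback_months(duplicate_cost_per_month=2600,
                     migration_cost_one_off=4000))   # ~1.5 months

# Large application database: expensive and slow to move
print(payback_months(duplicate_cost_per_month=1800,
                     migration_cost_one_off=45000))  # ~25 months: deprioritise
```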

A reasonable priority order:

  1. Stateless API services and background workers (days, not weeks)
  2. Caches and queues (Redis, SQS — typically straightforward)
  3. Object storage migrations (S3-to-S3 within the Org using S3 Batch Operations; a job sketch follows this list)
  4. Application databases (plan carefully, blue/green or CDC replication)
  5. Data warehouse (last, or never if it's genuinely workload-optimised for its current cloud)
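
For the S3 step, Batch Operations does the heavy lifting once you have a manifest of objects to copy. A minimal job sketch: all ARNs are placeholders, it assumes a CSV manifest already sits in S3 and an IAM role that Batch Operations can assume with read access to the source and write access to the destination, and the copy operation caps out at 5 GB per object (larger objects need another path):

```python
# Sketch: start an S3 Batch Operations copy job between Org accounts.
# Assumes a pre-built CSV manifest (bucket,key per line) in S3 and an
# IAM role Batch Operations can assume. All ARNs are placeholders.
# Note: S3PutObjectCopy can't copy objects larger than 5 GB.
import uuid
import boto3

s3control = boto3.client("s3control", region_name="us-west-2")

response = s3control.create_job(
    AccountId="111111111111",  # account that owns the job
    ConfirmationRequired=False,
    Priority=10,
    RoleArn="arn:aws:iam::111111111111:role/s3-batch-copy-role",
    ClientRequestToken=str(uuid.uuid4()),
    Operation={
        "S3PutObjectCopy": {
            # Destination bucket in the acquiring account
            "TargetResource": "arn:aws:s3:::acquirer-data-bucket",
            "MetadataDirective": "COPY",
        }
    },
    Manifest={
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::acquiree-manifests/copy-manifest.csv",
            "ETag": "manifest-etag-here",  # ETag of the manifest object
        },
    },
    Report={
        "Bucket": "arn:aws:s3:::acquirer-reports",
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "Prefix": "batch-copy",
        "ReportScope": "FailedTasksOnly",
    },
)
print("Started job:", response["JobId"])
```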

For the data warehouse specifically: if they're on BigQuery and you're an AWS shop, the cost to run BigQuery for analytics may be lower than migrating to Redshift or Athena. See /blog/bigquery-cost-optimisation and /blog/multi-cloud-cost-when-it-pays for the service-by-service analysis.

Account structure for the acquired company

Two options during migration:

  1. Keep their account as a member account in your Organisation. Clean OU structure, apply SCPs, let workloads migrate gradually. This is lower-risk and allows you to maintain their existing network topology.

  2. Migrate resources into your existing accounts for tighter integration from day one. Higher effort and more disruption, but cleaner long-term.

For most acquisitions under 50 engineers, option 1 for the first year is the right call. Full account consolidation is a nice-to-have, not a requirement, and forcing it on a tight timeline creates risk.

Days 76-90: Observability and tagging convergence

Unified tagging strategy

Cost attribution breaks if two engineering orgs use different tag keys. The acquired company probably uses environment: prod while you use env: production. These look like different tags in Cost Explorer.

Establish a canonical tag set and apply it across both accounts. At minimum:

  • team or squad
  • service
  • env (with agreed values: prod, staging, dev)
  • project or product

AWS Tag Editor and Resource Groups can help you retag at scale. Automate enforcement going forward with an AWS Config rule (the managed required-tags rule flags untagged resources) or an SCP that denies creation of resources missing the canonical keys.
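
The Resource Groups Tagging API can do the bulk retag once the canonical set is agreed. A sketch that renames the acquiree's environment: prod tags to the canonical env: production; the tag keys follow the example above, and pagination and error handling are kept minimal:

```python
# Sketch: bulk-retag resources carrying the acquiree's old tag key
# (environment: prod) with the canonical one (env: production).
# Tag keys/values follow the example above; error handling is minimal.
import boto3

tagging = boto3.client("resourcegroupstaggingapi", region_name="us-west-2")

pages = tagging.get_paginator("get_resources").paginate(
    TagFilters=[{"Key": "environment", "Values": ["prod"]}]
)

for page in pages:
    arns = [r["ResourceARN"] for r in page["ResourceTagMappingList"]]
    # tag_resources / untag_resources accept up to 20 ARNs per call
    for i in range(0, len(arns), 20):
        batch = arns[i:i + 20]
        tagging.tag_resources(ResourceARNList=batch,
                              Tags={"env": "production"})
        # Drop the old key so Cost Explorer stops splitting the data
        tagging.untag_resources(ResourceARNList=batch,
                                TagKeys=["environment"])
```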

Observability merge

Running two Datadog accounts, two Grafana instances, or two logging stacks is expensive and makes cross-system debugging harder. The merged end state should ideally be:

  • One Datadog organisation (or equivalent)
  • One centralised logging destination (CloudWatch, Loki, OpenSearch)
  • Shared dashboards and alert routing

This doesn't have to be done in 90 days — but the plan should be in place. Merging observability is often as much a cultural change (agreeing on naming conventions, alert ownership) as a technical one.

What goes wrong

The patterns I've seen derail post-acquisition cloud consolidation:

  • Trying to migrate everything before the engineers have context. The acquired team knows which parts of their infrastructure are fragile. Involve them in sequencing decisions.
  • Buying new commitments immediately to "lock in savings." Wait until the new combined spend floor is clear.
  • Leaving SaaS contracts on autopilot. Datadog, GitHub, PagerDuty: these renew automatically. Calendar-mark the renewal dates during days 1-15 and cancel duplicates before auto-renewal.
  • Treating the acquired account as untouchable. It isn't. You own it. Apply guardrails (budget alerts, SCPs on dangerous API calls) from day one; a budget-alert sketch follows this list.
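
Budget alerts on the acquired account take minutes to set up and catch surprises while the migration is in flight. A sketch using the Budgets API; the account IDs, limit, and email address are all placeholders:

```python
# Sketch: a monthly cost budget scoped to the acquired linked account,
# alerting at 80% of actual spend. Account IDs, the limit, and the
# email address are placeholders.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111111111111",  # payer account
    Budget={
        "BudgetName": "acquired-account-monthly",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "20000", "Unit": "USD"},
        # Scope the budget to the acquired linked account
        "CostFilters": {"LinkedAccount": ["222222222222"]},
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
)
```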

Realistic numbers

A recent engagement following an acquisition (~$38k/month combined pre-consolidation, target was $22k):

Action                                                      Saved/month
Duplicate Datadog account eliminated                             $4,200
Zombie staging environments shut down                            $3,100
EDP renegotiation (new combined tier)                            $5,800
SaaS duplicate contracts cancelled                               $2,400
RI transfers between accounts                                    $1,900
Stateless service migration (removed duplicate ALBs, NAT)        $2,600

Final: $18,000/month, a saving of $20,000/month (53%). The full workload migration wasn't complete at day 90 — but the billing was already well below target.


If you're working through a post-acquisition cloud integration and want a structured approach, book a call. This is a well-worn path and most of the sequencing decisions are predictable.