A 100-person Danish B2B SaaS hits a moment that most growing companies hit eventually. Eighteen months ago, the Azure bill was around 4,000 EUR a month. It is now closer to 9,000. Finance is asking why. The CTO does not have a confident answer. The platform team has theories. The fact that nobody can give an evidence-backed explanation is the actual problem. The bill is the symptom.

This article is for the CEO or CTO of a growing B2B SaaS who is somewhere on this curve. The aim is to be honest about what cloud cost drift actually is, why it is rarely what it looks like, and what minimum viable FinOps for a company at your stage looks like. Not a dedicated FinOps team. Not a five-figure tool. The smallest set of practices that arrests the drift, and the operating-model decisions that prevent it from coming back.

Cost drift is not slow growth

The first thing to get clear is the difference between cost growth and cost drift. They are not the same thing, and treating them as the same thing is how growing companies end up cutting muscle when they meant to cut fat.

Some cloud cost growth is justified. New product features ship. New regions are opened. Existing customers add more data. A doubling of the customer count should produce less than a doubling of the cloud bill, because the shared overhead you carry anyway gets amortised across more customers. That is growth, and the response to growth is to optimise the unit cost, not to flatten the absolute number.

Drift is what is left over once growth is accounted for. If your Azure bill grew 80% in 18 months and your customer count grew 30% with no significant new product surface, the gap is drift. Drift is the cumulative effect of small unjustified decisions: a forgotten environment, a premium SKU picked “just in case”, an egress pattern nobody modelled, an instance commitment that should have been made and was not.

Most growing companies cannot tell drift from growth because nobody is measuring per-customer or per-feature cloud unit economics. The bill is one number. The number is bigger. Finance asks why. That is the conversation. The fix starts with measuring well enough to separate the two, then closing the governance gaps that produce the drift.
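The separation is arithmetic once you decide on a model. A minimal sketch, with the caveat that it assumes spend scales roughly linearly with customer count, which is a simplification useful for a first estimate and nothing more:

```python
def expected_cost(baseline_cost: float, customer_growth: float) -> float:
    """Spend you would expect if the bill scaled linearly with customers."""
    return baseline_cost * (1 + customer_growth)


def drift(baseline_cost: float, current_cost: float, customer_growth: float) -> float:
    """Portion of the bill increase that customer growth does not explain."""
    return current_cost - expected_cost(baseline_cost, customer_growth)


# The worked engagement in this article: ~4,200 -> ~9,100 EUR/month
# while customers grew from 110 to 145 (about 32%).
unexplained = drift(4200, 9100, 35 / 110)
```

On those numbers the linear model puts the unexplained portion around 3,500 EUR a month, in the same range as what the five-day review described below attributed to drift.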

The pattern I see, again and again

Let me walk through a specific engagement, anonymised. The numbers are real, the structure is typical of the pattern I see in growing B2B SaaS companies that have not yet put senior IT discipline in place.

A 100-person SaaS company in Copenhagen, single Azure tenancy, two Azure regions (West Europe primary, North Europe secondary). Monthly Azure spend went from about 4,200 EUR in October 2024 to about 9,100 EUR by April 2026. Customer count grew from 110 to 145 in the same window. New product features: one significant addition (a usage-analytics module) plus two smaller ones. North Europe region added in mid-2025 for a customer-driven data residency requirement.

Where the 4,900 EUR per month of growth actually went, after a five-day Cloud Cost Review:

| Source | Approx. monthly EUR | Justified or drift? |
|---|---|---|
| Genuine product growth (new feature compute, North Europe baseline, customer-driven data volume) | 1,200 | Justified |
| Forgotten or oversized non-production environments (three engineer dev sandboxes left running for 11+ months, a staging environment running production-tier SKUs) | 1,500 | Drift |
| Premium-tier SKU defaults on three new services where standard tier would have met the requirement | 800 | Drift |
| Cross-region egress from a data-lake architecture that streamed the same dataset between West and North Europe four times a day | 600 | Drift (architectural) |
| Reserved-instance commitments that should have been made twelve months earlier, not made | 500 | Drift (procurement) |
| A feature-flag system spun up on dedicated VMs that could have run as a small Container App or against an existing service | 300 | Drift (architectural) |

The justified-growth line is 1,200 EUR. The drift lines together add up to 3,700 EUR. The bill grew by 4,900 EUR a month, and the unjustified portion is about 75% of that growth. Most of the conversations I have with CTOs at this stage assume the drift is some small percentage of a mostly-justified bill. The reality, repeatedly, is the inverse.

The single largest line is almost always non-production environments. Engineers are empowered to create them, often for good reasons (a feature spike, a customer-specific test scenario, a CI/CD experiment). The discipline that almost never exists is automatic teardown. Resources outlive their purpose by months because no policy says “this kind of thing shuts itself off after 14 days unless someone explicitly extends it.”

The second most common pattern is premium-tier SKU defaults. Azure offers Standard and Premium tiers on a lot of services. The cost difference is significant; the performance difference for a growing SaaS workload is usually invisible. Engineers pick Premium because the documentation says it is more reliable. It is, marginally, but in most cases the marginal reliability is not worth the cost premium.

Three governance gaps that produce drift

Cost drift is rarely caused by one decision. It is the cumulative effect of three governance gaps that compound over time. If you fix the gaps, drift stops. If you only fix the immediate cost line items without closing the gaps, the same drift returns within six to twelve months. I have walked back into companies a year after a cost-cutting exercise and watched the same pattern reassemble.

The three gaps are tagging, cadence, and pre-flight. None of them is exotic. All three are skipped, abbreviated, or assigned to people who cannot enforce them.

Gap 1: no tagging policy that people actually populate

A tagging policy that gets ignored is worse than no policy. It creates the illusion of governance. It also creates the dashboard pattern where 60-70% of resources have no owner tag, so the dashboard cannot answer the “who is this” question, so finance escalates, so the conversation runs in circles.

The most common failure mode I see is a policy with 15-25 required tags. The policy was written by someone who read a Microsoft tagging best-practices guide and treated it as a starting floor. The practical effect: engineers cannot keep up, ops gives up enforcing, the policy decays, and the dashboard tells you nothing.

The fix is the smallest set that gives you the answers you need. Four tags, no exceptions:

  1. environment with values prod / staging / dev / sandbox.
  2. owner-team with a finite list of teams (six to ten depending on org size).
  3. project or service name (free-text but governed).
  4. cost-center with a short list (three to four is reasonable for most growing companies).

Enforcement is the part most companies skip. The four tags are required at create time via an Azure Policy that denies non-compliant deploys. The platform team backfills existing untagged resources over an 8-week sweep, not all at once and not via a one-day mass migration. By the end of the sweep, untagged spend should be under 5%.

That is the entire policy. Nothing about regulatory category, classification level, data sensitivity, owner email, business unit code. Those things may matter elsewhere; they do not need to live in your tagging policy. Keep it small. People will populate four tags. They will not populate twenty.
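The backfill sweep needs a progress number, and the natural one is the share of spend sitting on non-compliant resources (the under-5% target above). A minimal sketch of that measurement, assuming you have exported per-resource cost and tags; the record shape here is illustrative, not an Azure API:

```python
REQUIRED_TAGS = {"environment", "owner-team", "project", "cost-center"}


def untagged_spend_share(resources) -> float:
    """Fraction of monthly spend on resources missing at least one of the
    four required tags. Target: under 0.05 by the end of the 8-week sweep.

    resources: list of {"monthly_cost": float, "tags": dict}  (illustrative shape)
    """
    total = sum(r["monthly_cost"] for r in resources)
    if total == 0:
        return 0.0
    untagged = sum(
        r["monthly_cost"]
        for r in resources
        if not REQUIRED_TAGS.issubset(r["tags"])
    )
    return untagged / total
```

Run it weekly during the sweep and put the trend line in front of the platform team; a number that is supposed to fall, and visibly does, keeps the backfill from stalling.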

Gap 2: no monthly cost cadence with names attached

A FinOps dashboard that nobody reviews monthly with named owners is a screensaver. I have seen 30K-EUR-a-year FinOps tools feeding dashboards that get visited twice a quarter and do not produce a single decision.

The fix is a 30-minute monthly cost meeting. Three required attendees: finance (CFO or controller), platform engineering (head of platform), and the IT/security/AI lead (the person who owns the operating model). Three optional attendees as needed: an engineering manager whose team owns the largest cost category, the CTO if a category-level reallocation is being proposed, and the CEO once or twice a year for the full picture.

The agenda is fixed:

  1. Five minutes: cost vs budget recap. Are we tracking, ahead, or behind on the absolute monthly number, and on the per-customer unit cost.
  2. Fifteen minutes: walk the top ten cost categories. Named owner per category. Two questions per category: is this growth justified, and what would it take to reduce it 20%.
  3. Five minutes: action items from last month. Status, blocker, owner.
  4. Five minutes: action items for next month. Each one has a named owner, a target reduction, and a target date.

The discipline that matters: someone leaves with written action items per category. Not aspirations, not “we should look into that”. Specific names, specific targets, specific dates. The action items get tracked between meetings in whatever your team uses (Linear, Jira, a Notion page, a shared sheet, the tool does not matter). The next meeting opens by closing the loop on each item.

Set up correctly, this meeting takes 30 minutes a month plus about 90 minutes of preparation by whoever runs it. Total senior-time cost: about two hours per month. The savings it produces in a year typically run 10-20% of the cloud bill on their own, before any architectural work.

Gap 3: no architectural pre-flight on provisioning

Engineers are empowered to provision. That is the right default for product velocity. The wrong default is provisioning with no governance over architectural choices that compound.

The fix is a 30-minute pre-flight architecture review before any change that adds more than X DKK per month of expected spend. For most growing SaaS, X = 3,000 DKK is a reasonable threshold. Below that, no review needed. Above that, a 30-minute review.

Two questions in the review:

  1. Is the SKU choice the smallest one that meets the requirement? Default to standard tier unless there is a documented reason for premium.
  2. Is there an existing service that could be reused? Adding a new dedicated tier-1 service when an existing one can absorb the load is the most expensive small decision a small company makes.

Two pieces of paper for the process. A one-page pre-flight form (five fields: change description, monthly cost estimate, SKU choice with rationale, alternative considered, owner). A standing decision log of completed pre-flights, kept somewhere everyone can find. Not a process. Just a record.

Pre-flight is not about slowing engineering down. It is about pausing for half an hour on the decisions that compound the most. A bad SKU choice on a service that runs for two years costs you 24 months of premium pricing. A 30-minute review costs you 30 minutes once.

Why a FinOps tool alone does not fix this

I see two patterns in growing companies that are concerned about cloud spend. Pattern A: they buy a tool (CloudHealth, Apptio Cloudability, Spot.io, Azure Advisor at scale) and assume the tool will produce the answer. Pattern B: they assign the platform team to “look into cost” without changing the operating model.

Both patterns produce dashboards. Neither produces decisions.

A FinOps tool tells you that your bill grew. It can show you the resources that grew the most. It cannot tell you which of those growths are justified and which are drift, because the tool does not know what your product does, who owns each resource, or what your customer growth rate is. That context is governance work, not tooling work. The tool helps once the governance is in place. It does not substitute for it.

Growing companies routinely pay 30-50K EUR a year for a FinOps tool while their cost drift is 200K EUR a year, with nobody acting on the dashboard. That is not the tool's fault. The tool did its job. The operating model around it did not exist.

The same pattern shows up with Vanta and Drata in the security domain. The tool generates evidence. Without leadership to act on it, the evidence sits in a dashboard. The lesson is general: tooling is a force multiplier on a working operating model, and a tax on a broken one.

Minimum viable FinOps for growing companies

For a company at this stage you do not need a FinOps team. You do not need a five-figure tool. You need the operating model that closes the three governance gaps. Concretely:

| Practice | Who owns | Setup time | Steady-state cost |
|---|---|---|---|
| Four-tag policy enforced via Azure Policy | Platform engineering | 1 week setup, 8 weeks backfill sweep | ~2 hours/month maintenance |
| 30-minute monthly cost meeting with named owners | IT/security/AI lead | 1 week to design agenda, scope categories | ~2 hours/month (incl. prep) |
| Architectural pre-flight on changes >3K DKK/month | Platform engineering + change owner | 2-3 days to draft the form and decision log | ~30 min per pre-flight, typically 2-4 per month |
| Threshold and anomaly alerts | Platform engineering | 1 day in Azure Cost Management | Negligible; routes alerts to the cost-meeting owner |

Total setup is 1-2 weeks of focused work plus an 8-week tagging backfill running in the background. Steady-state cost is about 4-6 hours of senior time per month plus sporadic pre-flight reviews. Tooling cost: zero, if you are willing to use Azure Cost Management. Optional layer: a third-party FinOps tool once the operating model is producing decisions and you want better cross-cloud visibility. Not before.

The savings the operating model produces typically run 20-30% in year one against the analysed scope. About half of that comes from closing forgotten environments and right-sizing SKUs (the dashboard work). About half comes from the cadence: people change behaviour when their resource decisions get reviewed monthly with their name attached.

Common mistakes I keep correcting

Over-engineered tagging policy

Already covered. The single most common failure mode. Cut the policy to four required tags, accept that a fifth and sixth might be useful for specific reporting and add them later if the data justifies them.

Reserved instance commitments without forecasting

Reserved instances and savings plans are real money. A 1-year RI in Azure gets you roughly 30-40% off list price; a 3-year RI 55-65%. But they are commitments. If you commit to 100 cores of compute for three years and your usage drops 20%, you are paying for 20% of unused capacity for the duration.

Growing companies will make 200K EUR three-year commitments based on a three-month usage trend. That is a finance decision masquerading as a technical one. My default rule for growing B2B SaaS: 1-year RIs on the bottom 70% of stable workloads, no 3-year RIs unless you have 18+ months of stable usage data and a credible forecast for the next 18 months.
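The break-even arithmetic is simpler than most teams assume: a commitment at discount d pays for itself as long as utilisation stays above 1 − d. A sketch with illustrative numbers, not a substitute for a real forecast:

```python
def ri_net_saving(list_cost: float, discount: float, utilisation: float) -> float:
    """Net saving (negative = loss) from a reserved-instance commitment.

    list_cost: pay-as-you-go price of the committed capacity.
    discount: RI discount off list, e.g. 0.35 for a typical 1-year RI.
    utilisation: fraction of the committed capacity actually used.
    """
    pay_as_you_go = list_cost * utilisation   # what on-demand would have cost
    committed = list_cost * (1 - discount)    # what you pay regardless of usage
    return pay_as_you_go - committed


def break_even_utilisation(discount: float) -> float:
    """Utilisation below which the commitment loses money."""
    return 1 - discount
```

At a 35% discount the break-even is 65% utilisation: commit to 100 cores and drift down to 60, and the RI is now costing you money. That is why the rule above asks for 18+ months of stable data before any 3-year commitment.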

Treating egress as a free service

Egress is the line that surprises growing SaaS companies the most. Cross-region egress (e.g. West Europe to North Europe) costs around 2 cents per GB. Internet egress costs 8-12 cents per GB depending on volume. A data-lake architecture that copies a 200GB dataset between regions four times a day is 800GB per day, about 16 EUR per day, around 5,800 EUR per year per copy job. Multiply by every job in the system and the line adds up fast.
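That arithmetic generalises into a one-line model worth attaching to any pre-flight estimate. A sketch using the same illustrative cross-region rate of 0.02 EUR/GB; substitute your provider's actual rates:

```python
def cross_region_egress_per_year(dataset_gb: float, copies_per_day: float,
                                 eur_per_gb: float = 0.02) -> float:
    """Annual cross-region egress cost of a recurring copy job.

    eur_per_gb defaults to an illustrative ~2 cents/GB cross-region rate;
    check your provider's current pricing.
    """
    return dataset_gb * copies_per_day * eur_per_gb * 365


# The data-lake example above: 200 GB copied four times a day
# comes out around 5,800 EUR per year for the one job.
```

The point is not precision; it is that a thirty-second calculation at design time replaces a surprise on the bill twelve months later.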

Fintech and other regulated cases compound this. A Nordic fintech with a Danish data-residency requirement and a Swedish customer base will end up replicating data between regions for compliance reasons. Cost the replication during the architecture decision; do not discover it on the bill twelve months later. The same pattern applies to healthcare SaaS with national data-residency rules. Regulatory egress is real, sometimes unavoidable, and easier to budget for than to surprise yourself with.

The fix is not to ban cross-region patterns. It is to model the egress cost during architecture review and ask whether the pattern can be avoided or right-sized (process locally, replicate state via change feed, batch nightly instead of streaming). Most architects do not check egress lines because the cost calculator does not surface them clearly. Add it explicitly to the pre-flight form.

Outsourcing FinOps thinking to the platform team

“The platform team will handle cost” is a real failure mode. Platform engineers are good at platform engineering. They are not finance people, and they do not own the budgets. Cost ownership has to sit with the team that uses the resources, with platform engineering supporting them with tools, alerts, and right-sizing recommendations.

The IT/security/AI lead (or whichever role is closest to that for you) is the right person to own the operating-model side: the cadence, the ownership map, the escalations, and the board-level cost reporting. Without that role, the platform team becomes the de-facto finance function, and they will neither enjoy it nor do it well.

Confusing autoscaling with cost discipline

Autoscaling is useful, but it is not a cost-discipline mechanism. It only helps if your usage is genuinely variable and your floors and ceilings are set sensibly. A pattern that catches teams: autoscaled services with the floor set at the historical peak, which means the service runs at peak capacity 24/7 because the floor never goes below that. That is autoscaling configured as a fixed-cost increase.

Audit your autoscale floors and ceilings during the cost meeting twice a year. Most floors are set higher than they need to be because nobody wanted to be paged for cold starts during launch. Once the service is mature, the floor can usually drop.
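The audit itself is mechanical once floor settings and historic peaks sit in one place. A sketch, with illustrative field names rather than any real autoscale API:

```python
def floors_set_at_peak(services, slack: float = 0.8):
    """Flag services whose autoscale floor sits at or above `slack` x historic
    peak: the floor never lets them scale down, so they run near peak 24/7.

    services: list of {"name": str, "floor": int, "peak_instances": int}
              (instance counts; illustrative shape)
    """
    return [
        s["name"]
        for s in services
        if s["peak_instances"] > 0 and s["floor"] >= slack * s["peak_instances"]
    ]
```

Anything this flags is a candidate for a lower floor, subject to the cold-start tolerance of the service in question.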

Reacting to one big bill, then forgetting

The one-time cost-cutting exercise without operating-model change is the most common pattern of all. CTO panics, platform team kills 30% of the bill in two weeks, finance is happy, the calendar moves on. Twelve months later the same drift has reappeared because none of the three gaps were closed. The savings were real but unsustained.

The unsexy truth is that the operating model produces durable savings. The one-time exercise produces a temporary win.

The 2x-savings-or-refund guarantee

I offer a Cloud Cost Review with a guarantee: if the prioritised report does not identify committed-savings opportunities of at least 2x my fee within 90 days of delivery, I refund the difference.

Why I am willing to put that in writing: the pattern is consistent. Cloud Cost Reviews on growing B2B SaaS companies have repeatedly identified 20-30% in committed-savings opportunities against the analysed scope. At my fee tiers (25K, 50K, or 100K DKK depending on cloud spend), 2x payback is comfortable for any company whose annual cloud bill is above the threshold for the matching tier.

What the guarantee covers: committed-savings opportunities identified in the prioritised report within 90 days, measured against the cloud spend of the analysed scope. What it does not cover: savings the client chose not to act on, or savings that depended on a structural change the client decided not to make. If you receive a savings list and decide not to implement half of it, the half you implemented is what counts toward the threshold.

It is not a marketing gimmick. It is a way of making a contract that aligns my outcome with yours. If the savings are not real, the fee is not real either.

What a Cloud Cost Review actually covers

A Cloud Cost Review is a 5-10 business day fixed-fee engagement, sized to your cloud spend.

| Tier | Annual cloud spend | Fee | Duration | Specific deliverables |
|---|---|---|---|---|
| Small | Up to ~200K DKK / year | 25,000 DKK fixed | 5 business days | Prioritised savings list (top 10), four-tag policy draft, monthly cost meeting agenda template, single-region architecture review, 90-day roadmap |
| Medium | ~200K to ~1M DKK / year | 50,000 DKK fixed | 7-8 business days | Prioritised savings list (top 15-20), tagging + cadence + pre-flight policy set, RI/savings-plan strategy with forecast, multi-region architecture review (egress modelling), 90-day roadmap, ownership map |
| Large | 1M+ DKK / year | 100,000 DKK fixed | 10 business days | Everything in Medium plus: per-team unit-economics breakdown, cross-cloud comparison if applicable, board-ready cost narrative, two implementation working sessions with platform engineering |

Common across all tiers: a written report you keep, no proprietary tool, no SaaS subscription you have to maintain, and the 2x-savings-or-refund guarantee on the prioritised savings list.

What it is not: a continuous FinOps service. If you want ongoing help, that lives under the Standard or Executive Retainer, not under a one-off Review.

When NOT to do this work

There are three situations where a Cloud Cost Review is not the right starting point.

First, you are mid-migration. Wait until the migration is stable. Drift mid-migration is usually misallocation, not waste, and the Review will give you noisy data that triggers the wrong actions.

Second, your annual cloud spend is under 60K DKK. The math does not work. You are better off applying the four-tag policy and the 30-minute monthly review yourself, then revisiting the question in 12 months once the bill is large enough that an external review pays for itself.

Third, you have an active major incident. Fix the incident first. A Cloud Cost Review while the team is firefighting is a distraction with low ROI.

If none of those apply, the review is one of the highest-ROI things a growing company can do in the first six months of taking IT operations seriously.

What to do this week, this quarter, this year

If you are reading this and the curve looks familiar, here is the order I would do it in.

This week

  1. Open Azure Cost Management. Look at the last 18 months. Calculate the cloud-cost growth rate. Calculate the customer-count growth rate. The gap is your drift.
  2. Pull the top 10 cost categories. For each, name the team that owns it. If you cannot name an owner for more than three of the ten, the cadence gap is your most expensive one.
  3. Check the percentage of resources missing the basic environment tag. If it is above 30%, the tagging gap is your second most expensive one.
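The three checks combine into a single pass you can run against whatever you pulled by hand. A sketch of the thresholds above; the inputs and names are illustrative:

```python
def week_one_findings(cost_growth: float, customer_growth: float,
                      categories_with_owner: int, top_categories: int,
                      untagged_env_share: float) -> dict:
    """Apply the three week-one checks.

    cost_growth / customer_growth: fractional growth over the window (e.g. 0.8).
    categories_with_owner: how many top cost categories have a nameable owner.
    untagged_env_share: fraction of resources missing the environment tag.
    """
    return {
        # gap between bill growth and customer growth: a rough drift estimate
        "drift_estimate": max(cost_growth - customer_growth, 0.0),
        # more than three of the top categories ownerless: cadence gap
        "cadence_gap": (top_categories - categories_with_owner) > 3,
        # over 30% of resources missing the environment tag: tagging gap
        "tagging_gap": untagged_env_share > 0.30,
    }
```

Whichever gap the pass flags first is where the quarter's work starts.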

This quarter

  1. Define the four-tag policy. Enforce at create time via Azure Policy. Start the 8-week backfill sweep.
  2. Stand up the 30-minute monthly cost meeting. The first two meetings will be messy. By the third, the discipline holds.
  3. Draft the pre-flight form and decision log. Apply the threshold to anything new starting now; do not retrofit existing services.
  4. Set the three threshold alerts in Azure Cost Management. Route to the cost-meeting owner.

This year

  1. Run a Cloud Cost Review at the 6-month mark, after the operating model has bedded in. The Review will catch the items the operating model alone does not surface (architectural changes, RI strategy, egress refactoring).
  2. Bring board-level visibility to cloud unit economics. One slide a quarter: cost per customer, cost per major feature, cost per region. The slide takes 15 minutes to produce once the tagging is in place.
  3. Revisit RI and savings-plan strategy in month 9, with 9 months of clean tagging data behind you. By then the forecast is credible.

Closing

Cloud cost drift is real. It is not slow growth, and it is not bad luck. It is the cumulative effect of three governance gaps: tagging, cadence, and pre-flight. Tools help once the gaps are closed. Tools do not close the gaps.

For a growing B2B SaaS, the operating model that arrests drift takes about a week and a half to set up and four to six hours of senior time per month to maintain. The savings from closing the gaps typically run 20-30% in year one, with most of that holding into year two. The largest single line is almost always forgotten or oversized non-production environments. The second is premium-tier SKU defaults. The third is egress patterns nobody modelled.

The cost number is the symptom. The governance gap is the cause. If you only fix the symptom, the drift returns within twelve months. If you close the cause, you stop having this conversation every six months.

If you want a second pair of eyes on whether your Azure or AWS or GCP bill is telling you a governance story, a scoping call is free. Thirty minutes, no deck, straight answers.

If you already know the bill is telling you that story and you want a fixed-fee diagnostic with a written savings roadmap, the Cloud Cost Review is the productised version of this work. Three tiers (25K / 50K / 100K DKK depending on annual cloud spend), 5-10 business days, a written report you keep, and the 2x-savings-or-refund guarantee on committed-savings opportunities identified within 90 days. Scope and tier-by-tier deliverables are on the services page.

Related reading: ISO 27001 for growing SaaS covers the same governance-vs-tooling pattern in the compliance domain. Fractional security leadership covers when a fractional engagement is the right operating-model fit. AI readiness is not data readiness applies the same operating-model lens to AI governance. NIS2 readiness for Danish SaaS covers how cloud governance intersects with the NIS2 security programme you may already be building.