There is a pattern in almost every conversation I have with a growing-company CEO or CTO right now. The question of AI readiness comes up, and the answer goes something like this: “We are cloud-native. Our data is in Snowflake or BigQuery. We have dashboards. We have a data team. We are AI-ready.” The mental model is that being cloud-native and data-mature equals AI-ready by default. That mental model is wrong, and the gap it hides is the most expensive misconception in the current cycle.
Cloud-native infrastructure is not data readiness. Data readiness is not AI readiness. They are three different things, with three different operating-model implications, and you can have one or two of them without having the third. Most growing companies have the first one, are partway to the second one, and have given the third almost no thought. This article is for the CEO or CTO who wants an honest framing of what AI readiness actually is, where their company probably stands, and what 90 days of focused work looks like at this stage.
Data readiness, defined briefly
Data readiness is a real and useful concept. A data-ready company has a clear inventory of its data, the data is reliable enough to make decisions on, lineage is traceable, access is governed, and freshness matches the use case. A growing SaaS that has invested in a modern data stack (a warehouse, dbt or similar transformation tooling, a BI layer, role-based access on the warehouse) is reasonably data-ready for analytics use cases.
Data readiness has a known set of failure modes: stale ETLs, undocumented joins, broken lineage when source schemas change, dashboards that drift from each other, role permissions that have not been reviewed in two years. These failure modes have known fixes. The discipline is mature, the tools are mature, and most data teams know what good looks like.
None of that translates automatically to AI readiness. Data readiness is a foundation on which AI work happens; it is not the AI work itself. The translation gap is what this article is about.
Why data readiness is not AI readiness
Three reasons the translation does not happen automatically.
First, data readiness is about data the company controls. AI readiness is significantly about models the company does not control. When your support team uses ChatGPT to summarise tickets, the data going in is your data, but the model processing it is OpenAI's. Your data team has no governance authority over that model. The model can change behaviour overnight (a new version, a new training run) without your team being notified. None of the data discipline you have applied to your warehouse applies to the model.
Second, AI readiness is about decisions, not data. A data team optimises for “the right number reaches the right dashboard.” An AI system optimises for “the right output reaches the right user, in a way they can act on or contest.” The shape of the governance is different. You do not lineage-trace a chatbot answer the same way you lineage-trace a revenue figure.
Third, the regulatory frame is different. GDPR governs personal data. The EU AI Act governs AI systems. There is overlap (especially around personal-data processing in AI systems), but there is also material additional surface: provider obligations, deployer obligations, transparency duties, AI literacy duties, and a separate enforcement mechanism. Companies that assume their GDPR programme covers their AI obligations are skipping work the AI Act explicitly requires.
These three gaps compound. A company can have excellent data discipline and still ship an AI feature that fails an enterprise customer's AI risk questionnaire, because the customer is asking about model ownership, decision traceability, and oversight, not about data quality.
Six dimensions of AI readiness
AI readiness is not a single capability. It is six dimensions, each of which has to be at least partially in place for the answer to a customer audit or a board question to land cleanly.
1. Model ownership
For every AI system in the company, somebody can answer: which model is this, who provides it, what version, where does it run, and what changes when the provider updates it. Model ownership covers staff use (the ChatGPT account the marketing team uses, the Copilot licence the engineering team uses) and product features (the OpenAI API call inside your customer-facing summary feature).
The artefact is an AI inventory. The bare minimum is a spreadsheet with one row per AI system: name, vendor, underlying model, business owner, purpose, data types processed, whether personal data is involved, date added, risk classification. Most growing companies do not have this. The ones that do almost always have it incomplete (the marketing team's tools are listed; the engineering team's VS Code Copilot, Cursor, and Claude Code subscriptions are not).
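If the inventory lives next to engineering anyway, it can just as well be a typed record in the repo rather than a spreadsheet. A minimal sketch, assuming a Python-based internal tooling stack; the field names mirror the bare-minimum columns above and none of them is prescribed by the AI Act:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskClass(str, Enum):
    """The four risk categories under the EU AI Act."""
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk (transparency obligations)"
    MINIMAL = "minimal"


@dataclass
class AISystemEntry:
    """One row of the AI inventory: the bare-minimum fields named above."""
    name: str                       # e.g. "Support ticket summariser"
    vendor: str                     # e.g. "OpenAI"
    underlying_model: str           # model and version, pinned where possible
    business_owner: str             # a named person, not a team
    purpose: str
    data_types: list[str] = field(default_factory=list)
    personal_data: bool = False
    date_added: date = field(default_factory=date.today)
    risk_class: RiskClass = RiskClass.MINIMAL
```

The value of the structure is not the tooling; it is that a schema makes the gaps visible. A row with no business owner is a finding, not a formatting choice.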
Why this dimension is foundational: every other AI governance question reduces back to it. You cannot do risk classification without an inventory. You cannot do vendor due diligence without knowing your vendors. You cannot answer a customer questionnaire about your AI use without knowing what your AI use is.
2. Decision traceability
For any AI system that meaningfully affects a customer or an employee, you can reconstruct, after the fact, what went into a particular output and what came out. Worth distinguishing this from explainability in the academic sense. Explainability asks why the model produced a specific token sequence at a mechanistic level, which foundation-model providers cannot meaningfully give you. Traceability is more pragmatic: the inputs, the system prompt, the configuration, the model and version, the time, the output, and any post-processing applied. Traceability is what the AI Act largely requires (specifically for high-risk systems under Article 12 logging obligations), what customer audits ask for, and what you can actually deliver. Explainability is research; traceability is engineering.
Even traceability is harder than it sounds. The default state of most AI integrations in growing companies is “we call the API, we use the response, we do not log either side.” That state fails the first serious customer question. “A user complained about the output your system gave them on Tuesday. Show us the input and what you sent to the model.” The honest answer of “we have no record” is a procurement-conversation-ender.
The fix is unglamorous: log the input, the prompt, the model and version, the output, and the user-facing surface. Retain for a defined period. Apply normal data-protection discipline (no PII in plain logs unless your DPA covers it). Most companies need a week of engineering effort to retrofit this on existing AI features. Add it to the architecture pre-flight for new ones. (Cloud cost note: the same logging is also useful for unit-economics analysis; see the companion piece on cloud cost drift for how this connects.)
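For illustration, a minimal sketch of what that retrofit can look like, assuming a Python service and a generic model-calling function; `log_ai_call` and the field names are my placeholders, not a standard:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")  # route to a log sink with defined retention


def log_ai_call(system_prompt: str, user_input: str, call_model) -> str:
    """Wrap a model call so both sides are recorded.

    `call_model` stands in for whatever function performs the actual API
    call in your stack; it is assumed to return (output_text, model_version).
    """
    trace_id = str(uuid.uuid4())
    output, model_version = call_model(system_prompt, user_input)
    logger.info(json.dumps({
        "trace_id": trace_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_version,              # provider model plus pinned version
        "system_prompt": system_prompt,      # or a hash, if the prompt is sensitive
        "input": user_input,                 # apply your PII policy before logging
        "output": output,
        "surface": "support-summary-widget", # which user-facing feature this served
    }))
    return output
```

Structured log lines like this are enough to answer the Tuesday question above, and they double as the raw material for the unit-economics analysis the cloud cost note refers to.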
3. Accountability when systems fail
Models fail. Sometimes silently (a quality drop), sometimes loudly (a hallucination that produces a wrong answer to a customer-facing query, a prompt injection that exfiltrates context, a latency spike that breaks a workflow). When that happens, the company has to know: who notices, who decides whether to disable the feature, who communicates with affected users, who decides whether to notify the regulator, who decides whether the feature comes back online and on what conditions.
This is incident response, applied to AI. It is not the same as your existing incident response process, because the failure modes are different. A model that is producing subtly wrong outputs may not trigger any of your existing alerts. A prompt injection attack does not look like a SQL injection attack. The runbook needs to be specific.
At a minimum, three things need to exist. A simple AI incident classification scheme (quality degradation, harmful output, prompt injection or jailbreak, vendor outage, data leakage). A response process that names the on-call decision-maker for each class. A retrospective process that updates the inventory and the oversight controls after each incident.
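The classification scheme does not need tooling; a version-controlled config that the runbook references is enough. A sketch, under the assumption that each class maps to one named decision-maker; the roles and default actions here are placeholders:

```python
# AI incident classes and the accountable decision-maker per class.
# Roles and default actions are placeholders; the point is that each
# class maps to one named person, not a committee.
AI_INCIDENT_CLASSES = {
    "quality_degradation": {"decides": "head_of_product",    "default_action": "monitor"},
    "harmful_output":      {"decides": "cto",                "default_action": "disable_feature"},
    "prompt_injection":    {"decides": "security_lead",      "default_action": "disable_feature"},
    "vendor_outage":       {"decides": "engineering_oncall", "default_action": "fallback"},
    "data_leakage":        {"decides": "cto",                "default_action": "disable_and_notify"},
}
```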
For high-risk systems under the EU AI Act, certain serious incidents must be reported to the competent authority within specific timelines (Article 73). Even for non-high-risk systems, customers are increasingly asking about your AI incident process in their security questionnaires. The cost of having a written process is low. The cost of not having one when asked is high.
4. Governance posture for staff use (deployer mode)
Under the EU AI Act, most growing SaaS companies hold two roles simultaneously. You are a deployer for the AI tools your staff use (ChatGPT, Copilot, coding assistants, and similar) and a provider for any AI feature you ship to customers under your own name. The same company, two different sets of obligations, running in parallel. This distinction matters because the compliance work differs by role: deployer obligations are lighter (acceptable-use, literacy, human oversight), while provider obligations attach to your product features and are heavier (transparency disclosures, technical documentation, and for high-risk systems, conformity assessment).
Note (April 2026): The European Commission's Digital Omnibus proposal, currently under trilogue negotiation, would defer some high-risk deadlines. Until formally adopted, the 2 August 2026 date remains in force. Plan for the original date.
Most growing companies are deployers under the EU AI Act for the AI tools their staff use internally. Deployer obligations are lighter than provider obligations, but they are not zero. They include: using the system per the instructions, maintaining human oversight where relevant, ensuring staff are sufficiently AI-literate (Article 4, in force since 2 February 2025), and for high-risk systems specifically, additional duties around input-data quality, monitoring, and notifying affected individuals.
The artefact set for deployer governance: an acceptable-use policy (one to three pages, role-differentiated, focuses on what data goes where), a literacy programme with attendance records, and a vendor-onboarding checklist for any new AI tool a team wants to bring in.
Most growing companies have a draft acceptable-use policy that someone wrote in a hurry six months ago. Few have a literacy programme. Almost none have a vendor-onboarding checklist that gates AI-tool procurement. The literacy gap is the one most often missed internally and the one most often surfaced in customer questionnaires (“how do you ensure your staff uses AI responsibly?”).
5. Governance posture for product features (provider mode)
If your product has an AI feature your customers use, you are a provider for that feature, even if the underlying model is OpenAI's or Anthropic's or Mistral's. Provider obligations are heavier than deployer obligations. For high-risk systems they include conformity assessment, technical documentation, post-market monitoring, and registration in the EU database. For limited-risk systems with transparency obligations (a customer-facing chatbot, a generate-with-AI button), they include disclosure that the user is interacting with an AI and machine-readable marking of AI-generated content where technically feasible.
The artefact set for provider governance: technical documentation per AI feature, a transparency-disclosure pattern baked into the UI, a post-deployment monitoring approach, and any conformity-assessment artefacts required for the risk class.
Most growing-company AI features are in the limited-risk category (transparency obligations only, applicable from 2 August 2026). The transparency disclosure is the work most likely to require product changes between now and that date. The disclosure cannot be buried in a terms page; it has to be visible at the point of interaction. Any company that has shipped a chatbot or a generate-with-AI button in the last year needs its UI updated before that date.
6. Vendor due diligence baseline
Every AI tool you buy is a supply-chain decision. The model provider is your supplier, your customer's supplier (transitively), and a regulated entity under the AI Act. Vendor due diligence on AI suppliers is not the same as vendor due diligence on a SaaS infrastructure provider. The questions are different.
A defensible AI vendor checklist covers: who provides the underlying model, what data the vendor trains on, the DPA terms (specifically how the vendor handles prompts, completions, and metadata; does it train on your inputs by default), zero-retention or data-isolation modes available, GPAI obligations the provider is meeting under the AI Act, model card and training-data summary availability, region of processing, and pass-through clauses you need to honour to your own customers.
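To keep the checklist operational rather than a document nobody opens, encode it where the vendor reviews are filed. A sketch, with the questions paraphrased from the list above; the `review_vendor` helper is illustrative:

```python
from datetime import date

# AI-specific vendor questions, paraphrased from the checklist above.
AI_VENDOR_CHECKLIST = [
    "Who provides the underlying model (the vendor may be a wrapper)?",
    "Does the vendor train on customer inputs by default?",
    "Do the DPA terms cover prompts, completions, and metadata?",
    "Is a zero-retention or data-isolation mode available?",
    "Which GPAI obligations is the model provider meeting under the AI Act?",
    "Are a model card and training-data summary available?",
    "In which region does processing happen?",
    "Which pass-through clauses must we honour to our own customers?",
]


def review_vendor(vendor: str, answers: dict[str, str]) -> dict:
    """Record a dated review; unanswered questions stay visible as gaps."""
    return {
        "vendor": vendor,
        "reviewed": date.today().isoformat(),
        "answers": {q: answers.get(q, "UNANSWERED") for q in AI_VENDOR_CHECKLIST},
    }
```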
Most growing companies have a vendor due diligence process that was designed for traditional SaaS and that misses the AI-specific questions entirely. Updating it is a one-week piece of work. Re-running the top three AI vendors against the new checklist is another week. After that, every new AI vendor onboards through the updated process and the cumulative review burden is small.
Speed as the real risk
At a Nordic CISO summit last week, one of the conversations that landed hardest was about the speed mismatch between AI-assisted development and the security and governance review process. Teams are accelerating from proof-of-concept to MVP in weeks rather than months because AI-assisted tooling makes it possible. Security teams continue to do manual architecture reviews on a calendar that has not changed. The gap is widening.
The instinct in growing companies is to fix this by hiring more security or governance people. That instinct is wrong, for two reasons.
First, the math does not work. If your engineering output is 5x faster, you do not need 5x more security reviewers. You need a review process that scales differently. Hiring is a linear answer to a structural problem.
Second, the speed gain comes from AI-assisted tooling. The fix is also AI-assisted tooling, on the review side. Pre-flight checklists that lint architecture diagrams against your patterns. Static analysis that flags AI-vendor calls without DPA coverage. Automated pull-request reviews that check for prompt-template patterns that leak data. The same automation discipline that scaled engineering scales review.
For a growing company, the realistic version of this is not a custom-built review platform. It is a checklist baked into the pull-request template, a CI step that flags new external API calls for human review, and a 30-minute weekly review cadence with the IT/security/AI lead. None of that is exotic. All of it is missing in most companies that ship AI features today.
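As a concrete example of that CI step, a minimal sketch that flags newly added external API hosts in a pull request for human review, assuming a Git-based workflow; the allowlist contents and diff base are placeholders:

```python
#!/usr/bin/env python3
"""CI gate: flag newly added external API hosts for human review.

Runs against the diff of a pull request; exits non-zero when an added
line references a host that is not on the reviewed allowlist.
"""
import re
import subprocess
import sys

REVIEWED_HOSTS = {"api.openai.com", "api.anthropic.com"}  # hosts that passed vendor DD


def added_lines(base: str = "origin/main") -> list[str]:
    """Return the lines added relative to the base branch."""
    diff = subprocess.run(
        ["git", "diff", base, "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [l[1:] for l in diff.splitlines()
            if l.startswith("+") and not l.startswith("+++")]


def main() -> int:
    host_pattern = re.compile(r"https?://([\w.-]+)")
    unreviewed = {
        m.group(1)
        for line in added_lines()
        for m in host_pattern.finditer(line)
        if m.group(1) not in REVIEWED_HOSTS
    }
    if unreviewed:
        print(f"New external hosts need review before merge: {sorted(unreviewed)}")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The non-zero exit fails the build; the human review then checks whether the new host has been through the vendor due diligence from dimension six.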
The risk if you do not close the gap: a customer audit catches an AI feature that shipped without the disclosure, the inventory entry, or the vendor DD, and the procurement conversation becomes unrecoverable. The cost of catching it before the audit is small. The cost of catching it during the audit is the deal.
The board knowledge gap
Two regulations changed the management-liability picture for IT and AI. NIS2 Article 20 places a personal accountability burden on management for cyber-risk-management practices. DORA, for in-scope financial entities, does the same for ICT risk. Both apply at the management body, not at the CISO. Personal liability is a real lever; insurance covers some of it but not all.
Two things have to be true for a board to discharge that liability well. The CISO has to be able to communicate the technology and risk picture in business language. The board has to be able to ask informed questions back. CISOs have made significant progress on the first half over the last five years. Boards have made limited progress on the second half.
The pattern I observe at Nordic boards is roughly this. The CISO presents a deck. The deck is calibrated to land in the room. The board asks one or two questions, usually about budget or the latest headline incident. The CISO answers. Everyone moves on. The questions a competent board should be asking (about residual risk, about specific control coverage, about whether the AI inventory is complete, about whether the company has a credible answer if a customer or regulator asks) do not get asked, because the language to ask them has not become standard.
For a growing company, the bar is not a board with cybersecurity PhDs on it. The bar is a board that can read a four-question dashboard quarterly and ask follow-up questions when a number moves the wrong way. The four questions are usually: what is our residual risk in the top three categories, what is the trend, what would be the cost of a serious incident in each, and what is the next investment that would change the picture.
AI adds a fifth question over the next 18 months: what is our exposure under the AI Act, where are we behind the August 2026 dates, and what would close the gap. A board asking that question quarterly is materially better protected than a board not asking it at all.
Where growing companies actually stand
An honest stocktake. The pattern across recent engagements with growing Nordic SaaS companies looks roughly like the table below. This is an observed pattern from my own work, not industry-wide data; your company will not match the median exactly, but the shape will probably be familiar.
| Dimension | Typical state | Honest grade |
|---|---|---|
| AI inventory | Partial. Marketing tools listed, engineering tools partly listed, customer-facing AI features sometimes listed. | C |
| Risk classification per system | Not done. Most companies have no written classification per AI system. | D |
| Decision traceability (logging) | Inconsistent. Some product features log; staff tools usually do not. | C- |
| Incident response for AI | Almost never written down separately from existing incident response. | D |
| Acceptable-use policy | Drafted, not enforced or refreshed. | C |
| AI literacy programme (Article 4) | Not started or planned for “next quarter”. | D |
| Vendor due diligence (AI-specific) | Generic vendor DD applied; AI-specific questions missing. | C- |
| Provider obligations on product features | Transparency disclosures missing or buried in terms page. | D+ |
| Customer-facing AI governance one-pager | Not produced. Sales engineers improvise. | D |
| Board-level AI risk reporting | Mentioned in the security update; not its own line item. | C- |
A median grade across these is somewhere around D+. The companies that are at C+ across the board are visibly different in customer audits. They answer faster, they answer with confidence, they get fewer follow-up questions, their procurement conversations close faster.
The interesting observation is that the work to move from D+ to C+ is not large. It is roughly 90 days, with a clear owner and a defined work plan. The reason most companies do not do it is not capacity. It is that nobody owns the operating-model side of AI, and committees do not produce decisions.
Ninety days to move the needle
A focused 90-day plan for a growing B2B SaaS, assuming 0.3-0.5 FTE of internal effort spread across legal, security, product, and one executive sponsor, plus light external support if you do not have the operating-model knowledge in-house yet.
Days 1-30
- Designate a single accountable owner for AI governance. Usually the CTO, head of security, or the IT/security/AI lead. Not the DPO by default; not the head of data by default.
- Build the AI inventory. Send a 10-question form to every department head. Accept that the first version will be incomplete; a 70% inventory is better than no inventory.
- Risk-classify each entry. Prohibited, high-risk, limited-risk with transparency obligations, or minimal. One-page rationale per entry, referencing the relevant article of the AI Act.
- Identify the top three AI vendors by spend or by customer-data exposure. Schedule the new vendor DD against them.
Days 31-60
- Stand up the AI literacy programme. Role-differentiated training (engineering, customer success, sales, leadership). One session per role, recorded, attendance logged.
- Draft the three policies that matter: acceptable use, vendor due diligence, AI incident response. Two to three pages each. Kill anything longer.
- Run the new vendor DD on the top three. Get model cards, training-data summaries, AI Act compliance statements, DPA addenda on file.
- Bake transparency disclosures into customer-facing AI features. Inline UI, not buried in terms pages. Update before August 2026, not after.
Days 61-90
- Build the customer-facing AI governance one-pager (a one-pager in name; three to five pages in practice, boring on purpose, sales-enabled). Sections cover the roughly 40 questions that recur across customer AI questionnaires: governance ownership, AI inventory and classification, vendor and model supply chain, data handling and retention in AI flows, human oversight, transparency disclosures, incident response, AI literacy programme, EU AI Act compliance posture. Hand it to sales engineering. Use it as the starter answer to incoming AI questionnaires.
- Write the AI incident response runbook. One page. Test it with a tabletop exercise (a model produces harmful output, a prompt injection, a vendor outage). Find the gaps.
- Establish the operating model for steady state: monthly inventory refresh, quarterly literacy refresh, annual policy review, the AI line item on the board pack.
- Brief the board. One slide. Where the company stands across the six dimensions, what changed in the last 90 days, what the next 90 days will deliver.
That is the work. It is not glamorous. It is not a transformation programme. It is twelve concrete actions, ordered, with named owners, on a 90-day clock. Companies that do it move from a D+ posture to a C+ posture, which is the difference between losing customer audits and winning them.
Common misconceptions
“We are AI-ready because we have a data team”
Already covered, but worth restating. Data readiness is foundational and necessary; it is not sufficient. The questions buyers and regulators ask about AI are not data questions, and the team that answers data questions is not the team that answers AI questions.
“The DPO will own AI governance”
The Data Protection Officer is a GDPR role. The skills are real but not the same skills AI governance needs (product knowledge, model literacy, risk classification judgement, incident-response framing for non-deterministic systems). Some DPOs grow into the AI lead role; most do not by default. Assigning AI governance to the DPO because both acronyms start with a D is one of the most common anti-patterns I see.
“ISO 27001 covers our AI obligations”
It does not. ISO 27001 is an information security management system. Some Annex A controls in the 2022 revision do touch AI (A.5.7 threat intelligence covers AI-specific threats like prompt injection; A.8.16 monitoring covers logging that AI systems also need), but the coverage is incidental, not designed for the AI Act surface. The aligned AI standard is ISO/IEC 42001, an AI management system standard designed to pair with 27001. You do not need to certify to 42001 today, and most growing Nordic companies will not yet, but it is the right management-system frame to start from.
“We will get to it after the next funding round”
The work compounds. The cost of catching up after 18 months of unacknowledged AI debt is significantly higher than the cost of staying current quarterly. The compounding is most visible in two places: the AI inventory (the longer you wait, the more shadow tools accumulate) and the vendor footprint (every quarter you delay, you sign more DPAs without the AI-specific clauses you will eventually need to renegotiate).
“We will buy a tool”
AI governance tools exist. Some are useful, none are sufficient. The same pattern that breaks Vanta and Drata implementations breaks AI governance tool implementations: the tool generates evidence, nobody acts on it, the dashboard becomes a screen saver. The operating model is what produces decisions. The tool produces dashboards. Buy the tool once the operating model is producing decisions, not before.
“Our AI features are not high-risk, so we have nothing to worry about”
High-risk is one of four risk categories under the AI Act. Most growing-company features are limited-risk with transparency obligations, which is its own set of duties (visible disclosure, machine-readable marking of AI-generated content where feasible). “Not high-risk” is not the same as “no obligations.”
Closing
AI readiness is not data readiness. It is a separate set of capabilities, organised around six dimensions: model ownership, decision traceability, accountability when systems fail, governance posture for staff use, governance posture for product features, and vendor due diligence. Most growing companies have one or two of the six in partial state. The rest are absent or assigned to the wrong owner.
The work to close the gap is bounded. Ninety days, with a named owner and a defined plan, gets a growing B2B SaaS to a posture it can defend in a customer audit and a board meeting. The reason this work does not happen by default is rarely capacity. It is that nobody owns the operating-model side, and the AI question keeps getting routed to a data team or a DPO whose toolkit was not designed for it.
The framing that has been most useful in conversations with Nordic CISOs and CIOs lately is this: most of the hard problems in this space are not technology problems. They are governance problems wearing a tech costume. The tech costume keeps the wrong owner on the work. Strip the costume off and the right owner becomes obvious.
If you want a second pair of eyes on where your company actually stands across the six dimensions, a scoping call is free. Thirty minutes, no deck, straight answers.
If you have a customer AI questionnaire in front of you right now and the deal is stalling on it, the more direct fit is the Customer AI Readiness service. The Standard variant (25-30K DKK fixed) covers general posture work; the Questionnaire-Unlock variant (45K DKK fixed, ten business days) covers one specific situation: a customer questionnaire is blocking a deal and you need governed answers, an inventory, and a defensible posture in days, not months. Scope and process are on the services page.
Related reading: EU AI Act readiness for growing companies covers the regulatory mechanics in depth (timelines, role classification, fines, GPAI pass-through). NIS2 readiness for Danish SaaS covers the cybersecurity governance baseline that most AI work also depends on. Cloud cost drift covers the same operating-model pattern applied to a different problem domain. DORA for Nordic fintech covers the financial-services regulatory layer that adds AI governance obligations for in-scope firms.