By now your company uses AI in ways that were not on your radar 18 months ago. Support agents paste ticket threads into ChatGPT. Engineers commit code that Copilot helped write. Marketing has a tool that drafts sequences. Maybe your product itself has a new AI feature your sales team already pitches. None of this is wrong. It just means the set of systems you are responsible for has quietly grown.

The EU AI Act is now in partial force, with more obligations landing through 2026 and 2027. Most of what you have read about it is either too legal to act on or too alarmist to trust. This article is the practical middle. It is written for the CEO or CTO of a growing B2B SaaS company in Denmark or the Nordics who wants a clear picture of what applies, what does not, and what reasonable work looks like between now and August 2026.

What is already in force and what is coming

The AI Act is Regulation (EU) 2024/1689, published in the Official Journal of the European Union. It entered into force on 1 August 2024, but applies in phases. That phased approach is the single most useful thing to understand, because it explains why the obligations you face today are narrower than the ones you will face in August 2026, and why waiting until 2026 to start is a bad plan.

Here is the timeline in the form most operators find useful.

| Date | What applies | Why it matters to you |
| --- | --- | --- |
| 1 August 2024 | AI Act enters into force. | The clock starts. Nothing is directly enforceable yet, but this is the legal reference point. |
| 2 February 2025 | Prohibited AI practices are banned. AI literacy obligation applies to providers and deployers. | The literacy obligation is the one most companies miss. It applies to you today if your staff use AI. |
| 2 August 2025 | General Purpose AI (GPAI) model obligations apply. Governance structure and national competent authorities are in place. Penalties for prohibited practices become enforceable. | Your upstream model providers (OpenAI, Anthropic, Google, Mistral and others) are now regulated entities. Some of their obligations flow through to you. |
| 2 August 2026 | Most remaining provisions apply, including obligations for stand-alone high-risk AI systems and transparency obligations for certain systems (chatbots, AI-generated content, emotion recognition, biometric categorisation). | This is the real deadline for most SaaS companies. The transparency rules hit product features that are already live. |
| 2 August 2027 | Obligations apply for high-risk AI systems embedded in products already covered by existing EU product safety law (Annex I). | Relevant if your product is a regulated medical device, machinery, toy, or similar. For pure software SaaS it is usually not the binding deadline. |

Note (April 2026): The European Commission's Digital Omnibus proposal, currently under trilogue negotiation, would defer some high-risk deadlines. The European Parliament committees voted in favour of the proposal in March 2026, but trilogue talks stalled in late April 2026 and a third round is scheduled for 13 May 2026. Until the Omnibus is formally adopted, the 2 August 2026 date remains in force. Do not plan around the deferral; plan for the original date and treat any delay as a bonus.

Two things to take from the table. First, the February 2025 literacy obligation is the easiest to overlook and the most likely to surface in a customer audit or procurement questionnaire. Second, August 2026 is the date your board should be anchored on, not August 2027.

A third, less obvious point: the phased timeline is not a gentle on-ramp. It is a sequence of dates where different enforcement bodies start looking. Prohibited practices were enforceable from August 2025. GPAI obligations have been the focus of the European AI Office since the same date. From August 2026 onwards, national market surveillance authorities across all 27 Member States will have the mandate and, increasingly, the capacity to act on the rest. If you operate in multiple Nordic markets, assume your lowest-patience regulator sets the pace.

Where you likely sit: deployer, provider, or both

The AI Act assigns obligations by role. Two roles matter for almost every growing SaaS company: deployer and provider. A third role, importer or distributor, occasionally applies but is usually not the binding one for Nordic B2B SaaS.

Deployer

A deployer is anyone who uses an AI system under their own authority in the course of a professional activity. If your customer success team uses an AI tool to summarise support tickets, you are a deployer. If your recruiting team uses a tool that scores CVs, you are a deployer. If marketing uses an AI writing assistant, you are a deployer.

Deployers have lighter obligations than providers in most cases. They are expected to use systems according to instructions, maintain human oversight where relevant, log usage where the system requires it, and ensure their staff are sufficiently AI literate. For high-risk AI systems specifically, deployers also have duties around input data quality, monitoring, and notifying affected individuals in some cases.

Provider

A provider is an entity that develops an AI system or has one developed, and places it on the market or puts it into service under its own name or trademark. If your product has an AI-driven feature that your customers use, and you put that feature on the market, you are a provider for that feature.

Providers have the heavier set of obligations. For high-risk systems this includes conformity assessment, technical documentation, quality management systems, post-market monitoring, and registration in the EU database. For systems that are not high-risk but have transparency obligations (a customer-facing chatbot, for example), providers must ensure users know they are interacting with AI and must mark AI-generated content in machine-readable form where feasible.

Why most growing SaaS companies are both

I see this pattern constantly. A 120-person SaaS company deploys OpenAI and Anthropic APIs internally for support, engineering, and marketing productivity. The same company has shipped two or three AI-driven features in its product over the last year. That company is a deployer for its internal use and a provider for its product features. The two sets of obligations run in parallel. Your AI inventory needs to reflect both.

There is also a rule that matters when the line gets blurry. If you substantially modify a high-risk AI system, or put your own name or trademark on it, or use it for a purpose different from the one it was intended for, you can become the provider even if someone else built it. This is most relevant if you fine-tune or wrap a foundation model and sell access to customers as if it were your own capability.

I have seen a Danish SaaS company argue, for three months, over whether a feature that summarises call transcripts using an OpenAI model made them a provider. The honest answer was yes, for that feature. The product was sold to customers under the company's name. The fact that the underlying model was OpenAI's did not change the role question; it just added a supply-chain layer. Once the team stopped arguing and accepted the classification, the documentation work became tractable. The lesson: role classification is a short exercise if you do it honestly, and a long one if you are trying to find a way out of it.

Risk categories: which ones actually apply to growing SaaS

The Act sorts AI systems by risk. Not every category applies to you. A lot of founder anxiety comes from the assumption that the high-risk category is the default. It is not.

Prohibited practices

A small list of uses is banned outright: social scoring of natural persons, manipulative techniques that exploit vulnerabilities, untargeted scraping of facial images to build recognition databases, real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions), and a few others.

For B2B SaaS these almost never apply. The one edge case worth flagging is emotion inference in the workplace or in education. If your product infers employee emotions from voice, video, or text in a work context, that is a prohibited practice with narrow exceptions for medical or safety purposes. If you have a feature in that neighbourhood, get it reviewed before August 2026.

High-risk

This is the category most SaaS founders assume they are in. Most of the time, they are not. High-risk AI systems are defined either by being safety components of regulated products (Annex I, relevant to medical devices, machinery and similar), or by fitting one of the use cases listed in Annex III.

The Annex III categories that most often touch growing SaaS are employment and HR (CV screening, task allocation, performance evaluation, termination recommendations), access to essential services (credit scoring for consumers, eligibility for public assistance, some insurance pricing), and education (admissions decisions, exam evaluation). Law enforcement, migration, and administration of justice are on the list too but are rarely relevant to private SaaS.

Two things before you classify yourself as high-risk. One, read the exact category description carefully. An AI feature that helps a recruiter sort candidates by keyword match is a different beast from one that ranks candidates by predicted job performance. Two, there is an exemption worth knowing. If your system performs a narrow procedural task, improves the result of a previously completed human activity, detects patterns in decisions without replacing the human assessment, or performs preparatory work, it may be exempt from high-risk classification even if it sits within an Annex III category. One important carve-out: any system that performs profiling of natural persons remains high-risk regardless of these exemptions, which matters for HR-scoring, credit-scoring, and similar use cases. The exemption is a judgement call and needs documentation, but it rules out high-risk classification more often than founders expect.

Limited-risk, or transparency obligations

This is the category most SaaS companies underestimate. It applies to AI systems that interact with humans, generate synthetic content, or perform emotion recognition or biometric categorisation.

In plain terms: if your product has a chatbot, users must be told they are interacting with an AI, unless it is obvious from context. If your product generates text, image, audio or video, that content must be marked as AI-generated in a machine-readable format where technically feasible. If your product uses emotion recognition or biometric categorisation, affected people must be informed.

These obligations apply from August 2026 and cover a lot of product features that are already in the wild. If you have a customer-facing chatbot or a “generate with AI” button, this is the obligation that most directly shapes your 2026 product roadmap.

Watermarking and machine-readable marking of AI-generated content is an area where the standards are still settling. The text of the Act requires providers to ensure that outputs are marked in a machine-readable format and detectable as artificially generated or manipulated, as far as is technically feasible. What that looks like in practice for text, in particular, is evolving. C2PA-style content provenance is the most mature approach for images and video. For text, the current defensible position is clear UI-level disclosure plus whatever metadata your model provider exposes. Do not wait for a perfect standard; ship a reasonable disclosure now and iterate.
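To make that concrete, here is a minimal sketch of what "machine-readable plus visible disclosure" could look like at the API layer of a SaaS product, assuming you control the endpoint that returns generated text. The field names and the provenance shape are mine, not a standard defined by the Act or by any provider.

```typescript
// Minimal sketch of machine-readable marking for AI-generated text.
// Field names and structure are illustrative, not an official standard.

interface AiProvenance {
  aiGenerated: true;
  model: string;          // upstream model identifier you log internally
  generatedAt: string;    // ISO 8601 timestamp
  disclosure: string;     // human-readable disclosure shown at the point of interaction
}

interface MarkedContent {
  text: string;
  provenance: AiProvenance;
}

function markAsAiGenerated(text: string, model: string): MarkedContent {
  return {
    text,
    provenance: {
      aiGenerated: true,
      model,
      generatedAt: new Date().toISOString(),
      disclosure: "This content was generated with the help of AI.",
    },
  };
}

// Usage: return the provenance alongside the content, keep it in the API payload
// (machine-readable) and render the disclosure string in the UI (visible).
const reply = markAsAiGenerated("Suggested answer to the customer…", "vendor-model-x");
console.log(JSON.stringify(reply, null, 2));
```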

Minimal or no risk

Most internal productivity uses of AI (writing assistance, meeting summaries, code completion, generic search) fall here. There are no specific product obligations for these uses beyond the horizontal ones: AI literacy, basic governance, respecting whatever obligations your upstream provider passes down.

GPAI model obligations

General Purpose AI models are the large foundation models provided by the likes of OpenAI, Anthropic, Google, Meta and Mistral. The providers of those models carry obligations around technical documentation, training data summaries, copyright compliance, and in the case of GPAI models with systemic risk, additional evaluation and incident reporting duties.

These obligations largely apply to the model provider, not to you as a downstream user. However, some of them become your problem through contract and documentation. Your upstream is required to give you certain information about the model. You are required to pass some of that information on in your own documentation, especially if you are a provider of an AI system built on top of the foundation model.

The AI literacy obligation everyone misses

Since 2 February 2025, providers and deployers of AI systems have been required to take measures to ensure that their staff and other people operating or using AI systems on their behalf have a sufficient level of AI literacy. This is a short clause with wide reach. It applies to any company that uses AI, which at this point is effectively every company.

There is no official curriculum. “Sufficient” is left to you to define, taking into account the technical knowledge, experience, education and training of the people involved, and the context in which the AI systems are used. In practice, a defensible programme covers four things.

  1. Basic concepts: what an AI system is, what machine learning is, what generative AI is, what a large language model is, and what these systems can and cannot do.
  2. Risks and limitations: hallucination, bias, data leakage, prompt injection, over-reliance, and the difference between outputs that look confident and outputs that are correct.
  3. Your company's approved use cases and tools, and the ones that are not approved.
  4. The specific obligations under the AI Act and GDPR that apply to each person's role (a developer, a recruiter, a support agent and a salesperson do not need the same training).

Document the training programme. Keep attendance records. Refresh annually or when you make a material change to your AI tooling. This is the easiest obligation to meet and the easiest to be caught flat-footed on in a customer audit. A 50-person SaaS company can run the first version of this programme in one working week.

Fines and enforcement

The penalty ceilings in the AI Act are higher than those in the GDPR.

| Category | Upper limit |
| --- | --- |
| Prohibited AI practices | Up to €35 million or 7% of total worldwide annual turnover, whichever is higher |
| Most other obligations (high-risk, transparency, etc.) | Up to €15 million or 3% of total worldwide annual turnover, whichever is higher |
| Supplying incorrect, incomplete or misleading information to authorities | Up to €7.5 million or 1% of total worldwide annual turnover, whichever is higher |

For SMEs and start-ups, the Act instructs national authorities to apply the lower of the two figures rather than the higher. That is a real moderating factor, but it is also a relative one. The upper bounds exist to scale with worldwide turnover, and for mid-market Nordic companies the absolute exposure is still material.

Enforcement happens through national competent authorities in each Member State, with a European AI Office coordinating at EU level for GPAI matters. In Denmark, the competent authority framework was set up under the coordination of Digitaliseringsstyrelsen (the Agency for Digital Government), with sector regulators retaining jurisdiction in their own domains (Finanstilsynet for financial services, Lægemiddelstyrelsen and similar for health, Datatilsynet for anything that overlaps with GDPR). So a Danish fintech building AI credit scoring features answers to Finanstilsynet in the first instance. A Danish HR tech company answers to the general framework.

The practical point for a CEO is this: enforcement authorities at this stage are under-resourced and still building capacity. Early enforcement will focus on prohibited practices and on egregious non-compliance in high-risk categories. Your real exposure in 2026 is less likely to be a regulator fine and more likely to be a customer walking away because your answers to their AI questionnaire are poor.

There is also a reputational channel worth mentioning. Under GDPR we learned that the real cost of non-compliance often arrived through a single Berlingske or Politiken story, not through a DPA fine. The same dynamic will apply here. A Danish SaaS company that mishandles an AI feature in a way that harms a customer's end user will live with the headline long after any enforcement case is resolved. Governance is partly insurance against that outcome.

What real readiness looks like for a growing SaaS

Readiness is not a document. It is a set of artefacts and processes that together let you answer, within a day, any reasonable question a customer, regulator or auditor asks you about AI. Concretely, it is the following stack.

AI inventory

A simple register of every AI system used internally and every AI-featured component in your product. For each entry you want: name, vendor, underlying model, business owner, data types processed, whether personal data is involved, purpose, date added, and a risk classification. A spreadsheet is fine to start. Many companies eventually move this into a GRC tool, but the spreadsheet version is enough for the first 12 months.

The inventory is the foundation of everything else. Every other artefact refers back to it. Do this first.
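If you prefer to keep the inventory as structured data rather than a spreadsheet, a minimal entry might look like the sketch below. The field names and enum values are mine, mirroring the fields listed above; they are not prescribed by the Act, and the example values are invented. The same record can also carry the per-entry classification rationale described in the next section.

```typescript
// Minimal sketch of an AI inventory entry, mirroring the fields described above.
// Field names, enum values, and the example values are illustrative.

type RiskClass = "prohibited" | "high-risk" | "limited-risk" | "minimal";
type Role = "deployer" | "provider" | "both";

interface AiInventoryEntry {
  name: string;                     // e.g. "Support ticket summariser"
  vendor: string;                   // tool vendor
  underlyingModel: string;          // foundation model behind the tool, if known
  businessOwner: string;            // an accountable person, not a team name
  role: Role;                       // your role for this system
  dataTypes: string[];              // categories of data processed
  personalData: boolean;            // does it touch personal data (GDPR overlap)
  purpose: string;                  // what it is used for
  dateAdded: string;                // ISO 8601 date
  riskClass: RiskClass;             // classification under the Act
  classificationRationale: string;  // two lines referencing the relevant provision
}

const example: AiInventoryEntry = {
  name: "Support ticket summariser",
  vendor: "Internal (API integration)",
  underlyingModel: "(upstream LLM per vendor documentation)",
  businessOwner: "Head of Customer Success",
  role: "deployer",
  dataTypes: ["support tickets", "customer names"],
  personalData: true,
  purpose: "Summarise long ticket threads for agents",
  dateAdded: "2026-01-15",
  riskClass: "minimal",
  classificationRationale: "Internal productivity use; no Annex III category applies.",
};

console.log(example.name, "->", example.riskClass);
```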

Risk classification per entry

For each inventory entry, a one-page classification note: prohibited, high-risk, limited-risk with transparency obligations, or minimal. For each classification, a two-line rationale that references the relevant part of the Act. If you have applied the narrow-procedural-task style exemption, write down exactly why. You are writing for your future self in a customer audit.

AI acceptable use policy

One document, two or three pages. What tools are approved, what data can go into them, what data never can, who to ask when in doubt, and the consequences of policy breach. Do not pretend to be more sophisticated than you are; a short policy people actually read is worth ten times a long policy nobody has seen.

AI literacy training programme

Role-differentiated training. Attendance records. Annual refresh. Covered above.

Vendor due diligence

A standard set of questions you ask every AI vendor before procurement. Who provides the underlying model? What data do they train on? What is your DPA and how does it handle prompts and outputs? Do you offer zero-retention or data-isolation modes? How are you meeting your GPAI obligations under the AI Act? What model card and training data summary is available? Where is data processed?

Keep the answers on file. This is a sizeable chunk of what customers will ask you about in their questionnaires, and you cannot answer well if your own vendors have not answered well.

Human oversight documentation

For any AI system that meaningfully affects a customer or an employee, document the human oversight mechanism. Who reviews outputs? On what sample rate? What are the escalation paths? How do you detect and respond to degraded model behaviour? This is required for high-risk systems, and strongly advisable for the rest.
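One way to turn the sample rate and escalation paths into something auditable is a small review gate in the output pipeline. The sketch below shows one possible shape; the 5% sample rate, the confidence threshold, and the trigger rules are illustrative policy choices of mine, not requirements taken from the Act.

```typescript
// Sketch of a sampled human-review gate for AI outputs.
// Sample rate and trigger heuristics are illustrative policy choices.

interface AiOutput {
  id: string;
  text: string;
  confidence?: number;        // if your pipeline produces one
  affectsIndividual: boolean; // e.g. touches an employee or a customer's end user
}

interface ReviewDecision {
  outputId: string;
  needsHumanReview: boolean;
  reason: string;
}

const SAMPLE_RATE = 0.05; // review 5% of routine outputs at random

function routeForReview(output: AiOutput): ReviewDecision {
  if (output.affectsIndividual) {
    return { outputId: output.id, needsHumanReview: true, reason: "affects an individual" };
  }
  if (output.confidence !== undefined && output.confidence < 0.5) {
    return { outputId: output.id, needsHumanReview: true, reason: "low model confidence" };
  }
  if (Math.random() < SAMPLE_RATE) {
    return { outputId: output.id, needsHumanReview: true, reason: "random sample" };
  }
  return { outputId: output.id, needsHumanReview: false, reason: "auto-approved" };
}

console.log(routeForReview({ id: "out-1", text: "…", affectsIndividual: false, confidence: 0.9 }));
```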

Transparency disclosures

Where transparency obligations apply, bake the disclosure into the product. A chatbot should state it is AI. AI-generated images or text should carry a provenance marker where feasible. Do not bury these in a terms page. They are meant to be visible at the point of interaction.

Incident and malfunction reporting

An internal process to capture and triage AI incidents: a model that starts producing harmful output, a prompt injection attack, a data leakage event, a vendor outage that degrades your product. For high-risk systems, certain serious incidents must be reported to the competent authority within specific timelines. Even for non-high-risk systems, a simple log is useful and is likely to be asked about in customer questionnaires.
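A simple incident record is enough to start. The sketch below is one possible shape; the category list and severity scale are internal conventions I have made up for illustration, not terms defined by the Act, and the flag at the end only prompts a human check of the reporting duty rather than deciding it.

```typescript
// Sketch of an internal AI incident record. Categories and severity levels
// are internal conventions, not terms defined by the Act.

type IncidentCategory = "harmful-output" | "prompt-injection" | "data-leakage" | "vendor-outage" | "other";
type Severity = "low" | "medium" | "high" | "serious"; // "serious" triggers the regulatory-reporting check

interface AiIncident {
  id: string;
  detectedAt: string;           // ISO 8601 timestamp
  system: string;               // inventory entry this relates to
  category: IncidentCategory;
  severity: Severity;
  description: string;
  reportedToAuthority: boolean; // only relevant for high-risk systems with serious incidents
  resolvedAt?: string;
}

function mustCheckRegulatoryReporting(incident: AiIncident, isHighRisk: boolean): boolean {
  // For high-risk systems, certain serious incidents carry reporting duties;
  // this flag only prompts the human check, it does not decide the legal question.
  return isHighRisk && incident.severity === "serious";
}

const incident: AiIncident = {
  id: "inc-2026-003",
  detectedAt: new Date().toISOString(),
  system: "Support ticket summariser",
  category: "data-leakage",
  severity: "medium",
  description: "Customer name surfaced in a summary shown to the wrong account.",
  reportedToAuthority: false,
};

console.log("Escalate to regulatory check:", mustCheckRegulatoryReporting(incident, false));
```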

Customer-facing AI governance documentation

A short document your sales team can send to any prospect that asks about your AI practices. It describes your governance framework, your use of foundation models, your data handling, your oversight mechanisms, and your AI Act compliance posture. Three to five pages. Boring on purpose. Customers want boring here.

The overlap with customer AI questionnaires

Enterprise customers have started sending AI-specific security and governance questionnaires. Typical ones cover 60 to 100 questions across model selection, data handling, training data use, retention, oversight, bias testing, explainability, incident handling, AI Act compliance, and disclosure to end users.

Two observations from doing this work with Danish and Nordic SaaS companies. First, the questionnaires are getting more similar to each other. The same core 40 questions show up in most of them, lightly re-worded. Building a clean internal answer library for those 40 questions is now one of the highest-return things you can do in AI governance. Second, the commercial pressure and the regulatory obligation point at the same underlying work. The AI inventory, risk classification, vendor due diligence and oversight documentation you need for the AI Act are the same artefacts the questionnaire answers draw from. You do not have two programmes. You have one programme with two audiences.

Practically: when a questionnaire arrives and your sales engineer is overwhelmed, the cause is almost always that the underlying artefacts do not exist yet, not that the questionnaire is hard. Once the artefacts exist, answering a new questionnaire is a day of work, not a week.

A specific pattern I recommend: set a target response time for AI questionnaires (three business days is a reasonable bar). Measure yourself against it. If you are consistently missing the bar, that is the signal to invest in the answer library, not the signal to hire more sales engineers. The bottleneck is almost always content, not capacity.
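If you keep the answer library as data rather than a document, a versioned entry with a staleness check does most of the work. The sketch below assumes a 180-day review interval, which is my default rather than anything the Act requires; the question keys and example content are invented.

```typescript
// Sketch of a versioned questionnaire answer library with a simple staleness check.
// The review interval is a policy choice, not a regulatory requirement.

interface AnswerEntry {
  questionKey: string;   // e.g. "model-providers", "training-on-customer-data"
  answer: string;
  evidence: string[];    // pointers to inventory entries, policies, vendor docs
  version: number;
  lastReviewed: string;  // ISO 8601 date
}

const REVIEW_INTERVAL_DAYS = 180;

function isStale(entry: AnswerEntry, today: Date = new Date()): boolean {
  const ageDays = (today.getTime() - new Date(entry.lastReviewed).getTime()) / 86_400_000;
  return ageDays > REVIEW_INTERVAL_DAYS;
}

const library: AnswerEntry[] = [
  {
    questionKey: "training-on-customer-data",
    answer: "No customer inputs or outputs are used for model training; zero-retention mode is enabled.",
    evidence: ["vendor DPA with AI addendum", "AI inventory entry: support summariser"],
    version: 3,
    lastReviewed: "2025-11-01",
  },
];

const stale = library.filter((e) => isStale(e));
console.log(`${stale.length} answer(s) due for review`);
```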

GPAI supplier pass-through obligations

If you use OpenAI, Anthropic, Google, Meta, Mistral, or any other foundation model provider, some of the AI Act obligations flow down to you via contract and through the information the providers are required to share with you.

Specifically, GPAI model providers are required to publish a sufficiently detailed summary of the content used to train the model, maintain technical documentation including model cards, put in place a policy to comply with EU copyright law, and for models classified as having systemic risk, perform additional evaluation, red-teaming and incident reporting.

As a downstream user you want to do three things with this information.

  1. Collect it. Store the model card, the training data summary, and the provider's AI Act compliance statement for each model you use. Re-collect on material updates.
  2. Pass through what you must. If you yourself are a provider of an AI system (a product feature built on the foundation model), some of this information needs to be reflected in your own documentation to customers.
  3. Ask the right questions before you sign. If a provider cannot tell you which of their models are classified as GPAI or GPAI-with-systemic-risk, or cannot tell you where their training data summary lives, that is a signal about their compliance maturity.

In contract terms, your DPA should now sit alongside an AI-specific addendum, or include AI-specific clauses. Zero-retention mode for your prompt data, no training on your inputs or outputs without opt-in, region of processing, and documented GPAI obligations pass-through are the four clauses I check first.

One subtlety worth flagging: not every AI tool you use runs on a GPAI model. Some vendors offer narrower, task-specific models that are not classified as GPAI. The Act still applies to your use of those tools, but the supply-chain pass-through question is different. Ask each vendor explicitly which model they are providing and how it is classified under the Act. If the answer is vague, that is your finding.
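A per-model supplier record ties the collection, pass-through, and contract questions above together. The sketch below is one possible shape, assuming you track this per model rather than per vendor; the field names and the gap check are mine, and the classification values simply mirror the Act's distinction between GPAI and GPAI with systemic risk as stated by the vendor.

```typescript
// Sketch of a per-model supplier record for GPAI pass-through artifacts and
// the contract clauses discussed above. Field names are illustrative.

type ModelClassification = "gpai" | "gpai-systemic-risk" | "task-specific" | "unknown";

interface SupplierModelRecord {
  vendor: string;
  model: string;
  classification: ModelClassification;    // as stated by the vendor
  modelCardOnFile: boolean;
  trainingDataSummaryOnFile: boolean;
  aiActComplianceStatementOnFile: boolean;
  zeroRetentionAgreed: boolean;
  noTrainingOnCustomerData: boolean;
  processingRegion: string;
  lastCollected: string;                   // ISO 8601 date of last artifact refresh
}

function gaps(record: SupplierModelRecord): string[] {
  const missing: string[] = [];
  if (record.classification === "unknown") missing.push("vendor has not stated model classification");
  if (!record.modelCardOnFile) missing.push("model card missing");
  if (!record.trainingDataSummaryOnFile) missing.push("training data summary missing");
  if (!record.aiActComplianceStatementOnFile) missing.push("AI Act compliance statement missing");
  if (!record.zeroRetentionAgreed) missing.push("zero-retention clause not agreed");
  return missing;
}

const record: SupplierModelRecord = {
  vendor: "(foundation model provider)",
  model: "(model name per contract)",
  classification: "gpai",
  modelCardOnFile: true,
  trainingDataSummaryOnFile: false,
  aiActComplianceStatementOnFile: true,
  zeroRetentionAgreed: true,
  noTrainingOnCustomerData: true,
  processingRegion: "EU",
  lastCollected: "2026-02-01",
};

console.log(gaps(record));
```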

Common traps and misconceptions

The patterns below come up repeatedly in growing Nordic SaaS companies. Most are cheap to avoid once named.

Thinking the AI Act is “for tech giants”

The headline obligations around high-risk AI systems do target large providers. The AI literacy obligation, the transparency obligations, and the deployer duties apply to everyone who uses AI in a professional context. Size does not exempt you from those. It does, however, scale the expected effort. A 60-person company does not need the governance of a 6,000-person bank.

Classifying customer-facing chatbots as minimal risk

A chatbot that talks to your customers is almost certainly subject to transparency obligations, even if it is not high-risk. Users must know they are interacting with an AI. If you generate support responses with AI and present them as human-written, you are on the wrong side of the rule from August 2026.

Treating the DPO as the AI lead

Your Data Protection Officer is a GDPR role. The AI Act overlaps with GDPR but is not the same. The skills required for AI governance are different: product knowledge, model literacy, risk classification judgement, and a feel for how ML systems break. Handing AI governance to the DPO by default, simply because both look like compliance jobs, is one of the common anti-patterns I see. Some DPOs are the right person for the AI role as well; most are not by default.

Writing an AI policy before you know what you have

Companies often write an acceptable use policy first and figure out what they actually use second. The order should be reversed. Build the inventory, classify the risks, then write the policy. A policy grounded in what you actually do is shorter, sharper, and more defensible than a generic template.

Over-engineering governance

I have seen 80-person companies draft 15 AI policies covering every conceivable scenario. They do not need 15 policies. They need two or three (acceptable use, vendor due diligence, incident handling) and a live inventory. Policies you do not enforce are worse than no policies, because they create written evidence of the gap between what you say and what you do.

Assuming ISO 27001 or SOC 2 covers AI obligations

They do not. ISO 27001 is an information security management system. SOC 2 is an attestation of controls against the Trust Services Criteria. Both touch AI tangentially, but neither maps to AI Act obligations. The aligned standard is ISO/IEC 42001, an AI management system standard. You do not have to certify to 42001, and most mid-market Nordic SaaS companies will not yet. But if you want a management-system frame for your AI governance work, 42001 is the one that matches.

Panicking about high-risk classification

Before you conclude that your product is high-risk, read Annex III carefully and check the narrow-procedural-task style exemption. A surprising number of features that look high-risk on first glance turn out to sit in limited-risk territory with transparency obligations only. That is a different, cheaper compliance posture. Do not volunteer yourself into the harder category.

Costs and timelines

Rough, realistic ballparks for a growing Danish SaaS company. These assume you are starting near zero and want to reach a posture you can defend to a regulator or a demanding customer.

| Path | Effort | External spend | What you get |
| --- | --- | --- | --- |
| 90-day readiness sprint with advisory support | 0.3 to 0.5 FTE internal, spread across legal, security, product, and one executive sponsor | 60 to 100K DKK advisory | Inventory, risk classification, policies, literacy programme, vendor DD framework, customer-facing governance doc, answers to the standard 40 questionnaire questions |
| Ongoing steady state after the sprint | 0.1 to 0.2 FTE internal | Low, occasional advisory | Inventory kept current, annual literacy refresh, new vendor reviews, questionnaire responses, incident handling |
| Large consultancy alternative | Similar internal effort; different external pattern | 500K to 1.5M DKK | More deliverables, more slideware, usually more generic and less well integrated with your actual product and team |

The large consultancy option is not without merit, but for a growing SaaS company it typically over-produces documentation relative to what the regulator or a customer will actually check. The cheaper path works if your internal people are reasonably senior and willing to own the work, and if the external support is shaped to your business rather than to a template deck.

What to do this week, this quarter, this year

Twelve concrete actions, in the order I would do them.

This week

  1. Assign a single accountable owner for AI governance. Usually a CTO, Head of Security, or Head of Product. Not the DPO by default.
  2. Start the AI inventory. Open a spreadsheet. Send a 10-question form to department heads asking what AI tools their teams use and for what purpose. Accept that the first version will be incomplete.
  3. Read the top of Annex III of the AI Act with your product lead. Ten minutes of reading will tell you more than ten hours of second-hand summaries.

This quarter

  1. Finish the inventory. Classify each entry. Document the classification rationale.
  2. Stand up an AI literacy programme. Role-differentiated, one session per role, recorded, attendance logged.
  3. Draft two or three short policies: acceptable use, vendor due diligence, AI incident handling. Kill any draft that runs over three pages.
  4. Review your top three AI vendors. Get model cards, training data summaries, AI Act compliance statements, and data processing clauses on file.
  5. Build your customer-facing AI governance one-pager. Circulate internally for sales enablement.

This year

  1. Bake transparency disclosures into any customer-facing AI feature that needs them. Do this before August 2026, not after.
  2. Build your questionnaire answer library. Cover the standard 40 questions that appear in most customer AI assessments. Keep it versioned.
  3. Run a tabletop exercise for one AI incident (a model producing harmful output, a prompt injection, a vendor breach). Find the gaps. Fix them.
  4. If any of your features sit in Annex III territory without a clear exemption, commission an independent review before the August 2026 applicability date. Do not self-certify high-risk status in an ambiguous case.

Closing

The AI Act is large, phased, and in places genuinely complex. For a growing Nordic SaaS company the practical reality is smaller than the headlines suggest. A focused 90 days gets you to a defensible posture. A light ongoing effort keeps you there. The commercial pressure from enterprise customers is pushing on the same door as the regulatory obligation, which means the work pays off twice.

The companies that quietly do the inventory, the classification, the literacy programme, and the vendor work during 2026 will not have a story to tell. Their customer questionnaires will go out the door in a day. The ones that wait until the questionnaires start blocking deals will pay a premium in time and money to catch up, and will probably do a worse job while they are at it.

If you want a second pair of eyes on where your company actually stands, or help shaping the 90-day plan and the answer library before the next big customer assessment lands, a scoping call is free. Thirty minutes, no deck, straight answers.

Related reading

AI readiness is not data readiness covers the operating-model side of AI governance in depth: the six dimensions (model ownership, decision traceability, accountability, deployer posture, provider posture, vendor due diligence), where growing companies actually stand, and what 90 days of focused work delivers.

Cloud cost drift: how a doubled Azure bill tells a governance story covers the AI and cloud cost intersection. Companies scaling AI features often discover the compute and API costs compound in the same way cloud infrastructure costs do: without tagging, cadence, and pre-flight discipline, the spend drifts faster than the product grows.