AI Act & GDPR: The Rules That Can Make or Break Your Startup
Many startups believe regulation is something they can deal with later — after traction, after funding, after product–market fit. At sTARTUp Day, the session AI Act & GDPR: The Rules That Can Make or Break Your Startup delivered a clear reality check: if you use AI or personal data, compliance is already your problem, whether you realise it or not.
The session was led by legal experts Maarja Lehemets and Karmen Turk from TRINITI Law Firm, and Aleksander Tsuiman from Veriff. Together, they unpacked how the EU AI Act and GDPR intersect — and why founders often underestimate how early these rules start to apply.
If you think the AI Act and GDPR don’t apply to you, they probably do
The session opened with a simple observation: most startups use AI, and almost all process personal data. Sometimes this happens directly in the product. Other times it happens through analytics, chatbots, recommendation engines, CV screening, dynamic pricing, or even internal tools.

Using AI does not require selling an AI product. Even deploying an internal system for HR, marketing, or customer support can trigger obligations under the AI Act. Similarly, GDPR applies whenever personal data is processed — including during model training, testing, or optimisation.
The speakers stressed that the AI Act is not an abstract future law. Large parts are already in force, and others will apply over the coming years. Waiting for “perfect clarity” is not a viable strategy.
Roles matter more than you think
One of the core mistakes startups make is misidentifying their role. Under the AI Act, companies are categorised as providers, deployers, importers, or distributors. Under GDPR, they are controllers or processors. These roles overlap — but they are not interchangeable.

Being “just a deployer” of an AI system does not automatically mean low responsibility under GDPR. In many cases, deployers still carry substantial obligations, especially if they control how the system is used in practice.
Even if you think you are just using a bit of AI, you may still carry most of the responsibility.
– Karmen Turk
Correct role qualification is the foundation of compliance. Without it, companies risk underestimating both their obligations and their liability.
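The dual qualification the speakers described can be sketched in code. The sketch below is purely illustrative (not legal advice, and not an official taxonomy beyond the role names the session listed): it records a company's role under each framework side by side, making the point that an AI Act deployer can still carry controller-level GDPR duties.

```python
from enum import Enum

# Roles named in the session: four under the AI Act, two under GDPR.
# Illustrative only -- real qualification turns on facts a lawyer assesses.

class AIActRole(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

class GDPRRole(Enum):
    CONTROLLER = "controller"
    PROCESSOR = "processor"

def qualify(ai_act_role: AIActRole, gdpr_role: GDPRRole) -> dict:
    """Record both qualifications together: the frameworks overlap
    but are not interchangeable."""
    return {
        "ai_act_role": ai_act_role.value,
        "gdpr_role": gdpr_role.value,
        # A deployer who controls how the system is used in practice
        # typically acts as a controller, with substantial GDPR duties.
        "likely_substantial_gdpr_duties": gdpr_role is GDPRRole.CONTROLLER,
    }

# Example: a startup running a third-party CV-screening tool on its
# own candidates -- "just using a bit of AI", yet a GDPR controller.
print(qualify(AIActRole.DEPLOYER, GDPRRole.CONTROLLER))
```

Writing the qualification down as a record, rather than holding it as an assumption, is exactly the documentation habit the speakers returned to throughout the session.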
Risk-based regulation changes how compliance works
Both the AI Act and GDPR are built on a risk-based approach. This means there is no simple checklist that guarantees compliance. Instead, companies must assess their own risks, make reasoned and diligent decisions, and document those decisions carefully.

High-risk AI use cases — listed in Annex III of the AI Act — come with particularly heavy obligations. But even so-called limited-risk or minimal-risk systems often trigger transparency requirements, user rights, and documentation duties.
Crucially, being wrong is not necessarily the biggest risk. Failing to assess and document is.
Your biggest legal risk is not being wrong — it’s not being diligent or documented.
– Aleksander Tsuiman
If a supervisory authority later disagrees with your classification, having a clear, well-documented assessment provides meaningful protection.
GDPR is always in the background
A recurring theme was that GDPR never disappears just because the AI Act exists. The two frameworks apply simultaneously — and sometimes inconsistently.

For example, the AI Act requires human oversight mainly for high-risk systems. GDPR, however, grants data subjects the right to human intervention whenever automated decision-making affects them. This means that even low-risk AI systems may still require human review under GDPR.
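The two overlapping triggers for human review can be captured in a few lines. This is a simplified sketch of the logic the speakers described, not a legal test: the AI Act prong keys off the risk class, while the GDPR prong applies regardless of that class whenever an automated decision affects the person.

```python
# Illustrative helper (not legal advice): when is human review needed?
# The AI Act mandates oversight mainly for high-risk systems; GDPR
# grants a right to human intervention for automated decisions that
# affect data subjects -- even in low-risk systems.

def human_review_required(ai_act_risk_class: str,
                          automated_decision: bool) -> bool:
    required_by_ai_act = ai_act_risk_class == "high"
    required_by_gdpr = automated_decision
    return required_by_ai_act or required_by_gdpr

# A minimal-risk system making automated decisions about people
# still needs a human-review path under GDPR:
print(human_review_required("minimal", automated_decision=True))  # True
```

The asymmetry is the point: passing the AI Act's risk classification does not exhaust the analysis, because the GDPR prong fires independently.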
Legal grounds for data processing remain critical. Consent, contract, legal obligation, and legitimate interest each come with strict conditions.
Missing or weak legal grounds remain one of the most heavily fined GDPR violations across Europe.
Privacy policies also play a bigger role than many founders expect. They are often the first thing regulators, competitors, or disgruntled former employees examine.
Assessments and documentation are not optional
Two types of assessments featured prominently: Data Protection Impact Assessments under GDPR, and AI risk assessments under the AI Act. While they address different risks, the speakers recommended combining them where possible to avoid contradictions and reduce both complexity and bureaucracy.

Documentation is the connective tissue of compliance. Assessments, role classifications, transparency decisions, and internal policies should all be written down and maintained over time.
If it’s not documented, it effectively doesn’t exist.
– Maarja Lehemets
This applies even to seemingly simple use cases such as chatbots, product recommendations, or personalised pricing. Low technical complexity does not equal low regulatory impact.
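A combined assessment record along the lines the speakers recommended might look like the sketch below. Every field name here is hypothetical — neither law prescribes this schema — but it shows the idea: one record holding the AI Act risk classification, the GDPR legal basis, and the reasoned rationale, serialised so the assessment exists somewhere other than someone's head.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Hypothetical record merging a GDPR DPIA and an AI Act risk
# assessment. Illustrative schema only -- not prescribed by either law.

@dataclass
class ComplianceRecord:
    system_name: str              # e.g. "product recommendations"
    ai_act_risk_class: str        # "high", "limited", or "minimal"
    gdpr_legal_basis: str         # e.g. "legitimate interest"
    human_review_available: bool
    assessed_on: date
    rationale: str                # the reasoned, diligent decision itself
    revisions: list = field(default_factory=list)  # maintained over time

    def to_json(self) -> str:
        """Write the assessment down: undocumented reasoning
        effectively doesn't exist."""
        d = asdict(self)
        d["assessed_on"] = self.assessed_on.isoformat()
        return json.dumps(d, indent=2)

record = ComplianceRecord(
    system_name="product recommendations",
    ai_act_risk_class="minimal",
    gdpr_legal_basis="legitimate interest",
    human_review_available=True,
    assessed_on=date(2026, 2, 16),
    rationale="No Annex III use case identified; recommendations "
              "assessed as not producing significant effects.",
)
print(record.to_json())
```

If a supervisory authority later disagrees with the classification, a record like this is what demonstrates the diligence the speakers said matters more than being right.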
What founders should take away
The session’s message was pragmatic rather than alarmist. Regulation does not exist to block innovation — but ignoring it can quietly destroy momentum later through blocked deals, delayed launches, or investor concerns.

Founders were encouraged to start early, think systematically, and treat compliance as a strategic layer rather than a last-minute fix. In the AI era, legal structure is not overhead — it is infrastructure.
For startups building with AI and data, the rules are already shaping the game. The companies that understand them early will move faster, not slower.