Category: Security / Compliance / AI infrastructure
While the EU AI Act is getting the lion’s share of attention in the AI debate, NIS2 has already come into force – and it’s a directive with direct implications for how you manage your AI infrastructure today, not a year from now.
NIS2 (Network and Information Security Directive 2) replaced the original NIS Directive, and EU member states were required to transpose it into national law by October 2024. It extends cybersecurity and incident-reporting requirements to significantly more sectors and organizations than before.
If you operate in energy, transport, health, public administration, digital infrastructure, water supply, financial markets, or food – or if you supply organizations in these sectors – NIS2 is likely to be relevant to you. And if your AI environment processes sensitive operational data, is part of critical workflows, or connects with external systems, it is directly applicable.
What NIS2 actually requires
NIS2 is not a tool-specific requirement – it is about risk management and responsibility. Specifically, it means that organizations must:
– Implement technical and organizational security measures
– Have processes to identify incidents and report them within 24 hours (early warning) and 72 hours (full incident notification)
– Ensure supply chain security
– Hold management actively responsible for cybersecurity work, including personal liability in cases of serious failure
The last point is new and important. Under NIS2, managers can be held personally liable for non-compliance; cybersecurity is no longer an IT issue that can be fully delegated.
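The reporting windows above are easy to miss in practice if nobody computes the actual deadlines when an incident is detected. A minimal sketch of that calculation follows; the 24-hour, 72-hour, and one-month windows come from the directive, while the function name and dictionary keys are purely illustrative, not taken from any real tool or from NIS2 itself.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: given the detection time of an incident, compute
# the NIS2 reporting deadlines. Names are illustrative assumptions.
def nis2_deadlines(detected_at: datetime) -> dict:
    return {
        "early_warning": detected_at + timedelta(hours=24),
        "incident_notification": detected_at + timedelta(hours=72),
        "final_report": detected_at + timedelta(days=30),
    }

detected = datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc)
for stage, due in nis2_deadlines(detected).items():
    print(f"{stage}: {due.isoformat()}")
```

The point is not the arithmetic but the process: the clock starts at detection, so your incident runbook needs a defined "detected at" moment and an owner for each deadline.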
Why AI environments are particularly exposed
AI systems introduce security exposures that many traditional IT security frameworks were not designed to cover.
A trained model is an asset – it contains implicit knowledge from training data, sometimes including sensitive information. The question of where the model is stored, who can access it and how it is protected is directly relevant under NIS2.
Training and inference pipelines often handle large amounts of data, sometimes in real time. If these pipelines are poorly isolated or integrated with external services without clear control, they can be an entry point for attacks or data breaches.
Cloud-based AI services further complicate the picture. If your AI is running on an external provider’s infrastructure, you are still responsible for ensuring that the provider meets your NIS2 obligations. Supply chain security is an explicit requirement of the Directive.
Four concrete questions to ask about your AI environment
1. Where do your AI models run and who has access to them?
If the answer is ‘in the cloud with an external provider, and we are not entirely sure about the authorization controls’, that is a red flag.
2. How do you handle incidents in the AI system?
If a model starts to behave unexpectedly, if training data is leaked, or if the inference API is attacked – do you have a documented process for that? Do you have a timeline that meets the 24-hour early-warning requirement of NIS2?
3. How is your supply chain documented?
What external components, APIs and services are part of your AI environment, and have you assessed their security profile?
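Even a very simple, machine-readable inventory answers most of this question. The sketch below is a hypothetical example of what such a record could look like for an AI environment; every component, vendor name, and field is an illustrative assumption, not a reference to any real supplier or required format.

```python
# Hypothetical supply-chain inventory for an AI environment.
# All entries and field names are illustrative examples only.
suppliers = [
    {"component": "inference API gateway", "vendor": "example-vendor",
     "data_access": "prompts and responses", "assessed": True},
    {"component": "model registry", "vendor": "internal",
     "data_access": "model weights", "assessed": True},
    {"component": "embedding service", "vendor": "example-saas",
     "data_access": "document text", "assessed": False},
]

# Flag components whose security profile has not yet been assessed.
unassessed = [s["component"] for s in suppliers if not s["assessed"]]
print("Components without a security assessment:", unassessed)
```

A list like this, kept up to date, is also the natural starting point for the supply-chain documentation NIS2 expects you to be able to produce.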
4. Is management informed and involved?
Under NIS2, it is not enough for the IT department to manage the issues. Management needs to have a real understanding of the risks and be actively engaged in security efforts.
On-prem as a response to NIS2 requirements
One of the more direct implications of NIS2 for AI infrastructure is that on-prem or private hosting in Swedish data centers provides a clearer chain of custody. You know where the data is, you own the infrastructure, and you can document access controls and security measures without having to rely on an external vendor’s certification report.
This is not an argument that the cloud is insecure. It is an argument that clarity and control are easier to achieve and document when you own your environment.
This is exactly the kind of environment Aixia is building – with AiQu as the platform and Swedish data centers as the foundation.
If NIS2 compliance is an active issue for your organization, we are happy to join the conversation about what your AI infrastructure should look like. Contact us at Aixia or start with aiqu.ai.