“The government’s goal is to establish good infrastructure for AI in Norway, and standards are a strategic tool for fulfilling the requirements of the regulations which are rooted in the EU’s AI Act,” said Tung.

From the left: Jon Sandnes, Chair of the Board at Standards Norway; Sigve Knudsen, Director General, Havtil; Karianne Tung, Minister of Digitalisation and Public Governance. Photo: Standard Norge

The event took place at Havtil’s offices in Stavanger in parallel with meetings of the European committee for AI standardisation (CEN/CENELEC Joint Technical Committee 21).

“AI technology is evolving rapidly and we need to work together to make sure we have standards that keep pace with this development. Without industry standards, we risk ending up with a fragmented approach to AI safety, which could lead to the development and adoption of AI systems that actually increase the risk of accidents and incidents. It is important for us to be engaged in this work, and it is gratifying to see the companies really come onboard too,” said Finn Carlsen, Director of Professional Competence at Havtil.

The key to responsible AI

A key instrument in the implementation of the AI Act is the development and application of harmonised standards, which will specify technical requirements and documentation procedures. This will provide both governments and developers with a common basis for compliance. The standards contribute to common terminology, risk management and trust – and facilitate sustainable innovation.

“Build on standards from day one. This will make it easier to meet regulatory requirements and ensure the ethical use of artificial intelligence,” said Jon Sandnes, Chair of the Board at Standards Norway.

Risk-based approach

The AI Act, which is set to become Norwegian law, is based on a risk-based classification of AI systems:

Prohibited AI: Systems that pose an unacceptable risk – such as social scoring or manipulative techniques – will be prohibited.

High-risk AI: Systems used in critical sectors such as health, education, justice, safety and working life are subject to strict requirements, including risk assessment, transparency and human control.

Limited risk: Subject only to transparency obligations – for example, users must be informed when they are interacting with an AI chatbot.

Minimal risk: Systems such as AI used in games or spam filters are not subject to separate regulation.

Participation is crucial for Norwegian interests

The seminar highlighted the importance of Norwegian participation in international standardisation work. Standards Norway invited representatives from the public and private sectors, academia and the technology community to take part in this important work.

Other items on the programme included the status of AI standardisation, parallels between the AI Act and the GDPR, and practical examples of responsible use of AI in the Nordic region. The day was rounded off with a panel discussion on how enterprises can meet the requirements of the AI Act.