We share a lot of information about AI and safety, but find that this information does not always reach the target audiences. We are therefore sending a letter directly to the companies, as well as publishing the information here on the website.
We ask that managers and employees responsible for acquiring, developing and operating systems that use AI, as well as employee representatives, be made aware of the letter’s contents.
Regulations
The HSE regulations in the petroleum industry are function-based, technology-neutral and based on principles of risk management. The regulations as they currently stand are therefore equally applicable to the follow-up of AI solutions. The information from Havtil must be viewed in the light of the HSE regulations applicable at any given time.
Background
The Government’s national strategy for artificial intelligence was published in January 2020. The strategy emphasises that the supervisory authorities, within their areas of responsibility, should monitor that systems using artificial intelligence operate in accordance with the principles of responsible and trustworthy use of artificial intelligence. The Norwegian Ocean Industry Authority (Havtil) uses the national strategy’s definition of AI.
In 2024, the Ministry of Digitalisation and Public Governance published a new national digitalisation strategy. Concerning AI, this sets out that, leading up to 2030, the Government will establish a national infrastructure for AI, placing Norway at the vanguard of ethical and safe AI use. Norway has established an overview of national AI resources, which is available at regjeringen.no (in Norwegian only).
Knowledge base and follow-up in the industry
We have published a number of articles, research reports and investigations on our website about risks associated with AI. One example is DNV GL’s report “Responsible use of artificial intelligence in the petroleum sector” (2024).
The report describes the fundamental risk factors involved in the development and use of artificial intelligence in the petroleum industry, especially with regard to major accident risk.
This report, along with other studies, knowledge summaries and investigations relevant to our follow-up of digital technologies such as AI, is available under the Artificial Intelligence topic here on the website. This is knowledge that companies should deploy in their own risk-mitigation activities.
In recent years, a number of international publications and overviews have also been prepared that provide information about possible risks and key mechanisms for the proper development and use of AI. This is knowledge that has been key to the design of European regulations and standards, as well as various national and international strategies. These have also been important sources of knowledge for Havtil’s work on AI and follow-up of the industry.
The use of AI in the petroleum industry also entails uncertainty. There is currently a limited basis for knowledge-based decisions about its safe use in systems that are important for safety. In many cases, the technology is not sufficiently documented or reliable. There is limited knowledge about the incidents that may occur, or about how AI may be one of several underlying causal factors in potential incidents. A weak knowledge base and high uncertainty mean that the precautionary principle must apply to the responsible development and use of AI.
Solutions that include AI are being introduced at a rapid pace. Through our audits, we often meet decision makers with limited familiarity with how AI is regulated. The industry has repeatedly expressed a need for professional guidance and information on how technological developments should be seen in the light of the design and purpose of the regulations.
Given the industry’s increasing deployment of AI systems, we see a need to strengthen our follow-up of the companies’ risk management of the development, use and maintenance of AI. Havtil’s attention is directed primarily at the responsible use of AI in the petroleum industry as a major accident industry.
AI in the light of the design and purpose of the regulations
Petroleum activities are associated with major accident risk. Havtil's regulations have a risk-based approach, where necessary risk reduction and a prudent level of safety are prerequisites for the activity. The companies are responsible for safety. This responsibility also includes the responsible use of AI.
The use of AI in the petroleum industry can affect risks related to major accidents, safety systems, barriers or critical infrastructure. AI in planning, operations and as decision support can directly and indirectly affect safety, the working environment, emergency preparedness and security.
AI is a field with a high pace of development, and new standards are being developed in the area. We will consider the need for updates to our regulations, as well as references to new standards, on an ongoing basis. We therefore ask you to keep up to date on changes in our regulations.
The list below shows selected regulatory requirements and how they may be applicable (the list is not exhaustive).
Framework Regulations, section 11 concerning risk reduction principles, second and third paragraphs
This entails choosing solutions that reduce the likelihood that the use of AI contributes to injury or to fault, hazard and accident situations. It also means choosing solutions that reduce known AI risks, and ensuring that technical, operational and organisational measures contribute to further risk reduction. Risk reduction is essential throughout the lifecycle of the AI system.
Framework Regulations, section 23 concerning general requirements for material and information
This entails the responsible party securing technical documentation of AI systems when purchasing, upgrading or developing them in-house. This includes documenting the development and use of the AI system, for example information about what data is included, as well as the training, testing and validation of the AI system. Documentation of the AI system should also be made available to the supervisory authorities so that audits and investigations can be conducted.
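As an illustration only, the sketch below shows one way such documentation could be kept in machine-readable form so that it can be handed to an auditor on request. The field names, the example system and all values are assumptions made for the purpose of illustration; they are not terms mandated by the regulations.

```python
# A minimal sketch of machine-readable documentation for an AI system.
# All field names and values are illustrative assumptions; the point is
# that data provenance, training, testing and validation are recorded.
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    system_name: str
    intended_use: str
    training_data_sources: list[str]      # where the data came from
    data_cutoff_date: str                 # how current the data is
    training_procedure: str               # e.g. algorithm, hyperparameters
    validation_method: str                # e.g. hold-out split, cross-validation
    validation_metrics: dict[str, float]  # measured performance
    known_limitations: list[str]          # documented weaknesses

record = AISystemRecord(
    system_name="gas-leak-anomaly-detector",  # hypothetical example system
    intended_use="Decision support for early detection of process anomalies",
    training_data_sources=["historian sensor data 2019-2023"],
    data_cutoff_date="2023-12-31",
    training_procedure="Gradient-boosted trees, fixed random seed",
    validation_method="Time-based hold-out split (last 12 months)",
    validation_metrics={"precision": 0.94, "recall": 0.88},
    known_limitations=["Not validated for shutdown/start-up transients"],
)

# Export for the supervisory authority on request.
print(json.dumps(asdict(record), indent=2))
```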
Framework Regulations, section 13 concerning facilitating employee participation, first paragraph
This entails that the knowledge and experience of both users and employee representatives are included in the work of developing and implementing AI systems that affect the working environment and safety.
When the introduction of new technologies such as AI leads to changes in work processes and other organisational changes, the employee representatives must be given the opportunity to participate.
Management Regulations, section 5 concerning barriers, first, second and fourth paragraphs
Where the use of AI will be included as part of barrier functions related to various fault, hazard and accident situations, the same requirements regarding reliability, availability, functionality, integrity, robustness and independence will apply.
Management Regulations, section 11 concerning the basis for making decisions and decision criteria, first and fourth paragraphs
This means that a decision-making basis informed by AI must have the requisite quality. Decisions informed by AI must be expressed in such a way that they can be followed up.
This requirement also addresses decisions included in the company’s risk management of AI. It is important to ensure an adequate knowledge base by involving relevant professionals and user groups.
Management Regulations, section 16 concerning general requirements for analyses, first, second and third paragraphs
This entails using recognised and fit-for-purpose AI models. The purpose of the model, and the representativeness, validity and limitations of the data, must be made visible. Decisions that can affect health, safety and the environment and that are informed by AI systems must be trustworthy.
Results from analyses performed by AI systems may be inaccurate or incorrect as a result of errors stemming from, for example, the training of models, discrepancies between training and operational data, or the use of outdated data. It is important that users of AI systems that are important for safety are made aware of the AI system’s characteristics, and that AI system results are communicated to the target audience in a nuanced and holistic manner.
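To illustrate one of these error sources, the sketch below checks a single input feature for discrepancy between training and operational data (data drift) using a two-sample Kolmogorov-Smirnov test. The feature, the significance level and the synthetic data are assumptions; a real system would monitor all relevant inputs.

```python
# A minimal sketch of detecting drift between training and operational
# data for one sensor feature. Threshold and data are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_values = rng.normal(loc=50.0, scale=5.0, size=5000)    # e.g. pressure at training time
operational_values = rng.normal(loc=54.0, scale=5.0, size=500)  # same sensor in operation

statistic, p_value = ks_2samp(training_values, operational_values)
if p_value < 0.01:  # assumed significance level
    print(f"Drift suspected (KS={statistic:.3f}, p={p_value:.2e}): "
          "flag results as potentially unreliable and review the model.")
else:
    print("No significant drift detected for this feature.")
```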
Management Regulations, section 13 concerning work processes, second and third paragraphs
This means that where AI systems are included in work processes, it is important to pay heed to their interaction with human and organisational factors.
AI systems included in work processes and interfaces between them must be described.
Facilities Regulations, section 9, and Technical and Operational Regulations, section 9, concerning the qualification and use of new technology and new methods
This entails establishing criteria for the development, trialling and testing of AI systems that are representative of the area of use. This also applies to systems in which AI systems are embedded. Test criteria must cover both normal situations and fault, hazard and accident situations.
It is important that the design of AI systems is human-centred, and that test criteria are developed for the performance of the overall system, i.e. the interaction between people, the AI technology and the organisation.
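As an illustration of scenario-based test criteria, the sketch below checks model performance separately for normal operation and for fault, hazard and accident scenarios, so that good average performance cannot mask weak performance in rare critical situations. The scenario names, thresholds and metric are assumptions, not requirements drawn from the regulations.

```python
# A minimal sketch of scenario-based acceptance testing. Each scenario
# has its own assumed minimum recall, so failures in rare critical
# situations are visible and not averaged away.
ACCEPTANCE_THRESHOLDS = {
    "normal_operation": 0.95,
    "gas_leak": 0.90,
}

def recall(predictions: list[bool], labels: list[bool]) -> float:
    true_pos = sum(p and l for p, l in zip(predictions, labels))
    actual_pos = sum(labels)
    return true_pos / actual_pos if actual_pos else 1.0

def evaluate(scenario_data: dict[str, tuple[list[bool], list[bool]]]) -> None:
    for scenario, (preds, labels) in scenario_data.items():
        score = recall(preds, labels)
        threshold = ACCEPTANCE_THRESHOLDS[scenario]
        status = "PASS" if score >= threshold else "FAIL"
        print(f"{scenario}: recall={score:.2f} (required {threshold}) {status}")

# Hypothetical test data: strong average performance, weak in one scenario.
evaluate({
    "normal_operation": ([True, True, True, False] * 25, [True] * 100),
    "gas_leak": ([True] * 9 + [False], [True] * 10),
})
```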
Facilities Regulations, section 21, and Technical and Operational Regulations, section 21, concerning human-machine interface and information presentation, first and second paragraphs
This entails AI systems presenting correct and easily accessible information to the user. This also includes the quality of information presentation, transparency and explainability in the user interface.
AI systems shall be capable of being monitored, and it shall be possible to take measures to quality-assure results from the AI system. This means that the interface communicates limitations and uncertainties in the AI system’s results in a correct and easily understandable manner. The AI system must also support the user’s ability to perform necessary actions easily and quickly, and be dimensioned for both normal and critical situations.
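As an illustration, the sketch below shows one way a user interface could communicate uncertainty and withhold a recommendation when confidence is too low for safe use. The confidence measure and threshold are assumptions; the point is that the result is presented with its limitations rather than as a bare answer.

```python
# A minimal sketch of uncertainty-aware result presentation. The
# confidence threshold is an assumed value, not a regulatory figure.
def present_result(prediction: str, confidence: float,
                   min_confidence: float = 0.80) -> str:
    if confidence < min_confidence:
        return (f"No recommendation: model confidence {confidence:.0%} is below "
                f"the {min_confidence:.0%} threshold. Manual assessment required.")
    return (f"Recommendation: {prediction} "
            f"(model confidence {confidence:.0%}; decision support only).")

print(present_result("reduce choke opening", 0.91))  # hypothetical result
print(present_result("reduce choke opening", 0.55))  # low-confidence case
```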
Facilities Regulations, sections 32-34 concerning fire and gas detection system, emergency shutdown system and process safety system, first paragraph
For systems where independence is currently required, the independence requirement will also apply if AI solutions are included in the systems.
Facilities Regulations, section 34a concerning control and monitoring system
This entails the AI system being protected against ICT-related hazards and the system itself having robust protection against attacks that can affect the reliability of the AI solution.
Activities Regulations, section 31 concerning monitoring and control, first and third paragraphs and
Technical and Operational Regulations, section 57 concerning monitoring and control
This means that where AI is used in systems of importance for safety, it shall be possible for users to perform monitoring and control functions in a safe and effective manner at all times. This also involves sufficient staffing and expertise to perform these tasks.
Activities Regulations, section 45 concerning maintenance and Technical and Operational Regulations, section 58 concerning maintenance
This entails AI systems being maintained from a lifecycle perspective, including maintenance of the data used by the AI system.
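As a simple illustration of treating data as a maintainable part of the AI system, the sketch below flags reference datasets that have not been refreshed within an assumed maintenance interval. The dataset names and the interval are hypothetical.

```python
# A minimal sketch of data maintenance: checking that reference data
# has not gone stale. Age limit and dataset names are assumptions.
from datetime import date

MAX_DATA_AGE_DAYS = 365  # assumed maintenance interval for reference data

datasets_last_refreshed = {  # hypothetical register of datasets
    "sensor_calibration_curves": date(2024, 3, 1),
    "equipment_failure_history": date(2022, 11, 15),
}

for name, refreshed in datasets_last_refreshed.items():
    age = (date.today() - refreshed).days
    if age > MAX_DATA_AGE_DAYS:
        print(f"Maintenance due: '{name}' last refreshed {age} days ago.")
```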
Activities Regulations, section 23 concerning training and drills, first paragraph and
Technical and Operational Regulations, section 52 concerning practice and exercises
The requirement for training and drills also applies to personnel using AI systems.
Training, practice and exercises are essential for enabling personnel to use AI systems in a responsible manner.
Activities Regulations, section 47 concerning maintenance programme, first and second paragraphs and Technical and Operational Regulations, section 59a concerning maintenance programme, first and second paragraphs
This means that failure modes and model biases are systematically prevented through the use of a maintenance programme for the AI system.
This also entails activities to monitor the performance and technical condition of the AI system so that failure modes or model biases are identified and corrected from a lifecycle perspective.
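As an illustration of such monitoring, the sketch below compares recent performance against the level documented at deployment and triggers a maintenance action when degradation exceeds an assumed tolerance. The baseline, tolerance and window size are illustrative assumptions.

```python
# A minimal sketch of lifecycle performance monitoring: recent accuracy
# is tracked in a rolling window and compared against an assumed
# deployment baseline; degradation triggers a maintenance alert.
from collections import deque
import random

BASELINE_ACCURACY = 0.93      # documented at deployment (assumed)
DEGRADATION_TOLERANCE = 0.05  # assumed maximum acceptable drop

recent_outcomes: deque[bool] = deque(maxlen=500)  # rolling window

def record_outcome(correct: bool) -> None:
    recent_outcomes.append(correct)
    if len(recent_outcomes) == recent_outcomes.maxlen:
        accuracy = sum(recent_outcomes) / len(recent_outcomes)
        if accuracy < BASELINE_ACCURACY - DEGRADATION_TOLERANCE:
            print(f"ALERT: accuracy {accuracy:.2f} below baseline "
                  f"{BASELINE_ACCURACY:.2f}; schedule model maintenance "
                  "(retraining, data review, bias check).")

# Simulate degraded operational feedback (hypothetical data).
random.seed(1)
for _ in range(500):
    record_outcome(random.random() < 0.85)  # observed accuracy ~0.85
```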