A JOURNAL FROM THE NORWEGIAN OCEAN INDUSTRY AUTHORITY

Weak risk management of AI in high-hazard industry 

Artificial intelligence can create new safety risks in the petroleum industry. High uncertainty calls for a cautious approach when developing and deploying the technology. Havtil is seeing worrying examples of weak risk management, especially when the solutions are developed by specialists without adequate knowledge of major accident risk. 


Artificial intelligence (AI) is evolving rapidly, but higher uncertainty makes the technology more challenging to handle than established solutions. Established technologies build on solid experience, historical data and known risks. With AI, much of this knowledge is lacking, which creates greater uncertainty.

“We lack knowledge about new risk scenarios that may arise or how failure of the AI technologies could affect known major accident scenarios,” warns Elisabeth Lootz, principal engineer at Havtil.  

She emphasizes that higher uncertainty requires a new approach, where both developer environments and users of the technology must impose strict requirements on how risks are assessed and managed.

“When the knowledge base is limited and the uncertainty high, this must be reflected in the risk assessments.”  

Elisabeth Lootz
"When developing and deploying AI solutions in the industry, greater uncertainty calls for a cautious approach. Our concern is that we are finding the opposite, namely examples of weak risk management in the development and use of AI systems", warns Elisabeth Lootz, principal engineer at Havtil.  Photo: Elisabeth Kjørmo/Havtil

Concept of risk

In 2015, it was specified in the Norwegian petroleum regulations that risk is to be understood as the consequence of the activities, with associated uncertainty. To follow this up, Havtil published a memorandum on the concept of risk and one on risk management.  


“The 2015 clarification of the concept of risk is especially valuable when new technologies such as AI are introduced into a high-risk industry such as the petroleum sector, because it places greater emphasis on uncertainties and the quality of the knowledge base in the risk assessments. That means that AI requires a more cautious approach than traditional technologies”, Lootz stresses.  

Competency gap compounds the challenges 

The uncertainty that accompanies AI technology is compounded by new domain experts entering the industry without sufficient knowledge of major accident risk or of how offshore operations are planned and performed. This creates a competency gap that can affect the ability to identify and manage safety risks from early in the development process.

“We see that many developers coming from other industries and domains have limited knowledge of risk management and barrier management principles, risk assessment tools and technology qualification requirements,” says Lootz.  

“It is a challenge when specialists who do not fully understand the offshore risk picture develop systems to be used there. They have the technical expertise, but some lack an understanding of the context they are developing solutions for.”

Havtil also observes that risk management is often restricted to each specific AI system, without consideration of how the systems will work together in operating environments involving many people, teams and technologies. We see examples of insufficient integration between the companies’ safety experts and developer environments, which could result in potential risks not being identified, assessed and addressed in the development process.  

New safety risks 

The rapid development of AI is occurring in a competitive environment where companies themselves are expressing concern about being left behind. This pressure can lead to safety considerations being deprioritised in favour of rapid implementation.  

Lootz highlights several new safety risks:  

Unreliable predictions and incorrect output constitute one major risk. Because many AI systems are insufficiently transparent, they can fail in ways that are difficult to predict or understand. This is especially critical when people are expected to monitor and intervene in the event of system failure.  

Degradation over time represents a further challenge. AI models may gradually become less accurate because the world, and the data they are based on, are changing. This calls for continuous monitoring and maintenance, unlike traditional systems, which remain relatively stable over time.
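The kind of continuous monitoring described above can be illustrated with a minimal sketch. The function names, the mean-shift statistic and the threshold below are illustrative assumptions for this example, not a standard prescribed by Havtil or the industry:

```python
# Minimal sketch: detecting input drift for a deployed model by comparing
# a live window of sensor readings against the training-time baseline.
# All names and thresholds are illustrative assumptions.
from statistics import mean, stdev

def drift_score(baseline, window):
    """Standardised shift of the window mean relative to the baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(window) - mu) / sigma

def needs_review(baseline, window, threshold=2.0):
    """Flag the model for review/retraining when the input distribution
    has drifted beyond the (assumed) threshold."""
    return drift_score(baseline, window) > threshold

# Illustrative use: readings the model was trained on vs. live data.
baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
stable   = [10.1, 9.9, 10.0, 10.2]
drifted  = [12.5, 12.8, 13.1, 12.9]   # operating conditions have changed

print(needs_review(baseline, stable))   # → False
print(needs_review(baseline, drifted))  # → True
```

In practice such checks would run continuously on every model input, and a flag would trigger human review rather than automatic retraining.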

AI requires a more cautious approach than traditional technologies.

Overconfidence in human monitoring 

One general finding in Havtil’s follow-up is that many companies rely on a constant human presence to monitor and intervene if the AI system fails. Research-based knowledge of human capabilities and limitations is used inadequately when designing these technologies, at the expense of the human-centred design that supports safe operations.

“Research shows that people have limited ability to monitor automated systems without losing situational awareness and that their ability to intervene effectively diminishes over time. Overtrust in decisions made by automated systems is also a known phenomenon that is not adequately addressed in many digitalisation projects”, Lootz points out.  

“An interdisciplinary approach, with broad involvement of users and worker representatives, is essential for developing and qualifying safe solutions.” 

A lack of learning from incidents  

Systematic incident reporting and learning from incidents have been crucial for safety improvements in the petroleum industry. However, the industry lacks established systems for addressing AI incidents.  

“To date, no criteria have been developed for reporting AI-related incidents. No incidents have yet been reported to Havtil, nor have we seen any systematic reporting within the companies. Nor have any incidents that involve AI been investigated so far. This means that we are missing important learning opportunities.”  

This contrasts with how the petroleum industry otherwise works on safety issues.  

“Informed decisions, aimed at reducing risk through increased knowledge, are a crucial feature of risk management practices,” says Lootz.   

Letter to the industry about AI  

When developing and deploying AI solutions in the industry, greater uncertainty calls for a cautious approach.   

“Our concern is that we are finding the opposite, namely examples of weak risk management in the development and use of AI systems. Havtil expects the companies to take into account the uncertainty that accompanies AI technology, and for the competency gap between technology developers and safety experts to be addressed.

“In May, we therefore sent a memorandum to the industry about how our regulations apply to AI. This affirms, among other things, that the requirements for risk and barrier management also apply to AI.”

 “Prudent risk management is a fundamental prerequisite for operating on the Norwegian continental shelf,” Lootz concludes.  

This interview is based on the article “Risk management and uncertainty of artificial intelligence in high hazard industry”, presented at the ESREL conference in June 2025.