A JOURNAL FROM THE NORWEGIAN OCEAN INDUSTRY AUTHORITY

Artificial intelligence is also a risk factor

Person with a black box on their head. Photo: Elisabeth Kjørmo
The black box metaphor is often used in AI to describe systems or models whose internal processes are hidden from or incomprehensible to users. They can see the inputs and outputs but are unable to understand how the systems reach their decisions.
In a complex neural network, tracing precisely how a model has learned to recognise patterns or take decisions can be difficult. This is what makes it a “black box”.
Black box technologies can be problematic precisely because of this opacity: we need not only to understand and be able to explain the crucial underlying decision processes, but also to identify errors and vulnerabilities and to assess the quality and use of data.
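The black box problem can be illustrated with a minimal sketch (not drawn from the article, and assuming no particular framework): a tiny hand-rolled neural network whose inputs and outputs are fully visible, while the learned weights in between carry no human-readable meaning.

```python
import math
import random

random.seed(0)

# A tiny two-layer network with arbitrary "learned" weights.
# From the outside, only inputs and outputs are observable; the
# internal weights themselves explain nothing -- the "black box".
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # input -> hidden
W2 = [random.uniform(-1, 1) for _ in range(4)]                      # hidden -> output

def predict(x):
    """Map three input features to a score between 0 and 1."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return 1 / (1 + math.exp(-sum(w * h for w, h in zip(W2, hidden))))

score = predict([0.2, 0.5, 0.9])  # the output is visible...
# ...but nothing in W1 or W2 tells a human *why* this score was produced.
```

Even in this toy case the weights are just numbers with no inherent interpretation; in a production network with millions of parameters, tracing a single decision back through them is correspondingly harder.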

Artificial intelligence (AI) offers many benefits and improvements, positive solutions and reduced risk, notes Havtil director general Anne Myhrvold. “But it may also be a risk factor in itself. Our main issue for 2025 gives us the opportunity to emphasise this and promote reflection, debate and action in the sector.”


Myhrvold emphasises the importance of people understanding AI’s limitations so that they can intervene and take action when necessary.

“An exaggerated reliance on AI may undermine people’s vigilance and lead to poorer decision making. It may also increase vulnerability to cyberattacks.”

She adds that responsible use of this technology is particularly important in industries with a risk of major accidents.

Limitations

“Risk can arise from incorrect use of data or errors in the underlying information. This may be distorted or transparency could be lacking – as with black box technology.”

Among the issues Myhrvold identifies are understanding how the systems work and where decisions are actually taken. Unless this is understood, she notes, errors may be difficult to identify and correct.

“With our main issue for 2025, we aim to emphasise the importance of recognising both the positive aspects of AI and the risk factors it presents. The latter mean that the technology also involves a number of challenges.”

Performance-based

“One question is whether our regulations are adapted to developing and utilising AI,” Myhrvold observes. “We enforce a technology-neutral, performance- and risk-based regulatory regime for both the petroleum sector and new industries.”

In Havtil’s view, she says, these regulations are largely applicable in their present form for ensuring good AI solutions.

“They set out basic requirements for prudent operation and risk management, which is important with a view to integrated and appropriate development and application of AI.

“But we also recognise that they may lack references to good norms and industry standards which can offer useful guidance where AI is concerned.

“We’ll therefore be monitoring the industry’s work on standards and standardisation, precisely to ensure that AI is covered in a better way.”

Watch video: Main issue 2025


Where the energy sector is concerned, AI continues to be integrated into ever more technologies – including those used in safety-related operations. AI-based systems represent a key resource and can help to reduce risk. But they may also do the opposite. Industries exposed to major accident risk are particularly vulnerable.      

The challenge is to take a broad view and consider AI in an integrated perspective. To ensure the safe and secure use and maintenance of such systems, their development must rest on an interplay between people, technology and organisations. We must also ensure that AI does not make us more vulnerable to external threats and malicious actions.   

Responsible use of AI is in the interest of everyone in the industry. Ultimately, responsibility for ensuring this rests with management.