A JOURNAL FROM THE NORWEGIAN OCEAN INDUSTRY AUTHORITY

AI on the fast track

Person with a black box on their head. Photo: Elisabeth Kjørmo
A new knowledge overview has mapped the fundamental risks posed by developing and applying AI in the petroleum sector, particularly where major accident risk is concerned. It finds that new technologies are being introduced rapidly.

"Conducted for Havtil by DNV, the review has sought to boost knowledge about the risks linked to the development and use of AI in safety-related operations on the NCS. Its purpose has also been to explore how AI can enhance efficiency and safety while taking account of the unique risks introduced by AI when compared with traditional IT and automation systems.

Rapid

The study reveals a clear expectation that AI will be introduced to the petroleum sector at a fast pace. Such solutions will initially be used to generate various kinds of documents and source code, and in the short term for advisory applications in such areas as operations optimisation and predictive maintenance.

In the longer term, AI is also expected to be applied in control functions – including lifting operations and well control. Autonomous AI-based systems will initially be introduced where the damage potential is low, such as in subsea vehicles and drones.

Undermine

Published in Norwegian in 2024, the DNV report identifies several risk factors with the potential to undermine operational safety, including:

  • inadequate training of algorithms,
  • poor data quality,
  • model deterioration over time (see the monitoring sketch after this list),
  • over-adaptation (overfitting) to training data.
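
Deterioration of this kind can in principle be watched for automatically. Below is a minimal sketch of what such monitoring might look like, assuming Python with NumPy and SciPy; the function name, significance threshold and data are illustrative assumptions, not anything specified in the DNV report.

```python
# Minimal sketch: flagging input-data drift that can signal model
# deterioration over time. Names, thresholds and data are illustrative
# assumptions, not values from the DNV report.
import numpy as np
from scipy.stats import ks_2samp

def drift_alarm(train_feature: np.ndarray,
                live_feature: np.ndarray,
                alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test comparing the distribution a
    model was trained on with the distribution it currently sees.
    A small p-value suggests the live data no longer match the training
    data, so the model's outputs should be treated with caution."""
    _, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha

# Example: a sensor reading drifts upwards, e.g. from equipment wear.
rng = np.random.default_rng(0)
training = rng.normal(loc=50.0, scale=2.0, size=5_000)  # historical data
live = rng.normal(loc=53.0, scale=2.5, size=1_000)      # recent window
print(drift_alarm(training, live))                      # True -> review the model
```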

AI-based systems will also be vulnerable to software design flaws, hardware defects and deliberate interference such as cyber attacks. AI is expected to be adopted in various types of systems and operations. Most of these involve a risk that information generated using AI-based systems could result in faulty operational decisions, which may in turn cause accidents.

The report uses “safety-related system” as an umbrella term covering safety, control and monitoring solutions, as well as applications for advisory purposes, planning and condition monitoring.

Barriers

The expectation is that personnel and environmental safety will continue to be governed by the barrier philosophy which forms the basis of Havtil’s current regulatory regime.

Underlying this approach is the idea that, no matter how much effort is put into creating safe and robust solutions, errors, hazards and accidents may still occur. Barriers should then activate to help manage these circumstances. Were an AI-based application to produce a result that makes an operation unsafe, the philosophy is that barriers will prevent this from escalating into a hazardous incident.

Such barriers include manual overrides, autonomous control functions and the use of independent safety functions to shut down the process being controlled. Operational restrictions come in addition, to reduce the risk of an incident if the other barriers prove ineffective.
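
To make the barrier idea concrete, here is a minimal sketch of an independent safety function of the kind described above, assuming Python; the class name, sensor reading and trip limit are hypothetical. The point is only that the shutdown decision rests on the barrier's own measurement, not on anything produced by the (possibly AI-based) controller.

```python
# Minimal sketch of an independent safety barrier: a trip function that
# shuts the process down based on its own measurement, whatever the
# (possibly AI-based) controller commands. Names and limits are hypothetical.
from dataclasses import dataclass

@dataclass
class PressureTrip:
    high_limit_bar: float  # independent trip set point

    def should_shut_down(self, measured_bar: float) -> bool:
        """Decide from the barrier's own sensor reading, not from the
        controller's internal state or outputs."""
        return measured_bar >= self.high_limit_bar

def control_step(controller_output: float,
                 independent_reading_bar: float,
                 trip: PressureTrip) -> float:
    """The trip overrides the controller unconditionally."""
    if trip.should_shut_down(independent_reading_bar):
        return 0.0  # shutdown
    return controller_output

trip = PressureTrip(high_limit_bar=120.0)
print(control_step(0.8, 131.5, trip))  # 0.0 -> the barrier overrides
```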

Interaction

In many types of operations, the decision to activate barriers is today taken by people. But the challenge is that undesirable software conditions do not always trigger an alarm.

It is far from certain, for example, that this would happen were an AI algorithm exposed to an operational scenario it has not been trained for.

In such circumstances, barriers will only be activated if personnel are able to utilise the total available information to detect that something is wrong.

This underlines the need for AI-human interaction, a human-centred design approach to AI solutions, and the ability to detect unsafe conditions in a way which is independent of the system containing AI.

Complexity

AI will typically boost both the capability and complexity of a system. Research reveals that the more capable and complex a system, the less a user will be able to understand it and its limitations – and to monitor it reliably.

The challenges posed by human detection of unsafe circumstances mean that the industry should explore opportunities for automated detection of such conditions caused by AI.
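
One candidate for such automated detection is to flag inputs that fall outside the envelope of the data the model was trained on, so that an untrained scenario raises a warning independently of the model's own output. The sketch below illustrates the idea, assuming Python with NumPy; the class name, margin and data are illustrative assumptions.

```python
# Minimal sketch: out-of-distribution detection by comparing live inputs
# against the (padded) range of the training data. The margin and all
# names are illustrative assumptions.
import numpy as np

class EnvelopeMonitor:
    def __init__(self, training_inputs: np.ndarray, margin: float = 0.05):
        span = training_inputs.max(axis=0) - training_inputs.min(axis=0)
        self.low = training_inputs.min(axis=0) - margin * span
        self.high = training_inputs.max(axis=0) + margin * span

    def out_of_distribution(self, x: np.ndarray) -> bool:
        """True if any feature lies outside the padded training range,
        i.e. a scenario the model has arguably not been trained for."""
        return bool(np.any(x < self.low) or np.any(x > self.high))

rng = np.random.default_rng(1)
history = rng.uniform(low=[0.0, 10.0], high=[5.0, 50.0], size=(10_000, 2))
monitor = EnvelopeMonitor(history)
print(monitor.out_of_distribution(np.array([2.5, 30.0])))  # False: familiar input
print(monitor.out_of_distribution(np.array([9.0, 30.0])))  # True: untrained scenario
```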

Read more: Responsible use of artificial intelligence in the petroleum sector

Limiting factors

To ensure that barriers are effective at all times, it must be possible to detect both the existence and the cause of an unsafe condition. If software has in some way given rise to the condition, it can only be identified – regardless of cause – if detection mechanisms entirely independent of that software are in place.

An example of a barrier with an independent detection mechanism is a safety function linked to a safety-critical process which is automatically activated when its own condition measurements indicate this to be necessary.

Increased levels of automation nevertheless mean that independent detection of undesirable conditions and their causes may be difficult in many cases. As a result, the petroleum sector – like many other industries – is moving towards a grey area where critical and complex functions must continuously perform as intended to maintain safety.

The challenges faced here relate particularly to human detection of undesirable conditions, and will often be present regardless of whether AI is in use. But adopting AI may exacerbate them.

A lack of mechanisms able to detect unsafe conditions independently of the system containing AI is expected to set limits on where the technology can safely be introduced.

Standards

The standards referenced in Havtil’s regulations for pure safety functions set very strict requirements for software development, verification and validation. Incorporating AI into such functions would therefore create a heavy burden of proof, and DNV believes AI is unlikely to be incorporated in them any time soon.

But it nevertheless accepts that AI-based components may be introduced in the longer term – in the form of safety functions where this technology supplements human intervention. Some AI systems will not generate deterministic results, which means multiple tests using the same input data will not necessarily give the same output.

This may create difficulties in the qualification, validation and maintenance of software containing AI, and may also impose restrictions on where it can be introduced. The industry needs to investigate how that challenge can be met.
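
One possible way to meet it is statistical acceptance testing: running the same test case many times and accepting the software only if the distribution of outputs stays within tolerance, rather than demanding bit-identical results. The sketch below illustrates this, assuming Python; the toy model, tolerances and run count are illustrative assumptions.

```python
# Minimal sketch: validating a non-deterministic model by repeated runs
# and statistics instead of a single pass/fail comparison. Tolerances,
# run count and the toy model are illustrative assumptions.
import random
import statistics

def validate_stochastic(model, test_input, expected: float,
                        runs: int = 100, tolerance: float = 0.02) -> bool:
    """Accept if the mean output over many runs is close to the expected
    value and the spread is small; one run proves little when outputs vary."""
    outputs = [model(test_input) for _ in range(runs)]
    mean_ok = abs(statistics.fmean(outputs) - expected) <= tolerance
    spread_ok = statistics.pstdev(outputs) <= tolerance
    return mean_ok and spread_ok

# Toy non-deterministic model: same input, slightly different output each run.
noisy_model = lambda x: 2.0 * x + random.gauss(0.0, 0.005)
print(validate_stochastic(noisy_model, 1.0, expected=2.0))  # very likely True
```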

Universal guidelines

Havtil’s regulations refer to a large number of Norwegian and international standards and guidelines. Most players will stick to these rather than having to show that alternative approaches are as good or better. Using a common set of standards and guidelines helps to harmonise the level of safety in the industry.

No standards or guidelines have so far been created with the safe use of AI in the petroleum sector specifically in mind. To keep safety levels harmonised and reduce the burden of proof for each player, it would be desirable for relevant industry participants to join forces on setting guidelines for best practice in the use of the various types of solutions containing AI. Such collaboration should make it easier to meet the requirements in the EU’s AI Act. Meanwhile, players wanting to adopt AI must individually clarify and meet the Act’s requirements in their own management systems.

Such operationalisation of high-level requirements is normally labour-intensive, but the workload for each organisation could be reduced if the industry collaborates.