On human terms

Artificial intelligence (AI) is on the cusp of revolutionising the industry. At Equinor, it is crucial to find the right balance between leveraging new technology and ensuring safety.
Jan Tore Ludvigsen, head of organisational safety and human factors at Equinor, summarises the most important success factors for introducing AI:
"Critical thinking and human control are crucial to success," he says.
He explains that the company's philosophy is to put people front and centre when onboarding new technology.
"We cannot implement new technology and then expect people to adapt to the technology. We must carry out thorough risk assessments and develop tools which support people in their work processes.
AI will not replace people, but certain tasks will change or be solved in new ways. I believe that humans become even more important when we implement AI."

Strengths and weaknesses
Ludvigsen believes that successful AI use is about exploiting the complementary strengths of humans and machines.
"AI systems excel at processing huge amounts of data in a short time and at finding patterns that humans cannot detect as efficiently. Human expertise, on the other hand, is superior when it comes to understanding context and making intuition-based or knowledge-based decisions.
This is the basis for our human-machine collaboration. We must use AI for what the systems excel at, and humans for what we excel at, just as we have done with automated systems in the past."
Documented benefits
Equinor's approach to artificial intelligence is twofold. Office-based AI solutions built on large language models are used by most employees for tasks such as translation, copywriting, and presentations. At the same time, the company is developing industrial applications aimed at specific use cases.
"As per today, we have over 150 AI tools in our register, and we have already reaped significant benefits from the technology," says Ludvigsen.
Amongst other things, he highlights machine learning for condition monitoring in maintenance optimisation, where the company has documented tangible cost savings.
The technology is also used extensively in subsurface work, particularly for well planning.
"We also strongly believe in using AI to assist in finding the correct requirements in the management system more easily and more rapidly than previously. However, there are still experts in the field who must assess whether the requirements are correct and whether the interpretations are satisfactory”, he emphasizes.
Overconfidence
The EU's new regulation on artificial intelligence stipulates requirements for human control of AI systems, meaning that humans must monitor them and be able to intervene if something goes wrong. Ludvigsen welcomes the requirement, but points to an ethical dilemma:
"It becomes problematic when technology companies claim that their applications can be used in safety-critical settings provided humans are monitoring the situation. The responsibility is thus transferred to the end user.
We are aware that the models have their limitations. They can be unreliable, make errors or provide decision support that is not entirely correct. At the same time, humans have their limitations too. Humans make mistakes. We are not good at monitoring, because we get bored easily. We also tend to place too much trust in the systems, a phenomenon known as overconfidence."
The irony of automation
To illustrate the problem, he gives the example of motorists blindly following their GPS.
"In Eastern Norway, it sometimes happens that people drive onto ski slopes, because the trails serve as roads in the summer. People can drive for several kilometres before they realise their mistake. Even when they can see that they are on a ski slope, some continue driving until the car can advance no further."
This phenomenon is well known in safety research as the "irony of automation": the more you trust and use a system, the less able you become to understand the situation and act independently.
"My concern is that we will become passive in relation to technology. If you don't question the information you’re provided with, you're trusting something to which you should possibly be sceptical. This creates a loss of situational awareness which renders you unable to intervene when something is wrong or goes wrong.”

We must play our part
Ludvigsen stresses that increased use of AI requires interdisciplinary expertise.
"We need to understand humans, technology and how organisations function. We cannot leave development to the technologists alone."
A key part of the work is therefore involving the employees.
"We must listen to the end users. They are the ones who will be working with the risk and feeling its impact personally. At the end of the day, they know their job better than anyone.”
Strict qualification
To ensure that its systems are reliable, Equinor carries out thorough qualification processes for new technology. This means that certain applications may be rejected.
"One such example is an AI assistant that was to be used in the control room. Testing with real operators in the simulator showed that the assistant's decisions were not sufficiently reliable, and the project was therefore discontinued," says Ludvigsen.
Although the company has well-established guidelines for risk analysis and barrier management, he points out that these are now being further developed to encompass AI.
"International standardisation work is underway in this field, but the landscape remains demanding. There is a definite need for user-friendly guidelines.”
AI is not yet mature
"We believe that AI can make a positive contribution to safety on the shelf, for example through improved condition monitoring for safety systems and in terms of reducing people's exposure to risk and hazardous work.”
At the same time, Ludvigsen believes that AI's efficiency gains currently outstrip its safety gains, and that the technology is not yet mature enough for implementation in safety-critical control systems.
"I am concerned that the significance of human surveillance is not fully appreciated. We should avoid AI machine learning in our control systems until we can confidently document that the technology is reliable and that it is possible for humans to retain control.
Whilst AI can help us systematise data, monitor systems and provide a better overview of the safety barriers we have in place, I believe that the uncertainty associated with the technology today outweighs the potential benefits, particularly for safety-critical work in control rooms."