On 28 October 2025 Havtil organised a workshop with participation from the authorities, operators and suppliers in the petroleum industry. The objective was to discuss how artificial intelligence (AI) can improve learning from incidents. Representatives from HSEQ, operations, IT and data/AI showed great commitment and provided valuable input as a basis for advancing work in this area.
Havtil possesses a large volume of data, but the most relevant information for finding root causes lies in the companies’ own investigation reports and internal incident reports. The challenge is that this knowledge is fragmented across organisations and departments.
As a result, sharing lessons learned can lead to information overload: individuals cannot read everything, and even when they do, they find it difficult to remember and consolidate the information later.
Need for a common taxonomy
A recurring theme in the workshop was the need for a common taxonomy to give the industry a shared language for understanding causes. This is essential for effective learning across organisations and departments.
Taxonomy
The science of classifying and systematising information. In this context, it concerns establishing a common language and framework for classifying incidents and contributory causes in a consistent way across different companies and organisations.
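As a purely illustrative aid, the sketch below shows what a small fragment of such a taxonomy could look like when expressed as a simple data structure. All category and factor names are invented for the example and do not represent any agreed industry standard.

```python
# Purely illustrative: the categories and factor names below are invented
# examples, not an agreed industry taxonomy.
from dataclasses import dataclass

# A small two-level hierarchy of contributory causes.
COMMON_TAXONOMY = {
    "technical": ["equipment_failure", "design_weakness", "degradation"],
    "operational": ["procedure_not_followed", "inadequate_procedure", "communication"],
    "organisational": ["inadequate_training", "time_pressure", "unclear_responsibility"],
}

@dataclass
class ClassifiedIncident:
    """An incident tagged with codes from the common taxonomy."""
    incident_id: str
    causal_factors: list[str]  # e.g. ["operational/communication"]

def is_valid_code(code: str) -> bool:
    """Check that a code such as 'technical/degradation' exists in the taxonomy."""
    category, _, factor = code.partition("/")
    return factor in COMMON_TAXONOMY.get(category, [])
```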
A standardised framework offers a number of benefits:
- Interoperability: Incident data from different systems can be linked and analysed in a consistent way.
- Better learning across organisations/domains: When causal factors and contributory elements are described using a common language, trends and patterns can be identified across organisations and domains.
- Effective use of AI: AI models need structure and context to provide reliable results and reduce the risk of misclassification and "hallucinations".
- Flexibility for companies: Companies can retain their internal classifications, but these should be mappable onto the common taxonomy.
This is not just a technical issue, but a strategic move to build a shared learning ecosystem that makes the industry more resilient and proactive.
How AI can help
DNV presented an AI tool developed in a project for Havtil in 2024. The tool demonstrates how artificial intelligence can convert unstructured or semi-structured data into structured, contextualised data.
Experience from the DNV/Havtil project shows that when knowledge and concepts are structured systematically and combined with a clear framework, the AI model produces better answers. Defining how different terms interrelate gives the model more context and enables more precise responses than relying on raw data alone. The result is a database that is not only organised, but also captures the relationships between its data elements.
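The sketch below is a hedged illustration of this general idea, not a description of the DNV tool: an extraction schema is defined up front, combined with the free-text report in a prompt, and the model is asked to return structured JSON. The schema fields and the call_llm helper are assumptions made for the example.

```python
# Hypothetical sketch of structuring an incident report with a language model.
# The schema fields and the call_llm() helper are assumptions, not the DNV/Havtil tool.
import json

EXTRACTION_SCHEMA = {
    "consequences": "actual and potential consequences of the incident",
    "causal_factors": "contributory causes, coded against the common taxonomy",
    "measures_implemented": "corrective and preventive measures taken",
}

def build_prompt(report_text: str) -> str:
    """Combine the free-text report with the schema so the model answers in JSON."""
    return (
        "Extract the following fields from the incident report below and reply "
        f"with JSON only, using these keys:\n{json.dumps(EXTRACTION_SCHEMA, indent=2)}\n\n"
        f"Report:\n{report_text}"
    )

def structure_report(report_text: str, call_llm) -> dict:
    """call_llm is any function that sends a prompt to a model and returns its text reply."""
    reply = call_llm(build_prompt(report_text))
    return json.loads(reply)  # should be validated against the taxonomy before storage
```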
One approach is to extract key data such as consequences, causal factors and measures implemented. In this process, data is anonymised so that learning can be shared without sharing the entire data set. The result is a shared learning ecosystem that can be used both reactively, to analyse past events and trends, and proactively, to update governing documents and processes and thereby improve the planning of new work activities.
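For illustration, the anonymisation step could look something like the sketch below, which masks known names and simple identifier patterns before a record is shared. The patterns are invented; a real pipeline would normally use more robust techniques such as named-entity recognition.

```python
# Minimal anonymisation sketch; patterns and ID formats are invented for illustration.
import re

PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}\.\d{1,2}\.\d{4}\b"),   # e.g. 28.10.2025
    "EMPLOYEE_ID": re.compile(r"\bEMP-\d{4,}\b"),         # hypothetical ID format
}

def anonymise(text: str, known_names: list[str]) -> str:
    """Replace known person/installation names and matched patterns with placeholders."""
    for name in known_names:
        text = text.replace(name, "[REDACTED]")
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```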
Quality and trust
Most workshop participants had experienced that AI does not always return trustworthy results, or that the results lack traceability. Improving the quality of AI output was an ambition shared by all.
Havtil addressed general challenges and risks experienced by the industry, including errors or biases in training data, obsolete data, results based on hallucinations, unexplained results and opaque processes, inadequate documentation, vulnerability to manipulation, and deficient classification of studies.
One important point was emphasised: data quality must be the foundation. Without good data, AI produces poor results. AI is mostly used to streamline office work or as a support tool where people make the final decision.
Obstacles
The group discussion identified several obstacles that may inhibit the realisation of a shared learning ecosystem.
These obstacles are technical, organisational and cultural:
Data sharing is a challenge because many companies are reluctant to share detailed incident data due to legal, competitive and security concerns. Without a robust mechanism for anonymisation and clear usage guidelines, data sharing will remain difficult.
Data quality is critical, as AI and advanced analytics are only as good as the data itself. Incomplete, inconsistent or misclassified data reduces the value of shared learning. Standardisation of terms and structures is essential for ensuring quality.
Resource constraints can be an obstacle. Implementing new systems, taxonomies and ontologies requires time, expertise and investment. Smaller players may have limited capacity to fully participate in such initiatives.
Confidence in AI results remains a challenge, especially when the results are perceived as coming from a “black box”. Transparency, traceability and validation of AI models are necessary to build trust.
Several companies already have their own classifications and frameworks, such as "Life Saving Rules", which can create resistance to change. The solution is not to replace internal systems, but to establish a common taxonomy that acts as a "translation layer", so that different classifications can be mapped to a common set of categories.
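A minimal sketch of such a translation layer, assuming invented internal codes on one side and the illustrative common-taxonomy codes used earlier on the other:

```python
# Sketch of a "translation layer": one company's internal codes mapped onto the
# shared taxonomy. Both code sets are invented for illustration.
INTERNAL_TO_COMMON = {
    "LSR-01 Work authorisation": "operational/procedure_not_followed",
    "LSR-05 Line of fire": "operational/communication",
    "MAINT-FAIL": "technical/equipment_failure",
}

def translate(internal_codes: list[str]) -> list[str]:
    """Return common-taxonomy codes for a company's internal classifications."""
    unmapped = [code for code in internal_codes if code not in INTERNAL_TO_COMMON]
    if unmapped:
        raise ValueError(f"No mapping defined for: {unmapped}")
    return [INTERNAL_TO_COMMON[code] for code in internal_codes]
```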
The way forward
It is Havtil's impression and hope that the workshop has provided knowledge and inspiration for further work on improving learning from incidents, including through the use of AI, and that the individual companies and other collaborative forums will take this work forward. No specific further activities were identified.