
Intelligence explained

AI researcher Inga Strümke. Photo: NTB

What is actually meant by explainable AI? And how can we trust decisions taken by a technology most people do not fully understand?


Inga Strümke, an associate professor and researcher in machine learning, explainable AI and AI ethics at the Norwegian University of Science and Technology (NTNU), has responded to three key questions on this issue.

What is meant by explainable AI?

This covers tools created to clarify models which people cannot interpret directly. They can be used either when training a new model or to explain an existing one.
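To make that concrete, here is a minimal sketch of one common post-hoc explanation tool, permutation feature importance, applied to a model people cannot read directly. The library, dataset and model below are illustrative assumptions, not anything drawn from the interview.

```python
# Minimal, illustrative sketch: a post-hoc explanation of a "black-box" model
# using permutation feature importance (scikit-learn and synthetic data are
# assumed purely for the example).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A model most of us cannot interpret directly: a forest of 200 decision trees.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The explanation step: shuffle one feature at a time and measure how much the
# model's accuracy drops. A large drop means the model relied on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```

The point is not the particular method: the model itself stays opaque, while a separate tool produces a simplified account of which inputs its decisions depend on.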

Why is it so important that AI is explainable?

There are several reasons. First, we want to safeguard human autonomy and control (we can't protect our own interests if we don't understand what's going on).

Second, we want to make it possible to discover new knowledge (machine learning models exist which can find answers humans can’t come up with).

Finally, we must comply with existing and future regulatory regimes such as the EU’s general data protection regulation (GDPR) and AI Act, and the Norwegian Public Administration Act.

How can we trust decisions taken by a technology we don’t fully understand?

I believe it’s meaningless to talk about trusting or having confidence in mechanical processes. Trust is something you have in actors, and machines are still not considered to be actors. It’s like asking whether we should have confidence in a quadratic equation.

Unanswered question

What constitutes an adequate, and indeed an achievable, explanation of a machine learning model for us as humans has been extensively explored, but it remains an unanswered question.

The main problem, as I see it, is that it’s impossible to explain something complicated in simple terms.

It’s like trying to draw a cube in the form of a square. It simply doesn’t work; the cube has a whole dimension which the square lacks.

All explanations are necessarily simplifications, and somebody has to decide which part of the information is simplified away at the point of explanation.
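As a small illustration of that point (a sketch of the idea, not anything from the book): below, a nonlinear function stands in for a complex model, and a straight line fitted in a neighbourhood around one point stands in for the explanation. The line is readable precisely because the curvature has been simplified away, and how wide a neighbourhood to keep is a choice somebody has to make.

```python
# Illustrative sketch only: "explaining" a nonlinear model with a simple
# local linear surrogate. The surrogate is easy to read, but it flattens away
# the curvature, and the width of the neighbourhood is a human choice.
import numpy as np

def black_box(x):
    # Stand-in for a complex model: a curved input-output relationship.
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
x0 = 1.0      # the point whose prediction we want explained
width = 0.2   # somebody must decide how much of the behaviour to keep

# Sample the model in a small neighbourhood around x0 ...
x_local = x0 + rng.uniform(-width, width, size=200)
y_local = black_box(x_local)

# ... and fit a straight line through those samples: the "explanation".
slope, intercept = np.polyfit(x_local, y_local, deg=1)
print(f"local explanation near x0={x0}: y ~ {slope:.2f} * x + {intercept:.2f}")

# Simple and roughly faithful near x0, but the simplification shows far away:
x_far = 3.0
print(f"surrogate at x={x_far}: {slope * x_far + intercept:.2f}")
print(f"model     at x={x_far}: {black_box(x_far):.2f}")
```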

(Inga Strümke: Maskiner som tenker (Machines which think), Kagge Forlag, 2023).