A JOURNAL FROM THE NORWEGIAN OCEAN INDUSTRY AUTHORITY

Five AI paradoxes

“AI isn’t actually all that intelligent. There’s no common sense and no situational awareness for taking good decisions. All it does is recognise statistical patterns and act entirely on the basis of these.”

Leading US researcher Mica R. Endsley is very positive about AI, but admits to having certain reservations. She highlights here some key preconditions for interacting with AI.


Paradoxes and challenges in automating systems were addressed by British psychologist Lisanne Bainbridge in 1983. The main argument of her article on the ironies of automation was that, although automation aims to reduce human error and boost efficiency, it can also create new problems.

This influential piece is regarded as a pioneering contribution to highlighting problems associated with automating systems.

Endsley, an engineer and former chief scientist with the US Air Force, extends that work to the current wave of AI-based automation and finds the challenges persist.

“My research reveals that many of the problems identified in connection with automation in general are directly applicable to AI,” she maintains.

Mica R. Endsley

An American engineer, a former chief scientist with the US Air Force and currently president of SA Technologies, Endsley is best known for her work on situational awareness.

This has had great influence in fields such as aviation, military operations and human-machine interaction. Her theories and research are widely applied to improve safety and efficiency in complex systems.

A focus on the human dimension in technology development has been a consistent thread running through Endsley’s career. She has distinguished herself as a leading researcher in the field of responsible AI.

Mica R. Endsley is an American engineer, former Chief Scientist of the US Air Force, and director of SA Technologies. Photo: Mica R. Endsley

In her 2023 article on the ironies of artificial intelligence, Endsley lists five paradoxes presented by AI.

Paradox 1: “AI isn’t all that intelligent”

First, AI is in practice a machine-learning program that is good at identifying statistical patterns in the large databases used to train it – but that is as far as it goes. The technology has problems dealing with anything which happens outside these databases.

“AI isn’t actually all that intelligent,” says Endsley. “There’s no common sense and no situational awareness for taking good decisions. All it does is recognise statistical patterns and act entirely on the basis of these.”

Paradox 2: “People struggle to understand AI”

The second paradox is that people struggle to understand AI. The more advanced a system, the more difficult it is to comprehend.

Endsley notes that AI only does parts of a job, never everything. People need to be involved in monitoring it.

“That’s difficult, especially given the ‘black box’ principle where you don’t always understand exactly why the AI is acting as it does.”

“And when even programmers don’t know why the software behaves as it does, we’re left guessing. Research shows that we’re not good at this, especially when systems change over time to become better and more highly trained.”

Paradox 3: “We struggle to compensate for AI’s limitations”

This is related to the previous paradox.

“Research shows that people monitoring AI systems find it hard to maintain their concentration,” Endsley reports. “That’s especially true when systems are working well.

“We tend to place far too much trust in the technology. Findings also indicate that people doing nothing other than monitoring AI systems have a tendency to lose their situational awareness.”

As AI systems become increasingly advanced and ever more highly trained on a variety of databases, human monitoring becomes even harder. And this leads to paradox four.

Paradox 4: “The more intelligent AI becomes, the more difficult it is to grasp its faults and limitations”

A number of well-known examples show that AI can be both biased and racist. Many of these are very obvious and easy to spot – as when an AI image generator asked to depict a doctor often shows a man, for example.

Such biases or systematic errors, which reflect limitations in the datasets the systems are trained on, are not difficult to detect.

“But the more advanced and highly trained a system is, the harder it is to recognise bias – and that can create problems,” says Endsley.

The main problem is that people are poor at monitoring and recognising faults in AI, and at assessing the trustworthiness of the advice and assistance it provides.

Paradox 5: “The more universal AI technology becomes, the more difficult it is to assess its trustworthiness”

To illustrate paradox 5, Endsley cites ChatGPT as an impressive language model where facts are nevertheless often faulty.

“When ChatGPT lacks access to the information requested, we see that it conjures things up – it hallucinates.

“And when such behaviour is integrated into an otherwise well-functioning language model, people struggle to assess how far they can trust it.

“That’s because we don’t know what its sources are, how the information is put together and whether correct facts are mingled with errors. This becomes even more problematic where safety-critical technologies are concerned.”

Being aware

The question is how to work around these paradoxes. Endsley believes this is a matter of focusing on people. That means ensuring thorough testing, pursuing effective management – and not least being aware of the paradoxes as well as the difference between high- and low-risk technologies.

“This doesn’t matter so much in low-risk cases,” she says. “You can live with being recommended a film you don’t like. Problems start to arise in high-risk industrial settings.

“Where safety-critical technology is concerned, we’re a long way from being able to trust AI. It needs thorough development and testing.”

She makes it clear that testing is not just about exposing software errors and deficiencies, but also establishing how technology interacts with people – human-centred development.

“When that isn’t the case, you see these systems fall apart.”

To illustrate this, Endsley cites the 2010 Deepwater Horizon disaster in the Gulf of Mexico as a tragic case where workers on the rig had problems seeing and understanding the automated systems.

“We must pay attention to interfaces and user-friendliness when developing such systems. The more complex the technology, the more important this is.”

Advice

Where AI is concerned, Endsley likes to call herself “a cautious optimist about the future”. But she sees major challenges, especially in safety-critical technologies.

And she has some clear advice for managers and decision-makers looking to implement AI in their systems.

“Recruit specialists in human-centred technology development who can design interfaces on this basis, and experts who understand how people interact with technology. Failing to do so simply creates new problems.

“And, most importantly of all, concentrate attention on safety. Employ someone who understands the paradoxes of AI and can attract management’s attention.”

Boeing

Endsley uses aircraft manufacturer Boeing as an example of a company which failed to understand the risk picture – and had to pay for it.

“For many years, it enjoyed a very good reputation for designing and manufacturing safe aircraft. But cost-cutting over a number of years led to the focus on safety being pushed down the organisation.

“Management talked a lot about focusing on technology and how important it was to be positive about new advances. The human dimension and a grasp of the limits of the technology were overlooked.

“This has created a wealth of hidden problems and faults in the company, leading to major accidents (see separate section).”

Boeing accidents

A Boeing 737 MAX aircraft crashed in Indonesia during October 2018, with a similar accident in Ethiopia the following year. In all, 346 people lost their lives.

Investigations attributed both crashes to a new system intended to push the aircraft’s nose down automatically if it risked stalling. But pilots lacked adequate training in turning off the system when it activated in error.

Boeing was blamed, hundreds of aircraft were grounded around the world, and the company had to pay USD 2.5 billion in compensation to the families of accident victims.

Implications

Endsley points out that it is not enough to be aware of the paradoxes of AI. Their consequences must be accepted, particularly the fact that people are poor at monitoring AI systems.

“Problems start when AI becomes a decision tool, with the system deciding automatically and the human-centred aspect disappearing.

“The latter dimension calls for the integration of people and technology, and exploiting what each is best at. AI must support people – that’s when it’s most effective.”

Listen to more from Mica R. Endsley in the podcast “The 5 AI Paradoxes”: How intelligent is artificial intelligence? What does an industry that wants to implement AI solutions need to know about the technology, and how can we implement it responsibly?