A recent article by Simar Grewal discussed how artificial intelligence (AI) can improve safety in the energy and utilities sector.
The article talks about the ‘transformative role of AI in workplace safety’. However, the examples provided seem more evolutionary than revolutionary, and hardly transformative. What caught my attention was the discussion of challenges and risks, a section that focused on cybersecurity.
I suggest that the real challenge will be litigation. Imagine the following scenario:
An operating company decides that it wants to reduce the risk of fatalities and injuries to a specified level.
An AI company offers a tool that draws on the client’s data and on information from the world wide web.
The AI tool proposes changes to operations in many areas, including instrumentation, shift organization and turnaround schedules. How the system arrives at its conclusions and recommendations is a mystery because it is, after all, artificial intelligence.
The client company implements the AI’s recommendations.
In the following six months there are two fatalities, many injuries and record levels of lost production.
The client company decides to sue the vendor. The vendor’s response is that the data used was simply ‘out there’, and that it therefore bears no responsibility for the results. Moreover, the vendor argues, because the system is based on artificial intelligence, it cannot be held responsible: all its people can apply is natural intelligence.
It’s anyone’s guess how long the resulting litigation would last, but it would surely go on for many years. It would slow the rate at which AI is implemented; and maybe that’s not all bad.
If sensible decisions are to be made, any such system must be complemented by verification wherever possible. Ultimately, an AI system for process safety management (PSM) should be a recommendation process built on verifiable data sources.
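To make the idea of ‘a recommendation process with verifiable data sources’ concrete, here is a minimal sketch in Python. Everything in it, the class names, the fields, the checks, is my own illustration rather than anything proposed in the article; the point is simply that every recommendation should carry references a human reviewer can check before anything is implemented.

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    """A data source that a human reviewer can verify independently."""
    description: str        # e.g. "Plant historian, unit 3, 2019-2024" (illustrative)
    reference: str          # URL, document number, or database query
    verified_by: str = ""   # reviewer's name once checked; empty until then

@dataclass
class Recommendation:
    """An AI-generated recommendation that stays traceable to its evidence."""
    action: str             # e.g. "Extend turnaround interval" (illustrative)
    rationale: str          # the system's stated reasoning
    sources: list[Source] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        """A recommendation with no traceable sources should not be acted on."""
        return len(self.sources) > 0

    def is_verified(self) -> bool:
        """True only when every cited source has been checked by a person."""
        return self.is_verifiable() and all(s.verified_by for s in self.sources)
```

Under this sketch, a recommendation whose evidence was merely ‘out there’ on the web never passes `is_verified()`, and so never reaches implementation, which is precisely the gate the litigation scenario above lacked.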
How AI works is a mystery to me, and probably to many others. Perhaps it is related to the data mining and neural networks of 30 years ago? My concern with using AI in the health and safety field is that the link between causes and consequences is not transparent. The system may find common factors which indicate that certain outcomes are possible or likely. But the bit in the middle, understanding how causes lead to consequences, will remain unknowable. This is where ‘natural’ intelligence applies deductive, logical thinking, which gives confidence in justifying appropriate controls. AI is just a case of ‘believe me, I know what I’m doing, but I can’t explain it’.
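The correlation-versus-causation worry can be shown in a few lines. Below is a toy sketch, with entirely synthetic data, illustrative feature names, and scikit-learn assumed as the library, in which a statistical model ranks factors associated with incidents while the actual mechanism never appears in its output.

```python
# A minimal sketch of the point above: a statistical model can rank factors
# associated with incidents, but its coefficients are correlations, not a
# causal account of how those factors lead to harm.
# The data is entirely synthetic; all feature names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
features = ["overtime_hours", "alarm_rate", "days_since_turnaround"]
X = rng.normal(size=(n, 3))

# Hidden causal story: fatigue drives incidents, and overtime is merely
# correlated with fatigue. The model never sees "fatigue" at all.
fatigue = 0.8 * X[:, 0] + rng.normal(scale=0.5, size=n)
y = (fatigue + 0.3 * X[:, 1] + rng.normal(scale=1.0, size=n) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
# The model ranks overtime_hours highly, but says nothing about *why*:
# the mechanism (fatigue) stays in "the bit in the middle".
```

The model here is genuinely useful as a flag, it points at overtime, but a human still has to supply the deductive step, fatigue impairs judgment, before anyone can justify a control such as shift limits.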