Artificial Intelligence and the Integrity of Process Safety Information
For those of us who write about process safety, the following article expresses a commonly felt concern.
The article states,
Over the past two years, a Wiley spokesperson told The Register, the publisher has retracted more than 11,300 papers from its Hindawi portfolio.
. . . paper mills rely on various unethical practices – such as the use of AI in manuscript fabrication and image manipulations
The article makes it clear that this problem is not confined to just one publishing house.
But the concern over scholarly research integrity isn't confined to Wiley publications. A study published in Nature last July suggests as many as a quarter of clinical trials are problematic or entirely fabricated.
Academic publishers, however, appear to want the benefits of AI writing assistance without the downsides.
The article also refers to United2Act, an initiative ‘committed to addressing the collective challenge of paper mills in scholarly publishing’.
Liability for Incorrect Process Safety Information
These concerns tie in to earlier discussions of liability in this series of posts.
Let’s take a simple example. A Process Hazards Analysis (PHA) team finds that a section of piping is subject to corrosion. The team recommends that the piping be made of a corrosion-resistant material. In the ‘good old days’ the facility’s manager would retain a consultant to advise on which material of construction to use. If the consultant’s recommendation turned out to be flawed, then the consultant or his or her company could be held liable.
But now, in this Brave New World, someone on the PHA team may use AI to generate a recommendation as to which material to use. If that recommendation is incorrect and someone is seriously injured as a consequence, who is liable?
The same concern applies to those who write about process safety. If an author uses a tool such as ChatGPT as part of his or her research, how does he or she know that the information supplied is correct? There is an endless amount of information on the internet that is either false or self-serving. All of it can be scooped up by ChatGPT or its brethren and then regurgitated.
Circularity of Information
To learn more about these difficulties, I put the following question to ChatGPT.
How can I be sure that the information ChatGPT provides is correct?
It gave an eight-point reply:
1. Cross-Referencing
2. Source Checking
3. Reputable Sources
4. Current Information
5. Critical Thinking
6. Ask Follow-Up Questions
7. Specialized Resources
8. Direct Confirmation
The third of these points ― Reputable Sources ― plunges us once again into the circularity that is endemic to so much of the process safety discipline. How do I know that a source is reputable, given that highly respected publishers such as Wiley are now plagued with unreliable, AI-generated information?
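The second point, Source Checking, can at least be made an explicit step rather than an afterthought. As a purely illustrative sketch, one can instruct the model to name the published sources behind each claim and then verify those sources by hand. The example below uses the OpenAI Python client; the model name, the prompt wording, and the piping question are my own assumptions for illustration, not a tested or endorsed recipe.

```python
# Illustrative sketch only: ask the model to name its sources so that a
# human can verify them. Assumes the `openai` Python package is installed
# and an API key is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical PHA-style question, in the spirit of the piping example above.
question = (
    "What material of construction would you recommend for a piping "
    "section subject to internal corrosion in hydrocarbon service?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever is current
    messages=[
        {
            "role": "system",
            "content": (
                "Answer the question, then list the specific published "
                "sources (standards, handbooks, papers) that support each "
                "claim. If you cannot name a source, say so explicitly."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
# Note: the listed 'sources' are themselves model output. They may be
# wrong or invented, so each one must be located and read before the
# answer is relied on for a safety decision.
```

Note that this does not break the circularity; it merely moves the problem to a checkable place. The model’s list of sources is still model output, and the judgment as to whether a source is reputable remains a human one.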
Further thoughts on this conundrum are provided in the post The Circularity of ChatGPT in Process Safety.