The Circularity of ChatGPT in Process Safety
ChatGPT became available for public use in November 2022. The program has obviously had a major impact on many areas of society. In this series of posts we jot down some thoughts as to how this new technology has affected the process safety management discipline. Needless to say, we are trying to hit a moving target; the technology is growing more quickly than our apprehension of its implications. Nevertheless, a year and a half is long enough to offer a few tentative conclusions.
The thoughts provided in this post are confined to the use of ChatGPT 3.5 within the process safety discipline. Other applications of AI, such as robotics, are excluded from this discussion.
The first post in the series (The Real Challenge for AI and PSM) addressed legal liability. That post generated some useful discussion, and we will revisit the topic as time permits. In the meantime we consider the reliability of ChatGPT's output, once more in the context of process safety management.
One of the most troublesome aspects of the process safety discipline is its inherent circularity. For example, in a PHA (Process Hazards Analysis) the team is often faced with a conundrum along the following lines.
Could high temperature cause an accident?
What is high temperature?
It is that temperature that could cause an accident.
Circular logic of this sort crops up in most areas of process safety.
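To make the circularity concrete, here is a toy sketch in Python. This is our illustration only; the function names are hypothetical and do not come from any PHA standard. Each definition defers to the other, so a query never bottoms out.

```python
# A toy illustration of the PHA conundrum above: each definition
# refers to the other, so the reasoning never terminates.

def is_high_temperature(temp_c: float) -> bool:
    """'High temperature' is a temperature that could cause an accident."""
    return could_cause_accident(temp_c)

def could_cause_accident(temp_c: float) -> bool:
    """An accident is possible when the temperature is high."""
    return is_high_temperature(temp_c)

# Calling is_high_temperature(180.0) recurses until Python raises
# RecursionError. The circular definitions supply no independent
# criterion, such as a vessel's design limit, to test against.
```

A PHA team breaks the loop the same way a programmer would: by anchoring one of the definitions to something external, such as equipment design limits or relief valve set points.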
It is likely that ChatGPT will exacerbate this difficulty.
Part of the ChatGPT disclaimer is that the system will make use of the information it generates in response to the questions we ask. Version 3.5 of the program uses a 'frozen' database. However, future versions will be live. Therefore, anything written, including this post, could be used to generate future responses.
This is a troublesome conclusion. The reliability of the information that the program provides depends in part on the reliability of the information that is fed to it. The program has no external means of evaluating the quality of that information.
To better understand this difficulty, I naturally went to ChatGPT. I asked the following question.
Is the quality of the information that ChatGPT provides on the topic of process safety reliable? Can we use that information to make decisions that affect the life and health of many people?
In response, the program punted. The final paragraph of its reply was,
If you're looking to understand general concepts or want to supplement your existing knowledge, I'm here to assist. For critical decisions impacting safety and health, it's always best to consult with qualified professionals who have direct experience and expertise in the area.
There is nothing wrong with that reply, but we have to recognize that many users of ChatGPT will use the program without consulting qualified professionals. That output may not be accurate, but it will be incorporated into the program's database. The information so generated may then lead to further low-quality responses. And so on, and so on.
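The feedback loop can be sketched with a toy simulation. Everything below is our illustration: the quantities and numbers are assumptions chosen only to show the direction of the drift, not measurements of any real system.

```python
# Toy model of the feedback loop: each "generation", a fraction of the
# training corpus is replaced by unreviewed model output, and the model's
# answer quality simply mirrors the average quality of its corpus.

corpus_quality = 1.0      # assumed starting point: fully reliable expert text
feedback_fraction = 0.2   # assumed share of new corpus that is recycled output
fidelity = 0.9            # assumed quality retained when output is recycled

for generation in range(1, 6):
    model_quality = corpus_quality                # model mirrors its corpus
    recycled_quality = fidelity * model_quality   # unreviewed output loses a little
    corpus_quality = ((1 - feedback_fraction) * corpus_quality
                      + feedback_fraction * recycled_quality)
    print(f"generation {generation}: corpus quality = {corpus_quality:.3f}")

# Prints 0.980, 0.960, 0.941, 0.922, 0.904: a steady downward drift.
# With no external check on quality, the loop only ever runs one way.
```

The exact numbers do not matter. What matters is that, absent review by qualified professionals, every pass through the loop can only hold quality steady or erode it.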