How AI Can Make You a Better Process Safety Expert
We have published a series of posts to do with Artificial Intelligence, specifically Large Language Models (LLMs), and their potential for use in the process safety discipline. These posts include:
- ChatGPT: What’s the Difference between Prestartup and Operational Readiness Reviews?; 
- AI, Process Hazards Analysis and Not Thinking the Unthinkable; 
- Artificial Intelligence and the Integrity of Process Safety Information.
So far, all of our work has been done using ChatGPT 3.5. However, internet feedback suggests that an upgrade to GPT-4o is justified. Replies to these posts further suggest that Gemini provides greater engineering accuracy.
Here are some of the conclusions I have come to so far (all in the context of process safety management).
- Most of us are still very close to the bottom of the learning curve when it comes to AI. There are very few power users. 
- ChatGPT is not to be trusted when it provides hard information. If we want facts, we go to Google, or even a book. ChatGPT makes stuff up. 
- Most of us are still at the ‘Tell me about . . .’ stage of use. Even when we are looking for system insights, we are still searching for facts. 
- ChatGPT can provide well-informed opinions. For example, when asked ‘Which is the most important element of process safety management?’ it gives a reply that is defensible, and that stimulates thinking. It is like talking to another process safety expert. 
- ChatGPT is creative. Very few of us use this capability, and even fewer understand what it will mean. For example, the program will help us create stories featuring people with strong personalities. These characters will develop and manage a process safety program as if they were real people. We will learn more from them than from reading lengthy reports. 
Two Points of View
There are still profound disagreements to do with the value of this technology. Two recent posts illustrate this divide. The first states that AI is already having a positive effect on business decisions, and that the trend will continue apace. The second suggests that AI contains the seeds of its own destruction.
Post #1 — Scott Galloway
Scott Galloway publishes a blog that provides valuable insights into the world of business, particularly the world of tech. His colleague, Greg Shove, wrote Thought Partner.
Boston Consulting Group, Harvard Business School, and Wharton released a study that compared two groups of BCG consultants — those with access to AI and those without. The consultants with AI completed 12% more tasks and did so 25% faster. They also produced results their bosses thought were 40% better. Consultants are thought partners . . .
He suggests that we should treat AI as follows,
Ask for ideas, not answers. If you ask for an answer, it will give you one (and probably not a very good one). As a thought partner, it’s better equipped to give you ideas, feedback, and other things to consider. Try to maintain an open-ended conversation that keeps evolving, rather than rushing to an answer . . .
The hardest part of working with AI isn’t learning to prompt. It’s managing your own ego and admitting you could use some help and that the world will pass you by if you don’t learn how to use AI. So get over your immediate defense mechanism — “AI can never do what I do” — and use it to do what you do, just better.
Post #2 — Ted Gioia
An opposing point of view comes from Ted Gioia in his post Google Thinks Beethoven Looks Like Mr. Bean.
Studies show the dangers of training AI with AI inputs. The results get worse and worse. But how can you prevent it when AI content is everywhere, and almost never disclosed as such?
Those huge AI companies have caused this by making the most boneheaded mistake of them all. As any ranch hand can tell you, you don’t dump your waste products into your drinking water. But that’s exactly what’s happening with AI contamination in the culture—the degraded outputs become inputs for the next round of bot technology.
It’s not hard to understand the risks. We’re feeding our robots with garbage, so don’t expect them to regurgitate it as a gourmet meal . . .
I doubt I will need to wait long—the monomaniacal AI cult members are making the largest capital investments in history to accelerate this replacement of the real with the bogus. They can’t get to the finish line fast enough.
In this kind of environment, things are going to hit a wall very soon—almost certainly within the next 12 months.
In an earlier post, Gioia says,
. . . when AI inputs are used to train AI, the results collapse into gibberish. This is a huge issue. AI garbage is now everywhere in the culture, and most of it undisclosed. So there’s no way that AI companies can remove it from future training inputs.
This problem is almost impossible to fix in a culture that relies heavily on algorithms. The algorithm is, by definition, a repeating pattern that always looks backward.
As I said in my post ChatGPT, Process Safety and Recursive Gibberish, my guess is that the process safety output from AI is not yet badly contaminated because (a) it is such a tiny business area, and (b) most of the information on the internet has been provided by professionals.
Conclusion
The problem of ‘recursive gibberish’ means that we need to be very careful when AI provides a factual answer to a prompt. But we already knew that. If we ask AI for opinions, suggestions, and creative input, however, it can be very helpful, provided we filter its results through human judgment.
Which is why I started this post with the words ‘How AI Can Make You a Better Process Safety Expert’.