AI, Incident Investigation and Misleading Information
AI will be helpful in incident investigations. It can handle vast amounts of data and analyze potential system effects better than a human can.
But ― and there’s always a but ― what about human misinformation?
Prejudice
I posed the following prompt to ChatGPT.
One of the ways in which AI can improve process safety is in the area of incident investigation. It can analyze complex system failures more thoroughly and quickly than a human. However, how does AI handle misinformation?
The obvious concern is that someone will lie to cover up a mistake that they made. But my concern is more subtle: it has to do with ‘honest error’.
For example, I worked on a very large chemical plant that had many problems. Most of the instruments were manufactured by ABC Controls (a made-up name). One of the technicians running the plant had had bad experiences with ABC Controls instruments while working at a different plant for a different company. Therefore, he blamed our problems on the ABC Controls instruments.
He was wrong ― there was nothing wrong with the instruments ― he was prejudiced (‘pre’ + ‘judge’). How can an AI incident investigation handle this type of error?
ChatGPT’s reply was organized into six major headings; a code sketch of the first idea follows the list:
Cross-Correlation of Testimony with Physical Data
Source Weighting and Provenance Tracking
Bias Detection Algorithms
Use of Digital Twins or Simulation Models
Continuous Learning with Human-in-the-Loop Review
Anomaly Detection from Prior Patterns
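The first heading, cross-correlation of testimony with physical data, suggests a concrete mechanism that bears directly on the ABC Controls story. Here is a minimal sketch in Python, using entirely invented vendor names and fault records: compare how often witnesses blame a vendor with how often the physical evidence actually implicates it, and flag the gap.

```python
from collections import Counter

# Invented data for illustration: the vendor each witness statement blames,
# and the vendor (or cause) each confirmed fault record actually implicates.
blamed_in_testimony = ["ABC Controls", "ABC Controls", "ABC Controls",
                       "ABC Controls", "XYZ Valves"]
implicated_by_records = ["XYZ Valves", "XYZ Valves", "operator error",
                         "XYZ Valves", "XYZ Valves"]

def blame_vs_evidence(testimony, records, threshold=0.3):
    """Flag vendors blamed far more often than the physical record supports."""
    blamed, confirmed = Counter(testimony), Counter(records)
    report = {}
    for vendor in set(blamed) | set(confirmed):
        b = blamed[vendor] / len(testimony)   # share of blame in testimony
        c = confirmed[vendor] / len(records)  # share confirmed by evidence
        report[vendor] = (b, c, b - c > threshold)
    return report

for vendor, (b, c, flag) in blame_vs_evidence(blamed_in_testimony,
                                              implicated_by_records).items():
    note = "  <-- possible prejudice, check further" if flag else ""
    print(f"{vendor}: blamed {b:.0%}, confirmed {c:.0%}{note}")
```

On these invented numbers, ABC Controls is blamed in 80% of the statements but confirmed by none of the fault records, which is exactly the pattern the technician’s prejudice would produce.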
Each category was interesting and useful, but ‘Bias Detection’ caught my attention (there is another sketch after the quote). It said,
AI can be trained to detect cognitive biases in language and reporting patterns. For example:
Excessive generalization (“ABC equipment always fails”)
Repetition of unverified claims across reports
Lack of supporting evidence
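Those three cues lend themselves to simple heuristics. The sketch below is my own illustration, not ChatGPT’s: it scans invented witness statements for over-generalizations and for the absence of any cited evidence. A real system would use far more sophisticated language analysis, but the principle is the same.

```python
import re

# Invented witness statements; the cue words follow ChatGPT's examples.
statements = [
    "ABC equipment always fails under load.",
    "The ABC transmitter on FT-101 drifted; see the attached trend.",
    "ABC instruments never hold calibration.",
]

# Over-generalization cues ("always fails", "never works", ...).
GENERALIZATION = re.compile(r"\b(always|never|every time|all of them)\b", re.I)
# Crude proxy for supporting evidence: a reference to a trend, log, or record.
EVIDENCE = re.compile(r"\b(attached|trend|record|log)\b", re.I)

for s in statements:
    flags = []
    if GENERALIZATION.search(s):
        flags.append("excessive generalization")
    if not EVIDENCE.search(s):
        flags.append("no supporting evidence cited")
    status = "FLAG" if flags else "ok"
    print(f"{status:4} | {s}" + (f"  [{'; '.join(flags)}]" if flags else ""))
```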
But There’s Still a Limit
Its reply concluded as follows,
AI cannot know what it cannot know. If all inputs are biased—if an entire team shares a mistaken belief—then AI may simply encode and repeat that mistake, especially if there's no contradictory sensor or trend data to challenge it.
This is why AI must remain a tool—not a judge. Its real value is in:
Processing large, complex, and conflicting datasets rapidly
Identifying patterns and contradictions
Flagging weak reasoning or unsupported claims
Suggesting alternative hypotheses
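That division of labour can even be enforced in software. A minimal sketch, again with invented names: the AI may only emit hypotheses, each carrying whatever supporting evidence it found, and nothing becomes a finding until a named human investigator accepts it.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """An AI-suggested explanation. It is never a finding on its own."""
    text: str
    evidence: list = field(default_factory=list)
    reviewed_by: str = ""    # filled in only by a human investigator
    accepted: bool = False

def accept(h: Hypothesis, investigator: str) -> None:
    """Only a named human can promote a hypothesis to a finding."""
    h.reviewed_by = investigator
    h.accepted = True

# Invented output from an AI investigation assistant.
suggestions = [
    Hypothesis("ABC transmitter fault"),                     # no evidence
    Hypothesis("Control-valve stiction on FCV-12",
               evidence=["trend 2024-03-02", "work order WO-8841"]),
]

for h in suggestions:
    if not h.evidence:
        print(f"Unsupported claim, needs scrutiny: {h.text}")
    else:
        print(f"Candidate for human review ({len(h.evidence)} items): {h.text}")
```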