Artificial Intelligence, Process Safety and Litigation
In recent posts I have suggested that many of the concerns to do with AI (specifically ChatGPT) will wind up in the law courts. For that reason I was interested to read the following headline at CNN Business,
Scarlett Johansson lawyers up over ChatGPT voice that ‘shocked and angered’ her.
The article reports that,
Actor Scarlett Johansson said in a statement shared with CNN on Monday that she was “shocked, angered and in disbelief” that OpenAI CEO Sam Altman would use a synthetic voice released with an update to ChatGPT “so eerily similar” to hers.
Her specific concern is to do with the new OpenAI product GPT-4o, about which another CNN article reports,
Based on the company’s Monday demonstration, GPT-4o will effectively turn ChatGPT into a digital personal assistant that can engage in real-time, spoken conversations. It will also be able to interact using text and “vision,” meaning it can view screenshots, photos, documents or charts uploaded by users and have a conversation about them.
So here is a potential scenario for the process safety community.
A computer technician who knows nothing about refineries feeds P&IDs, equipment data sheets and photographs into an AI program.
He has a ‘conversation’ with the AI and asks the program to write the start-up procedures for the refinery.
The AI produces a polished, well-written, beautifully illustrated start-up manual.
The refinery operators follow the instructions in the manual.
There is an error in the instructions: following them during the start-up, an operator opens Valve A instead of Valve B.
As a result, a valuable catalyst in one of the process vessels is poisoned.
This error costs the refinery millions of dollars in lost production and catalyst replacement.
Whom do the refinery’s attorneys sue?