Our recent posts about ChatGPT and its use in the process safety discipline have generated some useful discussion about information integrity.
There was an interesting study showing that one LLM was quite willing to believe feedback telling it that its answer was incorrect when the answer was actually correct. That gives reason to doubt that user feedback is an effective AI training method.
In preparing a paper for the upcoming CCPS Global Congress, I spent considerable time "exercising" three common large language models (LLMs): ChatGPT (OpenAI), Bard (Google), and Copilot (Microsoft). It is hard not to anthropomorphize these AI programs, e.g., by viewing their output as "opinions."
I asked each LLM: "Should I regard your responses to be your opinions?" Excerpts from their responses included:
ChatGPT: "No, my responses are not opinions; they are generated based on patterns and information in the data I was trained on. "
Bard: "No, you shouldn't consider my responses to be my opinions. "
Copilot: "Certainly! " But, when asked in follow-up "Do you have opinions?" the response was "As an AI language model, I don’t have personal opinions or feelings. " BTW, this demonstrates a common concern with LLMs -- inconsistency in answers.
LLMs are text generators. As one LLM described it: "[M]y responses are generated based on patterns learned during training. The training process involves exposure to a diverse range of internet text, and the model learns to predict the next word in a sentence." Sometimes those "patterns learned during training" result in the generation of incorrect or misleading text.
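To make that "predict the next word" idea concrete, here is a minimal Python sketch. It is a toy stand-in, not any vendor's actual implementation: it counts which word tends to follow which in a tiny made-up training text, then generates text by sampling likely successors. The training text and function names are invented for illustration.

```python
from collections import Counter, defaultdict
import random

# A made-up "training corpus" for demonstration only.
training_text = (
    "process safety depends on good design "
    "process safety depends on sound procedures "
    "process safety culture depends on leadership"
)

# "Training": tally how often each word follows each other word.
successors = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    successors[current_word][next_word] += 1

def generate(start_word, length=6):
    """Generate text by repeatedly predicting a likely next word."""
    output = [start_word]
    for _ in range(length):
        candidates = successors.get(output[-1])
        if not candidates:
            break
        # Sample in proportion to observed frequency -- a crude stand-in for
        # the probability distribution a real LLM computes over its vocabulary.
        choices, counts = zip(*candidates.items())
        output.append(random.choices(choices, weights=counts)[0])
    return " ".join(output)

print(generate("process"))
```

A real LLM does this over billions of parameters and an enormous vocabulary, but the essential point is the same: it produces statistically plausible continuations, which is not the same thing as producing verified facts.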
Regarding the nature of opinions, isn't it to do with the human ability to be both illogical and correct at the same time?
In a recent meeting I noted that two people with similar education and background can be asked a question such as, 'Is climate change caused by human activity?' One will say 'yes', the other 'no'. A computer program would simply follow its internal logic and database.
I recently asked ChatGPT a question to do with process safety. It came up with two very different answers. It then asked me which I thought was correct because it wanted the feedback to update its information base.
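One plausible reason the same question can yield two very different answers is that these models sample from a probability distribution over possible continuations rather than always taking the single most likely one. The sketch below is purely illustrative: the candidate answers and their probabilities are invented, and no claim is made about how ChatGPT actually weights responses.

```python
import random

# Hypothetical probabilities a model might assign to two candidate answers
# to the same process-safety question (values are made up).
candidate_answers = {
    "Install a relief valve sized for the fire case.": 0.55,
    "Re-run the HAZOP for the revised operating envelope.": 0.45,
}

def sample_answer():
    answers = list(candidate_answers)
    weights = list(candidate_answers.values())
    # Sampling, rather than always picking the highest-probability answer,
    # means repeated runs of the same prompt can diverge.
    return random.choices(answers, weights=weights)[0]

print(sample_answer())
print(sample_answer())
```

Run the script a few times and the two printed answers will sometimes differ, which is the same kind of run-to-run inconsistency noted above.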