Discussion about this post

There was an interesting study showing that one LLM was quite willing to believe feedback telling it that its answer was incorrect, even when the answer was actually correct. That is a reason to doubt that user feedback is an effective way to train AI.

Walt Frank:

In preparing a paper that I am writing for the upcoming CCPS Global Congress, I spent considerable time "exercising" three common large language models (LLMs): ChatGPT (OpenAI), Bard (Google), and Copilot (Microsoft). It is hard not to anthropomorphize these AI programs; e.g., by viewing their output as "opinions."

I asked each LLM: "Should I regard your responses to be your opinions?" Excerpts from their responses included:

ChatGPT: "No, my responses are not opinions; they are generated based on patterns and information in the data I was trained on."

Bard: "No, you shouldn't consider my responses to be my opinions."

Copilot: "Certainly!" But when asked in a follow-up, "Do you have opinions?", the response was "As an AI language model, I don’t have personal opinions or feelings." BTW, this demonstrates a common concern with LLMs -- inconsistency in answers.

LLMs are text generators. As one LLM described it: "[M]y responses are generated based on patterns learned during training. The training process involves exposure to a diverse range of internet text, and the model learns to predict the next word in a sentence." Sometimes those "patterns learned during training" result in the generation of incorrect or misleading text.
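
To make the "predict the next word" point concrete, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and the small "gpt2" checkpoint; both are illustrative choices, not anything the commenter identified about ChatGPT, Bard, or Copilot. The sketch only prints the probability distribution the model assigns to the next token given a prompt; text generation is that step repeated, one token at a time.

```python
# Illustrative sketch: next-token prediction with a small open model.
# Assumes: pip install torch transformers (gpt2 is a stand-in model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The model learns to predict the next"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Logits have shape (batch, sequence_length, vocab_size).
    logits = model(**inputs).logits

# Probability distribution over the vocabulary for the token that would come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for token_id, prob in zip(top.indices, top.values):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.3f}")
```

Nothing in this loop checks whether the most probable continuation is true; the model simply ranks likely continuations, which is one way to see why the output can be fluent yet incorrect.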

