A few years ago I worked with a colleague on the development of Expert Systems for the process industries.
One example of his work was to do with shell and tube heat exchangers. The basic idea was that a designer would submit his or her design to the Expert System. He or she would provide information such as composition of the fluids, pressures and temperatures, materials of construction, and physical location. The Expert System would then analyze the proposed design and respond by:
Validating the design against pertinent regulations, codes and standards.
Checking the design against examples in similar situations.
Even saying, ‘This design is technically correct, but it looks odd’.
The system was developed by heat exchanger experts who knew and understood the technical standards of their discipline. There was little room for judgment (except maybe for the last comment about ‘looking odd’). Therefore, the answers the Expert System provided were objectively correct and reliable.
We are now in a world of Artificial Intelligence (AI) and Large Language Models (LLMs). Many process safety professionals are uneasy about these systems because they may provide misleading or incorrect results. Even the best models can hallucinate (i.e., just ‘make stuff up’). (I explore the topic of professionalism and LLMs in the post AI: Only For Old(er) Process Safety Experts.)
So ― needless to say ― I asked an LLM to comment on the distinction between AI and Expert Systems. Here is what it said.
Expert System
A computer system that emulates the decision-making ability of a human expert in a specific domain by using a knowledge base and predefined rules.
Relies on a fixed set of "if-then" rules and a knowledge base curated by human experts.
Focused on narrow, specialized domains (e.g., medical diagnosis, financial planning).
Does not learn or adapt beyond its programmed rules and knowledge.
Provides consistent, explainable decisions based on its rule-based logic.
But it cannot handle situations outside its knowledge base, lacks adaptability, and requires manual updates to incorporate new knowledge.
Artificial Intelligence
A broad field of computer science focused on creating systems that can perform tasks requiring human-like intelligence, such as learning, reasoning, problem-solving, and perception.
Encompasses various techniques, including machine learning, deep learning, natural language processing, and more.
Can learn from data, adapt to new information, and improve over time (e.g., through neural networks or reinforcement learning).
Applicable to both narrow (e.g., image recognition) and general tasks (e.g., conversational AI like me).
Handles complex, dynamic environments and can generalize to new scenarios.
Examples are chatbots (like me), recommendation systems (such as Netflix, Amazon), and autonomous vehicles.
But it may require large datasets, significant computational resources, and can be less transparent.
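To make the ‘fixed set of if-then rules’ concrete, here is a minimal sketch of what one small corner of a rule-based checker for a shell and tube exchanger might look like. The field names, limits, and materials below are illustrative assumptions only, not real design rules; a real system would encode the applicable codes and standards as curated by heat exchanger experts.

```python
# Minimal sketch of an expert-system style rule check for a shell and tube
# heat exchanger. All limits and field names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ExchangerDesign:
    shell_side_pressure_bar: float
    tube_side_pressure_bar: float
    design_temperature_c: float
    tube_material: str


def check_design(design: ExchangerDesign) -> list[str]:
    """Apply fixed if-then rules and return any findings."""
    findings = []

    # Rule 1: hypothetical pressure limit for this class of exchanger.
    if design.shell_side_pressure_bar > 100:
        findings.append("Shell-side pressure exceeds the assumed 100 bar limit.")

    # Rule 2: hypothetical material/temperature compatibility rule.
    if design.tube_material == "carbon steel" and design.design_temperature_c > 425:
        findings.append("Carbon steel tubes above 425 °C: review material selection.")

    # Rule 3: a 'looks odd' heuristic, e.g. an unusual pressure differential.
    if abs(design.shell_side_pressure_bar - design.tube_side_pressure_bar) > 50:
        findings.append("Large shell/tube pressure differential: technically "
                        "permissible, but it looks odd.")

    return findings


if __name__ == "__main__":
    design = ExchangerDesign(120, 10, 450, "carbon steel")
    for finding in check_design(design):
        print(finding)
```

Every finding such a system produces traces back to a specific rule, which is why its answers are consistent and explainable, and also why it has nothing to say about situations its rule writers never considered.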
LLMs are versatile but shallow.
The above thoughts came to mind when I listened to a video (unfortunately, I cannot track down the link) in which the speaker compared today’s LLMs to a Swiss Army Knife: a jack of all trades, but master of none. The speaker further argued that LLMs will never achieve the specialist expertise needed for confidence in topics such as heat exchanger design.
Hybrid Proposal
The above discussion suggests that, if an Expert System (built by human experts) can be overlaid on an LLM, then it may be possible to get the best of both worlds: the imagination, learning capabilities, and broad-ranging knowledge of the LLM, along with the human expertise encoded in the Expert System.
A hybrid such as this addresses real weaknesses of expert systems: human knowledge is never complete, standards are always evolving, and system-level effects that the rule writers did not foresee can arise. An LLM can help supply the missing information.
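One way such a hybrid might be wired together is sketched below, assuming the rule-based checker from the earlier example. The rule layer runs first and its findings are reported as authoritative; the LLM is asked only for supplementary commentary, which is clearly labelled as unverified. The ask_llm function is a placeholder for whichever model API is actually used.

```python
# Sketch of a hybrid arrangement: expert-system rules are authoritative,
# the LLM contribution is advisory. Assumes ExchangerDesign and check_design
# from the earlier sketch; ask_llm is a placeholder, not a real API call.


def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM; returns free-form commentary."""
    raise NotImplementedError("Wire this to the model API of your choice.")


def hybrid_review(design) -> str:
    # design: an ExchangerDesign instance from the earlier sketch.
    rule_findings = check_design(design)   # authoritative, rule-based layer

    report = ["=== Expert-system findings (authoritative) ==="]
    report.extend(rule_findings or ["No rule violations found."])

    # The LLM sees the rule findings so it cannot silently contradict them,
    # and its contribution is fenced off as advisory only.
    prompt = (
        "Comment on this heat exchanger design. Do not dispute the "
        f"following rule findings: {rule_findings}. Design: {design}"
    )
    report.append("=== LLM commentary (advisory, unverified) ===")
    report.append(ask_llm(prompt))

    return "\n".join(report)
```

The design choice here is simply one of precedence: the rules always run, their output always appears first, and the LLM is never allowed to be the sole voice on a question the rules already answer.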
The idea of a hybrid does, however, have potential weaknesses.
How do we ensure that the LLM respects the expert system’s rules rather than contradicting them?
How do we audit and verify outputs when the two systems disagree?
How do we avoid the LLM ‘sounding confident’ in ways that mislead users about the authority of the expert rules?




