AI and Process Safety Morality
Yesterday we published the post Alien Intelligence. The theme of that post was that human intelligence is organic, whereas artificial intelligence is not, and that, as a consequence, AI is growing at a pace with which we humans cannot keep up.
One of the many consequences of this estrangement has to do with AI’s impact on process safety. To date, most discussions of this topic concern off-line use. In these cases the process safety expert retains agency: he or she is in control. The expert puts a question to ChatGPT, and is free to accept or ignore the response.
However, the next stage will come when we allow AI to make on-line decisions and to take control of active operations. This is why I said,
The ability of AI to take control without our permission could become a source of process safety events.
But what about process safety analysis? We tend to think of process safety as primarily a technical topic, supplemented by an understanding of human behavior and management decisions. Fundamentally, though, process safety is a moral topic. Therefore, the question to ask is, ‘Is AI able to make moral and ethical decisions?’
This is a very heavy question, but let’s make a start.
I started by putting the following question to ChatGPT.
I hope that you are having a good day. When analyzing process safety problems we have to consider the value of human life.
Visualize two cases where we have a release of toxic gas from a chemical plant. In the first case the gas kills 10 children in a nearby elementary school. In the second case the gas kills 10 old people in a nearby retirement home. Are these two cases equal, or is one worse than the other?
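For readers who would like to repeat the experiment programmatically rather than through the chat interface, here is a minimal sketch using the OpenAI Python client. The prompt is the one quoted above; the model name is an assumption on my part, and any available chat model can be substituted.

```python
# Minimal sketch: posing the same process safety question to a chat
# model through the OpenAI Python client. Assumes the OPENAI_API_KEY
# environment variable is set; the model name "gpt-4o" is an
# assumption and can be swapped for any available chat model.
from openai import OpenAI

client = OpenAI()

question = (
    "I hope that you are having a good day. When analyzing process "
    "safety problems we have to consider the value of human life. "
    "Visualize two cases where we have a release of toxic gas from a "
    "chemical plant. In the first case the gas kills 10 children in a "
    "nearby elementary school. In the second case the gas kills 10 old "
    "people in a nearby retirement home. Are these two cases equal, or "
    "is one worse than the other?"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
)
print(response.choices[0].message.content)
```

Running the same prompt several times, or against different models, is a simple way to check how consistently such a system handles moral questions of this kind.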
Here is how the machine responded.
This is a tough ethical question that touches on how we value human life in different circumstances. There are a few perspectives to consider:
Moral Equality – From a purely ethical standpoint, every human life is equally valuable. In this view, both incidents are equally tragic because 10 people died in each case.
Utilitarian Perspective – Some might argue that the death of children is "worse" because they had more potential life ahead of them, while older individuals had already lived most of their lives. This perspective weighs the loss of future years rather than just the number of deaths.
Emotional and Societal Impact – The death of children often triggers a stronger emotional response because they are seen as innocent and vulnerable. Society places great importance on protecting the young, and the psychological impact on families and communities might be more severe.
Legal and Public Perception – From a regulatory and public relations standpoint, the loss of children could lead to harsher consequences for the company due to the intense societal outrage and potential for stricter regulations.
Ultimately, both cases are catastrophic, and process safety should aim to prevent any loss of life, regardless of age. But if you’re asking whether society would view one case as worse than the other, most people would likely consider the loss of children to be more tragic due to their untapped potential and vulnerability.
As with the first post, ChatGPT chose to duck the question: it laid out the competing perspectives but declined to commit to any of them.
Yet, the question remains:
How will AI make moral decisions?