AI and Process Safety Ethics
What's a Human Life Worth?
Developments in Artificial Intelligence (AI) had a huge impact on society in 2025. They will have an even greater impact in 2026. So what does this mean for the process safety discipline?
There are many possible responses to that question, some of which we will explore in future posts. The first post in the series, Process Safety, Gorillas and AI, dealt with what is often referred to as the ‘Alignment Problem’.
In this post we consider an ethical issue that we tend to dodge, particularly during Process Hazards Analyses (PHAs), but that AI may force us to confront directly.
What’s a human life worth?
Most of us believe that a human life is not reducible to dollars; it is inherently valuable.
The Consequence Matrix
When ranking identified hazards, companies often use a consequence matrix. The Table shown below (from the book Process Safety Management) is representative.
It shows four consequence categories:
Worker Safety
Public Safety / Worker Health
Environmental Impact
Economic (annual)
At first glance this looks like a neutral, technical tool.
It is not.
A consequence matrix is an ethical instrument. It asserts that fundamentally different harms — fatalities, injuries, environmental damage, and financial loss — can be placed on a single scale, compared, and traded with one another. The matrix encodes values.
Consider what the ‘Very Severe’ row typically implies. In the economic column, the highest consequence level is shown as ‘≥ $10 million’. In the worker safety column, that same level corresponds to ‘fatality or multiple serious injuries’.
Built into that alignment is a judgment:
A human life is worth $10 million
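To make that encoding concrete, here is a minimal sketch of a consequence matrix expressed as a data structure. Apart from the ‘Very Severe’ row quoted above, every label and dollar threshold is an illustrative assumption, not a value taken from the book’s table.

```python
# A minimal sketch of a consequence matrix as a data structure.
# Only the 'Very Severe' row reflects the text above; the other
# severity labels and dollar thresholds are illustrative assumptions.

CONSEQUENCE_MATRIX = {
    "Very Severe": {
        "worker_safety": "Fatality or multiple serious injuries",
        "economic_usd": 10_000_000,   # '>= $10 million'
    },
    "Severe": {
        "worker_safety": "Serious injury / lost-time incident",
        "economic_usd": 1_000_000,    # assumed threshold
    },
    "Moderate": {
        "worker_safety": "Recordable injury",
        "economic_usd": 100_000,      # assumed threshold
    },
}

# The ethical judgment is baked in: any row that places a fatality in the
# same cell as a dollar figure implicitly equates the two.
```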
Where does that number, that value judgment, come from?
In practice it usually comes from a mixture of sources:
Regulatory Precedent
Government agencies sometimes use a ‘value of statistical life’ in cost–benefit analysis for safety and environmental rules.
Corporate Precedent
Companies inherit older matrices, copy peer-company standards, or adjust categories to match insurance limits and perceived ‘industry norms’. In other words, they dodge the question.
Legal and Financial Experience
Past settlements, litigation outcomes, and the general financial exposure of the business shape what feels ‘realistic’. Societies assign monetary values to life for compensation, regulation, and public policy decisions.
Pragmatism
The number must be high enough to reflect moral seriousness, but not so high that it implies ‘shut down the facility’ as the rational response to any non-zero fatality risk.
There are other issues to consider. For example, is the life of a young mother with three small children worth more than that of a single person who is approaching retirement and who has been diagnosed with a terminal illness?
Hazard Analyses
When leading or participating in PHAs, teams rarely discuss these embedded assumptions explicitly. They use phrases such as “that’s really bad”, “that’s catastrophic”, or “that’s unacceptable”, and then move on.
There are good reasons for such avoidance. PHA teams are time-bounded and multidisciplinary. If someone asks, “What is a human life worth?” the discussion can quickly shift from causes, consequences and likelihoods (technical and management topics) into philosophy, theology, law, and politics. Most teams just let the matrix do the uncomfortable work silently.
That approach has functioned tolerably well as long as humans remain the final arbiters. A facilitator can override the matrix when it feels wrong. A manager can escalate an issue on moral grounds. A team can apply judgment without making every assumption explicit.
AI Changes the Stakes
AI does not introduce this ethical problem; the problem is already there. What AI does is expose it, and remove our ability to look away.
As AI systems become more prevalent, particularly if they are used for decision support, alarm management, advanced process control, or autonomous optimization, they cannot rely on vague language or moral intuition. They require explicit objective functions, weights, thresholds, and constraints. If we ask an AI system to balance safety, production, availability, and cost, it must encode the trade-offs that we previously kept implicit.
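What might those explicit trade-offs look like? The sketch below shows a toy objective function. Every weight, name, and number in it is hypothetical; the point is simply that an optimizer cannot run until someone supplies such values.

```python
# A minimal sketch of an explicit objective function for a decision-support
# or optimization system. Every weight here is a hypothetical value; the
# system cannot run until someone chooses these numbers.

WEIGHT_PRODUCTION_USD_PER_TONNE = 120.0      # assumed
WEIGHT_DOWNTIME_USD_PER_HOUR = 25_000.0      # assumed
COST_OF_FATALITY_USD = 10_000_000.0          # the matrix's implicit value

def objective(throughput_tonnes: float,
              downtime_hours: float,
              expected_fatalities: float) -> float:
    """Net benefit of an operating decision, in dollars.

    Note what this function does: it converts an expected fatality count
    into dollars and trades it against production. A PHA team can leave
    that trade-off implicit; an algorithm cannot.
    """
    return (throughput_tonnes * WEIGHT_PRODUCTION_USD_PER_TONNE
            - downtime_hours * WEIGHT_DOWNTIME_USD_PER_HOUR
            - expected_fatalities * COST_OF_FATALITY_USD)
```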
Companies must decide how much they are willing to spend to reduce risk, even when the consequence includes fatality. Slogans such as ‘Safety is Priceless’ provide no operational guidance.
Assigning Values
The ethical challenge is not that organizations assign values. They do. The challenge is that those values are often unexamined, inherited, and unaccountable. AI merely forces transparency and accountability.
If AI is going to play a meaningful role in operational decision-making, the organization will be forced to answer questions it has historically avoided:
Who sets the values in our consequence matrix, and why?
On what moral, regulatory, or societal basis are those values justified?
Do we value worker risk and public risk differently, and if so, why?
Do we distinguish between voluntary and involuntary risk exposure?
What constraints are ‘hard limits’ (never trade off) versus ‘soft limits’ (trade off under certain conditions)? (A sketch below illustrates the distinction.)
Who is accountable when an algorithm makes a decision that is technically consistent with our encoded values but morally unacceptable?
These are not hypothetical questions. They are process safety governance questions.
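To illustrate the hard-limit versus soft-limit question from the list above, here is a minimal sketch. The parameter names, limits, and penalty weights are all hypothetical.

```python
# A sketch of the hard-limit vs. soft-limit distinction. The threshold
# names and numbers below are hypothetical.
from typing import Optional

HARD_LIMIT_PRESSURE_BARG = 50.0        # never trade off: reject outright
SOFT_LIMIT_TEMPERATURE_C = 180.0       # trade off: penalize, don't forbid
SOFT_PENALTY_USD_PER_DEGREE = 5_000.0  # assumed penalty weight

def evaluate(pressure_barg: float, temperature_c: float,
             benefit_usd: float) -> Optional[float]:
    """Return the penalized benefit, or None if a hard limit is violated."""
    if pressure_barg > HARD_LIMIT_PRESSURE_BARG:
        return None  # hard constraint: no benefit justifies crossing it
    overshoot = max(0.0, temperature_c - SOFT_LIMIT_TEMPERATURE_C)
    return benefit_usd - overshoot * SOFT_PENALTY_USD_PER_DEGREE
```

Deciding which limits go in which category is exactly the governance question the list above poses; the code only records the answer.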
AI will not absolve human beings of responsibility. It will do the opposite. It will force us to state, explicitly, what we have been assuming all along.