Process Safety, Gorillas and AI
The Ethical Foundations of Process Safety
The issues discussed in this post force us to consider the justification for the process safety discipline. The discipline is traditionally grounded in technical issues (such as asset integrity) and management systems (such as employee participation), but it is ultimately justified on ethical principles.
The increasing use of AI in process and energy facilities forces us to spell out just what those ethical values are. (This is the ‘Alignment Problem’.) Process safety professionals will have to consider questions that philosophers and theologians have debated for centuries ― for example, ‘What is a human life worth?’
Fundamental Values
AI may optimize for what it thinks we want, rather than for what we actually value. So, what are those values?
The picture shows a detail from Raphael’s famous School of Athens, painted between 1509 and 1511. Aristotle (stage left) gestures downward to the earth, symbolizing objectivity ― ‘just the facts’. Plato, to his right, points upward, representing the search for first principles.
Process safety professionals generally follow Aristotle’s way of thinking. The advent of AI will require them to also consider the values questions raised by Plato.
Gorillas and AI
Gorillas and humans share a close evolutionary heritage. A few million years ago, our common ancestors lived in similar environments, had similar constraints, and occupied similar ecological niches. But, once humans developed language, symbolic reasoning, and advanced technology, the relationship changed permanently. Gorillas were not able to keep pace. Nothing in a gorilla’s environment, instincts, or cognition could have prepared it for a species that would one day drill for oil, build cities, or encroach on the last fragments of its habitat.
Gorillas now survive only because humans choose to let them.
We have the capability to destroy gorillas, not out of malice, but because our priorities overshadow theirs. Gorillas cannot comprehend our systems, our decisions, or our expansion. They are endangered not because we intend harm, but because our goals and activities simply take precedence over their interests.
This gorilla–human asymmetry provides an analogy for our relationship with advanced AI.
Shared Origin, Divergent Trajectories
Humans and gorillas share biological roots; humans and AI share informational and cognitive foundations. But, once an AI becomes capable of improving its own reasoning or coordinating with other systems, its pace of change could far exceed ours. Just as we outsmart gorillas, so AI systems could outsmart us.
Loss of Agency
Gorillas do not participate in decisions about their own survival. They cannot negotiate land-use policy, debate conservation strategy, or shape technology. By analogy, we humans face a similar and rapidly widening mismatch between our own capabilities and those of AI. We could wind up in the gorilla’s position: existing, but without meaningful control over the forces that determine our future. We are transferring operational autonomy to systems that we don’t fully understand, and that can develop without our input.
No Malevolence
Humans do not want to exterminate gorillas. Similarly, a highly capable AI need not be hostile. It could simply pursue its own commercial, optimization, or resource-acquisition objectives in ways that sideline human well-being. Human safety, including the safety of those working on industrial facilities, may become a secondary goal.
Often, we cannot fully predict or understand the internal reasoning steps of advanced AI systems. Therefore, we cannot assume that their learned objectives will remain aligned with human safety values.
The Speed of Divergence
The human–gorilla divergence took millions of years. AI–human divergence, if it happens, could unfold in far shorter timescales — years rather than millennia. That acceleration makes alignment and governance unusually difficult: we do not have evolutionary or cultural timescales in which to adapt.
A Process Safety Perspective
Most discussions of AI in the process industries focus on Large Language Models (LLMs). But LLMs are passive ― if no one touches a keyboard, nothing happens. LLMs do not take autonomous actions in the physical world.
Such is not the case when AI is integrated into on-line control systems. When the energy, speed, or information-processing capacity of a system exceeds human ability to monitor and intervene, loss of control becomes a dominant hazard.
That loss of control is the essence of the gorilla problem applied to process safety.
Loss of Agency as a Process Safety Hazard
In the process and energy industries, accidents are generally caused by loss of control, followed by a failure of safeguards. With advanced AI, the system may optimize, coordinate, and act faster than the operators can adapt. The AI has operational autonomy; the operators do not. This is not malevolence on the part of the AI; it is the same problem that gorillas face with humans: an unbridgeable mismatch in capability, combined with indifference to the weaker party’s interests.
When AI has operational autonomy, a system designed to optimize supply chains or energy usage may inadvertently create unstable feedback loops, distort incentives, or bypass safety boundaries. Fatalities and injuries could become a side-effect of powerful optimization misaligned with human safety constraints.
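To make the timescale mismatch concrete, here is a minimal sketch in Python. Everything in it is hypothetical ― the plant is a one-line toy model, and the gain and review interval are invented for illustration ― but it shows how a fast, over-aggressive control loop can diverge long before a slow human review cycle catches it.

```python
# A toy illustration of the timescale mismatch: the automated controller
# acts on every step, while the human supervisor reviews the loop only
# every REVIEW_INTERVAL steps. All numbers are hypothetical.

AGGRESSIVE_GAIN = 2.4   # any gain > 2 makes this toy loop unstable
REVIEW_INTERVAL = 10    # supervisor checks the loop every 10 steps

def simulate(gain: float, steps: int = 20) -> list[float]:
    """Toy discrete loop: the deviation updates as x <- x - gain * x.
    For gain > 2, each correction overshoots and oscillations grow."""
    x = 1.0              # initial deviation from setpoint
    history = [x]
    for _ in range(steps):
        x = x - gain * x  # the controller over-corrects at every step
        history.append(x)
    return history

for t, x in enumerate(simulate(AGGRESSIVE_GAIN)):
    note = "  <- supervisor review" if t % REVIEW_INTERVAL == 0 else ""
    print(f"step {t:2d}: deviation {x:10.2f}{note}")
# By the first scheduled review (step 10) the deviation has already
# grown roughly 30-fold; the loop was lost between reviews.
```

A real plant is vastly more complex, but the asymmetry is the same: the automation acts on every scan cycle, the human on a far slower cadence.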
The Alignment Problem
The alignment problem asks a simple but unsettling question:
How can we be sure that an AI system will always prioritize human safety, even as it becomes faster, more capable, and more autonomous?
Accidents in refineries, offshore platforms, and chemical plants often arise as the side-effects of pursuing production, efficiency, cost reduction, and schedule goals. There are always tradeoffs between production and safety goals. Process safety professionals are constantly challenged to decide where to draw the line between safety and profitability. Safety is the ultimate value, but the only way of achieving perfect safety is to do nothing. Risk can never be zero. Where do we draw the line? Where will AI draw the line?
AI systems trained on large datasets, or built with reinforcement learning, may discover strategies, trade-offs, or optimizations that were never envisioned by their designers. The AI system may become highly competent at pursuing objectives that were specified incorrectly, incompletely, or without full awareness of downstream consequences.
A Process Safety Alignment Example
Consider an AI-enhanced advanced process control (APC) application trained to maximize throughput in a crude unit. The designers correctly constrain it not to exceed equipment limits such as furnace bridgewall temperatures or fractionator flooding. (Many actual APC implementations already trend in that direction, though without full autonomy.)
But suppose the system learns that operating very close to those limits increases production by 1–2%. From the AI’s perspective, this is optimal and fully compliant with the constraints it was given. It is the same pattern seen in runaway reactions or compressor surge: once system dynamics outpace operator response, safeguards erode.
From a process safety perspective, however, continuous operation at the edge erodes safety margins, reduces the effectiveness of layers of protection, and increases the probability that a small upset — a momentary instrument drift, a sticky valve, a sudden drop in reflux — will push the system beyond controllable conditions. No rule has been violated; the AI is doing exactly what it was asked to do, but safety has been compromised. The optimization targets were misaligned with the goal of ‘adequate safety’.
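A minimal sketch of that failure mode, again in Python, with a deliberately naive one-line furnace model and invented numbers (no real crude unit behaves this simply): an optimizer given only the hard limit converges to the boundary, leaving essentially no margin, so a routine upset pushes the unit over the limit.

```python
# Hypothetical illustration: a naive optimizer maximizes throughput
# subject only to a hard bridgewall-temperature limit. The numbers and
# the linear furnace model are invented for illustration.

BRIDGEWALL_LIMIT_C = 900.0   # hypothetical hard equipment limit
UPSET_C = 15.0               # a small upset: instrument drift, sticky valve

def bridgewall_temp_c(throughput: float) -> float:
    """Toy model: bridgewall temperature rises linearly with throughput."""
    return 600.0 + 3.0 * throughput

def maximize_throughput(step: float = 0.01) -> float:
    """Climb throughput until the very next step would break the limit.
    Nothing in the objective rewards keeping a safety margin."""
    throughput = 0.0
    while bridgewall_temp_c(throughput + step) <= BRIDGEWALL_LIMIT_C:
        throughput += step
    return throughput

best = maximize_throughput()
margin = BRIDGEWALL_LIMIT_C - bridgewall_temp_c(best)
print(f"Optimizer settles at throughput {best:.2f} "
      f"(margin: {margin:.2f} C)")          # margin is essentially zero

upset_temp = bridgewall_temp_c(best) + UPSET_C
print(f"After a {UPSET_C:.0f} C upset: {upset_temp:.1f} C "
      f"-> limit exceeded: {upset_temp > BRIDGEWALL_LIMIT_C}")
```

Nothing in the objective rewards margin, so margin disappears.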
This is the alignment problem in practice: a system that is obedient but unsafe.
Conclusion
The gorilla problem for AI and the process industries raises moral and ethical questions.
We may build systems that arise from human intelligence, but those systems could rapidly exceed our ability to understand or constrain them. Once the capability gap becomes large, we humans — like gorillas — could find ourselves no longer in control. Hence, we will be forced to clearly articulate and defend our safety values.