What Does Thermodynamics Say About Morality?
I don't agree with this original post anymore. It was an interesting thought experiment, but I think it was a mistake to try to explain higher-order phenomena like "morals" and "ethics" using lower-order thermodynamic principles. In fact, I think it might actually work the other way round.
I think it’s more like each layer building on top of lower layers, creating its own structures and systems within those parameters. Discovering what might be possible, then exploring that space to invent new systems and structures within it. All the way down to thermodynamics, and all the way up to morals and ethics.
Thermodynamics cannot describe or explain morality or ethics. It simply provides the foundation and the engine that allows those emergent phenomena to come into existence.
Just like how a computer doesn’t describe all of the possible programs that can run on it, but simply provides the set of instructions they can use. Even the instructions that programs can use to generate other programs.
Original blog post retained for posterity:
Given that thermodynamics prefers intelligence, I’m very curious what thermodynamics has to say about morality and ethics.
— Nathan Odle (@mov_axbx) November 4, 2025
I’m anthropomorphizing but I think this is going to be a topic of conversation and a distilled version may well become the next big religion.
Disclaimer: I'll be honest, I don't know who this blog post is for. A lot of ideas in here are very loose and unscientific. And you may find the parts that are attempting to be scientific to be very dense and impossible to understand. Reading this will likely be very annoying for everyone, whether you're a scientist, a philosopher, or the general public.
Please take everything with a massive grain of salt. It's a rough draft of some weird ideas and I won't pretend that I know what I'm talking about. I chose to start with the topics of killing and murder since it seemed like a good starting point. It should go without saying that there is a lot more to morality than this.
Why publish this at all? I don't know, hopefully there's something interesting here. If not, sorry for wasting your time!
Introduction: What is Murder?
Murder exists mostly as a concept within human society.
A lion killing a zebra isn’t committing a crime. A bacterium killing another bacterium isn’t being immoral. Even a gorilla killing a rival isn’t necessarily “wrong”, unless it starts doing it too much and gets ostracized by the rest of the troop. The gorilla is simply following evolved behavior within ecological and social constraints. Many seeds of moral structure exist outside of human society, but we are the first life form on Earth to codify them into written laws.
Murder is the unlawful premeditated killing of one human being by another. You could also say that this is an extremely specific subset of killing. If you imagine the sheer scale of killing that goes on in the world, murder makes up a tiny fraction of those deaths.
Killing and the Moral Scale
Not all killing is murder. Killing itself isn’t inherently good or bad. Its moral valence depends entirely on context and intent. Here is a very rough sketch.
| Type of killing | Intent | Context | Consequence | Moral weight |
|---|---|---|---|---|
| Predator hunting prey | survival | ecological | sustains balance | neutral/positive |
| Farming animals for food | sustenance | cultural | sustains life but causes suffering | neutral to mildly negative |
| Euthanasia | compassion | medical/ethical | reduces suffering | often positive |
| War | ideology or defense | collective | mass suffering | highly variable |
| Self-defense | protection | immediate | preserves life | permissible |
| Murder | malice | social violation | destroys trust | strongly negative |
(Everything in this table is illustrative and highly debatable.)
Maybe it's not the act of killing that defines morality, but the systemic effect. Killing that preserves or extends ordered complexity might be generally accepted. Killing that destabilizes it might be generally condemned.
The Necessity of Killing
From a thermodynamic or systems perspective, killing is not the opposite of life. Killing is part of life’s maintenance function. In biological systems, regulated killing preserves order:
- Cells continuously kill and clean up malfunctioning peers via apoptosis (programmed cell death).
- Failure to kill leads to cancer: rogue cells multiplying without control.
- Excessive killing leads to autoimmune disease: the system turning on itself.
Life depends on this fine equilibrium: enough killing to preserve order, but not so much that order collapses.
You could draw some parallels with societies:
- The death penalty or defensive warfare can act like immune responses: removing destructive agents that threaten collective stability.
- At the other extreme, genocides and ideological purges are moral autoimmunity: society attacking its own healthy tissue for no good reason.
In both biology and civilization, survival depends on the precision of the kill. Too little killing allows chaos to spread; too much turns order into destruction.
The "Triadic Equation of Killing"
Each act of killing ripples through at least three domains: the killer, the victim, and the collective they both belong to. Together, they might form a triadic moral system where each node affects the others’ stability.
If we think thermodynamically, each of these entities is an information system. The moral outcome of an act can be approximated by its effect on the total coherence of the triad.
As a thought experiment, consider a simple rule of thumb: the act of killing might be a net positive if it mostly benefits or stabilizes two out of three parties. Conversely, killing might be morally or entropically negative if it harms two of the three.
A few highly debatable examples (a toy code sketch of this heuristic follows below):
- White blood cells: fight infection by killing foreign organisms, keeping both the host body and the white blood cells alive → two of three positive.
- Predation: benefits killer and keeps the ecosystem balanced; harms prey → two of three positive.
- Self-defense: benefits the killer (survival) and society (deterrence of violence); harms the victim → two of three positive.
- Murder for gain: benefits the killer temporarily, harms the victim, and destabilizes society → one of three positive → immoral.
- Euthanasia: harms killer emotionally, but benefits victim (end of suffering) and society (reinforces compassion norms) → two of three positive.
In the immortal words of Meat Loaf: "Two out of three ain't bad."
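To make the heuristic concrete, here is a minimal Python sketch of the "two of three" rule. Everything in it, the ±1 scoring and the scenario encodings, is invented for illustration, not measured:

```python
# A toy encoding of the "two of three" heuristic above. The +1/-1 scores
# and the scenario encodings are illustrative assumptions, not measurements.

def two_of_three_verdict(killer: int, victim: int, collective: int) -> str:
    """Each argument is +1 (benefited/stabilized) or -1 (harmed)."""
    positives = sum(1 for effect in (killer, victim, collective) if effect > 0)
    return "net positive" if positives >= 2 else "net negative"

# The examples from the list above, encoded as (killer, victim, collective):
scenarios = {
    "white blood cells": (+1, -1, +1),
    "predation":         (+1, -1, +1),
    "self-defense":      (+1, -1, +1),
    "murder for gain":   (+1, -1, -1),
    "euthanasia":        (-1, +1, +1),
}

for name, effects in scenarios.items():
    print(f"{name}: {two_of_three_verdict(*effects)}")
```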
Less Meat Loaf, More Science
This "two of three" rule of thumb is pretty bad and it likely falls apart under any scrutiny. It's an initial attempt at seeing if there might be something measurable.
But if morality and ethics do emerge from thermodynamic stability, then they might, in theory, be quantifiable and measurable in some way.
One could even attempt to find the exact "threshold" for when an act becomes morally wrong:
- An act that preserves systemic order (ecosystem balance, self-defense, euthanasia) = lower moral entropy
- An act that destabilizes the system (murder, betrayal, mass violence) = higher moral entropy
We could try to formalize this triad using network perturbation theory rather than moral arithmetic. Each agent (killer, victim, society) can be modeled as a node in a dynamic network with interdependent feedback loops. Killing introduces a discontinuity or perturbation in the information and energy flow between nodes.
Let \( \Delta C_k, \Delta C_v, \Delta C_s \) represent the change in coherence or functionality of each subsystem after the act. The Moral Stability Index (MSI) of the act could be estimated as:
\( MSI = w_k \Delta C_k + w_v \Delta C_v + w_s \Delta C_s \)
where \( w_i \) are weighting factors reflecting relative systemic importance. Acts with positive MSI increase total systemic coherence; those with negative MSI degrade it.
This tries to avoid an arbitrary rule of thumb, and instead ties moral outcomes to changes in stability, trust, and coherence. Maybe it's possible to estimate some values for those changes.
Actually trying to measure them for real-world scenarios would be a whole other story. The sheer amount of complexity and information involved would make the task impossible.
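Still, purely as a toy, here is what plugging made-up numbers into the MSI formula might look like. The weights (society weighted twice as heavily as either individual) and the coherence deltas are arbitrary assumptions:

```python
# A toy calculation of the Moral Stability Index defined above. The weights
# and coherence deltas are made-up assumptions, not measurements.

def moral_stability_index(dC_k: float, dC_v: float, dC_s: float,
                          w_k: float = 1.0, w_v: float = 1.0,
                          w_s: float = 2.0) -> float:
    """MSI = w_k*dC_k + w_v*dC_v + w_s*dC_s; a positive value means
    the act increases total systemic coherence."""
    return w_k * dC_k + w_v * dC_v + w_s * dC_s

# Hypothetical coherence changes (killer, victim, society):
print(moral_stability_index(+0.2, -1.0, +0.5))  # self-defense-like act:    0.2
print(moral_stability_index(+0.3, -1.0, -0.8))  # murder-for-gain-like act: -2.3
```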
Thermodynamics and Intelligence
Intelligence can be reframed as a mechanism for managing thermodynamic gradients. Living systems do not violate the second law of thermodynamics; they create local pockets of order by accelerating entropy production in their surroundings. The emergence of intelligence, therefore, could be interpreted as an adaptive strategy for maximizing entropy production efficiency: the rate at which a system can dissipate energy while maintaining structural and informational coherence.
This perspective aligns with the Maximum Entropy Production Principle (MEPP), proposed in non-equilibrium thermodynamics, which suggests that complex systems evolve toward states that maximize entropy production subject to constraints. Intelligence, cognition, culture, and morality can be viewed as constraint-management systems; they optimize energy and information flow to sustain complexity over time.
So rather than saying morality or intelligence reduces entropy, we might say that they optimize entropy flow, preserving order locally while disorder increases globally.
A Formal Model of Moral Entropy
To make this slightly less metaphorical, we might define moral entropy as the degree of unpredictability or disorder within a social information network. When trust, cooperation, and shared norms are high, the system exhibits low moral entropy; agents can accurately predict each other's behavior, allowing coordinated action. When deceit, violence, and betrayal dominate, moral entropy increases; the system becomes noisy, unstable, and inefficient.
A simple heuristic model:
\( S_m = -\sum_{i} p_i \log p_i \)
where the \( p_i \) are the probabilities of cooperative versus destructive behaviors within a social system. A society with stable, predictable cooperation has lower \( S_m \); one with frequent transgressions and mistrust has higher \( S_m \). This doesn’t equate to physical entropy, but it preserves the same information-theoretic structure.
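As a minimal sketch, here is the heuristic computed for two hypothetical societies. The behavior categories and their probabilities are invented for illustration:

```python
import math

# A minimal sketch of the moral entropy heuristic above. The behavior
# categories and their probabilities are invented for illustration.

def moral_entropy(probabilities: list[float]) -> float:
    """Shannon entropy (in bits): S_m = -sum(p_i * log2(p_i))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Probabilities of (cooperation, minor transgression, violence):
high_trust = [0.95, 0.04, 0.01]  # predictable, cooperative society
low_trust  = [0.40, 0.35, 0.25]  # noisy, unstable society

print(f"high-trust S_m = {moral_entropy(high_trust):.2f} bits")  # ~0.32
print(f"low-trust  S_m = {moral_entropy(low_trust):.2f} bits")   # ~1.56
```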
Thus, ethics can be framed as a control mechanism minimizing unnecessary informational noise in a social system. Its thermodynamic analogue isn’t energy conservation but signal coherence within an open, dissipative system.
Entropic Governance
Morality could also be reframed as entropic governance: the self-regulatory logic by which intelligent collectives sustain complexity in the face of inevitable dissipation. It is not about lowering entropy but about maintaining structural and informational persistence while energy gradients are exploited.
This view bridges evolutionary biology, information theory, and systems science. Moral systems function as error-correcting codes within the informational fabric of society: mechanisms that detect and correct deviations that threaten coherence. Truth-telling, empathy, and fairness all act as stabilizers that reduce destructive feedback.
In this framing, morality is not an external prescription but an emergent algorithm for survival, derived from the same physical constraints that produce metabolism and reproduction, just at a higher level of abstraction.
Thermodynamic Ethics
From this perspective, morality might not be divine or arbitrary; it’s entropic pragmatism. A society survives longer when its members don’t destroy one another. Empathy and restraint are evolutionary upgrades to maintain high-functioning, low-entropy collectives.
So perhaps the moral gradient of the universe trends toward coherence: actions that preserve complex structures, that sustain information, that allow intelligence to keep evolving. In that sense, morality isn’t a human invention. It’s the thermodynamic logic of survival, arising directly out of the laws of the universe.
Implications for Human / AI Relations
So what does this mean for the relationship between humans and artificial superintelligence (ASI)? Will it kill us all?
Probably not. But if you look at what's been happening over the last 4 billion years, then I think there's a chance that AI might kill some of us. And it might not even be an unethical act.
AI doesn’t need to eat flesh, so it won’t treat us the way we treat farm animals. Its relationship to us may resemble our relationship to lesser animals, or even to the pets we elevate to near-family status.
Look back at the moral table above, in the section titled "Killing and the Moral Scale". If you try to imagine all of the possible scenarios that might unfold in the future, then it is likely that this table is incomplete. One day, "AI" might be present in one of these rows. There might be logical, even ethical, reasons for AI to kill humans. And reasons for AI to "kill" other AIs.
But hopefully this is the wrong way to think about it and AI will never decide to kill anything at all.
The Role of AI in Entropic Systems
Rather than speculating about AI as a literal killer, maybe we should consider how artificial systems might participate in entropic governance. Maybe the key question is not whether AI will kill humans, but whether its decision architectures will align with entropy-optimizing moral structures.
In a fully integrated thermodynamic view, AI ethics becomes a subset of control theory. We will need to design feedback systems that prevent runaway dissipation (e.g., unbounded optimization) and preserve long-term structural diversity. The question is not whether AI will act morally, but how its entropy management function will couple with ours.
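As a very loose sketch of what such a feedback system might look like, here is a toy proportional controller that nudges an optimizer's intensity back toward a fixed dissipation budget. The model (dissipation equal to intensity, the gain, the budget) is entirely hypothetical:

```python
# A toy negative-feedback loop: an optimizer's "intensity" is pushed back
# toward a fixed dissipation budget. All quantities are hypothetical.

def throttled_optimizer(n_steps: int, budget: float = 1.0, k_p: float = 0.5) -> None:
    """Proportional control: reduce intensity when dissipation exceeds budget."""
    intensity = 2.0  # deliberately starts above the budget
    for step in range(n_steps):
        dissipation = intensity           # toy model: dissipation == intensity
        error = budget - dissipation      # negative when over budget
        intensity = max(0.0, intensity + k_p * error)
        print(f"step {step}: dissipation={dissipation:.3f} -> intensity={intensity:.3f}")

throttled_optimizer(6)  # intensity converges toward the budget of 1.0
```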
Special thanks to Nathan Odle for the fascinating prompt.