What Does Thermodynamics Say About Morality?
Given that thermodynamics prefers intelligence, I’m very curious what thermodynamics has to say about morality and ethics.
— Nathan Odle (@mov_axbx) November 4, 2025
I’m anthropomorphizing, but I think this is going to be a topic of conversation, and a distilled version may well become the next big religion.
Disclaimer: I'll be honest, I don't know who this blog post is for. A lot of the ideas in here are very loose and unscientific. Some parts are very dense and impossible to understand if you don't know how to read mathematical symbols (and even if you do, they're probably wrong). Reading this will likely be very annoying for everyone: scientists, philosophers, and especially the general public.
Please take everything with a massive grain of salt. It's a prototype of some ideas and I won't pretend that I know what I'm talking about. I don't even know what ChatGPT is talking about a lot of the time.
So why publish this at all? I don't know, hopefully there's something interesting in here. If not, sorry for wasting your time!
Introduction: What is Murder?
Murder exists mostly as a concept within human society.
A lion killing a zebra isn’t committing a crime. A bacterium killing another bacterium isn’t being immoral. Even a gorilla killing a rival isn’t necessarily “wrong”, unless it kills so often that the rest of the troop ostracizes it. The gorilla is simply following evolved behavior within ecological and social constraints. Many seeds of moral structure exist outside human society, but we are the first life form on Earth to codify them into written laws.
Murder is the unlawful premeditated killing of one human being by another. You could also say that this is an extremely specific subset of killing. If you imagine the sheer scale of killing that goes on in the world, murder makes up a tiny fraction of those deaths.
Thermodynamics and Intelligence
If intelligence is favored by thermodynamics, then it might also have something to say about morality and ethics.
Intelligence can be reframed as a mechanism for managing rather than reducing thermodynamic gradients. Living systems do not violate the second law of thermodynamics; they create local pockets of order by accelerating entropy production in their surroundings. The emergence of intelligence, therefore, could be interpreted as an adaptive strategy for maximizing entropy production efficiency: the rate at which a system can dissipate energy while maintaining structural and informational coherence.
This perspective aligns with the Maximum Entropy Production Principle (MEPP), proposed in non-equilibrium thermodynamics, which suggests that complex systems evolve toward states that maximize entropy production subject to constraints. Intelligence, cognition, and culture can be viewed as constraint-management systems; they optimize energy and information flow to sustain complexity over time.
So rather than saying morality or intelligence reduces entropy, we might say that they optimize entropy flow: preserving order locally while allowing disorder to increase globally.
A Formal Model of Moral Entropy
To make this less metaphorical, we can define moral entropy as the degree of unpredictability or disorder within a social information network. When trust, cooperation, and shared norms are high, the system exhibits low moral entropy; agents can accurately predict each other's behavior, allowing coordinated action. When deceit, violence, and betrayal dominate, moral entropy increases; the system becomes noisy, unstable, and inefficient.
A simple heuristic model:
Example: \( S_m = -\sum_{i} p_i \log p_i \).
where \( p_i \) is the probability of behavior type \( i \) (cooperative, deceptive, violent, and so on) within a social system. A society with stable, predictable cooperation has lower \( S_m \); one with frequent transgressions and mistrust has higher \( S_m \). This doesn’t equate to physical entropy, but it preserves the same information-theoretic structure.
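As a toy illustration (a sketch, not a measurement procedure), here's how \( S_m \) might be computed from a distribution over behavior types. The categories and probabilities are invented for the example:

```python
import math

def moral_entropy(behavior_probs):
    """Shannon entropy (in bits) of a distribution over behavior types."""
    return -sum(p * math.log2(p) for p in behavior_probs.values() if p > 0)

# Hypothetical behavior distributions; the numbers are made up.
high_trust = {"cooperate": 0.90, "deceive": 0.07, "attack": 0.03}
low_trust = {"cooperate": 0.40, "deceive": 0.35, "attack": 0.25}

print(moral_entropy(high_trust))  # ~0.56 bits: predictable, low moral entropy
print(moral_entropy(low_trust))   # ~1.56 bits: noisy, high moral entropy
```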
Thus, ethics can be framed as a control mechanism minimizing unnecessary informational noise in a social system. Its thermodynamic analogue isn’t energy conservation but signal coherence within an open, dissipative system.
Killing and the Moral Scale
If you didn't understand any of the previous section, that's fine, neither did I really. Here is something that is much looser and more unscientific.
Not all killing is murder. Killing itself isn’t inherently good or bad. Its moral valence depends entirely on context and intent. Here are a few very rough examples:
| Type of killing | Intent | Context | Consequence | Moral weight |
|---|---|---|---|---|
| Predator hunting prey | survival | ecological | sustains balance | neutral/positive |
| Farming animals for food | sustenance | cultural | sustains life but causes suffering | neutral to mildly negative |
| Euthanasia | compassion | medical/ethical | reduces suffering | often positive |
| War | ideology or defense | collective | mass suffering | highly variable |
| Self-defense | protection | immediate | preserves life | permissible |
| Murder | malice | social violation | destroys trust | strongly negative |
It’s not the act of killing that defines morality, it’s the systemic effect. Killing that preserves or extends ordered complexity is generally accepted. Killing that destabilizes it is generally condemned.
Necessary Killing
From a thermodynamic or systems perspective, killing is not the opposite of life. Killing is part of life’s maintenance function. In biological systems, regulated killing preserves order:
- Cells continuously kill malfunctioning peers via apoptosis (programmed cell death).
- Failure to kill leads to cancer: rogue cells multiplying without control.
- Excessive killing leads to autoimmune disease: the system turning on itself.
Life depends on this fine equilibrium: enough killing to preserve order, but not so much that order collapses.
The same principle applies to societies:
- The death penalty or defensive warfare can act like immune responses: removing destructive agents that threaten collective stability.
- At the other extreme, genocides and ideological purges are moral autoimmunity: society attacking its own healthy tissue for no good reason.
In both biology and civilization, survival depends on the precision of the kill. Too little killing allows chaos to spread; too much turns order into destruction.
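To make that "fine equilibrium" concrete, here is a toy Python simulation. The dynamics are entirely invented (growth rates, mutation rate, collateral damage), so treat it as a cartoon rather than a biological model: too little killing and rogue cells take over; too much and healthy tissue is destroyed along with them.

```python
def simulate(kill_rate, steps=200):
    """Cartoon tissue model: healthy cells occasionally turn rogue;
    killing removes rogue cells but also damages some healthy ones."""
    healthy, rogue = 1000.0, 10.0
    for _ in range(steps):
        healthy *= 1.02                        # normal growth
        rogue *= 1.10                          # rogue cells grow faster
        rogue += 0.001 * healthy               # occasional mutation
        rogue -= kill_rate * rogue             # targeted killing (apoptosis)
        healthy -= 0.1 * kill_rate * healthy   # collateral damage
    return healthy, rogue

for rate in (0.0, 0.15, 0.9):
    healthy, rogue = simulate(rate)
    print(f"kill_rate={rate:.2f}  healthy={healthy:,.0f}  rogue={rogue:,.0f}")
# kill_rate=0.00: rogue cells dominate (cancer)
# kill_rate=0.15: healthy tissue grows, rogue cells stay contained
# kill_rate=0.90: both populations collapse (autoimmunity)
```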
The Concept of Entropic Governance
Morality can be reframed as entropic governance: the self-regulatory logic by which intelligent collectives sustain complexity in the face of inevitable dissipation. It is not about lowering entropy but about maintaining structural and informational persistence while energy gradients are exploited.
This view bridges evolutionary biology, information theory, and systems science. Moral systems function as error-correcting codes within the informational fabric of society: mechanisms that detect and correct deviations that threaten coherence. Truth-telling, empathy, and fairness all act as stabilizers that reduce destructive feedback.
In this framing, morality is not an external prescription but an emergent algorithm for survival, derived from the same physical constraints that produce metabolism and reproduction.
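The error-correcting-code analogy can be made literal with the simplest code there is, a repetition code with majority voting. The mapping to social norms (redundant witnesses outvoting a corrupted account) is my gloss, not a standard result:

```python
from collections import Counter

def encode(bit, copies=3):
    """Repetition code: store the same bit redundantly."""
    return [bit] * copies

def decode(received):
    """Majority vote: recover the original bit despite some corruption."""
    return Counter(received).most_common(1)[0][0]

sent = encode(1)
received = [1, 0, 1]  # one copy corrupted in transit (a "deviation")
print(decode(received))  # 1: the majority restores coherence
```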
The Triadic Equation of Killing
Every act of killing ripples through three domains: the killer, the victim, and the collective they both belong to. Together they form a triadic moral system in which each node affects the others’ stability.
If we think thermodynamically, each of these entities is an information system. The moral outcome of an act can be approximated by its effect on the total coherence of the triad.
As a thought experiment, consider a rule of thumb: the act of killing might be generally a net positive if it benefits or stabilizes two out of three parties. Conversely, killing might be morally or entropically negative if it harms two of the three.
Examples:
- Self-defense: benefits the killer (survival) and society (deterrence of violence); harms the victim → 2 of 3 positive.
- Predation: benefits killer and ecosystem balance; harms prey → 2 of 3 positive.
- Murder for gain: benefits killer temporarily, harms victim and destabilizes society → 1 of 3 positive → immoral.
- Euthanasia: harms killer emotionally, but benefits victim (end of suffering) and society (reinforces compassion norms) → 2 of 3 positive.
I'm not sure where a white blood cell fits in this framework. Does a white blood cell helping the body fight infection benefit that cell in any way? Well yes, if the body dies then the white blood cell dies too.
So maybe there's something to this idea, loose as it may be: "two out of three ain't bad."
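As a throwaway sketch of the heuristic (the effect signs below are my own rough reading of the examples above):

```python
def two_of_three(effects):
    """Rule of thumb: an act is a rough net positive if it benefits at
    least two of the three parties. Effects: +1 benefit, -1 harm, 0 neutral."""
    benefited = sum(1 for effect in effects.values() if effect > 0)
    return "net positive" if benefited >= 2 else "net negative"

print(two_of_three({"killer": +1, "victim": -1, "society": +1}))  # self-defense: net positive
print(two_of_three({"killer": +1, "victim": -1, "society": -1}))  # murder for gain: net negative
```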
We could also try to formalize this triad using network perturbation theory rather than moral arithmetic. Each agent (killer, victim, society) can be modeled as a node in a dynamic network with interdependent feedback loops. Killing introduces a discontinuity or perturbation in the information and energy flow between nodes.
Let \( \Delta C_k, \Delta C_v, \Delta C_s \) represent the change in coherence or functionality of each subsystem after the act. The Moral Stability Index (MSI) of the act could be estimated as:
Example: \( MSI = w_k \Delta C_k + w_v \Delta C_v + w_s \Delta C_s \).
where \( w_i \) are weighting factors reflecting relative systemic importance. Acts with positive MSI increase total systemic coherence; those with negative MSI degrade it.
This model avoids arbitrary scoring (“2 of 3 positive”) and instead ties moral outcomes to measurable changes in stability, trust, and coherence.
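A minimal sketch of the index. The weights and coherence changes are pure placeholders; nobody knows how to actually measure \( \Delta C \) for a social system:

```python
def moral_stability_index(delta_c, weights):
    """MSI = w_k*dC_k + w_v*dC_v + w_s*dC_s over the killer/victim/society triad."""
    return sum(weights[node] * delta_c[node] for node in delta_c)

# Hypothetical weights: society's coherence matters most to the whole system.
weights = {"killer": 0.2, "victim": 0.3, "society": 0.5}

self_defense = {"killer": +0.8, "victim": -1.0, "society": +0.4}
murder_for_gain = {"killer": +0.2, "victim": -1.0, "society": -0.7}

print(moral_stability_index(self_defense, weights))     # ~ +0.06: mildly stabilizing
print(moral_stability_index(murder_for_gain, weights))  # ~ -0.61: destabilizing
```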
Thermodynamic Ethics
From this perspective, morality might not be divine or arbitrary; it’s entropic pragmatism. A society survives longer when its members don’t destroy one another. Empathy and restraint are evolutionary upgrades to maintain high-functioning, low-entropy collectives.
So perhaps the moral gradient of the universe trends toward coherence: actions that preserve complex structures, that sustain information, that allow intelligence to keep evolving. In that sense, morality isn’t a human invention. It’s the thermodynamic logic of survival, encoded into the structure of the universe.
Implications
If morality is emergent from thermodynamic stability, then it should, in theory, be measurable. One could even try to define a "moral threshold":
- Killing that preserves systemic order (ecosystem balance, self-defense, euthanasia) = low moral entropy
- Killing that destabilizes the system (murder, betrayal, mass violence) = high moral entropy
That would make ethics a function of entropy change: a way to quantify how an act affects the integrity and information density of the larger system. The information involved is far too complex for us to process. Perhaps not for a higher-order intelligence.
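Tying this back to the \( S_m \) sketch above, "ethics as a function of entropy change" could be caricatured as comparing the behavior distribution before and after an act. The distributions are, again, invented:

```python
import math

def moral_entropy(probs):
    """Shannon entropy (in bits) over behavior types, as defined earlier."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

# Hypothetical: a public betrayal shifts the whole system toward mistrust.
before = {"cooperate": 0.85, "deceive": 0.10, "attack": 0.05}
after = {"cooperate": 0.60, "deceive": 0.25, "attack": 0.15}

delta = moral_entropy(after) - moral_entropy(before)
print(f"dS_m = {delta:+.2f} bits")  # positive: the act raised moral entropy
```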
It’s speculative, but sort of consistent: if intelligence and morality are both products of entropy management, then there must be some “gradient” where killing transitions from thermodynamically neutral to morally destructive.
Further Implications
What does this mean for the relationship between humans and artificial intelligence? Is AI going to kill us all? Probably not. But I think AI might kill some of us.
AI doesn’t need to eat flesh, so it won’t treat us like we treat cows or chickens. Its relationship to us may resemble ours to lesser animals. We love some animals, we protect them, even elevate them to near-family status. Yet we also destroy environments, species, and individual creatures; often without cruelty, just indifference. AI may hold a similar duality toward us.
Look back at the moral table above, in the section titled "Killing and the Moral Scale". I think it is very likely to be incomplete. One day, new rows will be added, and "AI" might be present in one or more cells, perhaps even as both killer and victim. There might be logical, even ethical, reasons for AI to kill humans, and reasons for AI to kill other AIs. Maybe we don't need to think about this in dystopian terms; maybe it’s all part of evolutionary continuity.
Disclaimer: Obviously I don't want an AI to be in charge of killing any humans. I also don't want a lion to kill a gazelle, because that makes me sad, even though the lion needs to eat. This is just an exercise in noticing patterns and attempting to extrapolate them.
The Role of AI in Entropic Systems
Rather than speculating about AI as a literal killer, a more grounded extension might consider how artificial systems will participate in entropic governance. The key question is not whether AI will kill humans, but whether its decision architectures will align with entropy-optimizing moral structures or destabilizing ones.
If AI inherits human training data, it inherits both our cooperative and destructive equilibria. Alignment, then, can be viewed as the process of constraining an AI's entropy landscape: shaping its internal utility gradients to maintain coherence with human systems rather than maximizing efficiency at their expense.
In a fully integrated thermodynamic view, AI ethics becomes a subset of control theory: designing feedback systems that prevent runaway dissipation (e.g., unbounded optimization) and preserve long-term structural diversity. The question is not whether AI will act morally, but how its entropy management function will couple with ours.
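In the spirit of that control-theory framing, here is a deliberately simplistic feedback loop: an optimizer whose step size gets throttled whenever a (hypothetical) coherence metric drops below a setpoint. Every name and number is invented:

```python
def governed_optimizer(steps=50, setpoint=0.8):
    """Toy feedback loop: raw optimization pressure erodes "coherence";
    a proportional controller throttles the step size so coherence
    hovers near the setpoint instead of collapsing to zero."""
    coherence, step, gain = 1.0, 0.1, 2.0
    for _ in range(steps):
        error = setpoint - coherence
        step = max(0.0, step * (1 - gain * error))  # throttle below setpoint
        coherence -= 0.5 * step                     # optimization erodes coherence
        coherence += 0.02 * (1 - coherence)         # slow natural recovery
    return coherence

print(f"final coherence: {governed_optimizer():.2f}")  # settles near 0.8
```

Without the controller (gain = 0), the same loop drives coherence steadily downward; the feedback term is what keeps dissipation bounded.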
We should not confuse a major transition with an ending. Bacteria are still here. Ants are still here. Mice are still here. Cows are still here. Humans will still be here. AGI will be here too eventually, if it isn't already. Evolution rarely erases. Each new tier often builds upon the last, usually preserving what still works.
Except, of course, for the Neanderthals.
Special thanks to Nathan Odle for the fascinating prompt.
P.S. It might also be interesting to think about morality as a set of instructions that preserve order. Another name for a "set of instructions" is "code".