Categories: Physics, Thermodynamics.
Fundamental relation of thermodynamics
In most areas of physics, we observe and analyze the behaviour of physical systems that have been “disturbed” in some way, i.e. we try to understand what is happening. In thermodynamics, however, we start paying attention once the disturbance has ended and the system has had some time to settle down: when nothing seems to be happening anymore.
Then a common observation is that the system “forgets” what happened earlier, and settles into a so-called equilibrium state that appears to be independent of its history. No matter in what way you stir your tea, once you finish, eventually the liquid stops moving, cools down, and just… sits there, doing nothing. But how does it “choose” this equilibrium state?
Thermodynamic equilibrium
This history-independence suggests that equilibrium is determined by only a few parameters of the system. Prime candidates are the mole numbers $N_i$ of each of the different types of particles in the system, and its volume $V$. Furthermore, the microscopic dynamics are driven by energy differences between components, and obey the universal principle of energy conservation, so it also sounds reasonable to define a total internal energy $U$.
Thanks to many decades of empirical confirmations, we now know that the above arguments can be combined into a postulate: the equilibrium state of a closed system with fixed $U$, $V$ and $N_i$ is completely determined by those parameters. The system then “finds” the equilibrium by varying its microscopic degrees of freedom such that the entropy $S$ is maximized subject to the given values of $U$, $V$ and $N_i$. This statement serves as a definition of $S$, and explains the second law of thermodynamics: the total entropy never decreases.
We do not care about those microscopic degrees of freedom, but we do care about how $U$, $V$ and $N_i$ influence the equilibrium. For a given system, we want a formula $S = S(U, V, N_1, N_2, \ldots)$, which contains all thermodynamic information about the system and is therefore known as its fundamental relation.
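To give a concrete example of what a fundamental relation looks like: for a monatomic ideal gas, statistical mechanics yields the Sackur–Tetrode relation, which we simply quote here for illustration:

$$S(U, V, N) = N k_B \left( \ln\!\left( \frac{V}{N} \left( \frac{4 \pi m U}{3 N h^2} \right)^{3/2} \right) + \frac{5}{2} \right)$$

Where $N$ is the number of gas atoms, $m$ is the mass of a single atom, $k_B$ is Boltzmann’s constant, and $h$ is Planck’s constant. Every macroscopic thermodynamic property of this gas can be extracted from this single formula.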
The next part of our definition of $S$ is that it must be invertible with respect to $U$, meaning we can rearrange the fundamental relation to $U = U(S, V, N_1, N_2, \ldots)$ without losing any information. Specifically, this means that $S$ must be continuous, differentiable, and monotonically increasing with $U$, such that $S(U, V, N_i)$ can be inverted to $U(S, V, N_i)$ and vice versa.
The idea here is that maximizing $S$ at fixed $U$ should be equivalent to minimizing $U$ for a given $S$ (we prove this later). Often it is mathematically more convenient to choose one over the other, but by definition both approaches are equally valid. And because $S$ is rather abstract, it may be preferable to treat it as a parameter for a more intuitive quantity like $U$.
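For the monatomic ideal gas quoted above, this inversion can be done explicitly: solving the Sackur–Tetrode relation for $U$ gives

$$U(S, V, N) = \frac{3 h^2}{4 \pi m} \frac{N^{5/3}}{V^{2/3}} \exp\!\left( \frac{2 S}{3 N k_B} - \frac{5}{3} \right)$$

Which is indeed continuous, differentiable, and monotonically increasing in $S$, as required.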
Next, we demand that $S$ is additive over subsystems, so $S = S_1 + S_2 + S_3 + \ldots$, with $S_1$ being the entropy of subsystem 1, etc. Consequently, $S$ is an extensive quantity of the system, just like $U$ (and $V$ and $N_i$), meaning they satisfy the following for any constant $\lambda$:

$$\lambda S(U, V, N_i) = S(\lambda U, \lambda V, \lambda N_i) \qquad \lambda U(S, V, N_i) = U(\lambda S, \lambda V, \lambda N_i)$$
For $U$, this makes intuitive sense: the total energy in two identical systems is double the energy of a single one of them. Actually, reality is a bit hazier than this: dynamics are governed by energy differences only, so a constant offset can be added to $U$ without consequence. We are free to choose the offset, and the way we split the system into subsystems, such that the above relation holds, for our convenience. Fortunately, this choice usually suggests itself.
$S$ does not suffer from this ambiguity, since the third law of thermodynamics clearly defines where $S = 0$ should occur: at a temperature of absolute zero. In this article we will not explore the reason for this requirement, which is also known as the Nernst postulate. Furthermore, in most situations this law can simply be ignored.
Since $S$, $U$, $V$ and $N_i$ are all extensive, the partial derivatives of the fundamental relation are intensive quantities, meaning they do not depend on the size of the system. Those derivatives are very important, since they are usually the equilibrium properties we want to find.
Energy representation
When we have a fundamental relation of the form $U = U(S, V, N_1, N_2, \ldots)$, we say we are treating the system’s thermodynamics in the energy representation.
The following derivatives of $U$ are used as the thermodynamic definitions of the temperature $T$, the pressure $P$, and the chemical potential $\mu_i$ of the $i$th particle species:

$$T \equiv \left( \frac{\partial U}{\partial S} \right)_{V, N_i} \qquad P \equiv - \left( \frac{\partial U}{\partial V} \right)_{S, N_i} \qquad \mu_i \equiv \left( \frac{\partial U}{\partial N_i} \right)_{S, V, N_{j \neq i}}$$

The resulting expressions of the form $T = T(S, V, N_i)$ etc. are known as the equations of state of the system. Unlike the fundamental relation, a single equation of state is not a complete thermodynamic description of the system. However, if all equations of state are known (for $T$, $P$, and all $\mu_i$), then the fundamental relation can be reconstructed.
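To make this concrete, here is a minimal symbolic sketch (using Python’s sympy; the symbol `c` is shorthand for the constant prefactor $3 h^2 e^{-5/3} / (4 \pi m)$ of the inverted ideal-gas relation quoted earlier) that applies these definitions and recovers the familiar ideal-gas equations of state $U = \frac{3}{2} N k_B T$ and $P V = N k_B T$:

```python
import sympy as sp

# Entropy S, volume V, particle number N, Boltzmann constant kB,
# and a positive constant c absorbing the prefactor 3*h**2*exp(-5/3)/(4*pi*m).
S, V, N, kB, c = sp.symbols("S V N k_B c", positive=True)

# Inverted ideal-gas fundamental relation U(S, V, N) from above
U = c * N**sp.Rational(5, 3) / V**sp.Rational(2, 3) * sp.exp(2*S / (3*N*kB))

# Thermodynamic definitions of temperature, pressure and chemical potential
T = sp.diff(U, S)     # T  =  (dU/dS)_{V,N}
P = -sp.diff(U, V)    # P  = -(dU/dV)_{S,N}
mu = sp.diff(U, N)    # mu =  (dU/dN)_{S,V}

# Familiar equations of state: U = (3/2) N kB T and P V = N kB T
print(sp.simplify(U - sp.Rational(3, 2) * N * kB * T))  # prints 0
print(sp.simplify(P * V - N * kB * T))                  # prints 0
```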
As explained above, physical dynamics are driven by energy differences only, so we expand an infinitesimal difference $\mathrm{d}U$ as:

$$\mathrm{d}U = \left( \frac{\partial U}{\partial S} \right)_{V, N_i} \mathrm{d}S + \left( \frac{\partial U}{\partial V} \right)_{S, N_i} \mathrm{d}V + \sum_i \left( \frac{\partial U}{\partial N_i} \right)_{S, V, N_{j \neq i}} \mathrm{d}N_i$$

Those partial derivatives look familiar. Substituting $T$, $-P$ and $\mu_i$ gives a result that is also called the fundamental relation of thermodynamics (as opposed to the fundamental relation of the system only, just to make things confusing):

$$\mathrm{d}U = T \, \mathrm{d}S - P \, \mathrm{d}V + \sum_i \mu_i \, \mathrm{d}N_i$$
Where the first term represents heating/cooling (also written as $\delta Q = T \, \mathrm{d}S$), and the second is the physical work done on the system by compression/expansion (also written as $\delta W = -P \, \mathrm{d}V$). The third term is the energy change due to matter transfer and is often neglected. Hence this relation can be treated as a form of the first law of thermodynamics $\mathrm{d}U = \delta Q + \delta W$.
Because $T$, $P$ and $\mu_i$ generally depend on $S$, $V$ and $N_i$, integrating the fundamental relation can be tricky. Fortunately, the fact that $U$ is extensive offers a shortcut. Recall that:

$$\lambda U(S, V, N_i) = U(\lambda S, \lambda V, \lambda N_i)$$
For any $\lambda$. Let us differentiate this equation with respect to $\lambda$ (applying the chain rule to the right-hand side) and then set $\lambda = 1$, yielding:

$$U = \left( \frac{\partial U}{\partial S} \right)_{V, N_i} S + \left( \frac{\partial U}{\partial V} \right)_{S, N_i} V + \sum_i \left( \frac{\partial U}{\partial N_i} \right)_{S, V, N_{j \neq i}} N_i$$
Where we once again recognize the derivatives $T$, $-P$ and $\mu_i$. The resulting equation is known as the Euler form of the fundamental relation of thermodynamics:

$$U = T S - P V + \sum_i \mu_i N_i$$
Plus a constant, of course, although $0$ is the most straightforward choice.
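As a quick sanity check (again a sympy sketch for the single-species ideal gas, not a general proof), the Euler form can be verified symbolically:

```python
import sympy as sp

# Same ideal-gas fundamental relation as before, with one particle species;
# c again stands in for the constant prefactor.
S, V, N, kB, c = sp.symbols("S V N k_B c", positive=True)
U = c * N**sp.Rational(5, 3) / V**sp.Rational(2, 3) * sp.exp(2*S / (3*N*kB))

# Equations of state from the partial derivatives of U
T, P, mu = sp.diff(U, S), -sp.diff(U, V), sp.diff(U, N)

# Euler form: U = T S - P V + mu N (the sum reduces to a single term)
print(sp.simplify(U - (T*S - P*V + mu*N)))  # prints 0
```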
Entropy representation
If the system’s fundamental relation instead has the form $S = S(U, V, N_1, N_2, \ldots)$, we are treating it in the entropy representation. Isolating the above fundamental relation of thermodynamics for $\mathrm{d}S$ yields its equivalent form in this representation:

$$\mathrm{d}S = \frac{1}{T} \mathrm{d}U + \frac{P}{T} \mathrm{d}V - \sum_i \frac{\mu_i}{T} \mathrm{d}N_i$$
From which we can then read off the standard partial derivatives of $S$:

$$\frac{1}{T} = \left( \frac{\partial S}{\partial U} \right)_{V, N_i} \qquad \frac{P}{T} = \left( \frac{\partial S}{\partial V} \right)_{U, N_i} \qquad \frac{\mu_i}{T} = - \left( \frac{\partial S}{\partial N_i} \right)_{U, V, N_{j \neq i}}$$
Note the signs: $S$, $U$, $V$ and $N_i$ are implicitly related to each other by the fundamental relation, so deriving these expressions directly from the definitions of $T$, $P$ and $\mu_i$ requires the triple product rule, which brings some perhaps surprising sign changes. Reading them off from $\mathrm{d}S$ in this way is easier.
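For example, applying the triple product rule to $S$, $V$ and $U$ (at constant $N_i$) reproduces the second relation above: the minus sign from the rule cancels against the one in the definition of $P$:

$$\left( \frac{\partial S}{\partial V} \right)_{U} \left( \frac{\partial V}{\partial U} \right)_{S} \left( \frac{\partial U}{\partial S} \right)_{V} = -1 \quad \implies \quad \left( \frac{\partial S}{\partial V} \right)_{U} = - \frac{(\partial U / \partial V)_S}{(\partial U / \partial S)_V} = - \frac{-P}{T} = \frac{P}{T}$$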
And of course, since $S$ is defined to be an extensive quantity, it also has an Euler form:

$$S = \frac{1}{T} U + \frac{P}{T} V - \sum_i \frac{\mu_i}{T} N_i$$
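The same symbolic sketch works directly in the entropy representation (again sympy, using the Sackur–Tetrode relation quoted earlier): the partial derivatives of $S$ yield $1/T$, $P/T$ and $\mu/T$, and they add up to $S$ exactly as the Euler form says:

```python
import sympy as sp

# Internal energy U, volume V, particle number N, particle mass m,
# Boltzmann constant kB and Planck constant h -- all positive.
U, V, N, m, kB, h = sp.symbols("U V N m k_B h", positive=True)

# Sackur-Tetrode fundamental relation S(U, V, N) of a monatomic ideal gas
S = N * kB * (sp.log(V/N * (4*sp.pi*m*U / (3*N*h**2))**sp.Rational(3, 2))
              + sp.Rational(5, 2))

# Entropy-representation derivatives
inv_T = sp.diff(S, U)   # 1/T  =  (dS/dU)_{V,N}
P_T   = sp.diff(S, V)   # P/T  =  (dS/dV)_{U,N}
mu_T  = -sp.diff(S, N)  # mu/T = -(dS/dN)_{U,V}

# Consistency with the energy representation: U = (3/2) N kB T and P V = N kB T
print(sp.simplify(U * inv_T - sp.Rational(3, 2) * N * kB))  # prints 0
print(sp.simplify(P_T * V - N * kB))                        # prints 0

# Euler form in the entropy representation: S = U/T + P V/T - mu N/T
print(sp.simplify(S - (inv_T*U + P_T*V - mu_T*N)))          # prints 0
```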
Finally, it is worth proving that minimizing $U$ is indeed equivalent to maximizing $S$. For simplicity, we consider a system where only the volume $V$ can change in order to reach an equilibrium; the proof is analogous for all other parameters. Clearly, $S$ is stationary at its maximum:

$$0 = \left( \frac{\partial S}{\partial V} \right)_{U, N_i} = - \left( \frac{\partial S}{\partial U} \right)_{V, N_i} \left( \frac{\partial U}{\partial V} \right)_{S, N_i} = - \frac{1}{T} \left( \frac{\partial U}{\partial V} \right)_{S, N_i}$$
Where we have used the triple product rule. Since $T$ is finite, this can only hold if $(\partial U / \partial V)_{S, N_i} = 0$, meaning $U$ is also at an extremum. But $S$ is not just at any extremum: it is at a maximum, so, differentiating once more (the term in which $1/T$ gets differentiated drops out, because it is multiplied by $(\partial U / \partial V)_{S, N_i} = 0$):

$$0 \ge \left( \frac{\partial^2 S}{\partial V^2} \right)_{U, N_i} = - \frac{1}{T} \left( \frac{\partial^2 U}{\partial V^2} \right)_{S, N_i}$$
Because $S$ is at a maximum, we know that $(\partial^2 S / \partial V^2)_{U, N_i} \le 0$, and $T$ is always above absolute zero (since we defined $S$ to be monotonically increasing with $U$), which leaves $(\partial^2 U / \partial V^2)_{S, N_i} \ge 0$ as the only way to satisfy this inequality. In other words, $U$ is at a minimum, as expected.