3. The Second Law
C. Rose-Petruck, Brown University, 1999

3.1 Entropy
The first law of thermodynamics treats all forms of work and heat on an equal basis: they all add up to the
total energy, which then must be conserved. As far as the first law is concerned, there is no difference of
quality between the different forms of energy. However, observations do show that heat and work are not
equal: one can always transform work to heat, but the reverse is much more complicated and often not
possible. For instance, we learned in the second chapter that in order to do mechanical work a gas system has to change its volume against an external pressure. If this external pressure is zero and an ideal gas expands (certainly irreversibly), no work is done and no heat is absorbed.
That's puzzling because the internal energy doesn't change; there is no energetic "motivation" for the gas to expand. Nevertheless, the gas will evenly fill the larger volume that became available. What's the "driving force" for that change? Clearly, there must be some property, called entropy, that governs the transformation of systems. We shall see that this quantity is of statistical nature.
While the first law determines what transformations of a system are energetically possible, the second law determines the probability for these changes to happen.
In searching for a new property that describes the observation we clearly need to look at the amount of
heat that is exchanged in the process. We define the entropy, S, as follows:
$$ dS = \frac{\delta Q}{T} \qquad (3.1) $$
Even though the entropy was defined for a path (via $\delta Q$, for some unspecified reversible process), it turns out that the function S obtained through integration of the above equation is a state function. It depends only on the state of the system, not on the method by which the system was prepared. We shall prove now that S is a state function even though $\delta Q$ is not.
If a system does only work in the form pV, the 1st law (2.5) can be written

$$ dU = \delta Q_{rev} - p\,dV \,. \qquad (3.2) $$
Let's consider a perfect gas. The internal energy of a perfect gas is independent of the volume. We can then combine (2.19) with (3.2) and obtain

$$ \delta Q_{rev} = C_V\,dT + p\,dV \,. \qquad (3.3) $$
Inserting the ideal gas' equation of state pV = RT,

$$ \delta Q_{rev} = C_V\,dT + RT\,\frac{dV}{V} \,. \qquad (3.4) $$

$$ dS = \frac{\delta Q_{rev}}{T} = C_V\,\frac{dT}{T} + R\,\frac{dV}{V} \,. \qquad (3.5) $$

$$ \Delta S = S_B - S_A = C_V \ln\frac{T_B}{T_A} + R \ln\frac{V_B}{V_A} \,. \qquad (3.6) $$
This implies that $\frac{\delta Q_{rev}}{T}$ is an exact differential and that S is a state function for a perfect gas. More general arguments of this type, as derived by Carathéodory (1909), enable us to show that S is a state function for all substances.
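
That ΔS in (3.6) depends only on the end states can also be checked numerically. The sketch below evaluates the two contributions of (3.6) along two different two-step paths between the same states A and B; the monatomic heat capacity and the particular values of T and V are illustrative assumptions, not taken from the text.

```python
# Numerical check that S is a state function for a perfect gas: the Delta S
# of (3.6) must not depend on the path taken from A to B.
# A minimal sketch for one mole of a monatomic ideal gas (C_V = 3/2 R);
# the state values below are arbitrary illustrative choices.
import math

R = 8.314          # gas constant, J/(K mol)
C_V = 1.5 * R      # molar heat capacity, monatomic ideal gas (assumption)

T_A, V_A = 300.0, 0.010   # state A: temperature in K, volume in m^3
T_B, V_B = 450.0, 0.025   # state B

def dS_isochoric(T1, T2):   # reversible heating at constant volume
    return C_V * math.log(T2 / T1)

def dS_isothermal(V1, V2):  # reversible expansion at constant temperature
    return R * math.log(V2 / V1)

path_1 = dS_isochoric(T_A, T_B) + dS_isothermal(V_A, V_B)  # heat, then expand
path_2 = dS_isothermal(V_A, V_B) + dS_isochoric(T_A, T_B)  # expand, then heat
direct = C_V * math.log(T_B / T_A) + R * math.log(V_B / V_A)  # (3.6) directly

print(path_1, path_2, direct)   # all three agree: ~12.7 J/K
```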
Certainly, the heat that the system exchanges with its surroundings can be expressed as:

$$ \delta Q_{rev} = T\,dS \,. \qquad (3.7) $$
The temperature T is the absolute temperature, which we will discuss in more detail later. Our definition of
the entropy is only valid for reversible processes.
We now discuss how the definition can be extended to irreversible processes. Consider a system that
moves from a state A to a state B in either a reversible or an irreversible way. Such a situation is always
possible: states A and B are independent of their history, and therefore it is always possible to construct
reversible and irreversible paths between them. Now recall from our discussion of the first law that the
amount of work that a system can deliver is always maximized in a reversible process. We therefore
write:
$$ -W_{irrev} < -W_{rev} \qquad (3.8) $$
Note that the system is supposed to perform work; therefore, both $W_{irrev}$ and $W_{rev}$ are negative quantities, so that the minus signs make both sides positive; the left side symbolizes a smaller amount of work than the right side. We rewrite this as:

$$ W_{irrev} > W_{rev} \qquad (3.9) $$
Now, we also know that the system in states A and B must be described by an internal energy U that is,
as a state function, independent of the paths between A and B. The energy difference between the states
is therefore:
$$ dU = \delta W_{irrev} + \delta Q_{irrev} = \delta W_{rev} + \delta Q_{rev} \qquad (3.10) $$
Combining the last two equations yields:
$$ \delta W_{irrev} + \delta Q_{irrev} = \delta W_{rev} + \delta Q_{rev} < \delta W_{irrev} + \delta Q_{rev} \qquad (3.11) $$

$$ \Rightarrow \quad \delta Q_{irrev} < \delta Q_{rev} \qquad (3.12) $$
Next we divide both sides of the inequality by the temperature T:
$$ \frac{\delta Q_{irrev}}{T} < \frac{\delta Q_{rev}}{T} \qquad (3.13) $$
The expression on the right-hand side we recognize as the entropy change dS of the system in a reversible process. Therefore, the entropy change in an irreversible process is:

$$ dS > \frac{\delta Q_{irrev}}{T} \qquad (3.14) $$
Combining the entropy changes for a reversible and an irreversible process we have:

$$ dS \ge \frac{\delta Q}{T} \qquad (3.15) $$

with $\delta Q$ standing for the heat that is exchanged in either a reversible or an irreversible process. The '=' sign of the equation represents the reversible path.
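
As a minimal numerical illustration of the '>' case, consider again the free expansion from the beginning of this chapter: no heat is exchanged, yet the entropy of the gas grows. The factor-of-two volume increase below is an arbitrary illustrative choice.

```python
# The Clausius inequality (3.15) for an irreversible free expansion:
# an ideal gas doubles its volume against zero external pressure, so
# Q = 0, while the entropy change follows from (3.6) at constant T.
import math

R = 8.314                     # J/(K mol), one mole assumed
dS = R * math.log(2)          # entropy change of the gas
Q_over_T = 0.0                # no heat is exchanged in a free expansion

print(dS, Q_over_T, dS > Q_over_T)   # 5.76 J/K > 0: the '>' case of (3.15)
```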
We are now in a position to develop several statements, all of which are expressions of the second law of
thermodynamics.
In a reversible process, the sum of the entropy of a system and the entropy of its surroundings is
unchanged:
$$ \Delta S_{Univ}^{rev} = \Delta S_{Sys} + \Delta S_{Surr} = 0 \qquad (3.16) $$
This statement comes about as follows: the entire universe is divided into two parts, the one that I call system, and then the rest, the surroundings. Now, in a reversible process, the entropy change of the system is given by the last equation, with an '=' sign. The entropy change of the surroundings has the same magnitude, but the opposite sign, because whatever heat the system takes up, the surroundings lose. Thus, the sum of the entropy changes of the system and the surroundings is simply zero in a reversible process.
Another statement of the second law is as follows:
In an irreversible process, the total entropy of a system plus its surroundings must increase:
$$ \Delta S_{Univ}^{irrev} = \Delta S_{Sys} + \Delta S_{Surr} > 0 \qquad (3.17) $$
To understand this statement of the second law, consider that the whole universe is the sum of system and surroundings. Also, the whole universe is an isolated system, and any processes we invent inside the universe don't cause a flow of heat to the outside of the universe, if there is such a thing. Therefore, the heat flow is zero, Q = 0. Then, we are left with the statement that $\Delta S > 0$ for an irreversible process.
Finally:

A process for which $\Delta S_{Univ} < 0$ is impossible.
This statement follows automatically from the previous ones: if a reversible process has a total entropy
change of zero, and the irreversible process has only increases of the total entropy, then logically there is
never a process in which the total entropy decreases.
In just the same way, $\Delta S_{isolated} < 0$ is never true for isolated systems, as expressed in equation (3.15) for zero heat exchange.
All the statements of the second law are equivalent. Given one of them it is possible to derive the others.
Notice that what we have discussed is no proof of the second law. Just like the first law of
thermodynamics, or the basic principles of quantum mechanics, there is no proof for the second law. The
second law is simply the summary of many observations that can be explained by it. The evidence for the
second law comes therefore from many experiments; no experiment has yet provided any evidence
contradicting the second law. As we will see later in the semester, it is in principle possible to observe
violations of the second law. However, they are so rare that the age of the universe isn't sufficiently long
for such violations to occur.
We also see the close relation between entropy change and reversibility: a reversible process is reversible because the entropy of system plus surroundings doesn't change. So we can just "step back in time" and reverse the process. In an irreversible process the entropy of the universe increases. But we cannot reverse this process because the entropy of the universe can never decrease.
Let's shed light on the statistical properties of the entropy by considering an ideal gas system in more detail.
3.2 Microscopic Interpretation
Consider N gas molecules initially contained in one half of a volume, as shown in the picture below.

The probability that this state A could occur by chance is $\left(\frac{1}{2}\right)^N$, that is, the same as the chance that N objects are all in one of two boxes between which they have been randomly distributed. We can write the probability of state A with respect to state B as

$$ \frac{P_A}{P_B} = \left(\frac{1}{2}\right)^N \,. \qquad (3.18) $$
If instead of choosing $\frac{V_A}{V_B} = \frac{1}{2}$ we had selected arbitrary volumes, we can show that

$$ \frac{P_A}{P_B} = \left(\frac{V_A}{V_B}\right)^N \,. \qquad (3.19) $$

$$ \Rightarrow \quad \ln\frac{P_B}{P_A} = N \ln\frac{V_B}{V_A} \qquad (3.20) $$
In evolving from state A to state B the system has gone from a state of low probability to one of high probability.
Using (3.6) for one mole of gas with $T_A = T_B$ we derive

$$ S_B - S_A = R \ln\frac{V_B}{V_A} = \frac{R}{N_A} \ln\frac{P_B}{P_A} = k \left[\ln(P_B) - \ln(P_A)\right] \,. \qquad (3.21) $$
Thus, the entropy S of a system in any particular state is proportional to the logarithm of the probability P for that state of the system, i.e.,

$$ S = k \ln(P) \,. \qquad (3.22) $$

P is the probability of the entire system to be in a certain state. Don't confuse P with the probabilities $p_i$ of the system's particles to be in certain quantum states, such as quantum states of molecular vibrations or rotations.

Comparison of the entropy for systems with few and with many particles
In simple mechanical systems the entropy is usually negligible. Assume that the system can assume two states with one being twice as probable as the other. From (3.22) follows

$$ \Delta S = S_B - S_A = k \ln 2 = 1.38\times10^{-23}\,\frac{\mathrm{J}}{\mathrm{K}} \times 2.3 \times 0.301 \approx 10^{-23}\,\frac{\mathrm{J}}{\mathrm{K}} \,. \qquad (3.23) $$
This is a very small entropy difference, which may be neglected without introducing any significant
inaccuracy when performing calculations on such a system. This is why only energy needs to be
considered when performing calculations on simple mechanical systems. In contrast, for a system with
the same two possible states but with 1-mole particles we obtain
K
J
8 . 5
K
J
301 . 0 3 . 2 3 . 8 2 ln = =
A
N
A B
k S S S , (3.24)
which can be a substantial contribution.

Deviations from the most probable state of a system are very unlikely.
We found in chapter 2 that the individual states are distributed according to the Boltzmann distribution (1.6). We discussed that the Boltzmann distribution is the most probable of all conceivable distributions for a system in equilibrium. However, we did not address the question how much less probable other conceivable distributions are. The probability is overwhelming that a system assumes the even distribution of particles in equilibrium, from which it deviates only to a very small extent. We insert the result from (3.24) into (3.21) for a hypothetical process from B to A:

$$ \Delta S = k \ln\frac{P_A}{P_B} = -5.8\,\frac{\mathrm{J}}{\mathrm{K}} \qquad (3.25) $$

$$ \Rightarrow \quad \frac{P_A}{P_B} = \exp\left(-\frac{5.8}{1.4\times10^{-23}}\right) \approx \exp\left(-4\times10^{23}\right) \qquad (3.26) $$
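
The numbers in (3.23) through (3.26) are easily reproduced; the short sketch below evaluates them directly from the constants.

```python
# Entropy of a two-state choice for a single particle versus a mole of
# particles, and the resulting suppression of the less likely state,
# following (3.23)-(3.26).
import math

k = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23   # Avogadro's number, 1/mol

dS_single = k * math.log(2)       # (3.23): ~1e-23 J/K, negligible
dS_mole = k * N_A * math.log(2)   # (3.24): R ln 2 ~ 5.8 J/K

# (3.25)/(3.26): the hypothetical process B -> A
ln_ratio = -dS_mole / k           # ln(P_A/P_B) ~ -4e23
print(dS_single, dS_mole, ln_ratio)   # P_A/P_B = exp(-4e23): never observed
```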

Are the postulates 2 and 3 correct?
Postulate 2: We see from (3.22) that the entropy is proportional to the logarithm of the probability for the
state of the system. Naturally, the state with the highest probability and, consequently, the largest entropy
is assumed once relaxation processes are over. This means that for composite systems the extensive
parameters will vary until the entropy is maximized. For instance, a wall separating two volumes will move
or heat will flow until the state of highest probability is assumed.
Postulate 3: The probability to find a composite system in a certain state is the product of the
probabilities to find each subsystem in a certain state. Therefore,
$$ S = k \ln\left(\prod_i P_i\right) = k \sum_i \ln(P_i) = \sum_i S_i \,, \qquad (3.27) $$
the entropy is additive over the constituent subsystems. Furthermore, the entropy is a monotonically
increasing function of the internal energy. Let's consider a system at constant volume, i.e., dV = 0. We then derive from (3.7):

$$ dS = \frac{\delta Q}{T} = \frac{dU}{T} \,, \qquad (3.28) $$

$$ \Rightarrow \quad S = \int \frac{dU}{T} \,. \qquad (3.29) $$
q.e.d.
This implies that S increases monotonically with U and scales linearly with the number of subsystems.

The Fundamental Equations
In fact, the entropy is a homogeneous first-order function of the extensive parameters, i.e.,

$$ S = S(U, V, N_1, N_2, \ldots, N_i) \qquad (3.30) $$

with

$$ \lambda S(U, V, N_1, N_2, \ldots, N_i) = S(\lambda U, \lambda V, \lambda N_1, \lambda N_2, \ldots, \lambda N_i) \,. \qquad (3.31) $$
The monotonicity implies that the partial derivative

$$ \left(\frac{\partial S}{\partial U}\right)_{V, N_1, N_2, \ldots, N_i} > 0 \,. \qquad (3.32) $$
We shall see later that the inverse of (3.32) is taken as the definition of the temperature. Therefore, the temperature cannot assume negative values.
The continuity, differentiability and monotonicity of the entropy imply that the entropy function can be inverted with respect to the energy and that the energy is a continuous and differentiable function of the entropy.
$$ U = U(S, V, N_1, N_2, \ldots, N_i) \,. \qquad (3.33) $$
The equations (3.30) and (3.33) are alternative forms of the fundamental relation, and each contains all thermodynamic information about the system.
Both the entropy and the internal energy are extensive parameters. Consequently we can scale the properties of a system of N moles of some substance to a system of 1 mole of the same substance according to (3.31):

$$ S(U, V, N_1, N_2, \ldots, N_i) = N\,S\!\left(\frac{U}{N}, \frac{V}{N}, 1\right) \,. \qquad (3.34) $$
But U/N and V/N are the energy and the volume per mole, respectively. With

$$ u \equiv \frac{U}{N} \,, \qquad (3.35) $$

and

$$ v \equiv \frac{V}{N} \qquad (3.36) $$

we obtain the entropy of a single mole:

$$ s(u, v) \equiv S(u, v, 1) \,. \qquad (3.37) $$
Or

$$ S(U, V, N) = N\,s(u, v) \,. \qquad (3.38) $$
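
The scaling property (3.31) and the molar form (3.38) can be made concrete with an explicit fundamental relation. The sketch below assumes a monatomic ideal gas with an entropy of the form suggested by (3.6); the reference constants u0, v0, and s0 are arbitrary and not from the text.

```python
# Check of the first-order homogeneity (3.31) for a concrete fundamental
# relation S(U,V,N) = N s(U/N, V/N), cf. (3.34)-(3.38). Assumed model:
# monatomic ideal gas, s(u,v) = C_V ln(u/u0) + R ln(v/v0) + s0 with
# arbitrary reference constants u0, v0, s0.
import math

R, C_V = 8.314, 1.5 * 8.314
u0, v0, s0 = 1000.0, 0.001, 0.0    # illustrative reference values

def S(U, V, N):
    u, v = U / N, V / N            # molar quantities, (3.35)/(3.36)
    return N * (C_V * math.log(u / u0) + R * math.log(v / v0) + s0)

U, V, N, lam = 5000.0, 0.02, 2.0, 3.0
print(S(lam * U, lam * V, lam * N), lam * S(U, V, N))   # identical values
```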
Let's briefly discuss how these equations can be useful when solving a thermodynamic problem. The entropy of a system consisting of various subsystems is the sum of the entropies of the subsystems. We, therefore, obtain the entropy as a function of the extensive parameters of the subsystems. In a constrained equilibrium, e.g. when the internal energy is constant, the entropy does not change. The entropy reaches an extremum. This is mathematically equivalent to the vanishing of the first derivatives of the entropy with respect to the extensive parameters. In the general case, depending on the second derivatives we can classify the extrema as stable or unstable. Stable extrema are entropy maxima; all others are unstable extrema.
The fundamental equations (3.30) and (3.33) are equivalent. Any of them can be used to characterize the
system, e.g., for finding equilibrium states. It turns out that, in fact, the energetic fundamental equation is
often more convenient to use. Since the derivative of the entropy with respect to the energy is positive, a
maximum for the entropy implies a minimum for the energy and vice versa.
The correspondence between entropy maximum and energy minimum represents a natural correspondence principle between thermodynamics and mechanics. In mechanics thermal effects do not influence the stability of a system and the entropy is not used. A stable equilibrium, however, is a state of minimum energy.
3.3 Engines
In order to convert heat into work we require some suitable thermodynamic engine that consumes heat and produces work. During any process this machine does not suffer any permanent changes. Any series of processes that returns the system to its original state is called a cycle. The system with which the engine operates is called the working substance. The engine itself does not include the heat source and the heat sink but comprises just the thermodynamic cycle.
In any cycle, such as the one below, the amount of work done by the system is equal to the enclosed area in the figure and is equal to the net absorbed heat during the cycle. From (3.7) therefore follows

$$ Q_{rev} = \oint T\,dS \,. \qquad (3.39) $$

Since the entropy is a state function,

$$ \oint dS = \oint \frac{\delta Q_{rev}}{T} = 0 \qquad (3.40) $$

for a reversible cycle.

The thermal efficiency is defined as

$$ \eta = \frac{-W}{Q_h} \,, \qquad (3.41) $$

with $Q_h$ being the heat absorbed from a hot heat reservoir. Writing the heat discharged into a cold reservoir $Q_c$, we can transform (3.41) using the first law, $-W = Q_h + Q_c$, into

$$ \eta = 1 + \frac{Q_c}{Q_h} \,. \qquad (3.42) $$
Certainly, since the internal energy is also a state function, the change of internal energy after one cycle is zero.
We consider the engine to be the thermodynamic system and the heat baths part of the surroundings. Our resulting "energy accounting" deviates from that of some authors. However, it is more consistent because we calculate the efficiency of the engine (= system) and have to consider the energy flow in and out of this system, not in and out of the surroundings of the system. W and $Q_c$ leave the engine and are, therefore, negative. Furthermore, it is essentially a matter of personal taste whether we put the sign indicating the direction of energy flow into the formulas or into the number that we plug into the formulas once we perform a calculation. We put the signs into the formulas.

The Carnot cycle and the Carnot theorem
A thermodynamic cycle of particular importance is the Carnot cycle. This cycle, as we shall see, is the cycle with the largest possible thermal efficiency η.

The Carnot cycle consists of four processes:
1. Reversible, isothermal expansion from A to B at the temperature $T_h$ of the hot heat source: $\Delta S_{AB} = \frac{Q_h}{T_h}$.

2. Reversible, adiabatic expansion from B to C. In the course of this expansion the temperature of the system falls from $T_h$ to $T_c$: $\Delta S_{BC} = 0$.

3. Reversible, isothermal compression from C to D at the temperature $T_c$ of the cold heat sink: $\Delta S_{CD} = \frac{Q_c}{T_c}$.

4. Reversible, adiabatic compression from D to A. In the course of this compression the temperature of the system rises from $T_c$ to $T_h$: $\Delta S_{DA} = 0$.
The Carnot cycle is the only cycle in which a single working substance exchanges heat at two temperatures only. ("Two temperatures only" means two isothermal processes and two processes that do not exchange heat and that are, therefore, adiabatic.)
While the Carnot cycle is often depicted in the p-V diagram, the T-S diagram, see below, shows much more clearly the simplicity of the process. Furthermore, in contrast to the p-V diagram, the T-S diagram allows one to perform all calculations without reference to the materials of the system. The conclusions drawn are, therefore, of general validity for all systems.

The total change in entropy is

$$ 0 = \Delta S_{cycle} = \Delta S_{AB} + \Delta S_{CD} = \frac{Q_h}{T_h} + \frac{Q_c}{T_c} \,. \qquad (3.43) $$

$$ \Rightarrow \quad \frac{Q_h}{T_h} = -\frac{Q_c}{T_c} \,. \qquad (3.44) $$
$$ Q_h = T_h\,\Delta S_{AB} = T_h (S_B - S_A) \,. \qquad (3.45) $$
Since the extracted work is equal to the net absorbed heat, which is equal to the area enclosed by the cycle, we get

$$ \eta_{rev} = \frac{-W}{Q_h} = \frac{(T_h - T_c)(S_B - S_A)}{T_h (S_B - S_A)} = \frac{T_h - T_c}{T_h} \,. \qquad (3.46) $$
Thus we see that the efficiency of a thermal engine operating according to a reversible Carnot cycle is independent of the working substance and depends only on the two operating temperatures. This result is known as Carnot's theorem. Equation (3.46) also means that a reversible Carnot engine can be used as a thermometer by having the engine work with one heat bath at some reference temperature while measuring the thermodynamic efficiency.
This thermodynamic efficiency is by no means large. Let's consider the temperatures that are available in a typical steam engine: $T_h$ = 390 K (117 °C) and $T_c$ = 350 K (77 °C). The thermodynamic efficiency of the best possible engine, the Carnot engine, would be just 10%; the efficiency of a real steam engine would certainly be even lower than that.
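
The estimate is a one-line application of (3.46):

```python
# The steam-engine estimate above, evaluated with (3.46)/(3.47).
T_h, T_c = 390.0, 350.0            # K, as quoted in the text
print((T_h - T_c) / T_h)           # ~0.103, i.e. about 10%
```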
Sometimes Carnot's theorem is written as follows: "No engine operating between two given reservoirs can be more efficient than a Carnot engine operating between the same two reservoirs." Or it is expressed as: "All reversible engines operating between the same reservoirs are equally efficient," CJA, p. 56. We have implicitly proven these forms of the Carnot theorem by deriving (3.46) without reference to a particular working substance. Moreover, $\Delta S_{cycle} = 0$ is a true statement for any reversible process and, consequently, (3.46) is true for any reversible engine. We summarize by stating
$$ \eta_{rev} = \frac{T_h - T_c}{T_h} \qquad (3.47) $$
for any reversible engine operating between the same heat reservoirs. Since

$$ -W_{irrev} < -W_{rev} \qquad (3.8) $$

it follows from the definition (3.41) that

$$ \eta_{irrev} < \eta_{rev} \,. \qquad (3.48) $$
Finally, let's have a look at two forms of the second law that are frequently cited in the literature.

Kelvin Statement
"No process is possible whose sole result is the complete conversion of heat into work," CJA, p. 53.
Proof: Equations (3.46) and (3.47) express this explicitly:

$$ \eta = \frac{T_h - T_c}{T_h} < 1 \quad \text{for all } T_c > 0 \,. \qquad (3.49) $$
Since the thermodynamic efficiency is smaller than 1, a complete conversion of heat into work is not possible. q.e.d.

Clausius Statement
"No process is possible whose sole result is the transfer of heat from a colder to a hotter body," CJA, p. 53.
Proof: If we bring two bodies (= subsystems) into thermal contact, the amount of heat that leaves one of the bodies is equal to the heat that enters the other, i.e., $\delta Q_c = -\delta Q_h$.
The overall change in entropy has to be positive, that is,

$$ dS = \frac{\delta Q}{T_c} - \frac{\delta Q}{T_h} = \delta Q \left(\frac{1}{T_c} - \frac{1}{T_h}\right) \overset{!}{>} 0 \,. \qquad (3.50) $$
Since $T_h > T_c$, equation (3.50) is in fact fulfilled and, therefore, this process is in fact spontaneous. Heat transfer in the other direction is, consequently, impossible. q.e.d.
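
A quick sign check of (3.50) with illustrative numbers shows both directions at once:

```python
# Sign check for the Clausius statement using (3.50): transfer a small
# amount of heat dQ between two bodies. The temperatures and dQ are
# illustrative assumptions.
T_h, T_c, dQ = 350.0, 300.0, 1.0            # K, K, J

dS_hot_to_cold = dQ * (1 / T_c - 1 / T_h)   # heat flowing hot -> cold
dS_cold_to_hot = -dS_hot_to_cold            # the reversed transfer

print(dS_hot_to_cold)   # +4.8e-4 J/K > 0: spontaneous
print(dS_cold_to_hot)   # -4.8e-4 J/K < 0: impossible
```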

Endoreversible Engines
When discussing the Carnot cycle, our primary attention rested on the thermodynamic efficiency of an engine. However, maximum thermodynamic efficiency is not necessarily the primary concern in the design of real engines. More important might be power output, simplicity of construction, or cost. We shall now discuss engines that are not completely reversible but whose power output is maximized. These engines are called endoreversible engines and provide a good approximation to real engines. (See HBC, p. 126.)
Since the Carnot engine is reversible, all processes are quasi-equilibrium processes. Theoretically, this is equivalent to the processes progressing infinitesimally slowly and making the temperature differences between the working substance and the heat baths infinitesimally small. Consequently, the power delivered by the engine is infinitesimally small.
In practice, one runs an engine at some finite speed. Because "slow" has to be compared to the speed of the relaxation processes, the adiabatic processes can progress much faster than the isothermal processes: the relaxation times in the working substance are much shorter than the time scale of the heat transfer between the working substance and the heat baths. Therefore, an endoreversible engine has two reversible, adiabatic processes and two irreversible heat transfer processes.

We assume that the heat source is at a temperature $T_h$ and the heat sink at a temperature $T_c$. The heat between the working substance and the heat source and the heat sink is transferred with thermal conductances $K_h$ and $K_c$, respectively. During the isothermal expansion the temperature of the working substance is $T_w$; during the isothermal compression it is $T_t$:

$$ T_h > T_w > T_t > T_c \,. \qquad (3.51) $$
If the time $t_h$ is required to transfer an amount of heat $Q_h$, then

$$ \frac{Q_h}{t_h} = K_h (T_h - T_w) \,. \qquad (3.52) $$
Since an equivalent equation holds for the heat transfer to the cold reservoir, the time required for the two isothermal strokes of the engine is

$$ t = t_h + t_c = \frac{Q_h}{K_h (T_h - T_w)} - \frac{Q_c}{K_c (T_t - T_c)} \,. \qquad (3.53) $$
As mentioned earlier, the adiabatic strokes can be very fast. Their contribution to the amount of time for
one complete cycle is, therefore, neglected.
The heats $Q_h$ and $Q_c$ are related by the Carnot cycle operating between the temperatures $T_w$ and $T_t$. From (3.46) then follows

$$ t = \left[\frac{1}{K_h (T_h - T_w)}\,\frac{T_w}{T_w - T_t} + \frac{1}{K_c (T_t - T_c)}\,\frac{T_t}{T_w - T_t}\right](-W) \,. \qquad (3.54) $$
The power output of the engine, $(-W)/t$, has to be maximized with respect to the yet undetermined temperatures $T_w$ and $T_t$. It can be found that the power is maximized for

$$ T_w = K \sqrt{T_h} \quad \text{and} \quad T_t = K \sqrt{T_c} \,, \qquad (3.55) $$

with

$$ K = \frac{\sqrt{K_h T_h} + \sqrt{K_c T_c}}{\sqrt{K_h} + \sqrt{K_c}} \,. \qquad (3.56) $$
The maximum power delivered by the engine is

$$ \left(-\frac{W}{t}\right)_{max} = \frac{K_h K_c}{\left(\sqrt{K_h} + \sqrt{K_c}\right)^2}\left(\sqrt{T_h} - \sqrt{T_c}\right)^2 \,. \qquad (3.57) $$
The efficiency for this endoreversible engine at maximum power is

$$ \eta_{endorev} = 1 - \sqrt{\frac{T_c}{T_h}} \,. \qquad (3.58) $$
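
The optimum (3.55) through (3.58) can be verified by brute force: the sketch below maximizes the power from (3.54) on a grid of intermediate temperatures and compares with the analytic expressions. The reservoir temperatures and conductances are arbitrary illustrative values.

```python
# Numerical check of (3.54)-(3.58): maximize (-W)/t over T_w and T_t on a
# grid and compare with the analytic optimum. Parameters are assumptions.
import numpy as np

T_h, T_c = 600.0, 300.0      # reservoir temperatures, K
K_h, K_c = 2.0, 1.0          # thermal conductances, W/K

def power(T_w, T_t):
    # (-W)/t obtained by inverting (3.54)
    denom = T_w / (K_h * (T_h - T_w)) + T_t / (K_c * (T_t - T_c))
    return (T_w - T_t) / denom

grid = np.linspace(T_c + 1.0, T_h - 1.0, 1500)
W, T = np.meshgrid(grid, grid, indexing="ij")     # W holds T_w, T holds T_t
P = np.where(W > T, power(W, T), -np.inf)         # enforce T_w > T_t
i, j = np.unravel_index(np.argmax(P), P.shape)

K = (np.sqrt(K_h * T_h) + np.sqrt(K_c * T_c)) / (np.sqrt(K_h) + np.sqrt(K_c))
print(grid[i], K * np.sqrt(T_h))     # numerical vs analytic T_w, (3.55)/(3.56)
print(grid[j], K * np.sqrt(T_c))     # numerical vs analytic T_t
print(P[i, j], K_h * K_c * (np.sqrt(T_h) - np.sqrt(T_c)) ** 2
      / (np.sqrt(K_h) + np.sqrt(K_c)) ** 2)       # maximum power, (3.57)
print(1 - grid[j] / grid[i], 1 - np.sqrt(T_c / T_h))   # efficiency, (3.58)
```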
It is important to note that this efficiency does not depend on the thermal conductances. The model employed here is, in fact, quite accurate, as a comparison with a number of power plants shows.
Power Plant                                  T_c [°C]  T_h [°C]  η_rev  η_endorev  η_observed  agreement η_endorev vs. η_observed
West Thurrock (U.K.) coal-fired steam plant  ~25       565       0.64   0.40       0.36        90%
CANDU (Canada) PHW nuclear reactor           ~25       300       0.48   0.28       0.30        93%
Larderello (Italy) geothermal steam plant    80        250       0.32   0.175      0.16        91%
Ref. for endoreversible engines and table: F. L. Curzon and B. Ahlborn, Amer. J. Phys. 43, 22 (1975)
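
The first two efficiency columns follow directly from the quoted reservoir temperatures; a short sketch to reproduce them:

```python
# Reproducing the eta_rev and eta_endorev columns of the table above from
# the quoted reservoir temperatures (converted to kelvin).
plants = {
    "West Thurrock": (25.0, 565.0),    # (T_c, T_h) in deg C
    "CANDU":         (25.0, 300.0),
    "Larderello":    (80.0, 250.0),
}
for name, (tc, th) in plants.items():
    T_c, T_h = tc + 273.15, th + 273.15
    eta_rev = 1 - T_c / T_h                # (3.47)
    eta_endorev = 1 - (T_c / T_h) ** 0.5   # (3.58)
    print(f"{name:13s} eta_rev={eta_rev:.2f} eta_endorev={eta_endorev:.3f}")
```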
3.4 Equilibrium Conditions and Stability
Writing the fundamental equation

$$ U = U(S, V, N_1, N_2, \ldots, N_i) \,, \qquad (3.59) $$
in differential form we obtain
$$ dU = \left(\frac{\partial U}{\partial S}\right)_{V, N_i} dS + \left(\frac{\partial U}{\partial V}\right)_{S, N_i} dV + \sum_{j=1}^{r} \left(\frac{\partial U}{\partial N_j}\right)_{S, V, N_{i \ne j}} dN_j \,. \qquad (3.60) $$
The various partial derivatives are called energetic intensive parameters and have the following physical meaning:

$$ \left(\frac{\partial U}{\partial S}\right)_{V, N_i} \equiv T \,, \text{ the temperature,} \qquad (3.61) $$

$$ -\left(\frac{\partial U}{\partial V}\right)_{S, N_i} \equiv p \,, \text{ the pressure,} \qquad (3.62) $$

$$ \left(\frac{\partial U}{\partial N_j}\right)_{S, V, N_{i \ne j}} \equiv \mu_j \,, \text{ the electrochemical potential of the jth component.} \qquad (3.63) $$
Equation (3.60) can then be written as

$$ dU = T\,dS - p\,dV + \sum_{j=1}^{r} \mu_j\,dN_j \,. \qquad (3.64) $$
We are already familiar with the first and second terms. The third term, containing the electrochemical potential, describes the energy exchange between the system and its surroundings due to a flux of matter. Just to give it some name, we call this term the quasi-static chemical work $dW_c$. Strange name, but it is an energy term that has to be considered in chemical systems.

$$ dU = T\,dS - p\,dV + dW_c \,. \qquad (3.65) $$

Equations of State
Since the temperature, pressure, and chemical potential are derivatives of functions of S, V, and $N_i$, they are functions of these extensive parameters themselves:

$$ T = T(S, V, N_i) \,, \qquad (3.66) $$

$$ p = p(S, V, N_i) \,, \qquad (3.67) $$

$$ \mu_j = \mu_j(S, V, N_i) \,. \qquad (3.68) $$
Such relationships are called equations of state. Again, this is not new to us. We already used the equation of state of the ideal gas, p = RT / V. For convenience we abbreviate $N_1, N_2, \ldots, N_i$ with $N_i$.
Finally we encounter a good reason why the fundamental equation has to be a homogeneous first-order function. Its derivatives, the equations of state, are homogeneous zeroth order, which is nice because now we get:

$$ T(\lambda S, \lambda V, \lambda N_i) = T(S, V, N_i) \,, \qquad (3.69) $$

$$ p(\lambda S, \lambda V, \lambda N_i) = p(S, V, N_i) \,, \qquad (3.70) $$

$$ \mu_j(\lambda S, \lambda V, \lambda N_i) = \mu_j(S, V, N_i) \,. \qquad (3.71) $$
This means that, e.g., the temperature of part of the system is equal to the temperature of the whole
system, and that's how it should be.
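
The zeroth-order homogeneity can be checked with the same toy model used above for the entropy; for the assumed monatomic ideal gas the definition (3.61) gives T = U/(C_V N), which is manifestly unchanged under scaling.

```python
# The intensive parameters (3.69)-(3.71) are homogeneous zeroth order:
# scaling the system leaves them unchanged. Sketch for the assumed
# monatomic ideal gas, where (dS/dU)_{V,N} = C_V N / U gives T = U/(C_V N).
C_V = 1.5 * 8.314    # molar heat capacity, J/(K mol)

def T(U, V, N):
    return U / (C_V * N)

U, V, N, lam = 5000.0, 0.02, 2.0, 7.0
print(T(U, V, N), T(lam * U, lam * V, lam * N))   # equal: T is intensive
```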
As demonstrated with equation (3.37), the energetic fundamental equation can, certainly, be written in molar terms:

$$ du = T\,ds - p\,dv \,. \qquad (3.72) $$
We based these considerations on the energetic fundamental equation, i.e., we chose to work in the so-called energy representation. This means that we chose the energy to be dependent on the independent variable entropy. Alternatively we could have started with the entropic fundamental equation and would have arrived at the corresponding entropic intensive parameters. In such a case we would work in the entropy representation. However, we shall not consider this alternative here.

Thermal Equilibrium
Let's consider a system that consists of two subsystems in thermal contact. In equilibrium dS = 0. The entropy of the system is the sum of the entropies of the subsystems:

$$ S = S^A(U^A, V^A, N_i^A) + S^B(U^B, V^B, N_i^B) \,. \qquad (3.73) $$
The differential form is then:

$$ dS = \left(\frac{\partial S^A}{\partial U^A}\right)_{V^A, N_i^A} dU^A + \left(\frac{\partial S^B}{\partial U^B}\right)_{V^B, N_i^B} dU^B = \frac{1}{T^A}\,dU^A + \frac{1}{T^B}\,dU^B \,. \qquad (3.74) $$
Because of conservation of energy we have $dU^B = -dU^A$.
Equation (3.74) is then:

$$ dS = \left(\frac{1}{T^A} - \frac{1}{T^B}\right) dU^A \,. \qquad (3.75) $$
Since in equilibrium dS has to vanish,

$$ \frac{1}{T^A} = \frac{1}{T^B} \quad \Rightarrow \quad T^A = T^B \,. \qquad (3.76) $$
The system is in thermal equilibrium. (By the way, we began this derivation with the entropic fundamental equation and, therefore, worked in the entropy representation. But don't worry about that.)
We can see now that our definition of the temperature T is in agreement with our intuitive concept of
temperature.
If $T^A > T^B$ the system is not in equilibrium and $\Delta S > 0$. Just like equation (3.75) we obtain

$$ \Delta S = \left(\frac{1}{T^A} - \frac{1}{T^B}\right) \Delta U^A > 0 \,. \qquad (3.77) $$

$$ \Rightarrow \quad \Delta U^A < 0 \qquad (3.78) $$
This means that in a spontaneous process the energy flows from the part with higher temperature to the
part with lower temperature. Furthermore, the temperature is an intensive parameter, i.e., it has the same
value everywhere in an equilibrium system.
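
This can be made tangible with a small simulation: repeatedly transfer a small amount of heat from the hotter to the colder of two bodies and track the total entropy via (3.75). The heat capacities and starting temperatures are illustrative assumptions.

```python
# Heat flows from hot to cold, (3.77)/(3.78): transfer heat in small steps
# and watch the total entropy grow until T_A = T_B.
C_A, C_B = 100.0, 100.0      # heat capacities, J/K (assumed)
T_A, T_B = 400.0, 300.0      # initial temperatures, K (assumed)
S_total, dQ = 0.0, 0.1       # accumulated entropy change; heat per step, J

while T_A - T_B > 1e-3:
    S_total += dQ * (1 / T_B - 1 / T_A)   # dS > 0 as long as T_A > T_B
    T_A -= dQ / C_A                       # the hot body cools
    T_B += dQ / C_B                       # the cold body warms

print(T_A, T_B, S_total)   # both ~350 K; S_total ~ +2.06 J/K
```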

Temperature Units
The temperature is defined by (3.61):

$$ T \equiv \left(\frac{\partial U}{\partial S}\right)_{V, N_i} \,. \qquad (3.61) $$
While the dimension of the energy is [mass · length² / time²], the dimension of the entropy can be arbitrarily chosen, because any entropy multiplied by some constant satisfies the extremum principles and is, consequently, an entropy. Nevertheless, considering

$$ S = k \ln(P) \qquad (3.22) $$

it is clear that the entropy has the dimension of the constant in front of the logarithm.
The units of energy are Joule, erg, calories, etc. The thermodynamic temperature has a uniquely defined zero point. This is, according to equation (3.46), the temperature at which the thermodynamic efficiency for a reversible cycle equals 1. The Kelvin scale of temperature is defined by assigning the number 273.16 to the triple point of water. This corresponds to about 0 °C. However, the only temperature scale that can be used in thermodynamic calculations is the Kelvin scale. The corresponding unit of temperature is called Kelvin, designated by the notation K. Kelvin and Joule have the same dimension; their ratio is 1.3806 × 10⁻²³ Joule/Kelvin. This ratio is called Boltzmann's constant, designated $k_B$ or often simply k. Thus $k_B T$ is an energy.
For more information on energy scales read, e.g., HBC, pp. 47 or CJA, pp. 18, 58.

Mechanical Equilibrium
Let's consider a system that consists of two subsystems separated by a diathermal, movable wall. In
equilibrium dS=0.
The differential form is then:
$$ dS = \left(\frac{\partial S^A}{\partial U^A}\right)_{V^A, N_i^A} dU^A + \left(\frac{\partial S^A}{\partial V^A}\right)_{U^A, N_i^A} dV^A + \left(\frac{\partial S^B}{\partial U^B}\right)_{V^B, N_i^B} dU^B + \left(\frac{\partial S^B}{\partial V^B}\right)_{U^B, N_i^B} dV^B \,. \qquad (3.79) $$

$$ dS = \frac{1}{T^A}\,dU^A + \frac{p^A}{T^A}\,dV^A + \frac{1}{T^B}\,dU^B + \frac{p^B}{T^B}\,dV^B \qquad (3.80) $$
Because of conservation of energy and of the total volume we have $dU^B = -dU^A$ and $dV^B = -dV^A$.
Equation (3.79) is then:

$$ dS = \left(\frac{1}{T^A} - \frac{1}{T^B}\right) dU^A + \left(\frac{p^A}{T^A} - \frac{p^B}{T^B}\right) dV^A = 0 \,. \qquad (3.81) $$
Since the variations $dU^A$ and $dV^A$ are independent, (3.81) can only be satisfied if

$$ \frac{1}{T^A} = \frac{1}{T^B} \quad \text{and} \quad \frac{p^A}{T^A} = \frac{p^B}{T^B} \,. \qquad (3.82) $$

$$ \Rightarrow \quad T^A = T^B \qquad (3.83) $$

$$ \Rightarrow \quad p^A = p^B \qquad (3.84) $$
The system is in thermal and pressure equilibrium. The equality of pressure is exactly what we expect intuitively. It is clear that the pressure defined in equation (3.62) is exactly the mechanical pressure.

Equilibrium with Respect to Matter Flow
We now consider a system consisting of two subsystems connected by a diathermal wall that is permeable to the ith but not to any other substance. We are now searching for equilibrium conditions with respect to the temperature and the chemical potential. The mathematical formalism is exactly the same as in the previous examples. Beginning with
$$ dS = \frac{1}{T^A}\,dU^A - \frac{\mu^A}{T^A}\,dN_i^A + \frac{1}{T^B}\,dU^B - \frac{\mu^B}{T^B}\,dN_i^B \qquad (3.85) $$
we obtain the results

$$ T^A = T^B \,, \qquad (3.86) $$

$$ \mu^A = \mu^B \,. \qquad (3.87) $$
Just as the temperature can be looked upon as a "potential" for heat flow, and the pressure can be looked upon as a "potential" for volume changes, the chemical potential can be looked upon as a "potential" for matter flow. We shall see later that the chemical potential also provides a generalized force for the change of phases and for chemical reactions. Thus, the chemical potential is of great importance for theoretical chemistry. The units of the chemical potential are energy units per mole.
3.5 The Second Law: Examples
Example 1. Engines: Refrigerators and heat pumps.

Let's recall the ideal engine.
We have discussed the reversible cycle that transports heat from a hot to a cold heat bath while performing work, the Carnot cycle. Such an engine is shown below.


Refrigerator
In contrast, a refrigerator absorbs work from an external source and removes heat from an isolated volume to the environment, as shown in the next figure.

The thermodynamic efficiency for this engine would be defined similarly to equation (3.41). However, now our focus rests on the amount of heat $Q_c$ removed from the isolated volume per work W supplied to the engine:

$$ \eta_{refrigerator} = \frac{Q_c}{W} = \frac{T_c}{T_h - T_c} \,. \qquad (3.88) $$

Heat Pump
The thermodynamic efficiency for the heat pump is given by the heat released, e.g., into the house at $T_h$, divided by the work consumed by the engine:

$$ \eta_{heat\ pump} = \frac{-Q_h}{W} = \frac{T_h}{T_h - T_c} = 1 + \eta_{refrigerator} \,. \qquad (3.89) $$
These efficiencies are displayed together in the next figure. $T_{in}$ refers to the temperature of the heat source; $T_{out}$ refers to the temperature of the heat sink.

The thermodynamic efficiency of a reversible engine decreases linearly as a function of $T_c/T_h$. In contrast, the efficiencies of the refrigerator and the heat pump diverge for $T_c/T_h = 1$. This means that no work has to be supplied if there is no temperature difference to "work against".
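
For concrete numbers, consider a freezer interior at 255 K in a 295 K room; these temperatures are illustrative assumptions.

```python
# Coefficients of performance from (3.88) and (3.89).
T_c, T_h = 255.0, 295.0             # K: freezer interior, room (assumed)

eta_fridge = T_c / (T_h - T_c)      # (3.88): heat removed per unit work
eta_pump = T_h / (T_h - T_c)        # (3.89): heat delivered per unit work

print(eta_fridge, eta_pump)         # 6.375 and 7.375; they differ by 1
```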
Example 2. Entropy changes when heating a substance
Suppose we heat a substance reversibly from an initial temperature $T_A$ to a final temperature $T_B$. What is the change of entropy of the substance?
Since this is supposed to be a reversible process, we use the formula

$$ dS = \frac{\delta Q}{T} \,. \qquad (3.90) $$
For processes at constant pressure, such as those occurring at atmospheric pressure, we know that

$$ \delta Q = C_P\,dT \,. \qquad (3.91) $$
We plug in to get:

$$ dS = \frac{C_P}{T}\,dT \,. \qquad (3.92) $$
We get the value of the entropy at the final temperature as:

$$ S(T_B) = S(T_A) + \int_{T_A}^{T_B} \frac{C_P}{T}\,dT \,. \qquad (3.93) $$
The important thing to note is this: the heat capacity as a function of the temperature is an experimentally
accessible function. Thus, one can obtain the value of the entropy at some temperature from the one at
another temperature by integrating the above equation.
For small changes in temperature, that is, whenever $T_A \approx T_B$, we can assume the heat capacity to be independent of the temperature. This allows us to take it out of the integral:

$$ S(T_B) = S(T_A) + C_P \int_{T_A}^{T_B} \frac{dT}{T} = S(T_A) + C_P \ln\frac{T_B}{T_A} \,. \qquad (3.94) $$
As a numerical example, we can look at the change in entropy upon heating one mole of water from 298 K to 299 K:

$$ \Delta S = S(T_B) - S(T_A) = C_P \ln\frac{T_B}{T_A} = 75\,\frac{\mathrm{J}}{\mathrm{K\,mol}} \times \ln\frac{299\,\mathrm{K}}{298\,\mathrm{K}} = 0.25\,\frac{\mathrm{J}}{\mathrm{K\,mol}} \qquad (3.95) $$
Thus, if the entropy at one temperature is known, then it is an easy matter to calculate the entropy at a
slightly higher temperature, provided the heat capacities are known.
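
The arithmetic of (3.94)/(3.95) is easily reproduced, and the same logarithm handles larger intervals as long as C_P stays constant:

```python
# The water example (3.95), plus the same formula (3.94) over a larger
# interval; C_P = 75 J/(K mol) as quoted in the text.
import math

C_P = 75.0
print(C_P * math.log(299 / 298))   # 0.251 J/(K mol), as in (3.95)
print(C_P * math.log(373 / 298))   # heating 298 K -> 373 K: ~16.8 J/(K mol)
```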
Example 3. Entropy of phase transitions
Now consider the change of the entropy of a substance during a phase transition. We again assume that the phase transition is done in a reversible way. This implies, for example, that one melts ice to water at 0 °C (that is: all phase transitions are studied at their transition temperature). We know that phase transitions are associated with an enthalpy change. For a reversible process we have:

$$ \Delta H_{trans} = Q_{rev} \,. \qquad (3.96) $$
We can immediately plug in to obtain:

$$ \Delta S_{trans} = \frac{\Delta H_{trans}}{T_{trans}} \,. \qquad (3.97) $$
If the phase transition is exothermic ($\Delta H_{trans} < 0$), the change in the entropy is negative. For example, in the exothermic freezing of water, the entropy change is negative. We will later see how we can interpret this observation on a microscopic level.
Let us take a look at some typical molar entropies of vaporization of liquids. We notice that all the entropies of vaporization are fairly similar; this similarity is known as Trouton's rule, which states that all substances have entropies of vaporization of about 85 J/(K mol). Water is somewhat of an exception, as it has a fairly high entropy of vaporization. The reason for this empirically found rule is that a comparable amount of disorder is generated when a liquid evaporates. However, water molecules in the liquid phase strongly interact with each other, which introduces some amount of order. This order is lost upon evaporation and, consequently, the associated entropy is larger than for other substances.
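
As an illustration of the rule, (3.97) applied to standard boiling-point data gives the following; the enthalpies of vaporization below are common textbook values, not taken from the course table.

```python
# Trouton's rule via (3.97) for two liquids at their normal boiling points.
dH_vap = {"benzene": 30800.0, "water": 40700.0}   # J/mol (textbook values)
T_vap = {"benzene": 353.0, "water": 373.0}        # boiling points, K

for name in dH_vap:
    print(name, dH_vap[name] / T_vap[name])       # benzene ~87, water ~109 J/(K mol)
```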
