MATRIX AND TENSOR CALCULUS
With Applications to Mechanics,
Elasticity and Aeronautics
ARISTOTLE D. MICHAL
DOVER PUBLICATIONS, INC.
Mineola, New York
Bibliographical Note
This Dover edition, first published in 2008, is an unabridged republication of
the work originally published in 1947 by John Wiley and Sons, Inc., New York,
as part of the GALCIT (Graduate Aeronautical Laboratories, California
Institute of Technology) Aeronautical Series.
Library of Congress Cataloging-in-Publication Data
Michal, Aristotle D., 1899–
Matrix and tensor calculus: with applications to mechanics, elasticity, and
aeronautics / Aristotle D. Michal. Dover ed.
p. cm.
Originally published: New York: J. Wiley, [1947]
Includes index.
ISBN-13: 978-0-486-46246-2
ISBN-10: 0-486-46246-3
1. Calculus of tensors. 2. Matrices. I. Title.
QA433.M45 2008
515'.63 dc22
2008000472
Manufactured in the United States of America
Dover Publications, Inc., 31 East 2nd Street, Mineola, N.Y. 11501
",
To my wife
Luddye Kennerly Michal
EDITOR'S PREFACE
The editors believe that the reader who has finished the study of this
book will see the full justification for including it in a series of volumes
dealing with aeronautical subjects.
However, the editor's preface usually is addressed to the reader who
starts with the reading of the volume, and therefore a few words on our
reasons for including Professor Michal's book on matrices and tensors
in the GALCIT series seem to be appropriate.
Since the beginnings of the modern age of the aeronautical sciences
a close cooperation has existed between applied mathematics and
aeronautics. Engineers at large have always appreciated the help of
applied mathematics in furnishing them practical methods for numerical
and graphical solutions of algebraic and differential equations. However,
aeronautical and also electrical engineers are faced with problems
reaching much further into several domains of modern mathematics.
As a matter of fact, these branches of engineering science have often
exerted an inspiring influence on the development of novel methods in
applied mathematics.
One branch of applied mathematics which fits especially the needs
of the scientific aeronautical engineer is the matrix and tensor calculus.
The matrix operations represent a powerful method for the solution of
problems dealing with mechanical systems with a certain number of
degrees of freedom. The tensor calculus gives admirable insight into
complex problems of the mechanics of continuous media, the mechanics
of fluids, and elastic and plastic media.
Professor Michal's course on the subject given in the frame of the
war-training program on engineering science and management has
found a surprisingly favorable response among engineers of the
aeronautical industry in the Southern California region. The editors
believe that the engineers throughout the country will welcome a book
which skillfully unites exact and clear presentation of mathematical
statements with fitness for immediate practical applications.
THEODORE VON KÁRMÁN
CLARK B. MILLIKAN
PREFACE
This volume is based on a series of lectures on matrix calculus and
tensor calculus, and their applications, given under the sponsorship
of the Engineering, Science, and Management War Training (ESMWT)
program, from August 1942 to March 1943. The group taking the
course included a considerable number of outstanding research engineers
and directors of engineering research and development. I am
very grateful to these men who welcomed me and by their interest
in my lectures encouraged me.
The purpose of this book is to give the reader a working knowledge
of the fundamentals of matrix calculus and tensor calculus, which he
may apply to his own field. Mathematicians, physicists, meteorologists,
and electrical engineers, as well as mechanical and aeronautical
engineers, will discover principles applicable to their respective fields.
The last group, for instance, will find material on vibrations, aircraft
flutter, elasticity, hydrodynamics, and fluid mechanics.
The book is divided into two independent parts, the first dealing
with the matrix calculus and its applications, the second with the
tensor calculus and its applications. The minimum of mathematical
concepts is presented in the introduction to each part, the more
advanced mathematical ideas being developed as they are needed in
connection with the applications in the later chapters.
The two-part division of the book is primarily due to the fact that
matrix and tensor calculus are essentially two distinct mathematical
studies. The matrix calculus is a purely analytic and algebraic subject,
whereas the tensor calculus is geometric, being connected with
transformations of coordinates and other geometric concepts. A careful
reading of the first chapter in each part of the book will clarify
the meaning of the word "tensor," which is occasionally misused in
modern scientific and engineering literature.
I wish to acknowledge with gratitude the kind cooperation of the
Douglas Aircraft Company in making available some of its work in
connection with the last part of Chapter 7 on aircraft flutter. It is a
pleasure to thank several of my students, especially Dr. J. E. Lipp
and Messrs. C. H. Putt and Paul Lieber of the Douglas Aircraft
Company, for making available the material worked out by Mr. Lieber
and his research group. I am also very glad to thank the members of
my seminar on applied mathematics at the California Institute for
their helpful suggestions. I wish to make special mention of Dr. C. C.
Lin, who not only took an active part in the seminar but who also
kindly consented to have his unpublished researches on some dramatic
applications of the tensor calculus to boundary-layer theory in
aeronautics incorporated in Chapter 18. This furnishes an application of
the Riemannian tensor calculus described in Chapter 17. I should
like also to thank Dr. W. Z. Chien for his timely help.
I gratefully acknowledge the suggestions of my colleague Professor
Clark B. Millikan concerning ways of making the book more useful
to aeronautical engineers.
Above all, I am indebted to my distinguished colleague and friend,
Professor Theodore von Kármán, director of the Guggenheim Graduate
School of Aeronautics at the California Institute, for honoring me by
an invitation to put my lecture notes in book form for publication in
the GALCIT series. I have also the delightful privilege of expressing
my indebtedness to Dr. Kármán for his inspiring conversations and
wise counsel on applied mathematics in general and this volume in
particular, and for encouraging me to make contacts with the aircraft
industry on an advanced mathematical level.
I regret that, in order not to delay unduly the publication of this
book, I am unable to include some of my more recent unpublished
researches on the applications of the tensor calculus of curved
infinite-dimensional spaces to the vibrations of elastic beams and other elastic
media.
CALIFORNIA INSTITUTE OF TECHNOLOGY
OCTOBER, 1946
ARISTOTLE D. MICHAL
CONTENTS

PART I
MATRIX CALCULUS AND ITS APPLICATIONS

CHAPTER                                                                  PAGE
1. ALGEBRAIC PRELIMINARIES
     Introduction   1
     Definitions and notations   1
     Elementary operations on matrices   2
2. ALGEBRAIC PRELIMINARIES (Continued)
     Inverse of a matrix and the solution of linear equations   8
     Multiplication of matrices by numbers, and matric polynomials   11
     Characteristic equation of a matrix and the Cayley-Hamilton theorem   12
3. DIFFERENTIAL AND INTEGRAL CALCULUS OF MATRICES
     Power series in matrices   15
     Differentiation and integration depending on a numerical variable   16
4. DIFFERENTIAL AND INTEGRAL CALCULUS OF MATRICES (Continued)
     Systems of linear differential equations with constant coefficients   20
     Systems of linear differential equations with variable coefficients   21
5. MATRIX METHODS IN PROBLEMS OF SMALL OSCILLATIONS
     Differential equations of motion   24
     Illustrative example   26
6. MATRIX METHODS IN PROBLEMS OF SMALL OSCILLATIONS (Continued)
     Calculation of frequencies and amplitudes   28
7. MATRIX METHODS IN THE MATHEMATICAL THEORY OF AIRCRAFT FLUTTER   32
8. MATRIX METHODS IN ELASTIC DEFORMATION THEORY   38

PART II
TENSOR CALCULUS AND ITS APPLICATIONS

9. SPACE LINE ELEMENT IN CURVILINEAR COORDINATES
     Introductory remarks   42
     Notation and summation convention   42
     Euclidean metric tensor   44
10. VECTOR FIELDS, TENSOR FIELDS, AND EUCLIDEAN CHRISTOFFEL SYMBOLS
     The strain tensor   48
     Scalars, contravariant vectors, and covariant vectors   49
     Tensor fields of rank two   50
     Euclidean Christoffel symbols   53
11. TENSOR ANALYSIS
     Covariant differentiation of vector fields   56
     Tensor fields of rank r = p + q, contravariant of rank p and covariant of rank q   57
     Properties of tensor fields   59
12. LAPLACE EQUATION, WAVE EQUATION, AND POISSON EQUATION IN CURVILINEAR COORDINATES
     Some further concepts and remarks on the tensor calculus   60
     Laplace's equation   62
     Laplace's equation for vector fields
     Wave equation   65
     Poisson's equation   66
13. SOME ELEMENTARY APPLICATIONS OF THE TENSOR CALCULUS TO HYDRODYNAMICS
     Navier-Stokes differential equations for the motion of a viscous fluid   69
     Multiple-point tensor fields   71
     A two-point correlation tensor field in turbulence   73
14. APPLICATIONS OF THE TENSOR CALCULUS TO ELASTICITY THEORY
     Finite deformation theory of elastic media   75
     Strain tensors in rectangular coordinates   77
     Change in volume under elastic deformation   79
15. HOMOGENEOUS AND ISOTROPIC STRAINS, STRAIN INVARIANTS, AND VARIATION OF STRAIN TENSOR
     Strain invariants   82
     Homogeneous and isotropic strains   83
     A fundamental theorem on homogeneous strains   84
     Variation of the strain tensor   86
16. STRESS TENSOR, ELASTIC POTENTIAL, AND STRESS-STRAIN RELATIONS
     Stress tensor   89
     Elastic potential   91
     Stress-strain relations for an isotropic medium   93
17. TENSOR CALCULUS IN RIEMANNIAN SPACES AND THE FUNDAMENTALS OF CLASSICAL MECHANICS
     Multidimensional Euclidean spaces   95
     Riemannian geometry   96
     Curved surfaces as examples of Riemannian spaces   98
     The Riemann-Christoffel curvature tensor   99
     Geodesics   100
     Equations of motion of a dynamical system with n degrees of freedom   101
18. APPLICATIONS OF THE TENSOR CALCULUS TO BOUNDARY-LAYER THEORY
     Incompressible and compressible fluids   103
     Boundary-layer equations for the steady motion of a homogeneous incompressible fluid   104

NOTES ON PART I   111
NOTES ON PART II   114
REFERENCES FOR PART I   124
REFERENCES FOR PART II   125
INDEX   129
PART I. MATRIX CALCULUS AND ITS APPLICATIONS

CHAPTER 1

ALGEBRAIC PRELIMINARIES

Introduction.
Although matrices have been investigated by mathematicians for
almost a century, their thoroughgoing application to physics,¹
engineering, and other fields (such as cryptography, psychology, and
educational and other statistical measurements) has taken place
since 1925. In particular, the use of matrices in aeronautical
engineering in connection with small oscillations, aircraft flutter, and elastic
deformations did not receive much attention before 1935. It is
interesting to note that the only book on matrices with systematic chapters
on the differential and integral calculus of matrices was written by
three engineers.†
Definitions and Notations.

A table of mn numbers, called elements, arranged in a rectangular
array of m rows and n columns is called a matrix³ with m rows and n
columns. If $a_j^i$ is the element in the ith row and jth column, then the
matrix can be written down in the following pictorial form with the
conventional double bar on each side:

$$\left\| \begin{matrix} a_1^1 & a_2^1 & \cdots & a_n^1 \\ a_1^2 & a_2^2 & \cdots & a_n^2 \\ \vdots & \vdots & & \vdots \\ a_1^m & a_2^m & \cdots & a_n^m \end{matrix} \right\|$$

In the expression $a_j^i$ the index i is called a superscript and the index j a
subscript. It is to be emphasized that the superscript i in $a_j^i$ is not the
ith power of a variable $a_j$.
If the number m of rows is equal to the number n of columns, then
¹ Superior numbers refer to the notes at the end of the book.
† Frazer, Duncan, and Collar, Elementary Matrices and Some Applications to
Dynamics and Differential Equations, Cambridge University Press, 1938.
the matrix is called a square matrix.† The number of rows, or equivalently
the number of columns, will be called the order of the square
matrix. Besides square matrices, two other general types of matrices
occur frequently. One is the row matrix

$$\| a_1, a_2, \ldots, a_n \|;$$

the other is the column matrix

$$\left\| \begin{matrix} a^1 \\ a^2 \\ \vdots \\ a^m \end{matrix} \right\|$$
It is to be observed that the superscript 1 in the elements of the row
matrix was omitted. Similarly the subscript 1 in the elements of the
column matrix was also omitted. All this is done in the interest of
brevity; the index notation is unnecessary when the index, whether a
subscript or superscript, cannot have at least two values.

It is often very convenient to have a more compact notation for
matrices than the one just given. This compact notation is as follows:
if $a_j^i$ is the element of a matrix in the ith row and jth column we can
write simply

$$\| a_j^i \|$$

instead of stringing out all the mn elements of the matrix. In
particular, a row matrix with element $a_k$ in the kth column will be written
$\| a_k \|$, and a column matrix with element $a^k$ in the kth row will be written
$\| a^k \|$.
Elementary Operations on Matrices.
Before we can use matrices effectively we must define the addition of
matrices and the multiplication of matrices. The definitions are those
that have been found most useful in the general theory and in the
applications.

Let A and B be matrices of the same type, i.e., matrices with the same
number m of rows and the same number n of columns. Let

$$A = \| a_j^i \|, \qquad B = \| b_j^i \|.$$

Then by the sum A + B of the matrices A and B we shall mean the

† It will occasionally be convenient to write $a_{ij}$ for the element in the ith row
and jth column of a square matrix. See Chapter 5 and the following chapters.
uniquely obtainable matrix

$$C = \| c_j^i \|,$$

where

$$c_j^i = a_j^i + b_j^i \qquad (i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n).$$

In other words, to add two matrices of the same type, calculate the
matrix whose elements are precisely the numerical sum of the
corresponding elements of the two given matrices. The addition of two
matrices of different type has no meaning for us.

To complete the preliminary definitions we must make clear what we
mean when we say that two matrices are equal. Two matrices $A = \| a_j^i \|$
and $B = \| b_j^i \|$ of the same type are equal, written as A = B, if and
only if the numerical equalities $a_j^i = b_j^i$ hold for each i and j.
Exercise

Let

$$A = \left\| \begin{matrix} 1 & 1 & 0 & 5 \\ 0 & 0 & 3 & -2 \\ -1.1 & 2 & 4 & -1 \end{matrix} \right\|, \qquad B = \left\| \begin{matrix} 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 3 \\ -1 & 0 & -2 & 4 \end{matrix} \right\|.$$

Then

$$A + B = \left\| \begin{matrix} 1 & 1 & 0 & 6 \\ 0 & 0 & 2 & 1 \\ -2.1 & 2 & 2 & 3 \end{matrix} \right\|$$
The following results, embodied in a theorem, show that matric
addition has some of the properties of numerical addition.

THEOREM. If A and B are any two matrices of the same type, then

$$A + B = B + A.$$

If C is any third matrix of the same type as A and B, then

$$(A + B) + C = A + (B + C).$$
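The definition and the theorem are easy to check numerically. The following is an illustrative sketch in Python (the helper name `mat_add` is ours, not from the text); matrices are represented as lists of rows.

```python
def mat_add(A, B):
    """Sum of two matrices of the same type: element-by-element addition."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "matrices must be of the same type"
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 1, 0], [0, 3, -2]]
B = [[0, 0, 1], [4, -1, 3]]
C = [[2, 2, 2], [2, 2, 2]]

# The theorem's two properties: commutativity and associativity of matric addition.
assert mat_add(A, B) == mat_add(B, A)
assert mat_add(mat_add(A, B), C) == mat_add(A, mat_add(B, C))
```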
Before we proceed with the definition of multiplication of matrices,
a word or two must be said about two very important special square
matrices. One is the zero matrix, i.e., a square matrix all of whose
elements are zero,

$$\left\| \begin{matrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{matrix} \right\|$$
We can denote the zero matrix by the capital letter O. Occasionally
we shall use the terminology zero matrix for a non-square matrix with
zero elements.

The other is the unit matrix, i.e., a matrix

$$I = \| \delta_j^i \|,$$

where

$$\delta_j^i = 1 \text{ if } i = j, \qquad \delta_j^i = 0 \text{ if } i \neq j.$$

In the more explicit notation

$$I = \left\| \begin{matrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{matrix} \right\|$$
One of the most useful and simplifying conventions in all mathematics
is the summation convention: the repetition of an index once as a
subscript and once as a superscript will indicate a summation over the
total range of that index. For example, if the range of the indices is
1 to 5, then

$$a_i b^i \text{ means } \sum_{i=1}^{5} a_i b^i = a_1 b^1 + a_2 b^2 + a_3 b^3 + a_4 b^4 + a_5 b^5.$$

Again we warn the reader that the superscript i in $b^i$ is not the ith power
of a variable b.
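The summation convention is easy to mimic in code. A small sketch (the function name `contract` is ours):

```python
def contract(a, b):
    """a_i b^i: a sum over the repeated index i, per the summation convention."""
    return sum(a_i * b_i for a_i, b_i in zip(a, b))

# With the range of the index 1 to 5, a_i b^i is a single number:
a = [1, 2, 3, 4, 5]
b = [1, 1, 1, 1, 1]
assert contract(a, b) == 1 + 2 + 3 + 4 + 5
```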
The definition of the multiplication of two matrices can now be given
in a neat form with the aid of the summation convention. Let

$$A = \left\| \begin{matrix} a_1^1 & a_2^1 & \cdots & a_m^1 \\ a_1^2 & a_2^2 & \cdots & a_m^2 \\ \vdots & \vdots & & \vdots \\ a_1^n & a_2^n & \cdots & a_m^n \end{matrix} \right\|, \qquad B = \left\| \begin{matrix} b_1^1 & b_2^1 & \cdots & b_p^1 \\ b_1^2 & b_2^2 & \cdots & b_p^2 \\ \vdots & \vdots & & \vdots \\ b_1^m & b_2^m & \cdots & b_p^m \end{matrix} \right\|$$
Then, by the product AB of the two matrices, we shall mean the matrix

$$C = \| c_j^i \|,$$

where

$$c_j^i = a_m^i b_j^m \qquad (i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, p).$$

If $c_j^i$ is written out in extenso, without the aid of the summation
convention, we have

$$c_j^i = a_1^i b_j^1 + a_2^i b_j^2 + \cdots + a_m^i b_j^m.$$
It should be emphasized here that, in order that the product AB of
two matrices be well defined, the number of rows in the matrix B must
be precisely equal to the number of columns in the matrix A. It follows
in particular that, if A and B are square matrices of the same type, then
AB as well as BA is always well defined. However, it must be emphasized
that in general AB is not equal to BA, written as AB ≠ BA,
even if both AB and BA are well defined. In other words,
multiplication of matrices, unlike numerical multiplication, is not
always commutative.
Exercise

The following example illustrates the noncommutativity of matrix
multiplication. Take

$$A = \left\| \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right\|, \quad \text{so that } a_1^1 = 0,\ a_2^1 = 1,\ a_1^2 = 1,\ a_2^2 = 0,$$

and

$$B = \left\| \begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix} \right\|, \quad \text{so that } b_1^1 = 1,\ b_2^1 = 0,\ b_1^2 = 0,\ b_2^2 = -1.$$

Now

$$c_1^1 = a_m^1 b_1^m = (0)(1) + (1)(0) = 0,$$
$$c_2^1 = a_m^1 b_2^m = (0)(0) + (1)(-1) = -1,$$
$$c_1^2 = a_m^2 b_1^m = (1)(1) + (0)(0) = 1,$$
$$c_2^2 = a_m^2 b_2^m = (1)(0) + (0)(-1) = 0.$$

Hence

$$AB = \left\| \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right\|.$$

Similarly

$$BA = \left\| \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right\|.$$

But obviously AB ≠ BA.
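The product rule $c_j^i = a_m^i b_j^m$ and the failure of commutativity can be checked directly. A sketch in Python (the helper name `mat_mul` is ours), using a two-rowed pair of matrices:

```python
def mat_mul(A, B):
    """Product AB with c_j^i = a_m^i b_j^m; columns of A must equal rows of B."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][m] * B[m][j] for m in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[0, 1], [1, 0]]
B = [[1, 0], [0, -1]]
AB = mat_mul(A, B)
BA = mat_mul(B, A)
assert AB != BA  # matrix multiplication is not always commutative
```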
The unit matrix I of order n has the interesting property that it
commutes with all square matrices of the same order. In fact, if A is
an arbitrary square matrix of order n, then
AI = IA = A.
The multiplication of row and column matrices with the same
number of elements is instructive. Let

$$A = \| a_i \|$$

be the row matrix and

$$B = \| b^i \|$$

the column matrix. Then $AB = a_i b^i$, a number, or a matrix with one
element (the double-bar notation has been omitted).
Exercise

If $A = \| 1, 1, 0 \|$ and B is the column matrix with elements 0, 0, 1, then

$$AB = (1)(0) + (1)(0) + (0)(1) = 0.$$

This example also illustrates the fact that the product of two matrices
can be a zero matrix although neither of the multiplied matrices is a zero
matrix.
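The same fact, as a short sketch in code (the variable names are ours):

```python
# A row matrix times a column matrix gives the single number a_i b^i.
# The product can vanish although neither factor is a zero matrix.
row = [1, 1, 0]
col = [0, 0, 1]
product = sum(a * b for a, b in zip(row, col))
assert product == 0
assert any(row) and any(col)  # yet neither factor is the zero matrix
```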
The multiplication of a square matrix with a column matrix occurs
frequently in the applications. A system of n linear algebraic equations
in n unknowns $x^1, x^2, \ldots, x^n$,

$$a_j^i x^j = b^i,$$

can be written as a single matrix equation

$$AX = B$$

in the unknown column matrix $X = \| x^i \|$ and the given square
matrix $A = \| a_j^i \|$ and column matrix $B = \| b^i \|$.
A system of first-order differential equations

$$\frac{dx^i}{dt} = a_j^i x^j$$

can be written as one matric differential equation

$$\frac{dX}{dt} = AX.$$

Finally a system of second-order differential equations occurring
in the theory of small oscillations

$$\frac{d^2 x^i}{dt^2} = a_j^i x^j$$
can be written as one matric second-order differential equation

$$\frac{d^2 X}{dt^2} = AX.$$

The above illustrations suffice to show the compactness and
simplicity of matric equations when use is made of matrix multiplication.
Exercises

1. Compute the matrix AB for a given pair of matrices A and B. Is BA defined? Explain.

2. Compute the matrix AX for a given square matrix A and column matrix X. Is XA defined? Explain.
CHAPTER 2
ALGEBRAIC PRELIMINARIES (Continued)
Inverse of a Matrix and the Solution of Linear Equations.¹

The inverse $a^{-1}$, or reciprocal, of a real number a is well defined if
$a \neq 0$. There is an analogous operation for square matrices. If A is a
square matrix

$$A = \| a_j^i \|$$

of order n and if the determinant $| a_j^i | \neq 0$, or in more extended
notation

$$\begin{vmatrix} a_1^1 & a_2^1 & \cdots & a_n^1 \\ a_1^2 & a_2^2 & \cdots & a_n^2 \\ \vdots & \vdots & & \vdots \\ a_1^n & a_2^n & \cdots & a_n^n \end{vmatrix} \neq 0,$$

then there exists a unique matrix, written $A^{-1}$ in analogy to the inverse of
a number, with the important properties

$$(2 \cdot 1) \qquad AA^{-1} = I, \qquad A^{-1}A = I \qquad (I \text{ is the unit matrix}).$$

The matrix $A^{-1}$, if it exists, is called the inverse matrix of A.
In fact, the following more extensive result holds good. A necessary
and sufficient condition that a matrix $A = \| a_j^i \|$ have an inverse is that
the associated determinant $| a_j^i | \neq 0$.

From now on we shall refer to the determinant $a = | a_j^i |$ as the
determinant a of the matrix A. Occasionally we shall write $| A |$ for
the determinant of A.
The general form of the inverse of a matrix can be given with the
aid of a few results from the theory of determinants. Let $a = | a_j^i |$
be a determinant, not necessarily different from zero. Let $\alpha_j^i$ be the
cofactor† of $a_i^j$ in the determinant a; note that the indices i and j are
interchanged in $\alpha_j^i$ as compared with $a_i^j$. Then the following results

† The (n − 1)-rowed determinant obtained from the determinant a by striking
out the ith row and jth column in a, and then multiplying the result by $(-1)^{i+j}$.
come from the properties of determinants:

$$a_k^i \alpha_j^k = a\,\delta_j^i \quad \text{(expansion by elements of the ith row);}$$
$$\alpha_k^i a_j^k = a\,\delta_j^i \quad \text{(expansion by elements of the kth column).}$$

If then the determinant $a \neq 0$, we obtain the following relations,

$$(2 \cdot 2) \qquad a_k^i \beta_j^k = \delta_j^i, \qquad \beta_k^i a_j^k = \delta_j^i,$$

on defining

$$\beta_j^i = \frac{\alpha_j^i}{a}.$$
Let $A = \| a_j^i \|$, $B = \| \beta_j^i \|$; then relations 2·2 state, in terms of
matrix multiplication, that

$$AB = I, \qquad BA = I.$$

In other words, the matrix B is precisely the inverse matrix $A^{-1}$ of A.

To summarize, we have the following computational result: if the
determinant a of a square matrix $A = \| a_j^i \|$ is different from zero, then
the inverse matrix $A^{-1}$ of A exists and is given by

$$A^{-1} = \| \beta_j^i \|,$$

where $\beta_j^i = \dfrac{\alpha_j^i}{a}$ and $\alpha_j^i$ is the cofactor of $a_i^j$ in the determinant a of the
matrix A.
These results on the inverse of a matrix have a simple application to
the solution of n nonhomogeneous linear (algebraic) equations in n
unknowns $x^1, x^2, \ldots, x^n$. Let the n equations be

$$a_j^i x^j = b^i$$

(the $n^2$ numbers $a_j^i$ are given and the n numbers $b^i$ are given). On
defining the matrices

$$A = \| a_j^i \|, \qquad X = \| x^i \|, \qquad B = \| b^i \|,$$

we can, as in the first chapter, write the n linear equations as one matric
equation

$$AX = B$$

in the unknown column matrix X. If we now assume that the
determinant a of the matrix A is not zero, the inverse matrix $A^{-1}$ will
exist and we shall have by matrix multiplication

$$A^{-1}(AX) = A^{-1}B.$$

Since $A^{-1}A = I$ and $IX = X$, we obtain the solution

$$X = A^{-1}B$$

of the equation AX = B. In other words, if $\alpha_j^i$ is the cofactor of $a_i^j$ in
the determinant a of A, then $x^i = \alpha_j^i b^j / a$ is the solution of the system
of n equations $a_j^i x^j = b^i$ under the condition $a \neq 0$. This is equivalent
to Cramer's rule² for the solution of nonhomogeneous linear equations
as ratios of determinants. It is more explicit than Cramer's rule in
that the determinants in the numerator of the solution expressions are
expanded in terms of the given right-hand sides $b^1, b^2, \ldots, b^n$ of the
linear equations. It is sometimes possible to solve the equations $a_j^i x^j =
b^i$ readily and obtain $x^i = \lambda_j^i b^j$. The inverse matrix $A^{-1}$ to $A = \| a_j^i \|$
can then be read off by inspection; in fact, $A^{-1} = \| \lambda_j^i \|$.
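The cofactor solution $x^i = \alpha_j^i b^j / a$ can be sketched directly in code. The helper names below are ours, and exact rational arithmetic is used so the assertions are exact:

```python
from fractions import Fraction

def minor(M, i, j):
    """Delete row i and column j of M."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def solve(A, b):
    """x^i = alpha_j^i b^j / a: cofactors with indices interchanged, against b."""
    a = det(A)
    assert a != 0, "a nonzero determinant is required"
    n = len(A)
    cof = lambda i, j: (-1) ** (i + j) * det(minor(A, i, j))  # cofactor of the (i, j) element
    return [sum(cof(j, i) * b[j] for j in range(n)) / a for i in range(n)]

# 2x + y = 3 and x + y = 2 have the solution x = 1, y = 1.
A = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(1)]]
b = [Fraction(3), Fraction(2)]
assert solve(A, b) == [1, 1]
```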
Practical methods, including approximate methods, for the calculation
of the inverse (sometimes called reciprocal) of a matrix are given in
Chapter IV of the book on matrices by Frazer, Duncan, and Collar.
A method based on the Cayley-Hamilton theorem will be presented at
the end of the chapter.

A simple example on the inverse of a matrix would be instructive at
this point.
Exercise

Consider the two-rowed matrix

$$A = \left\| \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right\|.$$

According to our notations

$$a_1^1 = 0, \quad a_2^1 = -1, \quad a_1^2 = 1, \quad a_2^2 = 0.$$

Hence the cofactors $\alpha_j^i$ of A will be

$$\alpha_1^1 = (\text{cofactor of } a_1^1) = 0, \quad \alpha_2^1 = (\text{cofactor of } a_1^2) = 1,$$
$$\alpha_1^2 = (\text{cofactor of } a_2^1) = -1, \quad \alpha_2^2 = (\text{cofactor of } a_2^2) = 0.$$

Now $A^{-1} = \| \beta_j^i \|$, where $\beta_j^i = \alpha_j^i / a$. But the determinant of A is
$a = 1$. This gives us immediately $\beta_1^1 = 0$, $\beta_2^1 = 1$, $\beta_1^2 = -1$, $\beta_2^2 = 0$.
In other words,

$$A^{-1} = \left\| \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right\|.$$
Approximate numerical examples abound in the study of airplane
wing oscillations. For example, if

$$A = \left\| \begin{matrix} 0.0176 & 0.000128 & 0.00289 \\ 0.000128 & 0.00000824 & 0.0000413 \\ 0.00289 & 0.0000413 & 0.000725 \end{matrix} \right\|,$$

then approximately

$$A^{-1} = \left\| \begin{matrix} 170.9 & 1063. & -741.7 \\ 1063. & 176{,}500. & -14{,}290. \\ -741.7 & -14{,}290. & 5{,}150. \end{matrix} \right\|.$$

See exercise 2 at the end of Chapter 7.
From the rule for the product of two determinants,³ the following
result is immediate on observing closely the definition of the product
of two matrices:

If A and B are two square matrices with determinants a and b respectively,
then the determinant c of the matric product C = AB is given by the
numerical multiplication of the two numbers a and b, i.e., c = ab.

This result enables us to calculate immediately the determinant of
the inverse of a matrix. Since $AA^{-1} = I$, and since the determinant of
the unit matrix I is 1, the above result shows that the determinant of
$A^{-1}$ is 1/a, where a is the determinant of A.
From the associativity of the operation of multiplication of square
matrices and the properties of inverses of matrices, the usual index
laws for powers of numbers hold good for powers of matrices even
though matric multiplication is not commutative. By the associativity
of the operation of matric multiplication we mean that, if A, B, C are
any three square matrices of the same order, then†

$$A(BC) = (AB)C.$$

If then A is a square matrix, there is a unique matrix $AA \cdots A$ with
s factors for any given positive integer s. We shall write this matrix
as $A^s$ and call it the sth power of the matrix A. Now if we define
$A^0 = I$, the unit matrix, then the following index laws hold for all
positive integral and zero indices r and s:

$$A^r A^s = A^s A^r = A^{r+s},$$
$$(A^r)^s = (A^s)^r = A^{rs}.$$

Furthermore, these index laws hold for all integral r and s, positive or
negative, whenever $A^{-1}$ exists. This is with the understanding that
negative powers of matrices are defined as positive powers of their inverses,
i.e., $A^{-r}$ is defined for any positive integer r by

$$A^{-r} = (A^{-1})^r.$$
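The index laws can be spot-checked numerically. A sketch (helper names ours):

```python
def mat_mul(A, B):
    """Product of an n-by-m matrix A and an m-by-p matrix B."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

def mat_pow(A, r):
    """A^r for r >= 0, with A^0 = I, the unit matrix."""
    n = len(A)
    result = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(r):
        result = mat_mul(result, A)
    return result

A = [[1, 1], [0, 1]]
# A^r A^s = A^s A^r = A^(r+s), and (A^r)^s = A^(rs):
assert mat_mul(mat_pow(A, 2), mat_pow(A, 3)) == mat_pow(A, 5)
assert mat_mul(mat_pow(A, 3), mat_pow(A, 2)) == mat_pow(A, 5)
assert mat_pow(mat_pow(A, 2), 3) == mat_pow(A, 6)
```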
Multiplication of Matrices by Numbers, and Matric Polynomials.

Besides the operations on matrices that have been discussed up to
this section, there is still another one that is of great importance. If
$A = \| a_j^i \|$ is a matrix, not necessarily a square matrix, and $\alpha$ is a
number, real or complex, then by $\alpha A$ we shall mean the matrix $\| \alpha a_j^i \|$.
This operation of multiplication by numbers enables us to consider
matrix polynomials of type

$$(2 \cdot 3) \qquad a_0 A^n + a_1 A^{n-1} + a_2 A^{n-2} + \cdots + a_{n-1} A + a_n I.$$

† Similarly, if the two square matrices A and B and the column matrix X have
the same number of rows, then $(AB)X = A(BX)$.
In expression 2·3, $a_0, a_1, \ldots, a_n$ are numbers, A is a square matrix,
and I is the unit matrix of the same order as A. In a given matric
polynomial, the $a_i$'s are given numbers, and A is a variable square
matrix.
Characteristic Equation of a Matrix and the Cayley-Hamilton Theorem.

We are now in a position to discuss some results whose importance
cannot be overestimated in the study of vibrations of all sorts (see
Chapter 6).

If $A = \| a_j^i \|$ is a given square matrix of order n, one can form the
matrix $\lambda I - A$, called the characteristic matrix of A. The determinant
of this matrix, considered as a function of $\lambda$, is a (numerical) polynomial
of degree n in $\lambda$, called the characteristic function of A. More
explicitly, let $f(\lambda) = | \lambda I - A |$; then $f(\lambda)$ has the form

$$f(\lambda) = \lambda^n + a_1 \lambda^{n-1} + \cdots + a_{n-1} \lambda + a_n.$$

Since $a_n = f(0)$, we see that $a_n = | -A |$;
i.e., $a_n$ is $(-1)^n$ times the determinant of the matrix A. The algebraic
equation of degree n for $\lambda$,

$$f(\lambda) = 0,$$

is called the characteristic equation of the matrix A, and the roots of the
equation are called the characteristic roots of A.
We shall close this chapter with what is, perhaps, the most famous
theorem in the algebra of matrices.

THE CAYLEY-HAMILTON THEOREM. Let

$$f(\lambda) = \lambda^n + a_1 \lambda^{n-1} + \cdots + a_{n-1} \lambda + a_n$$

be the characteristic function of a matrix A, and let I and O be the unit
matrix and zero matrix respectively with an order equal to that of A.
Then the matric polynomial equation

$$X^n + a_1 X^{n-1} + \cdots + a_{n-1} X + a_n I = O$$

is satisfied by X = A.
Example

Take $A = \left\| \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right\|$; then $f(\lambda) = \begin{vmatrix} \lambda & -1 \\ -1 & \lambda \end{vmatrix} = \lambda^2 - 1$. Here
$n = 2$, and $a_1 = 0$, $a_2 = -1$. Hence $A^2 - I = O$.

The Cayley-Hamilton theorem is often laconically stated in the
form "A matrix satisfies its own characteristic equation." In symbols,
if $f(\lambda)$ is the characteristic function for a matrix A, then $f(A) = O$.
Such statements are, of course, nonsensical if taken literally at their
face value. However, such mnemonics are useful to those who
thoroughly understand the statement of the Cayley-Hamilton theorem.
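A two-rowed matrix with characteristic function $\lambda^2 - 1$ must satisfy $A^2 - I = O$, and this can be verified by direct computation. A sketch (the particular matrix and helper name are our choices):

```python
def mat_mul(A, B):
    """Product of two square matrices of the same order."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[0, 1], [1, 0]]          # characteristic function f(lambda) = lambda^2 - 1
A2 = mat_mul(A, A)
I = [[1, 0], [0, 1]]
f_of_A = [[A2[i][j] - I[i][j] for j in range(2)] for i in range(2)]
assert f_of_A == [[0, 0], [0, 0]]  # f(A) = O: A satisfies its own characteristic equation
```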
A knowledge of the characteristic function of a matrix enables one
to compute the inverse of a matrix, if it exists, with the aid of the
Cayley-Hamilton theorem. In fact, let A be an n-rowed square matrix with an
inverse $A^{-1}$. This implies that the determinant a of A is not zero.
Since $a_n = (-1)^n a \neq 0$, we find with the aid of the Cayley-Hamilton
theorem that A satisfies the matric equation

$$I = -\frac{1}{a_n} \left[ A^n + a_1 A^{n-1} + \cdots + a_{n-1} A \right].$$

Multiplying both sides by $A^{-1}$, we see that the inverse matrix $A^{-1}$ can
be computed by the following formula:

$$(2 \cdot 4) \qquad A^{-1} = -\frac{1}{a_n} \left[ A^{n-1} + a_1 A^{n-2} + \cdots + a_{n-2} A + a_{n-1} I \right].$$
To compute $A^{-1}$ by formula 2·4 one has to know the coefficients $a_1,
a_2, \ldots, a_{n-1}, a_n$ in the characteristic function of the given matrix A.

Let $A = \| a_j^i \|$; then the trace of the matrix A, written tr (A), is
defined by tr (A) = $a_1^1 + a_2^2 + \cdots + a_n^n$, the sum of the n diagonal
elements. Define the numbers $s_1, s_2, \ldots, s_n$ by

$$(2 \cdot 5) \qquad s_1 = \text{tr}(A), \quad s_2 = \text{tr}(A^2), \quad \ldots, \quad s_r = \text{tr}(A^r), \quad \ldots, \quad s_n = \text{tr}(A^n),$$

so that $s_r$ is the trace of the rth power of the given matrix A. It can be
shown⁴ by a long algebraic argument that the numbers $a_1, \ldots, a_n$
can be computed successively by the following recurrence formulas:

$$(2 \cdot 6) \qquad
\begin{aligned}
a_1 &= -s_1 \\
a_2 &= -\tfrac{1}{2}(a_1 s_1 + s_2) \\
a_3 &= -\tfrac{1}{3}(a_2 s_1 + a_1 s_2 + s_3) \\
&\;\;\vdots \\
a_n &= -\frac{1}{n}(a_{n-1} s_1 + a_{n-2} s_2 + \cdots + a_1 s_{n-1} + s_n).
\end{aligned}$$
We can summarize our results in the following rule for the calculation
of the inverse matrix $A^{-1}$ to a given matrix A.

A RULE FOR CALCULATION OF THE INVERSE MATRIX $A^{-1}$. First
compute the first n − 1 powers $A, A^2, \ldots, A^{n-1}$ of the given n-rowed
matrix A. Then compute the diagonal elements only of $A^n$.
Next compute the n numbers $s_1, s_2, \ldots, s_n$ as defined in 2·5. Insert
these values for the $s_i$ in formula 2·6, and calculate $a_1, a_2, \ldots, a_n$
successively by means of 2·6. Finally by formula 2·4 one can calculate
$A^{-1}$ from the knowledge of $a_1, \ldots, a_n$ and the matrices $A, A^2,
\ldots, A^{n-1}$. Notice that the whole $A^n$ is not needed in the calculation
but merely $s_n = \text{tr}(A^n)$, the trace of $A^n$.
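The rule above can be sketched end to end in code (the function names are ours; exact rational arithmetic is used, and for brevity the full power $A^n$ is formed rather than its diagonal only):

```python
from fractions import Fraction

def mat_mul(A, B):
    """Product of two square matrices of the same order."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def inverse_by_cayley_hamilton(A):
    """Inverse via the characteristic coefficients a_r computed from traces."""
    n = len(A)
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    powers = [I]
    for _ in range(n):
        powers.append(mat_mul(powers[-1], A))   # A^0, A^1, ..., A^n
    s = [trace(P) for P in powers[1:]]          # s_r = tr(A^r), r = 1, ..., n
    a = [Fraction(1)]                           # a_0 = 1 by convention
    for r in range(1, n + 1):                   # a_r = -(1/r)(a_{r-1} s_1 + ... + a_1 s_{r-1} + s_r)
        a.append(-sum(a[r - k] * s[k - 1] for k in range(1, r + 1)) / r)
    # A^{-1} = -(1/a_n)[A^{n-1} + a_1 A^{n-2} + ... + a_{n-1} I]
    return [[-sum(a[k] * powers[n - 1 - k][i][j] for k in range(n)) / a[n]
             for j in range(n)] for i in range(n)]

A = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(1)]]
assert mat_mul(A, inverse_by_cayley_hamilton(A)) == [[1, 0], [0, 1]]
```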
Punched-card methods can be used to calculate the powers of the
matrix A. The rest of the calculations are easily made by standard
calculating machines. Hence one method of getting numerical solutions
of a system of n linear equations in the n unknowns $x^i$,

$$a_j^i x^j = b^i \qquad (| a_j^i | \neq 0),$$

is to compute $A^{-1}$ of $A = \| a_j^i \|$ by the above rule with the aid of
punched-card methods and then to compute $A^{-1}B$, where $B = \| b^i \|$,
by punched-card methods. The solution column matrix $X = \| x^i \|$
is given by $X = A^{-1}B$.
Exercises

1. Calculate the inverse matrix to $A = \left\| \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right\|$ by the last method of this chapter.

Solution. Here $s_1 = 0$ and $s_2 = 2$, so $a_1 = 0$ and $a_2 = -1$. Now

$$A^{-1} = -\frac{1}{a_2}[A + a_1 I] = A.$$

Hence $A^{-1} = \left\| \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right\|$.

2. See the exercise given in M. D. Bingham's paper. See the bibliography.

3. Calculate $A^{-1}$ by the above rule when

$$A = \left\| \begin{matrix} 15 & 11 & 6 & 9 & 15 \\ 1 & 3 & 9 & 3 & 8 \\ 7 & 6 & 6 & 3 & 11 \\ 7 & 7 & 5 & 3 & 11 \\ 17 & 12 & 5 & 10 & 16 \end{matrix} \right\|$$

After calculating $A^2$, $A^3$, $A^4$, and the diagonal elements of $A^5$, calculate
$s_1 = 5$, $s_2 = -41$, $s_3 = -217$, $s_4 = -17$, $s_5 = 3185$. Inserting these values in 2·6, find

$$a_1 = -5, \quad a_2 = 33, \quad a_3 = -51, \quad a_4 = 135, \quad a_5 = 225.$$

Incidentally the characteristic equation of A is

$$f(\lambda) = \lambda^5 - 5\lambda^4 + 33\lambda^3 - 51\lambda^2 + 135\lambda + 225 = (\lambda + 1)(\lambda^2 - 3\lambda + 15)^2 = 0.$$

Finally, using formula 2·4, find

$$A^{-1} = \frac{1}{225} \left\| \begin{matrix} 207 & 64 & 124 & 111 & 171 \\ 315 & 30 & 195 & 180 & 270 \\ 315 & 30 & 30 & 45 & 270 \\ 225 & 75 & 75 & 0 & 225 \\ 414 & 53 & 52 & 3 & 342 \end{matrix} \right\|$$
CHAPTER 3

DIFFERENTIAL AND INTEGRAL CALCULUS OF MATRICES
Power Series in Matrices.
Before we discuss the subject of power series, it is convenient to make a few introductory remarks on general series in matrices. Let A_0, A_1, A_2, A_3, ... be an infinite sequence of matrices of the same type (i.e., same number of rows and columns) and let S_p = A_0 + A_1 + A_2 + ... + A_p be the matric sum of the matrices A_0, A_1, A_2, ..., and A_p. If every element in the matrix S_p converges (in the ordinary numerical sense) as p tends to infinity, then by S = lim_{p→∞} S_p we shall mean the matrix S of the limiting elements. If then the matrix S = lim_{p→∞} S_p exists in the above sense, we shall say, by definition, that the matric infinite series

Σ_{r=0}^∞ A_r converges to the matrix S.
Example

Take A_0 = I, A_1 = I, A_2 = (1/2!)I, A_3 = (1/3!)I, ..., A_p = (1/p!)I. Then

S_p = A_0 + A_1 + A_2 + ... + A_p = (1 + 1 + 1/2! + 1/3! + ... + 1/p!)I.

Hence, on recalling the expansion for the exponential e, we find that lim_{p→∞} S_p = eI. In other words, Σ_{r=0}^∞ A_r = eI.
If A is a square matrix and the a_1, a_2, ... are numbers, one can consider matric power series in A

Σ_{r=0}^∞ a_r A^r.

In other words, matric power series are particular matric series in which each matrix A_r is of the special type† A_r = a_r A^r, where A^r is the rth power of a square matrix A. (A^0 = I is the identity matrix.) Clearly matric polynomials (see Chapter 2) are special matric power series in which all the numbers a_i after a certain value of i are zero.

An important example of a matric power series is the matric exponential function e^A defined by the following matric power series:

e^A = I + A + (1/2!)A^2 + (1/3!)A^3 + ... .

† The index r is not summed.
The following properties of the matrix exponential have been used frequently in investigations on the matrix calculus:

1. The matric power series expansion for e^A is convergent for all square matrices A.
2. e^A e^B = e^(A+B) = e^B e^A whenever A and B are commutative matrices, i.e., whenever AB = BA.
3. e^A e^(-A) = e^(-A) e^A = I. (These relations express the fact that e^(-A) is the inverse matrix of e^A.)
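These properties are easy to check numerically. The sketch below (Python with NumPy, a modern illustration and not part of the original text) sums partial sums of the defining series as a stand-in for e^A, and verifies that e^A e^(-A) = I always holds, while e^A e^B = e^(A+B) fails for a pair with AB ≠ BA:

```python
import numpy as np

def mexp(A, terms=40):
    # Partial sum I + A + A^2/2! + ... of the matric exponential series.
    S, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k      # term is now A^k / k!
        S = S + term
    return S

A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
print(np.allclose(mexp(A) @ mexp(-A), np.eye(2)))   # True: e^A e^-A = I
print(np.allclose(mexp(A) @ mexp(B), mexp(A + B)))  # False: here AB != BA
```

The two test matrices are arbitrary choices; any non-commuting pair exhibits the failure of the addition law.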
Every numerical power series has its matric analogue. However, the corresponding matric power series have more complicated properties. Other examples are, say, the matric sine, sin A, and the matric cosine, cos A, defined by

sin A = A - (1/3!)A^3 + (1/5!)A^5 - ...

cos A = I - (1/2!)A^2 + (1/4!)A^4 - ... .

The usual trigonometric identities are not always satisfied by sin A and cos A for arbitrary matrices.
Differentiation and Integration of Matrices Depending on a Numerical Variable.

Let A(t) be a matrix depending on a numerical variable t so that the elements of A(t) are numerical functions of t:

A(t) = || a^1_1(t), a^1_2(t), ..., a^1_n(t)
          ..................................
          a^n_1(t), a^n_2(t), ..., a^n_n(t) ||.

Then we define the derivative of A(t), and write it,

dA(t)/dt = || da^i_j(t)/dt ||,

the matrix whose elements are the derivatives of the corresponding elements of A(t). Similarly we define the integral of A(t) by

∫A(t) dt = || ∫a^i_j(t) dt ||.
It is no mathematical feat to show that differentiation of matrices has the following properties:

(3·1) d[A(t) + B(t)]/dt = dA(t)/dt + dB(t)/dt

(3·2) d[A(t)B(t)]/dt = (dA(t)/dt)B(t) + A(t)(dB(t)/dt)

(3·3) d[A(t)B(t)C(t)]/dt = (dA(t)/dt)B(t)C(t) + A(t)(dB(t)/dt)C(t) + A(t)B(t)(dC(t)/dt),

etc. There are important immediate consequences of properties 3·2 and 3·3. For example, from 3·2 and A^-1(t)A(t) = I, we see that

(3·4) dA^-1(t)/dt = -A^-1(t)(dA(t)/dt)A^-1(t).

Also, from 3·3, we obtain

(3·5) dA^3(t)/dt = (dA(t)/dt)A^2(t) + A(t)(dA(t)/dt)A(t) + A^2(t)(dA(t)/dt).

There are similar formulas for the derivative of any positive integral power of A(t).
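Formula 3·4 can be checked by a small finite-difference experiment. In this sketch (Python with NumPy, added as an illustration) the matrix A(t) is an arbitrary choice, not one taken from the text:

```python
import numpy as np

def A_of(t):                       # an arbitrary illustrative matrix A(t)
    return np.array([[2.0 + t, 1.0], [t**2, 3.0]])

def dA_of(t):                      # its exact elementwise derivative
    return np.array([[1.0, 0.0], [2.0 * t, 0.0]])

t, h = 0.7, 1e-6
Ainv = np.linalg.inv(A_of(t))
formula = -Ainv @ dA_of(t) @ Ainv                      # formula 3·4
# central difference of the elements of A^-1(t):
numeric = (np.linalg.inv(A_of(t + h)) - np.linalg.inv(A_of(t - h))) / (2 * h)
print(np.allclose(formula, numeric, atol=1e-6))        # True
```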
If t is a real variable and A a constant square matrix, then one obtains

d(t^r A^r)/dt = r t^(r-1) A^r.

Then, with the usual term-by-term differentiation of the numerical exponential, the following differentiation can be justified:

(3·6) d(e^(tA))/dt = A + tA^2 + (t^2/2!)A^3 + ... = Ae^(tA) = e^(tA)A.
There is an important theorem in the matrix calculus that turns up in the mathematical theory of aircraft flutter (see Chapter 7). The proof, into which we cannot enter here, makes use of the modern theory of functionals.

THEOREM. If F(λ) is a power series that converges for all λ, then the matric power series F(A) can be computed by the expansion

(3·7) F(A) = Σ_{i=1}^n F(λ_i) G_i,

where A is an n-rowed square matrix with n distinct characteristic roots λ_1, λ_2, ..., λ_n, and G_1, G_2, ..., G_n are n matrices defined by

(3·8) G_i = (1 / Π_{j≠i}(λ_j - λ_i)) Π_{j≠i}(λ_j I - A).

There are a few matters that must be kept in mind in order to have a clear understanding of the meaning of this result. In the first place the matric power series F(A) = a_0 I + a_1 A + a_2 A^2 + ... whenever F(λ) = a_0 + a_1 λ + a_2 λ^2 + ... . In other words λ^0 = 1 is "replaced" by A^0 = I, the unit matrix, in the transition from F(λ) to F(A). Secondly, to avoid ambiguities we must write explicitly the compact products occurring in equation 3·8:

Π_{j≠i}(λ_j - λ_i) = (λ_1 - λ_i)(λ_2 - λ_i) ... (λ_{i-1} - λ_i)(λ_{i+1} - λ_i) ... (λ_n - λ_i),

Π_{j≠i}(λ_j I - A) = (λ_1 I - A)(λ_2 I - A) ... (λ_{i-1} I - A)(λ_{i+1} I - A) ... (λ_n I - A).
There are special cases of particular interest in vibration theory (see Chapters 6 and 7). They correspond to the power of a matrix A^r and the matrix e^(tA). The expansion 3·7 yields immediately

(3·9) A^r = Σ_{i=1}^n λ_i^r G_i

and

(3·10) e^(tA) = Σ_{i=1}^n e^(tλ_i) G_i,

where the matrices G_i have the same meaning as in 3·8.
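The theorem translates directly into a short computation. The sketch below (Python with NumPy, an illustration added to the text) builds the matrices G_i of formula 3·8 and sums F(λ_i)G_i as in 3·7; applied with F = exp to the two-rowed matrix of the exercise that follows, it reproduces e^A. The function name sylvester_F is an invention for this sketch:

```python
import numpy as np

def sylvester_F(A, f):
    # F(A) = sum_i F(lam_i) G_i (formula 3·7), with the G_i of formula 3·8;
    # valid when the characteristic roots lam_i are all distinct.
    n = A.shape[0]
    lam = np.linalg.eigvals(A)
    F = np.zeros((n, n), dtype=complex)
    for i in range(n):
        G = np.eye(n, dtype=complex)
        den = 1.0
        for j in range(n):
            if j != i:
                G = G @ (lam[j] * np.eye(n) - A)   # product of (lam_j I - A)
                den *= lam[j] - lam[i]             # product of (lam_j - lam_i)
        F += f(lam[i]) * G / den
    return F

A = np.array([[0.0, 1.0], [1.0, 0.0]])
E = sylvester_F(A, np.exp).real     # e^A by the expansion
print(E)    # [[cosh 1, sinh 1], [sinh 1, cosh 1]]
```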
Exercise
Calculate the matrix eA when A is the matrix A = II II Check the result
by calculating eA directly.
DIFFERENTIATION AND INTEGRATION 19
Solution. The eha.racteristio roots are ~ 1 .. 1, ~ ... 1. Hence the matrices
0
1
and Os are 88 follows:
G
1
.. ~ s 1  A ... ! (I + A) ... ! III 1 II.
~ s  ~ 1 2 2 1 1
Os .. ~ 1 1  A co ! (1 _ A) co ! II 1 1 II.
~ 1  ~ 2 2 2 1 1
Now
eA _ t e\{J, .. '! III 1 II + 'III 1 1 II
,"1 .2 1 1 2 1 1
Hence
eA ... 11 cosh 1 sinh 1 II.
sinh 1 cosh 1 .
CHAPTER 4

DIFFERENTIAL AND INTEGRAL CALCULUS OF MATRICES (Continued)

Systems of Linear Differential Equations with Constant Coefficients.

The matric exponential has important applications to the solution of systems of n linear differential equations in n unknown functions x^1(t), x^2(t), ..., x^n(t) and with n^2 constant coefficients a^i_j. The variable t is usually the time in physical and engineering problems. Without defining the derivative dX/dt, we merely mentioned in the first chapter that we can write such a system of equations as one matric equation

(4·1) dX(t)/dt = AX(t).

Having defined the matric derivative, we are enabled to view this equation with complete understanding. From formula 3·6 of the previous chapter we find that

(4·2) d[e^((t-t_0)A)]/dt = A e^((t-t_0)A),

where t_0 is an arbitrarily given value of t. But this result is equivalent to saying that X(t) = [e^((t-t_0)A)]X_0 is a solution of the matric differential equation 4·1 for an arbitrary column matrix X_0. A glance at the expansion for the matric exponential e^((t-t_0)A) shows that the solution X(t) has the property X(t_0) = X_0. In summary, we have the result that

(4·3) X(t) = [e^((t-t_0)A)]X_0

is a solution of 4·1 with the property that X(t_0) = X_0 for any preassigned constant column matrix X_0.
Example

dx^1(t)/dt = x^2,  dx^2(t)/dt = x^1,

so that

dX/dt = AX,
where

A = || 0 1 ; 1 0 ||  and  X = || x^1 ; x^2 ||.

Now λ_1 = 1, λ_2 = -1, and we saw in the last exercise of the previous chapter that

G_1 = (1/2) || 1 1 ; 1 1 ||,  G_2 = (1/2) || 1 -1 ; -1 1 ||.

Hence

e^((t-t_0)A) = Σ_{i=1}^2 e^((t-t_0)λ_i) G_i = || cosh (t - t_0)  sinh (t - t_0) ; sinh (t - t_0)  cosh (t - t_0) ||.

Therefore the unique solution of the differential system

dX/dt = AX,  X(t_0) = X_0 = || x_0^1 ; x_0^2 ||

is

X(t) = || cosh (t - t_0)  sinh (t - t_0) ; sinh (t - t_0)  cosh (t - t_0) || X_0.

This means that the unique solution of the differential system

dx^1/dt = x^2,  dx^2/dt = x^1,  x^1(t_0) = x_0^1,  x^2(t_0) = x_0^2

is

x^1(t) = [cosh (t - t_0)]x_0^1 + [sinh (t - t_0)]x_0^2,
x^2(t) = [sinh (t - t_0)]x_0^1 + [cosh (t - t_0)]x_0^2.
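This worked example can be verified numerically. In the sketch below (Python with NumPy, an added illustration) the exponential e^((t-t0)A) is summed from its defining series and compared with the closed cosh-sinh form found above; the particular values of t, t0, and X0 are arbitrary:

```python
import numpy as np

def expm_series(M, terms=60):
    # e^M by its defining power series (converges for every square matrix).
    S, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        S = S + term
    return S

A = np.array([[0.0, 1.0], [1.0, 0.0]])
t, t0 = 1.3, 0.5
X0 = np.array([2.0, -1.0])
closed = np.array([[np.cosh(t - t0), np.sinh(t - t0)],
                   [np.sinh(t - t0), np.cosh(t - t0)]]) @ X0
series = expm_series((t - t0) * A) @ X0
print(np.allclose(series, closed))   # True
```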
Systems of Linear Differential Equations with Variable Coefficients.

Although the matric exponential is not applicable to the solution of a system of linear differential equations with variable coefficients a^i_j(t), there are some analogous matric expansions that enter into the solution of such a system. The system of differential equations

(4·4) dx^i(t)/dt = a^i_j(t)x^j(t)

is written as one matric differential equation

(4·5) dX(t)/dt = A(t)X(t),

where A(t) = || a^i_j(t) || and X(t) is the column matrix of the n unknown functions x^i(t). On integrating both sides of 4·5 between t_0 and t we obtain the equivalent matric equation

(4·6) X(t) = X(t_0) + ∫_{t_0}^t A(s)X(s) ds.
By the method of successive substitutions, we are led to consider the following expansion as a solution of 4·6:

(4·7) X(t) = [I + ∫_{t_0}^t A(s) ds + ∫_{t_0}^t A(s) ds ∫_{t_0}^s A(s_1) ds_1 + ...] X(t_0).

Now the method of successive substitutions for equation 4·6 can be described as follows. In the integral term in 4·6 substitute for X(s) its equivalent as given by formula 4·6 itself. This yields

X(t) = X(t_0) + [∫_{t_0}^t A(s) ds]X(t_0) + ∫_{t_0}^t A(s) ds ∫_{t_0}^s A(s_1)X(s_1) ds_1.

Again substituting for X(s_1) its equal as given by 4·6, we are led to a new expansion for X(t). Continuing indefinitely in this way we are led to the matric infinite series 4·7.

If we define the matrix

(4·8) Ω(t) = I + ∫_{t_0}^t A(s) ds + ∫_{t_0}^t A(s) ds ∫_{t_0}^s A(s_1) ds_1 + ∫_{t_0}^t A(s) ds ∫_{t_0}^s A(s_1) ds_1 ∫_{t_0}^{s_1} A(s_2) ds_2 + ...,

then it can be proved that, for a^i_j(t) continuous in t_0 ≤ t ≤ t_1,

(4·9) X(t) = Ω(t)X_0

is the unique solution of the matric differential equation 4·5 that takes on the arbitrarily given constant matric value X_0 for t = t_0. It is often simpler to carry out the matrix multiplications first in 4·8 and 4·9 before carrying out the successive integrations. If the matrix A is independent of t, then, by an evident calculation, solution 4·9 reduces precisely to the matrix exponential type 4·3.

For approximate numerical calculations, a few terms in the expansion for Ω(t) may suffice in 4·9 to give a good approximation to the solution of the matric differential equation 4·5.
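The successive substitutions can be imitated numerically. The sketch below (Python with NumPy, an added illustration) iterates X ← X_0 + ∫ A(s)X(s) ds on a grid, approximating each integral by the trapezoidal rule. The particular A(t) is an arbitrary choice, picked so that A(t) and A(s) commute and the exact answer exp(∫A ds)X_0 is available as a check:

```python
import numpy as np

def A_of(t):
    # Illustrative choice: A(t) = t*J with a fixed J, so that
    # A(t)A(s) = A(s)A(t) and the exact solution is exp((t^2/2) J) X0,
    # i.e. a rotation through the angle t^2/2.
    return t * np.array([[0.0, 1.0], [-1.0, 0.0]])

t0, N = 0.0, 2001
ts = np.linspace(t0, 1.0, N)
dt = ts[1] - ts[0]
X0 = np.array([1.0, 0.0])
X = np.tile(X0, (N, 1))                    # zeroth approximation X(t) = X0
for _ in range(12):                        # a few successive substitutions
    integrand = np.array([A_of(t) @ x for t, x in zip(ts, X)])
    trapz = np.cumsum((integrand[1:] + integrand[:-1]) / 2.0, axis=0) * dt
    X = X0 + np.vstack([np.zeros(2), trapz])
print(X[-1])    # close to [cos(1/2), -sin(1/2)]
```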
Exercises

1. Integrate by matrix methods the second-order differential equation

d^2 x(t)/dt^2 - x(t) = 0

subject to the initial conditions x(t_0) = x_0, (dx/dt)_{t=t_0} = y_0.

(Hint. Write the differential equation as a system of two first-order equations

dx^1/dt = x^2,  dx^2/dt = x^1,

with initial conditions x^1(t_0) = x_0, x^2(t_0) = y_0, and use the results of the example illustrating formula 4·3.)

2. Integrate by matrix methods the second-order differential system for harmonic oscillations with frequency ω

d^2 x(t)/dt^2 + ω^2 x(t) = 0,  x(0) = x_0,  (dx/dt)_{t=0} = y_0.

(Hint. Write the equation as a system of two first-order linear equations.)

3. Discuss the solutions of the differential equation

m d^2 x/dt^2 + β dx/dt + kx = 0

for free damped oscillations by matrix methods with the restriction that β ≠ 2√(km); m = mass, β = damping factor, and k = elastic constant, so that all three m, β, k are positive constants. Clearly the restriction rules out the critical damping case.

(Hint. Write the differential equation as a first-order matric differential equation

dX/dt = AX,

where

A = || 0  1 ; -k/m  -β/m ||,

and notice that the characteristic equation of this matrix is the "characteristic equation" of the given second-order differential equation in the usual elementary sense.)

4. Integrate by matrix methods the second-order differential equation

d^2 x(t)/dt^2 - t x(t) = 0

subject to the initial conditions x(0) = x_0, (dx/dt)_{t=0} = y_0.
CHAPTER 5

MATRIX METHODS IN PROBLEMS OF SMALL OSCILLATIONS

Differential Equations of Motion.

The problem of small oscillations (of conservative dynamical systems) about an equilibrium position concerns itself with the solution of the Lagrangian differential equations of motion in which the kinetic and potential energies are homogeneous quadratic forms, in the velocities and coordinates respectively, with constant coefficients. The theory is approximate in that the constancy of the coefficients in the kinetic energy and the quadratic type of the potential energy are due to approximations in the actual form of the kinetic and potential energies respectively. If, without loss of generality, we take all the coordinates of the equilibrium position to be zero, these approximations are due to the assumed smallness of the coordinates and velocities near the equilibrium position.

Let

T = (1/2) a_ij (dq^i/dt)(dq^j/dt)

and

V = (1/2) b_ij q^i q^j

be the kinetic and potential energies respectively of our oscillating system with n degrees of freedom. In view of what we have already said, the a_ij and b_ij are constants. We shall consider the case in which the equilibrium point is stable, i.e., the potential energy V has a minimum at q^i = 0. Now it can be proved that the positive definiteness of V is a necessary and sufficient condition that (0, 0, ..., 0) be a stable equilibrium point. V is, by definition, positive definite if V ≥ 0 for all q^i and V = 0 if and only if q^i = 0. Clearly the kinetic energy T is positive definite in the velocities dq^i/dt.

Lagrange's equations of motion for our oscillating system are

d/dt (∂T/∂q̇^i) = -∂V/∂q^i,

on using the notation q̇^i = dq^i/dt. If we use the explicit form for the
kinetic and potential energies, Lagrange's equations reduce to the system of n second-order differential equations

(5·1) a_ij q̈^j = -b_ij q^j,

where we have used the notation q̈^i = d^2 q^i/dt^2. If we define two square matrices†

A = || a_ij ||,  B = || b_ij ||,

and the unknown column matrix

Q(t) = || q^i(t) ||,

then we can write our differential equations 5·1 of motion as the one matric differential equation

(5·2) A d^2 Q(t)/dt^2 = -BQ(t).

Since the kinetic energy is positive definite, it can be proved that the determinant | A | ≠ 0. Hence it follows from our discussion in Chapter 2 that the inverse matrix A^-1 exists. On multiplying both sides of equation 5·2 on the left by A^-1 and remembering that A^-1 A = I, the unit matrix, we obtain the following equivalent matric differential equation

(5·3) d^2 Q(t)/dt^2 = -CQ(t),

where C is the (constant) square matrix C = A^-1 B.

To summarize, we have the following result. If A and B are the constant square matrices of the coefficients of the kinetic and potential energies respectively, then the motion of our oscillatory system is governed by the matric differential equation 5·3.
Illustrative Example

Two equal masses, each of mass m, are connected by a spring with elastic constant k while each mass is connected to a fixed wall by a spring with elastic constant k. The kinetic and potential energies of this two-degree-of-freedom problem are

T = (m/2)[(q̇^1)^2 + (q̇^2)^2],

V = (k/2)[(q^1)^2 + (q^2)^2 + (q^1 - q^2)^2],

where q^1 and q^2 are the respective displacements of the centers of the two masses parallel to the springs and are measured from the equilibrium position in which all three springs are unstressed. Hence by a direct calculation from the kinetic and potential energies we find that the matric equation of motion is

(5·4) d^2 Q(t)/dt^2 = -CQ(t),

where

C = || 2k/m  -k/m ; -k/m  2k/m ||.

† Recall the notations for matrices given in Chapter 1.
If we define the column matrix R(t) = || dq^i(t)/dt || by dQ(t)/dt = R(t), we can write the second-order matric differential equation 5·4 as a first-order matric differential equation

(5·5) dS(t)/dt = US(t),

where

S(t) = || q^1(t) ; q^2(t) ; r^1(t) ; r^2(t) ||

and

(5·6) U = ||  0      0     1  0
              0      0     0  1
             -2k/m   k/m   0  0
              k/m   -2k/m  0  0 ||.

The characteristic equation of the matrix U turns out to be

λ^4 + (4k/m)λ^2 + 3k^2/m^2 = 0.

Now k/m > 0, so that there are four distinct pure imaginary characteristic roots of U given by

(5·7) λ_1 = i√(k/m),  λ_2 = -i√(k/m),  λ_3 = i√(3k/m),  λ_4 = -i√(3k/m).
Exercise

A shaft of length 2l, fixed at one end, carries one disk at the free end and another in the middle. If I_0 is the moment of inertia of each disk, and q^1, q^2 are the respective angular deflections of the two disks, then the kinetic and potential energies are

T = (I_0/2)[(q̇^1)^2 + (q̇^2)^2],

V = (τ/2l)[(q^1)^2 + (q^2 - q^1)^2],

under the assumption that the shaft has a uniform torsional stiffness τ. Find the matric differential equation of motion. Write this equation as a system of two first-order matric equations. Discuss the solutions of this system and then the motion of the disks.
CHAPTER 6

MATRIX METHODS IN PROBLEMS OF SMALL OSCILLATIONS (Continued)

Calculation of Frequencies and Amplitudes.

Let us inquire into the pure harmonic solutions of our differential equation of motion

(6·1) d^2 Q(t)/dt^2 = -CQ(t),

where C = A^-1 B. We thus seek solutions of 6·1 of type

(6·2) Q(t) = Γ sin (ωt + ε),

where ω is an angular frequency, ε an arbitrary phase angle, and Γ a column matrix of amplitudes. On substituting 6·2 in 6·1 we obtain

-ω^2 Γ sin (ωt + ε) = -CΓ sin (ωt + ε).

Hence a necessary and sufficient condition that 6·2 be a solution of 6·1 is that the frequency ω and the corresponding column matrix Γ of amplitudes satisfy the matrix equation

(6·3) (ω^2 I - C)Γ = 0.

In order that there exist a solution matrix Γ ≠ 0 of 6·3, it is clear from the theory of systems of linear homogeneous algebraic equations that ω^2 must be a characteristic root of the matrix C. Since C = A^-1 B, we verify immediately the statement that

(6·4) ω^2 I - C = A^-1(ω^2 A - B).

On recalling that the determinant of the product of two matrices is equal to the product of their determinants, we see that the determinant

| A^-1(ω^2 A - B) | = | A^-1 || ω^2 A - B |

and hence, by 6·4,

| ω^2 I - C | = | A^-1 || ω^2 A - B |.

But | A^-1 | ≠ 0, so that the characteristic roots of the matrix C are identical with the roots of the "frequency" equation

(6·5) | ω^2 A - B | = 0.

Since the kinetic and potential energies are positive definite quadratic forms, it can be proved (see any book on dynamics such as Whittaker's) that all the roots of the frequency equation are positive. Hence all the characteristic roots of the matrix C = A^-1 B are positive.
Since the potential energy is a positive definite quadratic form, it follows that | B | > 0 and hence that B^-1 exists. Therefore C^-1 exists and is given by C^-1 = B^-1 A. On multiplying both sides of equation 6·1 by C^-1, we obtain the matric differential equation

(6·6) Q(t) = -D d^2 Q(t)/dt^2,

where D = C^-1 = B^-1 A. This equation is obviously equivalent to equation 6·1. If we now proceed with 6·6 as we did with 6·1, we are led to the equation

(6·7) (I - ω^2 D)Γ = 0.

This equation can also be derived by operating directly with equation 6·3. It is clear from equations 6·3 and 6·7 that the characteristic roots of the matrix D = C^-1 are the corresponding reciprocals of the characteristic roots of the matrix C. We shall call D the dynamical matrix.

Let us write 6·7 in the equivalent form

(6·8) Γ = ω^2 DΓ.

The classical method of finding the frequencies and amplitudes of our oscillating system consists in first finding the frequencies by solving the frequency equation 6·5, or equivalently in finding the characteristic roots of the matrix C = A^-1 B, and then in determining the amplitudes by solving the system of linear homogeneous equations that corresponds to the matrix equation 6·3. Such a direct way of calculating the frequencies and amplitudes often involves laborious calculations. For approximate numerical calculations, the method of successive approximations when applied to equation 6·8 greatly reduces the laborious calculations. This is especially true when only the fundamental frequency (lowest frequency) and the corresponding amplitudes are desired. We shall assume now that all the frequencies of our oscillating system are distinct. The method of successive approximations for equation 6·8 is as follows. Let Γ_0 be an arbitrarily given column matrix. Define

Γ_1 = ω^2 DΓ_0

and in general

Γ_r = ω^2 DΓ_{r-1}.
By successive use of this recurrence relation we can express Γ_r in terms of Γ_0. In fact, we have

(6·9) Γ_r = (ω^2)^r D^r Γ_0,

where D^r is the rth power of the dynamical matrix D. Now it can be shown that for large r the ratio of the elements of the column matrix D^(r-1)Γ_0 to the corresponding elements of the column matrix D^r Γ_0 is approximately a constant equal to ω_1^2, the square of the fundamental frequency ω_1 of our fundamental mode of oscillation with distinct frequencies. The matrix Γ_0 is only restricted by the nonvanishing of R (see equality 6·11 below).

The proof of this result is a little involved and makes use of the Cayley-Hamilton theorem, a theorem† of Sylvester, and a few other theorems on matrices. These theorems are instrumental in showing that, for r large enough, the following approximate equality holds:

(6·10) D^r Γ_0 ≈ λ_1^r R || α^1 ; α^2 ; ... ; α^n ||,

where (a) λ_1 > λ_2 > ... > λ_n are the characteristic roots of D, and hence λ_1 = 1/ω_1^2 in terms of the fundamental frequency ω_1; (b) the numbers α^1, α^2, ..., α^n are proportional to the amplitudes of the fundamental mode of oscillation; and (c)

(6·11) R = A_1 ξ^1 + A_2 ξ^2 + ... + A_n ξ^n.

In 6·11, the ξ^i are the elements of the arbitrarily chosen column matrix Γ_0 and the A_1, ..., A_n are n constants that are themselves obtainable by a successive approximation method.

Clearly, D^r Γ_0 is a column matrix. Hence the approximate formula 6·10 shows that for r large enough, and for arbitrarily chosen Γ_0 such that R ≠ 0, the column matrix D^r Γ_0 has elements proportional to the amplitudes of the fundamental mode of oscillation. All this is subject to the restriction that all the frequencies are distinct.

On using equation 6·3 instead of 6·7, one can similarly obtain the

† F(A) = Σ_{r=1}^n F(λ_r) G_r, where G_r = Π_{α≠r}(λ_α I - A) / Π_{α≠r}(λ_α - λ_r) and λ_1, ..., λ_n are characteristic roots of A; see the discussion in Chapter 3.
greatest frequency and corresponding amplitudes of our oscillating system. The intermediate overtones and corresponding amplitudes can be obtained by the above successive approximation methods on reducing the number of degrees of freedom successively by one.

Anyone interested in these topics will find the following paper by W. J. Duncan and A. R. Collar very useful: "A Method for the Solution of Oscillation Problems by Matrices," Philosophical Magazine and Journal of Science, vol. 17 (1934), pp. 865-909. By approximating oscillating continuous systems, such as beams, by oscillating systems with a large but finite number of degrees of freedom, the Duncan-Collar paper shows how the methods of this chapter are applicable in solving oscillation problems for continuous systems.
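The successive approximation method of this chapter is, in modern terms, power iteration on the dynamical matrix D. The sketch below (Python with NumPy, an added illustration) applies it to the two-mass example of Chapter 5 with m = k = 1, for which C = || 2 -1 ; -1 2 ||; the element ratios converge to the largest root λ_1 = 1/ω_1^2 of D, and the iterated column to amplitudes proportional to the fundamental mode:

```python
import numpy as np

C = np.array([[2.0, -1.0], [-1.0, 2.0]])   # C = A^-1 B for m = k = 1
D = np.linalg.inv(C)                        # the dynamical matrix D = C^-1
g = np.array([1.0, 0.3])                    # arbitrary starting column G_0
for _ in range(30):
    g_new = D @ g
    ratio = g_new[0] / g[0]                 # -> largest root of D = 1/w1^2
    g = g_new / np.linalg.norm(g_new)       # renormalize each step
w1 = 1.0 / np.sqrt(ratio)
print(w1)   # the fundamental frequency sqrt(k/m) = 1
print(g)    # amplitudes proportional to the fundamental mode (1, 1)
```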
CHAPTER 7

MATRIX METHODS IN THE MATHEMATICAL THEORY OF AIRCRAFT FLUTTER

In recent years a group of phenomena known under the caption "flutter" has engaged the attention of aeronautical engineers. The vibrations taking place in flutter phenomena can often lead to loss of control or even to structural failure in such aircraft parts as wing, aileron, and tail. Such dangerous situations may arise when the airplane is flown at a high speed. It is of the greatest practical importance therefore so to design the plane as to have the maximum operating speed less than the critical speed at which flutter occurs. Unfortunately experiments in wind tunnels are idealized and difficult, and actual flight testing is obviously highly dangerous. It is here that mathematics enters the stage at a most opportune moment. Although the results of mathematical theories of flutter are now being applied in the design of aircraft, the need for an adequate mathematical theory is becoming critical. There is no time in this brief set of mathematical lectures to deal adequately with the present simplified mathematical theories of the mechanism of flutter. We shall only give the matric form for the equations of motion and say a few words about the approximate solutions with the aid of matrix iteration methods.
The vibrations of an airplane wing and aileron can be considered as those of a mechanical system with three degrees of freedom: the bending and twisting of the wing accounts for two degrees of freedom, and the relative deflection of the aileron gives rise to the third degree of freedom. Not only is the system nonconservative, but there is the additional complication of damping forces leading to terms depending on the velocities in the equations of motion. The differential equations of motion are of type

(7·1) a_ij d^2 q^j(t)/dt^2 + c_ij dq^j(t)/dt + b_ij q^j(t) = 0,

a system of three linear differential equations in the three unknowns q^1(t), q^2(t), q^3(t). Since there are three degrees of freedom, all indices have the range 1 to 3. The constant coefficients a_ij, b_ij, c_ij are computed from a large number of aerodynamic constants of our aircraft
structure; see T. Theodorsen's "General Theory of Aerodynamic Instability and the Mechanism of Flutter," N.A.C.A. Report 496, 1934, pp. 413-433, especially pp. 419-420. Now the general structure of equations 7·1 differs from the equations of motion of the preceding two chapters in that b_ij ≠ b_ji (giving rise to a nonconservative system) and in the presence of the linear damping terms c_ij dq^j/dt. If we define A = || a_ij ||, B = || b_ij ||, C = || c_ij ||, Q(t) = || q^i(t) ||, then the equations of motion 7·1 can be written as the one matric differential equation

(7·2) A d^2 Q(t)/dt^2 + C dQ(t)/dt + BQ(t) = 0

in terms of the three known constant matrices A, B, C, and the unknown column matrix Q(t). As we are interested in small oscillations around an unstable point of equilibrium, it is to be expected that complex imaginary frequencies will play a role in the work.
Since A arises from the kinetic energy, A^-1 exists and hence 7·2 is equivalent to

(7·3) d^2 Q(t)/dt^2 = -A^-1 C dQ(t)/dt - A^-1 BQ(t).

We can replace the one second-order differential equation 7·3 by an equivalent pair of two first-order differential equations with the column matrices Q(t) and R(t) as unknowns

(7·4) dQ(t)/dt = R(t),  dR(t)/dt = -A^-1 BQ(t) - A^-1 CR(t).

Define the column matrix of six elements

S(t) = || Q(t) ; R(t) ||

and the constant square matrix of six rows

(7·5) U = || 0, I ; -A^-1 B, -A^-1 C ||,

where 0 and I are the three-rowed zero and unit matrices respectively. Then equations 7·4 can be written as the one first-order matric differential equation

(7·6) dS(t)/dt = US(t).
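The six-rowed matrix U of formula 7·5 is conveniently assembled by blocks. In the sketch below (Python with NumPy, an added illustration) the three-rowed matrices A, B, C are small arbitrary stand-ins, not the flutter data of the exercises:

```python
import numpy as np

# Small arbitrary stand-ins for the three-rowed matrices A, B, C:
A = np.diag([2.0, 3.0, 4.0])
B = np.array([[5.0, 1.0, 0.0],
              [0.0, 6.0, 1.0],
              [1.0, 0.0, 7.0]])
C = 0.1 * np.eye(3)
Ainv = np.linalg.inv(A)
U = np.block([[np.zeros((3, 3)), np.eye(3)],     # formula 7·5
              [-Ainv @ B,        -Ainv @ C]])
lam, V = np.linalg.eig(U)        # characteristic roots and columns of (7·7)
print(U.shape)                                    # (6, 6)
print(np.allclose(U @ V[:, 0], lam[0] * V[:, 0])) # True: (lam I - U) Lam = 0
```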
We are led therefore to consider solutions

S(t) = e^(λt) Λ  (Λ a constant column matrix of six elements)

of 7·6. This obviously leads us to the equation

(7·7) (λI - U)Λ = 0,

where I is the six-row unit matrix. To solve the problem we must get good approximations to the values of λ and the matrix Λ that will satisfy 7·7. The matrix iteration method for small oscillations of conservative systems can now be applied with some modifications made necessary by the fact that the possible values of λ in 7·7 are in general complex imaginary. If λ_1, λ_2, ..., λ_n are the characteristic roots of the matrix U arranged so that their moduli† are in descending order, i.e., | λ_1 | > | λ_2 | > ... > | λ_n |, then the characteristic root λ_1 with the largest modulus can be obtained by the methods of the previous chapter. Some further aids in computation of the real and imaginary parts of complex characteristic roots are given on pp. 148-150 and 327-331 of the Frazer-Duncan-Collar book on matrices. A more readable and self-contained account is given in the paper by W. J. Duncan and A. R. Collar entitled "Matrices Applied to the Motions of Damped Systems," Phil. Mag., vol. 19 (1935), pp. 197-219. An illuminating discussion of a specialized flutter problem with two degrees of freedom is given in a 1940 book by Kármán and Biot, Mathematical Methods in Engineering, pp. 220-228. It would be interesting and instructive to solve such specialized flutter problems with the aid of the matrix calculus.
Another useful method of solving flutter problems is the combination of matrix methods and Laplace transform methods. The Laplace transform of a function x(t) is a function x̄(p) defined by

x̄(p) = ∫_0^∞ e^(-pt) x(t) dt.

If one is willing to omit the proofs of one or two theorems, the whole Laplace transform theory needed does not require one to be conversant with the residue theory of complex variable theory. For such an elementary treatment of Laplace transforms see Operational Calculus in Applied Mathematics by Carslaw and Jaeger, Chapters I-III. The methods given there can be immediately extended in the obvious way to apply directly to the matric differential equations 7·3 for flutter problems. A good table of Laplace transforms together with mechanical or electric methods can cut down the labor of flutter calculations materially. Unfortunately there is no time to take up these matters in detail in our brief introductory treatment.

† The modulus of a complex number z = x + √(-1) y is denoted by | z | and is defined by | z | = √(x^2 + y^2).
Exercises

1. Show that the matric differential equation 7·6 for flutter can be written as

S(t) = U^-1 dS(t)/dt,

where U^-1, the inverse matrix of U, is given by the six-row square matrix

U^-1 = || -B^-1 C, -B^-1 A ; I, 0 ||.
2. A model airplane wing is placed at a small angle of incidence in a uniform air stream. The three degrees of freedom are the wing bending, wing twist, and aileron angle measured relative to the wing chord at the wing tip. When the wind speed is 12 feet per second, the matrices for the differential equations of flutter are as follows. The data are obtained from R. A. Frazer and W. J. Duncan, "The Flutter of Aeroplane Wings," Reports and Memoranda of the Aeronautical Research Committee, No. 1155, August, 1928.
A = || 17.6     0.128       2.89
       0.128    0.00824125  0.0413
       2.89     0.0413      0.725 ||

B = || 121.042  1.89   15.9497
       0        0.027  0.0145
       11.9097  0.364  15.4722 ||

C = || 7.65833  0.245   2.10
       0.023    0.0104  0.0223
       0.60     0.0756  0.658333 ||  (the matrix of damping coefficients).

Show that

A^-1 = || 0.170883  1.06301   0.741731
          1.06301   176.433   14.2880
          0.741731  14.2880   5.14994 ||

and that the matric differential equation for flutter is

dS(t)/dt = US(t),

where the "flutter matrix" U is given by

U = || 0        0         0        1         0         0
       0        0         0        0         1         0
       0        0         0        0         0         1
       11.8502  0.08168   8.73526  0.888089  0.003153  0.105747
       41.4969  1.57195   201.554  3.62604   1.01517   3.23949
       28.4464  0.086931  2.91908  0.059016  (?)       1.51412 ||

For the lengthy details of the calculations of flutter frequencies and amplitudes see the Phil. Mag. 1935 paper by Duncan and Collar.
More recent developments in aircraft design require an extension of the flutter theory to handle four-degree (or more) of-freedom problems in which the motion of the tab defines the fourth degree† of freedom with generalized coordinate q^4. The aerodynamic forces and moments are obtained theoretically in accordance with T. Theodorsen's and I. E. Garrick's investigations and not from wind-tunnel data. It is for this reason that the coefficients in the differential equations of motion will be in general complex. See Fig. 7·1. The four differential equations of motion are of the form‡

(7·8) a_ij d^2 q^j/dt^2 + c_ij dq^j/dt + b_ij q^j = 0,

[Fig. 7·1: wing elastic axis.]

where all the indices, in contradistinction to 7·1, have the range 1 to 4 and the coefficients a_ij, b_ij, and c_ij are in general complex.

† In accordance with the flutter notation used in this country, q^1 = h, q^2 = α, q^3 = β, q^4 = γ. See Fig. 7·1.
‡ The differential equations of motion and the contributions of the aerodynamic forces and moments to the coefficients of the differential equations for the four-degree-of-freedom problem are given in the Douglas reports.
MATRIX METHODS IN THEORY OF AIRCRAFT FLUTTER 37
in which C;,j and bii are expressed as functions of the flutter frequency Col
and not of the flutter velocity fl. If one then considers solutions of the
form
iut _r:.
qi = qf,e (i = vI)
of the differential equations, one is led to a system of four linear alge
braic equations with complex coefficients, some of which are functions
of Col and the structural damping coefficients gii' The damping coeffi
cients gii are defined by
F. = v=i giiKi/li (gii = 0 if i:p4 3),
where F. is the damping force in theq'th direction and the Kii are the
spring constants. Upon obtaining the characteristic roots of the
matrix of the linear algebraic equations by matric iteration methods,
one ultimately finds the flutter velocity $v$ as a function of the flutter
frequency $\omega$ and the structural damping coefficients $g_{ij}$. Flutter is
likely to occur if the structural damping coefficient $g_{ij}$ found from the
pure imaginary part of the characteristic root exceeds 0.03, provided
that no extraneous damping devices are used. The algebraic equations are so arranged (and this constitutes an important aspect of
the development in that it lends itself to matric iteration procedures)
that the characteristic roots are of the form
$z = \frac{c}{\omega^2} + i\,\frac{cg}{\omega^2}$
or
$z = \frac{c}{\omega^2}\,(1 + ig) \qquad (i = \sqrt{-1}),$
where $c$ is some constant, $\omega$ is the flutter frequency, and $g$ is a structural
damping coefficient.
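The matric iteration referred to above is, in essence, the power method: repeated multiplication by the matrix converges on the characteristic root of largest modulus. A minimal sketch in Python follows; the small complex matrix is purely illustrative and is not taken from the Douglas reports.

```python
# Power-iteration ("matric iteration") sketch for the characteristic
# root of largest modulus of a complex matrix, as used to extract a
# flutter root of the form z = (c/omega^2)(1 + ig).

def mat_vec(A, v):
    """Multiply a square matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, steps=100):
    """Approximate the characteristic root of largest modulus by
    repeated multiplication and rescaling."""
    v = [1.0 + 0j] * len(A)
    lam = 0j
    for _ in range(steps):
        w = mat_vec(A, v)
        lam = max(w, key=abs)        # scale factor -> dominant root
        v = [x / lam for x in w]
    return lam

# Illustrative (hypothetical) complex matrix; being triangular, its
# characteristic roots are the diagonal entries, the dominant one 4 + 1j.
A = [[4 + 1j, 1.0, 0.0],
     [0.0, 2.0, 0.5],
     [0.0, 0.0, 1 + 0.5j]]
z = power_iteration(A)
```

The scale factor settles on the dominant root; its real and imaginary parts would then yield $\omega$ and $g$ in the flutter application.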
The flutter analyst is attempting to approximate the actual flutter
characteristics of the airplane by representing them by as small a
number of degrees of freedom as possible. The design of faster and
larger aircraft requires the consideration of a larger number of degrees
of freedom so as to make the flutter analysis an adequate approximation.
When many degrees of freedom are required to represent the flutter
characteristics of an airplane, the need for matrix methods becomes
acute. Matrix methods† also serve to improve the theory of the
mechanism of flutter.
† Matrix methods are also used in treating other phenomena related to flutter.
See Douglas reports.
CHAPTER 8
MATRIX METHODS IN ELASTIC DEFORMATION
THEORY
Although the tensor calculus is the most natural and powerful mathematical method in the treatment of the fundamentals of elastic deformation of bodies, the matrix calculus can also be used to advantage
in furnishing a short and neat treatment. This chapter is purely
introductory and suggestive.
Let a medium be acted on by deforming forces. The position of the
medium before and after deformation will be called the initial and
final state of the medium respectively. Let $a^1, a^2, a^3$ be the rectangular
cartesian coordinates of a representative particle of the medium in the
initial state, and $x^1, x^2, x^3$ the rectangular cartesian coordinates of the
corresponding particle in the final state. Then the elastic deformation is represented by particle-to-particle transformations
(8·1)  $x^i = f^i(a^1, a^2, a^3).$
Hence by the ordinary differential calculus
(8·2)  $dx^i = f^i_j\, da^j,$
where
$f^i_j = \frac{\partial f^i}{\partial a^j}(a^1, a^2, a^3).$
The classical theory of elastic bodies assumes that the deformations 8·1
are "infinitesimal." Such crude approximations have been found
inadequate in some investigations on thin plates and shells.† As a
result the finite deformation theory is beginning to be used in engineering problems. In what we shall have to say we shall make the restrictive assumptions of the classical infinitesimal theory only toward the
end of the chapter.
Let $A$ and $X$ be defined as the column matrices of three elements:
$A = \| a^i \|$ and $X = \| x^i \|.$
Similarly the differential matrices are $dA = \| da^i \|$, $dX = \| dx^i \|$.
Define the square matrix $F$ by
$F = \| f^i_j \|,$
† See the Kármán Anniversary Volume, California Institute of Technology, 1941,
for various papers and other references.
i.e., the matrix of the partial derivatives of the deformation 8·1. Then
the differential relations 8·2 can be written in matric form as
(8·3)  $dX = F\, dA.$
DEFINITION. The adjoint $M^*$ of a matrix $M$ is the matrix obtained
from $M$ by interchanging the rows and columns of $M$.
Thus $A^*$ and $X^*$ are row matrices while
$$F^* = \begin{Vmatrix} \dfrac{\partial x^1}{\partial a^1} & \dfrac{\partial x^2}{\partial a^1} & \dfrac{\partial x^3}{\partial a^1} \\ \dfrac{\partial x^1}{\partial a^2} & \dfrac{\partial x^2}{\partial a^2} & \dfrac{\partial x^3}{\partial a^2} \\ \dfrac{\partial x^1}{\partial a^3} & \dfrac{\partial x^2}{\partial a^3} & \dfrac{\partial x^3}{\partial a^3} \end{Vmatrix}.$$
It can be proved by a routine procedure that $(M_1 M_2)^* = M_2^* M_1^*$.
In other words, the adjoint of the product of two matrices is the
product of their adjoints in the reverse order. For example,
$dX^* = dA^* F^*.$
Hence the square of the differential line element in the final state of
the medium will be
(8·4)  $ds_x^2 = dA^* F^* F\, dA$
since
$ds_x^2 = (dx^1)^2 + (dx^2)^2 + (dx^3)^2 = dX^*\, dX.$
By a direct computation it can be shown that the matrix
(8·5)  $F^*F = \| \psi_{ij} \|,$
where
(8·6)  $\psi_{ij} = \sum_{k=1}^{3} \frac{\partial x^k}{\partial a^i} \frac{\partial x^k}{\partial a^j}.$
Note that $\psi_{ij} = \psi_{ji}$. This is expressed by saying that $F^*F$ is a symmetric
matrix.
The square of the differential line element in the initial state is
(8·7)  $ds_a^2 = dA^*\, dA$  ($= dA^* I\, dA$, where $I$ is the unit matrix)
and hence with the aid of 8·4 we find the formula
(8·8)  $ds_x^2 - ds_a^2 = dA^*(F^*F - I)\, dA.$
On defining the matrix, called the deformation or strain matrix,
(8·9)  $H = \tfrac{1}{2}(F^*F - I),$
we find
(8·10)  $ds_x^2 - ds_a^2 = 2\, dA^* H\, dA.$
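Relations 8·4 to 8·10 can be checked numerically. The sketch below uses a hypothetical homogeneous deformation (constant $F$, a simple shear plus a stretch), builds $H = \tfrac{1}{2}(F^*F - I)$, and verifies 8·10 for a sample $dA$.

```python
# Verify ds_x^2 - ds_a^2 = 2 dA* H dA (formula 8.10) for a
# hypothetical homogeneous deformation x = F a with F constant.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

F = [[1.0, 0.3, 0.0],      # simple shear in the a1-a2 plane...
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.2]]      # ...plus a stretch along a3 (illustrative)

FtF = matmul(transpose(F), F)                      # F* F
H = [[0.5 * (FtF[i][j] - (1.0 if i == j else 0.0))
      for j in range(3)] for i in range(3)]        # H = (F*F - I)/2

dA = [1.0, 2.0, -1.0]                              # sample initial element
dX = [sum(F[i][j] * dA[j] for j in range(3)) for i in range(3)]

lhs = sum(x * x for x in dX) - sum(a * a for a in dA)   # ds_x^2 - ds_a^2
rhs = 2.0 * sum(dA[i] * H[i][j] * dA[j]
                for i in range(3) for j in range(3))    # 2 dA* H dA
```

The two sides agree to rounding error, and $H$ comes out symmetric, as 8·9 requires.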
Now, if $ds_x^2 = ds_a^2$ for all particles of the initial state and for all
$dA$, we have, by definition, a rigid displacement of the medium from
the initial state to the final state. A glance at 8·10 shows that a
necessary and sufficient condition that the change of the medium from the
initial to the final state be a rigid displacement is that the strain matrix
$H$ be a zero matrix. In other words, when $H = 0$, the medium is not
deformed or strained but is merely transported to a different position by a rigid displacement. This property then justifies the
terminology "strain matrix $H$" since $H$ measures, in a sense, the
amount of strain or deformation undergone by the medium. It is clear
from definition 8·9 that the strain matrix $H$ is a symmetric matrix.
Let $\eta_{ij}$ be the elements (more commonly called components in elasticity
theory) of the strain matrix $H$, i.e., $H = \| \eta_{ij} \|$. On using result
8·5 and definition 8·9, we see that
$\eta_{ij} = \frac{1}{2}\left( \sum_{k=1}^{3} \frac{\partial x^k}{\partial a^i} \frac{\partial x^k}{\partial a^j} - \delta_{ij} \right),$
where
$\delta_{ij} = 1$ if $i = j$,  $\delta_{ij} = 0$ if $i \neq j$.
Let $u^i = x^i - a^i$; then
$U = X - A.$
Define the matrix
$V = \left\| \frac{\partial u^i}{\partial a^j} \right\|$
and obtain the relation
$F = V + I.$
FIG. 8·1.
Hence, from definition 8·9, for the strain matrix $H$ we obtain
$H = \tfrac{1}{2}[(V^* + I)(V + I) - I].$
Expanding the right-hand side gives
$H = \tfrac{1}{2}[V^*V + V^* + V].$
In the classical infinitesimal theory of elasticity only first-degree terms
in $\frac{\partial u^i}{\partial a^j}$ are kept. Hence, to the degree of approximation contemplated by
the infinitesimal theory of elastic deformations, the strain matrix is given by
$H = \tfrac{1}{2}(V^* + V)$
in rectangular cartesian coordinates. In other words, the components
$\eta_{ij}$ of $H$ are given by the familiar
$\eta_{ij} = \frac{1}{2}\left( \frac{\partial u^i}{\partial a^j} + \frac{\partial u^j}{\partial a^i} \right).$
From the symmetry of $\eta_{ij}$ there are thus in general six distinct components of the strain matrix in three dimensions while there are three
components for plane elastic problems.
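The size of the term dropped by the infinitesimal theory can be seen numerically. Below, a hypothetical constant displacement gradient $V$ of order $\varepsilon$ is used; the exact strain $\tfrac{1}{2}(V^*V + V^* + V)$ and the infinitesimal strain $\tfrac{1}{2}(V^* + V)$ then differ only by terms of order $\varepsilon^2$.

```python
# Compare the exact strain matrix H = (V*V + V* + V)/2 with its
# infinitesimal approximation (V* + V)/2 for a small, hypothetical
# constant displacement gradient V of magnitude eps.

eps = 1e-3
V = [[0.0, eps, 0.0],          # illustrative du^i/da^j entries
     [2 * eps, 0.0, eps],
     [0.0, eps, 3 * eps]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Vt = transpose(V)
VtV = matmul(Vt, V)
H_exact = [[0.5 * (VtV[i][j] + Vt[i][j] + V[i][j]) for j in range(3)]
           for i in range(3)]
H_lin = [[0.5 * (Vt[i][j] + V[i][j]) for j in range(3)] for i in range(3)]

# The discrepancy is the quadratic term V*V/2, of order eps^2.
err = max(abs(H_exact[i][j] - H_lin[i][j])
          for i in range(3) for j in range(3))
```

With $\varepsilon = 10^{-3}$ the discrepancy is of order $10^{-6}$, which is why the classical theory discards it.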
PART II. TENSOR CALCULUS AND
ITS APPLICATIONS
CHAPTER 9
SPACE LINE ELEMENT IN CURVILINEAR
COORDINATES
Introductory Remarks.
The vague beginnings of the tensor calculus, or absolute differential
calculus as it is sometimes called, can be traced back more than a
century to Gauss's researches on curved surfaces. The systematic
investigation of tensor calculi by a considerable number of mathematicians has taken place since 1920. With few exceptions, the applications of tensor calculus were confined to the general theory of relativity.
The result was an undue emphasis on the tensor calculus of curved
spaces as distinguished from the tensor calculus of Euclidean spaces.
The subjects of elasticity¹ and hydrodynamics,² as studied and used
by aeronautical engineers, are developed and have their being in plane
and solid Euclidean space. It is for this reason that we shall be primarily concerned with Euclidean tensor calculus in this book. We
shall, however, devote two chapters to curved tensor calculus in connection with the fundamentals of classical mechanics³ and fluid
mechanics.
It is worthy of notice that the tensor calculus is a generalization of
the widely studied differential calculus of freshman and sophomore
fame. In fact, as we shall see, a detailed study of the classical differential calculus along a certain direction demands the introduction
of the tensor calculus.
Notation and Summation Convention.
Before we begin the study of tensor calculus, we must embark on
some formal preliminaries including some matters of notation.
Consider a linear function in the $n$ real variables $x, y, z, \cdots, w$
(9·1)  $\alpha x + \beta y + \gamma z + \cdots + \lambda w.$
Define
(9·2)  $a_1 = \alpha,\ a_2 = \beta,\ a_3 = \gamma,\ \cdots,\ a_n = \lambda,$
  $x^1 = x,\ x^2 = y,\ x^3 = z,\ \cdots,\ x^n = w.$
We emphasize once for all that $x^1, x^2, \cdots, x^n$ are $n$ independent variables and not the first $n$ powers of one variable $x$. In terms of the
notations of 9·2 we can rewrite 9·1 in the form
(9·3)  $a_1 x^1 + a_2 x^2 + \cdots + a_n x^n$
or as
(9·4)  $\sum_{i=1}^{n} a_i x^i.$
The set of $n$ integer values 1 to $n$ is called the range of the index $i$ in
9·4. A lower index $i$ as in $a_i$ will be called a subscript, and an upper
index $i$ as in $x^i$ will be called a superscript. Throughout our work we
shall adopt the following useful summation convention:
The repetition of an index in a term once as a subscript and once as a
superscript will denote a summation with respect to that index over its
range. An index that is not summed out will be called a free index.
In accordance with this convention then we shall write the sum
9·4 simply as
(9·5)  $a_i x^i.$
A summation index as $i$ in 9·5 is called a dummy or an umbral index, since it
is immaterial what symbol is used for the index. For example, $a_j x^j$
is the same sum as 9·5. All this is analogous to the (umbral) variable
of integration $x$ in an integral
$\int_a^b f(x)\, dx.$
Any other letter, say $y$, could be used in the place of $x$. Thus
$\int_a^b f(y)\, dy = \int_a^b f(x)\, dx.$
Aside from compactness, the subscript and superscript notation
together with the summation convention has advantages that will
become evident later.
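In computational terms the summation convention is simply an implied loop over the repeated index, and renaming the dummy index changes nothing. A small sketch (the numerical values are illustrative):

```python
# a_i x^i under the summation convention: an implied sum over the
# repeated index i.  Renaming the umbral index (i -> k) leaves the
# value unchanged, just as renaming the variable of integration does.

a = [2.0, -1.0, 3.0, 0.5]     # covariant components a_i  (range n = 4)
x = [1.0, 4.0, 2.0, -2.0]     # contravariant components x^i

s = sum(a[i] * x[i] for i in range(4))   # a_i x^i
t = sum(a[k] * x[k] for k in range(4))   # a_k x^k, the same sum
```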
As a further illustration of the summation convention, consider the
square of the line element
(9·6)  $ds^2 = dx^2 + dy^2 + dz^2$
in a three-dimensional Euclidean space with rectangular cartesian
coordinates $x$, $y$, and $z$. Define
(9·7)  $y^1 = x,\quad y^2 = y,\quad y^3 = z,$
and
(9·8)  $\delta_{ij} = 1 \text{ if } i = j,\qquad \delta_{ij} = 0 \text{ if } i \neq j.$
Then 9·6 can be rewritten
(9·9)  $ds^2 = \sum_{i=1}^{3} \sum_{j=1}^{3} \delta_{ij}\, dy^i\, dy^j$
or again
(9·10)  $ds^2 = \delta_{ij}\, dy^i\, dy^j$
with the understanding that the range of the indices $i$ and $j$ is 1 to 3.
Note that there are two summations in 9·10, one over the index $i$ and
one over the index $j$.
Let $f(x^1, x^2, \cdots, x^n)$ be a function of $n$ numerical variables $x^1, x^2,
\cdots, x^n$; then its differential can be written
$df = \frac{\partial f}{\partial x^i}\, dx^i$
with the understanding that the summation convention has been extended
so as to apply to repeated superscripts in differentiation formulas. We
shall adhere to this extension of the summation convention.
It is worth while at this early stage to give an example of a tensor
and show the fundamental nature of such a concept even for elementary portions of the usual differential and integral calculus. This
will dispel, I hope, any illusions common among educated laymen
that the tensor calculus is a very "highbrow" and esoteric subject
and that its main applications are to the physical speculations of
relativistic cosmology.
Euclidean Metric Tensor.⁴
In the following example, free as well as umbral indices will have
the range 1 to 3 as we shall deal with a three-dimensional Euclidean
space. Let
(9·11)  $x^i = f^i(y^1, y^2, y^3)$
be a transformation of coordinates from the rectangular cartesian
coordinates $y^1, y^2, y^3$ to some general coordinates $x^1, x^2, x^3$, not necessarily rectangular cartesian coordinates; for example, they may be
spherical coordinates. The inverse transformation of coordinates to
9·11 is the transformation of coordinates that takes one from the
coordinates $x^1, x^2, x^3$ to the rectangular cartesian coordinates $y^1, y^2, y^3$.
Let
(9·12)  $y^i = g^i(x^1, x^2, x^3)$
be the inverse transformation of coordinates to 9·11.
Example
Let $y^1, y^2, y^3$ be rectangular cartesian coordinates and $x^1, x^2, x^3$
polar spherical coordinates. The transformation of coordinates from
rectangular cartesian to polar spherical coordinates is clearly
$x^1 = \sqrt{(y^1)^2 + (y^2)^2 + (y^3)^2}$
$x^2 = \cos^{-1}\left( \frac{y^3}{\sqrt{(y^1)^2 + (y^2)^2 + (y^3)^2}} \right)$
$x^3 = \tan^{-1}\left( \frac{y^2}{y^1} \right)$
FIG. 9·1.
The inverse transformation of coordinates is given by
$y^1 = x^1 \sin x^2 \cos x^3$
$y^2 = x^1 \sin x^2 \sin x^3$
$y^3 = x^1 \cos x^2.$
The differentials of the transformation functions in 9·12 may be
written
(9·13)  $dy^i = \frac{\partial g^i}{\partial x^\alpha}\, dx^\alpha.$
On using 9·13 we obtain, after an evident rearrangement, the formula
(9·14)  $ds^2 = \sum_{i=1}^{3} \frac{\partial g^i}{\partial x^\alpha} \frac{\partial g^i}{\partial x^\beta}\, dx^\alpha\, dx^\beta.$
If we define the functions $g_{\alpha\beta}(x^1, x^2, x^3)$ of the three independent variables $x^1, x^2, x^3$ by
(9·15)  $g_{\alpha\beta}(x^1, x^2, x^3) = \sum_{i=1}^{3} \frac{\partial g^i}{\partial x^\alpha} \frac{\partial g^i}{\partial x^\beta},$
we see that the square of the line element in the general $x^1, x^2, x^3$ coordinates takes the form
(9·16)  $ds^2 = g_{\alpha\beta}\, dx^\alpha\, dx^\beta.$
This is a homogeneous quadratic polynomial, called a quadratic differential form, in the three independent variables $dx^1, dx^2, dx^3$.
Caution: Once an index has been used in one summation of a series
of repeated summations, it cannot be used again in another summation
of the same series. For example, $g_{\alpha\alpha}\, dx^\alpha\, dx^\alpha$ has a meaning and is equal
to $g_{11}(dx^1)^2 + g_{22}(dx^2)^2 + g_{33}(dx^3)^2$, but that is not what one gets by
carrying out the double repeated summation in 9·16. Expanded in
extenso, 9·16 stands for
(9·17)  $ds^2 = g_{11}(dx^1)^2 + 2g_{12}\, dx^1\, dx^2 + 2g_{13}\, dx^1\, dx^3 + g_{22}(dx^2)^2 + 2g_{23}\, dx^2\, dx^3 + g_{33}(dx^3)^2.$
The factor 2 in three of the terms in 9·17 comes from combining terms
due to the fact that $g_{\alpha\beta}$ is symmetric in $\alpha$ and $\beta$; a glance at the definition 9·15 shows that
$g_{\alpha\beta} = g_{\beta\alpha}$ for each $\alpha$ and $\beta$
and hence
$g_{12} = g_{21},\quad g_{13} = g_{31},\quad g_{23} = g_{32}.$
Now let $\bar x^1, \bar x^2, \bar x^3$ be any chosen general coordinates, not necessarily
distinct from the general coordinates $x^1, x^2, x^3$. Let
(9·18)  $x^i = F^i(\bar x^1, \bar x^2, \bar x^3)$
be the transformation of coordinates from the general coordinates $\bar x^1,
\bar x^2, \bar x^3$ to the general coordinates $x^1, x^2, x^3$. Clearly the differentials $dx^\alpha$
have the form
(9·19)  $dx^\alpha = \frac{\partial x^\alpha}{\partial \bar x^\gamma}\, d\bar x^\gamma.$
Define the functions $\bar g_{\gamma\delta}(\bar x^1, \bar x^2, \bar x^3)$ of the three variables $\bar x^1, \bar x^2, \bar x^3$ by
(9·20)  $\bar g_{\gamma\delta}(\bar x^1, \bar x^2, \bar x^3) = g_{\alpha\beta}(x^1, x^2, x^3)\, \frac{\partial x^\alpha}{\partial \bar x^\gamma} \frac{\partial x^\beta}{\partial \bar x^\delta}.$
Then, if we use 9·19 in 9·16, we obtain, with the aid of the definition
9·20, the formula
(9·21)  $ds^2 = \bar g_{\gamma\delta}(\bar x^1, \bar x^2, \bar x^3)\, d\bar x^\gamma\, d\bar x^\delta,$
which gives the square of the line element in the $\bar x^1, \bar x^2, \bar x^3$ coordinates.
We have thus arrived at the following result:
If $x^1, x^2, x^3$ and $\bar x^1, \bar x^2, \bar x^3$ are two arbitrarily chosen sets of general
coordinates, and if the transformation of coordinates 9·18 from the $\bar x$'s
to the $x$'s has suitable differentiability properties, then the coefficients
$g_{\alpha\beta}(x^1, x^2, x^3)$ of the square of the line element 9·16 in the $x^1, x^2, x^3$ coordinates are related to the coefficients $\bar g_{\gamma\delta}(\bar x^1, \bar x^2, \bar x^3)$ of the square of the
line element 9·21 in the coordinates $\bar x^1, \bar x^2, \bar x^3$ by means of the law of transformation 9·20.
In each coordinate system with coordinates $x^1, x^2, x^3$, we have a
set of functions $g_{\alpha\beta}(x^1, x^2, x^3)$, called the components of the Euclidean
metric tensor (field), and the components of the Euclidean metric
tensor in any two coordinate systems with coordinates $x^1, x^2, x^3$ and
$\bar x^1, \bar x^2, \bar x^3$ respectively are related by means of the characteristic rule
9·20.
An analogous discussion can obviously be given for the line element
and Euclidean metric tensor of the plane (a two-dimensional Euclidean space). We now have two coordinates instead of three so
that the range of the indices is 1 to 2. For example, the line element
$ds$ in rectangular coordinates $(y^1, y^2)$
$ds^2 = (dy^1)^2 + (dy^2)^2$
will become
$ds^2 = g_{\alpha\beta}(x^1, x^2)\, dx^\alpha\, dx^\beta$
in general coordinates $(x^1, x^2)$, and the components of the plane Euclidean metric tensor $g_{\alpha\beta}(x^1, x^2)$ will undergo the transformation
$\bar g_{\gamma\delta}(\bar x^1, \bar x^2) = g_{\alpha\beta}(x^1, x^2)\, \frac{\partial x^\alpha}{\partial \bar x^\gamma} \frac{\partial x^\beta}{\partial \bar x^\delta}.$
Exercises
1. Find the components of the plane Euclidean metric tensor in polar coordinates $(x^1, x^2)$ and the corresponding expression for the line element.
$y^1 = x^1 \cos x^2,\quad y^2 = x^1 \sin x^2,$
and inversely
$x^1 = \sqrt{(y^1)^2 + (y^2)^2},\quad x^2 = \sin^{-1}\frac{y^2}{\sqrt{(y^1)^2 + (y^2)^2}}.$
Hence
$g_{\alpha\beta}(x^1, x^2) = \sum_{i=1}^{2} \frac{\partial y^i}{\partial x^\alpha} \frac{\partial y^i}{\partial x^\beta} = \frac{\partial y^1}{\partial x^\alpha}\frac{\partial y^1}{\partial x^\beta} + \frac{\partial y^2}{\partial x^\alpha}\frac{\partial y^2}{\partial x^\beta}.$
FIG. 9·2.
Therefore
$g_{11}(x^1, x^2) = \cos^2 x^2 + \sin^2 x^2 = 1,$
$g_{12}(x^1, x^2) = (\cos x^2)(-x^1 \sin x^2) + (\sin x^2)(x^1 \cos x^2) = 0 = g_{21}(x^1, x^2),$
$g_{22}(x^1, x^2) = (-x^1 \sin x^2)^2 + (x^1 \cos x^2)^2 = (x^1)^2.$
The line element $ds^2 = g_{\alpha\beta}\, dx^\alpha\, dx^\beta$ in polar coordinates $(x^1, x^2)$ is then
$ds^2 = (dx^1)^2 + (x^1)^2 (dx^2)^2,$
which in the usual notation is written
$ds^2 = dr^2 + r^2\, d\theta^2.$
2. Find the components of the (space) Euclidean metric tensor and the expression for the line element in polar spherical coordinates.
Answer.
$g_{11}(x^1, x^2, x^3) = 1,\quad g_{22}(x^1, x^2, x^3) = (x^1)^2,\quad g_{33}(x^1, x^2, x^3) = (x^1)^2(\sin x^2)^2,$
and all other $g_{\alpha\beta}(x^1, x^2, x^3) = 0$, so that
$ds^2 = (dx^1)^2 + (x^1)^2(dx^2)^2 + (x^1)^2(\sin x^2)^2(dx^3)^2.$
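The answer to exercise 2 can be verified by machine: with $y^i$ the cartesian and $x^\alpha$ the polar spherical coordinates, formula 9·15 gives $g_{\alpha\beta}$ as the sum $\sum_i (\partial y^i/\partial x^\alpha)(\partial y^i/\partial x^\beta)$, which should come out diagonal with entries $1$, $(x^1)^2$, $(x^1)^2 \sin^2 x^2$. A sketch at one sample point, with the partial derivatives entered analytically:

```python
import math

def jacobian(r, th, ph):
    """Rows are d(y1, y2, y3)/dx^alpha for x = (r, theta, phi),
    using y1 = r sin(th) cos(ph), y2 = r sin(th) sin(ph), y3 = r cos(th)."""
    return [
        [math.sin(th) * math.cos(ph),
         math.sin(th) * math.sin(ph),
         math.cos(th)],                                   # d y / d r
        [r * math.cos(th) * math.cos(ph),
         r * math.cos(th) * math.sin(ph),
         -r * math.sin(th)],                              # d y / d theta
        [-r * math.sin(th) * math.sin(ph),
         r * math.sin(th) * math.cos(ph),
         0.0],                                            # d y / d phi
    ]

r, th, ph = 2.0, 0.7, 1.1        # arbitrary sample point
J = jacobian(r, th, ph)          # J[alpha][i] = d y^i / d x^alpha

# Formula 9.15: g_ab = sum_i (dy^i/dx^a)(dy^i/dx^b)
g = [[sum(J[a][i] * J[b][i] for i in range(3)) for b in range(3)]
     for a in range(3)]
```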
CHAPTER 10
VECTOR FIELDS, TENSOR FIELDS, AND EUCLIDEAN
CHRISTOFFEL SYMBOLS
The Strain Tensor.
Another interesting example of a tensor is to be found in elasticity.
Let $a^1, a^2, a^3$ be the curvilinear coordinates of a representative particle
in an elastic medium, and let $x^1, x^2, x^3$ be the coordinates of the representative particle after an elastic deformation of the medium.
Let
(10·1)  $ds_a^2 = h_{\alpha\beta}\, da^\alpha\, da^\beta$
FIG. 10·1.
be the square of the line element in the medium, and let
(10·2)  $ds_x^2 = g_{\alpha\beta}\, dx^\alpha\, dx^\beta$
be the corresponding square of the line element in the deformed medium
induced by the elastic deformation whose equations are
(10·3)  $x^i = f^i(a^1, a^2, a^3).$
In terms of the coordinates $x^1, x^2, x^3$ the line element 10·1 can be
written
(10·4)  $ds_a^2 = \bar h_{\alpha\beta}\, dx^\alpha\, dx^\beta,$
where
(10·5)  $\bar h_{\alpha\beta} = h_{\gamma\delta}\, \frac{\partial a^\gamma}{\partial x^\alpha} \frac{\partial a^\delta}{\partial x^\beta}.$
On subtracting corresponding sides of 10·2 and 10·4 we find
(10·6)  $ds_x^2 - ds_a^2 = 2 E_{\alpha\beta}\, dx^\alpha\, dx^\beta,$
if we define $E_{\alpha\beta}$ by
(10·7)  $E_{\alpha\beta} = \tfrac{1}{2}(g_{\alpha\beta} - \bar h_{\alpha\beta}).$
Now $E_{\alpha\beta}$ are functions of the coordinates $x^1, x^2, x^3$. If then we calculate $ds_x^2 - ds_a^2$ in any other curvilinear coordinates $\bar x^1, \bar x^2, \bar x^3$, we would
obtain by the method of the preceding chapter
$ds_x^2 - ds_a^2 = 2 \bar E_{\gamma\delta}\, d\bar x^\gamma\, d\bar x^\delta,$
where
(10·8)  $\bar E_{\gamma\delta} = E_{\alpha\beta}\, \frac{\partial x^\alpha}{\partial \bar x^\gamma} \frac{\partial x^\beta}{\partial \bar x^\delta}.$
Because of the characteristic law 10·8, $E_{\alpha\beta}$ are the components of a
tensor (field), and because $E_{\alpha\beta}$ in 10·6 is a measure of the strain of the
elastic medium, $E_{\alpha\beta}$ are the components of a strain tensor. We shall
have a good deal to say about the strain tensor in some of the later
chapters.
Scalars, Contravariant Vectors, and Covariant Vectors.
We shall now begin the subject of the tensor calculus by defining
the simplest types of tensors. An object is called a scalar (field) if in
each coordinate system there corresponds a function, called a component, such that the relationship between the components in $(x^1, x^2, x^3)$
coordinates and $(\bar x^1, \bar x^2, \bar x^3)$ coordinates respectively is
(10·9)  $s(x^1, x^2, x^3) = \bar s(\bar x^1, \bar x^2, \bar x^3).$
An object is called a contravariant vector field (an equivalent terminology is contravariant tensor field of rank one) if in each coordinate
system there corresponds a set of three functions, called components,
such that the relationship between the components in any two coordinate systems is given by the characteristic law
(10·10)  $\bar\xi^i(\bar x^1, \bar x^2, \bar x^3) = \xi^\alpha(x^1, x^2, x^3)\, \frac{\partial \bar x^i}{\partial x^\alpha}.$
An object is called a covariant vector field (an equivalent terminology
is covariant tensor field of rank one) if in each coordinate system there
corresponds a set of three functions, called components, such that the
relationship between the components in any two coordinate systems
is given by the characteristic law
(10·11)  $\bar\eta_i(\bar x^1, \bar x^2, \bar x^3) = \eta_\alpha(x^1, x^2, x^3)\, \frac{\partial x^\alpha}{\partial \bar x^i}.$
It is to be noticed at this point that the laws of transformation
10·10 and 10·11 are in general distinct so that there is a difference
between the notions of contravariant vector field and covariant vector
field. However, if only rectangular cartesian coordinates are considered, this distinction disappears. It is for this reason that the
notions of contravariant as distinguished from covariant vector fields
are not introduced in elementary vector analysis. That the characteristic laws 10·10 and 10·11 are identical in rectangular cartesian
coordinates follows from some calculations leading to the result
(10·12)  $\frac{\partial \bar y^i}{\partial y^\alpha} = \frac{\partial y^\alpha}{\partial \bar y^i}$
between rectangular cartesian coordinates $(y^1, y^2, y^3)$ and any other rectangular cartesian
coordinates $(\bar y^1, \bar y^2, \bar y^3)$.
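The covariant law 10·11 can be checked concretely in the plane. Take the illustrative scalar field $s = (y^1)^2 + (y^2)^2$, whose gradient in cartesian coordinates is $(2y^1, 2y^2)$. Transforming these components by $\eta_\alpha\, \partial y^\alpha/\partial \bar x^i$ into polar coordinates must reproduce the direct polar gradient $(2r, 0)$, since $s = r^2$ there. A sketch:

```python
import math

r, th = 1.7, 0.6                           # arbitrary sample point
y1, y2 = r * math.cos(th), r * math.sin(th)

# Cartesian covariant components eta_alpha = ds/dy^alpha for s = y1^2 + y2^2.
grad_cart = [2.0 * y1, 2.0 * y2]

# dy^alpha / dxbar^i for the polar coordinates xbar = (r, theta).
dy_dx = [
    [math.cos(th), math.sin(th)],            # d(y1, y2)/dr
    [-r * math.sin(th), r * math.cos(th)],   # d(y1, y2)/dtheta
]

# Covariant law 10.11: etabar_i = eta_alpha * dy^alpha/dxbar^i.
grad_polar = [sum(grad_cart[a] * dy_dx[i][a] for a in range(2))
              for i in range(2)]
```

The result agrees with differentiating $s = r^2$ directly, which is what makes the gradient a covariant vector field.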
Tensor Fields of Rank Two.
In all three objects (scalar fields, contravariant vector fields, and
covariant vector fields) there are components in any two coordinate
systems, and the components in any two coordinate systems are related by characteristic transformation laws. We have to consider
other objects, called tensor fields (of various sorts), whose components
in any two coordinate systems are related by a characteristic transformation law. To shorten the statements of the following definitions
we shall merely give the characteristic transformation law of components.
Covariant Tensor Field of Rank Two.
(10·13)  $\bar t_{\alpha\beta}(\bar x^1, \bar x^2, \bar x^3) = t_{\lambda\mu}(x^1, x^2, x^3)\, \frac{\partial x^\lambda}{\partial \bar x^\alpha} \frac{\partial x^\mu}{\partial \bar x^\beta}.$
Contravariant Tensor Field of Rank Two.
(10·14)  $\bar t^{\alpha\beta}(\bar x^1, \bar x^2, \bar x^3) = t^{\lambda\mu}(x^1, x^2, x^3)\, \frac{\partial \bar x^\alpha}{\partial x^\lambda} \frac{\partial \bar x^\beta}{\partial x^\mu}.$
Mixed Tensor Field of Rank Two.
(10·15)  $\bar t^\alpha_\beta(\bar x^1, \bar x^2, \bar x^3) = t^\lambda_\mu(x^1, x^2, x^3)\, \frac{\partial \bar x^\alpha}{\partial x^\lambda} \frac{\partial x^\mu}{\partial \bar x^\beta}.$
Again because of relation 10·12, the difference between the above
three types of tensor fields is nonexistent as long as one considers
only rectangular cartesian coordinates.
It is worthy of notice at this point that the indices in the various
characteristic transformation laws tell a story which depends on
whether the index is a superscript, called contravariant index, or a
subscript, called covariant index.
Before proceeding any further with the development of our subject,
it would be illuminating to have some examples of vector fields and
other tensor fields. Perhaps the most important example of a contravariant vector field is a velocity field. Suppose that the motion of a
particle is governed by the differential equations
$\frac{dx^i}{dt} = \xi^i(x^1, x^2, x^3),$
where $t$ is the time variable. If $\bar x^i = f^i(x^1, x^2, x^3)$ is a transformation
of coordinates to new coordinates $\bar x^i$, then
$\frac{d\bar x^i}{dt} = \frac{\partial \bar x^i}{\partial x^j}\, \frac{dx^j}{dt} = \frac{\partial \bar x^i}{\partial x^j}\, \xi^j(x^1, x^2, x^3).$
Thus the components $\bar\xi^i(\bar x^1, \bar x^2, \bar x^3)$ of the velocity field in the $\bar x^i$ coordinates are related to the components $\xi^i(x^1, x^2, x^3)$ in the $x^i$ coordinates by the rule
$\bar\xi^i(\bar x^1, \bar x^2, \bar x^3) = \xi^j(x^1, x^2, x^3)\, \frac{\partial \bar x^i}{\partial x^j},$
which is precisely the contravariant vector field rule.
If $s(x^1, x^2, x^3)$ is a scalar field, then the "gradient" $\frac{\partial s}{\partial x^i}$ are the components of a covariant vector field. An important example of a scalar
field is the potential energy of a moving particle.
We gave two examples of a covariant tensor field of rank two: the
Euclidean metric tensor and the strain tensor. A little later in the
chapter we shall give an example of a contravariant tensor field of rank
two, the $g^{\alpha\beta}$ associated with the Euclidean metric tensor $g_{\alpha\beta}$.
As an example of a mixed tensor field of rank two, we have the
mixed tensor field with constant components
$\delta^i_j = 0 \text{ if } i \neq j, \qquad = 1 \text{ if } i = j$
in the $x^i$ coordinates. But
$\bar\delta^\alpha_\beta(\bar x^1, \bar x^2, \bar x^3) = \delta^\lambda_\mu\, \frac{\partial \bar x^\alpha}{\partial x^\lambda}\, \frac{\partial x^\mu}{\partial \bar x^\beta} = \frac{\partial \bar x^\alpha}{\partial x^\mu}\, \frac{\partial x^\mu}{\partial \bar x^\beta}.$
Hence
$\bar\delta^\alpha_\beta(\bar x^1, \bar x^2, \bar x^3) = 0 \text{ if } \alpha \neq \beta, \qquad = 1 \text{ if } \alpha = \beta.$
In other words, not only are the components constant throughout space,
but they are also the same constants in all coordinates.
One of the first fundamental problems in the tensor calculus is to
extend the notion of partial derivative to the notion of covariant derivative in such a manner that the covariant derivative of a tensor field is
also some tensor field. It is true that, if one restricts his work only
to cartesian coordinates (oblique axes), then the partial derivatives
of any tensor field behave like the components of a tensor field under
a transformation of cartesian coordinates to cartesian coordinates.
For example, suppose that $(x^1, x^2, x^3)$ are cartesian coordinates and
$(\bar x^1, \bar x^2, \bar x^3)$ are any other cartesian coordinates; then it can be shown
that for suitable constants $a^i_j$ and $a^i_0$
(10·16)  $\bar x^i = a^i_j x^j + a^i_0$
is the transformation of coordinates taking the cartesian coordinates
$x^i$ to the cartesian coordinates $\bar x^i$. Hence
(10·17)  $\frac{\partial \bar x^i}{\partial x^j} = a^i_j,$
a set of constants, and so
(10·18)  $\frac{\partial^2 \bar x^i}{\partial x^j\, \partial x^k} = 0.$
Now, if $\xi^i(x^1, x^2, x^3)$ are the components of a contravariant vector
field, then
(10·19)  $\bar\xi^i = \xi^\alpha\, \frac{\partial \bar x^i}{\partial x^\alpha}.$
On differentiating corresponding sides of 10·19, we obtain
(10·20)  $\frac{\partial \bar\xi^i}{\partial \bar x^j} = \frac{\partial \xi^\alpha}{\partial x^\beta}\, \frac{\partial x^\beta}{\partial \bar x^j}\, \frac{\partial \bar x^i}{\partial x^\alpha} + \xi^\alpha\, \frac{\partial^2 \bar x^i}{\partial x^\alpha\, \partial x^\beta}\, \frac{\partial x^\beta}{\partial \bar x^j}.$
But both $x^i$ and $\bar x^i$ are cartesian coordinates. Hence on using 10·18
in 10·20 we obtain
(10·21)  $\frac{\partial \bar\xi^i}{\partial \bar x^j} = \frac{\partial \xi^\alpha}{\partial x^\beta}\, \frac{\partial x^\beta}{\partial \bar x^j}\, \frac{\partial \bar x^i}{\partial x^\alpha},$
which states that the partial derivatives $\frac{\partial \xi^\alpha}{\partial x^\beta}$ behave as though they were the
components of a mixed tensor field of rank two, and this under a transformation from cartesian coordinates to cartesian coordinates.
The presence of the second derivative terms in 10·20 in curvilinear
coordinates $\bar x^i$ shows that the $\frac{\partial \xi^\alpha}{\partial x^\beta}$ are not really the components of a
tensor field. So the fundamental question arises whether it is possible
to add corrective terms $C^\alpha_\beta$ (all zero in cartesian coordinates) to the
partial derivatives $\frac{\partial \xi^\alpha}{\partial x^\beta}$ so as to make
(10·22)  $\frac{\partial \xi^\alpha}{\partial x^\beta} + C^\alpha_\beta$
the components of a mixed tensor field of rank two for all contravariant
vector fields $\xi^\alpha$. The answer to this question is in the affirmative, and the
possibility of the corrective terms depends on the existence of the
Euclidean Christoffel symbols.²
Euclidean Christoffel Symbols.
We saw in the previous chapter that the element of arc length
squared in general coordinates takes the form
(10·23)  $ds^2 = g_{\alpha\beta}(x^1, x^2, x^3)\, dx^\alpha\, dx^\beta,$
where, in the present terminology, $g_{\alpha\beta}$ are the components of a covariant tensor field of rank two, called the Euclidean metric tensor. Now
it can be proved that the determinant³
(10·24)  $g = \begin{vmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{vmatrix} \neq 0.$
Define
(10·25)  $g^{\alpha\beta} = \frac{\text{Cofactor of } g_{\alpha\beta} \text{ in } g}{g}.$
As the notation indicates, it can be proved that the functions $g^{\alpha\beta}$ are the
components of a contravariant tensor field of rank two with the following
properties:
(10·26)  $g^{\alpha\beta} = g^{\beta\alpha},\qquad g^{\alpha\nu} g_{\nu\beta} = \delta^\alpha_\beta$ (equals 0 if $\alpha \neq \beta$ and 1 if $\alpha = \beta$).
Define the Euclidean Christoffel symbols $\Gamma^i_{\alpha\beta}(x^1, x^2, x^3)$ as follows:
(10·27)  $\Gamma^i_{\alpha\beta}(x^1, x^2, x^3) = \tfrac{1}{2}\, g^{i\nu}\left( \frac{\partial g_{\nu\beta}}{\partial x^\alpha} + \frac{\partial g_{\nu\alpha}}{\partial x^\beta} - \frac{\partial g_{\alpha\beta}}{\partial x^\nu} \right).$
Since the laws of transformation of the components $g_{\alpha\beta}$ and $g^{\alpha\beta}$ are
known, one can calculate the law of transformation of the $\Gamma^i_{\alpha\beta}(x^1, x^2, x^3)$
under a general transformation of coordinates $\bar x^i = f^i(x^1, x^2, x^3)$.
Let $\bar g_{\alpha\beta}$ and $\bar g^{\alpha\beta}$ be the components of the Euclidean metric tensor
and its associated contravariant tensor respectively in the $\bar x^i$ coordinates. Then if we define
(10·28)  $\bar\Gamma^i_{\alpha\beta}(\bar x^1, \bar x^2, \bar x^3) = \tfrac{1}{2}\, \bar g^{i\nu}\left( \frac{\partial \bar g_{\nu\beta}}{\partial \bar x^\alpha} + \frac{\partial \bar g_{\nu\alpha}}{\partial \bar x^\beta} - \frac{\partial \bar g_{\alpha\beta}}{\partial \bar x^\nu} \right),$
we can prove by a long but straightforward calculation that the
$\bar\Gamma^i_{\alpha\beta}(\bar x^1, \bar x^2, \bar x^3)$ are related to the $\Gamma^\lambda_{\mu\nu}(x^1, x^2, x^3)$ by the following famous
transformation law:⁴
(10·29)  $\bar\Gamma^i_{\alpha\beta}(\bar x^1, \bar x^2, \bar x^3) = \Gamma^\lambda_{\mu\nu}(x^1, x^2, x^3)\, \frac{\partial x^\mu}{\partial \bar x^\alpha}\, \frac{\partial x^\nu}{\partial \bar x^\beta}\, \frac{\partial \bar x^i}{\partial x^\lambda} + \frac{\partial^2 x^\lambda}{\partial \bar x^\alpha\, \partial \bar x^\beta}\, \frac{\partial \bar x^i}{\partial x^\lambda}.$
In the previous chapter we saw that
(10·30)  $g_{\alpha\beta}(x^1, x^2, x^3) = \sum_{\nu=1}^{3} \frac{\partial y^\nu}{\partial x^\alpha} \frac{\partial y^\nu}{\partial x^\beta},$
where the $y$'s are rectangular cartesian coordinates. Consequently,
if the $x$'s are cartesian coordinates, it follows that all the components
$g_{\alpha\beta}(x^1, x^2, x^3)$ are constants. In other words, $\Gamma^i_{\alpha\beta} = 0$ in cartesian
coordinates $x^i$. We have then immediately the important result that
the Euclidean Christoffel symbols $\Gamma^i_{\alpha\beta}(x^1, x^2, x^3)$ are identically zero in
cartesian coordinates.
If the $y^i$ are rectangular cartesian coordinates and the $x^i$ are general coordinates,
one can calculate the Euclidean Christoffel symbols $\Gamma^i_{\alpha\beta}(x^1, x^2, x^3)$
directly in terms of the derivatives of the transformation functions in
the transformation of coordinates
$x^i = f^i(y^1, y^2, y^3)$
and in the inverse transformation of coordinates
$y^i = \varphi^i(x^1, x^2, x^3).$
Since all the Euclidean Christoffel symbols are zero when they are
evaluated in cartesian coordinates $y^i$, it follows immediately from the
transformation law 10·29 that the Euclidean Christoffel symbols in
general coordinates $x^i$ are given by the simple formula
(10·31)  $\Gamma^i_{\alpha\beta}(x^1, x^2, x^3) = \frac{\partial^2 y^\lambda}{\partial x^\alpha\, \partial x^\beta}\, \frac{\partial x^i}{\partial y^\lambda}.$
This formula is often found to be more convenient in computations
than the defining formula 10·27.
Caution: The Christoffel symbols are not the components of a tensor
field so that $i$, $\alpha$, and $\beta$ are not tensor indices; i.e., $i$ is not a contravariant index and $\alpha$, $\beta$ are not covariant indices.
The concepts of tensor fields and Euclidean Christoffel symbols can,
by the obvious changes, be studied in plane geometry (two-dimensional Euclidean space). Since we have two coordinates for a point in
the plane, all components of tensors and the Christoffel symbols will
depend on two variables, and the range of the indices will be from
1 to 2 instead of 1 to 3. Thus the Euclidean Christoffel symbols for the
plane will be
(10·32)  $\Gamma^i_{\alpha\beta}(x^1, x^2) = \tfrac{1}{2}\, g^{i\nu}(x^1, x^2)\left( \frac{\partial g_{\nu\beta}}{\partial x^\alpha} + \frac{\partial g_{\nu\alpha}}{\partial x^\beta} - \frac{\partial g_{\alpha\beta}}{\partial x^\nu} \right)$
in terms of the Euclidean metric tensor $g_{\alpha\beta}(x^1, x^2)$ for the plane. The
alternative expression in terms of the derivatives of the transformation
functions from rectangular coordinates $(y^1, y^2)$ to general coordinates
$(x^1, x^2)$ and of the inverse transformation functions will be
(10·33)  $\Gamma^i_{\alpha\beta}(x^1, x^2) = \frac{\partial^2 y^\lambda}{\partial x^\alpha\, \partial x^\beta}\, \frac{\partial x^i}{\partial y^\lambda}.$
Exercise
Compute from the definition 10·32 the Euclidean Christoffel symbols for the
plane in polar coordinates. Then check the results by computing them from 10·33.
Hint: Use results of exercise 1 of Chapter 9 and find $g^{11} = 1$, $g^{12} = g^{21} = 0$, $g^{22} = \dfrac{1}{(x^1)^2}$.
Answer.
$\Gamma^1_{22} = -x^1,\quad \Gamma^2_{12} = \Gamma^2_{21} = \dfrac{1}{x^1},$ and all the other $\Gamma^i_{\alpha\beta} = 0.$
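The exercise can also be done by machine. With $g = \mathrm{diag}(1, (x^1)^2)$ and its inverse, formula 10·32 should give $\Gamma^1_{22} = -x^1$ and $\Gamma^2_{12} = \Gamma^2_{21} = 1/x^1$. A sketch, with the metric derivatives entered analytically:

```python
def christoffel_polar(r):
    """Formula 10.32 for plane polar coordinates x1 = r, x2 = theta,
    with g = diag(1, r^2).  Indices here are zero-based."""
    ginv = [[1.0, 0.0], [0.0, 1.0 / r**2]]       # g^{i nu}
    # dg[c][a][b] = d g_ab / d x^c ; only d g_22 / d x^1 = 2 r survives.
    dg = [[[0.0, 0.0], [0.0, 2.0 * r]],
          [[0.0, 0.0], [0.0, 0.0]]]
    G = [[[0.0, 0.0], [0.0, 0.0]] for _ in range(2)]
    for i in range(2):
        for a in range(2):
            for b in range(2):
                G[i][a][b] = 0.5 * sum(
                    ginv[i][n] * (dg[a][n][b] + dg[b][n][a] - dg[n][a][b])
                    for n in range(2))
    return G

G = christoffel_polar(2.5)   # sample value x1 = 2.5
```

`G[0][1][1]` is $\Gamma^1_{22} = -x^1$ and `G[1][0][1]` is $\Gamma^2_{12} = 1/x^1$, confirming the answer above.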
CHAPTER 11
TENSOR ANALYSIS
Covariant Differentiation of Vector Fields.
Having shown the existence of the Euclidean Christoffel symbols,
we are now in a position to give a complete answer to the fundamental
question (enunciated in the previous chapter) on the extension of
the notion of partial differentiation. We shall now prove that the
functions
(11·1)  $\frac{\partial \xi^i}{\partial x^\alpha}(x^1, x^2, x^3) + \Gamma^i_{\mu\alpha}(x^1, x^2, x^3)\, \xi^\mu(x^1, x^2, x^3)$
are the components of a mixed tensor field of rank two, called the covariant derivative of $\xi^i$. This result holds for every differentiable contravariant vector field $\xi^i$ with the understanding that the $\Gamma^i_{\mu\alpha}(x^1, x^2, x^3)$
are the Euclidean Christoffel symbols. We shall use the notation
$\xi^i_{,\alpha}$ for the covariant derivative of $\xi^i$. By hypothesis we have
(11·2)  $\bar\xi^i(\bar x^1, \bar x^2, \bar x^3) = \xi^\lambda(x^1, x^2, x^3)\, \frac{\partial \bar x^i}{\partial x^\lambda}.$
If we then differentiate corresponding sides of equation 11·2 we
obtain, by the well-known rules of partial differentiation,
(11·3)  $\frac{\partial \bar\xi^i}{\partial \bar x^\alpha} = \frac{\partial \xi^\lambda}{\partial x^\nu}\, \frac{\partial x^\nu}{\partial \bar x^\alpha}\, \frac{\partial \bar x^i}{\partial x^\lambda} + \xi^\lambda\, \frac{\partial^2 \bar x^i}{\partial x^\lambda\, \partial x^\nu}\, \frac{\partial x^\nu}{\partial \bar x^\alpha}.$
We also have
(11·4)  $\bar\Gamma^i_{j\alpha} = \Gamma^\lambda_{\mu\nu}\, \frac{\partial \bar x^i}{\partial x^\lambda}\, \frac{\partial x^\mu}{\partial \bar x^j}\, \frac{\partial x^\nu}{\partial \bar x^\alpha} + \frac{\partial^2 x^\lambda}{\partial \bar x^j\, \partial \bar x^\alpha}\, \frac{\partial \bar x^i}{\partial x^\lambda}.$
If we multiply corresponding sides of 11·4 by $\bar\xi^j$ and sum on $j$ we obtain
(11·5)  $\bar\Gamma^i_{j\alpha}\, \bar\xi^j = \Gamma^\lambda_{\mu\nu}\, \xi^\mu\, \frac{\partial x^\nu}{\partial \bar x^\alpha}\, \frac{\partial \bar x^i}{\partial x^\lambda} + \xi^\mu\, \frac{\partial \bar x^j}{\partial x^\mu}\, \frac{\partial^2 x^\lambda}{\partial \bar x^j\, \partial \bar x^\alpha}\, \frac{\partial \bar x^i}{\partial x^\lambda},$
on using
$\bar\xi^j\, \frac{\partial x^\mu}{\partial \bar x^j} = \xi^\mu$
in the first set of terms and
$\bar\xi^j = \xi^\mu\, \frac{\partial \bar x^j}{\partial x^\mu}$
in the second set of terms. Since $\lambda$ and $\nu$ are summation indices, we
can interchange $\lambda$ and $\nu$ in the second derivative terms of 11·5.
On carrying out the renaming of these umbral indices, we can add
corresponding sides of 11·3 and 11·5 and obtain
(11·6)  $\frac{\partial \bar\xi^i}{\partial \bar x^\alpha} + \bar\Gamma^i_{j\alpha}\, \bar\xi^j = \left( \frac{\partial \xi^\lambda}{\partial x^\nu} + \Gamma^\lambda_{\mu\nu}\, \xi^\mu \right) \frac{\partial x^\nu}{\partial \bar x^\alpha}\, \frac{\partial \bar x^i}{\partial x^\lambda} + \xi^\lambda \left( \frac{\partial^2 \bar x^i}{\partial x^\lambda\, \partial x^\nu}\, \frac{\partial x^\nu}{\partial \bar x^\alpha} + \frac{\partial \bar x^j}{\partial x^\lambda}\, \frac{\partial^2 x^\nu}{\partial \bar x^j\, \partial \bar x^\alpha}\, \frac{\partial \bar x^i}{\partial x^\nu} \right).$
Now
(11·7)  $\frac{\partial \bar x^j}{\partial x^\lambda}\, \frac{\partial x^\nu}{\partial \bar x^j} = \delta^\nu_\lambda,$
where
$\delta^\nu_\lambda = 1$ if $\nu = \lambda$,  $\delta^\nu_\lambda = 0$ if $\nu \neq \lambda$.
On differentiating 11·7 with respect to $\bar x^\alpha$ we obtain
(11·8)  $\frac{\partial^2 \bar x^j}{\partial x^\lambda\, \partial x^\mu}\, \frac{\partial x^\mu}{\partial \bar x^\alpha}\, \frac{\partial x^\nu}{\partial \bar x^j} + \frac{\partial \bar x^j}{\partial x^\lambda}\, \frac{\partial^2 x^\nu}{\partial \bar x^j\, \partial \bar x^\alpha} = 0$
and hence 11·6 reduces to
(11·9)  $\frac{\partial \bar\xi^i}{\partial \bar x^\alpha} + \bar\Gamma^i_{j\alpha}\, \bar\xi^j = \left( \frac{\partial \xi^\lambda}{\partial x^\nu} + \Gamma^\lambda_{\mu\nu}\, \xi^\mu \right) \frac{\partial x^\nu}{\partial \bar x^\alpha}\, \frac{\partial \bar x^i}{\partial x^\lambda}.$
But 11·9 states that the functions
$\frac{\partial \xi^\lambda}{\partial x^\nu} + \Gamma^\lambda_{\mu\nu}\, \xi^\mu$
are the components of a mixed tensor field of rank two. This completes
the proof of the result stated at the beginning of this chapter.
By a slight variation of the above method of proof, it can be established that the functions
(11·10)  $\frac{\partial \xi_i}{\partial x^\alpha} - \Gamma^\lambda_{i\alpha}\, \xi_\lambda$
are the components of a covariant tensor field of rank two whenever $\xi_i$ are
the components of a covariant vector field, called the covariant derivative of
$\xi_i$. As before, $\Gamma^\lambda_{i\alpha}$ are the Euclidean Christoffel symbols. We shall use
the notation $\xi_{i,\alpha}$ for the covariant derivative of $\xi_i$.
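The tensor character asserted by 11·9 can be verified concretely in the plane (range of indices 1 to 2). For the illustrative cartesian field $\xi = (-y^2, y^1)$ the polar components work out to the constants $\bar\xi = (0, 1)$, so the polar covariant derivative reduces to the Christoffel terms alone; 11·9 then claims this equals the transformed cartesian partial derivatives. A sketch:

```python
import math

r, th = 1.3, 0.8                  # arbitrary sample point

# Cartesian field xi = (-y2, y1); since the cartesian Christoffel
# symbols vanish, its covariant derivative there is just the matrix
# of partial derivatives d xi^i / d y^a:
dxi_cart = [[0.0, -1.0],
            [1.0, 0.0]]

# Polar components of the same field come out constant, xibar = (0, 1),
# so d xibar / d xbar = 0 and 11.1 leaves only Gamma^i_{2 alpha}
# (from the exercise: Gamma^1_22 = -r, Gamma^2_12 = 1/r):
cov_polar = [[0.0, -r],
             [1.0 / r, 0.0]]      # cov_polar[i][a] = xibar^i_{,a}

# Transform the cartesian covariant derivative by the mixed rank-two
# law and compare with cov_polar.
dxbar_dy = [[math.cos(th), math.sin(th)],
            [-math.sin(th) / r, math.cos(th) / r]]   # d xbar^i / d y^j
dy_dxbar = [[math.cos(th), -r * math.sin(th)],
            [math.sin(th), r * math.cos(th)]]        # d y^b / d xbar^a

T = [[sum(dxi_cart[j][b] * dxbar_dy[i][j] * dy_dxbar[b][a]
          for j in range(2) for b in range(2))
      for a in range(2)] for i in range(2)]
```

The transformed matrix `T` agrees with `cov_polar`, as the tensor law requires.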
Tensor Fields of Rank T = P + q. Contravariant of Rank p and Co
variant of Rank q.
It is conyenient at this point to give the definition of a general tensor
field. As in the case of tensor fields of rank two, the definition will be
58 TENSOR ANALYSIS
clear if we give the law of transformation of its components under a
transformation of coordinates.
 ()X" ()x"
(1111) r, til) = 7'1::::1:(X
1
,:z:2, :J!?},
()r
x
()x')'1 ()x'YfI
We are now in a position to consider some problems that arise in
taking successive covariant derivatives of tensor fields. However, we
must first say a word or two about the formula for the covariant deriva-
tive of a tensor field. If T^{α₁···α_p}_{β₁···β_q} are the components of a tensor field of
rank p + q, contravariant of rank p and covariant of rank q, then the
functions T^{α₁···α_p}_{β₁···β_q,γ} defined by

(11.12)  T^{α₁···α_p}_{β₁···β_q,γ} = ∂T^{α₁···α_p}_{β₁···β_q}/∂x^γ
            + Γ^{α₁}_{γσ} T^{σα₂···α_p}_{β₁···β_q} + ··· + Γ^{α_p}_{γσ} T^{α₁···α_{p−1}σ}_{β₁···β_q}
            − Γ^σ_{γβ₁} T^{α₁···α_p}_{σβ₂···β_q} − ··· − Γ^σ_{γβ_q} T^{α₁···α_p}_{β₁···β_{q−1}σ}

are the components of a tensor field of rank p + q + 1, contravariant of
rank p and covariant of rank q + 1.
T^{α₁···α_p}_{β₁···β_q,γ} will be called the covariant derivative of T^{α₁···α_p}_{β₁···β_q}. The above
result,¹ stating that the covariant derivative of a tensor field is indeed
a tensor field, can be proved by a long but quite straightforward calcu-
lation analogous to the method of proof given for the covariant deriva-
tive of vector fields.
Since the covariant derivative of a tensor field is a tensor field, we
can consider the covariant derivative of the latter tensor field, called
the second covariant derivative of the original tensor field. In symbols,
if T^{α₁···α_p}_{β₁···β_q} is the original tensor field, we can consider its second suc-
cessive covariant derivative

    T^{α₁···α_p}_{β₁···β_q,γ,δ}.

The fundamental question arises whether covariant differentiation is a
commutative operation, i.e., whether

(11.13)  T^{α₁···α_p}_{β₁···β_q,γ,δ} = T^{α₁···α_p}_{β₁···β_q,δ,γ}.

The answer is in the affirmative.² In fact, since all the Euclidean Chris-
toffel symbols are zero in cartesian coordinates, the partial derivatives
of all orders of the Christoffel symbols are also zero in cartesian coordi-
nates. Hence, if *T^{α₁···α_p}_{β₁···β_q}(y¹, y², y³) are the components of T^{α₁···α_p}_{β₁···β_q} in
the cartesian coordinates y^i, we find with the obvious repeated use of
formula 11.12 that

    *T^{α₁···α_p}_{β₁···β_q,γ,δ}(y¹, y², y³) = ∂²*T^{α₁···α_p}_{β₁···β_q}/∂y^γ ∂y^δ.
In other words, we have the result that successive covariant derivatives
of a tensor field reduce to partial derivatives of the tensor field, whenever
the tensor field and the operations are evaluated in cartesian coordinates.
Properties of Tensor Fields.
Perhaps the most important property of tensor fields is the following:
If the components of a tensor field vanish identically (or at one point, or
at a set of points) in one coordinate system, they vanish likewise in all
coordinate systems. This result follows immediately on inspecting the
law of transformation 11.11 of the components of a tensor field.
If, then, one can demonstrate that a tensor equation

    T^{α₁···α_p}_{β₁···β_q} = 0

holds good in one coordinate system, it will necessarily hold good,
without further calculation, in all coordinate systems. For example,
consider the covariant derivative g_{ij,k} of the Euclidean metric tensor.
Now, since

    g_{ij,k} = ∂g_{ij}/∂x^k − g_{σj} Γ^σ_{ik} − g_{iσ} Γ^σ_{jk},

and since the g_{ij} are constants when evaluated in cartesian coordinates,
we have g_{ij,k} = 0 in cartesian coordinates, and hence the tensor equation
g_{ij,k} = 0 holds in all coordinates throughout space.
Exercises
1. Prove that the covariant derivatives of g^{ij} and of δ^i_j are zero. See equation
10.26 for the definition of the tensor g^{ij}. The mixed tensor δ^i_j = 1 if i = j and
δ^i_j = 0 if i ≠ j.
2. If T^i_j is any tensor field, then show that T^i_i is a scalar field. Similarly, if
T^i_{jk} is a mixed tensor field of rank three, show that T^i_{ik} is a covariant vector field.
3. If the tensor field T^i_j is defined by T^i_j = λ_{jα} μ^{iα} in terms of the two tensor
fields λ_{αβ} and μ^{αβ}, prove that the following formula (reminiscent of the differentia-
tion of a product in the ordinary differential calculus) holds:

    T^i_{j,γ} = λ_{jα,γ} μ^{iα} + λ_{jα} μ^{iα}_{,γ}.
CHAPTER 12
LAPLACE EQUATION, WAVE EQUATION, AND POISSON
EQUATION IN CURVILINEAR COORDINATES
Some Further Concepts and Remarks on the Tensor Calculus.
We remarked in Chapter 10 that, if S(x¹, x², x³) is a scalar field, then
∂S/∂x^i is a covariant vector field. So, to complete the picture of covariant
differentiation, we can call ∂S/∂x^i the covariant derivative of the scalar field
S(x¹, x², x³).
For some discussions it is convenient to extend the notion of a tensor
field. By a relative tensor field of weight w, we shall mean an object
with components whose transformation law differs from the transforma-
tion law of a tensor field by the appearance of the functional determi-
nant (Jacobian) to the wth power as a factor on the right side of the
equations. If w = 0, we have the previous notion of a tensor field.
For example:

    S̄(x̄¹, x̄², x̄³) = |∂x^i/∂x̄^j|^w S(x¹, x², x³)

and

    ξ̄^i(x̄¹, x̄², x̄³) = |∂x^j/∂x̄^k|^w ξ^α(x¹, x², x³) ∂x̄^i/∂x^α,

where |∂x^i/∂x̄^j| stands for the three-rowed functional determinant

    | ∂x¹/∂x̄¹  ∂x¹/∂x̄²  ∂x¹/∂x̄³ |
    | ∂x²/∂x̄¹  ∂x²/∂x̄²  ∂x²/∂x̄³ |
    | ∂x³/∂x̄¹  ∂x³/∂x̄²  ∂x³/∂x̄³ |,

are the transformation laws for a relative scalar field of weight w and
a relative contravariant vector field of weight w respectively. A rela-
tive scalar field of weight one is called a scalar density, a terminology
suggested by the physical example of the density of a solid or fluid.
In fact, the mass m is related to the density function ρ(x¹, x², x³) of the
solid or fluid by

    m = ∭ ρ(x¹, x², x³) dx¹ dx² dx³,

where the triple integral is extended over the whole extent of the solid
or fluid.
Another important example of a scalar density is given by √g,
where g is the determinant of the Euclidean metric tensor g_{αβ}. In fact,

(12.1)  ḡ_{αβ} = g_{ab} (∂x^a/∂x̄^α)(∂x^b/∂x̄^β).

Let g = |g_{αβ}|, the determinant of the g_{αβ}. By a double use of the
formula for the product of two determinants when applied to 12.1,
one can prove that

(12.2)  ḡ = |∂x^a/∂x̄^α|² g,

where |∂x^a/∂x̄^α| stands for the determinant of the partial derivatives
∂x^a/∂x̄^α. On taking square roots in 12.2 we obviously get

(12.3)  √ḡ = |∂x^a/∂x̄^α| √g,

which states that √g is a scalar density.
The √g enters in an essential manner in the formula for the volume
enclosed by a closed surface. In fact, the formula for the volume in
general curvilinear coordinates x^i is given by the triple integral

(12.4)  V = ∭ √g dx¹ dx² dx³.
This form for the volume can be calculated readily by the following
steps. If y^i are rectangular coordinates, then

    V = ∭ dy¹ dy² dy³.

Hence

(12.5)  V = ∭ J dx¹ dx² dx³,

where J stands for the functional determinant |∂y^i/∂x^j|
of the transformation of coordinates from the curvilinear coordinates
x^i to the rectangular coordinates y^i. Now

    g_{αβ}(x¹, x², x³) = Σ_{s=1}^{3} (∂y^s/∂x^α)(∂y^s/∂x^β),

and hence the determinant

    g = |g_{αβ}|

is precisely equal to J² on using the rule for the multiplication of two
determinants. In other words

(12.6)  √g = J,

from which the formula 12.4 for the volume becomes clear.
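The determinant identity g = J² can be verified symbolically for a concrete transformation. The sketch below (not from the book; it uses sympy and anticipates the spherical-polar transformation of formula 12.15) builds g_{αβ} = Σ_s (∂y^s/∂x^α)(∂y^s/∂x^β) directly from the Jacobian matrix:

```python
# Check that g = J^2 (formula 12.6) for the map from spherical coordinates
# (x1,x2,x3) = (r,theta,phi) to rectangular coordinates y^i.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
y = sp.Matrix([r*sp.sin(th)*sp.cos(ph),
               r*sp.sin(th)*sp.sin(ph),
               r*sp.cos(th)])                     # transformation 12.15
Jm = y.jacobian([r, th, ph])                      # matrix of ∂y^s/∂x^α
J = sp.simplify(Jm.det())                         # functional determinant
g = sp.simplify((Jm.T * Jm).det())                # det of g_{αβ} = (J^T J)_{αβ}
print(sp.simplify(g - J**2))  # 0, so sqrt(g) = J = r^2 sin(theta)
```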
Since the formula for volume in general coordinates x^i is given by
12.4, it follows that the mass m of a medium in general coordinates x^i
has the form

    m = ∭ ρ₀(x¹, x², x³) √g dx¹ dx² dx³,

where ρ₀(x¹, x², x³) is an absolute scalar field that defines the (physical)
density of the medium at each particle (x¹, x², x³) of the medium.
Clearly ρ(x¹, x², x³) = ρ₀(x¹, x², x³)√g is a scalar density and, since
√g = 1 in rectangular coordinates, has the same components as the
density ρ₀(x¹, x², x³) of the medium in rectangular cartesian coordinates.
Other concepts and properties of tensors will be discussed later in
the book whenever they are needed.
Laplace's Equation.
Let g^{αβ} be the contravariant tensor field of rank two defined in
Chapter 10, i.e.,

(12.7)  g^{αβ} = (cofactor of g_{αβ} in g) / g.

If ψ(x¹, x², x³) is a scalar field, then the second covariant derivative
ψ_{,α,β} is a covariant tensor field of rank two. Now we can show that
g^{αβ} ψ_{,α,β} is a scalar field. In fact,

(12.8)  ḡ^{αβ} = g^{γδ} (∂x̄^α/∂x^γ)(∂x̄^β/∂x^δ),

(12.9)  ψ̄_{,α,β} = ψ_{,σ,τ} (∂x^σ/∂x̄^α)(∂x^τ/∂x̄^β).

On multiplying corresponding sides of 12.8 and 12.9 and summing on
α and β, we obtain

    ḡ^{αβ} ψ̄_{,α,β} = g^{γδ} ψ_{,σ,τ} δ^σ_γ δ^τ_δ = g^{αβ} ψ_{,α,β},

and hence the desired result, on using the obvious relations

(12.10)  (∂x̄^α/∂x^γ)(∂x^σ/∂x̄^α) = δ^σ_γ.

Let

(12.11)  F(x¹, x², x³) = g^{αβ}(x¹, x², x³) ψ_{,α,β}(x¹, x², x³)
for the arbitrarily chosen scalar field ψ(x¹, x², x³). We have just shown
that F(x¹, x², x³) is also a scalar field. In rectangular cartesian coordi-
nates, the Euclidean metric tensor has components g_{αβ} equal to 0 for
α ≠ β and equal to 1 for α = β. Furthermore, we saw in Chapter 11
that in cartesian coordinates, and hence in rectangular cartesian
coordinates y^i,

    *ψ_{,α,β}(y¹, y², y³) = ∂²*ψ(y¹, y², y³)/∂y^α ∂y^β.

The component of the scalar field F(x¹, x², x³) in rectangular coordinates
y^i is then

(12.12)  ∂²*ψ/(∂y¹)² + ∂²*ψ/(∂y²)² + ∂²*ψ/(∂y³)²,

the Laplacean of the function *ψ(y¹, y², y³).
Hence the form of Laplace's equation in curvilinear coordinates x^i
with the scalar field ψ(x¹, x², x³) as unknown is given by

(12.13)  g^{αβ}(x¹, x², x³) ψ_{,α,β}(x¹, x², x³) = 0,
where g^{αβ}(x¹, x², x³) is the contravariant tensor field 12.7 defined in terms
of the Euclidean metric tensor and where ψ_{,α,β}(x¹, x², x³) is the second
covariant derivative of the scalar field ψ(x¹, x², x³).
If we write 12.13 explicitly in terms of the Euclidean Christoffel
symbols Γ^k_{αβ}(x¹, x², x³), we evidently have

(12.14)  g^{αβ}(x¹, x², x³) (∂²ψ(x¹, x², x³)/∂x^α ∂x^β − Γ^k_{αβ}(x¹, x², x³) ∂ψ(x¹, x², x³)/∂x^k) = 0

as the form of Laplace's equation† in curvilinear coordinates x^i.
It is worth while at this point to give an example of Laplace's
equation in curvilinear coordinates and at the same time review
several concepts and formulas that were studied in previous chapters.

† There is another form of Laplace's equation in curvilinear coordinates x^i,
which is sometimes more useful in numerical calculations than 12.14. It is given by
(1/√g) ∂(√g g^{αβ} ∂ψ/∂x^β)/∂x^α = 0. For a proof see note 3 to Chapter 13. Similar remarks
could be made for the wave equation and Poisson's equation since the Laplace
differential expression occurs in them.
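The divergence form quoted in the footnote can be exercised symbolically. The following sketch (not from the book; it uses sympy with the spherical-polar metric of formula 12.16) expands (1/√g) ∂(√g g^{αβ} ∂ψ/∂x^β)/∂x^α and checks that it agrees with the familiar Laplacean in spherical polar coordinates:

```python
# The footnote's divergence form of the Laplace expression, evaluated in
# spherical polar coordinates x1=r, x2=theta, x3=phi.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = [r, th, ph]
psi = sp.Function('psi')(r, th, ph)
g = sp.diag(1, r**2, (r*sp.sin(th))**2)           # metric 12.16
ginv = g.inv()
sqrtg = r**2*sp.sin(th)                           # sqrt(g), cf. 12.6

lap = sum(sp.diff(sqrtg*ginv[a, b]*sp.diff(psi, x[b]), x[a])
          for a in range(3) for b in range(3)) / sqrtg

# Classical spherical Laplacean for comparison.
expected = (sp.diff(psi, r, 2) + 2/r*sp.diff(psi, r)
            + sp.diff(psi, th, 2)/r**2
            + sp.cos(th)/(r**2*sp.sin(th))*sp.diff(psi, th)
            + sp.diff(psi, ph, 2)/(r**2*sp.sin(th)**2))
print(sp.simplify(lap - expected))  # 0
```

No Christoffel symbols are needed in this form, which is why it is convenient for numerical work.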
Let y¹, y², y³ be rectangular cartesian coordinates and x¹, x², x³ polar
spherical coordinates defined by the coordinate transformation

(12.15)  y¹ = x¹ sin x² cos x³,
         y² = x¹ sin x² sin x³,
         y³ = x¹ cos x².

Clearly the inverse coordinate transformation is given by

    x¹ = √((y¹)² + (y²)² + (y³)²),
    x² = cos⁻¹ (y³ / √((y¹)² + (y²)² + (y³)²)),
    x³ = tan⁻¹ (y² / y¹).

Fig. 12.1.
From the definition of the Euclidean metric tensor g_{αβ} we find

(12.16)  g₁₁ = 1, g₂₂ = (x¹)², g₃₃ = (x¹)²(sin x²)², and all other g_{ij} = 0,

so that the line element ds is given by

(12.17)  ds² = (dx¹)² + (x¹)²(dx²)² + (x¹)²(sin x²)²(dx³)².

Again, from the definition of the tensor g^{ij}, we find

(12.18)  g¹¹ = 1, g²² = 1/(x¹)², g³³ = 1/((x¹)²(sin x²)²), and all other g^{ij} = 0.

The Euclidean Christoffel symbols can now be computed in spherical
polar coordinates; use either formula 10.27 or 10.31. They are as
follows:
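The list of symbols can also be generated mechanically. The sketch below (not part of the text) uses sympy and the standard second-kind Christoffel formula, presumably the content of the formula 10.31 cited above, to produce the nonzero symbols in spherical polar coordinates:

```python
# Euclidean Christoffel symbols in spherical polar coordinates
# x1=r, x2=theta, x3=phi, from
# Γ^i_{jk} = (1/2) g^{im} (∂g_mj/∂x^k + ∂g_mk/∂x^j − ∂g_jk/∂x^m).
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = [r, th, ph]
g = sp.diag(1, r**2, (r*sp.sin(th))**2)           # metric 12.16
ginv = g.inv()

def gamma(i, j, k):
    return sp.simplify(sum(sp.Rational(1, 2)*ginv[i, m] *
                           (sp.diff(g[m, j], x[k]) + sp.diff(g[m, k], x[j])
                            - sp.diff(g[j, k], x[m])) for m in range(3)))

# The nonzero symbols (indices written 1,2,3 as in the text):
print(gamma(0, 1, 1))  # Γ^1_22 = -r
print(gamma(0, 2, 2))  # Γ^1_33 = -r sin^2(theta)
print(gamma(1, 0, 1))  # Γ^2_12 = 1/r
print(gamma(1, 2, 2))  # Γ^2_33 = -sin(theta) cos(theta)
print(gamma(2, 0, 2))  # Γ^3_13 = 1/r
print(gamma(2, 1, 2))  # Γ^3_23 = cot(theta)
```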
    ds₀² = h_{αβ}(x) dx^α dx^β,    ds₀² = a_{bc}(ā)(dā^b)(dā^c),

where

(14.9)  h_{αβ}(x) = a_{bc}(ā)(∂ā^b/∂x^α)(∂ā^c/∂x^β).

We are now in a position to write down the change produced by the
deformation in the squared arc element. In fact, in terms of the
coordinates x^i, we have

(14.10)  ds² − ds₀² = 2ε_{αβ}(x) dx^α dx^β,

where

(14.11)  ε_{αβ}(x) = ½[g_{αβ}(x) − h_{αβ}(x)].

Similarly, in terms of the coordinates ā^a, we find

(14.12)  ds² − ds₀² = 2η_{αβ}(ā)(dā^α)(dā^β),

where

(14.13)  η_{αβ}(ā) = ½[k_{αβ}(ā) − a_{αβ}(ā)],

with k_{αβ}(ā) = g_{ρσ}(x)(∂x^ρ/∂ā^α)(∂x^σ/∂ā^β) the strained squared-arc-element coefficients expressed in the unstrained coordinates.
Clearly ε_{αβ}(x) is a covariant tensor field of rank two in the "strained"
coordinates x^i while η_{αβ}(ā) is a covariant tensor field of rank two in the
"unstrained" coordinates ā^a. We shall call ε_{αβ}(x) the Eulerian strain
tensor and η_{αβ}(ā) the Lagrangean strain tensor. The Eulerian strain
tensor will often be referred to as the strain tensor. We have chosen this
terminology in analogy with the two viewpoints in hydrodynamics
represented respectively by the Eulerian and the Lagrangean differen-
tial equations of motion. As an immediate consequence of formula
14.10 we find the following fundamental result: A necessary and
sufficient condition that the elastic deformation of the medium be a rigid
motion (i.e., a degenerate deformation that merely displaces the medium
in space with a preservation of distances between particles) is that the Eu-
lerian strain tensor components be zero. Equivalently, from 14.12 we
have: a necessary and sufficient condition for a rigid motion is the van-
ishing of all the Lagrangean strain components. These results justify
the use of the word "strain" in connection with the tensor fields ε_{αβ}(x)
and η_{αβ}(ā).
Strain Tensors in Rectangular Coordinates.
If the same rectangular cartesian coordinate system is used for the
description of both the initial and final positions of the elastic body,
the Eulerian strain tensor reduces to

(14.14)  ε_{αβ}(x¹, x², x³) = ½(δ_{αβ} − Σ_{λ=1}^{3} (∂ā^λ/∂x^α)(∂ā^λ/∂x^β)).

In terms of the usual notation (a, b, c)
and (x, y, z) for the rectangular coordi-
nates in the same coordinate system of
the representative initial and final parti-
cles respectively, we have

Fig. 14.2.
(14.15)  ε_xx = ∂u/∂x − ½[(∂u/∂x)² + (∂v/∂x)² + (∂w/∂x)²],
         ε_xy = ½(∂u/∂y + ∂v/∂x) − ½[(∂u/∂x)(∂u/∂y) + (∂v/∂x)(∂v/∂y) + (∂w/∂x)(∂w/∂y)],

and similarly for the remaining components, where u = x − a, v = y − b,
and w = z − c are the displacement components. Similarly the Lagrangean
strain tensor reduces to

(14.16)  η_{αβ}(ā¹, ā², ā³) = ½(Σ_{λ=1}^{3} (∂x^λ/∂ā^α)(∂x^λ/∂ā^β) − δ_{αβ}),

so that

(14.17)  η_aa = ∂u/∂a + ½[(∂u/∂a)² + (∂v/∂a)² + (∂w/∂a)²],

and similarly for the other components. In the classical theory (the usual
approximate theory) of elastic deformations, the squares and products of the
partial derivatives of u, v, and w are considered negligible. Hence, to the
degree of approximation considered in the classical theory, 14.17 yields the
following well-known formulas for the strain tensor:

(14.18)  ε_xx = ∂u/∂x,  ε_yy = ∂v/∂y,  ε_zz = ∂w/∂z,
         ε_xy = ½(∂u/∂y + ∂v/∂x),  ε_yz = ½(∂v/∂z + ∂w/∂y),  ε_zx = ½(∂w/∂x + ∂u/∂z).
Change in Volume under Elastic Deformation.
Let us now return to our deformation theory without making the
approximations of the classical theory. One of the first fundamental
questions that arises is the manner in which volumes behave under a
deformation of the medium.
The element of volume in the unstrained medium is, by 12.4,

(14.19)  dV₀ = √c dā¹ dā² dā³,

where c = |a_{αβ}|, the determinant of the a_{αβ}. Similarly,

(14.20)  dV = √g dx¹ dx² dx³,

where g = |g_{αβ}|, the determinant of the g_{αβ}.

Fig. 14.3.

In terms of the "strained variables" x^i, we have

(14.21)  dV₀ = √c |∂ā^a/∂x^i| dx¹ dx² dx³,

where |∂ā^a/∂x^i| is the determinant of the ∂ā^a/∂x^i.
Now

    ds₀² = a_{bc}(ā) dā^b dā^c = h_{αβ}(x) dx^α dx^β,

where h_{αβ} is given by formula 14.9.
Evidently the determinant h of the h_{αβ} is given by

(14.22)  h = c |∂ā^a/∂x^i|².

From this

(14.23)  dV₀ = √h dx¹ dx² dx³.

Formulas 14.20 and 14.23 imply that

(14.24)  dV₀/dV = √(h/g).
Define

(14.25)  h^α_β = g^{αλ} h_{λβ},

and thus obtain

(14.26)  h_{αβ} = g_{αλ} h^λ_β

by an application of the properties of g_{αβ} and g^{αβ}; see formulas 10.26.
From 14.26 we compute

(14.27)  h = g |h^α_β|,

where |h^α_β| is the determinant of the h^α_β. On using 14.24, we have
immediately

(14.28)  dV₀/dV = √|h^α_β|.

To express this ratio in terms of the strain tensor ε_{αβ}, we first recall the
definition 14.11 and obtain

    h_{αβ}(x) = g_{αβ}(x) − 2ε_{αβ}(x).

On raising the indices with the aid of the g^{αβ}, we evidently have

(14.29)  h^α_β = δ^α_β − 2ε^α_β,

where the mixed tensor field ε^α_β of rank two is defined by

(14.30)  ε^α_β(x) = g^{αλ}(x) ε_{λβ}(x).

On using 14.29 in 14.28, we arrive at the important result that the
ratio of the element of volume of a set of particles in the unstrained medium
to the element of volume of the corresponding particles in the strained
position is given in terms of the strain tensor ε_{αβ}(x) by means of the follow-
ing formula:

(14.31)  dV₀/dV = √|δ^α_β − 2ε^α_β|.

Notice that, with rigid motion of the medium, the strain tensor
ε_{αβ} = 0 and hence ε^α_β = 0. Hence, from 14.31 and ε^α_β = 0, we see that
a rigid motion preserves volumes.
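Formula 14.31 can be illustrated numerically for the simplest possible deformation. The sketch below (not from the book; the stretch factor A = 1.3 is an arbitrary choice) takes a uniform stretch x^i = A ā^i in rectangular coordinates, where 14.14 gives ε_{αβ} = ½(1 − 1/A²)δ_{αβ}, and checks that the volume ratio comes out as 1/A³:

```python
# Uniform stretch x^i = A a^i in rectangular coordinates: dV0/dV = 1/A^3.
import numpy as np

A = 1.3                                   # illustrative stretch factor
# a^i = x^i / A, so ∂a^λ/∂x^α = δ_λα / A, and 14.14 gives the strain below.
# In rectangular coordinates g_{αβ} = δ_{αβ}, so mixed and covariant
# components coincide.
eps = 0.5*(np.eye(3) - np.eye(3)/A**2)    # Eulerian strain tensor
ratio = np.sqrt(np.linalg.det(np.eye(3) - 2*eps))   # formula 14.31
print(abs(ratio - 1/A**3) < 1e-12)        # True: volumes scale by A^3
```

This also solves the volume-element part of exercise 1 below in a special case.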
We have developed the fundamentals of elastic deformation and
strain tensor in three-dimensional space. It is clear, however, that
everything we have said can be taken over for the corresponding
elastic deformations in the plane. In all the formulas, there will be
two variables x¹, x², etc., the indices will have the range 1 to 2, and the
consequent summations will go from 1 to 2. For example, formulas
14.14, 14.15, 14.17, and 14.18 for the Eulerian strain tensor will be
respectively as follows in elasticity theory in the plane.
(14.32)  ε_{αβ}(x¹, x²) = ½(δ_{αβ} − Σ_{λ=1}^{2} (∂ā^λ/∂x^α)(∂ā^λ/∂x^β)).

(14.33)  ε_xx = ∂u/∂x − ½[(∂u/∂x)² + (∂v/∂x)²],
         ε_xy = ½(∂u/∂y + ∂v/∂x) − ½[(∂u/∂x)(∂u/∂y) + (∂v/∂x)(∂v/∂y)],
         ε_yy = ∂v/∂y − ½[(∂u/∂y)² + (∂v/∂y)²].

(14.34)  η_aa = ∂u/∂a + ½[(∂u/∂a)² + (∂v/∂a)²], and similarly for the other components,

where u = x − a and v = y − b.

(14.35)  ε_xx = ∂u/∂x,  ε_xy = ½(∂u/∂y + ∂v/∂x) = ε_yx,  ε_yy = ∂v/∂y.

Fig. 14.4.
Exercises
1. Find the components of the Eulerian strain tensor ε_{αβ}(x¹, x², x³) when the
deformation of the elastic body is a stretching whose equations are

    x^i = A ā^i  (A is a constant greater than unity),

where both x^i and ā^a are rectangular coordinates referred to the same rectangular
coordinate system. Discuss the change in volume elements. Work out the cor-
responding problem in plane elasticity.
2. Work out exercise 1 for a contraction so that the constant A is less than unity.
CHAPTER 15
HOMOGENEOUS AND ISOTROPIC STRAINS, STRAIN IN-
VARIANTS, AND VARIATION OF STRAIN TENSOR
Strain Invariants.
If we expand the three-rowed determinant |h^α_β|, we find

(15.1)  |δ^α_β − 2ε^α_β| = 1 − 2I₁ + 4I₂ − 8I₃,

where

(15.2)  I₁ = ε^α_α,  I₂ = sum of the principal two-rowed minors in the
        determinant Δ = |ε^α_β|,  and  I₃ = Δ.

A function f(g₁₁, g₁₂, ···, g₃₃, ε₁₁, ε₁₂, ···, ε₃₃) of the Euclidean metric
tensor g_{αβ} and the strain tensor ε_{αβ} will be called a strain invariant¹
if (a) it is a scalar field; (b) under all transformations of coordinates x^i to x̄^i,

(15.3)  f(ḡ₁₁, ḡ₁₂, ···, ḡ₃₃, ε̄₁₁, ε̄₁₂, ···, ε̄₃₃) = f(g₁₁, g₁₂, ···, g₃₃, ε₁₁, ε₁₂, ···, ε₃₃),

where the function f, on the left, is the same function of the ḡ_{αβ} and ε̄_{αβ}
as it is, on the right, of the g_{αβ} and ε_{αβ}.
We shall now prove that the three functions I₁, I₂, and I₃ occurring in
the expansion of the determinant 15.1 are strain invariants. From the
law of transformation of the mixed tensor field ε^α_β we readily get

    ε̄^α_α(x̄¹, x̄², x̄³) = ε^α_α(x¹, x², x³),

from which it follows that I₁ is a strain invariant on recalling that
ε^α_β = g^{αλ} ε_{λβ}. To prove that I₃ is a strain invariant, we have by hypothesis

    ε̄^α_β(x̄) = ε^σ_τ(x) (∂x̄^α/∂x^σ)(∂x^τ/∂x̄^β).

On taking the determinant of corresponding sides, we obtain

    |ε̄^α_β| = |ε^σ_τ| |∂x̄^α/∂x^σ| |∂x^τ/∂x̄^β|.

But the product of the functional determinants is equal to unity.
Hence |ε̄^α_β| = |ε^α_β|, and I₃ is a strain invariant. To prove that I₂
is a strain invariant, we first observe that

    δ^α_β − 2ε^α_β

is a mixed tensor field of rank two. Hence, by the argument just com-
pleted for |ε^α_β|, we see that the determinant |δ^α_β − 2ε^α_β| is itself a
strain invariant. But, from the expansion 15.1, we see that

(15.4)  I₂ = ¼[|δ^α_β − 2ε^α_β| − 1 + 2I₁ + 8I₃].

Formula 15.4 expresses I₂ as a linear combination of strain invariants
with numerical multipliers. Hence obviously I₂ is itself a strain in-
variant.
On using the results 14.31 together with what we have just proved,
we obtain the result

(15.5)  dV₀/dV = √(1 − 2I₁ + 4I₂ − 8I₃),

which gives the ratio of the element of volume of a set of particles in the
unstrained medium to the element of volume of the corresponding particles
in the strained medium in terms of the three strain invariants I₁, I₂, and I₃.
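The expansion 15.1 behind this result is a determinant identity that holds for any mixed tensor, and it can be checked symbolically. The sketch below (not from the book; it uses sympy with a generic 3×3 matrix standing for ε^α_β, and computes I₂ from the equivalent trace formula ½[(tr ε)² − tr ε²], which equals the sum of the principal two-rowed minors):

```python
# Check of the expansion 15.1: |δ − 2ε| = 1 − 2 I1 + 4 I2 − 8 I3.
import sympy as sp

e = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'e{i}{j}'))   # generic ε^α_β
I1 = e.trace()                                            # ε^α_α
I2 = sp.Rational(1, 2)*(e.trace()**2 - (e*e).trace())     # sum of 2x2 principal minors
I3 = e.det()                                              # Δ
lhs = (sp.eye(3) - 2*e).det()
print(sp.expand(lhs - (1 - 2*I1 + 4*I2 - 8*I3)))  # 0
```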
Homogeneous and Isotropic Strains.
Let us now discuss the mathematical description of a homogeneous
strain.
DEFINITION OF HOMOGENEOUS STRAIN. A strain is homogeneous if
the corresponding strain tensor ε_{αβ} has a zero covariant derivative,
i.e., ε_{αβ,γ} = 0.
In rectangular coordinates, the condition reduces to ∂ε_{αβ}/∂x^γ = 0 since all
the Euclidean Christoffel symbols are identically zero in rectangular
coordinates. In other words, the strain tensor components ε_{αβ} in rec-
tangular coordinates are constants for a homogeneous strain.
It readily follows from the definition of the strain invariants I₁, I₂,
and I₃ that for a homogeneous strain

    ∂I_j/∂x^i = 0

in rectangular coordinates and hence in all coordinates. (Keep in mind
that I₁, I₂, and I₃ are three scalars and not the three components of a
covariant vector.) Hence, for a homogeneous strain, dV₀/dV is a numerical
constant for all coordinates. We thus arrive at the important result
that for a homogeneous strain

(15.6)  V₀/V = √(1 − 2I₁ + 4I₂ − 8I₃) = a constant.

This constant is the same for all "unstrained" volumes V₀ and their cor-
responding "strained" volumes V, and is independent of the coordinate
system.
For the special case of a homogeneous strain which is also isotropic at
each point, we shall have

(15.7)  ε^α_β = ε δ^α_β,

or, what amounts to the same thing,

(15.8)  ε_{αβ} = ε g_{αβ}  (g_{αβ} = the Euclidean metric tensor).

Since the strain is homogeneous, we have ε_{αβ,γ} = 0. We also have (see
the end of Chapter 11)

    g_{αβ,γ} = 0.

Hence ε in 15.7 and 15.8 is a constant scalar field, a numerical constant
that is independent of position and the coordinate system.
An example of an isotropic homogeneous strain is found in an iso-
tropic medium subjected to uniform hydrostatic pressure, i.e., an
isotropic medium subjected to the same pressure in all directions.
Since 15.6 holds and

    1 − 2I₁ + 4I₂ − 8I₃ = |δ^α_β − 2ε^α_β|,

we see by an evident calculation that for an isotropic homogeneous strain

(15.9)  V₀/V = (1 − 2ε)^{3/2}.

The constant scalar ε for an isotropic homogeneous strain is given
by the formula

(15.10)  ε = ½[1 − (V₀/V)^{2/3}]

in terms of any one volume before and after deformation.
In the usual approximate theory (usual theory of elasticity) higher
powers of ε than the first are neglected; see Chapter 14. Since

    (1 − 2ε)^{3/2} = 1 − 3ε, approximately,

we have

(15.11)  V₀/V = 1 − 3ε, approximately,

and

(15.12)  ε = ⅓ (V − V₀)/V = ⅓ ΔV/V = ⅓ ΔV/V₀, approximately.
A Fundamental Theorem on Homogeneous Strains.
We shall now outline the proof of the following theorem. A necessary
and sufficient condition that a strain be homogeneous is that, in terms of
unstrained cartesian coordinates ā^a and strained cartesian coordinates y^i,
the deformation is linear, i.e.,

(15.13)  ā^a = c^a_i y^i + c^a.
Outline of Proof.
From the definition of the strain tensor we have

    h_{pq} = g_{pq} − 2ε_{pq}.

Since g_{pq,r} = 0, and since under our necessity hypothesis ε_{pq,r} = 0,
we obtain the vanishing of the covariant derivative of h_{pq}(x). But
by definition

    h_{pq}(x) = a_{bc}(ā) ā^b_{,p} ā^c_{,q},

so that the x^i are the independent variables and the ā^a are the de-
pendent variables. Expanding the covariant derivative in h_{pq,r}(x) = 0
and rearranging, we find

(15.14)  a_{bc} ā^b_{,pr} ā^c_{,q} + a_{bc} ā^b_{,p} ā^c_{,qr} = −(∂a_{bc}/∂x^r) ā^b_{,p} ā^c_{,q},

where

(15.15)  ā^b_{,pr} = ∂²ā^b/∂x^p ∂x^r − Γ^σ_{pr}(x) ā^b_{,σ}.

Since

(15.16)  ā^b_{,pr} = ā^b_{,rp},

we may write down the two further relations obtained from 15.14 by
permuting p, q, and r cyclically (equations 15.17 and 15.18). Adding
corresponding sides of two of these relations, subtracting the third
(equation 15.19), and recalling that the a_{bc} are functions of the
unstrained coordinates ā^a, so that ∂a_{bc}/∂x^r = (∂a_{bc}/∂ā^d) ā^d_{,r}, we find

(15.20)  a_{eb} ā^e_{,pr} = −½[∂a_{bc}/∂ā^d + ∂a_{bd}/∂ā^c − ∂a_{cd}/∂ā^b] ā^c_{,p} ā^d_{,r}.

Finally, on multiplying corresponding sides of 15.20 by a^{ab}(ā) and
summing on b (equation 15.21), we arrive readily at the interesting result

(15.22)  ā^a_{,pr} = −γ^a_{cd}(ā) ā^c_{,p} ā^d_{,r},

where γ^a_{cd}(ā) are the Euclidean Christoffel symbols based on the Euclidean
metric tensor in the unstrained coordinates ā^a.
Now from the definition 15.15 of ā^b_{,pr} and from the vanishing of the
Euclidean Christoffel symbols Γ^j_{kl}(x) and γ^a_{bc}(ā) when evaluated in
"strained" cartesian coordinates y^i and "unstrained" cartesian co-
ordinates ā^a respectively, we see that 15.22 reduces to

(15.23)  ∂²ā^a/∂y^p ∂y^r = 0.

This implies that the deformation is linear, i.e., of type 15.13.
To prove the converse part of the fundamental theorem on homo-
geneous strains, we have by hypothesis that the deformation, or strain,
is given by a linear transformation 15.13 in cartesian coordinates ā^a
and y^i. Now

    h_{pq}(x) = a_{bc}(ā)(ā^b_{,p})(ā^c_{,q}),

and the a_{bc} are constants in cartesian coordinates. From 15.13 we
have

    ∂ā^b/∂y^p = c^b_p,

and hence the components *h_{pq}(y) in cartesian coordinates y^i are
given by

    *h_{pq}(y) = a_{bc} c^b_p c^c_q,

a set of constants. Hence

    ∂*h_{pq}(y)/∂y^r = 0.

But this condition implies that the covariant derivative

    h_{pq,r}(x) = 0,

and hence the covariant derivative ε_{pq,r}(x) = 0. In other words, the
strain is homogeneous, and the proof of the theorem is complete.
Variation of the Strain Tensor.
In preparation for the subject matter of the next chapter, we need
to consider deformations that depend on an accessory parameter t,
which in dynamical problems can be taken as the time t. So let the
coordinates x^i of a representative particle in the strained medium
depend on the coordinates ā^a of the corresponding particle in the
unstrained medium and on the accessory parameter t. Let

(15.24)  Dx^i = (∂x^i/∂t) dt

be the partial differential of x^i in t. If J^{···}_{···}(x) is any tensor field in the
strained medium, define δJ^{···}_{···}(x) by

(15.25)  δJ^{···}_{···}(x) = J^{···}_{···,σ} Dx^σ,

where J^{···}_{···,σ} is the covariant derivative of J^{···}_{···}. Clearly, if the x^i are
cartesian, δJ^{···}_{···} = DJ^{···}_{···}. Moreover, δJ(x) = DJ(x) for a scalar J(x) in
general coordinates x^i. To have a well-rounded notation, define
δx^σ = Dx^σ and refer to δx^σ as the virtual displacement vector. If the
x^i(ā¹, ā², ā³, t) have continuous second derivatives, then from the
commutativity of second derivatives

    D(∂x^μ/∂ā^a) = ∂(Dx^μ)/∂ā^a.

Hence

    D(dx^μ) = d(Dx^μ),

which implies the tensor equation

(15.26)  δ(dx^μ) = d(δx^μ).
Obviously δg_{μλ} = 0, since g_{μλ,σ} = 0. Hence the above tensor equation
may be written

(15.27)  δ(dx^λ) = (δx^λ)_{,α} dx^α.

Since δ(ā^a) = 0, an evident calculation using 15.26 shows that

(15.28)  δ(ā^a_{,p}) = −ā^a_{,α}(δx^α)_{,p}.

Recalling that

    h_{pq}(x) = a_{bc}(ā)(ā^b_{,p})(ā^c_{,q})

and applying formula 15.28, we find

    δh_{pq} = −a_{bc}[ā^b_{,α}(δx^α)_{,p} ā^c_{,q} + ā^b_{,p} ā^c_{,α}(δx^α)_{,q}],

which can be put in the convenient form

(15.29)  δh_{pq} = −h^α_q(δx_α)_{,p} − h^α_p(δx_α)_{,q}.

From this, and from the definition of the strain tensor ε_{αβ} and the re-
lated formula h_{pq} = g_{pq} − 2ε_{pq},
we arrive at the fundamental formula for the variation of the strain tensor:

(15.30)  δε_{pq}(x) = ½[(δx_q)_{,p} + (δx_p)_{,q}] − [ε^α_q(δx_α)_{,p} + ε^α_p(δx_α)_{,q}].

This formula becomes

(15.31)  δε_{pq}(x) = ½[(δx_q)_{,p} + (δx_p)_{,q}]

within the approximations of the usual approximate theory of elasticity.
Returning to our finite deformation theory, we define a rigid virtual
displacement by the condition δ(ds²) = 0. On using formulas 15.26
and 15.27 in an evident calculation, we find

(15.32)  δ(ds²) = [(δx_α)_{,β} + (δx_β)_{,α}] dx^α dx^β

for any virtual displacement, rigid or not. Hence the virtual displace-
ment vector must satisfy Killing's differential equations for a rigid
virtual displacement:

(15.33)  (δx_α)_{,β} + (δx_β)_{,α} = 0.
For the sake of completeness, we shall write down the formula
(without giving the derivation) for the variation of the Lagrangean
strain tensor η_{pq} under an arbitrary virtual displacement:

    δη_{pq} = ½[(δx_α)_{,β} + (δx_β)_{,α}] x^α_{,p} x^β_{,q}.

Exercise
Calculate the three fundamental strain invariants for a homogeneous isotropic
strain. Hint: since they are constants, calculate them in rectangular cartesian co-
ordinates.
CHAPTER 16
STRESS TENSOR, ELASTIC POTENTIAL, AND
STRESS-STRAIN RELATIONS
Stress Tensor.
Let S be the bounding surface of a portion of the elastic medium
in its strained position. The surface element of S may be described by
means of the covariant vector dS_r,

(16.1)  dS₁ = √g d(x², x³),  dS₂ = √g d(x³, x¹),  dS₃ = √g d(x¹, x²),

where

(16.2)  d(x^α, x^β) = | ∂x^α/∂u  ∂x^α/∂v |
                      | ∂x^β/∂u  ∂x^β/∂v | du dv,

and u, v are the surface parameters, so that the parametric equations
of the surface S are given by x^i = f^i(u, v). In rectangular coordinates
and in the usual notations x, y, z, the components of the covariant
vector dS_r are given by

    dS_x = d(y, z),  dS_y = d(z, x),  dS_z = d(x, y).
Exercise
Prove that dS_r is a covariant vector under transformations of the
coordinates x^i. Hint: use the fact that √g is a scalar density.

Let dS be the magnitude of the surface element, i.e.,

    (dS)² = g^{αβ} dS_α dS_β.
Before we introduce the notion of a stress tensor we must define a
stress vector. A stress vector is a surface force that acts on the surface of
a volume. An example of a surface force is the tension acting on any
horizontal section of a steel rod suspended vertically. If one thinks of
the rod as cut by a horizontal plane into two parts, then the action of
the weight of the lower part of the rod is transmitted to the upper part
across the surface of the cut. A hydrostatic pressure on the surface of
a submerged solid body provides another example of a surface force.
There are other kinds of forces called body, volume, or mass forces,
i.e., forces that act throughout the volume. As a typical example of a
mass force one can take the force of gravity, ρg ΔV, acting on the mass
contained in the volume ΔV of the medium whose density is ρ, and
where g is the gravitational acceleration.
A stress tensor T^{αβ} is defined implicitly by the relation

(16.3)  F^β dS = T^{αβ} dS_α,

where F^β is the stress vector acting on the surface element dS_r.
Let us now consider a virtual displacement of the strained medium
corresponding to the accessory parameter t. The virtual work of the
stresses across the boundary S is

(16.4)  ∬_S F^β δx_β dS = ∬_S T^{αβ} δx_β dS_α = ∭_V (T^{αβ} δx_β)_{,α} dV,

a volume integral extended over the volume V bounded by S and ob-
tained by Green's theorem or generalized Stokes' theorem in curvi-
linear coordinates.
If there are mass forces (M^β per unit mass) acting on the medium,
the virtual work of these mass forces is

    ∭_V ρ M^β δx_β dV,

where ρ is the mass density. Hence, the virtual work of all the forces
acting on any portion of the medium is

(16.5)  ∭_V [(T^{αβ}_{,α} + ρM^β) δx_β + T^{αβ}(δx_β)_{,α}] dV.
We shall now adopt the
PHYSICAL ASSUMPTION OF EQUILIBRIUM: The virtual work of all the
forces acting on any portion of the medium is zero for any rigid virtual
displacement.
In particular, the translations, characterized by (δx_β)_{,α} = 0, are
rigid virtual displacements, and so we must have the condition

(16.6)  ∭_V (T^{αβ}_{,α} + ρM^β) δx_β dV = 0.

Since δx_β is arbitrary at any chosen point and V is arbitrary, we have
the differential equations for equilibrium:

(16.7)  T^{αβ}_{,α} + ρM^β = 0.

Consequently the virtual work of all the forces (mass as well as surface)
acting upon any portion of the medium in any virtual displacement is
given (on using 16.5 and 16.7) by

(16.8)  Total virtual work = ∭_V T^{αβ}(δx_β)_{,α} dV.

Making this vanish for any rigid virtual displacement, i.e., for

    (δx_β)_{,α} + (δx_α)_{,β} = 0,

the stress tensor must be symmetric:

(16.9)  T^{αβ} = T^{βα}.

Hence 16.8 can be written

(16.10)  Total virtual work = ½ ∭_V T^{αβ}[(δx_α)_{,β} + (δx_β)_{,α}] dV.

Within the approximations of the usual approximate elasticity theory,
the total virtual work may be written

(16.11)  ∭_V T^{αβ} δε_{αβ} dV,

since formula 15.31 holds for the approximate theory. But note that
this is not a legitimate result for the finite deformation theory; formula
16.10 is the legitimate result for that theory.
Elastic Potential.
We shall now turn our attention to the elastic potential and its
relation to the stress tensor. Let ρ be the density of the volume element
dV in the strained medium. The element of mass dm is given then by
dm = ρ dV. The principle of conservation of mass in a virtual dis-
placement is expressed by

    δ(dm) = δ(ρ dV) = 0.

Let T be the temperature of the element of mass dm, η the entropy
density (per unit mass) so that the entropy of the mass dm is η dm
= ρη dV, and u dm the internal energy of the mass dm. Then the
fundamental energy-conservation law of thermodynamics says that

(16.12)  T δ(η dm) = δ(u dm) − (virtual work of all forces acting on dm).

Let

(16.13)  φ = u − Tη,

the free energy density or elastic potential.
From the principle of conservation of mass, we have, on integrating
over any portion of the strained medium and making use of equations
16.8, 16.12, and 16.13,

(16.14)  ∭_V (δφ)ρ dV = ∭_V T^{αβ}(δx_α)_{,β} dV − ∭_V (δT)ρη dV.

Since V is arbitrary, this yields

(16.15)  ρ δφ = T^{αβ}(δx_α)_{,β} − ρη δT.
We shall now work under the following
HYPOTHESIS ON ELASTIC POTENTIAL φ: φ is a function of the ā^a_{,β}, the
Euclidean metric tensor g_{ij}(x) in the strained medium, the Euclidean metric
tensor a_{bc}(ā) in the unstrained medium, and the temperature T.
We shall restrict ourselves to isothermal variations, so that T is a
constant parameter in φ. From 16.15 and the symmetry of the stress
tensor T^{αβ}, we see that δφ = 0 for any isothermal rigid virtual displace-
ment. Now, for any virtual displacement, δa_{bc} = 0 and δg_{ij} = 0. Hence
from Killing's differential equation 15.33 we have

(16.16)  δφ = 0

whenever

(16.17)  (δx_α)_{,β} + (δx_β)_{,α} = 0.

Let

    c^{aβ}(x) = g^{βλ}(x) ā^a_{,λ}

and use the formula

    δ(ā^a_{,β}) = −ā^a_{,α}(δx^α)_{,β}  (see formula 15.28)

and 16.17 in 16.16 to obtain

    [(∂φ/∂ā^a_{,β}) c^{aγ} − (∂φ/∂ā^a_{,γ}) c^{aβ}] (δx_γ)_{,β} = 0.

Hence φ must satisfy the following system of partial differential
equations:

(16.18)  (∂φ/∂ā^a_{,β}) c^{aγ} − (∂φ/∂ā^a_{,γ}) c^{aβ} = 0.

This is a complete system of three linear first-order partial differential
equations in the nine variables ā^a_{,β}. There are nine conditions in 16.18,
but three are identities and only three of the remaining six are inde-
pendent. From the theory of such systems of differential equations
we know that the general solution of 16.18 is a function of six functionally
independent solutions. It will take us too far afield to give the theory
of such differential equations; we are content here with this mere state-
ment of the result concerning the most general solution φ.
There are some particularly interesting solutions of equations 16·18. To consider them it is convenient to define an isotropic medium.

DEFINITION OF ISOTROPIC MEDIUM. A medium whose elastic potential is a strain invariant that may depend parametrically on the temperature T will be called an isotropic medium.
Now it can be shown, but in this brief volume we have not the time to give the details of proof, that the elastic potential for an isotropic medium satisfies the differential equations 16·18. It can also be shown by a long mathematical argument that any strain invariant is a function of the three fundamental strain invariants I₁, I₂, and I₃ of Chapter 15. The following important result is immediate. A necessary and sufficient condition that a medium be isotropic is that its elastic potential φ = φ(I₁, I₂, I₃, T), where I₁, I₂, and I₃ are the fundamental strain invariants.
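Since φ for an isotropic medium depends only on I₁, I₂, I₃, it is worth checking numerically that these three quantities really are invariant under a rotation of the rectangular frame. The following sketch is a modern illustration, not part of the original text; it assumes Python with numpy.

```python
import numpy as np

def invariants(eps):
    """Fundamental invariants of a symmetric 3x3 strain tensor:
    I1 = trace, I2 = sum of principal 2x2 minors, I3 = determinant."""
    I1 = np.trace(eps)
    I2 = 0.5 * (np.trace(eps) ** 2 - np.trace(eps @ eps))
    I3 = np.linalg.det(eps)
    return I1, I2, I3

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
eps = 0.5 * (A + A.T)            # a symmetric "strain" tensor

# an orthogonal matrix from a QR factorization
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
eps_rot = Q @ eps @ Q.T          # the same strain in a rotated rectangular frame

for a, b in zip(invariants(eps), invariants(eps_rot)):
    assert abs(a - b) < 1e-10    # each invariant is unchanged by the rotation
```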
In the usual approximate theory of elasticity, the elastic potential φ for crystalline media (another name for nonisotropic media) is taken as a quadratic function of the strain tensor components. It is tacitly assumed in the usual approximate theory that a special privileged reference frame, determined by the axes of the crystal, has been chosen. The coefficients of the quadratic form are accordingly not scalars but components of a tensor of rank four which depends on the orientation of the crystalline axes.
Stress-Strain Relations for an Isotropic Medium.
Consider the elastic potential φ for an isotropic medium as a function of the strain tensor ε_{rs}. Since ε_{rs} is symmetric, we have ε_{rs} = ½(ε_{rs} + ε_{sr}). In φ, we shall write ½(ε_{rs} + ε_{sr}) wherever ε_{rs} occurs, and thus we see that

(16·19) ∂φ/∂ε_{rs} = ∂φ/∂ε_{sr},

with the understanding that in ∂φ/∂ε_{rs}, say, all the other ε's (including ε_{sr} for that s, r) are held constant, so that in this differentiation no attention is paid to the symmetry relations ε_{sr} = ε_{rs}.

We saw in Chapter 15 (see 15·29) that under a virtual displacement the variation of h_{pq} and hence of the strain tensor ε_{pq} was given by

(16·20) δε_{pq} = −½ δh_{pq} = ½[h^τ_q (δx_τ)_{,p} + h^τ_p (δx_τ)_{,q}],

since h_{pq} = g_{pq} − 2ε_{pq}. But δg_{pq} = 0; hence

δφ = (∂φ/∂ε_{αβ}) δε_{αβ} = ½ (∂φ/∂ε_{αβ}) [h^τ_β (δx_τ)_{,α} + h^τ_α (δx_τ)_{,β}],

or

(16·21) δφ = (∂φ/∂ε_{αβ}) h^τ_β (δx_τ)_{,α}   (α and β are summation indices†)

on using conditions 16·19. Now, for an isothermal virtual displacement, formula 16·15 reduces to ρ δφ = τ^{αβ}(δx_α)_{,β}, and so for an isotropic medium

(16·22) ρ (∂φ/∂ε_{αγ}) h^β_γ (δx_β)_{,α} = τ^{αβ} (δx_β)_{,α}.

From the arbitrariness of the virtual displacement and the fact that h^β_γ = δ^β_γ − 2ε^β_γ, we obtain the stress-strain relations for an isotropic medium

(16·23) τ^{αβ} = ρ (∂φ/∂ε_{αβ} − 2ε^β_γ ∂φ/∂ε_{αγ}).

† Throughout the remaining part of this chapter, a mere repetition of an index in a term will denote summation over that index.
We shall put this stress-strain relation in another form. Now by definition ε^i_j = g^{il} ε_{lj}, and hence

∂ε^i_j/∂ε_{lj} = g^{il}.

Obviously

∂φ/∂ε_{lj} = (∂φ/∂ε^i_j)(∂ε^i_j/∂ε_{lj}),

and hence

(16·24) ∂φ/∂ε_{lj} = g^{il} ∂φ/∂ε^i_j.

Define the mixed stress tensor by

(16·25) τ^α_β = g_{βγ} τ^{αγ}

(= g_{βγ} τ^{γα} from the symmetry of the stress tensor). Then with the aid of 16·24 in 16·23 one can show readily that the following stress-strain relations hold for an isotropic medium:

(16·26) τ^α_β = ρ (∂φ/∂ε^β_α − 2ε^γ_β ∂φ/∂ε^γ_α).

From the principle of conservation of mass ρ dV = ρ₀ dV₀, and from the fundamental result 15·5, we see that

(16·27) ρ = ρ₀ √(1 − 2I₁ + 4I₂ − 8I₃)

in terms of the strain invariants I₁, I₂, and I₃ for media, whether isotropic or not. Since I₁, I₂, and I₃ are respectively first degree, second degree, and third degree in the strain tensor components ε_{αβ}, we see that to a first approximation ρ = ρ₀; i.e., volumes are also preserved to a first approximation. Hence to the same degree of approximation, the stress-strain relations 16·26 for an isotropic medium reduce to Hooke's law of the usual approximate theory

(16·28) τ^α_β = ∂φ₁/∂ε^β_α,

where φ₁ = ρ₀ φ.
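The determinant identity behind 16·27, namely det(δ^i_j − 2ε^i_j) = 1 − 2I₁ + 4I₂ − 8I₃, can be verified symbolically. The sketch below is a modern illustration, not part of the original text; it assumes Python with sympy.

```python
import sympy as sp

# a generic symmetric strain tensor with entries e00, e01, ..., e22
e = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'e{min(i, j)}{max(i, j)}'))

I1 = e.trace()
I2 = sp.Rational(1, 2) * (e.trace()**2 - (e * e).trace())   # sum of 2x2 principal minors
I3 = e.det()

lhs = (sp.eye(3) - 2 * e).det()
rhs = 1 - 2 * I1 + 4 * I2 - 8 * I3

# the identity holds for every symmetric tensor, hence rho = rho0 * sqrt(lhs)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```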
CHAPTER 17
TENSOR CALCULUS IN RIEMANNIAN SPACES AND THE
FUNDAMENTALS OF CLASSICAL MECHANICS
Multidimensional Euclidean Spaces.
In the last two chapters of this book we shall attempt to give some indications of a more general tensor calculus and some of its applications. Although our discussion will of necessity be brief, this fact will not keep us from going to the heart of our subject. Our study of Euclidean tensor analysis can be used advantageously to accomplish this.
First of all, the subject matter of Chapters 9, 10, and 11 can obviously be extended to n-dimensional Euclidean spaces, where n is any positive integer. There will be n variables wherever there were three before, and indices will have the range 1, 2, ⋯, n with the consequent summations going from 1 to n. For example, the squared element of arc in rectangular coordinates y¹, y², ⋯, yⁿ is

(17·1) ds² = (dy¹)² + (dy²)² + ⋯ + (dyⁿ)²,

while in general coordinates x¹, x², ⋯, xⁿ

(17·2) ds² = g_{αβ}(x¹, x², ⋯, xⁿ) dx^α dx^β,

where

(17·3) g_{αβ}(x¹, x², ⋯, xⁿ) = Σᵢ (∂y^i/∂x^α)(∂y^i/∂x^β),

the n-dimensional Euclidean metric tensor (see 9·15). The n-dimensional Euclidean Christoffel symbols are

(17·4) Γ^i_{αβ}(x¹, x², ⋯, xⁿ) = ½ g^{iγ} (∂g_{γα}/∂x^β + ∂g_{γβ}/∂x^α − ∂g_{αβ}/∂x^γ),

where the g_{αβ} are defined in 17·3 while

(17·5) g^{ij}(x¹, x², ⋯, xⁿ) = (cofactor of g_{ji} in g) / g
in terms of the n-rowed determinant

(17·6) g = | g₁₁ g₁₂ ⋯ g₁ₙ |
           | g₂₁ g₂₂ ⋯ g₂ₙ |
           |  ⋯   ⋯  ⋯  ⋯ |
           | gₙ₁ gₙ₂ ⋯ gₙₙ |.
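Formulas 17·4 and 17·5 can be exercised on a concrete metric. The sketch below is a modern illustration, not part of the original text; it assumes Python with sympy, and uses the Euclidean plane in polar coordinates, for which Γ^r_{θθ} = −r and Γ^θ_{rθ} = 1/r are classical.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = (r, th)
# Euclidean metric of the plane in polar coordinates: ds^2 = dr^2 + r^2 dtheta^2
g = sp.Matrix([[1, 0], [0, r**2]])
ginv = g.inv()          # plays the role of the cofactor/g formula 17.5

def christoffel(g, ginv, x):
    """Christoffel symbols by formula 17.4: G[i][a][b] = Gamma^i_{ab}."""
    n = len(x)
    return [[[sp.simplify(sp.Rational(1, 2) * sum(
        ginv[i, l] * (sp.diff(g[l, a], x[b]) + sp.diff(g[l, b], x[a])
                      - sp.diff(g[a, b], x[l]))
        for l in range(n)))
        for b in range(n)] for a in range(n)] for i in range(n)]

G = christoffel(g, ginv, x)
assert G[0][1][1] == -r       # Gamma^r_{theta theta} = -r
assert G[1][0][1] == 1 / r    # Gamma^theta_{r theta} = 1/r
```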
Riemannian Geometry.
An n-dimensional Riemannian space is an n-dimensional manifold with coordinates such that the length of curves is determined by means of a symmetric covariant tensor field of rank two g_{αβ}(x¹, x², ⋯, xⁿ) in such a fashion that the squared element of arc

(17·7) ds² = g_{αβ}(x) dx^α dx^β

is positive definite, i.e., g_{αβ} dx^α dx^β ≥ 0 and is equal to zero if and only if all the dx^α are zero. The length of a curve x^i = f^i(t) given in terms of a parameter t is then by definition

(17·8) s = ∫_{t₀}^{t₁} √( g_{αβ} (dx^α/dt)(dx^β/dt) ) dt.

FIG. 17·1.

It can be proved by rather long algebraic manipulations that from the positive definiteness of g_{αβ} dx^α dx^β follows the positive value of the determinant of the g_{αβ}, i.e.,

(17·9) g = | g₁₁ ⋯ g₁ₙ |
           |  ⋯  ⋯  ⋯ |
           | gₙ₁ ⋯ gₙₙ | > 0.
The theory of a Riemannian space is a Riemannian geometry. Exactly as in an n-dimensional Euclidean space, we can derive the contravariant tensor field of rank two g^{αβ}(x¹, x², ⋯, xⁿ) and thus have at our disposal the Christoffel symbols of our Riemannian geometry

(17·10) Γ^i_{αβ}(x¹, x², ⋯, xⁿ) = ½ g^{iγ} (∂g_{γα}/∂x^β + ∂g_{γβ}/∂x^α − ∂g_{αβ}/∂x^γ).

Notice that the Riemannian Christoffel symbols depend on the fundamental Riemannian metric tensor g_{αβ} and its first partial derivatives. Unlike the Euclidean Christoffel symbols, it is impossible in general to find a coordinate system in which all the Riemannian Christoffel symbols are zero everywhere in the Riemannian space. This is due to the fact that it is in general impossible to find a coordinate system
in which the metric tensor g_{αβ} has constant components throughout space. It is to be recalled that in a Euclidean space there do exist just such coordinate systems, i.e., cartesian coordinate systems and rectangular coordinate systems in particular.
We can, however, prove that there exists a coordinate system with any point of the space as the origin, i.e., the (0, 0, ⋯, 0) point, such that all the Christoffel symbols vanish at the origin when they are evaluated in this coordinate system. Such a coordinate system is called a geodesic coordinate system. We shall prove that the coordinates y^i defined implicitly by the transformation of coordinates

(17·11) x^i = q^i + y^i − ½ Γ^i_{αβ}(q¹, q², ⋯, qⁿ) y^α y^β

are geodesic coordinates.

A direct calculation from 17·11 yields the needed formulas

(17·12) (∂x^i/∂y^α)₀ = δ^i_α,  (∂²x^i/∂y^α ∂y^β)₀ = −Γ^i_{αβ}(q),

where the 0 means evaluation at the origin of the y^i coordinates. Now, under a transformation of coordinates, the Christoffel symbols of a Riemannian space transform† by the rule 10·29 for Euclidean Christoffel symbols. Let *Γ^i_{αβ}(y¹, y², ⋯, yⁿ) be the (Riemannian) Christoffel symbols in the y^i coordinates. Then

(17·13) *Γ^i_{αβ}(y¹, y², ⋯, yⁿ) = [ Γ^μ_{νλ}(x¹, x², ⋯, xⁿ) (∂x^ν/∂y^α)(∂x^λ/∂y^β) + ∂²x^μ/∂y^α ∂y^β ] (∂y^i/∂x^μ).

Now we see from the transformation of coordinates 17·11 that x^i = q^i when y^i = 0. In other words, the origin of the y^i coordinates has coordinates x^i = q^i in the x^i coordinates.

If we evaluate both sides of 17·13 at the origin of the y^i coordinates and if we use formulas 17·12 in the calculations, we find

(17·14) *Γ^i_{αβ}(0, 0, ⋯, 0) = 0.

In other words, the y^i's are geodesic coordinates.

† With the difference that the number of variables now is n and the indices have the range 1 to n. The proof is practically a repetition of that given in note 4 to Chapter 10.
Curved Surfaces as Examples of Riemannian Spaces.
Obviously any Euclidean space is a very special Riemannian space. A simple example of a Riemannian space which is not Euclidean is furnished by a curved surface in ordinary three-dimensional Euclidean space. This can be seen as follows. Let y^i be rectangular coordinates in the three-dimensional Euclidean space, and let the equations of a curved surface be

(17·15) y^i = f^i(x¹, x²)  (i = 1, 2, 3)

in terms of two parameters x¹ and x². Then the squared element of arc for points on the surface 17·15 is

(17·16) ds² = Σ_{i=1}^{3} (dy^i)² = g_{αβ}(x¹, x²) dx^α dx^β

(α and β have the range 1 to 2 and corresponding summations go from 1 to 2), where

(17·17) g_{αβ}(x¹, x²) = Σ_{i=1}^{3} (∂f^i/∂x^α)(∂f^i/∂x^β).

So a surface in three-dimensional Euclidean space is a two-dimensional Riemannian space.
Exercise

The surface of a sphere is a two-dimensional Riemannian space. Find its fundamental metric tensor and its Christoffel symbols. The surface of a sphere of fixed r is given by

y¹ = r sin x¹ cos x²
y² = r sin x¹ sin x²
y³ = r cos x¹

FIG. 17·2.

Therefore the fundamental metric tensor is given by g₁₁ = r², g₁₂ = g₂₁ = 0, g₂₂ = r²(sin x¹)². Hence

g¹¹ = 1/r²,  g¹² = g²¹ = 0,  g²² = 1/(r²(sin x¹)²).

The Christoffel symbols are then

Γ¹₂₂ = −sin x¹ cos x¹,  Γ²₁₂ = Γ²₂₁ = cot x¹,

and all the other Christoffel symbols of the surface of the sphere are zero.
The Riemann-Christoffel Curvature Tensor.
It was seen in Chapter 11 that covariant differentiation is a commutative operation in three-dimensional Euclidean space, and by exactly the same type of reasoning this is also true in an n-dimensional Euclidean space. To establish this result explicit use was made of cartesian coordinates. Since such coordinates are in general not available in a Riemannian space, we cannot use that type of proof. In fact, covariant differentiation in a Riemannian space is not in general commutative. We shall find a formula (see 17·19 below) that makes clear the noncommutativity of covariant differentiation in Riemannian spaces.

In obtaining Laplace's equation in curvilinear coordinates for contravariant vector fields in a Euclidean space, we had to calculate the second covariant derivative of a contravariant vector field. (See the bracket term in 12·23.) The calculation for Riemannian spaces is practically the same, so that we shall write down the second covariant derivative of ξ^i(x¹, x², ⋯, xⁿ) based on the (Riemannian) Christoffel symbols Γ^i_{αβ}(x¹, x², ⋯, xⁿ) without giving any more details (again see the bracket term in 12·23). The result is

(17·18) ξ^i_{,α,β} = ∂²ξ^i/∂x^α ∂x^β − Γ^σ_{αβ} ∂ξ^i/∂x^σ + Γ^i_{σβ} ∂ξ^σ/∂x^α + Γ^i_{ασ} ∂ξ^σ/∂x^β
                   + (∂Γ^i_{ασ}/∂x^β + Γ^i_{βγ} Γ^γ_{ασ} − Γ^γ_{αβ} Γ^i_{γσ}) ξ^σ.

From the commutativity of the partial derivatives and the symmetry of the Christoffel symbols Γ^i_{αβ} = Γ^i_{βα} we find

(17·19) ξ^i_{,α,β} − ξ^i_{,β,α} = B^i_{σαβ} ξ^σ,

where

(17·20) B^i_{σαβ} = ∂Γ^i_{ασ}/∂x^β − ∂Γ^i_{βσ}/∂x^α + Γ^i_{βγ} Γ^γ_{ασ} − Γ^i_{αγ} Γ^γ_{βσ}.
To justify the notation B^i_{σαβ} and prove that the B^i_{σαβ} are the components of a tensor field of rank four, contravariant of rank one and covariant of rank three, we first note that the left sides of 17·19 are the components of a tensor field of rank three, contravariant of rank one and covariant of rank two. Hence B^i_{σαβ} ξ^σ is a tensor field of the same type for all contravariant vector fields ξ^i(x); i.e.,

B̄^i_{σαβ}(x̄) ξ̄^σ(x̄) = B^μ_{νκλ}(x) ξ^ν(x) (∂x̄^i/∂x^μ)(∂x^κ/∂x̄^α)(∂x^λ/∂x̄^β).

But, writing ξ^ν(x) in terms of the ξ̄^σ(x̄), we evidently have

B̄^i_{σαβ}(x̄) ξ̄^σ(x̄) = B^μ_{νκλ}(x) ξ̄^σ(x̄) (∂x^ν/∂x̄^σ)(∂x̄^i/∂x^μ)(∂x^κ/∂x̄^α)(∂x^λ/∂x̄^β).
But the ξ̄^σ(x̄) are arbitrary, and hence, equating corresponding coefficients, we obtain the tensor law of transformation for the B^i_{σαβ}. This tensor field is the famous Riemann-Christoffel curvature tensor; it is not a zero tensor in a general Riemannian space. Hence ξ^i_{,α,β} ≠ ξ^i_{,β,α} in a Riemannian space with nonvanishing Riemann-Christoffel curvature tensor. But obviously the Riemann-Christoffel curvature tensor is zero in Euclidean spaces. Hence ξ^i_{,α,β} = ξ^i_{,β,α} in Euclidean spaces; this checks a result found earlier, in 11·13.
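Formula 17·20 can be evaluated directly for the surface of the unit sphere, confirming that the Riemann-Christoffel tensor does not vanish there. The sketch below is a modern illustration, not part of the original text; it assumes Python with sympy and uses the sign conventions of 17·18-17·20 above.

```python
import sympy as sp

x = sp.symbols('x1 x2')
g = sp.Matrix([[1, 0], [0, sp.sin(x[0])**2]])   # unit sphere, x1 = colatitude
ginv = g.inv()
n = 2

def Gamma(i, a, b):
    """Christoffel symbol Gamma^i_{ab} by formula 17.10."""
    return sp.Rational(1, 2) * sum(
        ginv[i, l] * (sp.diff(g[l, a], x[b]) + sp.diff(g[l, b], x[a])
                      - sp.diff(g[a, b], x[l])) for l in range(n))

def B(i, s, a, b):
    """Riemann-Christoffel tensor B^i_{s a b}, formula 17.20."""
    val = sp.diff(Gamma(i, a, s), x[b]) - sp.diff(Gamma(i, b, s), x[a])
    val += sum(Gamma(i, b, l) * Gamma(l, a, s)
               - Gamma(i, a, l) * Gamma(l, b, s) for l in range(n))
    return sp.simplify(val)

# the sphere is curved: this component does not vanish
assert B(0, 1, 0, 1) == -sp.sin(x[0])**2
```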
Geodesics.
A straight line is the shortest distance between two points in Euclidean spaces. There are curves in Riemannian spaces that play a role analogous to the straight lines of Euclidean spaces. Such curves are called geodesics. In fact, if a Riemannian space is a Euclidean space, then its geodesics are straight lines. To find the differential equations satisfied by the geodesics of a Riemannian space, we have to get the Euler-Lagrange differential equations for the calculus of variations problem

(17·21) ∫_{t₀}^{t₁} √( g_{ij} (dx^i/dt)(dx^j/dt) ) dt = minimum.

It can be shown that the Euler-Lagrange equations for this calculus of variations problem are

(17·22) d²x^i/ds² + Γ^i_{jk} (dx^j/ds)(dx^k/ds) = 0,

where s is the arc length and the Γ^i_{jk} are the Christoffel symbols of the Riemannian space. In other words, if the coordinates x^i of points on a geodesic are considered as functions of the arc length parameter s, then the n functions x^i(s) satisfy the system 17·22 of n differential equations of the second order.

If the Riemannian space is Euclidean and we choose rectangular cartesian coordinates y^α, equations 17·22 reduce to

d²y^α/ds² = 0,

and hence

y^α = a^α s + b^α  (a^α and b^α are constants),

the parametric equations of straight lines in terms of arc length s.
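The geodesic equations 17·22 can be integrated numerically. The sketch below is a modern illustration, not part of the original text; it assumes Python, and the Runge-Kutta integrator is an implementation choice rather than anything from the book. It starts a curve on the equator of the unit sphere and checks that it remains a great circle.

```python
import math

def geodesic_rhs(state):
    """Geodesic equations 17.22 on the unit sphere (x1 = colatitude, x2 = longitude):
    x1'' = sin(x1) cos(x1) (x2')^2,   x2'' = -2 cot(x1) x1' x2'."""
    x1, x2, v1, v2 = state
    return (v1, v2,
            math.sin(x1) * math.cos(x1) * v2 * v2,
            -2.0 * (math.cos(x1) / math.sin(x1)) * v1 * v2)

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

# start on the equator, moving due east with unit speed: expect a great circle
state = (math.pi / 2, 0.0, 0.0, 1.0)
h, steps = 0.01, 200
for _ in range(steps):
    state = rk4_step(geodesic_rhs, state, h)

assert abs(state[0] - math.pi / 2) < 1e-9   # the colatitude stays pi/2
assert abs(state[1] - h * steps) < 1e-9     # longitude advances by the arc length
```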
Equations of Motion of a Dynamical System with n Degrees of
Freedom.
In classical mechanics, it is postulated that the motion of a conservative dynamical system of n degrees of freedom with no moving constraints is governed by Lagrange's equations of motion

(17·23) d/dt (∂L/∂q̇^i) − ∂L/∂q^i = 0.

If the kinetic energy T, in terms of the generalized coordinates q¹, q², ⋯, qⁿ and the generalized velocities q̇^i(t) = dq^i(t)/dt, is

T = ½ g_{ij}(q¹, ⋯, qⁿ) q̇^i q̇^j  (g_{ij} = g_{ji}),

and if the potential energy is V(q¹, q², ⋯, qⁿ), then the kinetic potential or Lagrangean L is given by L = T − V. Now the kinetic energy is positive definite in the velocities q̇^i; i.e., T ≥ 0 and T = 0 if and only if all the q̇^i = 0. It can be proved by algebraic reasoning that the determinant g of the g_{ij} is positive, so that we can form the g^{ij} in terms of the g_{ij} exactly as in Riemannian geometry. By direct calculation we find

∂L/∂q̇^i = g_{ij} q̇^j,

d/dt (∂L/∂q̇^i) = g_{ij} q̈^j + (∂g_{ij}/∂q^k) q̇^j q̇^k
              = g_{ij} q̈^j + ½ (∂g_{ij}/∂q^k + ∂g_{ik}/∂q^j) q̇^j q̇^k,

∂L/∂q^i = ½ (∂g_{jk}/∂q^i) q̇^j q̇^k − ∂V/∂q^i.

Hence Lagrange's equations of motion 17·23 can be written in the form

(17·24) g_{ij} q̈^j + ½ (∂g_{ij}/∂q^k + ∂g_{ik}/∂q^j − ∂g_{jk}/∂q^i) q̇^j q̇^k = −∂V/∂q^i.

Multiplying corresponding sides of 17·24 by g^{li} and summing on i, we obtain the following form for Lagrange's equations of motion:

(17·25) q̈^l + Γ^l_{jk}(q¹, ⋯, qⁿ) q̇^j q̇^k = −g^{li} ∂V/∂q^i,

where the Γ^l_{jk}(q¹, ⋯, qⁿ) are the Christoffel symbols based on the g_{ij}(q¹, ⋯, qⁿ) of the kinetic energy of the dynamical system.
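Form 17·25 can be checked against the Euler-Lagrange equations 17·23 for a particle in the plane described in polar coordinates, whose kinetic-energy metric is g₁₁ = 1, g₂₂ = r². The sketch below is a modern illustration, not part of the original text; it assumes Python with sympy.

```python
import sympy as sp

t = sp.symbols('t')
r, th = sp.Function('r')(t), sp.Function('theta')(t)
V = sp.Function('V')(r)

# kinetic energy of a particle in the plane, polar coordinates; L = T - V
L = sp.Rational(1, 2) * (r.diff(t)**2 + r**2 * th.diff(t)**2) - V

# Euler-Lagrange equations 17.23
EL_r = sp.diff(sp.diff(L, r.diff(t)), t) - sp.diff(L, r)
EL_th = sp.diff(sp.diff(L, th.diff(t)), t) - sp.diff(L, th)

# form 17.25: q'' + Gamma q' q' = -g^{ij} dV/dq^j, with the polar-coordinate
# Christoffel symbols Gamma^r_{theta theta} = -r, Gamma^theta_{r theta} = 1/r
geo_r = r.diff(t, 2) - r * th.diff(t)**2 + sp.diff(V, r)
geo_th = th.diff(t, 2) + 2 / r * r.diff(t) * th.diff(t)

assert sp.simplify(EL_r - geo_r) == 0
assert sp.simplify(EL_th - r**2 * geo_th) == 0   # EL_th = g_{theta theta} * (17.25)
```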
Exercise
A symmetrical gyroscope with a point O fixed on the axis is acted upon by gravity. Let I, I, and J be the principal moments of inertia. Then the kinetic energy is given by

T = ½I(q̇¹)² + ½I (sin q¹)² (q̇²)² + ½J(q̇³ + q̇² cos q¹)²,

and the potential energy by

V = Mgh cos q¹.

The coordinates q¹, q², and q³ are the Eulerian angles, M is the mass of the gyroscope, and h is the distance of the center of gravity from O. Find the Lagrangean equations of motion of the symmetrical gyroscope. Compute also the element of arc length of the three-dimensional Riemannian space associated with the symmetrical gyroscope.
CHAPTER 18
APPLICATIONS OF THE TENSOR CALCULUS TO
BOUNDARYLAYER THEORY
Incompressible and Compressible Fluids.
The preservation of volume of all parts of a fluid in motion sometimes plays an important role in the theory of fluid flows. A fluid in motion with this property is called an incompressible fluid, whereas a fluid in motion without this property is called a compressible fluid. If the u^i are the contravariant components of velocity of the fluid in motion in general coordinates x^i, then

(18·1) dx^i/dt = u^i

are the differential equations whose integration gives the paths of the fluid particles in the coordinates x^i.

It can be proved by a direct calculation that a necessary and sufficient condition that the volume

(18·2) V = ∫∫∫ √g dx¹ dx² dx³

of arbitrary portions of the moving fluid be preserved is that the divergence of the velocity field u^i be zero, i.e.,

(18·3) u^α_{,α} = 0.
In 18·2, g is the determinant of the Euclidean metric tensor g_{ij} (ds² = g_{ij} dx^i dx^j), and the comma in u^α_{,α} stands for covariant differentiation based on the Euclidean Christoffel symbols Γ^i_{jk}. In other words, a necessary and sufficient condition for an incompressible fluid is that the velocity vector field u^i satisfy the partial differential equation 18·3. A glance at formula 13·6 shows that the condition of incompressibility is equivalent to

(18·4) ∂(√g u^α)/∂x^α = 0.
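Condition 18·4 can be verified against the familiar divergence formula in spherical coordinates, for which √g = r² sin θ. The sketch below is a modern illustration, not part of the original text; it assumes Python with sympy.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
f = sp.Function('f')(r)

# Euclidean metric in spherical coordinates
g = sp.Matrix([[1, 0, 0], [0, r**2, 0], [0, 0, (r * sp.sin(th))**2]])
sqrt_g = sp.sqrt(g.det())          # = r^2 sin(theta), up to sign conventions

u = [f, 0, 0]                      # a purely radial velocity field
xs = [r, th, ph]

# covariant divergence by formula 18.4 / 13.6
div_u = sum(sp.diff(sqrt_g * u[a], xs[a]) for a in range(3)) / sqrt_g

# it agrees with the classical expression (1/r^2) d(r^2 f)/dr
assert sp.simplify(div_u - sp.diff(r**2 * f, r) / r**2) == 0

# f(r) = 1/r^2 is divergence-free away from the origin: an incompressible flow
assert sp.simplify(div_u.subs(f, 1 / r**2).doit()) == 0
```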
If we recall the Navier-Stokes equations 13·3 for the motion of a viscous fluid, incompressible or compressible, we know that the equation of continuity

(18·5) ∂ρ/∂t + (ρu^α)_{,α} = 0
in general coordinates x^i merely states the constancy of the mass

(18·6) m = ∫∫∫ ρ(x¹, x², x³, t) √g dx¹ dx² dx³

of any portion of the moving fluid. An evident consequence of the conditions 18·3 and 18·5 is that the density ρ(x¹, x², x³, t) (an absolute scalar) satisfies the condition

(18·7) ∂ρ/∂t + (∂ρ/∂x^α) u^α = 0,

which states that

(18·8) dρ/dt = 0

along any chosen path of fluid particles. This means that 18·7 can be taken as the defining condition for an incompressible fluid in view of the continuity equation 18·5. The Navier-Stokes equations for an incompressible viscous fluid reduce then to the following system of four differential equations in general coordinates x^i:

(18·9) ∂u^i/∂t + u^α u^i_{,α} = ν g^{αβ} u^i_{,α,β} − (g^{iα}/ρ) ∂p/∂x^α + X^i,
       u^α_{,α} = 0.

For an incompressible fluid, the density ρ(x¹, x², x³, t) is given subject to condition 18·7. Then the four differential equations 18·9 will have as unknowns the three velocity components u¹, u², u³ and the pressure p(x¹, x², x³, t) of the fluid.
The situation is different for compressible fluids. The Navier-Stokes equations 13·3 are four in number with five unknown functions u¹, u², u³, p, and ρ. To make the problem determinate a fifth condition must be imposed. This is usually furnished by the "equation of state," which in the isothermal case is of the form

(18·10) p = f(ρ).
Boundary-Layer Equations for the Steady Motion of a Homogeneous Incompressible Fluid.†

We shall now restrict ourselves to the steady motion of a fluid without any external forces, so that X^i = 0 and all the quantities u^i, p, ρ are independent of the time t. If in addition we assume that the fluid is homogeneous, i.e., ρ is a constant, and incompressible, the four unknowns

† The remaining part of this chapter is an exposition of some unpublished researches of Dr. C. C. Lin. These results were presented by Dr. Lin in my seminar on applied mathematics.
u^i, π must, by a reference to 18·9, satisfy the four differential equations

(18·11) u^j u^i_{,j} = ν g^{jk} u^i_{,j,k} − g^{ij} ∂π/∂x^j,
        u^i_{,i} = 0,

where π is the pressure p divided by the constant density ρ of the fluid, and the constant ν is the kinematical viscosity. Since the covariant derivative of the Euclidean metric tensor g_{ij} is zero, it follows from 18·11 that the covariant vector components u_i = g_{ij} u^j of the velocity field and the function π will satisfy the system of differential equations

(18·12) u^j u_{i,j} = ν g^{jk} u_{i,j,k} − ∂π/∂x^i,
        u^i_{,i} = 0.
For the treatment of "boundary-layer" problems connected with an arbitrary surface, it is convenient to take a system of space coordinates in which x¹, x² are surface coordinates and x³ is a coordinate measured along the normals to the surface. Thus x³ = 0 will be the equation of the given surface. If we allow Latin indices to run over the range (1, 2, 3) and Greek indices over the range (1, 2), we have the following fundamental metrics:

FIG. 18·1.

(18·13) ds² = g_{ij}(x¹, x², x³) dx^i dx^j in 3-space,

and

(18·14) ds² = g_{αβ}(x¹, x², x³) dx^α dx^β over the surface x³ = a constant.

From the manner in which the coordinate x³ was chosen, it follows that in 3-space†

(18·15) ds² = g_{ij} dx^i dx^j = g_{αβ}(x¹, x², x³) dx^α dx^β + (dx³)².

† In Riemannian geometry, this is sometimes called the "geodesic form" of the line element. Such forms of the line element were used recently by Dr. W. Z. Chien in connection with his researches on the intrinsic theory of plates and shells (see references). Dr. Chien presented some of his work in my seminar on applied mathematics and made some exceedingly helpful calculations in connection with interesting geometric ideas arising in his and Dr. C. C. Lin's work.
In other words, the Euclidean metric tensor g_{ij}(x¹, x², x³) is such that

(18·16) g₃₃ = 1,  g₃α = g_{α3} = 0.

We shall henceforth consider transformations of surface coordinates x¹, x² alone, so that the coordinate x³ may be regarded as a scalar parameter under a transformation of surface coordinates. To emphasize this fact we shall use the notation x⁰ = x³. Thus, in the new notation, 18·16 can be written

(18·17) g₀₀ = 1,  g₀α = g_{α0} = 0.

We saw in the previous chapter that a surface can be considered as a two-dimensional Riemannian space. There is thus at our disposal the Riemannian tensor calculus of the previous chapter for immediate use in connection with the surfaces x⁰ = constant. We use a semicolon to denote surface covariant differentiation, in contradistinction to the comma for covariant differentiation in the enveloping three-dimensional Euclidean space.

To express all covariant differentiations with respect to space coordinates in terms of covariant differentiations with respect to surface coordinates x¹, x² and partial differentiations with respect to x⁰, consider first the Euclidean Christoffel symbols Γ^i_{jk} in the coordinates x¹, x², x⁰. If i, j, k are all in the range 1, 2, no reduction is possible unless a special surface coordinate system is chosen; if one of the three indices is zero, we have

(18·18) Γ^0_{αβ} = −½ ∂g_{αβ}/∂x⁰ = −Γ_{αβ},  Γ^β_{0α} = Γ^β_{α0} = ½ g^{βγ} ∂g_{γα}/∂x⁰ = Γ^β_α.

The quantities Γ_{αβ} and Γ^β_α so defined are evidently tensor fields with respect to transformations of surface coordinates. The other Christoffel symbols Γ^i_{jk}, in which two or all of the indices i, j, k are zero, vanish identically.

With the help of these relations, it can be easily verified that

(18·19) u_{0,0} = ∂u₀/∂x⁰,   u_{0,α} = ∂u₀/∂x^α + Γ^β_α u_β,
        u_{α,0} = ∂u_α/∂x⁰ + Γ^β_α u_β,   u_{α,β} = u_{α;β} + Γ_{αβ} u₀,

that the divergence takes the form

(18·20) u^i_{,i} = ∂u₀/∂x⁰ + u^α_{;α} + Γ^α_α u₀,

and that the contracted second covariant derivatives take the form

(18·21) g^{jk} u_{0,j,k} = ∂²u₀/∂(x⁰)² + Γ^α_α ∂u₀/∂x⁰ + φ₀,
        g^{jk} u_{α,j,k} = ∂²u_α/∂(x⁰)² + Γ^β_β ∂u_α/∂x⁰ + φ_α,

where φ₀ and φ_α stand for the aggregates of terms written out in 18·22; they are built from the surface covariant derivatives of u₀ and u_α and from the tensors Γ_{αβ} and Γ^β_α and their derivatives, and they do not involve differentiation of the u_i with respect to x⁰.
Let us now consider the analytical nature of the system 18·12 of four partial differential equations in the four unknowns π, u₀, u_α. By using 18·19, 18·20, and 18·21, we can put this system in the normal form with respect to x⁰ by solving for ∂π/∂x⁰ and ∂²u_α/∂(x⁰)² from the equations of motion and for ∂u₀/∂x⁰ from the equation of continuity. Thus

(18·23) ∂u₀/∂x⁰ = −u^α_{;α} − Γ^α_α u₀,

together with the corresponding expressions for ∂π/∂x⁰ and ∂²u_α/∂(x⁰)², in which the remaining x⁰-derivatives of u₀ may be expressed in terms of u₀ and u_α by using the last equation. Thus, the highest derivatives of all the variables with respect to x⁰ have the coefficient unity in these equations. Hence, if π, u₀, u_α, ∂u_α/∂x⁰ are given as functions of x¹, x² on the surface x⁰ = 0, the solution of the problem is uniquely determined.

This normal form 18·23 of the system of differential equations, however, is not analytic in the small parameter ν, the important case in aeronautics, in the neighborhood of ν = 0, and is consequently inconvenient for the application of the method of successive approximations. We therefore make the transformation of variables

(18·24) x⁰ = ν^{1/2} ζ,  u₀ = ν^{1/2} w
to bring it into the desired form. We then have the transformed system 18·25, obtained by rewriting 18·23 in the variables ζ and w; in it the highest derivatives ∂π/∂ζ, ∂²u_α/∂ζ², ∂w/∂ζ again appear with coefficient unity, and the parameter ν enters the right-hand members only through the combination ν^{1/2}.
Let us note that φ₀ and φ_α are linear in the small parameter ν^{1/2} through the term in u₀ (cf. 18·22); they also depend on ν^{1/2} through the geometrical quantities, which are functions of x⁰ = ν^{1/2}ζ. Indeed, it can be shown that the surface metric tensor g_{αβ} is a quadratic function of x⁰, while all the other geometrical quantities may be expanded as power series of x⁰ convergent for |x⁰| < R_m, R_m being the minimum magnitude of the principal radii of curvature over the surface under consideration. Hence, the right-hand sides of the equations 18·25 are Taylor series in ν^{1/2}, and the solution of 18·25 may be carried out by expanding each of the dependent variables as a power series of ν^{1/2}, convergent for all finite values of ζ for which |ν^{1/2}ζ| < R_m.
If we try to solve 18·23 by the same type of expansion, either the series are asymptotic, or they may terminate; but in general we cannot find a solution satisfying all the required boundary conditions. In fact, the initial approximation is easily verified to satisfy the nonviscous equation (ν = 0). The boundary conditions at infinity and the condition u₀ = 0 at x⁰ = 0 are then sufficient to determine this approximation completely. Indeed, the boundary conditions at infinity are usually such that the resultant solution is potential. Then the initial approximation is an exact solution of the complete equations 18·23. However, the boundary conditions u_α = 0 at x⁰ = 0 cannot be satisfied in general. The effect of viscosity can never be brought into evidence. This shows that the more elaborate treatment described above is absolutely necessary. The nonviscous solution (usually potential), however, serves as a guide for making the exact solutions satisfy the boundary conditions at infinity. This point will be discussed in more detail below.
Let us now proceed with the solution of 18·25 by writing

(18·26) π = π⁽⁰⁾ + ν^{1/2} π⁽¹⁾ + ν π⁽²⁾ + ⋯,
        u_α = u_α⁽⁰⁾ + ν^{1/2} u_α⁽¹⁾ + ν u_α⁽²⁾ + ⋯,
        w = w⁽⁰⁾ + ν^{1/2} w⁽¹⁾ + ν w⁽²⁾ + ⋯.
Corresponding developments for the geometrical quantities g_{αβ}, Γ_{αβ}, ⋯ must also be used. The initial approximation gives

(18·27) u^β u_{α;β} + w ∂u_α/∂ζ = −∂π/∂x^α + ∂²u_α/∂ζ²,
        0 = ∂π/∂ζ,
        u^β_{;β} + ∂w/∂ζ = 0,

where the superscripts of the initial approximation are dropped. In these equations, ζ is a scalar, and the metric tensor is a_{αβ}(x¹, x²), being g_{αβ}(x¹, x², x⁰) evaluated at x⁰ = 0. The conditions over the surface ζ = 0 are u_α = 0 and w = 0. The condition at infinite ζ is set according to the following considerations. For a large but finite value of ζ, the value of x⁰ is still small. Hence, the solution may be expected to pass into the nonviscous solution close to the surface if ζ is large. Thus, for the initial approximation, we may lay down the conditions

(18·28) u_α = ū_α,  π = π̄  for ζ → ∞,

where ū_α and π̄ are functions of x¹ and x², being the values of u_α and π of the nonviscous solution at x⁰ = 0. The initial approximation is then completely determined.
If u_α and π differ from ū_α and π̄ by negligible quantities for ζ = h, then an approximate solution of 18·12 is usually taken to be given (a) by the nonviscous solution for ζ > h, and (b) by the solution of 18·27 for ζ < h. The quantity ν^{1/2}h is known as the "thickness of the boundary layer" and is arbitrary to a certain extent. For example, we may define h to be given by (say) three times the h_α of the equation

(18·29) ∫₀^∞ (ū_α − u_α) dζ = ū_α h_α,

which is in general different according to whether α = 1 or 2. This initial approximation is usually known as the boundary-layer theory of Prandtl. Incidentally, we note that π is a function of x¹ and x² alone, by the second equation of 18·27. Hence, by 18·28, π = π̄(x¹, x²), which is known from the nonviscous solution. The first and third equations of 18·27 then serve as three equations for the velocities u_α and w.
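The classical concrete instance of the initial approximation 18·27 is the Blasius flow along a flat plate, where the system collapses to the ordinary differential equation f''' + ½ f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1. This flat-plate special case is not worked out in the text; the following sketch (a modern illustration, assuming Python) solves it by shooting and recovers the classical wall value f''(0) ≈ 0.332.

```python
def blasius_fp_at_inf(fpp0, eta_max=10.0, n=2000):
    """Integrate the Blasius equation f''' = -f f''/2 by RK4 and
    return f'(eta_max) for a guessed wall curvature f''(0) = fpp0."""
    h = eta_max / n
    f, fp, fpp = 0.0, 0.0, fpp0
    def rhs(f, fp, fpp):
        return fp, fpp, -0.5 * f * fpp
    for _ in range(n):
        k1 = rhs(f, fp, fpp)
        k2 = rhs(f + 0.5*h*k1[0], fp + 0.5*h*k1[1], fpp + 0.5*h*k1[2])
        k3 = rhs(f + 0.5*h*k2[0], fp + 0.5*h*k2[1], fpp + 0.5*h*k2[2])
        k4 = rhs(f + h*k3[0], fp + h*k3[1], fpp + h*k3[2])
        f += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        fp += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        fpp += h/6*(k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
    return fp

# shoot on f''(0) so that the velocity matches the outer stream: f'(inf) = 1
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if blasius_fp_at_inf(mid) < 1.0:
        lo = mid
    else:
        hi = mid

fpp0 = 0.5 * (lo + hi)
assert abs(fpp0 - 0.332) < 1e-3   # the classical Blasius wall value
```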
The higher approximations in 18·26 satisfy certain differential equations obtained together with the derivation of 18·27. The boundary conditions at ζ = 0 are u_α = 0, w = 0 for any approximation. The boundary conditions for the nth approximation at infinity will be specified by using the nth approximation of the asymptotic solution, which will in turn be determined from certain boundary conditions related to the (n − 1)st approximation of the convergent solution. Since we are never concerned with higher approximations in practice, we shall not go into further details.
NOTES ON PART I
Chapter 1
1. In the modern quantum mechanics of theoretical physics, matrices with an infinite number of elements as well as with a finite number of elements are used very widely. The elements of these matrices are often complex numbers. The reader is referred to the bibliographical entries under Born, Jordan, and Dirac.
2. For applications of matrices to the social sciences the reader is referred to the references given in a recent paper by Hotelling.
3. We shall deal for the most part with matrices whose elements are real or complex numbers. It is possible, however, to deal with matrices whose elements are themselves matrices. We shall have occasion to use a few such matrices in connection with our discussion of aircraft flutter in Chapter 7.
Chapter 2
1. For the properties of determinants, linear equations, and related questions on the algebra of matrices, see Bôcher's Introduction to Higher Algebra.
2. Cramer's rule for the solution of linear algebraic equations is given in most books on algebra. In our notations it can be stated in the following manner. If the determinant a = |a^i_j| of the n equations

a^i_j x^j = b^i

in the n unknowns x¹, x², ⋯, xⁿ is not zero, then the equations have a unique solution given by

x^i = A^i / a,

where A^i is the n-rowed determinant obtained from a by replacing the elements a¹_i, a²_i, ⋯, aⁿ_i of the ith column by the corresponding elements b¹, b², ⋯, bⁿ.
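Cramer's rule as stated can be transcribed directly. The sketch below is a modern illustration, not part of the original text; it assumes Python, and the determinant routine is a Laplace expansion, adequate only for small n.

```python
from fractions import Fraction

def det(m):
    """Determinant by Laplace expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer(a, b):
    """Solve a x = b by Cramer's rule: x^i = A^i / a, where A^i replaces
    the elements of the i-th column of the determinant a by the b's."""
    d = det(a)
    if d == 0:
        raise ValueError("determinant is zero")
    n = len(a)
    return [Fraction(det([[b[r] if c == i else a[r][c] for c in range(n)]
                          for r in range(n)]), d)
            for i in range(n)]

a = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
b = [3, 5, 3]
x = cramer(a, b)
# the solution satisfies the original equations a^i_j x^j = b^i
for i in range(3):
    assert sum(a[i][j] * x[j] for j in range(3)) == b[i]
```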
3. The rule for the multiplication of two determinants takes the following form in our notations. If a = |a^i_j| and b = |b^i_j| are two n-rowed determinants, then the numerical product c = ab is itself an n-rowed determinant with elements c^i_j given by the formula

c^i_j = a^i_α b^α_j.
4. It can be shown that s_r, the trace of the matrix A^r, is also equal to the sum of the rth powers of the n characteristic roots of the matrix A.
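This relation between traces and characteristic roots is easy to check numerically. The sketch below is a modern illustration, not part of the original text; it assumes Python with numpy.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
eigs = np.linalg.eigvals(A)   # the characteristic roots of A

for r in range(1, 6):
    s_r = np.trace(np.linalg.matrix_power(A, r))   # trace of A^r
    # equals the sum of the r-th powers of the characteristic roots
    assert abs(s_r - np.sum(eigs**r).real) < 1e-8
```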
5. The recurrence formula 2·6 for the coefficients a₁, ⋯, aₙ of the characteristic function of a matrix can be derived from Newton's formulas; see Bôcher's Introduction to Higher Algebra, pp. 243-244, for a derivation of Newton's formulas.
Chapter 3
1. To show that the matric exponential $e^A$ is convergent for all square matrices A, let $A = \|a^i_j\|$ be an n-rowed square matrix, and let $V(A)$ be the greatest of the numerical values of the $n^2$ numbers $a^i_j$ (the greatest of the moduli of the $a^i_j$ if the $a^i_j$ are complex numbers). Then each element in the matrix $A^r$ will not exceed $n^{r-1}V^r$ in numerical value. Hence each of the $n^2$ infinite series in $e^A$ will be dominated by the series

$$1 + V + \frac{nV^2}{2!} + \frac{n^2V^3}{3!} + \cdots + \frac{n^{r-1}V^r}{r!} + \cdots.$$

Hence all the $n^2$ numerical series in $e^A$ converge. This means that $e^A$ is convergent for all square matrices A.

In the terminology of modern functional analysis and topological spaces, the $V(A)$ is called the norm of the matrix A, and the class of n-rowed matrices with the operations of addition and multiplication of matrices, multiplication by numbers, and convergence of matrices defined by means of the norm $V(A)$ (in other words, $V(A)$ plays an analogous role to the absolute value or modulus of a number in the convergence of numbers) is called a normed linear ring. Other equivalent definitions of the norm of a matrix are possible. For example, $V(A)$ can be taken as

$$V(A) = \left[\sum_{i,j=1}^{n}(a^i_j)^2\right]^{\frac12}$$

if the elements $a^i_j$ of the matrix A are real numbers. Whatever suitable definition of a norm is adopted, the norm $V(A)$ of a matrix will have the following properties:

(1) $V(A) \geq 0$, and $V(A) = 0$ if and only if A is the zero matrix.
(2) $V(A + B) \leq V(A) + V(B)$ (triangular inequality).
(3) $V(AB) \leq V(A)V(B)$.

From property 3 it follows that $V(A^r) \leq (V(A))^r$, a result that makes obvious the usefulness of the notion of a norm for matrices in the treatment of convergence properties of matrices.

The class of matrices discussed above is only one example of a normed linear ring. The first general theory of normed linear rings was initiated in 1932 by Michal and Martin in a paper entitled "Some Expansions in Vector Space," Journal de mathématiques pures et appliquées.
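The dominated-series argument above suggests computing $e^A$ simply by partial sums; the sketch below does so for an invented rotation generator, where the limit is known in closed form.

```python
import numpy as np

def matric_exp(A, terms=30):
    """Partial sums of e^A = I + A + A^2/2! + ...; the norm bound above
    guarantees convergence for every square matrix A."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # next term A^k / k!
        E = E + term
    return E

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # e^A is a rotation by 1 radian
E = matric_exp(A)
print(np.allclose(E, [[np.cos(1), np.sin(1)],
                      [-np.sin(1), np.cos(1)]]))   # True
```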
2. The special case of the expansion 3-7 when $F(A)$ is a matric polynomial is known as Sylvester's theorem.

If the characteristic equation of a matrix A has multiple roots, then the expansion 3-7 is not valid. However, a more general result can be proved. For the case of matrix polynomials $F(A)$ see the Duncan, Frazer, and Collar book. The more general cases of matric power series expansions are treated briefly in a paper by L. Fantappiè with the aid of the theory of functionals, since the elements in the $F(A)$ are functionals of the numerical function $F(\lambda)$. See Volterra's book on functionals (Blackie, 1930).

3. Another equivalent form for the n matrices $G_1, \cdots, G_n$ is

$$G_i = \left[\frac{M(\lambda)}{\dfrac{df(\lambda)}{d\lambda}}\right]_{\lambda=\lambda_i},$$

where the $\lambda_i$ are the characteristic roots of the matrix A, $f(\lambda)$ is the characteristic determinant of A, and $M(\lambda) = \|M^i_j(\lambda)\|$ is a matrix whose element $M^i_j$ is the cofactor of the element in the jth row and ith column of the characteristic determinant $f(\lambda)$. Notice carefully the position of the indices i and j.
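A minimal numerical sketch of note 3's formula, assuming distinct characteristic roots: $M(\lambda)$ is computed as the transposed cofactor (adjugate) matrix of $\lambda I - A$, and Sylvester's theorem is then checked for $F(\lambda) = \lambda^2$ on an invented 2-rowed matrix.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # characteristic roots 5 and 2
lam, n = np.linalg.eigvals(A), A.shape[0]

coeffs = np.poly(A)                        # coefficients of f(lambda) = det(lambda I - A)
dcoeffs = np.polyder(coeffs)               # coefficients of f'(lambda)

def adjugate(B):
    # transposed cofactor matrix: entry (j, i) gets the (i, j) cofactor
    C = np.empty_like(B)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(B, i, 0), j, 1)
            C[j, i] = (-1)**(i + j) * np.linalg.det(minor)
    return C

# constituent matrices G_i = [M(lambda)/f'(lambda)] at lambda = lambda_i
Gs = [adjugate(l*np.eye(n) - A) / np.polyval(dcoeffs, l) for l in lam]

# Sylvester's theorem: F(A) = sum F(lambda_i) G_i, here with F(lambda) = lambda^2
F_A = sum(l**2 * G for l, G in zip(lam, Gs))
print(np.allclose(F_A, A @ A))             # True
```

Note also that the $G_i$ sum to the identity matrix (the case $F(\lambda) = 1$).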
where the >.t are the characteristio roots of the matrix A, leA) is the characteristic
determinant of A, and M(A) '" II II is a matrix whose element is the
_I 1
cofactor of  in the oha.ra.oteristio determinant I(A). Notice carefully the
position of the indices i and j.
Chapter 4
1. A good approximation to the solution $X(t) = [e^{(t-t_0)A}]X_0$ of $\dfrac{dX(t)}{dt} = AX(t)$ can be obtained by taking

$$1 + (t - t_0)A + \frac{(t - t_0)^2}{2!}A^2 + \cdots + \frac{(t - t_0)^n}{n!}A^n$$

in the place of the infinite expansion for $e^{(t-t_0)A}$. For many practical purposes $n = 2$ would be large enough to give a good approximate solution. The approximate solution can then be written

$$X(t) = X_0 + (t - t_0)AX_0 + \frac{(t - t_0)^2}{2!}A^2X_0,$$

where $X_0$ is the column matrix for $t = t_0$ initially given.
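The second-order approximation can be compared against an exactly solvable case; the first-order system below (an undamped oscillator, an invented example) has the known solution $x = \cos 2t$.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-4.0, 0.0]])          # x'' = -4x written in first-order form
X0 = np.array([1.0, 0.0])
t, t0 = 0.1, 0.0
h = t - t0

# second-order approximation X(t) ~ X0 + h A X0 + (h^2/2!) A^2 X0
X_approx = X0 + h * (A @ X0) + h**2 / 2 * (A @ A @ X0)

# exact solution components: x = cos(2t), x' = -2 sin(2t)
X_exact = np.array([np.cos(2*t), -2*np.sin(2*t)])
print(np.abs(X_approx - X_exact).max() < 1e-2)   # True for this small step
```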
2. If the solution of the matric differential equation

$$(1)\quad \frac{dX(t)}{dt} = AX(t)\qquad (X(t_0) = X_0)$$

has been found, then the solution of

$$(2)\quad \frac{dX(t)}{dt} = (A + bI)X(t)\qquad (X(t_0) = X_0)$$

can be written down immediately. In fact, from the second property of the matric exponential given in Chapter 2, we see that $e^{A+bI} = e^b e^A$, where $e^b$ is the numerical exponential. Hence by formula 4-3 we see that the solution of equation 2 is obtained by a mere multiplication by $e^{b(t-t_0)}$ of the solution of equation 1.
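The shift identity $e^{A+bI} = e^b e^A$ rests on $A$ and $bI$ commuting; it can be spot-checked with a small series routine (the matrix and scalar below are invented sample data).

```python
import numpy as np

def matric_exp(A, terms=40):
    # partial sums of the matric exponential series
    E, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

A = np.array([[1.0, 2.0],
              [0.0, -1.0]])
b = 0.7

# A and bI commute, so e^(A + bI) = e^b e^A; hence the solution of the
# shifted equation is the old solution multiplied by the scalar e^(b(t - t0)).
lhs = matric_exp(A + b*np.eye(2))
rhs = np.exp(b) * matric_exp(A)
print(np.allclose(lhs, rhs))    # True
```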
Chapter 5
1. The reader is referred to Whittaker's Analytical Dynamics for a treatment of Lagrange's differential equations of motion of particle dynamics. For some engineering applications, the reader is referred to Mathematical Methods in Engineering by Kármán and Biot.
2. Consult the references in the above note.
Chapter 6
1. If only the fundamental frequency is wanted but not the corresponding amplitudes, then the application of Rayleigh's principle may be preferable. See Elementary Matrices by Frazer, Duncan, and Collar, pp. 310, 299-301.
2. A special case of Sylvester's theorem is what is actually used; see expansion 3-9.
NOTES ON PART II
Chapter 8
1. Although little use has been made of the tensor calculus in plastic deformations, one would suspect that a thoroughgoing application of the tensor calculus to the fundamentals of plastic deformation theory would prove fruitful.
2. For some elementary applications of the tensor calculus to dynamic meteorology, the reader is referred to Ertel's monograph.
3. We shall deal briefly with Riemannian spaces (certain curved spaces) and their applications to classical dynamical systems with a finite number of degrees of freedom (see Chapter 17), and to fluid mechanics (see Chapter 18).
4. A discussion of the fundamentals of coordinates, coordinate systems, and the transformation of coordinates in the various spaces, including Euclidean spaces, is out of the question here. The readers who are interested in modern differential geometry and topology will find ample references in the bibliography under the entries for Veblen, Whitehead, Thomas, and Michal.
Chapter 10
1. Some writers, especially those dealing with physical applications, like to think of the contravariant and covariant components of one object called a vector. For example, if $\xi^i$ are the contravariant components of a velocity vector field, then $\xi^i$ and $\xi_i$ can be considered the contravariant vector and covariant vector "representations" respectively of the same physical object called "velocity vector field." This point of view, however, is untenable in spaces without a metric $g_{ij}$.
2. The importance of the Christoffel symbols for Euclidean spaces is, even now, not very well known.
3. Since

$$g_{\alpha\beta}(x^1, x^2, x^3) = \sum_{i=1}^{3}\frac{\partial y^i}{\partial x^\alpha}\frac{\partial y^i}{\partial x^\beta}\qquad (y^1, y^2, y^3 \text{ are rectangular coordinates}),$$

we obtain, from the rule for the multiplication of two determinants, the result that $g = |g_{\alpha\beta}| = J^2$, where $J = |\partial y^i/\partial x^\alpha|$ is the Jacobian determinant, or the functional determinant, of the transformation of coordinates to rectangular coordinates $y^i$ from general coordinates $x^i$. Hence $J \neq 0$ since we deal with transformations of coordinates that have inverses. This means that the determinant $g \neq 0$ for all our "admissible" transformations of coordinates.
4. The following steps establish the law of transformation 10-29 of the Euclidean Christoffel symbols; exactly the same method establishes the corresponding law of transformation for the Riemannian Christoffel symbols to be discussed in Chapter 17.

Since the $g_{\alpha\beta}$ are the components of the Euclidean metric tensor, we have, under a transformation from coordinates $x^i$ to coordinates $\bar x^i$,

$$(a)\quad \bar g_{\alpha\beta}(\bar x^1, \bar x^2, \bar x^3) = g_{\mu\nu}(x^1, x^2, x^3)\,\frac{\partial x^\mu}{\partial \bar x^\alpha}\frac{\partial x^\nu}{\partial \bar x^\beta}.$$

Differentiating corresponding sides of (a) with respect to $\bar x^\sigma$ (considering the $\bar x^i$ as independent variables), we obtain

$$(b)\quad \frac{\partial \bar g_{\alpha\beta}}{\partial \bar x^\sigma} = \frac{\partial g_{\mu\nu}}{\partial x^\pi}\frac{\partial x^\pi}{\partial \bar x^\sigma}\frac{\partial x^\mu}{\partial \bar x^\alpha}\frac{\partial x^\nu}{\partial \bar x^\beta} + g_{\mu\nu}\,\frac{\partial^2 x^\mu}{\partial \bar x^\alpha\,\partial \bar x^\sigma}\frac{\partial x^\nu}{\partial \bar x^\beta} + g_{\mu\nu}\,\frac{\partial x^\mu}{\partial \bar x^\alpha}\frac{\partial^2 x^\nu}{\partial \bar x^\beta\,\partial \bar x^\sigma}.$$

Interchange $\alpha$ and $\sigma$ in (b) and, noting that

$$\frac{\partial^2 x^\mu}{\partial \bar x^\sigma\,\partial \bar x^\alpha} = \frac{\partial^2 x^\mu}{\partial \bar x^\alpha\,\partial \bar x^\sigma},$$

obtain

$$(c)\quad \frac{\partial \bar g_{\sigma\beta}}{\partial \bar x^\alpha} = \frac{\partial g_{\mu\nu}}{\partial x^\pi}\frac{\partial x^\pi}{\partial \bar x^\alpha}\frac{\partial x^\mu}{\partial \bar x^\sigma}\frac{\partial x^\nu}{\partial \bar x^\beta} + g_{\mu\nu}\,\frac{\partial^2 x^\mu}{\partial \bar x^\alpha\,\partial \bar x^\sigma}\frac{\partial x^\nu}{\partial \bar x^\beta} + g_{\mu\nu}\,\frac{\partial x^\mu}{\partial \bar x^\sigma}\frac{\partial^2 x^\nu}{\partial \bar x^\alpha\,\partial \bar x^\beta}.$$

Interchange $\alpha$ and $\beta$ in (c) and obtain

$$(d)\quad \frac{\partial \bar g_{\sigma\alpha}}{\partial \bar x^\beta} = \frac{\partial g_{\mu\nu}}{\partial x^\pi}\frac{\partial x^\pi}{\partial \bar x^\beta}\frac{\partial x^\mu}{\partial \bar x^\sigma}\frac{\partial x^\nu}{\partial \bar x^\alpha} + g_{\mu\nu}\,\frac{\partial^2 x^\mu}{\partial \bar x^\beta\,\partial \bar x^\sigma}\frac{\partial x^\nu}{\partial \bar x^\alpha} + g_{\mu\nu}\,\frac{\partial x^\mu}{\partial \bar x^\sigma}\frac{\partial^2 x^\nu}{\partial \bar x^\alpha\,\partial \bar x^\beta}.$$

Add corresponding sides of (c) and (d), subtract corresponding sides of (b), and take $\tfrac12$ of both sides. Using the symmetry of $g_{\mu\nu}$ and renaming dummy indices, all the second-derivative terms cancel in the additions and subtractions except one combination, and we obtain

$$(e)\quad \frac12\left(\frac{\partial \bar g_{\sigma\beta}}{\partial \bar x^\alpha} + \frac{\partial \bar g_{\sigma\alpha}}{\partial \bar x^\beta} - \frac{\partial \bar g_{\alpha\beta}}{\partial \bar x^\sigma}\right) = \frac12\left(\frac{\partial g_{\pi\nu}}{\partial x^\mu} + \frac{\partial g_{\mu\pi}}{\partial x^\nu} - \frac{\partial g_{\mu\nu}}{\partial x^\pi}\right)\frac{\partial x^\mu}{\partial \bar x^\alpha}\frac{\partial x^\nu}{\partial \bar x^\beta}\frac{\partial x^\pi}{\partial \bar x^\sigma} + g_{\mu\nu}\,\frac{\partial^2 x^\mu}{\partial \bar x^\alpha\,\partial \bar x^\beta}\frac{\partial x^\nu}{\partial \bar x^\sigma}.$$

On recalling the definition of the Christoffel symbols of the first kind, (e) reads

$$(f)\quad \overline{[\alpha\beta, \sigma]} = [\mu\nu, \pi]\,\frac{\partial x^\mu}{\partial \bar x^\alpha}\frac{\partial x^\nu}{\partial \bar x^\beta}\frac{\partial x^\pi}{\partial \bar x^\sigma} + g_{\mu\nu}\,\frac{\partial^2 x^\mu}{\partial \bar x^\alpha\,\partial \bar x^\beta}\frac{\partial x^\nu}{\partial \bar x^\sigma}.$$

Now

$$(g)\quad \bar g^{\sigma\lambda} = g^{\rho\tau}\,\frac{\partial \bar x^\sigma}{\partial x^\rho}\frac{\partial \bar x^\lambda}{\partial x^\tau}.$$

Multiplying corresponding sides of (f) and (g), summing on $\sigma$, and using the identities

$$\frac{\partial x^\pi}{\partial \bar x^\sigma}\frac{\partial \bar x^\sigma}{\partial x^\rho} = \delta^\pi_\rho,\qquad g^{\rho\tau}\,[\mu\nu, \rho] = \Gamma^\tau_{\mu\nu},$$

we readily obtain

$$\bar\Gamma^\lambda_{\alpha\beta} = \Gamma^\tau_{\mu\nu}\,\frac{\partial x^\mu}{\partial \bar x^\alpha}\frac{\partial x^\nu}{\partial \bar x^\beta}\frac{\partial \bar x^\lambda}{\partial x^\tau} + \frac{\partial^2 x^\tau}{\partial \bar x^\alpha\,\partial \bar x^\beta}\frac{\partial \bar x^\lambda}{\partial x^\tau},$$

from which the desired transformation law 10-29 follows immediately on recalling the definitions of $[\alpha\beta, \sigma]$ and $\Gamma^\lambda_{\alpha\beta}$.
Chapter 11
1. Formula 11-12 for the covariant derivative of a tensor field can be established very quickly with the aid of normal coordinate methods of modern differential geometry. See the two Michal and Thomas 1927 papers.
2. In Chapter 17, we shall see that the answer is in general in the negative for the more general Riemannian spaces.
3. The operation of putting a covariant index equal to a contravariant index in a tensor and summing over that common index is called contraction. A contraction reduces the rank of a tensor by two: by one contravariant index and by one covariant index.
Chapter 13
1. The Navier-Stokes differential equations of hydrodynamics are discussed in Lamb's Hydrodynamics.
2. There are two viewpoints in hydrodynamics: one is the Eulerian point of view in terms of the Eulerian variables; the other is the Lagrangean point of view in terms of the Lagrangean variables. For a nonviscous fluid, the Eulerian hydrodynamical equations are the equations of motion of the fluid from the Eulerian point of view, and the Lagrangean hydrodynamical equations are the equations of motion of the fluid from the Lagrangean point of view. The Eulerian hydrodynamical equations in rectangular coordinates $y^i$ are obtained by putting $\mu = 0$ in the Navier-Stokes equations 13-2.

Eulerian Hydrodynamical Equations.

$$\frac{\partial u^i}{\partial t} + u^j\,\frac{\partial u^i}{\partial y^j} = -\frac{1}{\rho}\,\frac{\partial p}{\partial y^i} + X^i,$$

$$\frac{\partial \rho}{\partial t} + \frac{\partial(\rho u^j)}{\partial y^j} = 0.$$

If $a^i$ are the rectangular coordinates of a fluid particle in the initial state of the fluid, and if the $y^i(a^1, a^2, a^3, t)$ are the coordinates of the particle at time t, then the Lagrangean hydrodynamical equations are

$$\sum_{i=1}^{3}\left(\frac{\partial^2 y^i}{\partial t^2} - X^i\right)\frac{\partial y^i}{\partial a^j} + \frac{1}{\rho}\,\frac{\partial p}{\partial a^j} = 0\qquad (j = 1, 2, 3),$$

$$\rho(y^1, y^2, y^3)\,\begin{vmatrix} \dfrac{\partial y^1}{\partial a^1} & \dfrac{\partial y^1}{\partial a^2} & \dfrac{\partial y^1}{\partial a^3} \\[1ex] \dfrac{\partial y^2}{\partial a^1} & \dfrac{\partial y^2}{\partial a^2} & \dfrac{\partial y^2}{\partial a^3} \\[1ex] \dfrac{\partial y^3}{\partial a^1} & \dfrac{\partial y^3}{\partial a^2} & \dfrac{\partial y^3}{\partial a^3} \end{vmatrix} = \rho_0(a^1, a^2, a^3).$$

For a treatment of classical hydrodynamics, including treatments of the Eulerian differential equations and the Lagrangean differential equations, the reader is referred to Lamb's Hydrodynamics and to Webster's Dynamics; cf. the references at the end of Part II. Ertel's monograph on dynamic meteorology has some interesting remarks.
3. To obtain formula 13-6 for the divergence we can proceed as follows. Since g, the determinant of the Euclidean metric tensor $g_{ij}$, is a relative scalar of weight two, it can be shown by the usual methods of obtaining covariant derivatives of absolute tensors that the covariant derivative $g_{,i}$ is given by

$$g_{,i} = \frac{\partial g}{\partial x^i} - 2g\,\Gamma^\alpha_{\alpha i}.$$

But, in rectangular coordinates, g is unity and the Euclidean Christoffel symbols $\Gamma^\alpha_{\alpha i}$ are zero. Hence $g_{,i}$ is zero in rectangular coordinates and consequently $g_{,i} = 0$ in all coordinates. This means that

$$\frac{\partial \log\sqrt{g}}{\partial x^i} = \Gamma^\alpha_{\alpha i}.$$

But the divergence of $u^i$ is by definition

$$u^i_{,i} = \frac{\partial u^i}{\partial x^i} + \Gamma^i_{i\alpha}\,u^\alpha,$$

so that by the above result we find the following equivalent expression for the divergence:

$$u^i_{,i} = \frac{1}{\sqrt{g}}\,\frac{\partial(\sqrt{g}\,u^i)}{\partial x^i}.$$

If in particular $u^i = g^{ij}\,\dfrac{\partial\psi}{\partial x^j}$, where $\psi(x^1, x^2, x^3)$ is a scalar field, we see that for this $u^i$

$$u^i_{,i} = \frac{1}{\sqrt{g}}\,\frac{\partial}{\partial x^i}\left(\sqrt{g}\,g^{ij}\,\frac{\partial\psi}{\partial x^j}\right).$$

But in rectangular cartesian coordinates $g = 1$, $g^{ij} = \delta^{ij}$, and so this scalar $u^i_{,i}$ reduces to the Laplacean in rectangular cartesian coordinates. Hence Laplace's equation in general coordinates $x^i$ is given by

$$\frac{1}{\sqrt{g}}\,\frac{\partial}{\partial x^i}\left(\sqrt{g}\,g^{ij}\,\frac{\partial\psi}{\partial x^j}\right) = 0\quad \text{when the unknown } \psi(x^1, x^2, x^3) \text{ is a scalar field.}$$
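Laplace's operator in general coordinates can be checked symbolically; the sketch below uses spherical coordinates (an assumed example chart) and recovers $\nabla^2 r^2 = 6$.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = [r, th, ph]
# Euclidean metric in spherical coordinates (illustrative chart)
g = sp.diag(1, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()
sqrtg = r**2 * sp.sin(th)      # sqrt(det g), taking 0 < theta < pi

def laplacian(f):
    # (1/sqrt(g)) d_i ( sqrt(g) g^{ij} d_j f )
    return sp.simplify(
        sum(sp.diff(sqrtg * ginv[i, j] * sp.diff(f, x[j]), x[i])
            for i in range(3) for j in range(3)) / sqrtg)

print(laplacian(r**2))     # 6, the cartesian result for psi = x^2 + y^2 + z^2
print(laplacian(1/r))      # 0 away from the origin
```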
Chapter 14
1. The fundamentals of a finite elastic deformation theory are not new. Kirchhoff in 1852 made the first systematic study, and E. and F. Cosserat in 1896 made an extensive investigation of the subject. Ricci and Levi-Civita in 1900 made brief but important contributions to the applications of the tensor calculus to elasticity theory. Léon Brillouin in 1924 simplified and recast Cosserat's treatment with the aid of the tensor calculus, and in 1937 F. D. Murnaghan, among several other authors, made contributions to the tensor-theoretic treatment of elasticity theory.
Chapter 15
1. One can consider strain differential invariants of order r, i.e., scalar fields that retain their forms as functions of the metric tensor $g_{\alpha\beta}$, the strain tensor $\epsilon_{\alpha\beta}$, and the derivatives of $\epsilon_{\alpha\beta}$ up to order r. Since successive covariant differentiation reduces to a corresponding order of partial differentiation in cartesian coordinates, a strain differential invariant can be written exclusively in terms of tensor fields by merely replacing the derivatives of $\epsilon_{\alpha\beta}$ by corresponding covariant derivatives. The Michal-Thomas methods (see the 1927 papers of these authors) can be used to carry on some interesting researches on strain differential invariants. Strain differential tensors can also be considered.

Examples of strain differential invariants of order one are

$$H = g^{\alpha\beta}g^{\gamma\delta}g^{\lambda\mu}\,\epsilon_{\alpha\gamma,\lambda}\,\epsilon_{\beta\delta,\mu}\qquad \text{and}\qquad g^{\alpha\beta}\,\frac{\partial I_1}{\partial x^\alpha}\frac{\partial I_1}{\partial x^\beta},$$

where $\epsilon_{\alpha\beta,\gamma}$ is the first covariant derivative of the strain tensor $\epsilon_{\alpha\beta}$, and where $I_1$, $I_2$, and $I_3$ are the three fundamental strain invariants. The question arises whether successive covariant differentiation of $I_1$, $I_2$, $I_3$ and combination with $g_{\alpha\beta}$ would yield all the fundamental strain differential invariants. Clearly there exist no strain differential invariants for homogeneous strains. Note that the vanishing of H is the necessary and sufficient condition for a homogeneous strain.
Chapter 16
1. The theory of complete systems of partial differential equations is discussed in Hedrick's translation of Goursat's Cours d'analyse, Vol. II, part II.
Chapter 17
1. For an account of Riemannian geometry, see Eisenhart's Riemannian Geometry. This reference, of course, treats the (classical) finite-dimensional Riemannian geometries. Infinite-dimensional and dimensionless "Riemannian" geometries were first studied by A. D. Michal (see paper 2 under Michal in the references for Part II). The applications to vibrations of elastic media are now being studied (see papers 4, 5, and 6 under Michal in the references).
2. Several engineering applications of Lagrange's equations of motion are to be found in the Kármán and Biot book.
3. We have seen that the Lagrangean equations of motion for a conservative dynamical system with no constraints and n degrees of freedom were

$$(a)\quad g_{ij}\,\ddot q^j + [jk, i]\,\dot q^j\dot q^k = -\frac{\partial V}{\partial q^i}.$$

This means that the dynamical trajectories can be considered as curves in an n-dimensional Riemannian space whose element of arc length ds is given by

$$(b)\quad ds^2 = g_{ij}(q^1, \cdots, q^n)\,dq^i\,dq^j,$$

where the $g_{ij}$ are the functions occurring in the kinetic energy $T = \tfrac12 g_{ij}\dot q^i\dot q^j$. The differential equations of the dynamical curves are the second-order differential equations (a).

These curves are not, in general, geodesics in the Riemannian space with arc lengths ds given by formula (b). The question then arises whether it is possible to define a Riemannian space whose geodesics are some of the curves whose differential equations are Lagrange's equations of motion (a) for the given n-degree dynamical system. We shall show briefly that it is possible to define such a Riemannian space. To do this we must first show that the Lagrangean equations of motion admit the energy integral

$$T + V = C,$$

i.e., the sum of the kinetic and potential energies is a constant along any chosen dynamical trajectory. In fact,

$$\frac{dL}{dt} = \frac{\partial L}{\partial q^i}\,\dot q^i + \frac{\partial L}{\partial \dot q^i}\,\ddot q^i.$$

Hence, on using Lagrange's equations of motion,

$$\frac{dL}{dt} = \dot q^i\,\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q^i}\right) + \frac{\partial L}{\partial \dot q^i}\,\ddot q^i = \frac{d}{dt}\left(\dot q^i\,\frac{\partial L}{\partial \dot q^i}\right) = \frac{d}{dt}(2T).$$

We have then immediately $T + V = C$, a constant, since the kinetic potential $L = T - V$ and $\dot q^i\,\partial L/\partial \dot q^i = 2T$.

Consider now the dynamical curves that correspond to any chosen energy constant C. We shall show that these particular dynamical curves are the geodesics of the n-dimensional Riemannian space whose element of arc length $d\bar s$ is given by

$$(c)\quad d\bar s^2 = 2(C - V)\,g_{ij}(q^1, \cdots, q^n)\,dq^i\,dq^j,$$

where, as in (b), the $g_{ij}$ are the coefficients in the kinetic energy T. For convenience in computation, let us define $\lambda = 2(C - V)$ and $\bar g_{ij} = \lambda g_{ij}$, so that (c) can be written

$$(d)\quad d\bar s^2 = \bar g_{ij}\,dq^i\,dq^j.$$

By definition

$$\bar g^{ij} = \frac{\text{cofactor of } \bar g_{ij} \text{ in } \bar g}{\bar g},$$

where $\bar g$ = the determinant of the $\bar g_{ij}$. Hence, since $\lambda^{n-1}$ factors out in the numerator and $\lambda^n$ in the denominator respectively of $\bar g^{ij}$, we see that

$$\bar g^{ij} = \frac{g^{ij}}{\lambda}.$$

Let $^*\Gamma^i_{jk}$ be the Christoffel symbols based on the metric tensor $\bar g_{ij}$. By definition

$$(e)\quad {}^*\Gamma^i_{jk} = \frac{1}{2\lambda}\,g^{il}\left(\frac{\partial(\lambda g_{lj})}{\partial q^k} + \frac{\partial(\lambda g_{lk})}{\partial q^j} - \frac{\partial(\lambda g_{jk})}{\partial q^l}\right) = \Gamma^i_{jk} + \frac{1}{2\lambda}\left(\delta^i_j\,\frac{\partial \lambda}{\partial q^k} + \delta^i_k\,\frac{\partial \lambda}{\partial q^j} - g_{jk}\,g^{il}\,\frac{\partial \lambda}{\partial q^l}\right).$$

Clearly $d\bar s^2 = 2(C - V)\,g_{ij}\,dq^i\,dq^j$, and hence along a dynamical trajectory with energy constant C we have

$$\left(\frac{d\bar s}{dt}\right)^2 = \lambda\,g_{ij}\,\frac{dq^i}{dt}\frac{dq^j}{dt} = \lambda\cdot 2T = [2(C - V)]^2,$$

so that

$$(f)\quad \frac{d\bar s}{dt} = \lambda$$

along a dynamical trajectory with energy constant C. By elementary calculus we have therefore

$$(g)\quad \frac{dq^i}{d\bar s} = \lambda^{-1}\,\frac{dq^i}{dt},\qquad \frac{d^2q^i}{d\bar s^2} = \lambda^{-2}\,\frac{d^2q^i}{dt^2} - \lambda^{-3}\,\frac{dq^i}{dt}\frac{d\lambda}{dt}.$$

The differential equations for the geodesics of the Riemannian space whose $d\bar s$ is given by (c) are

$$(h)\quad \frac{d^2q^i}{d\bar s^2} + {}^*\Gamma^i_{jk}\,\frac{dq^j}{d\bar s}\frac{dq^k}{d\bar s} = 0.$$

On employing results (e), (f), and (g) in (h), we get after obvious simplifications

$$(i)\quad \frac{d^2q^i}{dt^2} + \Gamma^i_{jk}\,\frac{dq^j}{dt}\frac{dq^k}{dt} = \frac{1}{2\lambda}\,g^{il}\,\frac{\partial \lambda}{\partial q^l}\,g_{jk}\,\frac{dq^j}{dt}\frac{dq^k}{dt}.$$

But

$$g_{jk}\,\frac{dq^j}{dt}\frac{dq^k}{dt} = 2T = \lambda$$

along a dynamical trajectory with energy constant C. Hence equations (i) reduce to

$$(j)\quad \frac{d^2q^i}{dt^2} + \Gamma^i_{jk}\,\frac{dq^j}{dt}\frac{dq^k}{dt} = -g^{il}\,\frac{\partial V}{\partial q^l},$$

since $\tfrac12\,\partial \lambda/\partial q^l = -\partial V/\partial q^l$. But these differential equations are another form of the Lagrangean differential equations of motion for our dynamical system. We have thus proved that the geodesics of the Riemannian space with a $d\bar s$ given by (c) are dynamical trajectories of the dynamical system with kinetic energy $T = \tfrac12 g_{ij}\dot q^i\dot q^j$ and potential energy V.

By retracing our steps of proof, we can show that any dynamical trajectory with energy constant C, i.e., any curve that satisfies (j) with energy constant C, can be considered a geodesic in the Riemannian space with an element of arc length $d\bar s$ given in (c).
Illustrative Example of a Shaft Carrying Four Disks.

A shaft is fixed at one end and carries four disks at a distance l apart. If I is the moment of inertia of each disk and $q^1, q^2, q^3, q^4$ the respective angular deflections of the four disks, then, if the shaft has a uniform torsional stiffness $\tau$, the kinetic and potential energies are given respectively by

$$T = \frac{I}{2}\left[\left(\frac{dq^1}{dt}\right)^2 + \left(\frac{dq^2}{dt}\right)^2 + \left(\frac{dq^3}{dt}\right)^2 + \left(\frac{dq^4}{dt}\right)^2\right]$$

and

$$V = \frac{\tau}{2l}\left[(q^1)^2 + (q^2 - q^1)^2 + (q^3 - q^2)^2 + (q^4 - q^3)^2\right].$$

This is a conservative dynamical system of four degrees of freedom with no moving constraints. Hence the dynamical trajectories with total energy constant C can be represented as the geodesics of the four-dimensional Riemannian space whose element of arc length $d\bar s$ is given by

$$d\bar s^2 = F(q^1, q^2, q^3, q^4)\left[(dq^1)^2 + (dq^2)^2 + (dq^3)^2 + (dq^4)^2\right],$$

where

$$F(q^1, q^2, q^3, q^4) = 2I\left\{C - \frac{\tau}{2l}\left[(q^1)^2 + (q^2 - q^1)^2 + (q^3 - q^2)^2 + (q^4 - q^3)^2\right]\right\}.$$
Numerous other engineering examples can be given, some simpler and some more sophisticated. For example, if the above shaft carries only two disks, the dynamical trajectories with total energy constant C can be represented as the geodesics of the surface whose $d\bar s$ is given by

$$d\bar s^2 = 2I\left\{C - \frac{\tau}{2l}\left[(q^1)^2 + (q^2 - q^1)^2\right]\right\}\left[(dq^1)^2 + (dq^2)^2\right].$$

A more sophisticated example is given by the symmetrical gyroscope; see the exercise at the end of Chapter 17. Here the dynamical trajectories with total energy constant C can be represented as the geodesics of the three-dimensional Riemannian space whose element of arc length $d\bar s$ is given by

$$d\bar s^2 = 2\left[C - Mgh\cos q^1\right]\left[I(dq^1)^2 + (I\sin^2 q^1 + J\cos^2 q^1)(dq^2)^2 + 2J\cos q^1\,dq^2\,dq^3 + J(dq^3)^2\right].$$
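The energy integral $T + V = C$ underlying these geodesic representations can be illustrated numerically for the two-disk shaft; unit values of the disk inertia and of $\tau/l$ are assumed for the sketch.

```python
import numpy as np

# Two-disk shaft with I = 1 and tau/l = 1 (illustrative values):
# T = (1/2)(q1'^2 + q2'^2),  V = (1/2)(q1^2 + (q2 - q1)^2)
def accel(q):
    q1, q2 = q
    # Lagrange's equations: q'' = -dV/dq componentwise
    return np.array([-(q1 - (q2 - q1)), -(q2 - q1)])

def energy(q, qd):
    return 0.5*(qd @ qd) + 0.5*(q[0]**2 + (q[1] - q[0])**2)

q, qd, dt = np.array([1.0, 0.0]), np.array([0.0, 0.0]), 1e-3
E0 = energy(q, qd)               # the energy constant C
for _ in range(5000):            # simple leapfrog (velocity Verlet) integration
    qd = qd + 0.5*dt*accel(q)
    q = q + dt*qd
    qd = qd + 0.5*dt*accel(q)
print(abs(energy(q, qd) - E0) < 1e-5)   # True: T + V stays at C
```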
Chapter 18
1. The conditions for an incompressible fluid and the continuity equations for a fluid flow state the invariance of two integrals: the integral for volume and the integral for mass, respectively. In other words, here we have two important examples of integral invariants. For the theory of integral invariants and its modern generalizations, the reader is referred to paper 1 under Michal in the references. There the reader will find ample references to the earlier work on integral invariants by H. Poincaré, S. Lie, É. Cartan, and É. Goursat.
2. We saw in Chapter 17 that the Riemann-Christoffel curvature tensor $B^i_{jkl}$ is a zero tensor in any Euclidean space and hence in our three-dimensional Euclidean space. Define the tensor (field) $R_{ijkl}$ by

$$(a)\quad R_{ijkl} = g_{i\alpha}\,B^\alpha_{jkl}.$$

It is evident that

$$(b)\quad R_{ijkl} = 0$$

holds throughout our three-dimensional Euclidean space. It can be shown that there are only six independent equations in (b). Writing Greek indices for the values 1, 2, three of them are included in

$$(c)\quad R_{\alpha3\beta3} = 0;$$

two of them are included in

$$(d)\quad R_{\alpha\beta\gamma3} = 0;$$

and the sixth one is given by

$$(e)\quad R_{1212} = 0.$$

By straightforward calculation it can be shown that

$$(f)\quad \begin{cases} R_{\alpha3\beta3} = -\dfrac12\,\dfrac{\partial^2 g_{\alpha\beta}}{\partial x^3\,\partial x^3} + \dfrac14\,g^{\gamma\delta}\,\dfrac{\partial g_{\alpha\gamma}}{\partial x^3}\,\dfrac{\partial g_{\beta\delta}}{\partial x^3}, \\[2ex] R_{\alpha\beta\gamma3} = \Gamma_{\beta\gamma;\alpha} - \Gamma_{\alpha\gamma;\beta}, \\[1ex] R_{1212} = {}^*R_{1212} + (\Gamma_{11}\Gamma_{22} - \Gamma_{12}\Gamma_{21}), \end{cases}$$

where, it is to be recalled, a semicolon denotes covariant differentiation on the surface $x^3 = 0$ and $\Gamma_{\alpha\beta} = \tfrac12\,\partial g_{\alpha\beta}/\partial x^3$. The ${}^*R_{\alpha\beta\gamma\delta}$ stands for the curvature tensor on the surface. Define

$$b_{\alpha\beta}(x^1, x^2) = \left[\tfrac12\,\frac{\partial g_{\alpha\beta}}{\partial x^3}\right]_{x^3=0}.$$

If we evaluate the last two sets of conditions (f) on the surface $x^3 = 0$, we now see readily that the vanishing of the curvature tensor $R_{ijkl}$ in the three-dimensional Euclidean space implies the following three sets of conditions:

$$(g)\quad \frac{\partial^2 g_{\alpha\beta}}{\partial x^3\,\partial x^3} = \frac12\,g^{\gamma\delta}\,\frac{\partial g_{\alpha\gamma}}{\partial x^3}\,\frac{\partial g_{\beta\delta}}{\partial x^3},$$

$$(h)\quad b_{\alpha\beta;\gamma} - b_{\alpha\gamma;\beta} = 0,$$

$$(i)\quad {}^*R_{\alpha\beta\gamma\delta} = b_{\alpha\delta}\,b_{\beta\gamma} - b_{\alpha\gamma}\,b_{\beta\delta}.$$

Equations (h) and (i) are the well-known Codazzi and Gauss equations of the surface $x^3 = 0$. (Cf. McConnell's Applications of the Absolute Differential Calculus, p. 204, 1931.)

Conversely it can be shown that equations (g) in the space and equations (h) and (i) over the surface $x^3 = 0$ imply $R_{ijkl} = 0$ throughout the space. But we shall not go into this matter any further.

If we differentiate (g) with respect to $x^3$ and if in this result we eliminate the second derivatives by means of (g), we find

$$(j)\quad \frac{\partial^3 g_{\alpha\beta}}{(\partial x^3)^3} = \frac12\,\frac{\partial g^{\gamma\delta}}{\partial x^3}\,\frac{\partial g_{\alpha\gamma}}{\partial x^3}\,\frac{\partial g_{\beta\delta}}{\partial x^3} + \frac12\,g^{\gamma\delta}\,g^{\lambda\tau}\,\frac{\partial g_{\alpha\gamma}}{\partial x^3}\,\frac{\partial g_{\delta\lambda}}{\partial x^3}\,\frac{\partial g_{\tau\beta}}{\partial x^3}.$$

On differentiating the well-known identity

$$g_{\lambda\mu}\,g^{\mu\nu} = \delta^\nu_\lambda,$$

we can solve for $\partial g^{\mu\nu}/\partial x^3$ and obtain

$$\frac{\partial g^{\mu\nu}}{\partial x^3} = -g^{\mu\lambda}\,g^{\nu\tau}\,\frac{\partial g_{\lambda\tau}}{\partial x^3}.$$

If we substitute this expression in (j), we evidently obtain

$$\frac{\partial^3 g_{\alpha\beta}}{(\partial x^3)^3} = 0.$$

Hence the surface tensor components $g_{\alpha\beta}(x^1, x^2, x^3)$ are quadratic expressions in $x^3$. Indeed, if we write

$$c_{\alpha\beta} = \left[\tfrac12\,\frac{\partial^2 g_{\alpha\beta}}{\partial x^3\,\partial x^3}\right]_{x^3=0},\qquad a_{\alpha\beta} = \left[g_{\alpha\beta}\right]_{x^3=0},$$

then

$$(k)\quad g_{\alpha\beta}(x^1, x^2, x^3) = a_{\alpha\beta} + 2b_{\alpha\beta}\,x^3 + c_{\alpha\beta}\,(x^3)^2.$$

A glance at (g) shows that

$$c_{\alpha\beta} = a^{\gamma\delta}\,b_{\alpha\gamma}\,b_{\beta\delta}.$$

Hence $a_{\alpha\beta}$, $b_{\alpha\beta}$, and $c_{\alpha\beta}$ are respectively the tensor coefficients of the first, second, and third fundamental forms of the surface $x^3 = 0$.

It can be shown that we may write (i) in the form

$$(l)\quad K\,\epsilon_{\beta\gamma}\,\epsilon_{\rho\alpha} = b_{\beta\rho}\,b_{\gamma\alpha} - b_{\beta\alpha}\,b_{\gamma\rho},$$

where

$$\epsilon_{\alpha\beta} = \sqrt{a}\,e_{\alpha\beta},\qquad a = |a_{\alpha\beta}|,\qquad e_{11} = e_{22} = 0,\quad e_{12} = 1,\quad e_{21} = -1,$$

and K is the total curvature (Gaussian curvature) of the surface $x^3 = 0$. If we apply the tensor $a^{\gamma\rho}$ to (l), we find

$$(m)\quad c_{\alpha\beta} - 2H\,b_{\alpha\beta} + K\,a_{\alpha\beta} = 0,$$

where H is the mean curvature of the surface $x^3 = 0$,

$$H = \tfrac12\,a^{\alpha\beta}\,b_{\alpha\beta}.$$

If $R_1$ and $R_2$ are the principal radii of curvature, it is well known in surface theory that

$$H = \frac12\left(\frac{1}{R_1} + \frac{1}{R_2}\right),\qquad K = \frac{1}{R_1 R_2}.$$

With these relations, we can easily calculate

$$g = |g_{\alpha\beta}|.$$

For this purpose, we have to calculate the quantities $e^{\alpha\beta}e^{\gamma\delta}a_{\alpha\gamma}a_{\beta\delta}$, $e^{\alpha\beta}e^{\gamma\delta}a_{\alpha\gamma}b_{\beta\delta}$, and $e^{\alpha\beta}e^{\gamma\delta}b_{\alpha\gamma}b_{\beta\delta}$, after similar quantities involving $c_{\alpha\beta}$ have been reduced with the help of (m). Now

$$(n)\quad e^{\alpha\beta}e^{\gamma\delta}\,a_{\alpha\gamma}\,a_{\beta\delta} = 2a,\qquad e^{\alpha\beta}e^{\gamma\delta}\,a_{\alpha\gamma}\,b_{\beta\delta} = 2Ha.$$

Applying $e^{\alpha\beta}e^{\gamma\delta}$ to (l), we obtain

$$(o)\quad e^{\alpha\beta}e^{\gamma\delta}\,b_{\alpha\gamma}\,b_{\beta\delta} = 2Ka.$$

If we substitute (n) and (o) in the determinant g, we find

$$g = a\left[1 + 2Hx^3 + K(x^3)^2\right]^2 = a\left(1 + \frac{x^3}{R_1}\right)^2\left(1 + \frac{x^3}{R_2}\right)^2,$$

with a corresponding expression for the contravariant components $g^{\alpha\beta}$ containing the factors $(1 + x^3/R_1)^{-1}$ and $(1 + x^3/R_2)^{-1}$. Thus, if we expand $g^{\alpha\beta}$ as a power series in $x^3$, the series are convergent if $|x^3| < R_m$, where $R_m$ is the minimum value of the principal radii of curvature of the surface $x^3 = 0$. The same is true of many other geometrical quantities derived from $g_{\alpha\beta}$ and $g^{\alpha\beta}$.
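The determinant formula for $g$ can be verified symbolically; the 2-rowed forms below are invented sample data, with the third fundamental form computed from $c = b\,a^{-1}b$ as in the note.

```python
import sympy as sp

t = sp.symbols('x3')
# assumed sample first and second fundamental forms (symmetric 2x2)
a = sp.Matrix([[2, sp.Rational(1, 3)], [sp.Rational(1, 3), 1]])
b = sp.Matrix([[1, sp.Rational(1, 2)], [sp.Rational(1, 2), 3]])
c = b * a.inv() * b                        # third fundamental form c = b a^{-1} b

H = sp.Rational(1, 2) * (a.inv() * b).trace()   # mean curvature H = (1/2) a^{ab} b_{ab}
K = b.det() / a.det()                           # total (Gaussian) curvature

g = a + 2*t*b + t**2*c                     # g_{ab} = a_{ab} + 2 b_{ab} x3 + c_{ab} x3^2
lhs = sp.expand(g.det())
rhs = sp.expand(a.det() * (1 + 2*H*t + K*t**2)**2)
print(sp.simplify(lhs - rhs))              # 0: det g = a (1 + 2H x3 + K x3^2)^2
```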
REFERENCES FOR PART I
AITKEN, A. C. 1. "Studies in Practical Mathematics. I. The Evaluation of a Certain Triple Matrix Product," Proc. Roy. Soc. Edinburgh, vol. 57 (1937), pp. 172-181.
2. "Studies in Practical Mathematics. II. The Evaluation of the Latent Roots and Latent Vectors of a Matrix," Proc. Roy. Soc. Edinburgh, vol. 57 (1937), pp. 269-304.
3. "On Bernoulli's Numerical Solution of Algebraic Equations," Proc. Roy. Soc. Edinburgh, vol. 46 (1926), p. 289.
BINGHAM, M. D. "A New Method for Obtaining the Inverse Matrix," J. Am. Statistical Assoc., vol. 36 (1941), pp. 530-534.
BLEAKNEY, H. M. J. Aeronaut. Sci., vol. 9 (1941), pp. 56-63.
BÔCHER. Introduction to Higher Algebra.
BORN and JORDAN. Zeitschrift für Physik, vol. 34 (1925), p. 858.
CARSLAW and JAEGER. Operational Calculus in Applied Mathematics, Oxford University Press, 1941.
DEN HARTOG. Mechanical Vibrations, McGraw-Hill Book Co., 1934.
DIRAC, P. A. M. The Principles of Quantum Mechanics, Oxford University Press,
1930.
DUNCAN, W. J., and A. R. COLLAR. 1. "A Method for the Solution of Oscillation Problems by Matrices," Philosophical Magazine and Journal of Science, vol. 17 (1934), pp. 865-909.
2. "Matrices Applied to the Motions of Damped Systems," Philosophical Magazine and Journal of Science, vol. 19 (1935), pp. 197-219.
FANTAPPIÈ, L. "Le calcul des matrices," Comptes rendus (Paris), vol. 186 (1928), pp. 619-621.
FRAZER, DUNCAN, and COLLAR. Elementary Matrices and Some Applications to Dynamics and Differential Equations, Cambridge University Press, 1938.
FRAZER, R. A., and W. J. DUNCAN. "The Flutter of Aeroplane Wings," Reports and Memoranda of the (British) Aeronautical Research Committee, 1155, August, 1928.
HOLZER, H. Die Berechnung der Drehschwingungen, Julius Springer, Berlin.
HOTELLING, H. "Some New Methods in Matrix Calculation," Annals of Mathematical Statistics, vol. 14 (March, 1943), pp. 1-34.
KÁRMÁN, THEODORE VON. 60th Anniversary Volume, California Institute of Technology, 1941.
KÁRMÁN and BIOT. Mathematical Methods in Engineering, McGraw-Hill Book Co., 1940.
KÜSSNER, H. G., and L. SCHWARZ. "The Oscillating Wing with Aerodynamically Balanced Elevator," N.A.C.A. Technical Memorandum 991, October, 1941.
LUABER, PAUL. 1. "Outline of Four Degrees of Freedom Flutter Analysis," S. M. Douglas Aircraft Corporation Report 3572, March 20, 1942.
2. "Three Dimensional Four Degrees of Freedom Flutter Analysis," S. M. Douglas Aircraft Corporation Report 3587, April 4, 1942.
3. "An Iteration Method for Calculating the Latent Roots of a Flutter Matrix," S. M. Douglas Aircraft Corporation Report 3855, Sept. 29, 1942.
4. "A General Three Dimensional Flutter Theory as Applied to Aircraft," S. M. Douglas Aircraft Corporation Report 8127.
5. "Matric Methods for Calculating Wing Divergence Speed, Aileron Reversal Speed, and Effective Control," S. M. Douglas Aircraft Corporation Report 8142, Oct. 13, 1943.
MACDUFFEE, C. C. The Theory of Matrices, Julius Springer, Berlin, 1933; a bibliographical report.
MICHAL, A. D., and R. S. MARTIN. "Some Expansions in Vector Space," Journal de mathématiques pures et appliquées, vol. 13 (1934), pp. 69-91.
OLDENBURGER, RUFUS. 1. "Infinite Powers of Matrices and Characteristic Roots," Duke Mathematical Journal, vol. 6 (1940), pp. 357-361.
2. "Convergence of Hardy Cross's Balancing Process," Journal of Applied Mechanics, 1940.
PIPES, LOUIS A. 1. "Matrices in Engineering," Electrical Engineering, 1937, p. 1177.
2. "Matrix Theory of Oscillatory Networks," Journal of Applied Physics, vol. 10 (1939), pp. 849-860.
3. "The Matrix Theory of Four Terminal Networks," Philosophical Magazine, 1940, pp. 370-395.
4. "The Transient Behavior of Four-Terminal Networks," Philosophical Magazine, 1942, pp. 174-214.
5. "The Matrix Theory of Torsional Oscillations," Journal of Applied Physics, vol. 13, No. 7 (1942), pp. 434-444.
THEODORSEN, T. "General Theory of Aerodynamic Instability and the Mechanism of Flutter," N.A.C.A. Report 496, 1934, pp. 413-433, especially pp. 419-420.
THEODORSEN, T., and I. E. GARRICK. 1. "Mechanism of Flutter," N.A.C.A. Report 685, 1940.
2. "Non-Stationary Flow about a Wing-Aileron-Tab Combination Including Aerodynamic Balance," Advanced Restricted Report 334, Langley Memorial Aeronautical Laboratory, March, 1942.
TIMOSHENKO, S. Vibration Problems in Engineering, D. Van Nostrand Co., New York, 1928.
VOLTERRA, V. Theory of Functionals, Blackie & Son, London, 1930.
WEDDERBURN, J. H. M. Lectures on Matrices (American Mathematical Society Colloquium Publications, 1934); contains a good bibliography on pure matrix theory up to 1934.
WHITTAKER, E. T. A Treatise on the Analytical Dynamics of Particles and Rigid Bodies, Cambridge University Press, 1927.
REFERENCES FOR PART II
APPELL, P. Cours de mécanique rationnelle, Tome V, Gauthier-Villars, 1926.
BRILLOUIN, LÉON. 1. Toronto Mathematical Congress, 1924.
2. "Les lois de l'élasticité sous forme tensorielle valable pour des coordonnées quelconques," Annales de physique, vol. 3 (1925), pp. 251-298.
3. Les tenseurs en mécanique et en élasticité, Masson, Paris, 1938.
CHIEN, W. Z. "The Intrinsic Theory of Thin Shells and Plates, Part I - General Theory," Quarterly of Applied Mathematics, vol. 1 (1944).
COSSERAT, E. and F. "Sur la théorie de l'élasticité," Annales de Toulouse, vol. 10 (1896).
DARBOUX, G. Leçons sur la théorie des surfaces, vol. 2, pp. 452-521, Paris, 1915.
EISENHART, L. P. Riemannian Geometry, Princeton University Press, 1926.
ERTEL, H. Methoden und Probleme der dynamischen Meteorologie, Julius Springer, Berlin, 1938; Edwards Brothers, Ann Arbor, Mich., 1943.
JEFFREYS, H. Cartesian Tensors, Cambridge University Press, 1931.
KÁRMÁN, THEODORE VON. 1. "The Fundamentals of the Statistical Theory of Turbulence," Journal of the Aeronautical Sciences, vol. 4 (1937), pp. 131-138.
2. "Turbulence," Journal of the Royal Aeronautical Society, December, 1937, Twenty-fifth Wilbur Wright Memorial Lecture.
KÁRMÁN, THEODORE VON, and LESLIE HOWARTH. "On the Statistical Theory of Turbulence," Proceedings of the Royal Society of London, vol. 164 (1938), pp. 192-215.
KASNER, E. Differential-Geometric Aspects of Dynamics, American Mathematical Society, New York, 1913.
KIRCHHOFF, G. "Über die Gleichungen des Gleichgewichtes eines elastischen Körpers bei nicht unendlich kleinen Verschiebungen seiner Theile," Sitz. Math.-Nat. Klasse der kaiserlichen Akad. der Wiss., vol. 9 (1852), p. 762.
LAMB, H. Hydrodynamics, Cambridge University Press, 1924.
LAMÉ, G. Leçons sur les coordonnées curvilignes et leurs diverses applications, Paris, 1859.
LEVI-CIVITA, T. The Absolute Differential Calculus (Calculus of Tensors), Blackie and Son, London, 1927.
LOVE, A. E. H. A Treatise on the Mathematical Theory of Elasticity, Cambridge University Press, 1927.
MICHAL, A. D. 1. "Functionals of r-Dimensional Manifolds Admitting Continuous Groups of Transformations," Trans. American Mathematical Society, vol. 29 (1927), pp. 612-646. (This paper includes an introduction to multiple-point tensor fields.)
2. "General Differential Geometries and Related Topics," Bull. American Mathematical Society, vol. 45 (1939), pp. 529-563.
3. "Recent General Trends in Mathematics," Science, vol. 92 (1940), pp. 563-566.
4. "An Analogue of the Maupertuis-Jacobi 'Least' Action Principle for Dynamical Systems of Infinite Degrees of Freedom," Bull. American Mathematical Society, vol. 49 (1943), abstract 288, p. 862.
5. "Physical Models of Some Curved Differential-Geometric Metric Spaces of Infinite Dimensions. I. Wave Motion as a Study in Geodesics," Bull. American Mathematical Society, vol. 49 (1943), abstract 289, p. 862.
6. "Physical Models of Some Curved Differential-Geometric Metric Spaces of Infinite Dimensions. II. Vibrations of Elastic Beams and Other Elastic Media as Studies in Geodesics," Bull. American Mathematical Society, vol. 50 (1944), abstract 154, p. 340.
MICHAL, A. D., and T. Y. THOMAS. 1. "Differential Invariants of Affinely Connected Manifolds," Annals of Mathematics, vol. 28 (1927), pp. 196-236.
2. "Differential Invariants of Relative Quadratic Differential Forms," Annals of Mathematics, vol. 28 (1927), pp. 631-688.
MCCONNELL, A. J. Applications of the Absolute Differential Calculus, Blackie and Son, London, 1931.
MURNAGHAN, F. D. "Finite Deformations of an Elastic Solid," American Journal of Mathematics, vol. 59 (1937), pp. 235-260.
REFERENCES FOR PART II 127
REUTTER, F. "Eine Anwendung des absoluten Parallelismus auf die Schalen-
theorie," Z. angew. Math. Mech., vol. 22 (1942), pp. 87-98.
RICCI, G., and T. LEVI-CIVITA. "Méthodes de calcul différentiel absolu et leurs
applications," Mathematische Annalen, vol. 54 (1900), pp. 125-201.
RIEMANN, B. Über die Hypothesen, welche der Geometrie zu Grunde liegen, 1854
(Gesammelte Werke, 1876, pp. 254-269).
SYNGE, J. L. "On the Geometry of Dynamics," Phil. Trans. Royal Society London,
A, vol. 226 (1926-1927), pp. 31-106.
SYNGE, J. L., and W. Z. CHIEN. "The Intrinsic Theory of Elastic Shells and Plates,"
Theodore von Kármán Anniversary Volume, 1941, pp. 103-120.
THOMAS, T. Y. Differential Invariants of Generalized Spaces, Cambridge University
Press, 1934.
VEBLEN, O. "Invariants of Quadratic Differential Forms," Cambridge Tract 24,
Cambridge University Press, 1927.
VEBLEN, O., and J. H. C. WHITEHEAD. "The Foundations of Differential Geom-
etry," Cambridge Tract 29, Cambridge University Press, 1932.
WEBSTER, A. G. The Dynamics of Particles and of Rigid, Elastic, and Fluid Bodies,
G. E. Stechert & Co., New York, 1942 reprint.
WHITTAKER, E. T. A Treatise on the Analytical Dynamics of Particles and Rigid
Bodies, Cambridge University Press, 1927.
WRIGHT, J. E. "Invariants of Quadratic Differential Forms," Cambridge Tract 9,
Cambridge University Press, 1908.
INDEX
Aircraft flutter, calculation of, 34, 35, 36
description of, 32
matric differential equation of, 33
need for matric theory in calculations
for fast aircraft, 37
AITKEN, 124
APPELL, 125
BINGHAM, 14, 124
BIOT, 34, 113, 124
BLEAKNEY, 124
BÔCHER, 111, 124
BORN, 111, 124
Boundary layer, theory of Prandtl, 109
thickness of, 109
Boundary-layer equations, 104, 105, 106, 107, 108
BRILLOUIN, 117, 125
CARSLAW, 34, 124
CARTAN, 121
CAYLEY-HAMILTON theorem for matrices, 12
CHIEN, x, 105, 127
CHRISTOFFEL symbols, Euclidean, 53
multidimensional Euclidean, 95
proof of law of transformation of,
114, 115
Riemannian, 96
CODAZZI equations for a surface, 122
COLLAR, 1, 34, 35, 124
Compressible fluids, 103, 104
Contraction of a tensor, 116
Correlation tensor field in turbulence,
73, 74
COSSERAT, E. and F., 117
Covariant differentiation, its commutativity in Euclidean spaces, 58, 59
its noncommutativity in general Riemannian spaces, 99, 100
of Euclidean metric tensor, 59
of scalar field, 60
of tensor fields in general, 58
of vector fields, 56, 57
Cramer's rule, 111
Crystalline media, 93
Curvature of a surface, Gaussian, 123
mean, 123
DARBOUX, 126
DEN HARTOG, 124
Differential equations, example, 25, 26
frequencies and amplitudes, 29, 30, 31
harmonic solutions, 28
of small oscillations, 24, 25
solution of linear, 20
with variable coefficients, 21, 22
Differentiation of matrices, 16
DIRAC, 111, 124
DUNCAN, 1, 34, 35, 124
Dynamical systems with n degrees of
freedom, 24, 101
Lagrange's equations of motion of,
24, 101
EISENHART, 118, 126
Elastic deformation, change in volume under, 79, 80, 81
matrix methods in formulation of
finite, 38, 39, 40
tensor methods in formulation of
finite, 75, 76, 77
Elastic potential, 91, 92
Equation of continuity, 69, 70, 71, 103
ERTEL, 114, 116, 126
Euclidean Christoffel symbols, 58
alternative form for, 54
exercise on, 55
for Euclidean plane, 54
law of transformation of, 58
multidimensional, 95
Euclidean metric tensor, 43
exercises on, 47
its law of transformation, 46
Euclidean spaces, 42
multidimensional, 95
Eulerian strain tensor, 77, 78
EULER-LAGRANGE equations for geodesics, 100
FANTAPPIÈ, 112, 124
Fluids, compressible, 103, 104
incompressible, 103, 104
boundary-layer equations for, 104, 105, 106
Navier-Stokes equations for incompressible viscous, 104
Navier-Stokes equations for viscous, 69, 70
FRAZER, 1, 34, 35, 124
Functionals, 18, 112
Fundamental forms of a surface, 122
GARRICK, 36, 125
GAUSS, 43
equations for a surface, 122
General theory of relativity, 42
Geodesic coordinates, 97
Geodesics, 100
dynamical trajectories as, 118, 119,
120, 121
EulerLagrange equations for, 100
GOURSAT, 118, 121
HOLZER, 124
Homogeneous strains, 88
fundamental theorem on, 84, 85, 86
Hooke's law, 94
HOTELLING, 111, 124
HOWARTH, 126
Hydrodynamics, 42
Eulerian equations, 116
Lagrangean equations, 116
Integral invariants, 121
Isotropic medium, 92, 93, 94
condition for, 92, 93
stress-strain relations for, 93, 94
Isotropic strain, 84
JAEGER, 34, 124
JEFFREYS, 126
JORDAN, 111, 124
KÁRMÁN, x, 34, 38, 74, 113, 124, 126
KASNER, 126
Killing's differential equations, 88
Kinetic potential, 101
KIRCHHOFF, 117, 126
KÜSSNER, 124
Lagrange's equations of motion, 24, 101,
102
Lagrangean strain tensor, 77, 78
LAMB, 116, 126
Laplace equation, 62, 63
for vector fields, 65
in curvilinear coordinates, 63, 64
LEVI-CIVITA, 117, 127
LIE, 121
LIEBER, ix, 124, 125
LIN, x, 104, 105
Line element, 42
exercises on, 45, 46, 47
geodesic form of, 105
in curvilinear coordinates, 45, 46, 47
Riemannian, 96
Linear algebraic equations, solutions, 9
numerical, 14
punched-card methods, 14
Linear differential equations, application of matric exponential to, 20
in matrices, 21, 22
method of successive substitutions in
solution of, 22
with constant coefficients, 20
with variable coefficients, 21, 22
LOVE, 126
MACDUFFEE, 125
MARTIN, 112, 125
Matric exponential, application to systems of linear differential equations with constant coefficients, 20
convergence of, 112
definition of, 15
properties of, 16
Matric power series, definition of, 15
theorem on computation of, 18
Matrix, addition, 2
adjoint of, 39
application of inverse, to solution of
linear algebraic equations, 9
Cayley-Hamilton theorem, 12
characteristic equation of, 12
characteristic function of, 12
characteristic roots of, 12
column, 2
definition of, 1
differentiation of, with respect to numerical variable, 16
Matrix, dynamical, 29
example of noncommutativity of matrix multiplication, 5
index and power laws, 11
integration of, with respect to numerical variable, 17
inverse, 8
multiplication, 5
by number, 11
norm of, 112
of same type, 2
order,2
polynomials in a, 11
row, 2
rule for computation of inverse, 13
square, 2
strain, 39, 40, 41
symmetric, 40
trace of, 13, 111
unit, 4
zero, 3
MCCONNELL, 122, 126
MICHAL, ix, x, 112, 114, 116, 118, 121, 125, 126
MILLIKAN, x
Multiple-point tensor fields, 71, 72, 73, 74
in elasticity theory, 76
in Euclidean geometry, 71, 72, 73, 74
in turbulence, 73, 74
MURNAGHAN, 117, 126
Navier-Stokes differential equations for a viscous fluid, 69
in curvilinear coordinates, 70
subject to condition of incompressibility, 104
Norm of a matrix, 112
Normal coordinates, 116
Normed linear ring, 112
OLDENBURGER, 125
PIPES, 125
Plastic deformation, 114
POINCARÉ, 121
POISSON equation, 66, 67, 68
Principal radii of curvature of boundary-layer surface, 108
POTT, ix
Quadratic differential form, 45, 46
REUTTER, 127
RICCI, 117, 127
RIEMANN, 127
Riemann-Christoffel curvature tensor, 99, 100
Riemannian geometry, 96, 97, 98, 99, 100
applications to boundary-layer theory, 103, 104, 105, 106, 107, 108, 109, 110, 121, 123
applications to classical dynamics, 101, 102
example, 98
infinite dimensional, x, 118
Rigid displacement, 40, 77
Scalar density, 60, 61, 67
Scalar field, 49
relative, 60, 61, 62
SCHWARZ, 124
Strain invariant, 82, 117, 118
Strain matrix, 39, 40, 41
in infinitesimal theory, 41
Strain tensor, 48, 49
Eulerian, 77
Lagrangean, 77
variation of, 86, 87, 88
Stress-strain relations for isotropic medium, 93, 94
Stress tensor, 89, 90
symmetry of, 91
Stress vector, 89, 90
Summation convention, 4, 43
Sylvester's theorem, 112, 113
SYNGE, 127
Tensor analysis, 56
applications to boundary-layer theory, 103, 104, 105, 106, 107, 108, 109, 110, 121, 122, 123
applications to classical dynamics,
101, 102, 118, 119, 120, 121
in multidimensional Euclidean spaces,
95
in Riemannian spaces, 97, 99, 100
Tensor field, contraction of a, 116
covariant differentiation of, 57, 58
exercises on, 59
general definition of, 57, 58
property of a, 59
relative, 60, 117
Tensor field, Riemann-Christoffel curvature, 99, 100
weight of, 60
Tensor field of rank two, contravariant, 50
covariant, 50
mixed, 60
THEODORSEN, 33, 36, 125
THOMAS, 114, 116, 118, 126, 127
TIMOSHENKO, 125
Turbulence, 73, 74
correlation tensor field in, 73, 74
VEBLEN, 114, 127
Vector field, contravariant, 49
covariant, 49
in rectangular cartesian coordinates, 50
Velocity field, 51
divergence of, 103, 117
VOLTERRA, 112, 126
Wave equation, 65, 66
WEBSTER, 116, 127
WEDDERBURN, 125
WHITEHEAD, 114, 127
WHITTAKER, 28, 126, 127
WRIGHT, 127