
Matrix (mathematics)

"Matrix theory" redirects here. For the physics topic, see Matrix string theory.
Specific elements of a matrix are often denoted by a variable with two subscripts. For instance, a2,1 represents the element at the second row and first column of a matrix A.
In mathematics, a matrix (plural matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. The individual items in a matrix are called its elements or entries. An example of a matrix with 2 rows and 3 columns is
Matrices of the same size can be added or subtracted element by element. The rule for matrix multiplication is more complicated, and two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. For example, the rotation of vectors in three-dimensional space is a linear transformation. If R is a rotation matrix and v is a column vector (a matrix with only one column) describing the position of a point in space, the product Rv is a column vector describing the position of that point after a rotation. The product of two matrices is a matrix that represents the composition of two linear transformations. Another application of matrices is in the solution of a system of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant. For example, a square matrix has an inverse if and only if its determinant is not zero. Eigenvalues and eigenvectors provide insight into the geometry of linear transformations.
Matrices find applications in most scientific fields. In every branch of physics, including classical mechanics, optics, electromagnetism, quantum mechanics, and quantum electrodynamics, they are used to study physical phenomena, such as the motion of rigid bodies. In computer graphics, they are used to project a 3-dimensional image onto a 2-dimensional screen. In probability theory and statistics, stochastic matrices are used to describe sets of probabilities; for instance, they are used within the PageRank algorithm that ranks the pages in a Google search.[1] Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions.
A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations, a subject that is centuries old and is today an expanding area of research. Matrix decomposition methods simplify computations, both theoretically and practically. Algorithms that are tailored to particular matrix structures, e.g. sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other computations. Infinite matrices occur in planetary theory and in atomic theory. A simple example is the matrix representing the derivative operator, which acts on the Taylor series of a function.
Definition
A matrix is a rectangular arrangement of mathematical expressions that can be simply numbers.[2]
Commonly the entries of the matrix are written in a rectangular arrangement in the form of a column of m rows:
For example,
An alternative notation uses large parentheses instead of box brackets.
The horizontal and vertical lines in a matrix are called rows and columns, respectively. The numbers in the matrix are called its entries or its elements. To specify the size of a matrix, a matrix with m rows and n columns is called an m-by-n matrix or m × n matrix, while m and n are called its dimensions. The above is a 4-by-3 matrix.
A matrix with one row (a 1 × n matrix) is called a row vector, and a matrix with one column (an m × 1 matrix) is called a column vector. Any row or column of a matrix determines a row or column vector, obtained by removing all other rows or columns respectively from the matrix. For example, the row vector for the third row of the above matrix A is
When a row or column of a matrix is interpreted as a value, this refers to the corresponding row or column vector. For instance one may say that two different rows of a matrix are equal, meaning they determine the same row vector. In some cases the value of a row or column should be interpreted just as a sequence of values (an element of Rn if entries are real numbers) rather than as a matrix, for instance when saying that the rows of a matrix are equal to the corresponding columns of its transpose matrix.
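As a small illustration (the matrix values below are assumptions chosen only to show the slicing), a row or column vector can be extracted from a matrix like this:

    import numpy as np

    A = np.arange(12).reshape(4, 3)        # an illustrative 4-by-3 matrix
    row_3 = A[2, :].reshape(1, 3)          # the third row as a 1-by-3 row vector
    col_1 = A[:, 0].reshape(4, 1)          # the first column as a 4-by-1 column vector
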
Most of this article focuses on real and complex matrices, i.e., matrices whose elements are real
or complex numbers, respectively. More general types of entries are discussed below.
Notation
The specifics of matrix notation vary widely, with some prevailing trends. Matrices are usually denoted using upper-case letters, while the corresponding lower-case letters, with two subscript indices, represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style.
The entry in the i-th row and the j-th column of a matrix is typically referred to as the i,j, (i,j), or (i,j)th entry of the matrix. For example, the (1,3) entry of the above matrix A is 5. The (i,j)th entry of a matrix A is most commonly written as ai,j. Alternative notations for that entry are A[i,j] or Ai,j.
Sometimes a matrix is referred to by giving a formula for its (i,j)th entry, often with double parentheses around the formula for the entry; for example, if the (i,j)th entry of A were given by aij, A would be denoted ((aij)).
An asterisk is commonly used to refer to whole rows or columns in a matrix. For example, ai,∗ refers to the i-th row of A, and a∗,j refers to the j-th column of A. The set of all m-by-n matrices is denoted M(m, n).
A common shorthand is
A = [ai,j]i = 1,...,m; j = 1,...,n, or more briefly A = [ai,j]m×n,
to define an m × n matrix A. Usually the entries ai,j are defined separately for all integers 1 ≤ i ≤ m and 1 ≤ j ≤ n. They can however sometimes be given by one formula; for example the 3-by-4 matrix
can alternatively be specified by A = [i − j]i = 1,2,3; j = 1,...,4, or simply A = ((i − j)), where the size of the matrix is understood.
Some programming languages start the numbering of rows and columns at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1.[3] This article follows the more common convention in mathematical writing where enumeration starts from 1.
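A sketch of such a formula-based definition in NumPy, which indexes from 0; the shift by one restores the 1-based mathematical convention used here, and the helper name is an assumption:

    import numpy as np

    def matrix_from_formula(m, n):
        """Build the m-by-n matrix with mathematical entry a_{i,j} = i - j."""
        return np.fromfunction(lambda i, j: (i + 1) - (j + 1), (m, n))

    A = matrix_from_formula(3, 4)
    print(A)
    print(A[0, 2])    # NumPy's A[0, 2] is the mathematical entry a_{1,3} = 1 - 3 = -2
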
Basic operations
Matrix multiplication, linear equations and linear transformations
Square matrices
A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. A square matrix A is called invertible or non-singular if there exists a matrix B such that
AB = In.[14]
This is equivalent to BA = In.[15] Moreover, if B exists, it is unique and is called the inverse matrix of A, denoted A−1.
The entries Ai,i form the main diagonal of a matrix. The trace, tr(A), of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors: tr(AB) = tr(BA).[16]
Also, the trace of a matrix is equal to that of its transpose, i.e., tr(A) = tr(AT).
If all entries outside the main diagonal are zero, A is called a diagonal matrix. If only all entries above (below) the main diagonal are zero, A is called a lower triangular matrix (upper triangular matrix, respectively). For example, if n = 3, they look like
(diagonal), (lower) and (upper triangular matrix).
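In code, the diagonal and the two triangular parts of a matrix can be obtained directly; a minimal NumPy sketch with assumed values:

    import numpy as np

    A = np.arange(1, 10).reshape(3, 3)   # an illustrative 3-by-3 matrix
    D = np.diag(np.diag(A))              # diagonal matrix: everything off the main diagonal set to zero
    L = np.tril(A)                       # lower triangular part: entries above the diagonal set to zero
    U = np.triu(A)                       # upper triangular part: entries below the diagonal set to zero
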
Determinant
Main article: Determinant
A linear transformation on R2 given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors to a clockwise one.
The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R2) or volume (in R3) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.
The determinant of a 2-by-2 matrix with rows (a, b) and (c, d) is ad − bc. When the determinant is equal to one, then the matrix represents an equi-areal mapping. The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalises these two formulae to all dimensions.[17]
The determinant of a product of square matrices equals the product of their determinants: det(AB) = det(A) · det(B).[18] Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.[19] Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, i.e., determinants of smaller matrices.[20] This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.[21]
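The determinant rules stated above are easy to check numerically; a small sketch with randomly chosen matrices (the seed and sizes are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    # Multiplicativity: det(AB) = det(A) * det(B)
    print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))
    # For a triangular matrix the determinant is the product of the diagonal entries
    T = np.triu(A)
    print(np.isclose(np.linalg.det(T), np.prod(np.diag(T))))
    # Interchanging two rows multiplies the determinant by -1
    print(np.isclose(np.linalg.det(A[[1, 0, 2], :]), -np.linalg.det(A)))
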
Eigenvalues and eigenvectors
Main article: Eigenvalues and eigenvectors
A number λ and a non-zero vector v satisfying
Av = λv
are called an eigenvalue and an eigenvector of A, respectively.[nb 1][22] The number λ is an eigenvalue of an n×n-matrix A if and only if A − λIn is not invertible, which is equivalent to
det(A − λIn) = 0.[23]
The polynomial pA in an indeterminate X given by evaluating the determinant det(XIn − A) is called the characteristic polynomial of A. It is a monic polynomial of degree n. Therefore the polynomial equation pA(λ) = 0 has at most n different solutions, i.e., eigenvalues of the matrix.[24] They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, pA(A) = 0, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.
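A brief numerical illustration of eigenvalues, the characteristic polynomial, and the Cayley–Hamilton theorem; the 2-by-2 matrix is an assumed example:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    print(np.linalg.eigvals(A))          # eigenvalues 3 and 1

    p = np.poly(A)                       # coefficients of the characteristic polynomial det(xI - A)
    # Cayley-Hamilton: substituting A into its own characteristic polynomial gives the zero matrix
    pA = p[0] * (A @ A) + p[1] * A + p[2] * np.eye(2)
    print(np.allclose(pA, np.zeros((2, 2))))   # True
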
Symmetry
A square matrix A that is equal to its transpose, i.e., A = AT, is a symmetric matrix. If instead A is equal to the negative of its transpose, i.e., A = −AT, then A is a skew-symmetric matrix. In complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A∗ = A, where the star or asterisk denotes the conjugate transpose of the matrix, i.e., the transpose of the complex conjugate of A.
By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.[25] This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns; see below.
Definiteness
Matrix A:  [1/4 0; 0 1/4]  |  [1/4 0; 0 −1/4]
definiteness:  positive definite  |  indefinite
associated quadratic form QA(x, y):  1/4 x² + 1/4 y²  |  1/4 x² − 1/4 y²
set of vectors (x, y) such that QA(x, y) = 1:  Ellipse  |  Hyperbola
A symmetric n×n-matrix is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors x ∈ Rn the associated quadratic form given by
Q(x) = xTAx
takes only positive values (respectively only negative values; both some negative and some positive values).[26] If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive.[27] The table at the right shows two possibilities for 2-by-2 matrices.
Allowing as input two different vectors instead yields the bilinear form associated to A:
BA(x, y) = xTAy.[28]
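Definiteness can be tested through the signs of the eigenvalues, as stated above; a sketch using matrices chosen to match the two quadratic forms in the table (the test vector is an assumption):

    import numpy as np

    A = np.array([[0.25, 0.0], [0.0,  0.25]])   # quadratic form 1/4 x^2 + 1/4 y^2
    B = np.array([[0.25, 0.0], [0.0, -0.25]])   # quadratic form 1/4 x^2 - 1/4 y^2

    def definiteness(M):
        """Classify a symmetric matrix by the signs of its eigenvalues."""
        w = np.linalg.eigvalsh(M)
        if np.all(w > 0):
            return "positive definite"
        if np.all(w < 0):
            return "negative definite"
        if np.all(w >= 0):
            return "positive semidefinite"
        if np.all(w <= 0):
            return "negative semidefinite"
        return "indefinite"

    x = np.array([1.0, 2.0])
    print(definiteness(A), x @ A @ x)    # positive definite, and Q_A(x) > 0
    print(definiteness(B), x @ B @ x)    # indefinite; Q_B(x) happens to be negative here
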
Computational aspects
Abstract algebraic aspects and generalizations
Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension is tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realised as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers.[41] Matrices, subject to certain requirements, tend to form groups known as matrix groups.
Matrices with more general entries
This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries than real or complex numbers. As a first step of generalization, any field, i.e., a set where addition, subtraction, multiplication and division operations are defined and well-behaved, may be used instead of R or C, for example rational numbers or finite fields. For example, coding theory makes use of matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial they may exist only in a larger field than that of the coefficients of the matrix; for instance they may be complex in the case of a matrix with real entries. The possibility to reinterpret the entries of a matrix as elements of a larger field (e.g., to view a real matrix as a complex matrix whose entries happen to be all real) then allows considering each square matrix to possess a full set of eigenvalues. Alternatively one can consider only matrices with entries in an algebraically closed field, such as C, from the outset.
More generally, abstract algebra makes great use of matrices with entries in a ring R.[42] Rings are a more general notion than fields in that a division operation need not exist. The very same addition and multiplication operations of matrices extend to this setting, too. The set M(n, R) of all square n-by-n matrices over R is a ring called matrix ring, isomorphic to the endomorphism ring of the left R-module Rn.[43] If the ring R is commutative, i.e., its multiplication is commutative, then M(n, R) is a unitary noncommutative (unless n = 1) associative algebra over R. The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in R, generalising the situation over a field F, where every nonzero element is invertible.[44] Matrices over superrings are called supermatrices.[45]
Matrices do not always have all their entries in the same ring, or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ordinary ring; but their sizes must fulfil certain compatibility conditions.
Relationship to linear maps
Linear maps Rn → Rm are equivalent to m-by-n matrices, as described above. More generally, any linear map f: V → W between finite-dimensional vector spaces can be described by a matrix A = (ai,j), after choosing bases v1, ..., vn of V, and w1, ..., wm of W (so n is the dimension of V and m is the dimension of W), which is such that
f(vj) = a1,j w1 + ... + am,j wm   for j = 1, ..., n.
In other words, column j of A expresses the image of vj in terms of the basis vectors wi of W; thus this relation uniquely determines the entries of the matrix A. Note that the matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent matrices.[46] Many of the above concrete notions can be reinterpreted in this light; for example, the transpose matrix AT describes the transpose of the linear map given by A, with respect to the dual bases.[47]
More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules Rm and Rn for an arbitrary ring R with unity. When n = m, composition of these maps is possible, and this gives rise to the matrix ring of n×n matrices representing the endomorphism ring of Rn.
Matrix groups
Main article: Matrix group
A group is a mathematical structure consisting of a set of objects together with a binary operation, i.e., an operation combining any two objects to a third, subject to certain requirements.[48] A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group.[nb 2][49] Since in a group every element has to be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups.
Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (i.e., a smaller group contained in) their general linear group, called a special linear group.[50] Orthogonal matrices, determined by the condition
MTM = I,
form the orthogonal group.[51] They are called orthogonal since the associated linear transformations of Rn preserve angles in the sense that the scalar product of two vectors is unchanged after applying M to them:
(Mv) · (Mw) = v · w.[52]
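A quick numerical check of the defining property of orthogonal matrices, using a plane rotation as the example (the angle and vectors are assumptions):

    import numpy as np

    theta = 0.7
    M = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # a rotation matrix, hence orthogonal

    print(np.allclose(M.T @ M, np.eye(2)))            # M^T M = I
    v = np.array([1.0, 2.0])
    w = np.array([3.0, -1.0])
    print(np.isclose((M @ v) @ (M @ w), v @ w))       # the scalar product is preserved
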
Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group.[53] General groups can be studied using matrix groups, which are comparatively well understood, by means of representation theory.[54]
Infinite matrices
It is also possible to consider matrices with infinitely many rows and/or columns[55] even if, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication and transposition can still be defined without problem; however matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general.
If R is any ring with unity, then the ring of endomorphisms of M = ⊕i∈I R as a right R-module is isomorphic to the ring of column finite matrices whose entries are indexed by I × I and whose columns each contain only finitely many nonzero entries. The endomorphisms of M considered as a left R-module result in an analogous object, the row finite matrices whose rows each only have finitely many nonzero entries.
If infinite matrices are used to describe linear maps, then only those matrices can be used all of whose columns have but a finite number of nonzero entries, for the following reason. For a matrix A to describe a linear map f: V → W, bases for both spaces must have been chosen; recall that by definition this means that every vector in the space can be written uniquely as a (finite) linear combination of basis vectors, so that, written as a (column) vector v of coefficients, only finitely many entries vi are nonzero. Now the columns of A describe the images by f of individual basis vectors of V in the basis of W, which is only meaningful if these columns have only finitely many nonzero entries. There is no restriction on the rows of A however: in the product A·v there are only finitely many nonzero coefficients of v involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined. Moreover, this amounts to forming a linear combination of the columns of A that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries, because each of those columns does. One also sees that products of two matrices of the given type are well defined (provided as usual that the column-index and row-index sets match), are again of the same type, and correspond to the composition of linear maps.
If R is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously of course, the matrices whose row sums are absolutely convergent series also form a ring.
In that vein, infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that have to be imposed. However, the explicit point of view of matrices tends to obfuscate the matter,[nb 3] and the abstract and more powerful tools of functional analysis can be used instead.
Empty matrices
An empty matrix is a matrix in which the number of rows or columns (or both) is zero.[56][57] Empty matrices help dealing with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1, as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite-dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants.
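NumPy is one of the systems that can compute with empty matrices; a minimal sketch of the 3-by-0 and 0-by-3 example above:

    import numpy as np

    A = np.zeros((3, 0))            # a 3-by-0 matrix
    B = np.zeros((0, 3))            # a 0-by-3 matrix
    print((A @ B).shape, A @ B)     # (3, 3): the 3-by-3 zero matrix
    print((B @ A).shape)            # (0, 0): an empty (0-by-0) matrix
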
Applications
There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of alternatives the players choose.[58] Text mining and automated thesaurus compilation make use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.[59]
Complex numbers can be represented by particular real 2-by-2 matrices via the correspondence that sends a + ib to the 2-by-2 matrix with first row (a, −b) and second row (b, a), under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions.[60]
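A sketch of this correspondence; the sign convention (a + ib mapped to rows (a, −b) and (b, a)) is one common choice, and the sample numbers are assumptions:

    import numpy as np

    def as_matrix(z):
        """Represent the complex number z = a + bi by a real 2-by-2 matrix."""
        return np.array([[z.real, -z.imag],
                         [z.imag,  z.real]])

    z, w = 1 + 2j, 3 - 1j
    print(np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w)))   # multiplication corresponds
    print(np.allclose(as_matrix(z) + as_matrix(w), as_matrix(z + w)))   # addition corresponds
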
Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break.[61] Computer graphics uses matrices both to represent objects and to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation.[62] Matrices over a polynomial ring are important in the study of control theory.
Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.
Graph theory
An undirected graph with adjacency matrix
The adjacency matrix of a finite graph is a basic notion of graph theory.[63] It records which vertices of the graph are connected by an edge. Matrices containing just two different values (0 and 1, meaning for example "yes" and "no") are called logical matrices. The distance (or cost) matrix contains information about the distances of the edges.[64] These concepts can be applied to websites connected by hyperlinks, or cities connected by roads etc., in which case (unless the road network is extremely dense) the matrices tend to be sparse, i.e., contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.
Analysis and geometry
The Hessian matrix of a differentiable function f: Rn → R consists of the second derivatives of f with respect to the several coordinate directions, i.e.[65]
H(f) = [∂²f/(∂xi ∂xj)].
At the saddle point (x = 0, y = 0) (red) of the function f(x, y) = x² − y², the Hessian matrix is indefinite.
It encodes information about the local growth behaviour of the function: given a critical point x = (x1, ..., xn), i.e., a point where the first partial derivatives of f vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above).[66]
Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map f: Rn → Rm. If f1, ..., fm denote the components of f, then the Jacobi matrix is defined as[67]
J(f) = [∂fi/∂xj] for 1 ≤ i ≤ m and 1 ≤ j ≤ n.
If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.[68]
Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has a decisive influence on the set of possible solutions of the equation in question.[69]
The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.[70]
Probability theory and statistics
Two different Markov chains. The chart depicts the number of particles (of a total of 1000) in state "2". Both limiting values can be determined from the transition matrices, which are given by (red) and (black).
Stochastic matrices are square matrices whose rows are probability vectors, i.e., whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states.[71] A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain like absorbing states, i.e., states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.[72]
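A minimal sketch of a two-state Markov chain; the transition probabilities are assumptions chosen only to show how iterating the stochastic matrix approaches a limiting distribution:

    import numpy as np

    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])                  # row-stochastic: each row sums to one
    print(np.allclose(P.sum(axis=1), 1.0))

    dist = np.array([1.0, 0.0])                 # start with every particle in the first state
    for _ in range(100):
        dist = dist @ P                         # one step of the Markov chain
    print(dist)                                 # approaches the stationary distribution, about [0.833, 0.167]
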
Statistics also makes use of matrices in many different forms.[73] Descriptive statistics is concerned with describing data sets, which can often be represented in matrix form, by reducing the amount of data. The covariance matrix encodes the mutual variance of several random variables.[74] Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (x1, y1), (x2, y2), ..., (xN, yN) by a linear function
yi ≈ axi + b, i = 1, ..., N,
which can be formulated in terms of matrices, related to the singular value decomposition of matrices.[75]
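A sketch of linear least squares with NumPy; the data points are assumed values used only to demonstrate the matrix formulation:

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.1, 2.9, 5.2, 7.1])

    X = np.column_stack([x, np.ones_like(x)])        # design matrix with columns x and 1
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)   # minimise the norm of X [a, b] - y
    print(a, b)                                       # slope and intercept of the best-fitting line
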
Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as the matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.[76][77]
Symmetries and transformations in physics
Further information: Symmetry in physics
Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.[78] For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to, the basic quark states that define particles with specific and distinct masses.[79]
Linear combinations of quantum states
The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states.[80] This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates.[81]
Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.[82]
Normal modes
A general application of matrices in physics is to the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms.[83] They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.[84]
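As a sketch, consider two equal unit masses joined by three identical springs; the stiffness matrix below is an assumed example, and its eigenvectors are the normal modes:

    import numpy as np

    # With unit masses, the equations of motion are x'' = -K x for this stiffness matrix K.
    K = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])

    omega_squared, modes = np.linalg.eigh(K)   # eigenvalues are the squared normal-mode frequencies
    print(np.sqrt(omega_squared))              # about 1.0 and 1.73: the in-phase and out-of-phase modes
    print(modes)                               # columns are the normal modes (eigenvectors)
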
Geometrical optics
Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix called a ray transfer matrix: the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. Actually, there are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components' matrices.[85]
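A sketch of ray transfer matrices under the common convention that the ray vector holds (distance from the axis, slope); the focal length and the distances are assumptions:

    import numpy as np

    def translation(d):
        """Free propagation over a distance d along the optical axis."""
        return np.array([[1.0, d], [0.0, 1.0]])

    def thin_lens(f):
        """Refraction by an idealised thin lens of focal length f."""
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    # Propagate 50 mm, pass through a 100 mm lens, then propagate a further 100 mm.
    system = translation(100.0) @ thin_lens(100.0) @ translation(50.0)
    ray_in = np.array([1.0, 0.0])     # 1 mm above the axis, travelling parallel to it
    print(system @ ray_in)            # [0, -0.01]: the ray crosses the axis at the focal plane
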
Electronics
Traditional mesh analysis in electronics leads to a system of linear equations that can be described with a matrix.
The behaviour of many electronic components can be described using matrices. Let A be a 2-dimensional vector with the component's input voltage v1 and input current i1 as its elements, and let B be a 2-dimensional vector with the component's output voltage v2 and output current i2 as its elements. Then the behaviour of the electronic component can be described by B = H · A, where H is a 2 x 2 matrix containing one impedance element (h12), one admittance element (h21) and two dimensionless elements (h11 and h22). Calculating a circuit now reduces to multiplying matrices.
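A minimal sketch of the B = H · A description with the units named in the text; all numerical values are assumptions, not measured component parameters:

    import numpy as np

    H = np.array([[0.98,   1.0e3],   # h11 dimensionless, h12 an impedance (ohms)
                  [1.0e-4, 0.95]])   # h21 an admittance (siemens), h22 dimensionless

    A = np.array([0.01, 1.0e-3])     # input voltage v1 (volts) and input current i1 (amperes)
    B = H @ A                        # output voltage v2 and output current i2
    print(B)
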
History
