
Math Primer

This notebook goes through the prerequisite math needed before starting a PhD
program in economics. These are notes on Schaum's Outline of Mathematical
Economics. This guide is an outline of an outline: the subjects summarized here
are written without extensive proofs behind the propositions, and only the
necessary conclusions have been highlighted. I have also used terms and
definitions loosely in order to highlight the intuition behind these techniques
and make the notes more compact. I have sacrificed detail for brevity.
Some helpful links:
UC Irvine OCW: Math for Economists Videos
Math Primer
1. Properties of Functions
Linear Functions
Exponential and Logarithmic Functions
Limits and Derivatives
Differentials
2. Finding the Maximum and Minimum of a Function
Convex Functions
Concave Functions
Generalization
3. Multivariate Calculus
Partial Derivatives
Examples
Optimization of Multivariable Functions
Optimizing Functions using the Lagrange Multiplier
General Example
Total and Partial Differentials
Total Derivatives
Implicit Function Rules
Inverse Function Rules
4. Basics of Linear Algebra
Definitions and Terms
Matrix Addition and Subtraction
Scalar Multiplication
Matrix Multiplication

Commutative, Associative, and Distributive Laws in Matrix Algebra
The Special Matrices
Systems of Equations as Matrices
5. Inverting Matrices
Calculating Determinants
2 x 2 Case
3 x 3 Case
Laplace Expansion
Properties of Determinants
The Inverse of a Matrix
Solving a System of Equations
Cramer's Rule
1. Properties of Functions
A function maps an element x from a set X to one and only one element in a set
Y. Another way we can phrase this is to say that the function takes an input and
returns an output. The most common notation is y = f(x), where f : X → Y. Here
are some common functions.
Linear Functions
A linear function in one variable has the form y = mx + b, where m is the slope
of the line and b is the y-intercept. Through algebraic manipulation, we can
also find the x-intercept, x = −b/m.
Exponential and Logarithmic Functions
Logarithmic functions solve equations of the form b^x = c. They answer the
question: to which exponent x must the base b be raised to give you c? This is
rewritten as log_b(c) = x. It tells you how many factors of b give you c. What
this means is that exponentials and logarithms are inverses of each other. This
helps us undo one with the other and vice versa. They also turn multiplication
into addition and division into subtraction. Because of these nice properties,
we sometimes log-linearize functions to get them into nicer forms.
Limits and Derivatives
The limit of a function is the value the function approaches as its input
approaches a certain value. For example, we can see visually that as x
approaches 0, the function x² also approaches zero. Notationally, we write the
previous idea as lim_(x→0) x² = 0.
We use this concept of a limit to define the derivative of a function. The
derivative of a function is the slope of the function at a certain point. From
algebra, we know that the slope of a linear function is the change in y over
the change in x, Δy/Δx. While the slope of a linear function is constant, the
slope of other functions is more difficult to find. We start out by estimating
the slope of the function at an arbitrary point x as the slope of the function
between x and x + Δx. The slope of the function between these two points is

[f(x + Δx) − f(x)] / [(x + Δx) − x]

Now we use the concept of a limit. We ask ourselves: what would the slope
become as Δx becomes infinitesimally small (this means it will get closer and
closer to 0)? We are trying to solve:

f'(x) = lim_(Δx→0) [f(x + Δx) − f(x)] / Δx

This is the idea of the derivative, which is denoted dy/dx or f'(x). It is just
the rate of change of a function, the same as the slope of a linear function.
It answers the question: how fast does y change if x changes? Luckily, we have
techniques for taking the derivative of functions that are fairly simple.
Differentials
Usually, the derivative dy/dx is presented as a single symbol denoting the
limit definition. However, dy/dx can also be treated as a ratio of
differentials, with dy as the differential of y and dx as the differential of
x. Here is an example of how we find the differential of the function
y = 2x² + 5x + 4.
First, we take the derivative of the function, which tells us the rate that y
changes for a small change in x:

dy/dx = 4x + 5

We then multiply the rate that y changes for a small change in x by the
infinitesimally small change in x, which is denoted by dx. This allows us to
find the resulting infinitesimally small change in y, which is denoted by dy:

dy = (4x + 5)dx

We can also say the following: the change in y is equal to the rate at which y
changes for a small change in x, times a small change in x.
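The limit definition can be checked numerically. A minimal sketch using the example function y = 2x² + 5x + 4 from above: as Δx shrinks, the difference quotient approaches the exact derivative 4x + 5.

```python
# Approximate dy/dx for y = 2x^2 + 5x + 4 with the difference quotient,
# then compare against the exact derivative 4x + 5.

def f(x):
    return 2 * x**2 + 5 * x + 4

def difference_quotient(f, x, dx):
    """Slope of f between x and x + dx: [f(x + dx) - f(x)] / dx."""
    return (f(x + dx) - f(x)) / dx

x = 3.0
exact = 4 * x + 5  # 17.0

# As dx shrinks, the approximation approaches the exact slope.
for dx in (1.0, 0.1, 0.001):
    approx = difference_quotient(f, x, dx)
    print(dx, approx, abs(approx - exact))
```

The error here shrinks in proportion to Δx (it equals 2Δx exactly for this quadratic), which is the limit at work.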
2. Finding the Maximum and Minimum
of a Function
Convex Functions
A convex function has a v-shape, and its first derivative is increasing. An
example of a convex function is y = x². As you can see, its derivative, 2x, is
an increasing function. More generally, we can tell that a function is
increasing (decreasing) if its derivative is positive (negative). The
derivative of 2x is positive, (2x)' = 2 > 0, so the first derivative is indeed
increasing.
Concave Functions
A concave function has an n-shape, and its first derivative is decreasing. An
example of a concave function is y = −x². As you can see, its derivative, −2x,
is a decreasing function. More generally, we can tell that a function is
increasing (decreasing) if its derivative is positive (negative). The
derivative of −2x is negative, (−2x)' = −2 < 0, so the first derivative is
indeed decreasing.
Generalization
After seeing these two simple examples, we can now see how to find the maximum
and minimum of a general function f(x). The maximum and minimum of a function
will occur where the derivative of the function is equal to zero, f'(x) = 0. We
call this the first-order condition. When we find the solutions to the
first-order condition, we will have found the critical points of the function.
Now, to tell whether a critical point is a maximum or minimum, we apply the
techniques we developed in our simple examples.
Maximum: f'(x) = 0 and f''(x) < 0
Minimum: f'(x) = 0 and f''(x) > 0
If f''(x) = 0, then we cannot tell whether the critical point is a maximum or
minimum. We must apply the following higher-order derivative test: we keep
taking derivatives until one does not equal 0 at the critical point.
If the first nonzero derivative is of odd order, then we have an inflection point
If the first nonzero derivative is of even order, then we follow the same sign rule as the second derivative test
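The first- and second-order conditions can be run mechanically through sympy. The function f(x) = x³ − 3x here is my own illustrative choice, not one from the outline:

```python
# Classify the critical points of f(x) = x^3 - 3x using the
# first-order condition f'(x) = 0 and the second-derivative test.
import sympy as sp

x = sp.symbols("x")
f = x**3 - 3 * x

f1 = sp.diff(f, x)       # 3x^2 - 3
f2 = sp.diff(f, x, 2)    # 6x

critical_points = sp.solve(f1, x)  # x = -1 and x = 1
for c in critical_points:
    curvature = f2.subs(x, c)
    kind = "max" if curvature < 0 else "min" if curvature > 0 else "inconclusive"
    print(c, kind)
```

At x = −1 the second derivative is −6 (a local maximum); at x = 1 it is 6 (a local minimum), matching the sign rules above.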
3. Multivariate Calculus
Until now, we have only been dealing with functions of one variable, for
example, y = f(x). However, we can extend this idea and incorporate multiple
variables. We usually encounter functions of 2 or 3 variables, z = f(x, y) or
w = f(x, y, z), but we do not have to restrict ourselves to only 3 variables.
From here on out I will use the most general case, that is, functions of n
variables, y = f(x_1, x_2, ..., x_n).
Partial Derivatives
When taking the derivative of a function with multiple variables, we take the
derivative of the function with respect to one variable while treating the
others as constants. We denote partial differentiation as ∂f/∂x or f_x, where x
is the variable we are differentiating with respect to. We also define the
partial derivative in the same way we did when we had one variable. We can also
keep taking derivatives with respect to x, which is known as the direct
second-order partial derivative, f_xx, or take the partial with respect to
another variable y while holding x constant, which is known as the mixed
second-order partial derivative, f_xy.
Examples
Find the first, second, and mixed partial derivatives of a function of two
variables, z = f(x, y).
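The original worked example did not survive extraction, so as a stand-in, here is a hypothetical function z = 3x²y + y³ run through sympy:

```python
# First, direct second, and mixed second partial derivatives of a
# hypothetical example function z = 3x^2*y + y^3.
import sympy as sp

x, y = sp.symbols("x y")
z = 3 * x**2 * y + y**3

z_x = sp.diff(z, x)      # 6xy          (treat y as a constant)
z_y = sp.diff(z, y)      # 3x^2 + 3y^2  (treat x as a constant)
z_xx = sp.diff(z, x, 2)  # 6y           direct second-order partial
z_xy = sp.diff(z_x, y)   # 6x           mixed second-order partial
z_yx = sp.diff(z_y, x)   # 6x           order of mixing doesn't matter here

print(z_x, z_y, z_xx, z_xy)
```

Note that z_xy = z_yx for this function; for the smooth functions we deal with, the mixed partials coincide (Young's theorem).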
Optimization of Multivariable Functions
In the same way that we extended differentiation from one variable to n
variables, we can extend the ideas used to optimize functions to n variables.
For a function z = f(x, y):
Maximum: f_x = f_y = 0; f_xx < 0 and f_yy < 0; f_xx · f_yy > (f_xy)²
Minimum: f_x = f_y = 0; f_xx > 0 and f_yy > 0; f_xx · f_yy > (f_xy)²
The third condition is needed to rule out inflection points and saddle points.
If the functions in condition two have the same sign but the third condition is
not met, we are at an inflection point. If the functions in the second
condition have differing signs, the third condition will not be met and we will
be at a saddle point.
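The three conditions can be checked mechanically. The function f(x, y) = x² + xy + y² below is my own example; sympy finds its critical point and then tests the second-order conditions at it:

```python
# Check the first- and second-order conditions for f(x, y) = x^2 + x*y + y^2
# at its critical point.
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 + x * y + y**2

f_x, f_y = sp.diff(f, x), sp.diff(f, y)
f_xx, f_yy = sp.diff(f, x, 2), sp.diff(f, y, 2)
f_xy = sp.diff(f, x, y)

# First-order condition: f_x = f_y = 0.
(point,) = sp.solve([f_x, f_y], [x, y], dict=True)  # x = 0, y = 0

# Second-order conditions at the critical point.
a, b, c = f_xx.subs(point), f_yy.subs(point), f_xy.subs(point)
is_minimum = a > 0 and b > 0 and a * b > c**2
print(point, is_minimum)
```

Here f_xx = f_yy = 2 and f_xy = 1, so 2 · 2 > 1² and the point (0, 0) is a minimum rather than a saddle.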
Optimizing Functions using the Lagrange
Multiplier
This method allows us to optimize functions when given an equality constraint.
Let f(x, y) be the function that we want to optimize (also called the objective
function) and g(x, y) = k be the constraint. We can create a new function,
L = f(x, y) + λ(k − g(x, y)), called the Lagrange function or Lagrangian. The
Lagrange multiplier λ approximates how much the objective function changes for
a one-unit increase in the constant k in the constraint.
General Example
Create the Lagrangian: L = f(x, y) + λ(k − g(x, y))
Find the partials of the Lagrangian with respect to all of its variables:
∂L/∂x = 0, ∂L/∂y = 0, ∂L/∂λ = 0
Solving this system of equations will give you all of the critical points of
the objective function
We can then plug the critical points back into the objective function to find
the maximum and minimum points of the function
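The steps above can be sketched in sympy. The problem (maximize xy subject to x + y = 10) is a made-up illustration:

```python
# Lagrange-multiplier sketch: optimize f(x, y) = x*y subject to the
# constraint g(x, y) = x + y = 10 (an illustrative made-up problem).
import sympy as sp

x, y, lam = sp.symbols("x y lambda")
f = x * y
g = x + y
k = 10

# Step 1: create the Lagrangian L = f + lambda * (k - g).
L = f + lam * (k - g)

# Step 2: first-order conditions, the partials of L set to zero.
eqs = [sp.diff(L, v) for v in (x, y, lam)]

# Step 3: solve the system for the critical point.
(sol,) = sp.solve(eqs, [x, y, lam], dict=True)
print(sol)  # x = 5, y = 5, lambda = 5
```

The multiplier λ = 5 says that raising the constant in the constraint from 10 to 11 would raise the maximized objective by roughly 5, which is exactly its interpretation above.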
Total and Partial Differentials
In the same way that the differential dy = f'(x)dx measures the change in y
given the rate of change of y and the change in x, we can find the change in
the dependent variable brought about by a small change in each of the
independent variables. For z = f(x, y), the total differential is
dz = (∂z/∂x)dx + (∂z/∂y)dy.
Partial differentials arise if we assume all but one of the variables are
constant (if the variables are constant, then they don't change). A partial
differential measures the change in the dependent variable resulting from a
small change in one independent variable while the other variables are held
constant, e.g., dz = (∂z/∂x)dx.
Total Derivatives
If one of the variables is not independent, then a change in one of the
independent variables will affect the function both directly and indirectly
through the dependent variable. For example, given a case where z = f(x, y) and
y = g(x), y is dependent on x. This means x will affect z directly through the
function f and indirectly through the function g. In order to measure the
effect of a change in x on z when x and y are not independent, the total
derivative must be found. In this example, the total derivative is
dz/dx = ∂z/∂x + (∂z/∂y)(dy/dx).
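The direct-plus-indirect decomposition can be verified symbolically. The functions z = x² + y² and y = 3x are my own example:

```python
# Total derivative dz/dx = dz/dx (direct) + (dz/dy)(dy/dx) (indirect)
# for the made-up example z = x^2 + y^2 with y = 3x.
import sympy as sp

x, y = sp.symbols("x y")
z = x**2 + y**2      # z = f(x, y)
y_of_x = 3 * x       # y = g(x)

# Total derivative: direct effect plus indirect effect through y.
total = sp.diff(z, x) + sp.diff(z, y) * sp.diff(y_of_x, x)
total = total.subs(y, y_of_x)            # 2x + (2*(3x))*3 = 20x

# Cross-check: substitute y = g(x) first, then differentiate directly.
direct = sp.diff(z.subs(y, y_of_x), x)   # d/dx (x^2 + 9x^2) = 20x
print(total, direct)
```

Both routes give 20x: 2x of it is the direct effect of x on z, and 18x comes through y.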
Implicit Function Rules
Functions of the form f(x, y) = 0 that do not express y explicitly in terms of
x are called implicit functions. The total differential is simply
f_x dx + f_y dy = 0. Rearranging the terms, we find that dy/dx = −f_x / f_y.
This means that to find the derivative of an implicit function, or the inverse
of a function, we just have to apply the rules we have defined.
Inverse Function Rules
The derivative of the inverse of a function is the reciprocal of the derivative
of the function. For example, if we have a function y = f(x), then
dx/dy = 1 / (dy/dx).
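Both rules can be sanity-checked in sympy; the circle F(x, y) = x² + y² − 1 = 0 used here is my own example:

```python
# Implicit function rule: for F(x, y) = x^2 + y^2 - 1 = 0,
# dy/dx = -F_x / F_y = -(2x)/(2y) = -x/y.
import sympy as sp

x, y = sp.symbols("x y")
F = x**2 + y**2 - 1

dydx = -sp.diff(F, x) / sp.diff(F, y)   # -x/y
print(dydx)

# Inverse function rule: dx/dy is the reciprocal of dy/dx.
dxdy = 1 / dydx                          # -y/x
print(sp.simplify(dydx * dxdy))          # 1
```

The product of the two derivatives simplifying to 1 is just the reciprocal relationship stated above.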
4. Basics of Linear Algebra
Linear algebra allows us to express complicated systems of equations in a
compact way, allows us to determine whether a solution even exists before we
attempt to solve, and hands us the tools to solve systems of linear equations.
However, as the name suggests, the techniques from linear algebra can only be
applied to linear equations.
Definitions and Terms
Matrix
A rectangular array of numbers, parameters, or variables which has been
carefully ordered.
The number of rows, m, and the number of columns, n, define the dimensions of
the matrix, m x n. A square matrix has the same number of rows and columns,
m = n. A matrix with dimensions m x 1 is called a column vector. In the same
way, a matrix with dimensions 1 x n is called a row vector.
If we convert the rows of A to columns and the columns to rows, we have the
transpose of A, which is denoted A′
Matrix Addition and Subtraction
The addition and subtraction of two matrices A and B requires that the two
matrices be of the same dimensions. The operations occur element-wise.
Scalar Multiplication
In linear algebra we call real numbers scalars, since they scale the vectors or
matrices. We multiply each of the elements by the scalar.
Matrix Multiplication
When we multiply two matrices, the number of columns of the lead matrix must
equal the number of rows of the lag matrix. Let A be an m x n matrix and B be
an n x p matrix. This means that they are conformable. The product AB will be
an m x p matrix. Note that you would not be able to find the product BA unless
p = m. This is due to the mechanics of matrix multiplication: we do not just
multiply element-wise, we take the dot product between the rows of the lead
matrix and the columns of the lag matrix.
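A small numeric illustration of conformability and the row-by-column dot products (the matrices are my own):

```python
# Conformability and dimensions: A is 2x3 and B is 3x2, so the
# product AB exists and is 2x2.
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2 x 3 (lead matrix)
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])             # 3 x 2 (lag matrix)

AB = A @ B                          # each entry is a row-of-A dot column-of-B
print(AB.shape)                     # (2, 2)
print(AB)

# The (0, 0) entry is the dot product of A's first row and B's first column.
print(A[0] @ B[:, 0])               # 1*1 + 2*0 + 3*1 = 4
```

Here BA would also exist (B is 3 x 2 and A is 2 x 3) but would be 3 x 3, which already shows that matrix multiplication cannot be commutative in general.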
Commutative, Associative, and Distributive
Laws in Matrix Algebra
Due to the element-wise operations in addition and subtraction, the
associative, commutative, and distributive laws apply.
Commutative: A + B = B + A
Associative: (A + B) + C = A + (B + C)
Distributive (if conformable): A(B + C) = AB + AC
Matrix multiplication is not commutative, AB ≠ BA in general. Scalar
multiplication is commutative, kA = Ak. If you have matrices that are
conformable, then the associative and distributive laws hold, as long as the
matrices are multiplied in the order of conformability.
The Special Matrices
Identity Matrix
The identity matrix I is a square n x n matrix which has a 1 for every element
on the principal diagonal and 0 everywhere else. It plays the same role as the
scalar 1, since AI = IA = A. Also, multiplication by the identity matrix is
commutative.
Symmetric Matrix
A matrix is symmetric when A = A′
Idempotent Matrix
A symmetric matrix in which AA = A
Null Matrix
A matrix composed of all 0s, of any dimension. A + 0 = A and A0 = 0
The identity matrix is both symmetric and idempotent.
Systems of Equations as Matrices
Let us take as an example a system of linear equations. It can be expressed as
Ax = b, where A is the matrix of coefficients, x is the column vector of
variables, and b is the column vector of constants.
In order to solve this expression, we need to multiply both sides by the
inverse of A, just like in the most basic equations. However, not all matrices
have inverses. In a similar way to the number zero, whose inverse is undefined,
there are some matrices whose inverse is undefined. Non-square matrices are
singular. In the following section, we will address this in more detail.
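The Ax = b form can be set up and solved numerically; the two-equation system below is a made-up example:

```python
# A 2-equation system written as Ax = b and solved:
#   2x + 3y = 8
#    x -  y = -1      ->  x = 1, y = 2
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, -1.0]])    # coefficient matrix
b = np.array([8.0, -1.0])      # vector of constants

# Solve Ax = b; conceptually x = A^-1 b, but np.linalg.solve is the
# numerically preferred route.
x = np.linalg.solve(A, b)
print(x)                        # [1. 2.]
```

If A were singular, np.linalg.solve would raise a LinAlgError instead of returning a solution, which is the determinant test of the next section in action.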
5. Inverting Matrices
The product of two nonzero numbers is never zero; however, this is not always
true when applied to matrices. If the product of two nonzero matrices is the
null matrix, then the matrices are singular. Note that the product of singular
matrices need not be null, but it is possible.
Another way to think of it is in terms of the identity matrix. In the same way
that a scalar multiplied by its inverse equals one, when we multiply a matrix
by its inverse, we should get the identity matrix. If we can't, then the matrix
is singular. All non-square matrices are singular.
Why does this occur? It happens because at least two rows or columns are
linearly dependent (i.e., they are multiples of each other). This also means
that there are an infinite number of solutions (or no solution) to the system
of equations. In order to screen out such matrices, we use what is called the
determinant test. Let |A| be the determinant of the square matrix A with
dimensions n x n. If |A| = 0, the matrix is singular; if |A| ≠ 0, it is
nonsingular.
The rank of A is the maximum number of linearly independent rows or columns in
the matrix. If the rank of A equals n, then the matrix is nonsingular, i.e.,
all the rows and columns are linearly independent. If rank(A) < n, then the
matrix is singular.
Calculating Determinants
We have gone through the reasoning behind determinants and their usefulness in
helping us find out whether a matrix is singular. However, how do we compute
them?
2 x 2 Case

|A| = a11 a22 − a12 a21

3 x 3 Case

|A| = a11(a22 a33 − a23 a32) − a12(a21 a33 − a23 a31) + a13(a21 a32 − a22 a31)
Laplace Expansion
For higher-order cases, we use the Laplace expansion. First let us define a
minor, using the 3 x 3 case above. The minor |M11| of element a11 is the
determinant of the 2 x 2 matrix found by deleting the first row and first
column. The minor |M12| of element a12 is found by deleting the first row and
second column, and so on. We can also see in the formula that there is a
negative sign in front of the a12 term. This alternating pattern of positive
and negative signs continues for higher-order matrices. In order to account for
the alternating signs, we define the cofactor, Cij = (−1)^(i+j) |Mij|. Using
this notation, the Laplace expansion for the third-order determinant is

|A| = a11 C11 + a12 C12 + a13 C13

While in this example we have chosen to expand along the first row, we can
choose any row or column. If we choose one with a lot of zeroes, we'll make the
computations easier.
For higher-order matrices, we will have to recursively reduce the cofactors.
We can also define a cofactor matrix, in which the elements are all of the
matrix's cofactors. The adjoint matrix is the transpose of the cofactor matrix.
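The recursive reduction of cofactors can be written out directly. This is a sketch for small matrices (for real work, numpy.linalg.det is the practical choice):

```python
# Determinant by recursive Laplace expansion along the first row.

def minor(A, i, j):
    """Matrix A (list of lists) with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    # Expand along the first row: sum of a_1j times its cofactor,
    # where the cofactor carries the alternating sign (-1)^(1+j).
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det(A))  # -3
```

Each cofactor of an n x n matrix is itself an (n−1) x (n−1) determinant, so the recursion bottoms out at the 1 x 1 case; this is exactly the "recursively reduce the cofactors" step described above.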
Properties of Determinants
Adding or subtracting a multiple of one row or column to another row or
column will not affect the determinant
Interchanging two rows or columns will change the sign, but not the absolute
value, of the determinant
Multiplying the elements of any row by a scalar will cause the determinant
to be scaled by that scalar
The determinant of a triangular matrix (a matrix whose zeros make it look like
a triangle) is equal to the product of the elements on the principal diagonal
If all the elements in a row or column are equal to zero, the determinant is
zero
If two rows are linearly dependent, the determinant is zero
The determinant of the identity matrix is one
The Inverse of a Matrix
We finally have all the tools to find the inverse of a matrix:
A⁻¹ = (1/|A|) adj A, provided |A| ≠ 0
Solving a System of Equations
Given Ax = b with A nonsingular, premultiply both sides by the inverse to get
x = A⁻¹b.
Cramer's Rule
This is a shortcut to solve for the i-th variable using determinants. It is
x̄i = |Ai| / |A|, where |Ai| is the determinant of the matrix created by
replacing the i-th column of A with the vector of constants b.
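The column-replacement recipe can be run numerically; the 2 x 2 system below is a made-up example:

```python
# Cramer's rule for the made-up system:
#   2x +  y = 5
#    x + 3y = 10     ->  x = 1, y = 3
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

detA = np.linalg.det(A)            # 5, nonzero so the rule applies
solution = []
for i in range(A.shape[1]):
    Ai = A.copy()
    Ai[:, i] = b                   # replace column i with the constants b
    solution.append(np.linalg.det(Ai) / detA)

print(solution)                    # approximately [1.0, 3.0]
```

Each variable is recovered as the ratio |Ai| / |A|, so the rule fails exactly when |A| = 0, i.e., when A is singular.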
