
MATHEMATICS

All branches of engineering and science require mathematics as a tool for the description of their
content. Therefore a thorough knowledge of various topics in mathematics is essential to pursue
study in engineering, science and technology. Most students find these topics difficult to learn
owing to inadequate knowledge and understanding of the basic concepts. The aim of this site is to
provide a thorough understanding of the fundamental concepts and applications of the subject
concerned. Although the topics are designed primarily for use by engineering students, they are
also suitable for students pursuing bachelor degrees with mathematics as one of the subjects, and
for those preparing for competitive examinations.

Each topic begins with clear statements of pertinent definitions, principles and theorems together
with illustrative and other descriptive material. The content includes a great variety of examples of
the important concepts that occur. These are followed by sets of supplementary problems. The
solved problems serve to illustrate and amplify the theory, and to provide the repetition of basic
principles so vital to effective learning. Numerous proofs, especially those of all essential theorems,
are included among the solved problems. The supplementary problems serve as a complete review
of the material of each chapter.

All the topics are covered exhaustively and the explanations are clear and lucid. The site provides
definitions, examples, theorems, proofs and exercises to reinforce the students’ understanding of
the subject matter.

INTEGRAL CALCULUS

The key concept in integral calculus is integration, a procedure that involves computing a special
kind of limit of sums called the definite integral. During the eighteenth century, integrals were
considered simply as antiderivatives; that is, there were no rigorous underpinnings for the concept of an
integral until Cauchy formulated the definition of the integral in 1823. The formulation was later
completed by Bernhard Riemann. Integrals find applications such as computing
area, volume, arc length, surface area, work, hydrostatic force and centroids of planar regions, and
have further applications to business, economics, and the life sciences.
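
As a small numerical illustration of the definite integral as a limit of sums, the following Python sketch (assuming NumPy is available; riemann_sum is an illustrative helper introduced here) approximates the integral of f(x) = x^2 over [0, 1] with midpoint Riemann sums; the exact value is 1/3.

    import numpy as np

    def riemann_sum(f, a, b, n):
        """Approximate the definite integral of f on [a, b] with n midpoint rectangles."""
        x = np.linspace(a, b, n + 1)       # partition points of [a, b]
        midpoints = (x[:-1] + x[1:]) / 2   # midpoint of each subinterval
        dx = (b - a) / n                   # common width of the subintervals
        return np.sum(f(midpoints) * dx)

    # The approximations approach the exact value 1/3 as n grows.
    for n in (10, 100, 1000):
        print(n, riemann_sum(lambda x: x**2, 0.0, 1.0, n))
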
INFINITE SERIES

A series is, informally speaking, the sum of the terms of a sequence. Finite sequences and
series have defined first and last terms, whereas infinite sequences and series continue
indefinitely.

The terms of the series are often produced according to a certain rule, such as by a formula, or by
an algorithm. When there are infinitely many terms, such a sum is called an infinite series.
Unlike finite summations, infinite series need tools from mathematical analysis, and specifically the
notion of limits, to be fully understood and manipulated.

Differential equations are frequently solved by using infinite series. Expansions such as Fourier
series and Fourier-Bessel series involve infinite series. Transcendental functions (trigonometric,
exponential, logarithmic, hyperbolic, etc.) can be expressed conveniently in terms of infinite series.
Many problems that cannot be solved in terms of elementary (algebraic and transcendental)
functions can also be solved in terms of infinite series.

In other words, infinite series occur so frequently in all types of engineering problems that the
study of their convergence or divergence is very important. Unless a series employed
in an investigation is convergent, it may lead to illogical conclusions.
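
As a minimal sketch of why convergence matters, the snippet below (plain Python, no external libraries; partial_sums is an illustrative helper) compares partial sums of the convergent geometric series 1/2 + 1/4 + 1/8 + ... with those of the divergent harmonic series 1 + 1/2 + 1/3 + ....

    def partial_sums(term, n_terms):
        """Return the partial sums s_1, ..., s_{n_terms} of the series with the given term rule."""
        total, sums = 0.0, []
        for n in range(1, n_terms + 1):
            total += term(n)
            sums.append(total)
        return sums

    # The geometric series converges to 1; the harmonic series grows without bound.
    print(partial_sums(lambda n: 1 / 2**n, 20)[-1])   # about 0.999999
    print(partial_sums(lambda n: 1 / n, 20)[-1])      # about 3.6 after 20 terms, and still growing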

Hence, it is essential that the students of engineering begin by acquiring an intelligent grasp of this
subject.

FUNCTIONS OF COMPLEX VARIABLES

The theory of functions of a complex variable is the branch of mathematical analysis that
investigates functions of complex numbers. It is useful in many branches of mathematics, including
algebraic geometry, number theory and applied mathematics, as well as in physics,
including hydrodynamics, thermodynamics, mechanical engineering and electrical engineering.
Complex analysis is particularly concerned with the analytic functions of complex variables (or,
more generally, meromorphic functions). Because the separate real and imaginary parts of any
analytic function must satisfy Laplace’s equation, complex analysis is widely applicable to two-
dimensional problems in physics.
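
As a hedged illustration of that last point, the sketch below (assuming SymPy is available) takes the analytic function f(z) = z^2, splits it into real and imaginary parts u and v, and checks that each satisfies Laplace's equation u_xx + u_yy = 0.

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    z = x + sp.I * y

    f = z**2                                  # an analytic function of z
    u = sp.re(sp.expand(f))                   # real part: x**2 - y**2
    v = sp.im(sp.expand(f))                   # imaginary part: 2*x*y

    # Both parts are harmonic: the Laplacian of each is identically zero.
    print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))   # 0
    print(sp.simplify(sp.diff(v, x, 2) + sp.diff(v, y, 2)))   # 0
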
COMPLEX NUMBERS AND THEIR APPLICATIONS

A complex number is a number that can be expressed in the form a + bi, where a and b are real
numbers and i is the imaginary unit. Complex numbers extend the idea of the one-
dimensional number line to the two-dimensional complex plane by using the horizontal axis for the
real part and the vertical axis for the imaginary part. The complex number a + bi can be identified
with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be
purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this
way the complex numbers contain the ordinary real numbers while extending them in order to solve
problems that cannot be solved with real numbers alone.
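
A minimal sketch of this identification, using Python's built-in complex type (which already implements a + bi arithmetic):

    z = 3 + 4j                     # the complex number a + bi with a = 3, b = 4
    point = (z.real, z.imag)       # the corresponding point (a, b) in the complex plane

    print(point)                   # (3.0, 4.0)
    print(abs(z))                  # distance from the origin: 5.0
    print((0 + 5j).real == 0)      # True: a purely imaginary number has real part zero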

MATRICES

Cayley, a British mathematician, introduced matrices in 1858, but it was not until the
twentieth century was well advanced that engineers heard of them. A matrix is a rectangular array
of numbers. These days, however, such arrays (matrices) have been found to be of great utility in
many branches of applied mathematics such as algebraic and differential equations, mechanics,
the theory of electric circuits, nuclear physics, aerodynamics and astronomy. In many cases, they form
the coefficients of linear transformations or systems of linear equations arising, for instance, from
electric networks, frameworks in mechanics, curve fitting in statistics and transportation problems.

Matrices of the same size can be added or subtracted element by element. But the rule for matrix
multiplication is that two matrices can be multiplied only when the number of columns in the first
equals the number of rows in the second. A major application of matrices is to represent linear
transformations, that is, generalizations of linear functions such as f(x) = 4x. For example,
the rotation of vectors in three-dimensional space is a linear transformation. If R is a rotation
matrix and v is a column vector (a matrix with only one column) describing the position of a point
in space, the product Rv is a column vector describing the position of that point after a rotation.
The product of two matrices is a matrix that represents the composition of two linear
transformations. Another application of matrices is in the solution of a system of linear equations.
If the matrix is square, it is possible to deduce some of its properties by computing its determinant.
For example, a square matrix has an inverse if and only if its determinant is not zero. Eigenvalues
and eigenvectors provide insight into the geometry of linear transformations.
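
The sketch below (assuming NumPy is available) illustrates these points: a rotation matrix R acting on a column vector v, the product of two matrices as a composition of transformations, and the determinant test for invertibility.

    import numpy as np

    theta = np.pi / 2
    # Rotation by 90 degrees about the z-axis in three-dimensional space.
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])

    v = np.array([[1.0], [0.0], [0.0]])   # a column vector: a point on the x-axis
    print(R @ v)                          # rotated position, approximately (0, 1, 0)

    print(R @ R)                          # matrix product = composition (rotation by 180 degrees)
    print(np.linalg.det(R))               # determinant 1.0, non-zero, so R has an inverse
    print(np.linalg.inv(R) @ R)           # approximately the identity matrix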

Matrices are useful because they enable us to consider an array of many numbers as a single object,
denote it by a single symbol, and perform calculations with these symbols in a very compact form.
The mathematical shorthand thus obtained is very elegant and powerful and is suitable for various
practical engineering problems. It entered engineering mathematics over seventy years ago and is
of increasing importance in various engineering branches. Therefore, it is necessary for young
engineers to learn the elements of matrix algebra in order to keep up with the fast development of
physics and engineering.

DIFFERENTIAL CALCULUS

In mathematics, differential calculus is a subfield of calculus concerned with the study of the rates
at which quantities change. It is one of the two traditional divisions of calculus, the other
being integral calculus.

The primary objects of study in differential calculus are the derivatives of a function, related notions
such as the differential, and their applications. The derivative of a function at a chosen input value
describes the rate of change of the function near that input value. The process of finding a derivative
is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to
the graph of the function at that point, provided that the derivative exists and is defined at that point.
For a real-valued function of a single real variable, the derivative of a function at a point generally
determines the best linear approximation to the function at that point.
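
As a small numerical illustration (plain Python, no external libraries), the difference quotient of f(x) = x^2 at x = 3 approaches the exact derivative f'(3) = 6 as the step h shrinks; this limiting value is the slope of the tangent line at that point.

    def difference_quotient(f, x, h):
        """Slope of the secant line through (x, f(x)) and (x + h, f(x + h))."""
        return (f(x + h) - f(x)) / h

    f = lambda x: x**2
    for h in (0.1, 0.01, 0.001):
        print(h, difference_quotient(f, 3.0, h))   # 6.1, 6.01, 6.001: tending to the exact slope 6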

Differential calculus and integral calculus are connected by the fundamental theorem of calculus,
which states that differentiation is the reverse process to integration.

Differentiation has applications to nearly all quantitative disciplines. For example, in physics, the
derivative of the displacement of a moving body with respect to time is the velocity of the body, and
the derivative of velocity with respect to time is acceleration. Newton’s second law of motion states
that the derivative of the momentum of a body equals the force applied to the body. The reaction
rate of a chemical reaction is a derivative. In operations research, derivatives determine the most
efficient ways to transport materials and design factories.

Derivatives are frequently used to find the maxima and minima of a function. Equations involving
derivatives are called differential equations and are fundamental in describing natural phenomena.
Derivatives and their generalizations appear in many fields of mathematics, such as complex
analysis, functional analysis, differential geometry, measure theory and abstract algebra.

VECTOR CALCULUS

Vector calculus (or vector analysis) is a branch of mathematics concerned with differentiation and
integration of vector fields, primarily in 3-dimensional Euclidean space. The
term "vector calculus" is sometimes used as a synonym for the broader subject of multivariable
calculus, which includes vector calculus as well as partial differentiation and multiple
integration. Vector calculus is the foundation stone on which a vast amount of applied mathematics
is based. Vector calculus plays an important role in differential geometry and in the study of partial
differential equations. It is used extensively in physics and engineering, especially in the
description of electromagnetic fields, gravitational fields and fluid flow. Vector calculus was
developed from quaternion analysis by J. Willard Gibbs and Oliver Heaviside near the end of the
19th century, and most of the notation and terminology was established by Gibbs and Edwin
Bidwell Wilson in their 1901 book, Vector Analysis.

The basic objects in vector calculus are scalar fields (scalar-valued functions) and vector fields
(vector-valued functions). These are then combined or transformed under various operations, and
integrated. In more advanced treatments, one further distinguishes pseudovector fields
and pseudoscalar fields, which are identical to vector fields and scalar fields except that they
change sign under an orientation-reversing map: for example, the curl of a vector field is a
pseudovector field, and if one reflects a vector field, the curl points in the opposite direction.
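
A brief sketch (assuming SymPy's vector module is available) of one of these operations, the curl of a vector field:

    from sympy.vector import CoordSys3D, curl

    N = CoordSys3D('N')              # a Cartesian coordinate system with unit vectors i, j, k
    F = -N.y * N.i + N.x * N.j       # a rotational vector field (-y, x, 0)

    print(curl(F))                   # 2*N.k: a (pseudo)vector field pointing along the z-axis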

FOURIER SERIES

A Fourier series decomposes a periodic function or periodic signal into the sum of a (possibly
infinite) set of simple oscillating functions, namely sines and cosines (or complex exponentials).
The study of Fourier series is a branch of Fourier analysis. The Fourier series is named in honour
of Jean-Baptiste Joseph Fourier (1768–1830), who made important contributions to the study
of trigonometric series, after preliminary investigations by Leonhard Euler, Jean le Rond
d’Alembert, and Daniel Bernoulli. Fourier introduced the series for the purpose of solving the heat
equation in a metal plate. The heat equation is a partial differential equation. Prior to Fourier's work,
no solution to the heat equation was known in the general case, although particular solutions were
known if the heat source behaved in a simple way, in particular, if the heat source was a sine or
cosine wave. These simple solutions are now sometimes called eigensolutions. Fourier's idea was
to model a complicated heat source as a superposition (or linear combination) of simple sine and
cosine waves, and to write the solution as a superposition of the corresponding eigensolutions.
This superposition or linear combination is called the Fourier series. Although the original
motivation was to solve the heat equation, it later became obvious that the same techniques could
be applied to a wide array of mathematical and physical problems, and especially those involving
linear differential equations with constant coefficients, for which the eigensolutions are sinusoids.
The Fourier series has many such applications in electrical engineering, vibration
analysis, acoustics, optics, signal processing, image processing, quantum mechanics,
econometrics, thin-walled shell theory, etc.
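
As a minimal sketch of this superposition idea (assuming NumPy is available), the partial sum below adds a few terms of the classical Fourier series of a square wave, (4/pi) times the sum over odd k of sin(k x)/k, and approaches the square wave as more terms are kept.

    import numpy as np

    def square_wave_partial_sum(x, n_terms):
        """Sum the first n_terms odd harmonics of the Fourier series of a square wave."""
        total = np.zeros_like(x)
        for k in range(1, 2 * n_terms, 2):    # odd harmonics 1, 3, 5, ...
            total += (4 / np.pi) * np.sin(k * x) / k
        return total

    x = np.linspace(0, 2 * np.pi, 9)
    print(square_wave_partial_sum(x, 50))     # interior points are close to +1 on (0, pi) and to -1 on (pi, 2*pi)
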
ORDINARY DIFFERENTIAL EQUATIONS
The study of differential equations is such an extensive topic that even a brief survey of its
methods and applications usually occupies a full course.

In mathematics, an ordinary differential equation (abbreviated ODE) is an equation containing a
function of one independent variable and its derivatives. The derivatives are ordinary because
partial derivatives only apply to functions of many independent variables.

The subject of ODEs is a sophisticated one (more so with PDEs), primarily because of the various forms
an ODE can take and the ways in which it can be integrated. Linear differential equations are those whose
solutions can be added together and multiplied by coefficients; the theory of linear differential
equations is well defined and understood, and exact closed-form solutions can be obtained. By
contrast, ODEs which do not have additive solutions are non-linear, and finding their solutions is
much more sophisticated because it is rarely possible to represent them by elementary functions
in closed form; rather, the exact (or "analytic") solutions appear in series or integral form. As an
alternative to the exact analytic solution, graphical and numerical methods (by hand or on
computer) may be used to generate approximate solutions. The properties of such approximate
solutions may yield very useful information, which often suffices in the absence of the exact
analytic solution.
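
As a hedged sketch of such a numerical method (plain Python, no external libraries), Euler's method below approximates the solution of the simple ODE y' = y with y(0) = 1, whose exact solution is e^x.

    import math

    def euler(f, x0, y0, x_end, n_steps):
        """Approximate the solution of y' = f(x, y) at x_end using Euler's method."""
        h = (x_end - x0) / n_steps
        x, y = x0, y0
        for _ in range(n_steps):
            y += h * f(x, y)    # follow the tangent line over one small step
            x += h
        return y

    # y' = y with y(0) = 1; the exact value at x = 1 is e, about 2.71828.
    print(euler(lambda x, y: y, 0.0, 1.0, 1.0, 1000))   # about 2.7169
    print(math.e)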

Ordinary differential equations (ODEs) arise in many different contexts throughout mathematics
and science (social and natural), because the most accurate way of describing change
mathematically is through differentials and derivatives (related, though not quite
the same). Since various differentials, derivatives, and functions inevitably become related to each
other via equations, the result is a differential equation describing dynamical phenomena,
evolution and variation. Often, quantities are defined as the rate of change of other quantities (time
derivatives), or as gradients of quantities, which is how they enter differential equations.

Specific mathematical fields include geometry and analytical mechanics. Scientific fields include
much of physics and astronomy (celestial mechanics), meteorology (weather modelling), chemistry
(reaction rates), biology (infectious diseases, genetic variation), ecology and population modeling
(population competition), economics (stock trends, interest rates and the market equilibrium price
changes).

Many mathematicians have studied differential equations and contributed to the field,
including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d’Alembert and Euler.
PARTIAL DIFFERENTIAL EQUATIONS
Real-world problems, in general, involve functions of several (independent) variables, giving rise to
partial differential equations more frequently than to ordinary differential equations. Thus, most
problems in engineering and science reduce to first- and second-order linear non-
homogeneous partial differential equations.

Partial differential equations (PDEs) are equations that involve rates of change with respect
to continuous variables. The position of a rigid body is specified by six numbers, but the
configuration of a fluid is given by the continuous distribution of several parameters, such as the
temperature, pressure, and so forth. The dynamics for the rigid body take place in a finite-
dimensional configuration space; the dynamics for the fluid occur in an infinite-dimensional
configuration space. This distinction usually makes PDEs much harder to solve than ordinary
differential equations (ODEs), but here again there will be simple solutions for linear problems.
Classic domains where PDEs are used include acoustics, fluid flow,
electrodynamics, and heat transfer.
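
A minimal sketch of one such linear problem (assuming NumPy is available; the grid sizes and diffusivity are illustrative choices): an explicit finite-difference scheme for the one-dimensional heat equation u_t = alpha * u_xx on a rod whose ends are held at temperature zero.

    import numpy as np

    alpha, dx, dt = 1.0, 0.1, 0.004   # diffusivity, grid spacing, time step (dt <= dx**2 / (2 * alpha) for stability)
    u = np.zeros(11)
    u[5] = 1.0                        # initial hot spot in the middle of the rod

    for _ in range(100):
        # Forward difference in time, central difference in space.
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        # u[0] and u[-1] stay at zero: the ends of the rod are held at temperature 0.

    print(u.round(3))                 # the heat has spread out along the rod and decayed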

LAPLACE TRANSFORMS

The Laplace transform is a widely used integral transform with many applications
in physics and engineering. The Laplace transform has the useful property that many relationships
and operations over the original f(t) correspond to simpler relationships and operations over its
image F(s). It is named after Pierre-Simon Laplace, who introduced the transform in his work
on probability theory. Nowadays, the Laplace transform is used for solving differential and integral
equations. In physics and engineering it is used for analysis of linear time-invariant systems such
as electric circuits, harmonic oscillators, optical devices, and mechanical systems. In such
analyses, the Laplace transform is often interpreted as a transformation from the time-domain, in
which inputs and outputs are functions of time, to the frequency-domain, where the same inputs
and outputs are functions of complex angular frequency, in radians per unit time. Given a simple
mathematical or functional description of an input or output to a system, the Laplace transform
provides an alternative functional description that often simplifies the process of analyzing the
behavior of the system, or in synthesizing a new system based on a set of specifications.
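
A short sketch (assuming SymPy is available) of passing from the time domain f(t) to the s-domain image F(s):

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)

    # F(s) for f(t) = exp(-2*t): the classical table entry 1 / (s + 2).
    print(sp.laplace_transform(sp.exp(-2 * t), t, s, noconds=True))

    # Another standard pair: the transform of sin(t) is 1 / (s**2 + 1).
    print(sp.laplace_transform(sp.sin(t), t, s, noconds=True))
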
FOURIER TRANSFORMS

The Fourier transform, named after Joseph Fourier, is a mathematical transform with many
applications in physics and engineering. The motivation for the Fourier transform comes from the
study of Fourier series. In the study of Fourier series, complicated but periodic functions are written
as the sum of simple waves mathematically represented by sines and cosines. The Fourier
transform is an extension of the Fourier series that results when the period of the represented
function is lengthened and allowed to approach infinity.
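
As a small discrete illustration (assuming NumPy is available; the sampling rate and threshold are illustrative choices), the fast Fourier transform below recovers the two frequencies present in a sampled signal, the discrete counterpart of writing a function as a superposition of sinusoids.

    import numpy as np

    fs = 100                                  # sampling rate in samples per second
    t = np.arange(0, 1, 1 / fs)               # one second of samples
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)   # 5 Hz and 20 Hz components

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

    # The spectrum peaks at exactly the two frequencies that make up the signal.
    print(freqs[spectrum > 10])               # [ 5. 20.]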

SPECIAL FUNCTIONS
Special functions are particular mathematical functions which have more or less established
names and notations due to their importance in mathematical analysis, functional
analysis, physics, or other applications.

There is no general formal definition, but the list of mathematical functions contains functions
which are commonly accepted as special. In particular, elementary functions are also considered
as special functions.

The high point of special function theory in the period 1850-1900 was the theory of elliptic functions;
treatises that were essentially complete, such as that of Tannery and Molk, could be written as
handbooks to all the basic identities of the theory. They were based on techniques from complex
analysis. The twentieth century saw several waves of interest in special function theory. The classic
Whittaker and Watson (1902) textbook sought to unify the theory by using complex variables; the
G. N. Watson tome A Treatise on the Theory of Bessel Functions pushed the techniques as far as
possible for one important type that particularly admitted asymptotics to be studied.

Z-TRANSFORMS
In mathematics and signal processing, the Z-transform converts a time domain signal, which is
a sequence of real or complex numbers, into a complex frequency domain representation. It can be
considered as a discrete-time equivalent of the Laplace transform.
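
A brief numerical sketch of the defining sum X(z) = sum over n of x[n] * z^(-n), truncated to finitely many terms (plain Python; z_transform is an illustrative helper):

    def z_transform(x, z, n_terms=200):
        """Truncated Z-transform: sum of x(n) * z**(-n) for n = 0 .. n_terms - 1."""
        return sum(x(n) * z ** (-n) for n in range(n_terms))

    # For x[n] = 0.5**n the Z-transform is z / (z - 0.5); check it at z = 2.
    z = 2.0
    print(z_transform(lambda n: 0.5**n, z))   # about 1.3333
    print(z / (z - 0.5))                      # exact value 4/3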
