
School of Chemical Engineering

CHEMENG 2016
Professional Practice II

Errors & Uncertainty


Outline
• Measurement error and uncertainty
• Types of uncertainty
  – Accuracy vs Precision
  – Random error vs Systematic error
  – Probable errors
• Propagation of uncertainty
  – Functions of a single variable
  – Functions of two or more variables
• Propagation of probable errors
• Linear regression
Measurement Error and Uncertainty
• Measurement - a process whereby the value of a quantity
is estimated.
• All measurements are accompanied by errors.
• Our lack of knowledge about the magnitude of measurement
error is called measurement uncertainty.
• Measurement errors are random variables that follow
probability distributions.
• An uncertainty estimate is the characterisation of what we
know statistically about the measurement error.
• A measurement result is only complete when accompanied
by an uncertainty estimate.
Types of Uncertainty
Accuracy vs Precision

[Figure: four target-style panels after Dieck (1992) — (1) accurate AND precise; (2) accurate but NOT precise; (3) NOT accurate AND NOT precise; (4) precise but NOT accurate. A second slide shows the same four cases as scatter plots of repeated measurements.]
Types of Uncertainty
Accuracy vs Precision
• Accuracy: a measure of how close an experimental result is to the “true” (or published or accepted) value

  Accuracy ⟷ Systematic errors

• Precision: a measure of the degree of closeness of results from repeated measurements

  Precision ⟷ Random errors
Types of Uncertainty
Error
• The deviation of a measured result from the true, correct or accepted value of the quantity being measured
• Two basic types of error: Random and Systematic
Types of Uncertainty
Random error
• Causes the measured result to deviate randomly from the correct value.
• The distribution of multiple measurements with only random error contributions will be centred around the correct value.
• Examples:
  – Noise (e.g. random electrical noise)
  – Careless measurements
  – Low-resolution instruments
Types of Uncertainty
Systematic error
• Causes the measured result to deviate by a fixed amount in one direction from the correct value.
• The distribution of multiple measurements with systematic error contributions will be centred some fixed value away from the correct value.
• Examples:
  – Uncalibrated instruments
  – Faulty instruments
  – Careless measurements
Types of Uncertainty
Random error vs Systematic error
• Sometimes difficult to classify
• Is the error truly random, or a series of compounding systematic errors?
• Impossible to eliminate all random errors
• Always possible to reduce random errors
• Random errors can be handled using statistics
• The role of the experimenter is to eliminate systematic error

  – Decrease random error → increased precision
  – Decrease systematic error → increased accuracy
Types of Uncertainty
Some common sources of error
• Fineness of scale division, e.g. L = 5.0 ± 0.1 cm
• Instrument calibration
• Instrument reproducibility
• Experimenter skill
Uncertainty Estimate
• The uncertainty (δx) in a single measurement may be estimated using engineering judgment
• Sometimes based on a “worst-case scenario”
• Necessarily subjective
• But how do uncertainties in multiple measurement steps combine?
Example – Uncertainty Estimate
The CO concentration in a flue gas is measured to be 300 ppm using a single measurement.
The needle on the CO analyser is observed to be fluctuating between 295 and 305 ppm. The finest division on the scale is 5 ppm.
Checking the instrument records, you find that the calibration certificate for the analyser is still valid. The instrument engineer responsible indicates that the analyser has been calibrated against a gas mixture containing 700 ± 5 ppm CO.
The manual for the analyser indicates that the precision at this range is ± 10%.

Q. What is the likely uncertainty in the CO concentration measured?
Solution
• CO reading: [CO] = 300 ppm
• Error in scale reading = ± 5 ppm
• Error in calibration:

$$\frac{5\ \text{ppm}}{700\ \text{ppm}} \times 300\ \text{ppm} = 0.7\% \times 300\ \text{ppm} \approx \pm 2\ \text{ppm}$$

• Instrument precision = ± 10% × 300 ppm = ± 30 ppm
• Assuming the worst-case scenario that uncertainties are additive:
  – Max possible error = ± (5 + 2 + 30) ppm = ± 37 ppm
• Report result: [CO] = 300 ± 37 ppm
Uncertainty Estimate
• For error sources that are independent, it is improbable that the uncertainties are additive
• For independent errors δx₁, δx₂, δx₃, …, the most probable estimate of the combined uncertainty is the Root Sum of Squares (RSS):

$$\delta X = \sqrt{\delta x_1^2 + \delta x_2^2 + \delta x_3^2 + \dots}$$

Example
• Combined uncertainty = $\sqrt{5^2 + 2^2 + 30^2} \approx \pm 31$ ppm
• Therefore, best-estimate uncertainty = ± 31 ppm
• Report result: [CO] = 300 ± 31 ppm
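As a quick numerical check, here is a minimal Python sketch of the RSS combination applied to the three error sources in the CO example (the helper function `rss` is our own illustration, not a library call):

```python
import math

def rss(*uncertainties):
    """Combine independent uncertainties by Root Sum of Squares."""
    return math.sqrt(sum(u ** 2 for u in uncertainties))

# CO example: scale reading, calibration, instrument precision (all in ppm)
combined = rss(5, 2, 30)
print(f"Combined uncertainty = ±{combined:.0f} ppm")  # ±31 ppm
```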


Absolute vs Relative Uncertainty
Measurement: $x = x_0 \pm \delta x$

$\delta x$ → Absolute uncertainty

$\dfrac{\delta x}{x_0}$ → Relative uncertainty [in %]

Example: $H = 129 \pm 1$ mm

$$\delta H = \pm 1\ \text{mm}; \qquad \frac{\delta H}{H_0} = \frac{1}{129} \approx 0.008 = 0.8\%$$
Measurement Repetition
How can uncertainty be reduced?
• Reduce uncertainty by repetition of measurement → statistics
• Assume the error is random and its distribution is known

[Figure: frequency distribution of repeated measurements about the mean — a Gaussian distribution is common.]

• Systematic errors (or bias) are not reduced through repetition
Mean & Standard Deviation

[Figure: frequency distribution of repeated measurements, centred on the mean x̄.]

$$\text{Mean:}\quad \bar{x} = \frac{\sum x_i}{n} \qquad\qquad \text{Std. deviation:}\quad \sigma = \sqrt{\frac{\sum (x_i - \bar{x})^2}{n - 1}}$$

• The standard deviation (σ) allows an estimate of the probable error at a certain confidence limit:

$$x = \bar{x} \pm k\sigma$$
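A minimal Python sketch of these two statistics; the repeated readings below are invented for illustration:

```python
import math

readings = [299.8, 300.4, 299.5, 300.9, 300.1]  # hypothetical repeated readings

n = len(readings)
mean = sum(readings) / n
# Sample standard deviation with the (n - 1) denominator, as in the formula above
std = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))

k = 2  # coverage factor for ~95% confidence, assuming Gaussian errors
print(f"x = {mean:.2f} ± {k * std:.2f} (95% confidence)")
```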
Probable Errors

[Figure: Gaussian frequency distribution with the interval from x̄ − kσ to x̄ + kσ marked.]

Confidence limits (or intervals):

50%: $x_0 = \bar{x} \pm 0.68\sigma$ (k = 0.68)
68%: $x_0 = \bar{x} \pm \sigma$ (k = 1)
95%: $x_0 = \bar{x} \pm 2\sigma$ (k = 2)
99.7%: $x_0 = \bar{x} \pm 3\sigma$ (k = 3)
Graphical Presentation of Experimental Data

[Figure: data at t = 10 min plotted with the mean (points) and standard deviation (error bars).]
Propagation of Uncertainty
• Functions of a single variable
• Functions of two or more variables

Propagation of Uncertainty
• Uncertainty is propagated when a measurement is used in further calculations → combined uncertainty
• First consider the case of a single measured variable,
  e.g. $z = \ln x$; $x = x_0 \pm \delta x$; $\delta z = ?$
• Then consider functions of two or more variables,
  e.g. $\rho = \dfrac{pM}{RT}$ with $p = p_0 \pm \delta p$ and $T = T_0 \pm \delta T$; $\delta\rho = ?$
Functions of a Single Variable

[Figure: curve z = f(x) showing how an interval δx about x₀ maps to an interval δz about z₀.]

For $z = f(x)$:

$$\frac{dz}{dx} \approx \frac{\delta z}{\delta x} = f'(x) \quad\Rightarrow\quad \delta z \approx \left| f'(x) \right| \delta x$$
Result for Some Common Functions
(a) Powers: $z = Kx^n$ (K = const; n = const)

$$\frac{dz}{dx} = nKx^{n-1} = \frac{nz}{x} \quad\Rightarrow\quad \delta z = \left| nKx^{n-1} \right| \delta x = \frac{Kx^n}{x}\, n\, \delta x = n\, \frac{z}{x}\, \delta x$$

Relative uncertainty:

$$\frac{\delta z}{z} = n\, \frac{\delta x}{x}$$
Result for Some Common Functions
(b) Logarithmic and Exponential Functions

$z = \ln x$:

$$\frac{dz}{dx} = \frac{1}{x} \quad\Rightarrow\quad \delta z = \frac{\delta x}{x}$$

$z = e^x$:

$$\frac{dz}{dx} = e^x = z \quad\Rightarrow\quad \delta z = e^x\, \delta x = z\, \delta x \quad\Rightarrow\quad \frac{\delta z}{z} = \delta x$$
Functions of Two or More Variables
(a) Sum of two or more variables: z = x + y + …

$$z = (z_0 \pm \delta z) = (x_0 \pm \delta x) + (y_0 \pm \delta y) + \dots \qquad z_0 = x_0 + y_0 + \dots$$

What is the maximum possible value of δz?

$$\delta z = \delta x + \delta y + \dots$$

(b) Difference of two variables: z = x − y

$$z = (z_0 \pm \delta z) = (x_0 \pm \delta x) - (y_0 \pm \delta y) \qquad z_0 = x_0 - y_0 \qquad \delta z = \delta x + \delta y$$
Sum of Two Variables

(5 ± 1) + (3 ± 1) = 8 ± 2

Difference of Two Variables

(5 ± 1) − (3 ± 1) = 2 ± 2
Functions of Two or More Variables
Addition and Subtraction
• But, for error sources that are independent, it is improbable that the uncertainties are additive
• For independent errors δx, δy, …, the most probable estimate of the combined uncertainty for addition or subtraction is:

$$\delta z = \sqrt{(\delta x)^2 + (\delta y)^2 + \dots}$$

→ Root-Sum-Square (RSS) method, or Method of Quadrature
General Functions of Two Variables
General method for a single variable, z = f(x):

$$\delta z = \left| \frac{df}{dx} \right| \delta x$$

General method for two variables, z = f(x, y):

$$dz = \left( \frac{\partial f}{\partial x} \right) dx + \left( \frac{\partial f}{\partial y} \right) dy
\quad\Rightarrow\quad
\delta z = \left| \frac{\partial f}{\partial x} \right| \delta x + \left| \frac{\partial f}{\partial y} \right| \delta y$$
Result for Common Functions
(a) Product of Two Variables: z = xy

$$\frac{\partial z}{\partial x} = y; \qquad \frac{\partial z}{\partial y} = x \quad\Rightarrow\quad \delta z = |y|\, \delta x + |x|\, \delta y$$

Divide through by z = xy:

$$\frac{\delta z}{z} = \frac{\delta x}{x} + \frac{\delta y}{y} \qquad \text{(relative uncertainty)}$$
Result for Common Functions
(b) Quotient of Two Variables: z = x/y

$$\frac{\partial z}{\partial x} = \frac{1}{y}; \qquad \frac{\partial z}{\partial y} = -\frac{x}{y^2}$$

$$\delta z = \left| \frac{1}{y} \right| \delta x + \left| \frac{x}{y^2} \right| \delta y
= \frac{x}{y}\, \frac{\delta x}{x} + \frac{x}{y}\, \frac{\delta y}{y}
= z \left( \frac{\delta x}{x} + \frac{\delta y}{y} \right)
\quad\Rightarrow\quad
\frac{\delta z}{z} = \frac{\delta x}{x} + \frac{\delta y}{y}$$
Result for Common Functions
(c) Powers: $z = x^m y^n$

$$\frac{\partial z}{\partial x} = m x^{m-1} y^n; \qquad \frac{\partial z}{\partial y} = n x^m y^{n-1}$$

$$\delta z = \left| m x^{m-1} y^n \right| \delta x + \left| n x^m y^{n-1} \right| \delta y
= m z\, \frac{\delta x}{x} + n z\, \frac{\delta y}{y}
\quad\Rightarrow\quad
\frac{\delta z}{z} = m\, \frac{\delta x}{x} + n\, \frac{\delta y}{y}$$
General Functions of Two Variables
Independent error sources

$$z = f(x, y): \qquad \delta z^2 = \left( \frac{\partial f}{\partial x} \right)^2 \delta x^2 + \left( \frac{\partial f}{\partial y} \right)^2 \delta y^2$$

Examples
(a) z = xy or z = x/y:

$$\left( \frac{\delta z}{z} \right)^2 = \left( \frac{\delta x}{x} \right)^2 + \left( \frac{\delta y}{y} \right)^2$$

(b) $z = x^m y^n$:

$$\left( \frac{\delta z}{z} \right)^2 = \left( m\, \frac{\delta x}{x} \right)^2 + \left( n\, \frac{\delta y}{y} \right)^2$$
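A minimal Python sketch of this quadrature rule, again using central finite differences for the partial derivatives (the helper `propagate2` and the example values are our own illustration):

```python
import math

def propagate2(f, x0, y0, dx, dy, h=1e-6):
    """Combine independent uncertainties through z = f(x, y) in quadrature:
    dz^2 = (df/dx)^2 dx^2 + (df/dy)^2 dy^2, with numerical partials."""
    dfdx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
    dfdy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
    return math.sqrt((dfdx * dx) ** 2 + (dfdy * dy) ** 2)

# Example: z = x / y with x = 10 ± 0.1 and y = 4 ± 0.05
dz = propagate2(lambda x, y: x / y, 10.0, 4.0, 0.1, 0.05)
print(f"z = {10.0 / 4.0} ± {dz:.3f}")  # z = 2.5 ± 0.040
```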
Exercise
• The density of a gas can be “measured” by directly measuring pressure and temperature, and assuming the ideal gas law:

$$\rho = \frac{pM}{RT}$$

• Use the following data to determine the density of air and its uncertainty:

$$p = 125.4 \pm 0.2\ \text{kPa}; \qquad T = 378.2 \pm 0.4\ \text{K}$$
$$M = 28.96\ \frac{\text{kg}}{\text{kmol}}; \qquad R = 8.314\ \frac{\text{m}^3\cdot\text{kPa}}{\text{kmol}\cdot\text{K}}$$
Exercise – Solution
• Gas density:

$$\rho = \frac{pM}{RT} = f(p, T)$$

• Assume errors in p and T are independent:

$$\delta\rho^2 = \left( \frac{\partial f}{\partial p} \right)^2 \delta p^2 + \left( \frac{\partial f}{\partial T} \right)^2 \delta T^2;
\qquad \frac{\partial f}{\partial p} = \frac{M}{RT}, \quad \frac{\partial f}{\partial T} = -\frac{pM}{RT^2}$$

$$\delta\rho^2 = \left( \frac{M}{RT} \right)^2 \delta p^2 + \left( \frac{pM}{RT^2} \right)^2 \delta T^2
\quad\Rightarrow\quad
\left( \frac{\delta\rho}{\rho} \right)^2 = \left( \frac{\delta p}{p} \right)^2 + \left( \frac{\delta T}{T} \right)^2$$
Exercise – Solution

$$\rho = \frac{pM}{RT} = \frac{(125.4\ \text{kPa})(28.96\ \text{kg/kmol})}{(8.314\ \text{m}^3\,\text{kPa}/(\text{kmol}\cdot\text{K}))(378.2\ \text{K})} = 1.155\ \frac{\text{kg}}{\text{m}^3}$$

$$\frac{\delta\rho}{\rho} = \sqrt{\left( \frac{\delta p}{p} \right)^2 + \left( \frac{\delta T}{T} \right)^2}
= \sqrt{\left( \frac{0.2}{125.4} \right)^2 + \left( \frac{0.4}{378.2} \right)^2} = 0.00191$$

$$\delta\rho = 1.155 \times 0.00191 = 0.0022\ \frac{\text{kg}}{\text{m}^3}$$

Result: $\rho = 1.155 \pm 0.002\ \text{kg/m}^3$, or $\rho = (1155 \pm 2) \times 10^{-3}\ \text{kg/m}^3$ (relative error ≈ 0.2%)
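The same worked example can be checked in a few lines of Python:

```python
import math

p, dp = 125.4, 0.2      # kPa
T, dT = 378.2, 0.4      # K
M = 28.96               # kg/kmol
R = 8.314               # m^3.kPa/(kmol.K)

rho = p * M / (R * T)
# For rho = pM/RT, the relative uncertainties in p and T add in quadrature
rel = math.sqrt((dp / p) ** 2 + (dT / T) ** 2)
drho = rho * rel

print(f"rho = {rho:.3f} ± {drho:.3f} kg/m^3 (relative error {rel:.2%})")
# rho = 1.155 ± 0.002 kg/m^3 (relative error 0.19%)
```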
Propagation of Probable Errors
• Probable errors
• General function of two variables
• Results for common functions

Probable Errors
• Based on standard deviations from multiple measurements
• Assume errors are random with a known distribution
• Propagation estimated using the Method of Quadrature

[Figure: Gaussian frequency distribution of repeated measurements about the mean.]
General Functions of Two Variables

$$z = f(x, y): \qquad s_z^2 = \left( \frac{\partial z}{\partial x} \right)^2 s_x^2 + \left( \frac{\partial z}{\partial y} \right)^2 s_y^2$$

Standard deviations and means:

$$s_x = \sqrt{\frac{\sum (x_i - \bar{x})^2}{n_x - 1}}, \qquad \bar{x} = \frac{\sum x_i}{n_x}$$

$$s_y = \sqrt{\frac{\sum (y_i - \bar{y})^2}{n_y - 1}}, \qquad \bar{y} = \frac{\sum y_i}{n_y}$$
Results for Common Functions
(a) Sum and difference of two variables:

$$z = x + y: \quad s_z = \sqrt{s_x^2 + s_y^2}$$
$$z = x - y: \quad s_z = \sqrt{s_x^2 + s_y^2}$$

(b) Products and quotients:

$$z = xy \ \text{or}\ z = \frac{x}{y}: \quad \left( \frac{s_z}{z} \right)^2 = \left( \frac{s_x}{x} \right)^2 + \left( \frac{s_y}{y} \right)^2$$

(c) Powers:

$$z = x^m y^n: \quad \left( \frac{s_z}{z} \right)^2 = \left( m\, \frac{s_x}{x} \right)^2 + \left( n\, \frac{s_y}{y} \right)^2$$
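A minimal Python sketch applying rule (b) to repeated measurements of two quantities (the readings are invented for illustration):

```python
import statistics as stats

x_readings = [10.1, 9.9, 10.0, 10.2, 9.8]    # hypothetical repeated measurements
y_readings = [4.05, 3.95, 4.00, 4.10, 3.90]

x_bar, s_x = stats.mean(x_readings), stats.stdev(x_readings)
y_bar, s_y = stats.mean(y_readings), stats.stdev(y_readings)

# z = x * y: relative standard deviations combine in quadrature
z = x_bar * y_bar
s_z = z * ((s_x / x_bar) ** 2 + (s_y / y_bar) ** 2) ** 0.5
print(f"z = {z:.2f} ± {s_z:.2f}")
```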
Linear Regression
• Line of best-fit by linear regression

• Standard errors of estimate

• Confidence intervals
Line of best-fit using linear regression

$$Y = ax + b$$

• We have n pairs of data, $(x_i, y_i)$, and we seek to fit a straight line of the form Y = ax + b
• For each $x_i$ (assumed error-free), we can predict a value $Y_i$ (according to Y = ax + b), such that for each value of $x_i$ we have an error: $e_i = Y_i - y_i$
Line of best-fit using linear regression
• The line of best-fit is determined by evaluating a and b so as to minimise the sum of the squares of the errors, E:

$$E = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (Y_i - y_i)^2 = \sum_{i=1}^{n} (ax_i + b - y_i)^2$$

Results:

$$a = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left( \sum x_i \right)^2}; \qquad b = \bar{y} - a\bar{x}$$

where:

$$\bar{x} = \frac{1}{n} \sum x_i \quad \text{and} \quad \bar{y} = \frac{1}{n} \sum y_i$$
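These formulas translate directly into Python; the data below are invented for illustration:

```python
# Least-squares slope and intercept from the formulas above (illustrative data)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
sum_x, sum_y = sum(xs), sum(ys)
sum_xy = sum(x * y for x, y in zip(xs, ys))
sum_x2 = sum(x * x for x in xs)

a = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
b = sum_y / n - a * sum_x / n

print(f"Y = {a:.3f} x + {b:.3f}")  # Y = 1.990 x + 0.050
```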
Line of best-fit using linear regression
• A measure of how well the best-fit line represents the data is the Standard error of estimate, $S_{y.x}$, given by:

$$S_{y.x} = \sqrt{\frac{\sum (y_i - Y_i)^2}{n - 2}}$$

• or, in terms of the Coefficient of determination, R²:

$$R^2 = 1 - \frac{n \sum (y_i - Y_i)^2}{n \sum y_i^2 - \left( \sum y_i \right)^2}$$

(R² = 1 ⇒ perfect line of best-fit)
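Continuing with the same illustrative data and the fit from the previous sketch, the standard error of estimate and R² follow directly:

```python
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = 1.99, 0.05                 # slope and intercept from the previous sketch
n = len(xs)

Y = [a * x + b for x in xs]       # fitted values
ss_res = sum((y - Yi) ** 2 for y, Yi in zip(ys, Y))

S_yx = math.sqrt(ss_res / (n - 2))
R2 = 1 - n * ss_res / (n * sum(y * y for y in ys) - sum(ys) ** 2)
print(f"S_y.x = {S_yx:.3f}, R^2 = {R2:.4f}")
```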


Line of best-fit using linear regression
• Applying standard statistical methods, it is then possible to evaluate the (1 − α) Confidence interval (often called a Prediction interval) for any value, $y_o$, predicted using Y = ax + b at $x = x_o$:

$$y_o = (ax_o + b) \pm S_{y.x}\, t_{\alpha/2,\, n-2} \sqrt{\frac{n+1}{n} + \frac{(x_o - \bar{x})^2}{\sum x_i^2 - n\bar{x}^2}}$$

• $t_{\alpha/2,\, n-2}$ is the value obtained from the Student’s t-distribution corresponding to α/2 and (n − 2) degrees of freedom
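A sketch of this prediction interval for the same illustrative data, assuming SciPy is available for the Student’s t quantile:

```python
from scipy import stats  # assumes SciPy is available for the t quantile

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
n = len(xs)
a, b, S_yx = 1.99, 0.05, 0.189   # fit results from the previous sketches

alpha = 0.05                      # 95% prediction interval
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)

x_bar = sum(xs) / n
Sxx = sum(x * x for x in xs) - n * x_bar ** 2

x_o = 3.5
y_o = a * x_o + b
half = S_yx * t_crit * ((n + 1) / n + (x_o - x_bar) ** 2 / Sxx) ** 0.5
print(f"y({x_o}) = {y_o:.2f} ± {half:.2f} (95% prediction interval)")
```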
Line of best-fit using linear regression
Critical Values of the t Distribution

$$\frac{P}{100} = 1 - 2\alpha$$

P = % confidence level
α = significance
ν = degrees of freedom
Line of best-fit using linear regression
Example: $C_A = C_{A0} \exp(-kt)$ or $\ln C_A = \ln C_{A0} - kt$

[Figure: filled green circles represent the experimental data and the line of best fit is shown with the solid line. The dashed lines mark the bounds of the 95% confidence interval on predictions for the linear model.]
Line of best-fit using linear regression
• Similarly, it is possible to estimate Confidence intervals for the regression constants a and b.
• The two-sided (1 − α) confidence interval for the slope, a, is:

$$\Delta a = \pm \frac{S_{y.x}\, t_{\alpha/2,\, n-2}}{\sqrt{\sum x_i^2 - n\bar{x}^2}}$$

• and the two-sided (1 − α) confidence interval for the intercept, b, is:

$$\Delta b = \pm S_{y.x}\, t_{\alpha/2,\, n-2} \sqrt{\frac{1}{n} \cdot \frac{\sum x_i^2}{\sum x_i^2 - n\bar{x}^2}}$$
References & Further Reading
• Baird, D.C. (1962). “Experimentation: An Introduction to Measurement Theory and Experiment Design,” Prentice-Hall.
• Cook, N.H. & Rabinowicz, E. (1963). “Physical Measurement and Analysis,” Addison-Wesley.
• Dieck, R.H. (1992). “Measurement Uncertainty: Methods & Applications,” Instrument Society of America.
• Harrison, D.M. (2001). “Error Analysis in Experimental Physics.” http://www.upscale.utoronto.ca/PVB/Harrison/ErrorAnalysis/Introduction.html
• Wheeler, A.J. & Ganji, A.R. (2010). “Introduction to Engineering Experimentation,” 3rd Edition, Pearson Higher Education.
