
Curs Metrologie

Metrology is the science of measurement.


Measurement today is more important than ever. We rely on measurements for almost everything that touches our lives - from measuring time, buying groceries (by weight, barcode for scanning prices) to predicting weather. From our homes to workplaces, measurements play a transparent but very important role in our lives. Engineering is the application of science to the needs of humanity. Engineers use their knowledge of science, mathematics and experience to find suitable solutions to a problem. They must evaluate multiple scenarios to find the best possible solution. Compromises are at the core of all engineering designs. The best design is the one that meets as many of the original requirements specified as possible. In order to evaluate multiple solutions to the design, physical measurements are made and the data analyzed. Predictions need to be made on how well the design will perform to its specifications before full scale production begins. Tests are performed using Prototype models, computer simulation, designed experiments, destructive and non-destructive tests, scale models and stress tests among the many other methods of evaluation. The typical engineer goes through several phases of career activity. It begins with formal education and an area of specialization in an engineering discipline. This is followed by work experience. During the work phase, the engineer interacts with other departments. These interactions can be with:

Marketing, to understand customer requirements.
The testing department, to specify test requirements and understand test results.
Manufacturing and production, to ensure that products are made to specifications.
Quality departments, to ensure quality requirements are met.
Calibration technicians, in the pre- and post-design phases.

Measurement is the most important function of daily commerce. Without accurate and repeatable measurements, global commerce would not exist and chaos would ensue. Needless to say, accurate and repeatable measurements are important to the engineering function in the design process. Imagine an engineer forming design conclusions on bad measurement data. If this involved design of a medical device or an aircraft, many lives would be at stake.

William Thomson, famously known as Lord Kelvin, Baron Kelvin of Largs (26 June 1824 - 17 December 1907), had many things to say about measurements and metrology that are even more appropriate in the present day.

Metrology may be divided into three categories:
1. Scientific or Fundamental Metrology, for:
o The establishment of measurement units and unit systems.
o Development and realization of measurement methods and standards.
o Transfer of traceability from the standards to users.
2. Applied or Industrial Metrology, for:
o Application of measurement science to manufacturing and other production processes and its use in society.
o Ensuring the suitability of measurement instruments and their calibration.
o Quality assurance of measurements.
3. Legal Metrology (regulating the use of measuring instruments and measurement requirements), for:
o Regulatory requirements for health, public safety and welfare.
o Monitoring the environment.
o Protection of consumers and fair trade.
o International trade and commerce.

This course is designed to familiarize engineers and scientists with Metrology, the science of measurement. A quantitative analysis of design based on sound metrology principles will help engineers and scientists to design better products and services.

The course is divided into the following main topics:
1. Definitions and Basic Concepts
2. Metrology Concepts
3. Measurement Parameters
4. Basic Statistics
5. Measurement Uncertainty
6. Applications

Each main topic is divided into several sub-topics. To select a main topic, click the Main Topics link on the navigation panel at the left. The course has built-in navigation, in the form of the menu links, Next and Back buttons, and other links. The Next and Back buttons take you forward one screen or back one screen, respectively. The course includes a Glossary, Resources, a Course Map, and a Course Notes option all accessible from the navigation panel at the left.

Metrology Concepts

Become familiar with historical origins of metrology concepts. Define metrological traceability. Define the seven base units of measurement. Identify various types of measurements. Define calibration. Become familiar with the calibration standards hierarchy. Understand concept of calibration intervals. Become familiar with the requirements for calibration records. Become aware of international standards relating to calibration and metrology.

Measurement Parameters

Understand basic definitions and concepts relating to electrical, dimensional, pressure, temperature, humidity, mass, optical radiation, acoustic, and chemical parameters. Identify instruments used to measure electrical, dimensional, pressure, temperature, humidity, mass, optical radiation, acoustic, and chemical parameters. Become familiar with standards and calibration concepts.

Basic Statistics

Perform basic calculations including probability, percentage, mean, median, mode, range, standard deviation, standard error of mean. Review histogram, bell curve. Perform a z-table calculation.

Reliability Statistics

Define reliability. Identify different types of failures. Identify different probability distributions. Understand the difference between time-terminated and failure-terminated tests. Understand normal, exponential, binomial and Weibull probability distributions. Become familiar with concepts relating to system reliability.

Measurement Uncertainty

Define measurement uncertainty. Become familiar with publications relating to measurement uncertainty. Walk through a seven-step process for determining measurement uncertainty. Understand concept of test uncertainty ratios. Become familiar with Guard Banding techniques.

Applications

Understand Type I and Type II errors. Become familiar with types of design failure. Understand criteria for analog or digital instrument selection. Review six tips for improving measurements. Become familiar with quality issues of verification and validation. Understand aspects of safety characteristics including definitions, design requirements, and quantification of safety. Understand Failure Mode, Effect, and Criticality Analysis (FMEA). Define test specification. Calculate a test specification. Learn about assigning appropriate tolerances using Chi-square. Learn about assigning appropriate tolerances using control charts.

Understand process capability. Become familiar with aspects of calibration including Mutual Recognition Agreements. Review an assessment checklist used in equipment/product design.

Learning Objectives

Metrology Concepts

Become familiar with historical origins of metrology concepts. Define metrological traceability. Define the seven base units of measurement. Identify various types of measurements. Define calibration. Become familiar with the calibration standards hierarchy. Understand concept of calibration intervals. Become familiar with the requirements for calibration records. Become aware of international standards relating to calibration and metrology.

History
The earliest measurement of mass seems to have been based on objects, such as seeds and beans, being weighed. In ancient times, measurement of length was based on factors relating to the human body including the length of a foot, the length of a stride, the span of a hand, and the breadth of a thumb. A vast number of measurement systems were developed in early times, most of them used only in small localities.

Egyptian Cubit
Although evidence suggests that many early civilizations devised standards of measurement and tools for measuring, the Egyptian cubit is generally thought to have been the closest to a universal standard of linear measurement in the ancient world. Developed in Egypt about 3000 BCE, the cubit was based on the length of the arm from the elbow to the extended fingertips of the current ruling Pharaoh. The cubit was standardized by a royal master cubit made of black granite for durability. Cubit sticks (usually wooden) used to construct tombs, temples and pyramids were regularly checked against the granite standard cubit. Pharaohs, priests, royal architects and construction foremen probably enforced the use of this length standard. It is said that this comparison against the granite master cubit was performed at every full moon (a calibration interval) and that failure to do so was punishable by death!

Although the punishment prescribed was severe, the Egyptians had anticipated the spirit of the present day system of legal metrology, standards, traceability and calibration recall. With this standardization and uniformity of length, the Egyptians achieved surprising accuracy. Thousands of workers were engaged in building the Great Pyramid of Giza. Through the use of cubit sticks, they achieved an accuracy of 0.05%. In roughly 756 feet or 9,069.4 inches, they were within 4 1/2 inches. The cubit usually is equal to about 18 inches (457 mm) although in some cultures it was as long as 21 inches (531 mm). The royal cubit was subdivided in an extraordinarily complicated way. The basic subunit was the digit, about a finger's breadth, of which there were 28 in the royal cubit.

Four digits equaled a palm, five a hand. Twelve digits, or three palms, equaled a small span. Fourteen digits, or one-half a cubit, equaled a large span. Sixteen digits, or four palms, made one t'ser. Twenty-four digits, or six palms, equaled a small cubit.

The Indus (Harappan) Civilization achieved great accuracy in measuring length, mass and time. They were among the first to develop a system of uniform weights and measures. The Harappan civilization flourished in the present-day Punjab region of India and Pakistan between 2500 BCE and 1700 BCE. Their measurements were extremely precise. Their smallest division, which was marked on an ivory scale, was approximately 1.704 mm, the smallest division ever recorded on a scale of the Bronze Age. Harappan engineers followed the decimal division of measurement for all practical purposes, including the measurement of mass as revealed by their hexahedron weights. A hexahedron is a polyhedron with six faces. A regular hexahedron, with all its faces perfect squares, is a cube. The Harappans appear to have adopted a uniform system of weights and measures. An analysis of the weights discovered in excavations suggests that they had two different series, both decimal in nature, with each decimal number multiplied and divided by two. The main series has ratios of 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, and 500, with each unit weighing approximately 28 grams, similar to the English Imperial ounce or Greek uncia; smaller objects were weighed in similar ratios with units of 0.871. Several scales for the measurement of length were also discovered during excavations. One was a decimal scale based on a unit of measurement of 1.32 inches (3.35 centimetres), which has been called the "Indus inch". Of course ten units is then 13.2 inches (33.5 centimetres), which is quite believable as the measure of a "foot", although this suggests the Harappans had rather large feet! Another scale was discovered when a bronze rod was found to have marks in lengths of 0.367 inches. The accuracy with which these scales are marked is surprising. Now 100

units of this measure is 36.7 inches (93 centimetres), which is about the length of a stride. Measurements of the ruins of the buildings which have been excavated show that these units of length were accurately used by the Harappans in their construction. Brick sizes were in a perfect ratio of 4:2:1.

The Bureau International des Poids et Mesures, the BIPM, was established by Article 1 of the Convention du Mètre on 20 May 1875. The General Conference on Weights and Measures in 2000 designated 20 May as "World Metrology Day" to commemorate the signing of the Metre Convention. The BIPM is charged with providing the basis for a single, coherent system of measurements to be used throughout the world. The decimal metric system, dating from the time of the French Revolution, was based on the metre and the kilogram. Under the terms of the 1875 Convention, new international prototypes of the metre and kilogram were made and formally adopted by the first Conférence Générale des Poids et Mesures (CGPM) in 1889. Over time this system developed, so that it now includes seven base SI units.

Laws to regulate measurement were originally developed to prevent fraud. However, units of measurement are now generally defined on a scientific basis, and are established by international treaties. In the United States, commercial measurements are regulated by the National Institute of Standards and Technology (NIST), a division of the United States Department of Commerce. Each country has its own National Measurement Institute (NMI) to regulate measurements in their country.

Names of the National Measurement Institutes of some countries:


Canada: National Research Council (NRC)
United Kingdom: National Physical Laboratory (NPL)
Germany: Physikalisch-Technische Bundesanstalt (PTB)
Mexico: Centro Nacional de Metrología (CENAM)
United States: National Institute of Standards and Technology (NIST)

Definition
A fundamental concept in metrology is metrological traceability which is defined as: "The property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons, all having stated uncertainties." The level of traceability establishes the level of comparability of the measurement: whether the result of a measurement can be compared to the previous one, to a measurement result a year ago, or to the result of a measurement performed anywhere else in the world.

Calibration Traceability
Traceability is most often obtained by calibration, establishing the relation between the indication of a measuring instrument and the value of a measurement standard.

Measurement Uncertainty
Measurement uncertainty is an integral part of establishing traceability. The topic of measurement uncertainty is explored in another portion of this course. For now, just note that there is a relationship between measurement uncertainty and traceability.

English (Imperial) Units of Measurement


If you think in pounds and miles instead of kilograms and kilometers, you're in the minority. Only the United States, Liberia, and Burma (Myanmar) still primarily use English (Imperial) units; the rest of the world uses the metric system (SI units).

There is an advantage to the whole world using the same system of units. The confusion that can arise from using mixed units was highlighted by the loss of the Mars Climate Orbiter robotic probe in 1999. This incident occurred because a contractor provided thruster firing data in English units while NASA was using metric units. The result: millions of dollars wasted.

The Seven Base Quantities


The seven base quantities corresponding to the seven base units are:
1. Length
2. Mass
3. Time
4. Electric Current
5. Thermodynamic Temperature
6. Amount of Substance
7. Luminous Intensity

Unit of Length
meter, metre, m The meter is the length of the path traveled by light in vacuum during a time interval of 1/299792458 of a second. It follows that the speed of light in vacuum, c0, is exactly 299 792 458 m/s.

Unit of Mass
kilogram, kg The kilogram is the unit of mass; it is equal to the mass of the international prototype of the kilogram. It follows that the mass of the international prototype of the kilogram, m(K), is exactly 1 kg. The international prototype of the kilogram (kg) is the only remaining physical artifact used to define a base unit of the SI. The kilogram is also in the process of being redefined in terms of natural physical constants in the very near future.

Unit of Time
second, s The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom. It follows that the hyperfine splitting in the ground state of the caesium 133 atom, ν(hfs Cs), is exactly 9 192 631 770 Hz.

Unit of Electric Current


ampere, A The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed 1 metre apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newton per metre of length. It follows that the magnetic constant, μ0, also known as the permeability of free space, is exactly 4π × 10⁻⁷ H/m.

Unit of Thermodynamic Temperature


kelvin, K The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water. It follows that the thermodynamic temperature of the triple point of water, Ttpw , is exactly 273.16 K.

Unit of Amount of Substance


mole, mol 1. The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12. 2. When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles. It follows that the molar mass of carbon 12, M(12C), is exactly 12 g/mol.

Unit of Luminous Intensity


candela, cd The candela is the luminous intensity, in a given direction, of a source that emits monochromatic

radiation of frequency 540 × 10¹² hertz and that has a radiant intensity in that direction of 1/683 watt per steradian. It follows that the spectral luminous efficacy, K, for monochromatic radiation of frequency 540 × 10¹² Hz is exactly 683 lm/W.

Derived Quantities and Units


All quantities other than base unit quantities are described as derived quantities. Derived quantities are measured using derived units, which are defined as products of powers of the base units. View a diagram of relationships between base and derived SI units. A measurement is made by comparing an unknown quantity to a known quantity that is established as a standard or a reference that has a known traceability. The measurement is then expressed in some quantity through its relationship to the reference. There are several different types of measurements:

Direct Measurements
Indirect Measurements
Zero Difference, or Null Measurements
Ratio Measurements
Transfer Measurements
Ratio-Transfer Measurements
Differential Measurements
Substitution Measurements

Let's briefly look at each in turn.

Direct Measurements
When an unknown quantity is measured with a measuring instrument, it is called a direct measurement. Measuring a length with a digital caliper would be considered a direct measurement. Measuring voltage with a Multimeter is a direct measurement.

Indirect Measurements
When the quantity of interest is not measured directly, but is expressed as a derivative of other measured quantities, it is called an indirect measurement. When pressure is measured, it is derived from force and area measurements. The specific gravity of an object is derived from other units.

Zero Difference, or Null Measurements


When a measurement is made between an unknown quantity and a known reference and the comparison is made by a difference or a null indicator, it is known as the zero difference or null measurement. Calibrating a gage block with a comparator and standard gage blocks would be considered a null measurement. Another example would be a mass balance that has two pans and a zero indicator.

Ratio Measurements
This measurement technique is similar to the null measurement technique. The difference is that the reference is fixed. The zero indicator is adjusted through a divider circuit and the measurement is a result of a product of the reference and the divider adjustment. A beam balance and an electrical Wheatstone Bridge are examples of ratio measurements.

Ratio Measurements -- Wheatstone Bridge


The operation of the Wheatstone Bridge is based on two voltage dividers, both supplied by the same input. The circuit output is taken from both voltage divider outputs. A galvanometer (a very sensitive dc current meter) connected between the output terminals is used to monitor the current flowing from one voltage divider to the other. If the two voltage dividers have exactly the same ratio (R1/R2 = R3/R4), then the bridge is said to be balanced with no current flowing in either direction through the galvanometer. If one of the resistors changes, even minutely, the bridge becomes unbalanced resulting in current flowing through the galvanometer. Thus, the galvanometer becomes a very sensitive indicator of the balance condition.

Transfer Measurements

A transfer standard is used whenever it is not practical to use a reference standard. For example, it may not be practical to use a reference standard in the laboratory or the field. A transfer standard, which may be more suited in terms of robustness and portability, is substituted for the reference standard. The transfer standard is first compared against the reference standard and then used in the laboratory or the field for calibration.

Ratio-Transfer Measurements
Sometimes, it is necessary to measure larger or smaller quantities than what is available for reference standards. This is achieved using the ratio-transfer method. An electrical resistance (Wheatstone) bridge circuit is one example of this method.

Ratio-Transfer Measurements (continued)


A dc voltage (E) is applied to the Wheatstone Bridge, and a galvanometer (G) is used to monitor the balance condition. The values of R1 and R3 are precisely known, but do not have to be identical. R2 is a calibrated variable resistance, whose current value may be read from a dial or scale. An unknown resistor, RX, is connected as the fourth side of the circuit, and power is applied. R2 is adjusted until the galvanometer, G, reads zero current. At this point, RX = R2R3/R1. This circuit is most sensitive when all four resistors have similar resistance values. However, the circuit works quite well in any event. If R2 can be varied over a 10:1 resistance range and R1 is of a similar value, decade values of R3 values can be switched into and out of the circuit according to the range of value expected from RX. Using this method, RX can be accurately measured by moving one multiple-position switch and adjusting one precision potentiometer.
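As an illustration, the balance relation above can be turned into a one-line calculation. This is a minimal sketch; the resistor values are hypothetical and not taken from the course:

# Sketch: computing an unknown resistance from a balanced Wheatstone bridge
# using the relation RX = R2 * R3 / R1 stated above.

def unknown_resistance(r1_ohms, r2_ohms, r3_ohms):
    """Return RX when the bridge is balanced (zero galvanometer current)."""
    return r2_ohms * r3_ohms / r1_ohms

# Hypothetical example: R1 = R3 = 1000 ohm, and the calibrated variable
# resistor reads R2 = 472.5 ohm at balance.
rx = unknown_resistance(1000.0, 472.5, 1000.0)
print(f"RX = {rx:.1f} ohm")   # RX = 472.5 ohm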

Differential Measurements
The measured quantity desired is sometimes the difference between the unknown and the reference standard. A thermocouple measures the difference in temperature between its measuring junction

and its reference junction in terms of an electrical signal. The two-pan scale measures the difference between the quantity being measured and the known weight.


Substitution Measurements
A measuring instrument can sometimes only display one measurement. For example, an electronic scale with a display can only show one measurement at a time. By alternately weighing a known reference weight and the unknown weight, and applying a substitution technique, the inaccuracy of the balance itself is taken into account using a mathematically derived correction.
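A minimal sketch of the idea, with hypothetical readings (the course does not give a worked example): the balance error cancels to first order because the reference and the unknown see the same instrument under the same conditions.

# Sketch of substitution weighing on a single-display balance (assumed
# procedure for illustration, not the course's own worked method).

def substitution_weight(reference_mass_g, reading_reference_g, reading_unknown_g):
    """Unknown mass = reference mass + (unknown reading - reference reading)."""
    return reference_mass_g + (reading_unknown_g - reading_reference_g)

# Hypothetical readings: a 100.000 g reference reads 100.012 g on the balance,
# and the unknown reads 100.047 g under the same conditions.
print(substitution_weight(100.000, 100.012, 100.047))   # 100.035 g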

Calibration

What is it?
Calibration is a set of operations that establish, under specified conditions, the relationship between values of quantities indicated by a measuring instrument (or values represented by a material measure) and the corresponding values realized by standards.

Calibration

What is it? (continued)


Calibration of measuring instruments is a fundamental concept in quality assurance. Calibration is a word which is sometimes misunderstood. Calibration of an instrument means determining by how much the instrument reading is in error by checking it against a measurement standard of known error.

What is it? (continued)

A calibration is thus not usually associated with approval. It "only" gives information about the error of the equipment with respect to an accepted reference value. As a consequence, it is up to the user to decide whether the equipment is sufficiently capable of performing a certain measurement. One possibility is to have the instrument certified. Calibration is a process. It can be as simple as verifying one measurement or a series of steps to derive one value. It is very important to calibrate equipment to provide assurance of its accuracy.

Calibration Standards

Not all calibration standards are equal. Calibration standards have a hierarchy similar to the traceability hierarchy described earlier in this course.

Calibration Standards

International Reference Standards


International reference standards are those agreed upon and maintained by BIPM. The standard kilogram is one example of the international standards (Note that the kilogram is the only SI unit artifact that is a physical standard and that too is in the process of being redefined.)

Calibration Standards

National Primary Standards


The national primary standards are those maintained by the National Metrology Institutes (NMI) such as NIST in the United States. These national primary standards are periodically compared to the standards maintained by BIPM to ensure traceability.

Transfer Standards
Transfer standards are those used as a reference so that the wear and tear on the primary standards is minimized. They are compared with primary standards to ensure traceability.

Working Standards
Working standards are those used in the daily calibration process for verification and comparison. Their traceability comes from transfer standards.

General Shop Level Standards


These standards are used at the local level to calibrate instruments that are used in industry at the general level.

Device Under Calibration


These are instruments used in the daily process level measurements that derive their calibration traceability from the shop level standards.
It should be noted that many laboratories and organizations define a similar level of hierarchy for their standards at a local level. Their most accurate standard with the lowest measurement uncertainty is then sent out to a primary level laboratory for calibration and traceability.

A Myth
A myth has existed mainly in the United States about "NIST Traceability" through NIST Test Numbers. When NIST verifies (note: NIST does not calibrate) an artifact and states the value measured, it issues a report stating its findings. Many organizations and laboratories use these NIST Report Numbers to claim traceability. This runs counter to the definition of traceability and should NOT be used. NIST itself does not endorse the use of NIST Report Numbers for traceability claims. Periodic calibration of equipment is performed to ensure that the Instrument, Measuring and Test Equipment (IMTE) do not result in an out of tolerance condition in between the calibration cycle. Measurement uncertainties of certain attributes in equipment tend to grow over a period of time since the previous calibration. Ignoring this information can result in equipment being out of tolerance before the next calibration interval. Uncertainty growth is illustrated in the figure below.

Many standards (ISO 9001, ISO 17025 and ISO/TS 16949) require that equipment be calibrated at regular, established intervals to ensure that acceptable accuracy and reliability of the equipment is maintained. None of the standards specify the exact interval of calibration. The calibration interval is left to the end user of equipment based on their usage and regulatory requirements. Optimizing calibration intervals will improve compliance with standards and help to ensure equipment reliability.

While periodic calibration of equipment does not prevent the equipment from being out of tolerance between the calibration, it does minimize the risk if the calibration interval is determined using a scientific approach.

Note
The topic of Calibration Interval Analysis is covered in detail in a separate course titled "Interval Analysis" available from WorkPlace Training. The record of test/calibration is contained in the Test/Calibration Report which is the result of the calibration. The report should be clear, concise, and easy to understand and use, and contain information about traceability and measurement uncertainty. A calibration report should contain all the information required by the pertinent calibration standards such as ISO 17025, ISO 9001, ISO/TS 16949, and other regulatory agencies. Read ISO 17025:2005 Sections 5.10.2 through 5.10.4.4. View example of a calibration report that meets the requirements of ISO 17025:2005 Sections 5.10.2 through 5.10.4.4.

International Standards related to Calibration and Metrology and Laboratory Accreditation


International standards such as ISO 17025 and ANSI/Z540 are applicable to calibration and testing laboratories seeking accreditation based on compatibility with international standards. The program activities are operated in conformance with ISO/IEC 17011 and 17025. Accreditation is available to commercial labs, manufacturer's in-house labs, university labs, and Federal, State and local government facilities. Many users of calibration and testing services look to laboratory accreditation and similar efforts to provide some assurance of the technical proficiency and competence of a laboratory to assess the conformance of a product or service to a set of prescribed standards. Requirements and assessment criteria for calibration and testing vary according to the product, system, or service being provided. National governments at various levels have laboratory accreditation programs as well as private-sector professionals and trade organizations.

In the United States, organizations such as the National Voluntary Laboratory Accreditation Program (NVLAP) administered by NIST, the American Association for Laboratory Accreditation (A2LA), the International Accreditation Service (IAS), the Laboratory Accreditation Bureau (LAB) and ACLASS provide third-party assessments for laboratory accreditation. Many other countries usually have only one assessment body, under the government umbrella, that provides the accreditation service.

Measurement Parameters

Understand basic definitions and concepts relating to electrical, dimensional, pressure, temperature, humidity, mass, acoustic, optical radiation, and chemical parameters. Identify instruments used to measure electrical, dimensional, pressure, temperature, humidity, mass, acoustic, optical radiation, and chemical parameters. Become familiar with standards and calibration concepts.

Electrical

SI Base Unit
The basic SI unit for current is the Ampere.

Traceability
Traceability for the electrical parameter is defined in the following way:

Electrical measurements are measurements of the many quantities by which the behavior of electricity is characterized. Measurements of electrical quantities extend over a wide dynamic range and frequencies ranging from 0 to 10¹² Hz. The International System of Units (SI) is in universal use for all electrical measurements. Electrical measurements are ultimately based on comparisons with realizations, that is, reference standards, of the various SI units. These reference standards are maintained by the National Institute of Standards and Technology in the United States, and by the national standards laboratories of many other countries.

Direct-current (dc) measurements include measurements of resistance, voltage, and current in circuits in which a steady current is maintained. Resistance is defined as the ratio of voltage to current. For many conductors this ratio is nearly constant, but depends to a varying extent on temperature, voltage, and other environmental conditions. The best standard resistors are made from wires of special alloys chosen for low dependence on temperature and for stability. The SI unit of resistance, the ohm, is realized by means of a quantized Hall resistance standard. This is based upon the value of the ratio of fundamental constants h/e², where h is Planck's constant and e is the charge of the electron, and does not vary with time.

The principal instruments for accurate resistance measurement are bridges derived from the basic four-arm Wheatstone bridge, and resistance boxes. Many multirange digital electronic instruments measure resistance potentiometrically, that is, by measuring the voltage drop across the terminals to which the resistor is connected when a known current is passed through them. The current is then defined by the voltage drop across an internal reference resistor.

For high values of resistance, above a megohm, an alternative technique is to measure the integrated current into a capacitor (over a suitably defined time interval) by measuring the final capacitor voltage. Both methods are capable of considerable refinement and extension.

The SI unit of voltage, the volt, is realized by using arrays of Josephson junctions. This standard is based on frequency and the ratio of fundamental constants e/h, so the accuracy is limited by the measurement of frequency. Josephson arrays can produce voltages between 200 μV and 10 V. At the highest levels of accuracy, higher voltages are measured potentiometrically, by using a null detector to compare the measured voltage against the voltage drop across a tapping of a resistive divider, which is standardized (in principle) against a standard cell.

The Zener diode reference standard is the basis for most commercial voltage measuring instruments, voltage standards, and voltage calibrators. The relative insensitivity to vibration and other environmental and transportation effects makes the diodes particularly useful as transfer standards. Under favorable conditions these devices are stable to a few parts per million per year. Most dc digital voltmeters, which are the instruments in widest use for voltage measurement, are essentially analog-to-digital converters which are standardized by reference to their built-in

reference diodes. The basic range in most digital voltmeters is between 1 and 10 V, near the reference voltage. Other ranges are provided by means of resistive dividers, or amplifiers in which gain is stabilized by feedback resistance ratios. In this way these instruments provide measurements over the approximate range from 10 nanovolts to 10 kV. The most accurate measurements of direct currents less than about 1 A are made by measuring the voltage across the potential terminals of a resistor when the current is passed through it. Higher currents, up to about 50 kA, are best measured by means of a dc current comparator, which accurately provides the ratio of the high current to a much lower one which is measured as above. At lower accuracies, resistive shunts may be used up to about 5000 A, but the effective calibration of such shunts is a difficult process.
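As a simple illustration of the shunt technique described above, current can be inferred from the voltage drop across a calibrated resistor. This is a sketch with hypothetical values (a 1 milliohm shunt is assumed; the text does not specify one):

# Sketch: direct current from the voltage drop across the potential
# terminals of a known resistor, I = V / R.

def current_from_shunt(voltage_drop_v, shunt_resistance_ohm):
    return voltage_drop_v / shunt_resistance_ohm

# Hypothetical example: 75.0 mV measured across a 0.001 ohm shunt.
i_amps = current_from_shunt(0.075, 0.001)
print(f"I = {i_amps:.1f} A")   # I = 75.0 A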

Alternating-current (ac) voltages are established with reference to the dc voltage standards by the use of thermal converters. These are small devices, usually in an evacuated glass envelope, in which the temperature rise of a small heater is compared by means of a thermocouple when the heater is operated sequentially by an alternating voltage and by a reference (dc) voltage. Resistors, which have been independently established to be free from variation with frequency, permit direct measurement of power frequency voltages up to about 1 kV. Greater accuracy is provided by multijunction (thermocouple) thermal converters, although these are much more difficult and expensive to make. Improvements in digital electronics have led to alternative approaches to ac measurement. For example, a line frequency waveform may be analyzed by using fast sample-and-hold circuits and, in principle, be calibrated relative to a dc reference standard. Also, electronic root-mean-square detectors may now be used instead of thermal converters as the basis of measuring instruments.

Voltages above a few hundred volts are usually measured by means of a voltage transformer, which is an accurately wound transformer operating under lightly loaded conditions. The principal instrument for the comparison and generation of variable alternating voltages below about 1 kV is the inductive voltage divider, a very accurate and stable device. Inductive voltage dividers are widely used as the variable elements in bridges or measurement systems.

Alternating currents of less than a few amperes are measured by the voltage drop across a resistor, whose phase angle has been established as adequately small by bridge methods. Higher currents are usually measured through the use of current transformers, which are carefully constructed (often toroidal) transformers operating under near-short-circuited conditions. The performance of a current transformer is established by calibration against an ac current comparator, which establishes precise current ratios by the injection of compensating currents to give an exact flux balance.

The inductive voltage divider is the principal instrument for the comparison and generation of __________________ below about 1 kV.
A. line frequency phase
B. current transformer
C. variable alternating voltages
D. capacitor voltage

Electrical quantities most commonly measured are:

voltage
resistance
current

Depending on their capabilities, multimeters can measure a wide range of direct and alternating current characteristics, including:

volts
ohms
amps

The readout can be:


analog, using an indicator needle
digital

Dimensional

SI Base Unit
The SI base unit for length is the meter. A meter is defined as the length of the path traveled by light in a vacuum during a time interval of 1/299,792,458 seconds.

Traceability
Traceability for the dimensional parameter is defined in the following way:

In this section we will look at the following instruments for measuring length:

Optical Comparator
Micrometers
Working Standards
Dial Indicator
Height Gage
Calipers
Laser-Scan Micrometer

Calipers

Calipers measure short distances. A typical caliper can measure dimensions such as length, width, depth, and diameter up to six inches. A typical dial caliper has a resolution of 0.001 inch. A typical digital caliper has a resolution of 0.0005 inch.

Micrometers
Micrometers usually have greater resolution than calipers. Typical digital micrometers are accurate to 0.0001 inch.

Height Gages
Height gages are used to measure the height of objects. A typical height gage can measure objects 100-300 mm high.

Dial Indicators

Dial indicators measure any change in the position of an object. A typical dial indicator has a resolution of 0.0001 inch. Dial indicators are useful for measuring short movements and surface changes. As a shaft turns or an object moves against the indicator, the indicator measures any change in position or surface shape.

Optical Comparator
Optical comparators use an optical system to illuminate surfaces, projecting enlarged shadows onto a calibrated screen. The shadows can be either measured or compared to a template.

Laser-Scan Micrometer

Laser-scan micrometers emit scanning laser beams, which are produced by optical units mounted within a frame. The laser beams are used to accurately measure such part dimensions as length, width, depth, diameter, roundness, and gaps. This type of instrument is especially useful for measuring brittle or elastic objects that are difficult to measure with calipers or micrometers. Laser-scan micrometers can only be used to measure small parts. Metrology lasers also use laser beams to measure distances. However, unlike laser-scan micrometers, the optical units are not contained within the frame. They can be mounted directly on the unit under test.

They can measure much longer distances than other systems. The resolution of a metrology laser can be as much as 0.0000001 inch.

Working Standards

A kit of working standards can also be used to calibrate measurement instruments. This kit is a set of gage blocks. Gage blocks are specified in four grades:
1. Grade K
2. Grade 0
3. Grade 1
4. Grade 2

Grade K is the most accurate gage grade. Grade 2 is the least accurate gage grade. The specifications of all four gage block grades are defined in the ISO 3650 standard.

Check Your Understanding


What measurement instrument is most appropriate for measuring the diameter of metal shafts that vary in diameter from about 4.9 to about 5.1 inches? Accuracy is required to the nearest 0.001 inch.
A. 2-3 in. micrometer with a resolution of 0.000001 inch
B. Dial caliper with a resolution of 0.0001 inch
C. Dial indicator with a resolution of 0.001 inch

D. Height indicator with a resolution of 0.0001 inch

Check Your Understanding


What measurement instrument is most appropriate for measuring the roundness of rubber tubes whose diameters range from 0.3 to 0.8 inches? Accuracy is required to the nearest 0.01 inch.
A. Laser scan micrometer with a resolution of 0.001 inch
B. Dial caliper with a resolution of 0.001 inch
C. 0-1 inch micrometer with a resolution of 0.001 inch
D. None of the above

What measurement instrument is most appropriate for measuring the height of a block with dimensions of approximately 3 inches by 5 inches by 9 inches? Accuracy is required to the nearest 0.0001 in.
A. 2-3 in. micrometer with a resolution of 0.0001 inch
B. 6-inch dial caliper with a resolution of 0.0001 inch
C. Dial indicator with a resolution of 0.0001 inch
D. 12-inch height gage with a resolution of 0.00005 inch

SI Base Unit
The SI unit of pressure is the newton (N) per square meter (m²). This unit is given the name pascal (Pa). The common non-metric unit of pressure is the pound-force per square inch (lbf/in²). Other units of pressure are the millimeter of mercury and the inch of water, which are used when measuring pressure with a manometer.

The earth's atmosphere exerts pressure on all objects on it. This pressure is known as Atmospheric Pressure. Pressure that is measured in reference to the atmospheric pressure is known as Gage Pressure. Absolute Pressure is defined as:

Absolute Pressure = Atmospheric Pressure + Gage Pressure

Differential Pressure is the difference in pressure at two different points.

Which of the following is NOT a type of pressure?

A. Absolute B. Differential C. Gage D. Dew

Traceability
Traceability for the pressure parameter is defined in the following way:

Pressure is defined as:

Pressure = Force / Area

Pressure is the force acting against a given surface within a closed container. In the figure, a weight of 1 pound (the force) is shown pressing down on water in a glass cylinder.

The cylinder has a cross-sectional area of 1 square inch. Pressure on the water is, therefore, 1 pound per square inch (usually abbreviated 1 psi). Pressure is measured in terms of the weight units per unit area in this fashion. This pressure acts uniformly in all directions against the side walls of the cylinder and the bottom. If the cylinder is connected to a vertical glass tube A, the 1 pound per square inch pressure forces the water up the tube. As the water column rises, it develops a downward force due to the weight of the water. When this force just balances the force of the 1 pound weight, the water stops rising. The height, H, to which the water rises is about 2.3 feet. The height of the column varies with the density (relative heaviness) of the liquid. If mercury, which is 13.6 times as dense as water, were used in this example, it would only have to rise about 2 inches to balance the 1 pound per square inch pressure of the weight, W. The column of liquid that develops a pressure due to its height is spoken of as a "head" of that liquid. For example, the 2.3 feet of water is a 2.3-foot head of water; the 2 inches of mercury is a 2-inch head of mercury. Observe that the name of the liquid must be stated because it affects the numerical value of the head. The relationship between pressure and head is:

P = H x D

Where:
P = Pressure
H = Head (or height of liquid column)
D = Density of liquid
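The head figures quoted above can be checked with the P = H x D relation. This is a rough sketch assuming a water density of about 0.0361 lb per cubic inch, a value not given in the text:

# Sketch: head-to-pressure check for the 2.3 ft water head and the
# 2 in mercury head mentioned above.

WATER_DENSITY_LB_IN3 = 0.0361                        # approximate, assumed
MERCURY_DENSITY_LB_IN3 = 13.6 * WATER_DENSITY_LB_IN3

def pressure_psi(head_inches, density_lb_per_in3):
    return head_inches * density_lb_per_in3

print(pressure_psi(2.3 * 12, WATER_DENSITY_LB_IN3))   # ~1.0 psi for 2.3 ft of water
print(pressure_psi(2.0, MERCURY_DENSITY_LB_IN3))      # ~1.0 psi for 2 in of mercury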

Manometer
A U-tube manometer is one of the simplest instruments capable of accurately measuring pressure.

Manometer
Because water requires a height of 701.04 millimeters to measure only 1 psi, water is used for relatively low pressures. For higher pressures, mercury is used in manometers and the height is graduated in millimeters of mercury. The mathematical relationship for the manometer is defined as:

P = P0 + h x ρ x g

Where:
P is the unknown pressure and P0 is the atmospheric pressure.
h is the difference in height of the fluid level between the two columns.
ρ is the density of the liquid in the manometer.
g is the local acceleration due to gravity.

A mechanical device such as the one shown below is also used to measure low pressure.
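A small sketch of the manometer relation follows, using assumed property values (standard atmosphere, mercury density near 0 °C, standard gravity) that are not part of the course text:

# Sketch: absolute pressure from a mercury U-tube reading, P = P0 + h * rho * g.

ATMOSPHERIC_PA = 101325.0      # standard atmosphere (assumed)
RHO_MERCURY = 13595.1          # kg/m^3, mercury near 0 C (assumed)
G_STANDARD = 9.80665           # m/s^2, standard gravity

def absolute_pressure_pa(height_diff_m, rho_kg_m3, p0_pa=ATMOSPHERIC_PA, g=G_STANDARD):
    return p0_pa + height_diff_m * rho_kg_m3 * g

# A 250 mm difference between the two mercury columns:
print(absolute_pressure_pa(0.250, RHO_MERCURY))   # ~134,656 Pa absolute (~33.3 kPa gage)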

Boyle's Law
This law describes compression. It states that, at a fixed temperature, the volume of a given quantity of gas varies inversely with the pressure exerted on it. To state this as an equation:

P1 x V1 = P2 x V2

where the subscripts refer to the initial and final states, respectively. In other words, if the pressure on a gas is doubled, then the volume will be reduced by one-half. The product of the two quantities remains constant. Similarly, if a gas is compressed to half its previous volume, the pressure it exerts will be doubled.

Boyle's Law
In the diagram, when 8 cu. ft. of gas is compressed to 4 cu. ft., the 30 psia reading will double to 60 psia.
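The figure's numbers can be reproduced directly with Boyle's Law; a minimal sketch:

# Sketch: Boyle's Law, P1 * V1 = P2 * V2, at constant temperature
# (pressures must be absolute).

def final_pressure_psia(p1_psia, v1, v2):
    return p1_psia * v1 / v2

print(final_pressure_psia(30.0, 8.0, 4.0))   # 60.0 psia when 8 cu. ft. is compressed to 4 cu. ft.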

What is the approximate volume when one cubic meter of gas is compressed by 25%? A. 25 cubic decimeters B. 75 cubic centimeters C. 750,000 cubic centimeters D. 750 milliliters

Boyle's Law
Designers use Boyle's Law calculations in a variety of situations: when selecting an air compressor, for calculating the consumption of compressed air in reciprocating air cylinders, and for determining the length of time required for storing air. Boyle's Law, however, may not always be practical because of temperature changes. Temperature increases with compression, and Charles' Law then applies.

Charles' Law
Charles' law states that, at constant pressure, the volume of a gas varies directly with its absolute temperature. Absolute temperature is defined on a scale where zero represents the temperature at which all thermal motion ceases (-273 degrees C or -460 degrees F). The two absolute temperature scales in common use are the Kelvin scale, which uses the same degree as the Celsius scale, and the Rankine scale, which uses the Fahrenheit degree.

Expressed as an equation, Charles' Law states:

V1 / T1 = V2 / T2

where the subscripts refer to the initial and final states.

Charles' Law
The basic concepts of Charles' Law are summarized below. When the temperature of a gas is increased, the volume changes proportionately (as long as the pressure does not change). The same relationship holds with temperature and pressure, as long as volume does not change.

Combined Gas Law


What if both temperature and pressure are changed at the same time? Another important mathematical description, the Combined Gas Law, can be derived from Boyle's and Charles' Laws. It states that:

P1 x V1 / T1 = P2 x V2 / T2

This makes it possible to calculate any one of the three quantities (pressure, volume, or temperature) as long as the other two are known.
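A short sketch of the Combined Gas Law, solved for the final volume; the numbers are hypothetical, and the temperatures must be absolute:

# Sketch: Combined Gas Law, P1*V1/T1 = P2*V2/T2, rearranged for V2.

def final_volume(p1, v1, t1_abs, p2, t2_abs):
    return p1 * v1 * t2_abs / (t1_abs * p2)

# Hypothetical example: 2.0 m^3 of gas at 100 kPa and 293.15 K (20 C),
# compressed to 250 kPa while warming to 313.15 K (40 C).
print(final_volume(100.0, 2.0, 293.15, 250.0, 313.15))   # ~0.85 m^3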

Bourdon Tube Pressure Gage


The term pressure gage tends to be ambiguous in that many pressure measuring instruments are called gages. The majority of pressure gages in use have a Bourdon tube as a measuring element. The Bourdon tube is a device that senses pressure and converts the pressure to displacement. Since Bourdon tube displacement is a function of the pressure applied, it may be mechanically amplified and indicated by a pointer. Thus, the pointer position indirectly indicates pressure.

Bourdon Tube Pressure Gage


The basic Bourdon tube is a hollow tube with an oval cross section and shaped in a circular arc. An example of a simple Bourdon gage is shown in the diagram, with a portion of the tube drawn in cross-section. The size, shape, and material of a Bourdon tube depend on the pressure range and the type of gage desired. Low pressure Bourdon tubes (pressures up to 2,000 psi) are often made of phosphor bronze. High pressure Bourdon tubes (pressures up to 30,000 psi) are made of stainless steel or other high strength materials. High pressure Bourdon tubes tend to have more circular cross sections than their lower range, more oval counterparts. As a pressure is applied to the rigid end of the Bourdon tube, its circular shape and oval cross section force it to straighten, or reduce its curvature, causing a displacement. This displacement, although not linear with respect to pressure, is a function of the pressure applied.

Bourdon Tube Pressure Gage


The diagram shows how the tip (closed end) of a Bourdon tube is connected to the pointer through a linkage arm and sector assembly and pinion gear. The link pushes on a lever called the sector, causing it to rotate about its pivot. The sector contains a small sector or arc-segment of a larger gear that turns the pinion gear. The sector is also called a rack, although the rack is really just the portion that contacts the pinion. The proper design of arm lengths and gear ratios allows the pointer to rotate in linear proportion to the pressure applied, even though the displacement at the end of the Bourdon tube is nonlinear.

A small, light spring, not shown in this diagram, is attached to the drive gear at the end of the rack and pinion assembly and is provided to minimize backlash. It maintains positive contact between the mating gears so that smooth and continuous operation is obtained in either direction.

Dead Weight Piston Gage


If a device is to be considered as a primary standard, it must refer directly to absolute standards of mass, length, or time. The mercury barometer/manometer is considered a primary standard in that it meets these requirements whereas the best Bourdon tube can only be classified as a secondary standard in that it does not reference the above mentioned parameters directly. Another primary standard that lends itself well to the measurement of high pressures is called a dead weight piston gage, or a dead weight tester. It can be certified as a primary standard because both the mass of its weights and the areas of its pistons are traceable to absolute standards of mass and length. The piston portion of a complete tester is shown at the right. It is shown with a stack of masses (weights) on top of the piston assembly. The weights and the weight of the piston itself exert a downward force on the fluid below the piston. If the fluid is pressurized just enough to counteract the force of the weight, the piston rises and floats in its cylinder. If you know the precise weight applied as force, and also the precise area of the end of the piston supported by the fluid, you can calculate the pressure it takes to raise the piston to a fixed equilibrium height in its cylinder. This pressure would be a calibrated pressure referenced to mass and length standard. You can apply this pressure in parallel to a device under test.

Dead Weight Piston Gage


In the diagram, the area of the end of the piston exposed to the pressurizing medium is 0.05 square inches. If there were nine 10-pound weights and the piston itself weighed 10 pounds, the weight and area of the piston would define and produce the minimum pressure: P = F/A = 10 lb. / 0.05 in.² = 200 psi. As each weight was added, it would generate an additional 200 psi, until the pressure would be 2,000 psi with nine weights plus the piston. Although simplified, this is the principle that is applied to dead weight piston gages.
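The arithmetic above can be written out as a short sketch, using the same figures as the diagram (a 0.05 in² piston, a 10 lb piston weight, and 10 lb masses):

# Sketch: pressure generated by a dead weight tester, P = F / A.

PISTON_AREA_IN2 = 0.05
PISTON_WEIGHT_LB = 10.0

def generated_pressure_psi(num_weights, weight_lb=10.0):
    total_force_lb = PISTON_WEIGHT_LB + num_weights * weight_lb
    return total_force_lb / PISTON_AREA_IN2

print(generated_pressure_psi(0))   # 200.0 psi with the piston alone
print(generated_pressure_psi(9))   # 2000.0 psi with nine 10 lb weights plus the piston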

Since each weight operates with a fixed piston area, the weights can be marked to indicate the nominal pressure in psi that they will generate if placed on the tester. The weights would be marked as 200 psi each rather than 10 pounds each. Actually, on commercial units, smaller weights are provided to generate incremental pressures. The piston weight is also smaller than shown to provide a lower minimum pressure for the tester. The weights are serialized because in precision work neither the nominal area of the piston nor the nominal pressure, as indicated on the weights, is sufficient. If you need to use such a unit to generate pressures with an uncertainty of +/- 0.1 percent of indicated value or better, you must know the actual mass of each weight used and the piston area to a minimum of five significant figures to achieve the required 4-to-1 Test Uncertainty Ratio (TUR).

Dead weight piston gages, almost without exception, use air, oil, or water as the pressure medium. Since friction cannot be tolerated, there must be some clearance between the piston and cylinder. This clearance allows some of the fluid to leak by the piston. This is not altogether undesirable, as the fluid provides lubrication between the piston and cylinder. If the leakage rate of the fluid by the piston is not closely controlled, the piston may tend to fall too rapidly. Therefore, equilibrium cannot be assured. To ensure that the fluid pressure is accurate, as indicated by the weights and piston area, the piston must be freely floating in equilibrium. It will not indicate properly if it is hung up on some speck of contamination in the fluid or a tiny scratch on the piston or cylinder. The best way to ensure equilibrium is reached and maintained is to spin the piston in the cylinder (with some testers you actually spin the cylinder instead!). Rotating the piston with respect to the cylinder is accomplished either manually or mechanically and accomplishes two goals: (1) it decreases friction and (2) it decreases the rate of fluid leak, allowing you to better achieve and maintain the desired equilibrium float height. A rotation rate that proves adequate for most testers is five revolutions per minute.

SI Base Unit
The SI base unit for temperature is the Kelvin.

Traceability
Traceability for the temperature parameter is defined in the following way:

Measuring temperature is relatively easy because many solids and liquids expand or contract with changes in temperature. These changes enable the building of simple devices for measuring temperature and temperature changes.

Instrument Types
Here are three types of instruments for measuring temperature:
1. Liquid-in-glass thermometer
2. Metal stem thermometer
3. Electronic thermometer

Glass Thermometers

Glass thermometers probably look familiar. These measure temperature in Fahrenheit and Celsius degrees, and have a resolution of 0.1 degrees.

Glass Thermometers

A glass thermometer's temperature reading depends on the immersion of the thermometer in the liquid medium. A total immersion thermometer requires that the thermometer is immersed to within a few millimeters of the column meniscus when the reading is taken. A partial immersion thermometer requires immersion to a designated mark on the thermometer stem for a correct temperature reading. A complete immersion thermometer must be immersed completely in the liquid medium for a correct temperature reading.


Glass Thermometers

When a thermometer is used at an immersion depth other than that specified, a stem correction is applied using the following formula:

t = kn(t1 - t2)

Where:
t = Stem correction
k = Differential expansion coefficient between mercury and glass; k = 0.00016/°C for mercury-in-glass thermometers and 0.001/°C for alcohol-in-glass thermometers
n = Number of thermometer scale degrees the mercury/alcohol column is out of the bath
t1 = Temperature of the thermometer bulb
t2 = Average temperature of the portion of the stem containing mercury which is out of the bath
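A minimal sketch of the stem correction, using the mercury-in-glass coefficient given above and hypothetical bath readings:

# Sketch: emergent-stem correction t = k * n * (t1 - t2).

def stem_correction_c(k_per_c, n_scale_degrees, t_bulb_c, t_stem_avg_c):
    return k_per_c * n_scale_degrees * (t_bulb_c - t_stem_avg_c)

# Hypothetical case: 40 scale degrees of the mercury column emerge from the
# bath, the bulb reads 100 C, and the emergent stem averages 25 C.
print(stem_correction_c(0.00016, 40, 100.0, 25.0))   # 0.48 C correction to apply to the reading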


Metallic Stem Thermometers

This thermometer also measures Fahrenheit and Celsius degrees, but its resolution is only to the degree.

Temperature

Many electrical properties of matter are temperature-sensitive. Therefore, special probes can be fitted to meters to provide temperature readings. For example, electronic thermometers can be fitted with platinum, thermocouple, or thermistor probes.

Thermistor Probes
Thermistor probes can measure temperature changes because their resistance varies as the temperature changes.

Thermocouple Probes

Thermocouple probes can measure changes in temperature because their voltage varies with a change in temperature.

It is interesting to note that working standards for temperature are almost always used with baths or chambers whose temperatures are regulated. The measurand is compared with measurements made using higher-quality temperature measurement instruments. The highest quality thermometers are calibrated directly against the physical properties of matter, such as the freezing temperatures of various pure substances. A special fixed point of water, known as the triple point and assigned the value 0.01 degrees Celsius, is used in defining the temperature scale.

Platinum Probes
Platinum resistance thermometers (PRTs) offer excellent accuracy over a temperature range from -200 °C to 850 °C. PRTs are available from many manufacturers with varying accuracy specifications and packaging options to suit different applications. PRTs can be connected to digital units to display the temperature in either Fahrenheit or Celsius degrees. Unlike thermocouples, PRTs can be connected to a display without special cables or connection junctions. These units can measure temperatures to the nearest 0.01 degree.

Platinum Probes
The principle of operation is to measure the resistance of a platinum element. The most common type (PT100) has a resistance of 100 ohms at 0 °C and about 138.5 ohms at 100 °C. There are also PT25 and PT1000 sensors, with resistances of 25 ohms and 1000 ohms respectively at 0 °C. The relationship between temperature and resistance is approximately linear over a small temperature range: for example, if a linear relationship is assumed over the 0 to 100 °C range, the error at 50 °C is about 0.4 °C. For precision measurement, it is necessary to linearize the resistance to give an accurate temperature. The relationship between resistance and temperature is defined in the International Temperature Scale (ITS-90). ITS-90 (International Temperature Scale of 1990) is made up of a number of fixed reference points with various interpolating devices used to define the scale between points. A special set of PRTs, called SPRTs, is used to perform the interpolation in standards labs and National Measurement Institute (NMI) labs over the range from 13.8033 K (the triple point of equilibrium hydrogen) to the freezing point of silver, 961.78 °C.
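To make the linearization point concrete, here is a sketch that converts a PT100 resistance to temperature using the Callendar-Van Dusen form R(t) = R0(1 + A*t + B*t^2) for temperatures at or above 0 °C. The coefficient values are the standard IEC 60751 ones and are an assumption of this example; they are not quoted in the text above.

import math

R0 = 100.0        # ohms at 0 C for a PT100
A = 3.9083e-3     # 1/C   (assumed standard IEC 60751 value)
B = -5.775e-7     # 1/C^2 (assumed standard IEC 60751 value)

def pt100_temperature(resistance_ohms):
    """Temperature in C for a PT100, valid at or above 0 C."""
    c = 1.0 - resistance_ohms / R0
    # Solve B*t^2 + A*t + c = 0 and take the physically meaningful root.
    return (-A + math.sqrt(A * A - 4.0 * B * c)) / (2.0 * B)

print(round(pt100_temperature(138.5), 2))   # close to 100 C
print(round(pt100_temperature(119.4), 2))   # about 50 C; a straight-line
                                            # interpolation would be ~0.4 C off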

Triple points of various materials (3-phase equilibria between solid, liquid and vapor phases) are independent of ambient pressure. The triple point of water is the temperature to which the resistance ratios (W = R(t2)/R(t1)) given in Standard Platinum Resistance Thermometer calibrations are referred. In the ITS-90, t1 is 0.01 °C. The triple point of water is the most important defining thermometric fixed point used in the calibration of thermometers to the International Temperature Scale of 1990 (ITS-90), for practical and theoretical reasons. The triple point of water is the sole realizable defining fixed point common to the Kelvin Thermodynamic Temperature Scale (KTTS) and the ITS-90. Its assigned value on these scales is 273.16 K (0.01 °C). It is one of the most accurately realizable of the defining fixed points. Properly used, the triple point of water temperature can be realized to within approximately 0.00015 °C. The triple point of water provides a useful check point in verifying the condition of thermometers. It is realized using a triple point water cell such as the one shown below:

The Triple-Point-of-Water (TPW) Cell consists of a cylinder of borosilicate glass with a reentrant tube serving as a thermometer well, filled with high-purity, gas-free water, and sealed. When an ice mantle is frozen around the well, and a thin layer of this ice mantle is melted next to the well, the triple point of water temperature can be measured in the well. The three states of water in equilibrium can only occur at the assigned value on the International Temperature Scales of 0.01 degrees C (273.16 K).

What type of instrument should be used to measure temperature if an accuracy of 0.01 degrees Celsius is needed? A. Electronic temperature meter B. Metal stem thermometer C. Liquid-in-glass thermometer D. All of the above

Traceability
Traceability for the Humidity parameter is defined as:

Two important facts about moisture in the atmosphere are basic to an understanding of humidity:
1. Air at a particular temperature can hold only so much water. Air holding the maximum possible amount of water is said to be saturated.
2. Warmer air can hold a lot more water than cooler air. The rule of thumb is that raising the air temperature 18 °F (10 °C) doubles its moisture capacity. This means that air at 86 °F (30 °C) can hold eight times as much water as air at 32 °F (0 °C).

U.S. weather reports traditionally include the relative humidity, telling us how much water there is in the air as a percentage of the maximum possible amount. If a temperature of 96 °F and a relative humidity of 46% is reported, it means that the air contains 46% of the moisture it could possibly hold at that temperature. The equation for humidity is:

%RH = (Pv / Ps) x 100
Where: Pv = pressure of the water vapor; Ps = saturation pressure; %RH = percent relative humidity

Dewpoint
If we cool air without changing its moisture content, we will eventually reach a temperature at which the air can no longer hold the moisture it contains. Then water will have to condense out of the air, forming dew or fog. The dewpoint is this critical temperature at which condensation occurs. The dewpoint is a measure of how moist the air mass itself is. That is why the Weather Channel's new dewpoint maps show us the movement of moist air masses across the country. Ordinarily the dewpoint does not vary much during a 24-hour period. Unlike temperature and unlike relative humidity, the dewpoint is usually the same at night as it is during the daytime.

Dewpoint
Dewpoint equations are:

%RH = 100 x Ps(tdew) / Ps(ta)
Where: Ps(ta) = the saturation pressure of the gas from a reference table at temperature ta; Ps(tdew) = the saturation pressure of the gas from a reference table at temperature tdew
and:

D = ta - tw
Where: D = wet bulb depression; ta = dry bulb temperature; tw = wet bulb temperature
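The first relation above can be evaluated directly once the two saturation pressures are known. The sketch below uses a Magnus-type approximation in place of a reference table; the specific coefficients are a commonly published set and are an assumption of this example, not values taken from the text.

import math

def saturation_pressure_hpa(t_celsius):
    """Approximate saturation vapour pressure of water, in hPa (Magnus form)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def relative_humidity(t_dry, t_dew):
    """%RH = 100 * Ps(t_dew) / Ps(t_dry), per the relation above."""
    return 100.0 * saturation_pressure_hpa(t_dew) / saturation_pressure_hpa(t_dry)

# Example: dry-bulb 30 C with a dewpoint of 20 C is roughly 55 %RH.
print(round(relative_humidity(30.0, 20.0), 1))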

Two-Pressure Humidity Calibration


The most accurate and reliable method of continuous humidity generation for the ~5% to 98% relative humidity (RH) range is based on a two-pressure principle originally developed by the National Institute of Standards and Technology (NIST). The most accurate on-site calibration and verification systems are mobile and self-contained, and incorporate a humidity generator that can simulate a wide range of temperature/humidity values accurately and consistently with very small measurement uncertainties.

The Principle of a Two-Pressure RH Generator


A two-pressure RH generator uses compressed air of up to 175 psia (1207 kPa), provided by either a portable oil-free air compressor or another source and directed to a receiver. Two-pressure humidity generation entails saturating air or nitrogen with water vapor at a known temperature and pressure. The saturated high-pressure air flows from the saturator and through a pressure-reducing valve, where the gas pressure is isothermally reduced.

The Principle of a Two-Pressure RH Generator


After passing through dual regulators that produce a regulated pressure of ~150 psia (1034 kPa), the gas travels through a flowmeter and into a flow control valve. Although humidity is not affected by flow rate, the valve is set to maintain 220 slpm through the system.


The Principle of a Two-Pressure RH Generator


The gas then flows to a presaturator, a vertical cylinder partially filled with water maintained at approximately 10 °C to 20 °C above the desired final saturation temperature. Gas entering the presaturator flows through a coil of tubing immersed in the water, a configuration that forms a heat exchanger. As it passes through the tubing, the gas is warmed to or near the presaturator temperature. Gas exiting the tubing is deflected downward onto the water surface in a manner that causes circular airflow within the presaturator. The gas continues to warm to the presaturator temperature and becomes saturated with water vapor to nearly 100% RH.

Next, the gas flows to the saturator, a fluid-encapsulated heat exchanger maintained at the desired final saturation temperature. As the nearly 100% RH gas travels through the saturator it begins to cool, forcing it to the dew point, or 100% saturation condition. The gas continues to cool to the desired saturation temperature, causing moisture in excess of saturation to condense out. This step ensures 100% humidity at the saturation temperature. The saturation pressure, PS, and the saturation temperature, TS, of the air are measured at the point of final saturation before the air stream exits the saturator.

The gas then enters the expansion valve, which causes its pressure to fall to the test chamber pressure, PC. Because adiabatically expanding gas naturally cools, the valve is heated to keep the gas above the dew point. If the gas or the valve were allowed to cool to or below the dew point, condensation could occur at the valve and alter the humidity content of the gas. The cooling effect of expansion, while mostly counteracted by the heated valve, is fully compensated by flowing the gas through a small post-expansion heat exchanger. This allows it to reestablish thermal equilibrium with the fluid surrounding the chamber and saturator before it enters the test chamber. The final pressure, PC, and temperature, TC, of the gas are measured in the test chamber. The chamber exhausts to atmosphere, so PC is very near ambient pressure.
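In the ideal-gas picture, the humidity delivered by a two-pressure generator follows directly from the measured pressures: when the saturation and chamber temperatures are equal, %RH is 100 times the ratio of chamber pressure to saturation pressure. The sketch below shows that isothermal case only; a real generator also applies enhancement-factor and temperature corrections, which are omitted here, and the example pressures are illustrative.

def two_pressure_rh(saturation_pressure_psia, chamber_pressure_psia):
    """Generated humidity for the isothermal case: %RH = 100 * Pc / Ps."""
    return 100.0 * chamber_pressure_psia / saturation_pressure_psia

# Saturate at 150 psia and expand to a ~14.7 psia chamber -> about 10 %RH;
# saturating at about 29.4 psia instead gives roughly 50 %RH.
print(round(two_pressure_rh(150.0, 14.7), 1))
print(round(two_pressure_rh(29.4, 14.7), 1))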

A computer/controller embedded in the system controls the entire humidity generation process: temperatures, pressures, and system flow rate. It also handles keypad input, parameter measurements and calculations, data display, and external I/Os to link to peripherals such as additional computers or printers. Various temperature and humidity measurement devices can be calibrated using the two-pressure RH generator to achieve Test Uncertainty Ratios of 4:1 or better.

Humidity Salts--Saturated Salt Solutions


A closed box partly filled with a saturated salt solution generates a known relative humidity in the free space above the salt solution with good accuracy. The value of the relative humidity depends on the type of salt used. It is mainly independent of temperature, but strongly dependent on temperature uniformity. For an accuracy of 2 %RH, temperature uniformity better than 0.5 °C is necessary.

Non-Saturated Salt Solutions


Instead of saturated salts, unsaturated lithium chloride solutions can be used. The obtained values of the relative humidity depend on the salt concentration.

Saturated Salts--Advantages and Disadvantages


Key advantages of using saturated salts:

Easy and reliable calibration of humidity probes and transmitters, based on saturated salt solutions.
Fast temperature equilibration.
No external power required.
Suitable for laboratory use and on-site checks.
Salts with certified values are available.

Key disadvantages of using saturated salts:


The salt bath can take several hours to stabilize in a chamber.
The salts themselves are highly corrosive in nature.
The salts can also be toxic, and there is always a risk of cross-contamination.
The salt chamber can also need high maintenance.
The accuracy of a salt bath is approximately +/- 2%.

With the requirement for better measurement uncertainty, the use of salts for Relative Humidity calibration is slowly disappearing as two pressure relative humidity generators become more economical for commercial calibrations.

Saturated Salts
Relative humidity (+/- 3%) of saturated salt solutions at a fixed temperature (25 °C):

SI Base Unit
The SI base unit of mass is the kilogram. This is the only base unit still defined in terms of a physical artifact. A kilogram equals the mass of a cylinder of platinum-iridium alloy that is kept at the International Bureau of Weights and Measures in Paris, France.

Traceability
Traceability for the mass parameter is defined in the following way:

Some instruments for measuring mass are as follows:


Spring scale
Mechanical balance
Electronic balance
Analytical balance

Let's look at each a bit closer.

Spring Scales
Spring scales work by measuring the effect of a measurand's weight on an expansion spring. They are used in industry to handle many different types of loads. You can also find them in grocery stores to measure the weight of produce. Fishermen use them to measure the weight of their catch. A typical small spring scale can handle a weight of about 10 kilograms with a resolution of 0.1 kilograms.

Mechanical Balance
There are several types of mechanical balances. We will look briefly at a common type: the double-platform balance. With a double-platform balance, you put the sample on one pan and a set of known counterweights on the other, until the two pans balance. This type of balance is effective for comparing the masses of two samples, or for comparing a sample against a set of weights of known value.

Electronic Balance

Electronic balances use electromagnets instead of counterweights to balance the sample. A typical electronic balance can handle quantities up to 500 grams, and has a resolution of 0.01 gram.

Analytical Balance

If you need high resolution when measuring the mass of small objects, you can use an analytical balance. This type of balance has enclosed counterweights and a glass chamber to hold the sample, to prevent errors from stray air currents. A typical analytical balance has a resolution of 0.0001 grams (0.1 milligrams). Calibration of balances is almost always done using known weights as working standards. For extremely precise weighing applications, the counterweights should have the same volume as the sample and be made of material with the same density as the sample. Otherwise, a correction may need to be made. Some electronic balances may also need corrections for temperature.

What measurement instrument is most appropriate for measuring the mass of a single sample when the accuracy must be within 0.1 mg? A. Double platform balance with internal counterweights B. Spring scale C. Analytical balance D. Hanging-pan balance with external counterweights

What measurement instrument is most appropriate for measuring the mass of a 100 pound weight to within a few ounces? A. Double platform balance capable of handling 500 grams B. Spring scale C. Analytical balance D. Hanging-pan balance capable of handling 4000 grams

Scale Verification
There are many different techniques for the verification of weights and scales. Three are:
1. Shift Test
o Place a traceable weight corresponding to 40-50% of the test instrument's scale on the center of the weighing pan and tare.
o Reposition the same standard weight to the front, back, left and right sides of the weighing surface. The scale must indicate within the manufacturer's specifications.
2. Linearity Test
o Manually depress the weighing mechanism several times until a stable zero is observed.
o Place the traceable weights recommended by the manufacturer, or weights corresponding to 3-10%, 20-30%, 50-60% and 90-100% of scale, on the weighing surface.
o The scale must indicate within the manufacturer's specifications for each applied value.
3. Repeatability (a short sketch of this check follows the note below)
o Place a traceable weight and repeat each reading from the linearity test five times, recording these indications.
o Average the five recorded indications and ensure that all indications fall within the manufacturer's specifications of the recorded average.

Note: Always place standard weights at the center of the weighing pan, unless otherwise directed.
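A minimal sketch of the repeatability check described above: average five repeated indications of the same weight and confirm that each indication stays within the manufacturer's specification of that average. The readings and tolerance below are made-up example values.

def repeatability_ok(readings, tolerance):
    """True if every reading lies within +/- tolerance of the readings' average."""
    average = sum(readings) / len(readings)
    return all(abs(r - average) <= tolerance for r in readings)

readings_g = [100.02, 100.01, 100.03, 100.02, 100.02]   # five indications, grams
print(repeatability_ok(readings_g, tolerance=0.05))      # True for this data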

General Requirements for Weight Verification


Environment

Temperature: No excessive fluctuations
Humidity: N/A
Air quality: N/A

Stabilization

Stabilize equipment and standards at ambient temperature. Allow a one-hour warm-up time.

Preliminary Operations

Scale pan must be free of dust and foreign material. Place Scale on a flat, firm surface, free of air currents, and in an area with no large temperature fluctuations.

Level the scale pan and ensure a level condition is maintained through the entire calibration. Zero the scale, then manually depress the weighing pan several times to check the zero stability.

Standards and Calibrating Equipment


The standards and equipment used must have a valid, traceable calibration certificate.

Comparison Method
This method involves the comparison of two weights, an unknown and a standard, by placing each one in turn on the balance pan and noting the reading. The process is symmetrical in that the weight that is placed on the pan first is also on the pan for the final reading. It is possible to change the number of measurements of each weight according to the application, but three measurements of the unknown weight and two of the standard weight are common, taken in the sequence: X1, S1, X2, S2, X3. Where: Xi is the unknown mass; Si is the standard mass; Bi represents the corresponding balance reading.

Comparison Method
The difference between the two weights is calculated as shown below:

Where m = the difference between the weights. Maximum allowable difference should be less than 0.05% of the scale capacity.
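The X1-S1-X2-S2-X3 sequence above can be reduced to a single difference by averaging the readings of each weight. The worksheet equation is not reproduced in the text, so the averaging formula used here (mean of the unknown readings minus mean of the standard readings) is an assumption, though it is the conventional reduction for this design; the readings are made-up examples.

def comparison_difference(x_readings, s_readings):
    """Mean of unknown-weight readings minus mean of standard-weight readings."""
    return sum(x_readings) / len(x_readings) - sum(s_readings) / len(s_readings)

x = [200.012, 200.013, 200.011]   # X1, X2, X3 readings, grams
s = [200.002, 200.003]            # S1, S2 readings, grams
m = comparison_difference(x, s)
print(round(m, 4))                # 0.0095 g

# Acceptance check against 0.05 % of a 400 g scale capacity.
print(abs(m) < 0.0005 * 400.0)    # True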

Comparison Method
Steps: 1. Turn on the scale and allow for thermal equilibrium. All weights used for calibration and verification must also be at thermal equilibrium. 2. Zero the scale. Calibrate the scale per its defined procedure using the traceable weight. 3. Place the unknown weight on the scale pan and record the reading. 4. Place the standard weight on the scale pan and record the reading. 5. Repeat steps 3 and 4 as required, according to the number of measurements decided. 6. A spreadsheet may be used to calculate the actual weight from the recorded data, provided that the spreadsheet is validated.

Weight Single-Substitution Method

Sensitivity weight selection for the full electronic scale: four times the difference between X and S, but not to exceed 1% of balance capacity or 2 times the allowable tolerance. Note: If no tare weights are used, the entries may be set to zero (Sartorius balances have a tare feature). If equal nominal values are used, the entries may be set to zero. No air buoyancy correction is applied. Terms:

Weight Single-Substitution Method


Calculation of the correction Cx:

Weight Single-Substitution Method


Steps: 1. Turn on the scale and allow for thermal equilibrium. All weights used for calibration and verification must also be at thermal equilibrium. 2. Zero the scale. Calibrate the scale per its defined procedure using the traceable weight. 3. Place the standard weight (plus tare if required) on the scale pan and record the reading. 4. Place the unknown weight (plus tare if required) on the scale pan and record the reading. 5. Place the unknown weight (plus tare if required) and a sensitivity weight on the scale pan and record the reading.
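A sketch of how the readings collected in the steps above can be reduced to a value for the unknown. The worksheet equation for Cx is not reproduced in the text, so the formula below is the common single-substitution form (standard value plus the observed difference, scaled by the sensitivity determined from the sensitivity weight) and should be treated as an assumption; all numbers are hypothetical.

def single_substitution(ms, msw, o1, o2, o3):
    """Mass of the unknown from single substitution.
    ms:  calibrated mass of the standard
    msw: mass of the sensitivity weight
    o1:  reading with the standard (plus tare)
    o2:  reading with the unknown (plus tare)
    o3:  reading with the unknown plus the sensitivity weight"""
    sensitivity = msw / (o3 - o2)            # mass per scale division
    return ms + (o2 - o1) * sensitivity

# Hypothetical readings in grams: the unknown comes out near 100.0005 g, so
# its correction Cx relative to a 100 g nominal value is about +0.5 mg.
print(round(single_substitution(100.0000, 0.0200, 100.0002, 100.0007, 100.0206), 4))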

Weight Single-Substitution Method


A spreadsheet may be used to calculate the actual weight from the recorded data, provided that it is validated. Data sheet to log observations:

Acoustics
Acoustics is the science concerned with the production, control, transmission, reception, and effects of sound. Its origins lie in the study of mechanical vibrations and the radiation of these vibrations through mechanical waves. The most common sensor used for acoustic measurement is the microphone. Measurement-grade microphones are different from typical recording-studio microphones because they are supplied with a detailed calibration of their response and sensitivity. These microphones are intended for use as standard measuring microphones for the testing of speakers and checking noise levels. They are calibrated transducers and are usually supplied with a calibration certificate stating absolute sensitivity against frequency. Measurement and analysis of sound is a powerful diagnostic tool in noise reduction programs in any noisy environment such as factories and airports.

Sound
Sound can be defined as any pressure change (in air, water or other medium) that the human ear can detect. The most familiar instrument for measuring pressure changes in air is the barometer. However, the pressure changes that occur with changing weather are much too slow for the human ear to detect and therefore do not meet the definition of sound. If variations in atmospheric pressure occur at a faster rate (at least 20 times a second), they can be heard and can be defined as sound.

Frequency and Tone


The number of pressure changes per second is called the frequency of the sound, and is measured in Hertz (Hz). The frequency of a sound gives it its distinctive tone. The rumble of distant thunder has a low frequency and a whistle has a high frequency.

Audible Frequencies

The typical range of hearing for a healthy young person extends from approximately 20 Hz up to 20,000 Hz, while the range from the lowest to highest note of a piano is 27.5 Hz to 4186 Hz. The pressure changes travel through any elastic medium (such as air) from the source of the sound to the listener's ears. If one sees a lightning strike at a known distance (the light arrives essentially instantaneously) and then hears the thunder three to five seconds later, one can approximately determine the speed of sound.

Speed of Sound
The speed of sound is approximately 345 meters per second at ambient conditions. The speed of sound at room temperature is approximately: A. 443 meters per second B. 344 km per second C. 344 meters per second D. 344 meters per second2

Wavelength
If the speed of the sound and its frequency are known, one can determine its wavelength: wavelength = speed of sound / frequency (λ = v / f).

From the equation, note that high frequency sounds have short wavelengths and low frequency sounds have long wavelengths. The wavelength (λ) is the first quantity used to describe sound. Sound which has only one frequency is known as a pure tone. Pure tones are rarely encountered and most sounds are made up of different frequencies. A single note on a piano has a complex waveform. Industrial noise consists of a wide mixture of frequencies known as broad band noise. If the noise has frequencies evenly distributed throughout the audible range it is known as white noise, and it sounds like rushing water.

Check Your Understanding

A sound that has only one frequency is known as: A. Mono tone B. Pure tone C. Uni tone D. Quiet tone

Amplitude
The second quantity used to describe sound is the size or amplitude of the pressure fluctuations. The weakest sound that a human ear can detect is about 20 micropascals (µPa). This is about 1/5,000,000,000th of normal atmospheric pressure. This is the intensity level where a sound becomes just audible. For a continuous tone of between 2000 and 4000 Hertz, heard by a person with good hearing acuity under laboratory conditions, this is 0.0002 dyne/cm2 sound pressure and is given the reference level of 0 dB. The loudest sound the human ear can tolerate is approximately 20 pascals, which is a million times more than the weakest sound. Thus the scale for sound measurement would be very large. To manage the large scale, a logarithmic scale expressed in ratios is used.

Decibel
The ratio between a measured quantity of sound and an agreed-upon reference level is called the decibel, or dB. The decibel is not an absolute unit of measurement. It is a ratio between a measured quantity and an agreed reference level. The dB scale is logarithmic and uses the human hearing threshold of 20 µPa as the reference level. This is defined as 0 dB. Alternate units for this reference level are: 2 x 10^-4 microbar (µbar); 2 x 10^-5 Newton/m2 (N/m2); 2 x 10^-5 Pascal (Pa). When the sound pressure in µPa is multiplied by 10, 20 dB is added to the dB level. So 200 µPa corresponds to 20 dB (re 20 µPa), 2000 µPa to 40 dB and so on. Thus, the dB scale compresses a range of a million into a range of only 120 dB. For intensity ratios, Sound Level in dB = 10 log10(I1 / I0). The intensity ratios 1, 10, 100, 1000 give dB values 0 dB, 10 dB, 20 dB and 30 dB respectively. This implies that an increase of 10 dB corresponds to a ratio increase by a factor of 10.

This can easily be shown: given a ratio R, we have R[dB] = 10 log R. Increasing the ratio by a factor of 10 we have: 10 log (10*R) = 10 log 10 + 10 log R = 10 dB + R dB. Another important dB value is 3 dB. This comes from the fact that an increase by a factor of 2 gives an increase of 10 log 2, approximately 3 dB, while a "decrease" by a factor of 1/2 gives an "increase" of 10 log 1/2, approximately -3 dB. The advantage of the decibel scale is that it gives a much better approximation to the human perception of relative loudness than the Pascal scale. This is because the ear reacts to a logarithmic change in level, which corresponds to the decibel scale where 1 dB is the same relative change everywhere on the scale.
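The two rules quoted above (20 µPa as the 0 dB pressure reference, and 10*log10 for intensity ratios) can be checked numerically with a short sketch; the example pressures are the threshold values mentioned in the text.

import math

P_REF = 20e-6    # reference sound pressure, 20 micropascals

def sound_pressure_level_db(pressure_pa):
    """Sound pressure level re 20 uPa (20*log10, since power ~ pressure squared)."""
    return 20.0 * math.log10(pressure_pa / P_REF)

def level_difference_db(intensity_ratio):
    """Level difference in dB for a given intensity ratio, 10*log10(I1/I0)."""
    return 10.0 * math.log10(intensity_ratio)

print(round(sound_pressure_level_db(20e-6)))   # 0 dB: threshold of hearing
print(round(sound_pressure_level_db(20.0)))    # 120 dB: about the loudest tolerable
print(round(level_difference_db(100)))         # 20 dB for a 100x intensity ratio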

Check Your Understanding


The reference level used for sound measurement in dB is: A. 20 Pascal B. 20 µPa C. 20 nPa D. 20 kPa
What is a 20,000 µPa sound level expressed in dB? A. 40 dB B. 15 dB C. 20 dB D. 30 dB
The buzz of a mosquito produces an intensity having a 40-dB rating. How many times more intense is the sound of normal conversation if it has an intensity rating of 60-dB?

A. 20 B. 100 C. 200 D. 400

Bels
Compared to the buzz of a mosquito, normal conversation is 20 dB more intense. This 20 dB difference corresponds to a 2-Bel difference. This difference is equivalent to a sound which is 10^2 (100 times) more intense. Always raise 10 to a power which is equivalent to the difference in "Bels."

An automatic focus camera is able to focus on objects by use of an ultrasonic sound wave. The camera sends out sound waves which reflect off distant objects and return to the camera. A sensor detects the time it takes for the waves to return and then determines the distance an object is from the camera. If a sound wave (speed = 340 m/s) returns to the camera 0.150 seconds after leaving the camera, how far away is the object? A. 50 m B. 25.5 m C. 30 m D. 20 m

The illustration shows sound level values for various commonly occurring sound sources.

A sound level meter is an instrument designed to respond to sound in approximately the same way as the human ear and to give objective, reproducible measurements of sound pressure level. There are many different sound measuring systems available. Although different in detail, each system consists of a microphone, a processing section and a read-out unit.

Measurement of Sound
The most suitable type of microphone for sound level meters is the condenser microphone, which combines precision with stability and reliability. The electrical signal produced by the microphone is relatively small and is amplified by a preamplifier before being processed. Different types of processing may be performed on the signal. The signal may pass through a weighting network, an electronic circuit whose sensitivity varies with frequency in the same way as the human ear, simulating loudness profiles. Three standardized weightings, known as "A", "B" and "C", have been internationally characterized. The "A" weighting network is the most widely used, since the "B" and "C" weightings do not correlate well with subjective tests.

Measurement of Sound
Frequency analysis is performed and the results are presented on a spectrogram. After the signal has been weighted and/or divided into frequency bands the resultant signal is amplified, and the Root Mean Square (RMS) value determined in an RMS detector. RMS value is important in sound measurement because the RMS value is directly related to the amount of energy in the sound being measured. The read-out displays the sound level in dB, or some other derived unit such as dB(A) (which means that the measured sound level has been A-weighted). The signal is also available at output sockets (in AC or DC form), for connection to external instruments such as level or tape recorders to provide a record and for further processing.

Calibration of Sound Level Meter

It is not practical to maintain a standard source which generates a Pascal of sound pressure; therefore all acoustical measurements are traceable through accurately calibrated microphones. Sound level meters should be calibrated in order to provide precise and accurate results. This is best done by placing a portable acoustic calibrator, such as a sound level calibrator or a pistonphone, directly over the microphone. These calibrators provide a precisely defined sound pressure level to which the sound level meter can be adjusted. It is good measurement practice to calibrate sound level meters immediately before and after each measurement session. If recordings are to be made of noise measurements, then the calibration signal should also be recorded to provide a reference level on playback.

Calibration of Sound Level Meter


The three main groups of microphones are pressure, free-field, and random-incidence, each with their own correction factors for different applications. A laboratory standard microphone is a condenser microphone capable of being calibrated to a very high accuracy by a primary method such as the closed coupler reciprocity method. Consequently it is required to have an exposed diaphragm surrounded by a ring which allows it to be mated to a small coupler without disturbing the diaphragm. Other sound sensors include hydrophones for measuring sound in water or accelerometers for measuring vibrations causing sound.

Acoustics

Accelerometer
An accelerometer is a device for measuring acceleration. An accelerometer inherently measures its own motion (locomotion), in contrast to a device based on remote sensing. An accelerometer is an instrument for measuring acceleration, detecting and measuring vibrations, or for measuring acceleration due to gravity (inclination). Accelerometers can be used to measure vibration on cars, machines, buildings, process control systems and safety installations. They can also be used to measure seismic activity, inclination, machine vibration, dynamic distance and speed with or without the influence of gravity. Accelerometers are used along with gyroscopes in inertial guidance systems, as well as in many other scientific and engineering systems. One of the most common uses for micro electromechanical system (MEMS) accelerometers is in airbag deployment systems for modern automobiles. In this case the accelerometers are used to detect the rapid negative acceleration of the vehicle to determine when a collision has occurred and the severity of the collision.

Standards
IEC 651 is a standard established by the International Electrotechnical Commission which defines the specifications for various grades of sound level meters. In the United States and some other countries, the American National Standard ANSI S1.4-1983 is used. IEC 61094 'Measurement Microphones' gives specifications and defines calibration methods for various types of microphone used for making measurements.

Optical Radiation

Optical radiation is part of the electromagnetic spectrum covering the region from the ultraviolet (UV, 200 nm) to the near infrared (NIR, 3000 nm). From the earliest times, people have wondered about the nature of light. The Greeks and Isaac Newton thought that light was a stream of particles. During the 18th Century, a scientist called Thomas Young showed that light was made up of waves. Light is a form of energy. It is an electromagnetic wave. The electromagnetic spectrum is very large ranging from gamma rays to AM (Amplitude Modulation) waves. The visible part of this range is very small ranging from about 400 nanometers to a little over 700 nanometers. Plants and vegetables grow when they absorb light. Electrons are emitted by certain metals when light is shined on them, showing that there is some energy in light. This is the basis of the photoelectric cell.

Optical Radiation
A question that is frequently asked is why light waves can travel in a vacuum while sound cannot. This is because light waves consist of coupled fluctuations in electric and magnetic fields and therefore require no material medium for their passage. Sound waves are pressure fluctuations and cannot occur without a material medium to transmit them.

Optical Radiation
In free space, all electromagnetic waves have the velocity of light, which is: c = 3.00 x 10^8 m/s = 186,000 mi/s. The relationship between frequency (f), wavelength (λ) and the speed of light (c) is: c = f x λ.

Optical Radiation

Visible Light Spectrum in Relation to Other Wavelengths

Optical Radiation
The visible part of the spectrum may be further subdivided according to color, with red at the long wavelength end and violet at the short wavelength end, as illustrated (schematically) in the following figure.

It can be seen that visible light occupies a very narrow band of the whole spectrum. Typically, its wavelengths are between 4 x 10^-7 m (4,000 Å or 0.4 µm) and 7 x 10^-7 m (7,000 Å or 0.7 µm). (Å stands for 'angstrom' and is equal to 10^-10 m.)

Optical Radiation

Relationship between energy, frequency and wavelength


Electromagnetic waves are formed when an electric field couples with a magnetic field. The magnetic and electric fields of an electromagnetic wave are perpendicular to each other and to the direction of the wave. James Clerk Maxwell and Heinrich Hertz are two scientists who studied how electromagnetic waves are formed and how fast they travel.

BackNext

Relationship between energy, frequency and wavelength


Electromagnetic waves can be described by their wavelengths, energy, and frequency. All three of these things describe a different property of light, yet they are related to each other

mathematically. This means that it is correct to talk about the energy of an X-ray or the wavelength of a microwave or the frequency of a radio wave. X-rays and gamma-rays are usually described in terms of energy, optical and infrared light in terms of wavelength, and radio in terms of frequency. This is a scientific convention that allows the use of the units that are the most convenient for describing whatever energy of light one is measuring. There is a large difference in energy between radio waves and gamma-rays. Example: Electron-volts, or eV, are a unit of energy often used to describe light in astronomy. A radio wave can have energy of around 4 x 10^-10 eV. A gamma-ray can have an energy of 4 x 10^9 eV. That is an energy difference of 10^19 (or ten million trillion) eV!
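The three descriptions (wavelength, frequency, photon energy) are linked by c = f x λ and E = h x f. The sketch below converts a wavelength to frequency and to photon energy in electron-volts; the example wavelengths are illustrative.

C = 2.998e8            # speed of light, m/s
H = 6.626e-34          # Planck constant, J*s
J_PER_EV = 1.602e-19   # joules per electron-volt

def frequency_hz(wavelength_m):
    return C / wavelength_m

def photon_energy_ev(wavelength_m):
    return H * frequency_hz(wavelength_m) / J_PER_EV

print(round(photon_energy_ev(555e-9), 2))   # ~2.2 eV for green light near 555 nm
print(photon_energy_ev(C / 100e6))          # ~4e-7 eV for a 100 MHz radio wave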

Wavelength
The wavelength describes the distance between two peaks of a wave. Wavelength is usually measured in meters (m).

Frequency
Frequency is the number of cycles of a wave to pass some point in a second. The units of frequency are thus cycles per second, or Hertz (Hz). Radio stations broadcast radio transmissions at different frequencies.

Photons

Light acts like a wave, but sometimes it acts like a particle. Particles of light are called photons. Low-energy photons, like radio photons, tend to behave more like waves, while higher energy photons (i.e. X-rays) behave more like particles. That is the reason X-rays are not measured in waves very often. They are measured in individual X-rays and their energies.

Visible light, gamma rays and microwaves are really the same thing. They are all electromagnetic radiation; they just differ in their wavelengths. Light with shorter wavelengths, and hence higher frequencies, has more energy than light with longer wavelengths and hence lower frequencies. By the use of special sources or special filters, it is possible to limit the wavelength spread to a small band, typically of a width of 100 Å. Such light can be described as monochromatic, meaning of a single color.

Interferometer

An interferometer is an optical device which utilizes the effect of interference. It starts with an input beam, splits it into two or more separate beams with a beamsplitter, possibly exposes some of these beams to some external influences (e.g. some length changes or refractive index changes in a transparent medium), and recombines the beams on another beamsplitter. The power or the spatial shape of the resulting beam can then be used for a measurement such as length. Interferometers can be used for quite different purposes - by far not only for length measurements. Some examples are:

for the measurement of a distance (or changes of a distance or a position, i.e., a displacement) with an accuracy of better than an optical wavelength (in extreme cases, e.g. for gravitational wave detection, with a sensitivity many orders of magnitude below the wavelength)
for measuring the wavelength e.g. of a laser beam, or for analyzing a beam in terms of wavelength components
for monitoring slight changes of an optical wavelength or frequency
for measuring slight deviations of an optical surface from perfect flatness (or from some other shape)
for measuring the linewidth of a laser
for revealing tiny refractive index variations or induced index changes in a transparent medium
for modulating the power or phase of a laser beam
for measurements of the dispersion of optical components
as an optical filter

Depending on the application, the demands on the light source in an interferometer can be very different. In many cases, a spectrally very pure source, e.g. a single-frequency laser is required. Sometimes, the laser has to be wavelength-tunable. In other cases (e.g. for dispersion measurements with white light interferometers), one requires a light source with a very broad and smooth spectrum.

Photodiodes

Photodiodes are frequently used for the detection of light. They are semiconductor devices which contain a p-n junction, and often an intrinsic (undoped) layer between the n and p layers. Devices with an intrinsic layer are called p-i-n or PIN photodiodes. Light absorbed in the depletion region or the intrinsic region generates electron-hole pairs, most of which contribute to a photocurrent. Photodiodes can be operated in two very different modes:

1. Photovoltaic mode: like a solar cell, the illuminated photodiode generates a voltage which can be measured. However, the dependence of this voltage on the light power is rather nonlinear, and the dynamic range is quite small. Also, the maximum speed is not achieved. 2. Photoconductive mode: here, one applies a reverse voltage to the diode (i.e., a voltage in the direction where the diode is not conducting without incident light) and measures the resulting photocurrent. (It may also suffice to keep the applied voltage close to zero.) The dependence of the photocurrent on the light power can be very linear over six or more orders of magnitude of the light power, e.g. in a range from a few nanowatts to tens of milliwatts for a silicon p-i-n photodiode with an active area of a few mm2. The magnitude of the reverse voltage has nearly no influence on the photocurrent and only a weak influence on the (typically rather small) dark current (obtained without light), but a higher voltage tends to make the response faster and also increases the heating of the device

Typical photodiode materials are:


silicon (Si): low dark current, high speed, good sensitivity roughly between 400 nm and 1000 nm (best around 800-900 nm)
germanium (Ge): high dark current, slow speed due to large parasitic capacitance, good sensitivity roughly between 600 nm and 1800 nm (best around 1400-1500 nm)
indium gallium arsenide (InGaAs): expensive, low dark current, high speed, good sensitivity roughly between 800 nm and 1700 nm (best around 1300-1600 nm)

Photometry is the measurement of light, which is defined as electromagnetic radiation that is detectable by the human eye. It is thus restricted to the wavelength range from about 360 to 830 nanometers (1000 nm = 1 µm). Photometry is just like radiometry except that everything is weighted by the spectral response of the eye. Visual photometry uses the eye as a comparison detector, while physical photometry uses either optical radiation detectors constructed to mimic the spectral response of the eye, or spectroradiometry coupled with appropriate calculations to do the eye response weighting. Typical photometric units include lumens, lux, and candela. The following table shows SI photometry units:

Radiometry is the measurement of optical radiation, which is electromagnetic radiation within the frequency range between 3 x 10^11 and 3 x 10^16 Hz. This range corresponds to wavelengths between 0.01 and 1000 micrometres, and includes the regions commonly called the ultraviolet, the visible and the infrared. Typical units encountered are watts/m2 and photons/sec-steradian. The difference between radiometry and photometry is that radiometry includes the entire optical radiation spectrum, while photometry is limited to the visible spectrum as defined by the response of the eye. Photometry is more difficult to understand, primarily because of the arcane terminology, but is fairly easy to do, because of the limited wavelength range. Radiometry, on the other hand, is conceptually somewhat simpler, but is far more difficult to actually do.

Check Your Understanding


Photometry is limited to the visible spectrum, while radiometry includes the entire ______________. A. infrared spectrum B. gamma range C. x-ray array D. optical radiation spectrum

Measurement of Light
The following table shows SI radiometry units:


Types of Radiation
All matter produces radiation. The various types of radiation are shown in the table below.

Types of Radiation
Radio Waves are produced when free electrons are forced to move in a magnetic field, or when electrons change their spin in a molecule. They are used for communication and to study low energy motions in atoms. All electrical goods generate Radio Waves. Radio Waves from space can be used to study cool interstellar gases. Radio Waves cannot be detected by humans.

Infra Red radiation is produced by the vibrations of molecules. Human skin feels this radiation as heat. Microwave ovens work by using microwave radiation of the correct frequency to make the water molecules move faster. A faster moving molecule is a hotter molecule. Only the food, which contains water, is affected; the plate, which is a dry mineral, is unaffected. Infra Red is used as an analytical tool for molecules in Chemistry.

Visible and Ultra Violet Light are produced by chemical reactions and ionisations of outer electrons in atoms and molecules. There are many chemical reactions that are instigated by this radiation: the chemical retinal in animal eyes, chlorophyll in plants, silver chloride in photography, the chemical melanin in human skin; silicon converts light to electricity. Light is the most familiar electromagnetic radiation because the Earth's atmosphere is transparent to it. Light (and a little of the Infra Red and Ultra Violet on either side of it) can pass through the atmosphere. Living organisms have evolved to use these waves. Visible Light is simply the part of the electromagnetic spectrum that reacts with the chemicals in our eyes. Bees can see more Ultra Violet than we can. Snakes can detect Infra Red.

X-Rays are produced by fast electrons stopping suddenly, or by ionisation of the inner electrons of an atom. They are produced by high energy processes in space: gases being sucked in to a black hole and becoming compressed; exploding stars. They are used in medicine to look through flesh. In Physics the waves are small enough to pass between atoms and molecules so they can be used to determine molecular structures.

Gamma Rays are produced by very high energy processes, usually involving the nucleus of atoms. Radioactivity and exploding stars produce Gamma Rays. They are very dangerous because if they strike atoms and molecules they will do lots of damage. If the molecules are the long and complex molecules of life, death and mutation could occur.

Measurement of Light
Photometric Quantities Many different units of measure are used for photometric measurements. People sometimes ask why there need to be so many different units, or ask for conversions between units that can't be converted (lumens and candelas, for example).

We are familiar with the idea that the adjective "heavy" can refer to weight or density, which are fundamentally different things. Similarly, the adjective "bright" can refer to a lamp which delivers a high luminous flux (measured in lumens), or to a lamp which concentrates the luminous flux it has into a very narrow beam (candelas). Because of the ways in which light can propagate through three-dimensional space, spread out, become concentrated, reflect off shiny or matte surfaces, and because light consists of many different wavelengths, the number of fundamentally different kinds of light measurement that can be made is large, and so are the numbers of quantities and units that represent them.

Measurement of Light
Photometric Versus Radiometric Quantities There are two parallel systems of quantities known as photometric and radiometric quantities. Every quantity in one system has an analogous quantity in the other system. Some examples of parallel quantities include:

Luminance (photometric) and radiance (radiometric) Luminous flux (photometric) and radiant flux (radiometric) Luminous intensity (photometric) and radiant intensity (radiometric)

In photometric quantities every wavelength is weighted according to how visible it is, while radiometric quantities use unweighted absolute power. For example, the eye responds much more strongly to green light than to red, so a green source will have higher luminous flux than a red source with the same radiant flux would. Light outside the visible spectrum does not contribute to photometric quantities at all, so for example a 1000 watt space heater may put out a great deal of radiant flux (1000 watts, in fact), but as a light source it puts out very few lumens (because most of the energy is in the infrared, leaving only a dim red glow in the visible).

Watts (radiant flux) versus lumens (luminous flux)

A comparison of the watt and the lumen illustrates the distinction between radiometric and photometric units. The watt is a unit of power. We are accustomed to thinking of light bulbs in terms of power in watts. But power is not a measure of the amount of light output. It tells you how quickly the bulb will increase your electric bill, not how effective it will be in lighting your home. Because incandescent bulbs sold for "general service" all have fairly similar characteristics, power is a guide to light output, but only a rough one. Watts can also be a measure of output. In a radiometric sense, an incandescent light bulb is about 80% efficient; 20% of the energy is lost (e.g. by conduction through the lamp base). The remainder is emitted as radiation. Thus, a 60 watt light bulb emits a total radiant flux of about 45 watts.

Incandescent bulbs are, in fact, sometimes used as heat sources (as in a chick incubator), but usually they are used for the purpose of providing light. As such, they are very inefficient, because most of the radiant energy they emit is invisible infrared. There are compact fluorescent bulbs that say on their package that they "provide the light of a 60 watt bulb" while consuming only 15 watts. The lumen is the photometric unit of light output. Although most consumers still think of light in terms of power consumed by the bulb, it has been a trade requirement for several decades that light bulb packaging give the output in lumens. The package of a 60 watt incandescent bulb indicates that it provides about 900 lumens, as does the package of the 15 watt compact fluorescent. The lumen is defined as the amount of light given into one steradian by a point source of one candela strength; while the candela, a base SI unit, is defined as the luminous intensity of a source of monochromatic radiation, of frequency 540 terahertz, and a radiant intensity of 1/683 watts per steradian. (540 THz corresponds to about 555 nanometers, the wavelength, in the green, to which the human eye is most sensitive. The number 1/683 was chosen to make the candela about equal to the standard candle, the unit which it superseded.) Combining these definitions, we see that 1/683 watt of 555 nanometer green light provides one lumen. The relation between watts and lumens is not just a simple scaling factor. We know this already, because the 60 watt incandescent bulb and the 15 watt compact fluorescent both provide 900 lumens. The definition tells us that 1 watt of pure green 555 nm light is "worth" 683 lumens. It does not say anything about other wavelengths. Because lumens are photometric units, their relationship to watts depends on the wavelength according to how visible the wavelength is. Infrared and ultraviolet radiation, for example, are invisible and do not count. One watt of infrared radiation (which is where most of the radiation from an incandescent bulb falls) is worth zero lumens. Within the visible spectrum, wavelengths of light are weighted according to a function called the "photopic spectral luminous efficiency." According to this function, 700 nm red light is only about 0.4% as efficient as 555 nm green light. Thus, one watt of 700 nm red light is "worth" only 2.7 lumens.
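The weighting described above can be written out directly for monochromatic light: luminous flux = radiant flux x V(lambda) x 683 lm/W. The sketch below uses only the two V(lambda) values implied by the text (1.0 at 555 nm and about 0.004 at 700 nm); a real calculation would integrate the lamp's full spectrum against the photopic curve.

LM_PER_W_AT_555NM = 683.0   # lumens per watt of 555 nm light

def luminous_flux_lm(radiant_watts, v_lambda):
    """Luminous flux for monochromatic light with photopic weighting v_lambda."""
    return radiant_watts * v_lambda * LM_PER_W_AT_555NM

print(round(luminous_flux_lm(1.0, 1.0)))       # 683 lm for 1 W of 555 nm green
print(round(luminous_flux_lm(1.0, 0.004), 1))  # about 2.7 lm for 1 W of 700 nm red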

Photometric measurement techniques


Photometric measurement is based on photo detectors, devices (of several types) that produce an electric signal when exposed to light. Simple applications of this technology include switching luminaires on and off based on ambient light conditions, and light meters, used to measure the total amount of light incident on a point. More complex forms of photometric measurement are used frequently within the lighting industry.

Spherical photometers are used to measure the directional luminous flux produced by lamps, and consist of a large-diameter globe with a lamp mounted at its center. A photocell rotates about the lamp in three axes, measuring the output of the lamp from all sides. Luminaires (known to laypersons simply as light fixtures) are tested using goniophotometers and rotating mirror photometers, which keep the photocell stationary at a sufficient distance that the luminaire can be considered a point source. Rotating mirror photometers use a motorized system of mirrors to reflect light emanating from the luminaire in all directions to the distant photocell; goniophotometers use a rotating 2-axis table to change the orientation of the luminaire with respect to the photocell. In either case, luminous intensity is tabulated from this data and used in lighting design.

Units of Luminance
A foot-lambert or footlambert (fL, sometimes fl or ft-L) is a unit of luminance in U.S. customary units and elsewhere. A foot-lambert equals 1/π candela per square foot, or 3.4262591 candelas per square meter (nits). The luminance of a perfect Lambertian diffuse reflecting surface in foot-lamberts is equal to the incident illuminance in foot-candles. For real diffuse reflectors, the ratio of luminance to illuminance in these units is roughly equal to the reflectance of the surface. The foot-lambert is rarely used by electrical and lighting engineers, who favor the candela per square foot or candela per square meter. The foot-lambert is still used in the motion picture industry for the luminance of images on a projection screen. The foot-lambert is also used in the flight simulation industry to measure the luminance of the visual display system. The peak display luminance (screen brightness) for Full Flight Simulators (FFS) is specified as at least 6 foot-lamberts by the European Aviation Safety Agency (EASA) and the Federal Aviation Administration (FAA) in the USA. A foot-candle (sometimes footcandle; abbreviated fc, lm/ft2, or sometimes ft-c) is a non-SI unit of illuminance or light intensity widely used in photography, film, television, and the lighting industry.

Reflectivity
In optics and heat transfer, reflectivity is the fraction of incident radiation reflected by a surface. In full generality it must be treated as a directional property that is a function of the reflected direction, the incident direction, and the incident wavelength. However, it is also commonly averaged over the reflected hemisphere to give the spectral hemispherical reflectivity:

ρ(λ) = Grefl(λ) / Gincid(λ)

where Grefl(λ) and Gincid(λ) are the reflected and incident spectral intensities, respectively. This can be further averaged over all wavelengths to give the total hemispherical reflectivity:

ρ = ∫ Grefl(λ) dλ / ∫ Gincid(λ) dλ
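A numerical version of the averaging above: the total hemispherical reflectivity is the integral of the reflected spectral intensity divided by the integral of the incident spectral intensity. The spectra in the sketch are made-up illustrative values, evaluated with a simple trapezoidal rule.

def total_reflectivity(wavelengths_nm, g_incident, g_reflected):
    """Ratio of integrated reflected to integrated incident spectral intensity."""
    def trapz(y):
        return sum(0.5 * (y[i] + y[i + 1]) * (wavelengths_nm[i + 1] - wavelengths_nm[i])
                   for i in range(len(wavelengths_nm) - 1))
    return trapz(g_reflected) / trapz(g_incident)

wl = [400, 500, 600, 700]        # wavelength grid, nm
g_in = [1.0, 1.2, 1.1, 0.9]      # incident spectral intensity (arbitrary units)
g_refl = [0.5, 0.7, 0.6, 0.4]    # reflected spectral intensity
print(round(total_reflectivity(wl, g_in, g_refl), 3))   # about 0.54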

Reflectivity
Going back to the fact that reflectivity is a directional property, it should be noted that most surfaces can be divided into those that are specular and those that are diffuse. For specular surfaces, such as glass or polished metal, reflectivity will be nearly zero at all angles except at the appropriate reflected angle. For diffuse surfaces, such as matt white paint, reflectivity is uniform; radiation is reflected in all angles equally or near-equally. Such surfaces are said to be Lambertian. Most real objects have some mixture of diffuse and specular reflective properties.

Spectrometer
A spectrometer is an optical instrument used to measure properties of light over a specific portion of the electromagnetic spectrum. The variable measured is most often the light's intensity but could also, for instance, be the polarization state. The independent variable is usually the wavelength of the light, normally expressed as some fraction of a meter, but sometimes expressed as some unit directly proportional to the photon energy, such as wave number or electron volts, which has a reciprocal relationship to wavelength.

Spectrometer
A spectrometer is used in spectroscopy for producing spectral lines and measuring their wavelengths and intensities. Spectrometer is a term that is applied to instruments that operate over a very wide range of wavelengths, from gamma rays and X-rays into the far infrared. In general, any particular instrument will operate over a small portion of this total range because of the different techniques used to measure different portions of the spectrum. Below optical frequencies (that is, at microwave, radio, and audio frequencies), the spectrum analyzer is a closely related electronic device. Spectrometers are used in spectroscopic analysis to identify materials.

Spectroscopes

Spectroscopes are often used in astronomy and some branches of chemistry. Early spectroscopes were simply a prism with graduations marking wavelengths of light. Modern spectroscopes, such as monochromators, generally use a diffraction grating, a movable slit, and some kind of photo detector, all automated and controlled by a computer. When a material is heated to incandescence it emits light that is characteristic of the atomic makeup of the material. Particular light frequencies give rise to sharply defined bands on the scale which can be thought of as fingerprints. For example, the element sodium has a very characteristic double yellow band known as the Sodium D-lines at 588.9950 and 589.5924 nanometers, the colour of which will be familiar to anyone who has seen a low pressure sodium vapor lamp. Arrays of photosensors are also used in place of film in spectrographic systems. Such spectral analysis, or spectroscopy, has become an important scientific tool for analyzing the composition of unknown material and for studying astronomical phenomena and testing astronomical theories.

Spectrographs

A spectrograph is an instrument that transforms an incoming time-domain waveform into a frequency spectrum, or generally a sequence of such spectra. There are several kinds of machines referred to as spectrographs, depending on the precise nature of the waves. The first spectrographs used photographic paper as the detector. The star spectral classification and discovery of the main sequence, Hubble's law and the Hubble sequence were all made with spectrographs that used photographic paper.

Modern spectrographs use electronic detectors, such as CCDs, which can be used for both visible and UV light. The exact choice of detector depends on the wavelengths of light to be recorded.

Radiometer
A radiometer is a device used to measure the radiant flux or power of electromagnetic radiation. Although the term is perhaps most generally applied to a device which measures infrared radiation, it can also be applied to detectors operating at any wavelength in the electromagnetic spectrum; a spectrum-measuring radiometer is also called a spectroradiometer. The most important characteristics of a radiometer are:

spectral range (what wavelengths)
spectral sensitivity (what sensitivity versus wavelength)
field of view (180 degrees or limited to a certain narrow field)
directional response (typically cosine response or unidirectional response)

Radiometer
Radiometers can use all kinds of detectors; some are "thermal", absorbing energy and converting it to a signal, while others sense photons (e.g., a photodiode), having a constant response per quantum (light particle). In a common application, the radiation detector within a radiometer is a bolometer, which absorbs the radiation falling on it and, as a result, rises in temperature. This rise can then be measured by a thermometer of some type. This temperature rise can be related to the power in the incident radiation. A microwave radiometer operates in the microwave region of the electromagnetic spectrum.

Chemical metrology, or metrology in chemistry, can be simply described as the science of achieving traceable analytical data in chemistry. International trends towards globalization have a huge impact on chemical metrology. Food quality, pesticide residues, medicines, pollutants, and genetically modified material are all hot topics, and a huge amount of effort is being committed world-wide to ensure that chemical measurements are recognized among countries engaged in global commerce. Metrology in chemistry is the science of chemical measurements. The theoretical concepts of traceability, uncertainty and calibration are similar in chemical and physical metrology. However, the practical application of these concepts to real measurement problems in a systematic way is different for chemical metrology.

SI Base Unit
Different from Physical Metrology, Chemical Metrology concerns the measurement of amount of substance. The SI base unit is the Mole.

The mole is defined as the amount of substance which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12. Quantitative studies of chemical analysis and reactions are most conveniently understood in terms of the ratios of the number of atoms or molecules involved. Unfortunately, the number of molecules in any practical sample of a substance is huge, on the order of 6.022 x 10^23. The mole provides an alternative measure of the amount of substance that is much less unwieldy. The practical realization of the mole benefits greatly from our ability to measure the ratio of atomic masses to very high accuracy. Measurements of the mass of a sample then allow us to infer the number of moles in a pure sample simply. While determining the number of moles in a pure sample is easy, determining the amount of substance in impure or dilute samples is extraordinarily difficult. Problems associated with ongoing chemical reactions, imperfectly mixed samples, degradation of the sample, and the influence of other, perhaps similar, chemical species are influence variables unlike those found in physics. Also, the huge range of chemical species makes chemical metrology the broadest of the metrological fields. The area of chemical metrology covers all measurements related to determination of the amount of particular substances (chemicals) inside a system, for example, determination of organic pollutants in environmental samples or analysis of food samples for the presence of harmful substances. Chemical analysis is essential to industry, trade, commerce, healthcare, safety and environmental protection for determining quality and conformance. The prime objective of Metrology in Chemistry is to provide the basis for comparability of chemical measurements and their traceability in order to establish global mutual recognition of analytical results and provide a technical foundation for wider agreements related to international trade, commerce and regulatory issues. The relationship of how chemical metrology fits into the rest of the metrology hierarchy is shown in the diagram below.
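As a minimal illustration of the mass-to-moles relationship described above, here is a short Python sketch. The molar masses used are rounded reference values chosen for illustration; they are not taken from this course.

import math  # not strictly needed here, kept for consistency with later sketches

# Estimate the amount of substance (moles) in a pure sample from its mass.
# Molar masses are rounded reference values in g/mol (assumed for illustration).
MOLAR_MASS = {"C-12": 12.000, "H2O": 18.015, "NaCl": 58.44}

def moles(mass_g, substance):
    """Amount of substance n = m / M for a pure sample."""
    return mass_g / MOLAR_MASS[substance]

print(moles(12.0, "C-12"))   # 1.0 mol, by the definition of the mole
print(moles(100.0, "H2O"))   # about 5.55 mol of water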

The aim of the CCQM is to establish world-wide comparability through traceability to the SI or other internationally agreed references. This way, a measurement result obtained in country A will be comparable with the result of the same type of measurement in country B. This does not mean that all measurement and test results must have the same accuracy, but within the statement of uncertainty the results should be comparable. To facilitate the establishment of the traceability chain, the CCQM works on the development of primary methods, primary pure reference materials and the validation of traceable methods. Other related activities include discussions on the quality and validity of the calibration and measurement capabilities and the certified reference materials claimed by the national/designated metrology institutes.

Chemical Metrology

In order to assure the traceability, accuracy and compatibility of measurements, various Certified Reference Materials (CRMs) are prepared and distributed for interlaboratory comparisons between participating laboratories. This can apply to many different fields, including food safety, environmental monitoring and pharmaceutical work. Certified reference materials (CRMs) are batches of various substances that are intended to be homogeneous and well characterized with respect to the particular properties of interest to the user community they are designed to serve. The CCQM is responsible for the organization of key comparisons to assess the capabilities and the competence of participating national/designated metrology institutes. Proficiency testing programs are an important tool used by laboratory accreditation bodies to assess the competency of field laboratories. Successful participation in a proficiency testing program is often regarded as a demonstration of the validity of a laboratory's traceability statement in terms of measurement uncertainty. Some instrumentation used for chemical metrology includes:

pH meters
High resolution gas chromatograph
Mass spectrometer
Liquid chromatograph / mass spectrometer with electrospray ionization interface and atmospheric pressure chemical ionization interface
Gas chromatograph - atomic emission detector
Fourier transform infrared spectrophotometer
Nuclear magnetic resonance spectrometer
UV/Visible spectrophotometer
Mass spectrometer for precision gas composition measurement
Gas chromatograph with detectors including DID, ECD, TCD, FID, USD
Automatic on-line air pollution monitoring system for NOx, SO2, CO, and CO2
Gas chromatograph / isotope ratio mass spectrometer

One of the examples of how important chemical metrology is can be summarized this way: The International Temperature Scale of 1990 (ITS-90) assigns temperatures to the solid-liquid phase transitions (triple points, melting points, freezing points) of various substances. The ideally pure substances used for the definition of ITS-90 are unattainable in practice, so the influence of impurities must be accounted for. This is the dominant uncertainty component in realizing most of the important reference temperatures. While a number of methods have been employed to estimate the uncertainty arising from the impurity effect, the methods that depend on chemical analyses of the materials are most useful. For the majority of cases, the demands of the thermometry community exceed the capabilities of routine analytical chemistry. It is therefore necessary to rigorously characterize homogeneous batches of material using as many techniques as possible to develop certified reference materials that serve both the thermometry and chemical metrology communities.

Other applications of chemical metrology are:


Safeguarding the quality of food and the purity of air
Developing new products and materials, such as pharmaceuticals or ceramics
Monitoring conformity assessment and product specification
Protecting the consumer against fraud and counterfeit products
Assisting a hospital physician with a medical diagnosis
Supporting the justice system in the fight against drugs and organized crime
Providing forensic evidence for litigation

To ensure reliable and comparable chemical measurements, it is necessary to have unified national/regional/international systems in place to enable analysts to attain and demonstrate the comparability and traceability of their measurements. In order to achieve this, a measurement and testing scheme should require the following:

Validated methods
Procedures for determining measurement uncertainty
Procedures and tools for establishing traceability
Pure substance reference materials and calibration standards
Matrix reference materials
Proficiency testing
Third party laboratory accreditation to an international standard such as ISO 17025 or ISO Guide 34

pH Measurement
A pH measurement is a determination of the activity of hydrogen ions in an aqueous solution. Many important properties of a solution can be determined from an accurate measurement of pH, including the acidity of a solution and the extent of a reaction in the solution. Many chemical processes and properties, such as the speed of a reaction and the solubility of a compound, can also depend greatly on the pH of a solution. In applications ranging from industrial operations to biological processes, it is important to have an accurate and precise measurement of pH. For more on pH measurement see WorkPlace Trainings Water Quality course.

pH Measurement
A solution's pH is commonly measured in a number of ways. Qualitative pH estimates are commonly performed using litmus paper or an indicator in solution. Quantitative pH measurements are reliably taken using potentiometric electrodes. These electrodes simply monitor changes in voltage caused by changes in the activity of hydrogen ions in a solution. The letters pH stand for "power of hydrogen" and the numerical value is defined as the negative base 10 logarithm of the molar concentration of hydrogen ions: pH = -log10[H+]. In most cases, the activity of hydrogen ions in solution can be approximated by the molar concentration of hydrogen ions ([H+]) in the solution. Some examples of pH values are shown below.

The pH of a solution is a measure of the molar concentration of hydrogen ions in the solution and as such is a measure of the acidity or basicity of the solution. The letters pH stand for "power of hydrogen" and the numerical value for pH is just the negative of the power of 10 of the molar concentration of H+ ions. The usual range of pH values encountered is between 0 and 14, with 0 being the value for concentrated hydrochloric acid (1 M HCl), 7 the value for pure water (neutral pH), and 14 being the value for concentrated sodium hydroxide (1 M NaOH). It is possible to get a pH of -1 with 10 M HCl, but that is about the practical limit of acidity. At the other extreme, a 10 M solution of NaOH would have a pH of 15.

In pure water, the molar concentration of H+ ions is 10^-7 M and the concentration of OH- ions is also 10^-7 M. Actually, when looked at in detail, it is more accurate to classify the concentrations as those of [H3O]+ and [OH]-. The product of the positive and negative ion concentrations is 10^-14 in any aqueous solution at 25 °C. An important example of pH is that of the blood. Its nominal value of pH = 7.4 is regulated very accurately by the body. If the pH of the blood gets outside the range 7.35 to 7.45, the results can be serious and even fatal. If the pH of tap water is measured with a pH meter, it may be surprising to see how far from a pH of 7 it is because of dissolved substances in the water. Distilled water is necessary to get a pH near 7. Meters for pH measurement can give precise numerical values, but approximate values can be obtained with various indicators. Red and blue litmus paper has been one of the common indicators. Red litmus paper turns blue at a basic pH of about 8, and blue litmus paper turns red at an acid pH of about 5. Neither changes color if the pH is nearly neutral. Litmus is an organic compound derived from lichens. Phenolphthalein is also a common indicator, being colorless in solution at pH below 8 and turning pink for pH above 8.
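A minimal Python sketch of the pH relationships described above; the concentrations used are the textbook figures quoted in this section, used here only to check the arithmetic.

import math

def pH(h_molar):
    """pH = -log10 of the molar hydrogen-ion concentration."""
    return -math.log10(h_molar)

print(pH(1e-7))    # pure water: 7.0
print(pH(1.0))     # 1 M HCl: 0.0
print(pH(10.0))    # 10 M HCl: -1.0

# In any aqueous solution at 25 °C, [H+][OH-] = 1e-14,
# so the OH- concentration follows from the H+ concentration.
h = 1e-7
oh = 1e-14 / h
print(oh)          # 1e-7 for pure water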

pH Measurement
The following table shows ion concentration and corresponding pH values.

Discrete statistics deal with events that have a definite outcome (heads or tails, success or failure). In engineering design, this knowledge is important in product testing and setting reliability targets. Let e denote an event which can happen in k ways out of a total of n ways, all n ways being equally likely. The probability of occurrence of the event e is defined as: p = P(e) = k/n. This is called the probability of success. The probability of failure (non-occurrence) of the event is denoted by q: q = P(not e) = (n - k)/n = 1 - k/n = 1 - p. If the event cannot occur, its probability is 0. If the event must occur, its probability is 1. Next we define the odds.

Let p be the probability that an event will occur. The odds in favor of its occurrence are p : q and the odds against it are q : p.
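A short Python sketch of these definitions; the fair six-sided die used in the example is hypothetical and not part of the course text.

from fractions import Fraction

# Probability of rolling an even number with a fair six-sided die:
# the event can happen in k = 3 ways out of n = 6 equally likely ways.
k, n = 3, 6
p = Fraction(k, n)      # probability of success
q = 1 - p               # probability of failure
print(p, q)             # 1/2 1/2
print(f"odds in favor {p}:{q}, odds against {q}:{p}")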

Summation
If x1 = 150, x2 = 200, x3 = 180, x4 = 160, x5 = 170, then the summation Σxi = 150 + 200 + 180 + 160 + 170 = 860. The summation function is denoted by the Greek capital letter sigma (Σ).

The Microsoft Excel Spreadsheet function for the SUM is: =SUM (Range) Sometimes it is easy to express data as a percentage. This makes comparison between different attributes and variables easier.

Example
A laboratory has 12 analog multimeters and 25 digital multimeters. A total of 7 multimeters are found to be out of tolerance before their calibration interval. Out of the 7, 2 are analog and 5 are digital. Expressed as a percentage: (2/12) * 100 = 16.67% of the analog multimeters are out of tolerance, and (5/25) * 100 = 20.00% of the digital multimeters are out of tolerance. A 20.00 mm gage block measures 20.01 mm when measured with a micrometer. This value differs by 0.01 mm, or (0.01/20.00) * 100 = 0.05%.

The sample arithmetic mean, or sample mean, is defined as the sum of the sample values divided by the number of sample values.

The sample mean is denoted by x-bar:

From the summation example, the sample mean is: (150 + 200 + 180 + 160 + 170) / 5 = 172. The Microsoft Excel Spreadsheet function for the mean is: =AVERAGE (Range)

Definition
The sample median is defined as the middle value when the data is ranked in increasing (or decreasing) order of magnitude. From our example, 150, 160, 170, 180, 200, the median is 170. If there is an even number of values, then the average of the two middle values is the median: for 150, 160, 170, 174, 180, 200, the median is (170 + 174)/2 = 344/2 = 172. The Microsoft Excel Spreadsheet function for the MEDIAN is: =MEDIAN (Range)

Definition
The sample mode is defined as the value which occurs with the highest frequency. 150, 160, 180, 170, 170, 200

The mode is 170 in the example above. The Microsoft Excel Spreadsheet function for the MODE is: =MODE (Range) (Note: Care should be exercised when using the MODE function as it may not yield the expected result for non-integer numbers)

Definition
Range is defined as the difference between the largest and smallest values for a variable. If the data for a given variable is normally distributed, a quick estimate of the standard deviation can be made by dividing the range by six. From our example, 150, 160, 170, 180, 200, the range is 200 - 150 = 50. There is no RANGE function per se available in Microsoft Excel; the MAX and MIN functions are used to derive it: =MAX(Range) - MIN(Range)

Population
A population is the whole set of measurements (of one variable) about which we want to draw a conclusion. In product design, it is rarely possible to measure the entire population; at the design and test stage, only a very small sample may be available.

Sample
A sample is a sub-set of a population, a set of the measurements which comprise the population. Most of the time, we make assumptions based on sample data. That is why it is very important to choose the optimum sample size. The sample should be drawn randomly to remove any bias in data.

Variance
The sample variance, s2, is a popular measure of dispersion. It is an average of the squared deviations from the mean.

The Microsoft Excel Spreadsheet function for the SAMPLE VARIANCE is =VAR (Range)

Population Variance
Population variance is denoted by σ² and is calculated using the following formula:

The Microsoft Excel Spreadsheet function for the POPULATION VARIANCE is: =VARP (Range)

Sample Standard Deviation


The sample standard deviation, s, is a popular measure of dispersion. It measures the average distance between a single observation and its mean. The use of n-1 in the denominator instead of the more natural n is often of concern. It turns out that if n (instead of n-1) were used, a biased estimate of the population standard deviation would result. The use of n-1 corrects for this bias.

Sample Standard Deviation


Unfortunately, s is inordinately influenced by outliers. For this reason, you must always check for outliers in your data before you use this statistic. Also, s is a biased estimator of the population standard deviation. An unbiased estimate, calculated by adjusting s, is given under the heading Unbiased Standard Deviation:

The Microsoft Excel Spreadsheet function for the SAMPLE STANDARD DEVIATION is: =STDEV (Range)

Population Standard Deviation


Population standard deviation is denoted by σ and is calculated using the following formula:

The Microsoft Excel Spreadsheet function for the POPULATION STANDARD DEVIATION is: =STDEVP (Range)

The standard error of the mean is an estimate of the precision of the sample mean. The standard error and confidence limits are calculated by dividing the corresponding standard deviation value by the square root of n.

Standard Error of Mean

Example

Let us illustrate by example the calculation of the population and sample variance and standard deviation using the data shown at the right. Population Variance is: 0.10500/10 = 0.0105. Sample Variance is: 0.10500/(10 - 1) = 0.0117. Population Standard Deviation is: √0.0105 = 0.1025. Sample Standard Deviation is: √0.0117 = 0.1080. Note that to calculate the standard deviation by the long-hand method shown, one must calculate the variance first and then take its square root. As data get larger and more complicated, it is much easier to do statistical calculations using a calculator or a spreadsheet. The keystroke combination to calculate standard deviation will vary depending on the make and model of the calculator. The histogram is a traditional way of displaying the shape of a batch of data. It is constructed from a frequency distribution, where choices on the number of classes and class width have been made. These choices can drastically affect the shape of the histogram. The ideal shape to look for in the case of normality is a bell-shaped symmetrical distribution.
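As a cross-check of the hand calculations described in this section, here is a minimal Python sketch using the standard library statistics module. The data are the five values from the summation example; the variance example above refers to a data table shown at the right, which is not reproduced in the text.

import math
import statistics as st

data = [150, 200, 180, 160, 170]

print(sum(data))                 # summation: 860
print(st.mean(data))             # mean: 172
print(st.median(data))           # median: 170
print(max(data) - min(data))     # range: 50
print(st.variance(data))         # sample variance (n - 1 in the denominator)
print(st.pvariance(data))        # population variance (n in the denominator)
print(st.stdev(data))            # sample standard deviation
print(st.pstdev(data))           # population standard deviation

# Standard error of the mean: sample standard deviation / sqrt(n)
print(st.stdev(data) / math.sqrt(len(data)))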

Bell Curve (Normal or Gaussian Distribution)


One pattern that appears often in nature is the Gaussian distribution, which is named after Carl Friedrich Gauss, a German mathematician who wrote an equation to describe the pattern. Because of its distinctive bell shape, it is often referred to as a "bell curve." It is also referred to as the "normal" distribution, but the term "normal" is misleading because it implies that it is the most common or acceptable pattern, which is not necessarily the case.

Bell Curve (Normal Distribution)

Normal Probability Density


The normal probability density function is defined as:

f(x) = (1 / (σ √(2π))) exp( -(x - μ)² / (2σ²) )

Where: μ = the distribution mean (population mean), which is a measure of central tendency or location, and σ = the distribution standard deviation (population standard deviation), which is a measure of dispersion. A z-table is a table of values that shows the number of standard deviations separating a value from the mean. The z-table is derived from the normal (Gaussian), bell-curve distribution. The z-value is calculated using the following formula:

z = (x - μ) / σ

Where: x = individual value, μ = population mean, σ = population standard deviation.

Reliability Defined
Reliability is the probability that a product will perform its intended function satisfactorily for a pre-determined period of time in a given environment.

Reliability Targets in Product Design


Knowledge of reliability and its associated distributions is necessary for design engineers and scientists. Every product design has a reliability target. For example:

When a tire is designed for an automobile, there is a reliability target for its wear characteristics, expressed in miles of normal use.
When an aircraft engine is designed, it has a reliability target defining when it needs maintenance after a certain number of hours of operation.
A pacemaker has a target expressed in terms of the time when it needs a battery replacement.

The reliability of a product depends on how well reliability is planned into the product in the early stages of product design.

Types of Failures

It is important to know about types of failures and their associated probability distribution(s). The types of failures observed are:

Infant Mortality Rate
Constant Failure Rate
Wear Out Period

The relationship of the three types of failures during equipment's lifetime is illustrated in the figure. This is sometimes known as the "bathtub" curve.

Let's take a look at each type of failure and its associated probability distribution(s).

Infant Mortality Rate


Normally this kind of failure is related to quality issues. This may be due to faulty components, components not meeting specifications, or poor workmanship. By replacing faulty components in the equipment, the early failures decrease over a period of time, producing a decreasing failure rate. The Weibull probability distribution is normally used to determine the end of the infant mortality period.

Constant Failure Rate


After the failures due to faulty components are eliminated, the equipment enters what is known as the constant failure rate period. The failures now occur randomly. The probability of failure can be predicted over a certain interval of time, not at a specific time. The exponential probability distribution is normally used to model the constant failure rate interval.

Wear Out Period


As equipment ages, components begin to wear out and failures are observed at an increasing rate. A variety of distributions can be used to model the wear out period; the normal distribution is used quite frequently.

Probability Distributions

There are four probability distributions that can be used to determine the reliability of a system or a component. These are:

Normal (Gaussian)
Exponential
Binomial
Weibull

The probability distribution used depends on whether testing is time-terminated or failure-terminated.

Time-Terminated Tests
In time-terminated tests not all units fail. Time-terminated tests produce only one parameter: the mean life, or failure rate.

Failure-Terminated Tests
In failure-terminated tests, all units are tested to failure. Failure-terminated tests produce Standard Deviation and Weibull parameters.

Which Distribution to Use?


Normal and Weibull distributions can only be used with failure-terminated data. Binomial and Exponential distributions can be used with both time-terminated and failure-terminated data.


Normal Distribution
As discussed earlier, the normal distribution can be used to model the wear out period data.

Normal Probability Density


The normal probability density function is defined as:

f(t) = (1 / (σ √(2π))) exp( -(t - μ)² / (2σ²) )

Where: μ = the distribution mean (population mean), which is a measure of central tendency or location, and σ = the distribution standard deviation (population standard deviation), which is a measure of dispersion. The standard normal distribution table (z-table: μ = 0 and σ = 1) is used to calculate the probability of failure. A portion of the z-table is reproduced below:

Normal Distribution

The reliability at time t, R(t), is 1 - F(t), where F(t) is the cumulative probability of failure up to time t. To predict the reliability at time t, the area below the calculated z value is used. The formula to calculate z is:

z = (t - μ) / σ

Where: t = time, μ = mean, and σ = standard deviation (if sample data is used, then σ = s).

Sample Problem
Equipment A has a mean time to wear out of 6,000 hours with a standard deviation of 1000 hours. What is the probability that the equipment will last beyond 5,000 hours?

Step 1 -- Calculate z
z = (5,000 - 6,000)/1,000 = -1.0

Step 2 -- Use the z-table


From the z-table, the area to the left (or right) of 0 is 0.5000. The area between 0 and 1 is 0.34134. Therefore the area to the left of -1 is 0.5000 - 0.34134 = 0.15866.

Step 3 -- Calculate the Probability


R(t) = 1 - F(t) = 1 - 0.15866 = 0.84134. The probability that the equipment will last beyond 5,000 hours is 84.134%.
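A minimal Python sketch of this wear-out reliability calculation. It uses the exact normal CDF via math.erf rather than the rounded z-table values, so the result may differ slightly in the last digits.

import math

def normal_cdf(x, mu, sigma):
    """Cumulative probability of the normal distribution up to x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = 6000.0, 1000.0   # mean wear-out time and standard deviation (hours)
t = 5000.0

z = (t - mu) / sigma
reliability = 1.0 - normal_cdf(t, mu, sigma)
print(z)            # -1.0
print(reliability)  # about 0.8413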

Exponential Distribution
The exponential distribution is used to predict the reliability of equipment in the constant failure rate as discussed.

Probability Density
The exponential probability density function is defined as:

f(t) = λ e^(-λt) = (1/θ) e^(-t/θ)

for t >= 0, where λ = failure rate and θ = 1/λ = Mean Time Between Failure (MTBF).

Reliability
The reliability for the exponential distribution is: R(t) = e^(-λt) = e^(-t/θ)

Hazard Rate
The Hazard Rate for the exponential distribution is constant: h(t) = λ

Properties
Properties of the exponential distribution:

The mean equals the standard deviation (both equal 1/λ = θ).
Approximately 63.21% of the area under the curve falls below the mean.

Failure Rate
The Failure Rate (λ) for exponential data can be calculated by: Failure Rate (λ) = Number of Items Failed / Total Test Time. The Mean Time Between Failure (MTBF) for exponential data can be calculated by: MTBF (θ) = Total Test Time / Number of Items Failed = 1/λ.

12 meters were tested for 30 hours each. Out of the 12 items, 3 failed at 20, 21 and 25 hours respectively. What is the failure rate of the meters? Failure Rate (λ) = 3/[(20 + 21 + 25) + (9 x 30)] = 0.008929/hour. What is the Mean Time Between Failure (MTBF)? MTBF (θ) = 1/λ = 1/0.008929 = 112 hours.

10 items were tested for 50 hours each. Out of the 10 items, 3 failed at 30, 45 and 29 hours respectively. What is the failure rate of the items? A. 0.0053/hour B. 0.0087/hour C. 0.0066/hour D. 0.0024/hour
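A small Python sketch of the failure-rate and MTBF calculation, using the 12-meter worked example above. The survivors are assumed to run for the full 30-hour test, as in that example.

def exponential_failure_rate(failure_times, n_tested, test_hours):
    """Failure rate = number failed / total accumulated test time."""
    survivors = n_tested - len(failure_times)
    total_time = sum(failure_times) + survivors * test_hours
    return len(failure_times) / total_time

lam = exponential_failure_rate([20, 21, 25], n_tested=12, test_hours=30)
mtbf = 1.0 / lam
print(lam)    # about 0.008929 failures per hour
print(mtbf)   # about 112 hours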

Example
Six instruments are tested for failure with the following times to failure: 86, 78, 65, 94, 59 and 63 hours. Find the reliability at 40 hours. Mean failure time: θ = (86 + 78 + 65 + 94 + 59 + 63)/6 = 74.17 hours. t = 40. R = e^(-t/θ) = e^(-40/74.17) = 0.5831.

The Binomial Distribution is used for success/failure testing where only one of two outcomes is possible on each trial. The outcomes are mutually exclusive.

Probability Density
The probability density function for the binomial distribution is:

P(x; n, p) = [n! / (x! (n - x)!)] p^x (1 - p)^(n - x)

Where: P(x; n, p) = the probability of exactly x successes in n independent trials, n = the sample size or number of trials, and p = the probability of success on a single trial. The mean of the binomial distribution is np. The variance of the binomial distribution is np(1 - p).

Example
The probability of successfully calibrating an instrument is 88%. What is the probability of successfully calibrating two instruments in the next three attempts?

p = 0.88, n = 3, x = 2

Solution: P(2; 3, 0.88) = 3 x 0.88² x 0.12 = 0.2788
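A minimal Python check of this binomial calculation:

from math import comb

def binomial_pmf(x, n, p):
    """Probability of exactly x successes in n independent trials."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(binomial_pmf(2, 3, 0.88))  # about 0.2788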

Binomial Distribution

For reliability, p = λ (the failure rate) and n = T (the test time), and the binomial equation becomes: R = (1 - λ)^T

Example
Six instruments are tested for failure with the following times to failure: 86, 78, 65, 94, 59 and 63 hours. Find the reliability at 40 hours. Mean failure time: θ = (86 + 78 + 65 + 94 + 59 + 63)/6 = 74.17 hours, so λ = 1/74.17 = 0.01348 per hour. T = 40. R = (1 - 0.01348)^40 = 0.581.

Compare this answer to the exponential distribution calculation (0.5831).

Weibull Distribution

Overview
The Weibull distribution is a general-purpose reliability distribution used to model material strength and times-to-failure of electronic and mechanical components, equipment or systems. In its most general case, the three-parameter Weibull probability density function (pdf) is defined by:

f(t) = (β/η) ((t - γ)/η)^(β - 1) exp( -((t - γ)/η)^β ), for t >= γ

with three parameters, namely β, η and γ, where β = shape parameter, η = scale parameter and γ = non-zero location parameter.

Weibull Distribution

Graph

BackNext

Weibull Distribution

Details

If the location parameter γ is assumed to be zero, the distribution becomes the 2-parameter Weibull:

f(t) = (β/η) (t/η)^(β - 1) exp( -(t/η)^β )

One additional form is the 1-parameter Weibull distribution, which assumes that the location parameter γ is zero and the shape parameter β is a known constant (β = constant = C), so that only the scale parameter η remains to be estimated.

Details
When β < 1, the Hazard Rate decreases with time and can be described as a decreasing failure rate, as found in the infant mortality phase. When β = 1, the Weibull distribution reduces to the exponential distribution, and the failure rate is constant. When β > 1, the Hazard Rate increases with time and can be described as an increasing failure rate, as found in the wear out phase. At β ≈ 3.44, the Weibull distribution resembles the normal distribution. For the two-parameter Weibull distribution, reliability is expressed as:

R(t) = exp( -(t/η)^β )

Example
An instrument has a Weibull distribution with β = 2, η = 4,000 hours and γ = 1,000 hours. What is the reliability at 3,000 hours?
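A minimal Python sketch of this three-parameter Weibull reliability calculation, using R(t) = exp(-((t - γ)/η)^β) for t >= γ:

import math

def weibull_reliability(t, beta, eta, gamma=0.0):
    """Reliability of the (two- or three-parameter) Weibull distribution at time t."""
    if t <= gamma:
        return 1.0  # no failures occur before the location parameter
    return math.exp(-(((t - gamma) / eta) ** beta))

# beta = 2, eta = 4000 h, gamma = 1000 h, evaluated at 3000 h
print(weibull_reliability(3000, beta=2, eta=4000, gamma=1000))  # about 0.7788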

The simple concept of system reliability is discussed so critical systems can be evaluated when design considerations are made.

Basic Concepts
Redundancy is defined as the existence of more than one means for achieving a stated level of performance; all paths must fail before the system will fail. Derating techniques can be applied to reduce the failure rates. System reliability can be increased by designing components with operating safety margins.

Example
A component is being designed with an average load strength of 30 000 kg. The expected load is 20 000 kg. What is the safety factor? The safety factor is 30 000 / 20 000 = 1.5 or 150% What is the margin of safety? The margin of safety is (30 000 - 20 000) / 20 000 = 0.5

Series Reliability
Series reliability is defined as: Rs = R1 x R2 x R3 x ... x Rn

Note that if one component in series reliability fails, the whole system fails. This is important to note in design consideration.

Parallel Reliability
Parallel reliability is defined as: Rp = 1 - (1 - R1)(1 - R2)...(1 - Rn). Note that if one component in the system fails, there is still redundancy in the design.

Examples:

Parallel automobile hydraulic systems for brakes
An aircraft designed to fly on one engine if all others fail
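A small Python sketch combining the series and parallel reliability formulas given above; the component reliabilities in the example calls are hypothetical.

from math import prod

def series_reliability(reliabilities):
    """All components must survive: Rs = R1 * R2 * ... * Rn."""
    return prod(reliabilities)

def parallel_reliability(reliabilities):
    """System fails only if every redundant path fails: Rp = 1 - (1-R1)(1-R2)...(1-Rn)."""
    return 1.0 - prod(1.0 - r for r in reliabilities)

print(series_reliability([0.99, 0.98, 0.95]))    # about 0.9216
print(parallel_reliability([0.90, 0.90]))        # 0.99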


Definitions
The International Vocabulary of Basic and General Terms in Metrology (VIM) defines Uncertainty of Measurement as: "A parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand." The Guide to the Expression of Uncertainty in Measurement (GUM) defines Standard Uncertainty of Measurement as: "The uncertainty of measurement expressed as the standard deviation." Measurement uncertainty is graphically expressed as:

Error Inherent in Measurement


When a measurement is made, there is an error associated with the measurement. This error is expressed as the uncertainty of measurement: Measurement = True Value ± Error (Measurement Uncertainty)

Why do we need to understand Measurement Uncertainty?


Because it is good practice.
It lets one estimate the error of the measurement process.
International standards require it:
o ISO 17025
o ISO 9001
o ANSI Z540-1

It is important for scientists and engineers to be aware of measurement uncertainty. Many engineering decisions are based on the measurements made. If the error associated with the measurement is larger than the required tolerance, wrong design decisions can be made, with implications for safety, performance and other factors.

Measurement Risk
Imagine using an instrument to measure a parameter. If the measurement uncertainty of the measurement is larger than the tolerance required, the risk of making an incorrect pass/fail decision is larger too. This is expressed graphically:

References
There are several publications available for reference for determining measurement uncertainty. The three main ones are:
1. NIST Technical Note 1297:1994 (free PDF download from the NIST web site)
2. ANSI/NCSL Z540-2:1997 (the Americanized version of the GUM)
3. ISO Guide to the Expression of Uncertainty in Measurement (GUM)

Calculations
Primary calculations encountered in determining measurement uncertainty are:

Basic statistics (mean, range, standard deviation, variance, etc.)
Other knowledge of statistical methods may be useful:
o Statistical Process Control (SPC)
o Analysis of Variance (ANOVA)
o Gage R&R
o Design of Experiments (DOE)

Steps for determining Measurement Uncertainty


1. Identify the uncertainties in the measurement process.
2. Classify the type of uncertainty (A or B).
3. Document in an uncertainty budget.
4. Quantify (calculate) individual uncertainties by various methods.
5. Combine the uncertainties (Root-Sum-Square (RSS) method).
6. Assign the appropriate k-factor multiplier to the combined uncertainty.
7. Document in an Uncertainty Report.

Details of each step are available from the Measurement Uncertainty topic menu. This is a brainstorming exercise. Depending on the measurement process, it can be done individually or in a group. The purpose is to identify the factors affecting the measurement process. At a minimum, the following should be considered:

Environment
Measuring equipment
o Equipment specifications
o Repeatability
o Reproducibility
o Equipment resolution
Measurement setup, method, procedure
Operator
Software
Calculation method in firmware
Other constraints

Assess uncertainty and assign uncertainty Type A or B.

Type A Evaluation Method

Type A is the method of evaluation of uncertainty of measurement using statistical analysis of a series of observations. Examples:

Standard deviation of a series of measurements
Other statistical evaluation methods (SPC, ANOVA, DOE)

A series of measurements are taken to determine the uncertainty of measurement as shown:

Type B Evaluation Method


Type B is the method of evaluation of uncertainty of measurement using means other than the statistical analysis of a series of observations. Examples:

History of the parameter
Other knowledge of the process parameter
Based on a specification

An example of Type B is the temperature specification of an oil bath stated by the manufacturer: 100.00 +/- 0.2 °C.

The type A and B uncertainty components are documented in the uncertainty budget with the associated data. In addition, the uncertainty components are assigned the type of distribution that they fall under. Four distributions are normally encountered when estimating uncertainty:

1. Normal (Gaussian)
2. Rectangular
3. Triangular
4. U-shaped (trough)

Let's take a brief look at each of these distributions.

Normal Distribution (Type A)


Normal distribution is one way to evaluate uncertainty contributors so that they can be quantified and budgeted for. This approach allows a manufacturer to take into account prior knowledge and manufacturer's specifications. Normal distribution helps one to understand the magnitude of different uncertainty factors and to understand what is important. The normal distribution is used when there is a better probability of finding values closer to the mean value than further away from it, assuming one is comfortable in estimating the width of the variation by estimating a certain number of standard deviations.

Rectangular Distribution (Type B)


Rectangular distribution is the most conservative distribution. The manufacturer has an idea of the variation limits, but little idea as to the distribution of uncertainty contributors between these limits. It is often used when information is derived from calibration certificates and manufacturer's specifications.

Triangular Distribution (Type B)


Triangular distribution is often used in evaluations of noise and vibration. It is appropriate when the manufacturer is more comfortable estimating the width of variation using "hard" limits, with values near the center more likely, rather than a certain number of standard deviations.

U-shaped Distribution (Type B)


The U-shaped distribution is associated with cyclic influences, such as temperature control cycles, which often yield uncertainty contributors that follow a sine wave pattern. The U-shaped distribution is the probability density function for a sine wave.

In order to combine uncertainties for normal (Gaussian) and non-normal distributions, correction factors must first be applied.

Correction factors (divisors) that apply when converting distribution limits to standard uncertainties: divide the half-width by √3 for a rectangular distribution, by √6 for a triangular distribution, and by √2 for a U-shaped distribution; a normal distribution quoted with a coverage factor k is divided by that k.

Rectangular Distribution Example


A manufacturer specifies that the Xyz Gage has a specification of +/- 0.001 units. The standard uncertainty for this rectangular distribution is: 0.001/√3 = 0.00058 units.

U-Shaped Distribution Example


The temperature of the oil bath stated by the manufacturer is 100.0 +/- 0.2 °C. The standard uncertainty for this U-shaped (trough) distribution is: 0.2/√2 = 0.14 °C.

Triangular Distribution Example


A series of measurements taken indicate that most of the measurements fall at the center with a few spreading equally (+/-) 0.5 units away from the mean. The standard uncertainty for this triangular distribution is: 0.5/√6 = 0.20 units.

Resolution Example
The resolution is not necessarily a distribution, but a correction factor is necessary if it is known how the least significant digit in an instrument is resolved. If it is not known how it is resolved, then it is treated as a rectangular distribution. A Digital Multimeter (DMM) has a resolution of 0.001 Volts on the volts scale. The standard uncertainty associated with the resolution of this DMM on the voltage scale is commonly taken as half the least significant digit treated as a rectangular distribution: (0.001/2)/√3 = 0.00029 V.

Root-Sum-Square (RSS) Method


It is preferable to combine all Type A uncertainties and all Type B uncertainties separately using the Root-Sum-Square (RSS) method, as shown:

uA = √(uA1² + uA2² + ... + uAn²) and uB = √(uB1² + uB2² + ... + uBn²)

The combined Type A and B uncertainties are then combined using the RSS method: uc = √(uA² + uB²)
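A minimal Python sketch of this budget-combination step. The component values are hypothetical, and the divisors follow the distribution correction factors described above.

import math

def standard_uncertainty(half_width, distribution):
    """Convert a +/- limit into a standard uncertainty using the usual divisors."""
    divisor = {"rectangular": math.sqrt(3),
               "triangular": math.sqrt(6),
               "u-shaped": math.sqrt(2)}[distribution]
    return half_width / divisor

# Hypothetical budget: one Type A component (standard deviation of repeated
# readings) and two Type B components taken from specifications.
u_type_a = [0.00040]
u_type_b = [standard_uncertainty(0.001, "rectangular"),
            standard_uncertainty(0.0005, "triangular")]

u_a = math.sqrt(sum(u * u for u in u_type_a))   # combined Type A (RSS)
u_b = math.sqrt(sum(u * u for u in u_type_b))   # combined Type B (RSS)
u_combined = math.sqrt(u_a**2 + u_b**2)         # combined standard uncertainty
expanded = 2.0 * u_combined                     # expanded uncertainty, k = 2
print(u_combined, expanded)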

Guide to Expression of Uncertainty in Measurement (GUM) defines Expanded Uncertainty as: "A quantity defining an interval about the result of a measurement that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand".

k-factor
The k-factor provides the confidence interval and is used as a multiplier to the combined uncertainty. The GUM usually specifies k = 2 when reporting expanded measurement uncertainty, which corresponds to a confidence level of approximately 95 % for a normal distribution.

This expanded uncertainty is usually reported in a calibration or test report and is a quantifiable number. The final step involves documenting the uncertainty in an Uncertainty Report.

Measurement Uncertainty and Traceability


Measurement uncertainty also is an integral part of measurement traceability. Recall that the definition of traceability is:

Traceability means that a measured result can be related to stated references, usually national or international standards, through an unbroken chain of comparisons, all having stated uncertainties.

Traceability
Traceability in a measurement process is through an unbroken chain of comparisons. This is illustrated by the traceability pyramid.

Therefore at the GENERAL CALIBRATION level, measurement uncertainties from BIPM through WORKING METROLOGY LABORATORIES are cumulatively combined and reported.

The 10:1 Model


In the past, it was usually stated that to calibrate an instrument parameter, all one needed was a standard 10 times more accurate than the instrument parameter. This was referred to as a 10:1 Test Accuracy Ratio (TAR). Imagine applying this rule at every level of the calibration traceability pyramid today.

It becomes very obvious that the 10:1 Test Accuracy Ratio (TAR) will be hard to achieve if this rule is unilaterally mandated. Even with advances in measurement technology and economies of scale, it is not economical to purchase a highly accurate and precise measurement device just to use it at the PROCESS MEASUREMENT level.

The 4:1 Model


When the 10:1 Test Accuracy Ratio became difficult to achieve, it was determined that a 4:1 TAR would be acceptable with a small measurement risk.

Even the 4:1 Test Accuracy Ratio is hard to achieve at times without significant measurement risk.

Test Uncertainty Ratios


However, if a good estimate of measurement uncertainty is made at all levels of the measurement hierarchy, and it is accumulated as stated in the definition of measurement traceability, then it is possible to calculate the risk associated with the measurement error. This risk depends on the application of the measurement. For example, the risk analysis is more critical if it is applied to the building of aircraft parts than to the making of paper clips.

In guard banding, the measurement uncertainty is taken into consideration against the tolerance of the parameter being measured and is deducted from the tolerance interval. The new acceptance limits are the Lower Tolerance + Expanded Uncertainty and the Upper Tolerance - Expanded Uncertainty.
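A minimal Python sketch of this guard-banding scheme, under the assumption just stated that the acceptance band is the tolerance band shrunk by the expanded uncertainty at each end; all numbers in the example call are hypothetical.

def guard_banded_limits(lower_tol, upper_tol, expanded_u):
    """Shrink the tolerance interval by the expanded uncertainty on both sides."""
    return lower_tol + expanded_u, upper_tol - expanded_u

def accept(measured, lower_tol, upper_tol, expanded_u):
    lo, hi = guard_banded_limits(lower_tol, upper_tol, expanded_u)
    return lo <= measured <= hi

# Hypothetical 10.00 +/- 0.05 tolerance with an expanded uncertainty of 0.01
print(guard_banded_limits(9.95, 10.05, 0.01))   # (9.96, 10.04)
print(accept(10.045, 9.95, 10.05, 0.01))        # False: inside tolerance but within the guard band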

Guard Banding Techniques


This reduces the risk of calling the measurement good while it is not (Type II Error - β) or calling the measurement bad while it indeed is good (Type I Error - α).

Guard Banding Techniques

Applying Metrology Concepts in Instrument Design


Why do we want to apply metrology concepts in instrument and product design? This question should be posed by all engineers and scientists and should be considered during all phases of engineering and design.

Applying Metrology Concepts in Instrument Design

Today, we live in a global economy. Design teams from various countries collaborate on projects. The common language binding these design efforts is measurement, from project time lines to specifications on drawings, actual measurements and verification/validation data. Design processes are worked on concurrently to ensure that the product works as specified when different teams merge their designs. Misinterpretation of a measurement can have disastrous and costly results. Instrument and product design consistency and quality rely on accurate and repeatable measurements and on making sound decisions based on those measurements.

Types of Errors

Ideally, confidence in calling a good measurement good and a bad measurement bad is desired. If a bad measurement is called good, there is a risk (β - Type II Error) associated with the measurement. Conversely, if a good measurement is called bad, there is a risk (α - Type I Error) associated with the measurement.

Type I Error

This error occurs when the null hypothesis is rejected when it is, in fact, true. The probability of making a Type I error is called α (alpha). α is also the level of significance for the hypothesis test.

Type II Error
This error occurs when the null hypothesis is not rejected when it should be rejected. The probability of making a Type II error is denoted by the symbol β (beta).

Design Failure
Designs can fail for a variety of reasons:

The design may not be capable. It may be weak, consume too much power, have inherent design errors or have omissions in the design.
The designed item might be over-stressed. The components in the design may not be specified properly to handle the stress applied to them.
Failure may be caused by variation. Individual components may be specified properly without taking into account their overall variation.

Design Failure (continued)


Designs can fail for several other reasons as well:

Failure can occur due to wear out. Components can get weaker with age and suffer a breakdown.
Failures can be time dependent, such as a battery running down or a high temperature build-up.
Failure can occur when a well-designed system operates incorrectly under certain conditions.
Failure can occur through errors in design, incorrect specifications, software bugs, faulty assembly, or incorrect maintenance or use.
Other sources of failure may be external, such as electromagnetic interference or wrong operating instructions.


Selecting Instruments
Measurements are generally made with either an analog device or a digital device.

Analog Measuring Instruments


Take care to avoid parallax errors when using an analog instrument. Meter manufacturers sometimes incorporate a mirrored surface on the meter face plate to help reduce errors due to parallax. To understand parallax errors, consider the meter reading "seen" by the technician directly in front of the meter and by a person to the right of the technician. What each person sees can disagree by several units. The person on the right takes the reading from an angle, introducing parallax, and therefore makes a less accurate reading.

Selecting Instruments

Analog Measuring Instruments


On liquid columns such as thermometers, burettes and pipettes, use the following approach to achieve consistent measurements.

Analog Measuring Instruments


When reading between scales on an analog instrument, the best you can estimate is half the division.

Therefore the actual 1.90 measurement can only be approximated as 2.0 and the actual 3.6 can only be approximated as 3.7. If a better resolution in measurement is desired, then the engineer should use an instrument which can resolve at that desired requirement.

Digital Measuring Instruments

In digital measuring instruments the measurement is converted into decimal digits so it is easy to read. While it is easy to read, many take the measured reading for granted. It is easy for manufacturers of digital meters to add more decimal places (resolution) to the display, implying more accuracy. The extra resolution may support other ranges in the meter. However, the accuracy of an instrument has nothing to do with its resolution. The user should consult the manufacturer's specifications to determine the accuracy claims. The parallax error of analog instruments is eliminated from the digital instrument as all users will see the same numbers.


Device Selection for Dimensional Measurements


When dimensions of a workpiece are to be measured, the accuracy of the measurement depends on the measuring instrument selected. For example, if the outer diameter (100 mm) of a cast iron product is to be measured, a vernier caliper may be sufficient. However, if the diameter of a tap gage with the same diameter is to be measured, even an outside micrometer is not accurate enough; an electric or pneumatic micrometer, which is more accurate, must be used with a gage block. It is recommended that the ratio of the tolerance of a workpiece to the accuracy of a measuring instrument be 10:1 in the ideal case and at least 5:1 in the worst case. Otherwise, tolerance is mixed with measurement error, and a good component may be diagnosed as faulty and vice versa.

Device Selection for Temperature Measurements


When selecting a device to measure temperature, be aware that different types of devices provide varying levels of measurement accuracy. The chart below gives information on the accuracy, stability, sensitivity, ranges, output and features of different types of temperature measuring devices.

Tip 1
Ensure that measuring equipment is reliable and accurate through the calibration process. Check the calibration sticker for the instrument's current status.

Tip 2
Make the measurement with an instrument that can resolve to the smallest unit. Do not confuse the resolution of an instrument with the accuracy of the equipment. For the accuracy claim, the engineer has to refer to the equipment specifications. It is wrong to assume that the smaller the unit, or fraction of a unit, on the measuring device, the more accurate the device can measure.

Tip 2 (continued)

If you want to measure to 2 decimal places, use an instrument that will resolve to at least 3 or more decimal places. Using a 2 decimal place instrument will result in approximately a 25 % decrease in the precision (repeatability) of the data collected. The example at the right illustrates this phenomenon. Note that any value that is calculated from the measurements is also carried to one more decimal place. It is always good practice to round at the very end of the calculation rather than at the beginning or in the middle.

Tip 3
Know your instruments! Use proper techniques when using the measuring instrument and reading the value measured. On analog instruments, avoid parallax errors by always taking readings by looking straight down (or ahead) at the measuring device. Looking at the measuring device from a left or right angle will provide an incorrect value.

Tip 4
Any measurement made with a measuring device is approximate. If a measurement of an object is made at two different times, the two measurements may not be the same. Repeat the same measurement several times to get a good average. Avoid the "one measurement bliss" mistake: only after more than one measurement is taken can one judge whether the first measurement was representative.

Tip 4 (continued)
The difference between repeated measurements is called variation in the measurements. This variation is expressed mathematically as the uncertainty in the measurement.

The difference between the true value and the measured value is defined as error. The "true value" can never really be known, but it has a high probability of lying somewhere within the bounds of the measurement uncertainty associated with that measurement.

Tip 5
Measure under controlled conditions. If the parameter that is measured can change size depending upon climatic conditions (swell or shrink), be sure to measure it under the same conditions each time. This may apply to measuring instruments as well. Follow the environmental conditions under which the equipment is designed to work.

Tip 6
Utilize four-wire measurement. If you want to measure the resistance of a component that is located a significant distance away from the ohmmeter, you need to take into account the resistance of the test leads, as the ohmmeter measures all the resistance in the circuit, including that of the wire. In another scenario, measuring small resistances would be difficult if the test lead resistance was significantly larger than that of the artifact being measured.

A way to measure small resistances, or resistance at the end of long leads, involves the use of both an ammeter and a voltmeter. Ohm's Law says that resistance is equal to voltage divided by current (R = V/I). The resistance of the Device Under Test (DUT) can be determined if the current through it and the voltage across it are measured. In the 2-wire circuit shown at the right, the ohmmeter measures both the test leads and the DUT resistance R. In the 4-wire measurement circuit at the right, only the resistance of the DUT is measured; the resistance of the two pairs of wires is essentially nullified. Many digital multimeters are equipped to perform 4-wire measurement. In metrology applications, where accuracy is of paramount importance, highly accurate and precise "standard" resistors are also equipped with four terminals: two for carrying the measured current, and two for conveying the resistor's voltage drop to the voltmeter. This way, the voltmeter only measures the voltage dropped across the precision resistance itself, without any stray voltages across current-carrying wires or wire-to-terminal connection resistances.
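A small Python sketch contrasting the 2-wire and 4-wire results for a hypothetical low-value resistor; the lead resistance and DUT value are invented for illustration.

# Hypothetical values: a 0.5 ohm DUT measured through leads of 0.2 ohm each.
r_dut = 0.5
r_lead = 0.2
current = 0.1  # amperes forced through the circuit

# 2-wire: the meter sees the voltage across the DUT plus both leads.
v_2wire = current * (r_dut + 2 * r_lead)
print(v_2wire / current)   # 0.9 ohm -- badly overstated

# 4-wire: separate sense leads measure the voltage across the DUT only
# (negligible current flows in the sense leads).
v_4wire = current * r_dut
print(v_4wire / current)   # 0.5 ohm -- the true DUT resistance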

Quality: Validation and Verification Issues

Verification is an investigation that shows that specified requirements are fulfilled. It is a process by which a hardware or software component is shown to meet its specification. In verification, the specifications (requirements) themselves are not challenged; the goal is to confirm that a product characteristic meets its intended use. Process capability studies can be used for validation tasks, which include:

Tests to determine whether an implemented system fulfills its requirements.
The checking of data for correctness or for compliance with applicable standards, rules, and conventions.
The process by which an assembly of hardware components or software modules is shown to function in a total systems environment.
Performing operating and support hazard analyses of each test, and reviewing all test plans and procedures.
Evaluating the interfaces between the test system configuration and personnel, support equipment, special test equipment, test facilities, and the test environment during assembly, checkout, operation, foreseeable emergencies, disassembly and/or tear-down of the test configuration.
Making sure hazards identified by analyses and tests are eliminated or the associated risk is minimized.
Identifying the need for special tests to demonstrate or evaluate the safety of test functions.

What is a Specification?
A specification is a set of requirements. Normally, a specification is the specific set of (high level) requirements agreed to by the sponsor/user and the manufacturer/producer of a system. Specifications may be customer driven, market driven or regulatory agency driven. The specification may also contain both the system requirements and the test requirements by which it is determined that the system requirements have been met, known as the acceptance test requirement(s), and a mapping between them.

Man-Made Specifications
Specifications may be man-made. These are specified by the customer without any scientific basis and can be very generic.

"Put a man on the moon before the decade is over," is an example of a man-made specification. It is the duty of the scientist or engineer to design and meet the specification.

Specification limits reflect the functional requirements of the product/characteristic. Examples of specifications encountered in metrology:

Percent of reading
Percent of full scale
Percent of reading + X.x units (X.x units are commonly known as a floor specification)
Parts per million (ppm)
Something expressed in terms of resolution

Specifications should be realistic based on actual performance of the measurement.

Test Uncertainty Ratio


The other use of specifications is in the analysis of the Test Uncertainty Ratio (TUR) and in assessing risk when the TUR is less than 4:1. Depending on the use of the equipment, the risk associated with a TUR of less than 4:1 may or may not be acceptable. For example, it may be acceptable for a tire pressure gage used to check a bicycle tire but not for checking an aircraft's tire pressure. The Test Uncertainty Ratio (TUR) is defined as: Unit Under Test (UUT) Specification / Expanded Uncertainty (k=2) of the Standard.
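A minimal Python sketch of this ratio; the specification and uncertainty values in the example call are hypothetical.

def test_uncertainty_ratio(uut_spec, expanded_uncertainty_k2):
    """TUR = UUT specification / expanded uncertainty (k = 2) of the standard."""
    return uut_spec / expanded_uncertainty_k2

tur = test_uncertainty_ratio(uut_spec=0.05, expanded_uncertainty_k2=0.008)
print(tur)            # 6.25
print(tur >= 4.0)     # True: meets the common 4:1 guideline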

Tolerance Intervals
Tolerance intervals, also known as tolerance limits, are related to engineering tolerance limits. An engineering tolerance limit defines the maximum and minimum values for a product to work correctly. Engineering tolerance limits can be specified via geometric dimensioning and tolerancing (GD&T) techniques. Tolerance intervals are fundamental to control charts of various types. Statistical tolerance intervals (or limits) are derived from process data. These intervals quantify the variation evident in the process. Tolerance intervals define the capability of the process by specifying minimum and maximum values, based on a sample of data. Tolerance intervals are intervals that cover a proportion, p, of the overall population with a given confidence level, (1 - α), given that the data follow a normal distribution. Tolerance intervals differ in an important way from confidence intervals (CI). Tolerance intervals are constructed to contain a specified proportion of the population of individual observations with a given confidence level. A confidence interval is an interval of possible values for a population parameter, such as the population mean μ, with a specified level of confidence.

Tolerance Intervals
Example: An engineer may require an interval that covers 99% of the measured weights for filled syringes with 95% confidence. The endpoints of a tolerance interval are called tolerance limits. The tolerance limits may be compared with the specification limits to judge the process behavior (e.g., are most of the values falling inside the specification limits?). OR: If one measured a sample of 50 fill weights of a medication ampoule and got a mean of 50.0 grams and a standard deviation of 0.005 grams, then one can be 99% certain that 95% of the population is contained within the interval of 49.9871 to 50.0129 grams.
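A minimal Python sketch of the two-sided tolerance interval (mean ± k2·s, as described in the calculation section below) for this fill-weight example. The k2 factor used here, about 2.58 for 95 % coverage with 99 % confidence at n = 50, is an assumed table value; exact tolerance factors come from published tables or specialist statistical libraries, not from this course.

def tolerance_interval(mean, s, k2):
    """Two-sided statistical tolerance interval: mean +/- k2 * s."""
    return mean - k2 * s, mean + k2 * s

# Assumed tolerance factor k2 ~ 2.58 for n = 50, 95 % coverage, 99 % confidence.
low, high = tolerance_interval(mean=50.0, s=0.005, k2=2.58)
print(round(low, 4), round(high, 4))   # approximately 49.9871 50.0129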

The Chi-Square Test


There are several kinds of chi-square tests but the most common is the Pearson chi-square test which allows us to test the independence of two categorical variables. All chi-square tests are based upon a chi-square distribution, similar to the way a t-test is based upon a t distribution or an F-test is based upon an F distribution.

Example
Suppose we have a hypothesis that the pass/fail rate in a particular test is different for Instrument A and Instrument B. We take a random sample of 100 tests and record each instrument's identity and its pass/fail result as categorical variables. The data for these 100 tests can be displayed in a contingency table, also known as a cross-classification table. A chi-square test can be used to test the null hypothesis (i.e., that the pass/fail rate is not different for Instrument A and Instrument B).

Chi-Square Statistic
Just as in a t-test or F-test, there is a particular formula for calculating the chi-square test statistic. This statistic is then compared to a chi-square distribution with known degrees of freedom in order to arrive at the p-value. We use the p-value to decide whether or not we can reject the null hypothesis. If the p-value is less than alpha, which is typically set at 0.05, then we reject the null hypothesis, and in this case we say that our data indicate that the likelihood of passing the test is related to the specific instrument, A or B.
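A minimal Python sketch of a Pearson chi-square test for a 2 x 2 pass/fail contingency table. The counts are hypothetical; for one degree of freedom the p-value can be computed from the standard normal, since a chi-square variable with 1 degree of freedom is the square of a standard normal variable.

import math

# Hypothetical 2 x 2 contingency table: rows = instruments, columns = (pass, fail).
observed = [[45, 5],    # Instrument A
            [38, 12]]   # Instrument B

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

# For 1 degree of freedom: P(chi-square > x) = erfc(sqrt(x / 2)).
p_value = math.erfc(math.sqrt(chi2 / 2.0))
print(chi2, p_value)
print("reject null hypothesis" if p_value < 0.05 else "fail to reject null hypothesis")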

Tolerance Interval Calculation


The tolerance interval is calculated by: x-bar ± k2·s, where x-bar is the sample average, k2 is the tolerance factor (which depends on the sample size, the proportion of the population to be covered, and the confidence level), and s is the sample standard deviation.

Before determining tolerances, it is essential to make sure that the process is under control. Several techniques are available to determine this. A control chart is a graphical technique for determining whether a process is in a state of statistical control. Statistical control means that the extent of variation of the output of the process does not exceed that which is expected on the basis of the natural statistical variability of the process. Several types of control charts are used based on the nature of the process and on the intended use of the data. Every process has some inherent variability due to random factors over which there is no control and which cannot be eliminated economically. In a manufacturing/compounding process, random factors may include the distribution of various compounds, and variations in operator and equipment performance from one cycle to the next. The inherent variability of the process is the aggregate result of many individual causes, each having a small impact.

Variation

There is variation in everything. No matter how hard we try to make things identical, they always turn out a little different. For example, a drug company may advertise a certain weight on its medication container, but any specific container may not weigh the stated weight. If we were to select 100 containers at random and weigh them, we would find that each has a slightly different weight. If we were to construct a histogram of the weights of the containers, we would find that most of them are very close to the stated weight, but some are a little larger and some a little smaller. This pattern of weights is called a frequency distribution. By studying the shape of the distribution, we can gain a lot of useful information about the process.

Variation (continued)
As mentioned earlier in the section on Basic Statistics, the Gaussian distribution occurs frequently in nature. Although patterns similar to the Gaussian distribution are very common, they are by no means the only patterns that you will encounter. The process variation can form some pretty strange shapes. Frequency distributions are often characterized by their location and spread.

Causes of Variation
The causes of data variation can be grouped into two categories: random and systematic. Random causes are those that we cannot control unless we change the process itself. They are random in nature and an inherent part of the process. They are sometimes referred to as natural or system causes. Systematic causes, on the other hand, are those that can be linked to an assignable, correctable issue. Variations due to assignable causes (sometimes called special causes) tend to distort the usual distribution curve and prevent the process from operating at its best.

Control Charts
The control chart technique is applicable to processes that produce a stream of discrete output units. Control charts are designed to detect excessive variability due to specific assignable causes that can be corrected. Assignable causes result in relatively large variations, and they usually can be identified and economically removed. Examples of assignable causes of variations that may occur in the example above include a substandard batch of raw material, a machine malfunction, and an untrained or new operator. A control chart is a two-dimensional plot of the process over time. The horizontal dimension represents time, with samples displayed in chronological order, such that the earliest sample taken appears on the left and each newly acquired sample is plotted to the right. The vertical dimension represents the value of the sample statistic, which might be the sample mean, range, or standard deviation in the case of measurement by variables, or in the case of measurement by attributes, the number of nonconforming units, the fraction nonconforming, the number of nonconformities, or the average number of nonconformities per unit.

Control Charts (continued)


Typically a control chart includes three parallel horizontal lines, a center line and two control limits. The center line (CL) intersects the vertical dimension at a value that represents the level of the process under stable conditions (natural variability only). The process level might be based on a given standard or, if no standard is available, on the current level of the process calculated as the average of an initial set of samples. The two lines above and below the center-line are called the upper control limit (UCL) and lower control limit (LCL) respectively, and they both denote the normal range of variation for the sample statistic. The control limits intersect the vertical axis such that if only the natural variability of the process is present, then the probability of a sample point falling outside the control limits and causing a false alarm is very small. Typically, control limits are located at three standard deviations from the center line on both sides. This results in a probability of a false alarm being equal to 0.0027.
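The 0.0027 figure follows directly from the normal distribution, as this short check (assuming SciPy is available) shows:

    # Probability that a point from a stable, normally distributed process
    # falls outside three-sigma control limits -- the false-alarm rate above.
    from scipy.stats import norm

    p_false_alarm = 2 * norm.sf(3)   # both tails beyond +/- 3 sigma
    print(f"{p_false_alarm:.4f}")    # ~0.0027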

Examples of Control Charts

Principles of Operation
The principle of operation of control charts is simple and consists of five general steps:
1. Samples are drawn from the process output at regular intervals.
2. A statistic is calculated from the observed values of the units in the sample; a statistic is a mathematical function computed on the basis of the values of the observations in the sample.
3. The value of the statistic is charted over time; any points falling outside the control limits, or any other non-random pattern of points, indicate that there has been a change in the process, either in its setting or in its variability.
4. If such a change is detected, the process is stopped and an investigation is conducted to determine the causes of the change.
5. Once the causes of the change have been ascertained and any required corrective action has been taken, the process is resumed.

Benefit of Using Control Charts


The main benefit of control charts is to provide a visual means to identify conditions where the process level or variation has changed due to an assignable cause and consequently is no longer in a state of statistical control. The visual patterns that indicate either the out-of-control state or some other condition requiring attention include outliers, runs of points, low variability, trends, cycles, and mixtures.

Formulas and the table of constants used to calculate control limits for charts are shown below:

Formulas for the X-bar Control Chart


Upper Control Limit: UCL = X̿ + A2·R̄

Lower Control Limit: LCL = X̿ − A2·R̄

Center Line: CL = X̿

where X̿ is the grand average of the subgroup means, R̄ is the average subgroup range, and A2 is a constant that depends on the subgroup size (see the table of constants below).


Formulas for the R Control Chart


Upper Control Limit: UCL = D4·R̄

Lower Control Limit: LCL = D3·R̄

Center Line: CL = R̄

where R̄ is the average subgroup range, and D3 and D4 are constants that depend on the subgroup size (see the table of constants below).

Assigning Appropriate Tolerances: Control Charts

Table of Constants Used to Calculate Control Limits for Charts
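As a hedged sketch of how the formulas and constants are applied, the fragment below computes X-bar and R chart limits for subgroups of size 5, using the commonly published constants A2 = 0.577, D3 = 0, D4 = 2.114; the subgroup data are invented for illustration:

    # X-bar and R control limits for subgroups of size n = 5, using the
    # standard published control-chart constants A2, D3, D4.
    import numpy as np

    A2, D3, D4 = 0.577, 0.0, 2.114   # constants for subgroup size n = 5

    subgroups = np.array([
        [10.01, 10.03,  9.98, 10.00, 10.02],
        [ 9.99, 10.02, 10.01,  9.97, 10.00],
        [10.02, 10.00, 10.03, 10.01,  9.99],
    ])

    xbar = subgroups.mean(axis=1)                       # subgroup means
    r = subgroups.max(axis=1) - subgroups.min(axis=1)   # subgroup ranges

    xbarbar, rbar = xbar.mean(), r.mean()

    ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
    ucl_r, lcl_r = D4 * rbar, D3 * rbar

    print(f"X-bar chart: CL = {xbarbar:.4f}, UCL = {ucl_x:.4f}, LCL = {lcl_x:.4f}")
    print(f"R chart:     CL = {rbar:.4f},  UCL = {ucl_r:.4f},  LCL = {lcl_r:.4f}")

    # Any subgroup mean outside (LCL, UCL), or range above the R chart UCL,
    # would signal an assignable cause worth investigating (step 4 above).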

Process Capability
It is often necessary to compare the process variation with the engineering or specification tolerances to judge the suitability of the process. Process capability analysis addresses this issue. Other capability applications are:

Can be used as a basis for setting up a variables control chart. Evaluation of new equipment. Reviewing tolerances based on the inherent variability of a process. Assigning equipment to product (more capable equipment to tougher jobs). Routine process performance audits. The effects of adjustments during processing.

Process Capability studies should not be conducted unless the process is running under statistical control.

If a process with more variability shifts from the nominal value, there is a chance that some of its output may fall outside the specification limits.

If the process has less variability, there is a better chance that it will still be within the specification limits.

Process capability studies enable engineers and scientists to judge this process behavior and assess the risk.

A Better Measure
A better measure, CPK, takes into account the location of the distribution:

CPK = min[ (USL − X̄) / (3s), (X̄ − LSL) / (3s) ]

where USL and LSL are the upper and lower specification limits, X̄ is the process mean, and s is the process standard deviation. The lower of the two values obtained is used.

Guidelines

CP > 1.33: capable process
CP = 1.00 to 1.33: capable with tight control
CP < 1.00: incapable

Assessing Risk
In terms of assessing risk, the CP numbers can be converted to parts per million (ppm) as shown in the table:
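The table itself is not reproduced here, but as a sketch (assuming a centered, normally distributed process), the CP-to-ppm conversion can be computed as follows:

    # Converting CP to an expected nonconforming rate in parts per million
    # (ppm), assuming a normally distributed, centered process. The CP values
    # chosen here are illustrative; the course table may list different ones.
    from scipy.stats import norm

    for cp in (0.67, 1.00, 1.33, 1.67, 2.00):
        ppm = 2 * norm.sf(3 * cp) * 1e6   # both tails beyond +/- 3*CP sigma
        print(f"CP = {cp:.2f} -> {ppm:,.2f} ppm")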

Exercise
Use the Excel spreadsheet calculator (TUR_Risk) to complete the following exercise: Given:

A reference standard with a nominal value of 10.00 units has a tolerance of +/- 0.3 units. A recent verification of an instrument using this standard, with several repeat measurements, yielded the following results:
o Mean: 10.02 units
o Standard Deviation: 0.015 units

Calculate: CP, CPK, z-value

Questions:


Is the instrument capable? What is the probability of making bad measurements?
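A minimal sketch of the underlying arithmetic, assuming the +/- 0.3 unit tolerance is treated as the specification limits; it parallels, but is not a substitute for, the TUR_Risk spreadsheet:

    # CP, CPK, and z-value for the exercise above, plus the probability of an
    # out-of-tolerance measurement under a normal-distribution assumption.
    from scipy.stats import norm

    nominal, tol = 10.00, 0.3
    usl, lsl = nominal + tol, nominal - tol
    mean, s = 10.02, 0.015

    cp = (usl - lsl) / (6 * s)
    cpk = min((usl - mean) / (3 * s), (mean - lsl) / (3 * s))
    z = min(usl - mean, mean - lsl) / s      # sigmas to the nearest limit

    p_bad = norm.sf((usl - mean) / s) + norm.cdf((lsl - mean) / s)
    print(f"CP = {cp:.2f}, CPK = {cpk:.2f}, z = {z:.2f}")
    print(f"Probability of an out-of-tolerance measurement ~ {p_bad:.2e}")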

Process Capability
Process capability analysis can also be used to assess the risk associated with a given TUR (e.g., 3.5:1), as illustrated below:

Calibration Aspects
Many organizations contract out the calibration services for their Inspection, Measurement and Test Equipment (IMTE). Some organizations have in-house laboratories (captive laboratories) that calibrate their own equipment. The requirement for accredited calibration services has been driven by both the internal and external customers of laboratories. Accreditation for laboratories is granted against the ISO/IEC 17025 standard: General requirements for the competence of testing and calibration laboratories. Not all accredited laboratories are the same. Many countries in the world have one accrediting body that is sanctioned by their government. In other countries, there is more than one accrediting body, either sanctioned by the government or operating under the free market system.

Calibration Aspects
It is important for a consumer of calibration and test services to select the laboratory that best meets their needs. An accredited laboratory is assessed by the accrediting body on the parameters for test and/or calibration. These parameters are listed in the laboratory's published scope of accreditation. If a particular parameter is not listed on the scope of accreditation, then it is not considered an accredited calibration or test.

When selecting a laboratory for test and calibration work, it is important to study the laboratory's scope of accreditation. The scope of accreditation may be viewed on the accrediting body's web site, which provides for an independent review of the laboratory's claims.

Example
An example of a scope of accreditation is shown, defining a laboratory's capability in terms of parameter, range, and best measurement capability (uncertainty):


Important Note
It is important to note that the best measurement capability a laboratory claims under its scope of accreditation is not necessarily what the customer will get in their calibration. Calibration requirements, including the requirements for measurement uncertainty, have to be specified in the purchase order.


Mutual Recognition Agreements (MRA)


For this reason, various umbrella accrediting organizations have Mutual Recognition Agreements (MRAs) so that a laboratory's calibrations are recognized even when the laboratory is accredited by a different accrediting body. The accrediting bodies undergo a very thorough peer assessment by the umbrella accrediting organizations. This process ensures:

Consistency of the accreditation process across accrediting bodies. Elimination of the need for a laboratory to carry multiple accreditations. Elimination of artificial trade barriers.

Mutual Recognition Agreements (MRA)

The Mutual Recognition Agreement (MRA) hierarchy is shown in the following figure.

Calibration Intervals
Just because equipment sits on the shelf after calibration does not mean that it is still good when next used. One needs to perform intermediate checks, carry out statistical analyses for stability and drift, and determine appropriate calibration intervals. For a review of calibration intervals, see the Metrology Concepts portion of this course.

Various considerations should be given when assessing the design of a product. The following list provides ideas for further developing the assessment checklist for equipment/product design:

Is lighting adjustable for comfort? Are glare and reflections minimal? Do they cause any discomfort?
Is ventilation adequate? Are there any fumes from equipment or activities?
Is ozone at low levels? (Photocopiers and laser printers produce ozone, and many now state what standards they meet. If in doubt, they should be located in a separate, well ventilated room.)
Is the equipment thermally stable?
Are there adequate fire prevention methods?
Does the specified level of humidity ensure correct operation?
Is there any intrusive noise (from the equipment)?
Are all serviceable areas accessible?
Are there any wires or tripping hazards due to cords?
Are electrical switches and sockets correctly rated?

Ergonomics checklist:
Does the software display information in a suitable format and at a suitable speed?
Does the software provide sensible error and help messages, and help avoid the consequences of errors (e.g., warning you before you delete something or do something to damage the instrument)?
Are there safety interlocks? Is the software error-proofed (i.e., it does not inadvertently bypass safety interlocks)?
Safety/hazard audit using criteria defined in Safety Characteristics.
