
Analytical chemistry

Analytical chemistry is the study of the separation, identification, and quantification of the chemical components of natural and artificial materials. Qualitative analysis gives an indication of the identity of the chemical species in the sample, and quantitative analysis determines the amount of one or more of these components. The separation of components is often performed prior to analysis.

Analytical methods can be separated into classical and instrumental. Classical methods
(also known as wet chemistry methods) use separations such
as precipitation, extraction, and distillation and qualitative analysis by color, odor, or
melting point. Quantitative analysis is achieved by measurement of weight or volume.
Instrumental methods use an apparatus to measure physical quantities of the analyte such as light absorption, fluorescence, or conductivity. The separation of materials is accomplished using chromatography or electrophoresis methods.

Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools to provide better
chemical information. Analytical chemistry has applications
in forensics, bioanalysis, clinical analysis, environmental analysis, and materials
analysis.

History

Gustav Kirchhoff (left) and Robert Bunsen (right)


Analytical chemistry has been important since the early days of chemistry, providing
methods for determining which elements and chemicals are present in the world around
us. Significant early contributions to analytical chemistry include the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups.

The first instrumental analysis was flame emissive spectrometry, developed by Robert Bunsen and Gustav Kirchhoff, who discovered rubidium (Rb) and caesium (Cs) in 1860.

Most of the major developments in analytical chemistry took place after 1900. During this period instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century.

The separation sciences followed a similar timeline of development and also became increasingly transformed into high-performance instruments. In the 1970s many of these
techniques began to be used together to achieve a complete characterization of
samples.

From approximately the 1970s to the present day, analytical chemistry has progressively become more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules.
Lasers have been increasingly used in chemistry as probes and even to start and
influence a wide variety of reactions. The late 20th century also saw an expansion of the
application of analytical chemistry from somewhat academic chemical questions
to forensic, environmental, industrial and medical questions, such as in histology.

Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to focus either on new
applications and discoveries or on new methods of analysis. The discovery of a
chemical present in blood that increases the risk of cancer would be a discovery that an
analytical chemist might be involved in. An effort to develop a new method might involve
the use of a tunable laser to increase the specificity and sensitivity of a spectrometric
method. Many methods, once developed, are kept purposely static so that data can be
compared over long periods of time. This is particularly true in industrial quality
assurance (QA), forensic and environmental applications. Analytical chemistry plays an
increasingly important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient is critical.
Classical methods

The presence of copper in this qualitative analysis is indicated by the bluish-green color of the flame.

Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments derive from traditional techniques, many of which are still used today. These techniques
also tend to form the backbone of most undergraduate analytical chemistry educational
labs.
Qualitative analysis
A qualitative analysis determines the presence or absence of a particular compound,
but not the mass or concentration. That is, it is not concerned with quantity.
Chemical tests
There are numerous qualitative chemical tests, for example, the acid test for gold and
the Kastle-Meyer test for the presence of blood.
Flame test
Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain, usually aqueous, ions or elements by performing a series of reactions that eliminate ranges of possibilities and then confirm suspected ions with a specific test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation these tests are rarely used, but they can be useful for educational purposes and in field work or other situations where access to state-of-the-art instruments is not available or expedient.
Gravimetric analysis
Gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water, such that the difference in weight is due to the loss of water.
Volumetric analysis
Titration involves the addition of a reactant to a solution being analyzed until some
equivalence point is reached. Often the amount of material in the solution being
analyzed may be determined. Most familiar to those who have taken college chemistry is the acid-base titration involving a color-changing indicator. There are many other types of titrations, for example potentiometric titrations. These titrations may use different types of indicators to reach some equivalence point.
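A minimal sketch of the endpoint arithmetic for a simple 1:1 acid-base titration (all values invented):

```python
# Hypothetical acid-base titration: standardized NaOH titrated into HCl.
titrant_molarity = 0.1000  # mol/L NaOH
titrant_volume = 0.02534   # L delivered to reach the indicator endpoint
analyte_volume = 0.02500   # L of HCl solution analyzed

moles_base = titrant_molarity * titrant_volume
analyte_molarity = moles_base / analyte_volume  # 1:1 mole ratio for HCl + NaOH

print(f"[HCl] = {analyte_molarity:.4f} mol/L")
```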

Instrumental methods

Block diagram of an analytical instrument showing the stimulus and measurement of response

Spectroscopy
Spectroscopy measures the interaction of molecules with electromagnetic radiation.
Spectroscopy consists of many different applications such as atomic absorption
spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, x-ray
fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual
polarisation interferometry, nuclear magnetic resonance spectroscopy, photoemission
spectroscopy, Mössbauer spectroscopy and so on.
Mass spectrometry

An accelerator mass spectrometer used for radiocarbon dating and other analysis.

Mass spectrometry measures the mass-to-charge ratio of molecules using electric and magnetic fields. There are several ionization methods: electron impact, chemical ionization, electrospray, fast atom bombardment, matrix-assisted laser desorption ionization, and others. Also, mass spectrometry is categorized by the type of mass analyzer: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on.
Crystallography
Crystallography is a technique that characterizes the chemical structure of materials at the atomic level by analyzing the diffraction patterns of electromagnetic radiation, usually x-rays, that has been diffracted by atoms in the material. From the raw data the relative placement of atoms in space may be determined.
Electrochemical analysis
Electroanalytical methods measure the potential (volts) and/or current (amps) in
an electrochemical cell containing the analyte. These methods can be categorized
according to which aspects of the cell are controlled and which are measured. The three main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential).
Thermal analysis
Calorimetry and thermogravimetric analysis measure the interaction of a material
and heat.
Separation

Separation of black ink on a thin layer chromatography plate.

Separation processes are used to decrease the complexity of material mixtures. Chromatography and electrophoresis are representative of this field.
Hybrid techniques
Combinations of the above techniques produce a "hybrid" or "hyphenated" technique.[9][10][11][12][13] Several examples are in popular use today and new hybrid techniques are under development. Examples include gas chromatography-mass spectrometry, gas chromatography-infrared spectroscopy, liquid chromatography-mass spectrometry, liquid chromatography-NMR spectroscopy, liquid chromatography-infrared spectroscopy, and capillary electrophoresis-mass spectrometry.

Hyphenated separation techniques refer to a combination of two (or more) techniques to detect and separate chemicals from solutions. Most often the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of a hyphen, especially if the name of one of the methods contains a hyphen itself.
Microscopy
Fluorescence microscope image of two mouse cell nuclei in prophase (scale bar is 5 µm).

The visualization of single molecules, single cells, biological tissues, and nanomaterials is an important and attractive approach in analytical science. Hybridization with other traditional analytical tools is also revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. Recently, this field has progressed rapidly because of the rapid development of the computer and camera industries.
Lab-on-a-chip

A glass microreactor

Lab-on-a-chip devices integrate (multiple) laboratory functions on a single chip only millimeters to a few square centimeters in size and are capable of handling extremely small fluid volumes, down to less than picoliters.

Standards
Standard curve

A calibration curve plot showing limit of detection (LOD), limit of quantification (LOQ), dynamic range, and limit of linearity (LOL).

A general method for analysis of concentration involves the creation of a calibration curve. This allows for determination of the amount of a chemical in a material by comparing the results of an unknown sample to those of a series of known standards. If the concentration of an element or compound in a sample is too high for the detection range of the technique, it can simply be diluted in a pure solvent. If the amount in the sample is below an instrument's range of measurement, the method of standard addition can be used. In this method a known quantity of the element or compound under study is added, and the difference between the concentration added and the concentration observed is the amount actually in the sample.
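A minimal sketch of this procedure in Python (standards and responses invented), fitting a linear calibration curve and inverting it for an unknown:

```python
import numpy as np

# Hypothetical calibration data: instrument response vs. standard concentration.
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])        # standards, mg/L
signal = np.array([0.02, 0.21, 0.40, 0.99, 2.01])  # e.g., absorbance

slope, intercept = np.polyfit(conc, signal, 1)     # linear least-squares fit

# Invert the fitted line to estimate an unknown from its measured signal.
unknown_signal = 0.63
unknown_conc = (unknown_signal - intercept) / slope
print(f"Estimated concentration: {unknown_conc:.2f} mg/L")
```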
Internal standards
Sometimes an internal standard is added at a known concentration directly to an
analytical sample to aid in quantitation. The amount of analyte present is then
determined relative to the internal standard as a calibrant.
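A minimal sketch of internal-standard quantitation (all signals, the spike level, and the response factor are invented; in practice the response factor is measured beforehand from a standard mixture):

```python
# Ratioing to the internal standard cancels injection-volume and drift effects.
analyte_signal = 5400.0  # instrument response for the analyte (invented)
istd_signal = 12000.0    # response for the internal standard (invented)
istd_conc = 10.0         # known spiked concentration of the standard, mg/L
response_factor = 1.15   # (analyte/istd signal ratio) per unit concentration ratio

analyte_conc = (analyte_signal / istd_signal) / response_factor * istd_conc
print(f"Analyte concentration ≈ {analyte_conc:.2f} mg/L")
```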
Standard addition
The method of standard addition is used in instrumental analysis to determine the concentration of a substance (analyte) in an unknown sample by comparison to a set of samples of known concentration, similar to using a calibration curve. Standard addition can be applied to most analytical techniques and is used instead of a calibration curve to solve the matrix effect problem.
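A minimal sketch (invented data): equal aliquots of the sample are spiked with increasing amounts of standard, and the fitted line is extrapolated back to zero signal; the x-intercept magnitude is the concentration originally present.

```python
import numpy as np

added = np.array([0.0, 1.0, 2.0, 3.0])       # standard added, mg/L
signal = np.array([0.30, 0.50, 0.71, 0.89])  # measured response per spiked aliquot

slope, intercept = np.polyfit(added, signal, 1)

# At zero added standard the response is due only to the analyte already
# present, so |x-intercept| = intercept/slope estimates that concentration.
original_conc = intercept / slope
print(f"Analyte in sample ≈ {original_conc:.2f} mg/L")
```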

Signals and noise


One of the most important components of analytical chemistry is maximizing the desired signal while minimizing the associated noise.[15] The analytical figure of merit is known as the signal-to-noise ratio (S/N or SNR).
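For a steady signal, S/N is often estimated as the mean of repeated measurements divided by their standard deviation; a minimal sketch with invented readings:

```python
import numpy as np

readings = np.array([10.2, 9.8, 10.1, 10.4, 9.9, 10.0])  # replicate measurements
snr = readings.mean() / readings.std(ddof=1)              # mean / sample std dev
print(f"S/N ≈ {snr:.1f}")
```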

Noise can arise from environmental factors as well as from fundamental physical
processes.
Thermal noise
Thermal noise results from the random thermal motion of charge carriers (usually electrons) in an electrical circuit. Thermal noise is white noise, meaning that its power spectral density is constant throughout the frequency spectrum.

The root mean square value of the thermal noise voltage in a resistor is given by[15]

$v_{\mathrm{rms}} = \sqrt{4 k_B T R \,\Delta f}$

where kB is Boltzmann's constant, T is the temperature, R is the resistance, and Δf is the measurement bandwidth.
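A worked numeric example of this formula (component values chosen for illustration):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 298.0           # temperature, K
R = 1.0e6           # resistance, ohm (a 1 megohm resistor)
delta_f = 1.0e4     # measurement bandwidth, Hz

v_rms = math.sqrt(4 * k_B * T * R * delta_f)        # Johnson-Nyquist noise voltage
print(f"Thermal noise ≈ {v_rms * 1e6:.1f} µV rms")  # ≈ 12.8 µV
```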

Shot noise
Main article: Shot noise

Shot noise is a type of electronic noise that occurs when the finite number of particles (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to statistical fluctuations in a signal.

Shot noise is a Poisson process, and the charge carriers that make up the current follow a Poisson distribution. The root mean square current fluctuation is given by[15]

$i_{\mathrm{rms}} = \sqrt{2 e I \,\Delta f}$

where e is the elementary charge, I is the average current, and Δf is the measurement bandwidth. Shot noise is white noise.
Flicker noise
Flicker noise is electronic noise with a 1/f frequency spectrum; as f increases, the noise decreases. Flicker noise arises from a variety of sources, such as impurities in a conductive channel, generation and recombination noise in a transistor due to base current, and so on. This noise can be avoided by modulating the signal at a higher frequency, for example through the use of a lock-in amplifier.
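A toy demonstration of the lock-in idea (all parameters invented): the signal is modulated at a known frequency, multiplied by a reference at that frequency, and low-pass filtered, which shifts the measurement away from the low-frequency 1/f region.

```python
import numpy as np

fs, f_mod = 10_000, 137.0      # sample rate (Hz) and modulation frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)  # one second of data
true_amplitude = 0.01

modulated = true_amplitude * np.sin(2 * np.pi * f_mod * t)
noisy = modulated + 0.1 * np.random.randn(t.size)  # noise swamps the raw signal

demodulated = noisy * np.sin(2 * np.pi * f_mod * t)  # mix back down to DC
recovered = 2 * demodulated.mean()                   # crude low-pass filter
print(f"Recovered amplitude ≈ {recovered:.4f} (true value {true_amplitude})")
```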
Environmental noise

Noise in a thermogravimetric analysis; lower noise in the middle of the plot results from
less human activity (and environmental noise) at night.

Environmental noise arises from the surroundings of the analytical instrument. Sources of electromagnetic noise are power lines, radio and television stations, wireless devices, and electric motors. Many of these noise sources are narrow bandwidth and therefore can be avoided. Temperature and vibration isolation may be required for some instruments.
Noise reduction
Noise reduction can be accomplished either in hardware or software. Examples of hardware noise reduction are the use of shielded cable, analog filtering, and signal modulation. Examples of software noise reduction are digital filtering, ensemble averaging, boxcar averaging, and correlation methods.
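A minimal sketch of two of the software approaches (synthetic data): ensemble averaging of repeated scans, where random noise shrinks roughly as 1/sqrt(n), and a boxcar (moving) average over a single scan.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.exp(-((t - 0.5) ** 2) / 0.005)  # a synthetic noise-free peak

# Ensemble average: repeat the scan n times and average them point by point.
n = 100
scans = clean + 0.5 * rng.standard_normal((n, t.size))
ensemble = scans.mean(axis=0)

# Boxcar average: smooth one scan with a short moving window.
window = 11
boxcar = np.convolve(scans[0], np.ones(window) / window, mode="same")

print(f"Single-scan noise: {np.std(scans[0] - clean):.3f}")
print(f"Ensemble noise:    {np.std(ensemble - clean):.3f}")  # ~10x smaller
```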

Applications
Analytical chemistry research is largely driven by performance (sensitivity,
selectivity, robustness, linear range, accuracy, precision, and speed), and cost
(purchase, operation, training, time, and space). Among the main branches of
contemporary analytical atomic spectrometry, the most widespread and
universal are optical and mass spectrometry. In the direct elemental analysis
of solid samples, the new leaders are laser-induced breakdown and laser
ablation mass spectrometry, and the related techniques with transfer of the
laser ablation products into inductively coupled plasma. Advances in design of
diode lasers and optical parametric oscillators promote developments in
fluorescence and ionization spectrometry and also in absorption techniques
where uses of optical cavities for increased effective absorption pathlength are
expected to expand. Steady progress and growth in applications of plasma-
and laser-based methods are noticeable. Interest in absolute (standardless) analysis has revived, particularly in emission spectrometry.

Much effort is put into shrinking analysis techniques to chip size (micro total analysis systems (µTAS) or lab-on-a-chip). Although there are few examples of such systems competitive with traditional analysis techniques, potential advantages include size/portability, speed, and cost. Microscale chemistry also reduces the amounts of chemicals used.

Much effort is also put into analyzing biological systems. Examples of rapidly
expanding fields in this area are:
 Genomics - DNA sequencing and its related research. Genetic fingerprinting and DNA microarrays are very popular tools and research fields.
 Proteomics - the analysis of protein concentrations and modifications, especially in response to various stressors, at various developmental stages, or in various parts of the body.
 Metabolomics - similar to proteomics, but dealing with metabolites.
 Transcriptomics - mRNA and its associated field.
 Lipidomics - lipids and their associated field.
 Peptidomics - peptides and their associated field.
 Metallomics - similar to proteomics and metabolomics, but dealing with metal concentrations and especially with their binding to proteins and other molecules.

Analytical chemistry has played a critical role in the understanding of basic science and in a variety of practical applications, such as biomedical applications, environmental monitoring, quality control of industrial manufacturing, forensic science, and so on.

The recent developments of computer automation and information technologies have spurred analytical chemistry to initiate a number of new biological fields. For example, automated DNA sequencing machines were the basis for completing human genome projects, leading to the birth of genomics.
Protein identification and peptide sequencing by mass spectrometry opened a
new field of proteomics. Furthermore, a number of ~omics based on analytical
chemistry have become important areas in modern biology.

Analytical chemistry has also been an indispensable area in the development of nanotechnology. Surface characterization instruments, electron microscopes, and scanning probe microscopes enable scientists to visualize atomic structures with chemical characterization.

Among active contemporary analytical chemistry research fields, micro total analysis systems are considered a promising revolutionary technology. In this approach, integrated and miniaturized analytical systems are being developed to control and analyze single cells and single molecules. This cutting-edge technology has the promising potential of leading a new revolution in science, as integrated circuits did in computer development.

Sensory Analysis And Instruments

• Sensory analysis is a method used to describe the sensations that humans perceive with their five senses when in contact with a product. Among the five senses, we distinguish physical (hearing, touch, sight) and chemical (olfaction, taste) senses. Alpha MOS has developed solutions linked to the chemical senses.

The sense of smell, also called olfaction, is the perception of odorant molecules either by direct inhalation or during mastication (the retronasal route).

The sense of taste consists of the perception of savors, the sensations perceived by the tongue during tasting. Commonly, savors are classified into five categories: salty, sour, bitter, sweet, and umami.

In sensory analysis, there are two types of tests:

• hedonic tests (I like it / I don't like it), which consist of consumer tests
• analytical tests (descriptive or discriminative), which are conducted by a trained sensory panel.

For analytical tests, electronic nose and tongue analyzers can also be used. These instruments analyze odor or taste in the same way as the human nose and tongue: instead of separating and identifying the various chemical compounds, they measure a global odor or taste.

LIST OF ELECTRONIC DEVICES USED IN CHEMICAL ANALYSIS AND MEASUREMENTS

Breathalyzer


A law enforcement grade breathalyzer

A breathalyzer (U.S.A.) or breathalyser (U.K.) (a portmanteau of breath and analyzer/analyser) is a device for estimating blood alcohol content (BAC) from a breath sample.
Breathalyzer is the brand name of a series of models made by one manufacturer of these
instruments (originally Smith and Wesson[1], later sold to National Draeger), but has become a
genericized trademark for all such instruments.[citation needed] In Canada, a preliminary non-
evidentiary screening device can be approved by Parliament as an approved screening device,
and an evidentiary breath instrument can be similarly designated as an approved instrument. The
U.S. National Highway Traffic Safety Administration maintains a Conforming Products List of
breath alcohol devices approved for evidentiary use,[2] as well as for preliminary screening use.[3]


Origins
In a 1927 paper, Emil Bogen[4], who collected air in a football bladder and then tested this air for traces of alcohol, reported that the alcohol content of 2 litres of expired air was a little greater than that of 1 cc of urine. However, research into the possibilities of using breath to test for alcohol in a person's body dates as far back as 1874, when Anstie observed that small amounts of alcohol were excreted in breath[5].

The first practical roadside breath-testing device intended for use by the police was the
drunkometer. The drunkometer was developed by Professor Harger in 1938. The drunkometer
collected a motorist's breath sample directly into a balloon inside the machine. The breath sample
was then pumped through an acidified potassium permanganate solution. If there was alcohol in
the breath sample, the solution changed colour. The greater the colour change, the more alcohol
there was present in the breath.

The drunkometer was quite cumbersome, approximately the size of a shoe box, and was more reminiscent of a portable laboratory than a hand-held device.

In late 1927, in a case in Marlborough, England, a Dr. Gorsky, Police Surgeon, asked a suspect to inflate a football bladder with his breath. Since the 2 liters of the man's breath contained 1.5 ml of ethanol, Dr. Gorsky testified before the court that the defendant was "50% drunk".[6] Though technologies for detecting alcohol vary, Dr. Robert Borkenstein (1912–2002), a captain with the Indiana State Police and later a professor at Indiana University at Bloomington, is widely regarded as the first to create a device that measures a subject's blood alcohol level based on a breath sample. In 1954, Borkenstein invented his breathalyzer, which used chemical oxidation and photometry to determine alcohol concentration. Subsequent breathalyzers have converted primarily to infrared spectroscopy. The invention of the breathalyzer provided law enforcement with a non-invasive test providing immediate results to determine an individual's breath alcohol concentration at the time of testing. Note, however, that the breath alcohol concentration test result itself can vary between individuals consuming identical amounts of alcohol due to gender, weight, and genetic predisposition.

Chemistry
When the user exhales into the breathalyzer, any ethanol present in their breath is oxidized to
acetic acid at the anode:

CH3CH2OH(g) + H2O(l) → CH3CO2H(l) + 4H+(aq) + 4e−

At the cathode, atmospheric oxygen is reduced:

O2(g) + 4H+(aq) + 4e− → 2H2O(l)


The overall reaction, then, is the oxidation of ethanol to acetic acid and water.

CH3CH2OH(l) + O2(g) → CH3COOH(l) + H2O(l)

The electrical current produced by this reaction is measured, processed, and displayed as an
approximation of overall blood alcohol content by the breathalyzer.
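The anode reaction above transfers four electrons per ethanol molecule, so in an idealized fuel cell the integrated current relates charge to ethanol via Faraday's law. A hedged sketch (the charge value is invented, and real instruments rely on empirical calibration rather than this arithmetic):

```python
# Idealized fuel-cell arithmetic, not a real instrument's algorithm.
F = 96485.0      # Faraday constant, C/mol
n_electrons = 4  # electrons per ethanol molecule (from the anode reaction)

charge = 2.0e-3  # C, integrated current from one breath sample (invented)
moles_ethanol = charge / (n_electrons * F)
mass_ethanol = moles_ethanol * 46.07  # g, using the molar mass of ethanol
print(f"Ethanol oxidized ≈ {mass_ethanol * 1e9:.0f} ng")
```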

Law enforcement


Breath analyzers do not directly measure blood alcohol content or concentration, which requires
the analysis of a blood sample. Instead, they estimate BAC indirectly by measuring the amount
of alcohol in one's breath. Two breathalyzer technologies are most prevalent. Desktop analyzers
generally use infrared spectrophotometer technology, electrochemical fuel cell technology, or a
combination of the two. Hand-held field testing devices are generally based on electrochemical
platinum fuel cell analysis and, depending upon jurisdiction, may be used by officers in the field
as a form of "field sobriety test" commonly called PBT (preliminary breath test) or PAS
(preliminary alcohol screening) or as evidential devices in POA (point of arrest) testing.

Consumer use


There are many models of consumer or personal breath alcohol testers on the market. These
hand-held devices are generally less expensive than the devices used by law enforcement. Most
retail consumer breath testers use semiconductor-based sensing technology, which is less
expensive, less accurate, and less reliable than fuel cell and infrared devices. While
semiconductor devices can sense the presence of breath alcohol, they do not provide as reliable
measures of BAC.

All breath alcohol testers sold to consumers in the United States are required to be certified by
the Food and Drug Administration[7], while those used by law enforcement must be approved by
the Department of Transportation's National Highway Traffic Safety Administration.[8]

Manufacturers of over-the-counter consumer breathalyzers must submit an FDA 510(k) Premarket Clearance to demonstrate that the device to be marketed is at least as safe and effective, that is, substantially equivalent, to a legally marketed device (21 CFR 807.92(a)(3)) that is not subject to Premarket Approval (PMA). Submitters must compare their device to one or more similar legally marketed devices and make and support their substantial equivalency claims.[9] The devices are cleared as "screeners", which means they have met the requirements used by the FDA for detecting the presence of alcohol in the breath. Screener certification does not mean that the device can measure breath alcohol content accurately. Many breathalyzers cleared by the FDA are very inaccurate when it comes to BAC measurement. No semiconductor device has ever been approved for evidential use (to stand up in a court of law) by any state law enforcement agency or the U.S. Department of Transportation.

Breath test evidence in the United States



A breathalyzer in action

The breath alcohol content reading is used in criminal prosecutions in two ways. The operator of
a vehicle whose reading indicates a BrAC over the legal limit for driving will be charged with
having committed an illegal per se offense: that is, it is automatically illegal throughout the
United States to drive a vehicle with a BrAC of 0.08 or higher. One exception is the State of
Wisconsin, where a first time drunk driving offense is normally a civil ordinance violation.[10]
The breathalyzer reading will be offered as evidence of that crime, although the issue is what the
BrAC was at the time of driving rather than at the time of the test. Some jurisdictions now allow
the use of breathalyzer test results without regard as to how much time passed between operation
of the vehicle and the time the test was administered.[citation needed] The suspect will also be charged
with driving under the influence of alcohol (sometimes referred to as driving or operating while
intoxicated). While BrAC tests are not necessary to prove a defendant was under the influence,
laws in most states require the jury to presume that he was under the influence if his BrAC is
found and believed to be over 0.08 (grams of alcohol/210 liters breath) when driving.[citation needed]
This is a rebuttable presumption, however: the jury can disregard the test if they find it unreliable
or if other evidence establishes a reasonable doubt.

Infrared instruments are also known as "evidentiary breath testers" and generally produce court-
admissible results.[citation needed] Other instruments, usually hand held in design, are known as
"preliminary breath testers" (PBT), and their results, while valuable to an officer attempting to
establish probable cause for a drunk driving arrest, are generally not admissible in court. Some
states, such as Idaho, permit data or "readings" from hand-held PBTs to be presented as evidence
in court. If at all, they are generally only admissible to show the presence of alcohol or as a pass-
fail field sobriety test to help determine probable cause to arrest. South Dakota does not permit
data from any type of breath tester, and relies entirely on blood tests to ensure accuracy.

Common sources of error



Breath testers can be very sensitive to temperature, for example, and will give false readings if
not adjusted or recalibrated to account for ambient or surrounding air temperatures. The
temperature of the subject is also very important.[citation needed]

Breathing pattern can also significantly affect breath test results. One study found that the BAC
readings of subjects decreased 11–14% after running up one flight of stairs and 22–25% after
doing so twice. Another study found a 15% decrease in BAC readings after vigorous exercise or
hyperventilation. Hyperventilation for 20 seconds has been shown to lower the reading by
approximately 32%. On the other hand, holding one's breath for 30 seconds can increase the
breath test result by about 28%.[citation needed]

Some breath analysis machines assume a hematocrit (cell volume of blood) of 47%. However,
hematocrit values range from 42 to 52% in men and from 37 to 47% in women. A person with a
lower hematocrit will have a falsely high BAC reading.

Research indicates that breath tests can vary at least 15% from actual blood alcohol
concentration. An estimated 23% of individuals tested will have a BAC reading higher than their
true BAC. Police in Victoria, Australia, use breathalyzers that give a recognized 20% tolerance
on readings. Noel Ashby, former Victoria Police Assistant Commissioner (Traffic & Transport),
claims that this tolerance is to allow for different body types.[11]

Calibration

Many handheld breathalyzers sold to consumers use a silicon oxide sensor (also called a
semiconductor sensor) to determine the blood alcohol concentration. These sensors are far more
prone to contamination and interference from other substances besides breath alcohol. The
sensors require recalibration or replacement every six months. Higher-end personal breathalyzers and professional-use breath alcohol testers use platinum fuel cell sensors. These too require recalibration, but at less frequent intervals than semiconductor devices, usually once a year.

Calibration is the process of checking and adjusting the internal settings of a breathalyzer by
comparing and adjusting its test results to a known alcohol standard. Law enforcement
breathalyzers are meticulously maintained and re-calibrated frequently to ensure accuracy.

There are two methods of calibrating a precision fuel cell breathalyzer, the Wet Bath and the Dry
Gas method. Each method requires specialized equipment and factory trained technicians. It is
not a procedure that can be conducted by untrained users or without the proper equipment.
The Dry-Gas Method utilizes a portable calibration standard which is a precise mixture of
alcohol and inert nitrogen available in a pressurized canister. Initial equipment costs are less than
alternative methods and the steps required are fewer. The equipment is also portable allowing
calibrations to be done when and where required.

The Wet Bath Method utilizes an alcohol/water standard in a precise specialized alcohol
concentration, contained and delivered in specialized simulator equipment. Wet bath apparatus
has a higher initial cost and is not intended to be portable. The standard must be fresh and
replaced regularly.

Some semiconductor models are designed to allow the sensor module to be replaced without the
need to send the unit to a calibration lab.[citation needed]

Non-specific analysis

One major problem with older breathalyzers is non-specificity: the machines not only identify
the ethyl alcohol (or ethanol) found in alcoholic beverages, but also other substances similar in
molecular structure or reactivity.

The oldest breathalyzer models pass breath through a solution of potassium dichromate, which
oxidizes ethanol into acetic acid, changing color in the process. A monochromatic light beam is
passed through this sample, and a detector records the change in intensity and, hence, the change
in color, which is used to calculate the percent alcohol in the breath. However, since potassium
dichromate is a strong oxidizer, numerous alcohol groups can in principle be oxidized by it, producing false positives. In practice this source of false positives is unlikely, as very few other substances found in exhaled air are oxidisable.

Infrared-based breathalyzers project an infrared beam of radiation through the captured breath in
the sample chamber and detect the absorbance of the compound as a function of the wavelength
of the beam, producing an absorbance spectrum that can be used to identify the compound, as the
absorbance is due to the harmonic vibration and stretching of specific bonds in the molecule at
specific wavelengths (see infrared spectroscopy). The characteristic bond of alcohols in infrared
is the O-H bond, which gives a strong absorbance at a short wavelength. The more light is
absorbed by compounds containing the alcohol group, the less reaches the detector on the other
side—and the higher the reading. Other groups, most notably aromatic rings and carboxylic acids
can give similar absorbance readings.[12] Even water vapour does.

Interfering compounds

Some natural and volatile interfering compounds do exist, however. For example, the National
Highway Traffic Safety Administration (NHTSA) has found that dieters and diabetics may have
acetone levels hundreds or even thousands of times higher than those in others. Acetone is one of the many substances that can be falsely identified as ethyl alcohol by some breath machines.
However, fuel cell based systems are non-responsive to substances like acetone.
A study in Spain showed that metered-dose inhalers (MDIs) used in asthma treatment are also a
cause of false positives in breath machines.

Substances in the environment can also lead to false BAC readings. For example, methyl tert-
butyl ether (MTBE), a common gasoline additive, has been alleged anecdotally to cause false
positives in persons exposed to it. Tests have shown this to be true for older machines; however,
newer machines detect this interference and compensate for it.[13] Any number of other products
found in the environment or workplace can also cause erroneous BAC results. These include
compounds found in lacquer, paint remover, celluloid, gasoline, and cleaning fluids, especially
ethers, alcohols, and other volatile compounds.

Homeostatic variables

Breathalyzers assume that the subject being tested has a 2100-to-1 partition ratio[14] when converting alcohol measured in the breath to an estimate of alcohol in the blood: the instrument effectively reports grams of alcohol per 2100 ml of breath as if it were grams of alcohol per 100 ml of blood. However, this assumed partition ratio varies from 1300:1 to 3100:1 or wider among individuals and within a given individual over time. Assuming a true (and US legal) blood-alcohol concentration of .07%, for example, a person with a partition ratio of 1500:1 would have a breath test reading of .10%, over the legal limit.
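The arithmetic behind that example, as a short sketch (the 0.07 figure and the ratio range come from the text):

```python
ASSUMED_RATIO = 2100.0  # blood:breath partition ratio assumed by the instrument

def reported_bac(true_bac: float, actual_ratio: float) -> float:
    """Reading produced when the subject's real ratio differs from 2100:1."""
    breath_alcohol = true_bac / actual_ratio  # what is actually in the breath
    return breath_alcohol * ASSUMED_RATIO     # what the instrument reports

for ratio in (1300, 1500, 2100, 3100):
    print(f"ratio {ratio}:1 -> reading {reported_bac(0.07, ratio):.3f}")
# 1500:1 gives ~0.098, matching the over-the-limit example above.
```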

Most individuals do, in fact, have a 2100-to-1 partition ratio in accordance with William Henry's
law, which states that when the water solution of a volatile compound is brought into equilibrium
with air, there is a fixed ratio between the concentration of the compound in air and its
concentration in water. This ratio is constant at a given temperature. The human body is 37
degrees Celsius on average. Breath leaves the mouth at a temperature of 34 degrees Celsius.
Alcohol in the body obeys Henry's Law as it is a volatile compound and diffuses in body water.
To ensure that variables such as fever and hypothermia cannot be said to skew the results against the accused, the instrument is calibrated at a ratio of 2100:1, which underestimates the BAC of the average subject by about 9 percent. For a fever to cause a significant overestimate, it would have to be severe enough that the subject would likely be in the hospital rather than driving in the first place. Studies suggest that about 1.8% of the population have a partition ratio below
2100:1. Thus, a machine using a 2100-to-1 ratio could actually overestimate the BAC. As much
as 14% of the population has a partition ratio above 2100, thus causing the machine to under-
report the BAC.

Further, the assumption that the test subject's partition ratio will be average—that there will be
2100 parts in the blood for every part in the breath—means that accurate analysis of a given
individual's blood alcohol by measuring breath alcohol is difficult, as the ratio varies
considerably.

Variance in how much one breathes out can also give false readings, usually low.[15] This is due
to biological variance in breath alcohol concentration as a function of the volume of air in the
lungs, an example of a factor which interferes with the liquid-gas equilibrium assumed by the
devices. The presence of volatile components is another example of this; mixtures of volatile
compounds can be more volatile than their components, which can create artificially high levels
of ethanol (or other) vapors relative to the normal biological blood/breath alcohol equilibrium.

Mouth alcohol

One of the most common causes of falsely high breathalyzer readings is the existence of mouth
alcohol. In analyzing a subject's breath sample, the breathalyzer's internal computer is making
the assumption that the alcohol in the breath sample came from alveolar air—that is, air exhaled
from deep within the lungs. However, alcohol may have come from the mouth, throat or stomach
for a number of reasons. To help guard against mouth-alcohol contamination, certified breath-
test operators are trained to observe a test subject carefully for at least 15–20 minutes before
administering the test.

The problem with mouth alcohol being analyzed by the breathalyzer is that it was not absorbed
through the stomach and intestines and passed through the blood to the lungs. In other words, the
machine's computer is mistakenly applying the partition ratio (see above) and multiplying the
result. Consequently, a very tiny amount of alcohol from the mouth, throat or stomach can have a
significant impact on the breath-alcohol reading.

Other than recent drinking, the most common source of mouth alcohol is from belching or
burping. This causes the liquids and/or gases from the stomach—including any alcohol—to rise
up into the soft tissue of the esophagus and oral cavity, where it will stay until it has dissipated.
The American Medical Association concludes in its Manual for Chemical Tests for Intoxication
(1959): "True reactions with alcohol in expired breath from sources other than the alveolar air
(eructation, regurgitation, vomiting) will, of course, vitiate the breath alcohol results." For this
reason, police officers are supposed to keep a DUI suspect under observation for at least 15
minutes prior to administering a breath test. Instruments such as the Intoxilyzer 5000 also feature
a "slope" parameter. This parameter detects any decrease in alcohol concentration of 0.006 g per
210 L of breath in 0.6 second, a condition indicative of residual mouth alcohol, and will result in
an "invalid sample" warning to the operator, notifying the operator of the presence of the residual
mouth alcohol. PBTs, however, feature no such safeguard.
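A toy version of such a slope check (the 0.006 g/210 L and 0.6 s figures are quoted from the text; the reading sequence is invented):

```python
SLOPE_LIMIT = 0.006  # flag if concentration falls this much (g/210 L) ...
WINDOW_S = 0.6       # ... within this many seconds

def invalid_sample(readings, dt):
    """readings: alcohol concentrations sampled every dt seconds."""
    steps = max(1, round(WINDOW_S / dt))
    return any(readings[i] - readings[i + steps] >= SLOPE_LIMIT
               for i in range(len(readings) - steps))

# A rapidly falling profile, suggestive of mouth alcohol -> True
print(invalid_sample([0.110, 0.102, 0.094, 0.087], dt=0.3))
```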

Acid reflux, or gastroesophageal reflux disease, can greatly exacerbate the mouth-alcohol
problem. The stomach is normally separated from the throat by a valve, but when this valve
becomes herniated, there is nothing to stop the liquid contents in the stomach from rising and
permeating the esophagus and mouth. The contents—including any alcohol—are then later
exhaled into the breathalyzer.[16]

Mouth alcohol can also be created in other ways. Dentures, for example, will trap alcohol. Periodontal disease can also create pockets in the gums which will contain the alcohol for longer periods. Passionate kissing with an intoxicated person is also known to produce false results due to residual alcohol in the mouth. Recent use of mouthwash or breath freshener, possibly to disguise the smell of alcohol when being pulled over by police, can also raise readings, as these products contain fairly high levels of alcohol.
Testing during absorptive phase

Absorption of alcohol continues for anywhere from 20 minutes (on an empty stomach) to two-
and-one-half hours (on a full stomach) after the last consumption. Peak absorption generally
occurs within an hour. During the initial absorptive phase, the distribution of alcohol throughout
the body is not uniform. Uniformity of distribution, called equilibrium, occurs just as absorption
completes. In other words, some parts of the body will have a higher blood alcohol content
(BAC) than others. One aspect of the non-uniformity before absorption is complete is that the
BAC in arterial blood will be higher than in venous blood. Laws generally require blood samples
to be venous.

During the initial absorption phase, arterial blood alcohol concentrations are higher than venous.
After absorption, venous blood is higher. This is especially true with bolus dosing. With additional doses of alcohol, the body can reach a sustained equilibrium when absorption and elimination are proportional, using a general absorption figure of 0.02/drink and a general elimination rate of 0.015/hour. (One drink is equal to 1.5 ounces of liquor, 12 ounces of beer, or 5 ounces of wine [1].)

Breath alcohol is a representation of the equilibrium of alcohol concentration as the blood gases
(alcohol) pass from the (arterial) blood into the lungs to be expired in the breath. The venous
blood picks up oxygen for distribution throughout the body. Breath alcohol concentrations are
generally lower than blood alcohol concentrations, because a true representation of blood alcohol
concentration is only possible if the lungs were able to completely deflate. Vitreous (eye) fluid
provides the most accurate account of blood alcohol concentrations.

Retrograde extrapolation

The breathalyzer test is usually administered at a police station, commonly an hour or more after
the arrest. Although this gives the BrAC at the time of the test, it does not by itself answer the
question of what it was at the time of driving. The prosecution typically provides an estimated
alcohol concentration at the time of driving utilizing retrograde extrapolation, presented by
expert opinion. This involves projecting back in time to estimate the BrAC level at the time of
driving, by applying the physiological properties of absorption and elimination rates in the
human body.

Extrapolation is calculated using five factors and a general elimination rate of 0.015/hour.

For example: time of breath test: 10:00 pm; result of breath test: 0.080; time of driving (stopped by officer): 9:00 pm; time of last drink: 8:00 pm; last food: 12:00 pm.

Using these facts, an expert can say the person's last drink was consumed on an empty stomach, which means absorption of the last drink (at 8:00) was complete within one hour, by 9:00. At the time of the stop, the driver was fully absorbed. The test result of 0.080 was obtained at 10:00, so the one hour of elimination that occurred since the stop is added back in, making 0.080 + 0.015 = 0.095 the approximate breath alcohol concentration at the time of the stop.
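The same extrapolation as a one-line function (elimination rate from the text; assuming, as in the example, that absorption was already complete at the time of driving):

```python
ELIMINATION_RATE = 0.015  # BrAC units eliminated per hour (general figure)

def extrapolate_brac(test_result: float, hours_since_driving: float) -> float:
    """Estimate BrAC at the time of driving from a later test result."""
    return test_result + ELIMINATION_RATE * hours_since_driving

print(extrapolate_brac(0.080, 1.0))  # -> 0.095, matching the worked example
```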
Photovoltaic assay
The photovoltaic assay, used only in the dated Photo Electric Intoximeter (PEI), is a form of
breath testing rarely encountered today. The process works by using photocells to analyze the
color change of a redox (oxidation-reduction) reaction. A breath sample is bubbled through an
aqueous solution of sulfuric acid, potassium dichromate, and silver nitrate. The silver nitrate acts
as a catalyst, allowing the alcohol to be oxidized at an appreciable rate. The requisite acidic condition needed for the reaction is provided by the sulfuric acid. In solution, ethanol reacts with the potassium dichromate, reducing the dichromate ion to the chromium (III) ion.
This reduction results in a change of the solution's colour from red-orange to green. The reacted
solution is compared to a vial of non-reacted solution by a photocell, which creates an electric
current proportional to the degree of the colour change; this current moves the needle that
indicates BAC.

Like other methods, breath testing devices using chemical analysis are somewhat prone to false
readings. Compounds that have compositions similar to ethanol, for example, could also act as
reducing agents, creating the necessary color change to indicate increased BAC.

Breathalyzer myths


There are a number of substances or techniques that can supposedly "fool" a breathalyzer (i.e.,
generate a lower blood alcohol content).

A 2003 episode of the popular science television show MythBusters tested a number of methods
that supposedly allow a person to fool a breathalyzer test. The methods tested included breath
mints, onions, denture cream, mouthwash, pennies and batteries; all of these methods proved
ineffective. The show noted that using items such as breath mints, onions, denture cream and
mouthwash to cover the smell of alcohol may fool a person, but, since they will not actually
reduce a person's BAC, there will be no effect on a breathalyzer test regardless of the quantity
used. Pennies supposedly produce a chemical reaction, while batteries supposedly create an
electrical charge, yet neither of these methods affected the breathalyzer results.[17]

The MythBusters episode also pointed out another complication: It would be necessary to insert
the item into one's mouth (e.g. eat an onion, rinse with mouthwash, conceal a battery), take the
breath test, and then possibly remove the item — all of which would have to be accomplished
discreetly enough to avoid alerting the police officers administering the test (who would
obviously become very suspicious if they noticed that a person was inserting items into their
mouth prior to taking a breath test). It would likely be very difficult, especially for someone in an
intoxicated state, to be able to accomplish such a feat.[17]

In addition, the show noted that breath tests are often verified with blood tests (which are more
accurate) and that even if a person somehow managed to fool a breath test, a blood test would
certainly confirm a person's guilt.[17]
Other substances that might reduce the BAC reading include a bag of activated charcoal
concealed in the mouth (to absorb alcohol vapor), an oxidizing gas (such as N2O, Cl2, O3, etc.)
that would fool a fuel cell type detector, or an organic interferent to fool an infrared absorption
detector. The infrared absorption detector is more vulnerable to interference than a laboratory
instrument measuring a continuous absorption spectrum since it only makes measurements at
particular discrete wavelengths. However, because any interference can only cause higher absorption, not lower, the estimated blood alcohol content will be overestimated.[citation needed] Additionally, Cl2 is rather toxic and corrosive.

A 2007 episode of the Spike network's show Manswers showed some of the more common and not-so-common attempts to beat the breathalyzer, none of which work. Test 1 was to suck on a copper coin. (Actually, copper coins are now generally only copper-plated and mostly zinc or steel.[citation needed]) Test 2 was to hold a battery on the tongue. Test 3 was to chew
gum. None of these tests showed a "pass" reading if the subject had consumed alcohol.

Products that interfere with testing


On the other hand, products such as mouthwash or breath spray can "fool" breath machines by
significantly raising test results. Listerine mouthwash, for example, contains 27% alcohol. The
breath machine is calibrated with the assumption that the alcohol is coming from alcohol in the
blood diffusing into the lung rather than directly from the mouth, so it applies a partition ratio of
2100:1 in computing blood alcohol concentration—resulting in a false high test reading. To
counter this, officers are not supposed to administer a PBT for 15 minutes after the subject eats,
vomits, or puts anything in their mouth. In addition, most instruments require that the individual
be tested twice at least two minutes apart. Mouthwash or other mouth alcohol will have
somewhat dissipated after two minutes and cause the second reading to disagree with the first,
requiring a retest. (Also see the discussion of the "slope parameter" of the Intoxilyzer 5000 in the
"Mouth Alcohol" section above.)

This was clearly illustrated in a study conducted with Listerine mouthwash on a breath machine
and reported in an article entitled "Field Sobriety Testing: Intoxilyzers and Listerine Antiseptic"
published in the July 1985 issue of The Police Chief (p. 70). Seven individuals were tested at a
police station, with readings of 0.00%. Each then rinsed his mouth with 20 milliliters of Listerine
mouthwash for 30 seconds in accordance with directions on the label. All seven were then tested
on the machine at intervals of one, three, five and ten minutes. The results showed an average reading of 0.43% blood-alcohol concentration, a level that, if accurate, approaches lethal proportions. After three minutes, the average level was still 0.020%, despite the absence of any alcohol in the system. Even after five minutes, the average level was 0.011%.

In another study, reported in 8(22) Drinking/Driving Law Letter 1, a scientist tested the effects of
Binaca breath spray on an Intoxilyzer 5000. He performed 23 tests with subjects who sprayed
their throats and obtained readings as high as 0.81—far beyond lethal levels. The scientist also
noted that the effects of the spray did not fall below detectable levels until after 18 minutes.

Carbon dioxide sensor



A carbon dioxide sensor or CO2 sensor is an instrument for the measurement of carbon dioxide gas. The most common principles for CO2 sensors are infrared gas sensors (NDIR) and chemical gas sensors. Measuring carbon dioxide is important in monitoring indoor air quality and many industrial processes.


Nondispersive Infrared (NDIR) CO2 Sensors


NDIR sensors are spectroscopic sensors that detect CO2 in a gaseous environment by its characteristic absorption. The key components are an infrared source, a light tube, an interference (wavelength) filter, and an infrared detector. The gas is pumped or diffuses into the light tube, and the electronics measure the absorption of the characteristic wavelength of light. NDIR sensors are most often used for measuring carbon dioxide.[1] The best of these have sensitivities of 20-50 PPM.[1] Typical NDIR sensors are still in the (US) $100 to $1000 range. New developments include using microelectromechanical systems to bring down the costs of this sensor and to create smaller devices (for example for use in air conditioning applications). NDIR CO2 sensors are also used to measure dissolved CO2 for applications such as beverage carbonation, pharmaceutical fermentation, and CO2 sequestration. In this case they are mated to an ATR (attenuated total reflection) optic and measure the gas in situ.
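As a hedged sketch of the measurement principle, an idealized NDIR reading can be modeled with the Beer-Lambert law; the absorptivity and path length below are invented values, and real sensors are calibrated empirically:

```python
import math

EPSILON = 20.0  # L/(mol·cm), effective absorptivity at the CO2 band (assumed)
PATH_CM = 10.0  # cm, optical path through the light tube (assumed)

def co2_molarity(i_measured: float, i_reference: float) -> float:
    """Concentration from the transmitted/reference intensity ratio."""
    absorbance = -math.log10(i_measured / i_reference)
    return absorbance / (EPSILON * PATH_CM)

print(f"{co2_molarity(0.95, 1.00):.2e} mol/L")  # ~1.1e-4 mol/L for 5% dimming
```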

Chemical CO2 Sensors


Chemical CO2 gas sensors with sensitive layers based on polymers or heteropolysiloxane have the principal advantage of very low energy consumption and can be reduced in size to fit into microelectronic-based systems. On the downside, short- and long-term drift effects as well as a rather low overall lifetime are major obstacles when compared with the NDIR measurement principle[2].

Applications
• For air conditioning applications, these kinds of sensors can be used to monitor the quality of air and the corresponding need for fresh air.
• In applications where direct temperature measurement is not applicable, NDIR sensors can be used. The sensors absorb ambient infrared radiation (IR) given off by a heated surface.

Carbon monoxide detector





Carbon Monoxide detector connected to a North American power outlet


A carbon monoxide detector or CO detector is a device that detects the presence of carbon monoxide (CO) gas in order to prevent carbon monoxide poisoning. CO is a colorless and odorless compound produced by incomplete combustion. It is often referred to as the "silent killer" because it is virtually undetectable without using detection technology.[1] Elevated levels of CO can be dangerous to humans depending on the amount present and the length of exposure. Smaller concentrations can be harmful over longer periods of time, while increasing concentrations require diminishing exposure times to be harmful.[2]

CO detectors are designed to measure CO levels over time and sound an alarm before dangerous
levels of CO accumulate in an environment, giving people adequate warning to safely ventilate
the area or evacuate. Some system-connected detectors also alert a monitoring service that can
dispatch emergency services if necessary.

While CO detectors do not serve as smoke detectors and vice versa, dual smoke/CO detectors are
also sold. Smoke detectors detect the smoke generated by flaming or smoldering fires, whereas
CO detectors go into alarm and warn people about dangerous CO buildup caused, for example,
by a malfunctioning fuel-burning device. In the home, some common sources of CO include
open flames, space heaters, water heaters, blocked chimneys or running a car inside a garage.[3]


Installation
The devices, which retail for US$20–$60 and are widely available, can either be battery-operated or AC powered (with or without a battery backup). Battery lifetimes have been increasing as the technology has developed, and certain battery-powered devices now advertise a battery lifetime of over 6 years. All CO detectors have "test" buttons like smoke detectors.

CO detectors can be placed near the ceiling or near the floor because CO is very close to the same density as air[4][5].

Since CO is colorless, tasteless and odorless (unlike smoke from a fire), detection in a home environment is impossible without such a warning device. It is a highly toxic inhalant and binds to hemoglobin (in the bloodstream) about 200 times more readily than oxygen, reducing the amount of oxygen transported through the body. In North America, some state, provincial and
municipal governments have statutes requiring installation of CO detectors in construction -
among them, the U.S. states of Alaska, Colorado, Connecticut, Florida, Georgia, Illinois,
Maryland, Massachusetts, Minnesota, New Jersey, New York, Rhode Island, Texas, Vermont,
Virginia, Wisconsin and West Virginia, as well as New York City and the Canadian province of
Ontario.[6]

When carbon monoxide detectors were introduced into the market, they had a limited lifespan of 2 years. However, technology developments have increased this, and many now advertise up to 7 years.[7] Newer models are designed to signal a need to be replaced after that time-span, although there are many instances of detectors operating far beyond this point.

According to the 2005 edition of the carbon monoxide guidelines NFPA 720,[8] published by the
National Fire Protection Association, sections 5.1.1.1 and 5.1.1.2, all CO detectors “shall be
centrally located outside of each separate sleeping area in the immediate vicinity of the
bedrooms,” and each detector “shall be located on the wall, ceiling or other location as specified
in the installation instructions that accompany the unit.”

Installation locations vary by manufacturer. Manufacturers' recommendations differ to a certain
degree based on research conducted with each one's specific detector. Therefore, be sure to read
the provided installation manual for each detector before installing.

CO detectors are available as stand-alone models or system-connected, monitored devices[9].


System-connected detectors, which can be wired to either a security or fire panel, are monitored
by a central station. If the residence is empty, the residents are asleep, or occupants are already
suffering from the effects of CO, the central station can be alerted to the high concentration of
CO gas and can send the proper authorities to investigate.

The gas sensors in CO alarms have a limited and indeterminable life span, typically two to five
years. The test button on a CO alarm only tests the battery and circuitry, not the sensor. CO
alarms should be tested with an external source of calibrated test gas, as recommended by the
latest version of NFPA 720. Alarms should be checked on installation and at least annually
during the manufacturer's warranty period, and alarms over five years old should be replaced.

Sensors
Early designs were basically a white pad which would fade to a brownish or blackish colour if
carbon monoxide were present. Such chemical detectors are cheap and widely available, but only
give a visual warning of a problem. As carbon monoxide-related deaths increased during the
1990s, audible alarms became standard.

The alarm points on carbon monoxide detectors are not a simple alarm level (as in smoke
detectors) but are a concentration-time function. At lower concentrations (e.g. 100 parts per
million) the detector will not sound an alarm for many tens of minutes. At 400 parts per million
(ppm), the alarm will sound within a few minutes. This concentration-time function is intended
to mimic the uptake of carbon monoxide in the body while also preventing false alarms due to
relatively common sources of carbon monoxide such as cigarette smoke.
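
As an illustration of how such a concentration-time rule might be implemented, the sketch below accumulates a fractional "dose" against an alarm curve. The curve values are assumptions chosen to match the figures quoted above (tens of minutes at 100 ppm, a few minutes at 400 ppm); real detectors follow the thresholds of standards such as UL 2034.

```python
# Minimal sketch of a concentration-time alarm rule for a CO detector.
# The (ppm, minutes-to-alarm) points are illustrative assumptions, not
# values taken from UL 2034 or any other standard.

ALARM_CURVE = [
    (70, 240.0),    # low concentration: alarm only after hours
    (100, 90.0),    # ~100 ppm: many tens of minutes before alarming
    (150, 30.0),
    (400, 10.0),    # ~400 ppm: alarm within a few minutes
]

def minutes_to_alarm(ppm: float) -> float:
    """Interpolate the allowed exposure time before the alarm must sound."""
    if ppm < ALARM_CURVE[0][0]:
        return float("inf")              # below the lowest alarm point
    if ppm >= ALARM_CURVE[-1][0]:
        return ALARM_CURVE[-1][1]
    for (c0, t0), (c1, t1) in zip(ALARM_CURVE, ALARM_CURVE[1:]):
        if c0 <= ppm < c1:
            frac = (ppm - c0) / (c1 - c0)
            return t0 + frac * (t1 - t0)

def should_alarm(samples_ppm, sample_period_min=1.0) -> bool:
    """Accumulate fractional dose each sample; alarm when it reaches 1."""
    dose = 0.0
    for ppm in samples_ppm:
        dose += sample_period_min / minutes_to_alarm(ppm)
        if dose >= 1.0:
            return True
    return False

print(should_alarm([400] * 12))    # True: trips within ~10 minutes
print(should_alarm([50] * 120))    # False: low level never alarms
```

Note how a steady low reading never accumulates dose, mirroring the false-alarm immunity to sources such as cigarette smoke described above.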

There are four types of sensors available; they vary in cost, accuracy and speed of response.[10]
The latter three types include sensor elements that typically last up to 10 years. At least one
CO detector is available which includes a battery and sensor in a replaceable module. Most CO
detectors do not have replaceable sensors.

Opto-Chemical

The detector consists of a pad of a coloured chemical which changes colour upon reaction with
carbon monoxide. However, these detectors provide only a qualitative warning of the gas. Their
main advantage is their low cost, but they also offer the lowest level of protection.

Biomimetic

A biomimetic (chem-optical or gel cell) sensor works with a form of synthetic hemoglobin which
darkens in the presence of CO, and lightens without it. This can either be seen directly or
connected to a light sensor and alarm. The battery usually lasts 2-3 years, and the device itself
lasts about 10 years on average. These products were the first to enter the mass market but have
now largely fallen out of favour.

Electrochemical

This is a type of fuel cell that, instead of being designed to produce power, is designed to produce
a current that is precisely related to the amount of the target gas (in this case carbon monoxide)
in the atmosphere. Measurement of the current gives a measure of the concentration of carbon
monoxide in the atmosphere. Essentially the electrochemical cell consists of a container, two
electrodes, connection wires and an electrolyte - typically sulfuric acid. Carbon monoxide is
oxidized at one electrode to carbon dioxide while oxygen is consumed at the other electrode. For
carbon monoxide detection, the electrochemical cell has advantages over other technologies in
that it has a highly accurate and linear response to carbon monoxide concentration, requires
minimal power as it operates at room temperature, and has a long lifetime (commercially
available cells now typically have lifetimes of 5 years or greater). Until recently, the cost of
these cells and concerns about their long-term reliability had limited uptake of this technology in
the marketplace, although these concerns are now largely overcome. This is now the
dominant technology in the United States and Europe.

Semiconductor

Thin wires of the semiconductor tin dioxide on an insulating ceramic base provide a sensor
monitored by an integrated circuit. The sensing element needs to be heated to approximately
400 °C in order to operate. Oxygen increases the resistance of the tin dioxide, while carbon
monoxide reduces it, so measuring the resistance of the sensing element allows a monitor to
trigger an alarm. The power demands of this sensor mean that these devices can only be mains-
powered, although a pulsed sensor is now available that has a limited lifetime (months) as a
battery-powered detector. The device usually lasts about 5-10 years on average. This technology
has traditionally found high utility in Japan and the Far East, with some market penetration in
the USA. However, the superior performance of electrochemical cell technology is beginning to
displace it.
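
To make the resistance-to-alarm logic concrete, here is a minimal sketch. MOS gas sensors are often characterised by an empirical power law relating resistance to concentration; the constants below are invented calibration values, not figures from any datasheet.

```python
# Illustrative sketch: inferring CO concentration from the resistance of
# a heated tin dioxide element. Reducing gases such as CO lower the
# resistance; an empirical power law Rs/R0 = (C/C0)**(-ALPHA) is assumed.

R0 = 50_000.0    # resistance in clean air, ohms (assumed calibration)
C0 = 100.0       # reference concentration, ppm (assumed)
ALPHA = 0.6      # empirical sensitivity exponent (assumed)

def co_ppm(rs_ohms: float) -> float:
    """Invert the power law: lower resistance means more CO."""
    return C0 * (rs_ohms / R0) ** (-1.0 / ALPHA)

ALARM_PPM = 150.0
reading = 25_000.0                    # resistance has halved
print(round(co_ppm(reading)))         # ~317 ppm
print(co_ppm(reading) > ALARM_PPM)    # True -> trigger the alarm
```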

Digital
Although all home detectors use an audible alarm signal as the primary indicator, some versions
also offer a digital readout of the CO concentration, in parts per million. Typically, they can
display both the current reading and a peak reading from memory of the highest level measured
over a period of time. These advanced models cost somewhat more but are otherwise similar to
the basic models.

The digital models offer the advantage of being able to observe levels that are below the alarm
threshold, learn about levels that may have occurred during an absence, and assess the degree of
hazard if the alarm sounds. They may also aid emergency responders in evaluating the level of
past or ongoing exposure or danger.

Wireless

Wireless home safety solutions are available that link carbon monoxide detectors to vibrating
pillow pads, strobes or a remote warning handset. These give people who are hard of hearing,
partially sighted, heavy sleepers or infirm the precious minutes needed to wake up and get out
in the event of a carbon monoxide buildup in their property.

Legislation

House builders in Colorado are required to install carbon monoxide detectors in new homes
under a bill signed into law in March 2009.

House Bill 1091 requires installation of the detectors near bedrooms in new and resold homes, as
well as in rented apartments and homes, effective July 1, 2009. The legislation was
introduced after the death of Denver investment banker Parker Lofgren and his family. Lofgren,
39; his wife Caroline, 42; and their children, Owen, 10, and Sophie, 8, were found dead in a
multimillion-dollar home near Aspen, Colorado on Nov. 27, 2008, victims of carbon-monoxide
poisoning.

In New York State, "Amanda's Law" (A6093A/C.367) requires one- and two-family residences
which have fuel-burning appliances to have at least one carbon monoxide alarm installed on the
lowest story having a sleeping area, effective February 22, 2010. Although homes built before
Jan. 1, 2008 will be allowed to have battery-powered alarms, homes built after that date will
need to have hard-wired alarms. In addition, New York State contractors will have to install a
carbon monoxide detector when replacing a fuel burning water heater or furnace if the home is
without an alarm. The law is named for Amanda Hansen, a teenager who died from carbon
monoxide poisoning from a defective boiler while at a sleepover at a friend's house.[11]

Manufacturers
• System Sensor
• DuPont
• First Alert
• Kidde
• KWJ Engineering, Inc.
• Sprue Aegis
• Duomo (UK) Ltd.
• Ei Electronics (Ireland) www.eielectronics.com
• Gas Safe Europe Ltd (www.detectagas.com)

Catalytic bead sensor



A catalytic bead sensor is a type of sensor that is used for gas detection.


Principle

The catalytic bead sensor MSA 94150

The catalytic bead sensor consists of two coils of fine platinum wire, each embedded in a bead of
alumina, connected electrically in a Wheatstone bridge circuit. One of the pellistors is
impregnated with a special catalyst which promotes oxidation whilst the other is treated to inhibit
oxidation. Current is passed through the coils so that they reach a temperature at which oxidation
of a gas readily occurs at the catalysed bead (500-550°C). Passing combustible gas raises the
temperature further, which increases the resistance of the platinum coil in the catalysed bead,
leading to an imbalance of the bridge. This output change is linear for most gases up to and
beyond 100% LEL; response time is a few seconds to detect alarm levels (around 20% LEL),[1]
and at least 12% oxygen by volume is needed for the oxidation.
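
A small numeric sketch of the bridge readout may help. The active bead sits in one arm of the bridge and the compensator bead in another; in clean air the bridge is balanced and the output is zero. The component values are illustrative assumptions.

```python
# Sketch of the Wheatstone bridge imbalance produced when combustible
# gas heats the catalysed bead and raises its coil resistance.

V_SUPPLY = 3.0     # bridge excitation voltage, volts (assumed)
R_FIXED = 100.0    # compensator bead and fixed arms, ohms (assumed)

def bridge_output_mv(r_active: float) -> float:
    """Differential voltage between the two bridge midpoints, in mV."""
    v_active = V_SUPPLY * r_active / (r_active + R_FIXED)
    v_reference = V_SUPPLY / 2.0          # the balanced reference half
    return (v_active - v_reference) * 1000.0

# Zero output in clean air; a small resistance rise gives a near-linear signal.
for r in (100.0, 100.5, 101.0, 102.0):
    print(f"R_active = {r:6.1f} ohm -> {bridge_output_mv(r):6.2f} mV")
```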

Issues

• Catalyst poisoning - because of the direct contact of the gas with the catalytic
surface, the catalyst may be deactivated in some circumstances.
• Sensor drift - decreased sensitivity may occur depending on operating and
ambient conditions.
• Modes of failure - these include poisoning and sinter blockage; they become
apparent during routine maintenance checks.

Chemical field-effect transistor



ChemFET, or chemical field-effect transistor, is a type of field-effect transistor acting as a
chemical sensor. It is a structural analog of a MOSFET transistor, where the charge on the gate
electrode is applied by a chemical process. It may be used to detect atoms, molecules, and ions in
liquids and gases.

ISFET, an ion-sensitive field-effect transistor, is the best known subtype of ChemFET devices.
It is used to detect ions in electrolytes.

ENFET is a ChemFET with an immobilized enzyme layer, used to detect the substrates of specific
enzymatic reactions.

Electrochemical gas sensor



Electrochemical gas sensors are gas detectors that measure the concentration of a target gas by
oxidizing or reducing the target gas at an electrode and measuring the resulting current.


Construction

The sensors contain two or three electrodes, occasionally four, in contact with an electrolyte. The
electrodes are typically fabricated by fixing a high-surface-area precious metal onto a porous
hydrophobic membrane. The working electrode contacts both the electrolyte and the ambient air
to be monitored, usually via a porous membrane. The electrolyte most commonly used is a
mineral acid, but organic electrolytes are also used for some sensors. The electrodes and
electrolyte are usually enclosed in a plastic housing which contains a gas entry hole and electrical
contacts.
Theory of operation

The gas diffuses into the sensor, through the back of the porous membrane, to the working
electrode where it is oxidized or reduced. This electrochemical reaction results in an electric
current that passes through the external circuit. In addition to measuring, amplifying and
performing other signal-processing functions, the external circuit maintains the voltage across the
sensor between the working and counter electrodes for a two-electrode sensor, or between the
working and reference electrodes for a three-electrode cell. At the counter electrode an equal and
opposite reaction occurs, such that if oxidation occurs at the working electrode, then reduction
occurs at the counter electrode.

Diffusion controlled response

The magnitude of the current is controlled by how much of the target gas is oxidized at the
working electrode. Sensors are usually designed so that the gas supply is limited by diffusion and
thus the output from the sensor is linearly proportional to the gas concentration. This linear
output is one of the advantages of electrochemical sensors over other sensor technologies (e.g.
infrared), whose output must be linearized before it can be used. A linear output allows for
more precise measurement of low concentrations and much simpler calibration (only baseline
and one point are needed).
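
The "baseline and one point" calibration is simple enough to show directly. This is a minimal sketch under assumed readings: one measurement in clean air and one in a known span gas fix the whole linear response.

```python
# Two-point calibration of a diffusion-limited electrochemical sensor.
# The currents below are invented example readings, not real sensor data.

BASELINE_UA = 0.02    # current in clean air, microamps (assumed)
SPAN_UA = 2.52        # current in 100 ppm span gas (assumed)
SPAN_PPM = 100.0

SENSITIVITY = (SPAN_UA - BASELINE_UA) / SPAN_PPM    # microamps per ppm

def ppm_from_current(current_ua: float) -> float:
    """Linear response: concentration proportional to (I - baseline)."""
    return (current_ua - BASELINE_UA) / SENSITIVITY

print(ppm_from_current(1.27))    # -> 50.0 ppm
```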

Diffusion control offers another advantage. Changing the diffusion barrier allows the sensor
manufacturer to tailor the sensor to a particular target gas concentration range. In addition, since
the diffusion barrier is primarily mechanical, the calibration of electrochemical sensors tends to
be more stable over time and so electrochemical sensor based instruments require much less
maintenance than some other detection technologies. In principle, the sensitivity can be
calculated based on the diffusion properties of the gas path into the sensor, though experimental
errors in the measurement of the diffusion properties make the calculation less accurate than
calibrating with test gas.[1]

Cross sensitivity

For some gases, such as ethylene oxide, cross sensitivity can be a problem because ethylene oxide
requires a very active working-electrode catalyst and a high operating potential for its oxidation.
Therefore, gases which are more easily oxidized, such as alcohols and carbon monoxide, will also
give a response. Cross-sensitivity problems can, however, be reduced through the use of a
chemical filter, for example a filter that allows the target gas to pass through unimpeded but
reacts with and removes common interferents.

While electrochemical sensors offer many advantages, they are not suitable for every gas. Since
the detection mechanism involves the oxidation or reduction of the gas, electrochemical sensors
are usually only suitable for gases which are electrochemically active, though it is possible to
detect electrochemically inert gases indirectly if the gas interacts with another species in the
sensor that then produces a response.[2] Sensors for carbon dioxide are an example of this
approach and they have been commercially available for several years.
Electronic nose

An electronic nose is a device intended to detect odors or flavors.

Over the last decade, “electronic sensing” or “e-sensing” technologies have undergone important
developments from a technical and commercial point of view. The expression “electronic
sensing” refers to the capability of reproducing human senses using sensor arrays and pattern
recognition systems. Since 1982,[1] research has been conducted to develop technologies,
commonly referred to as electronic noses, that could detect and recognize odors and flavors. The
stages of the recognition process are similar to human olfaction and are performed for
identification, comparison, quantification and other applications. Hedonic evaluation, however,
remains specific to the human nose, since it is related to subjective opinions. These devices have
undergone much development and are now used to fulfill industrial needs.


Other techniques to analyze odors

In all industries, odor assessment is usually performed by human sensory analysis, by
chemosensors, or by gas chromatography (GC, GC/MS). The latter technique gives information
about volatile organic compounds, but the correlation between analytical results and actual odor
perception is not direct due to potential interactions between several odorous components.

Electronic Nose working principle

The electronic nose was developed in order to mimic human olfaction, which functions as a non-
separative mechanism: i.e. an odor / flavor is perceived as a global fingerprint.
Electronic noses include three major parts: a sample delivery system, a detection system, and a
computing system.

The sample delivery system enables the generation of the headspace (volatile compounds) of a
sample, which is the fraction analyzed. The system then injects this headspace into the detection
system of the electronic nose. The sample delivery system is essential to guarantee constant
operating conditions.

The detection system, which consists of a sensor set, is the “reactive” part of the instrument.
When in contact with volatile compounds, the sensors react, which means they experience a
change of electrical properties. Each sensor is sensitive to all volatile molecules but each in their
specific way. Most electronic noses use sensor arrays that react to volatile compounds on
contact: the adsorption of volatile compounds on the sensor surface causes a physical change of
the sensor. A specific response is recorded by the electronic interface transforming the signal
into a digital value. Recorded data are then computed based on statistical models.[2]

The more commonly used sensors include metal oxide semiconductors (MOS), conducting
polymers (CP), quartz crystal microbalance, surface acoustic wave (SAW), and field effect
transistors (MOSFET).

In recent years, other types of electronic noses have been developed that utilize mass
spectrometry or ultra fast gas chromatography as a detection system.[2]

The computing system works to combine the responses of all of the sensors, which represents
the input for the data treatment. This part of the instrument performs global fingerprint analysis
and provides results and representations that can be easily interpreted. Moreover, the electronic
nose results can be correlated to those obtained from other techniques (sensory panel, GC,
GC/MS).

How to perform an analysis

As a first step, an electronic nose needs to be trained with qualified samples so as to build a
reference database. Then the instrument can recognize new samples by comparing the volatile
compound fingerprint to those contained in its database. Thus it can perform qualitative or
quantitative analysis.
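
The train-then-recognise workflow can be sketched in a few lines. Each sample is reduced to a fingerprint vector of sensor-array responses, and recognition is a comparison against the reference database; the nearest-centroid matcher and the data below are illustrative stand-ins for the statistical models real instruments use.

```python
# Toy sketch of electronic-nose training and recognition. The class
# labels and sensor readings are fabricated for illustration.
import numpy as np

# Training fingerprints: rows are samples, columns are sensor responses.
reference = {
    "fresh":   np.array([[0.90, 0.10, 0.30], [1.00, 0.20, 0.25]]),
    "spoiled": np.array([[0.20, 1.10, 0.90], [0.30, 0.90, 1.00]]),
}
centroids = {label: x.mean(axis=0) for label, x in reference.items()}

def recognise(fingerprint: np.ndarray) -> str:
    """Match a new fingerprint to the nearest class centroid."""
    return min(centroids,
               key=lambda lbl: np.linalg.norm(fingerprint - centroids[lbl]))

print(recognise(np.array([0.85, 0.15, 0.30])))    # -> fresh
print(recognise(np.array([0.25, 1.00, 0.95])))    # -> spoiled
```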

Range of applications

Electronic nose instruments are used by Research & Development laboratories, Quality Control
laboratories and process & production departments for various purposes:

in R&D laboratories for:

• Formulation or reformulation of products
• Benchmarking with competitive products
• Shelf life and stability studies
• Selection of raw materials
• Packaging interaction effects
• Simplification of consumer preference test

in Quality Control laboratories for at line quality control such as:

• Conformity of raw materials, intermediate and final products
• Batch to batch consistency
• Detection of contamination, spoilage, adulteration
• Origin or vendor selection
• Monitoring of storage conditions.

In process and production departments for:

• Managing raw material variability
• Comparison with a reference product
• Measurement and comparison of the effects of manufacturing process on
products
• Following-up cleaning in place process efficiency
• Scale-up monitoring
• Cleaning in place monitoring.

Various application notes describe analysis in areas such as Flavor & Fragrance, Food &
Beverage, Packaging, Pharmaceutical, Cosmetic & Perfumes, and Chemical companies. More
recently, electronic noses can also address public concerns about olfactive nuisance, using
networks of on-field devices.[3]

Electrolyte–insulator–semiconductor sensor

An electrolyte–insulator–semiconductor (EIS) sensor is a sensor made of three components:

• an electrolyte containing the chemical species to be measured
• an insulator that allows field-effect interaction, without leakage currents between
the two other components
• a semiconductor to register the chemical changes
The EIS sensor can be used in combination with other structures, for example to construct a
light-addressable potentiometric sensor (LAPS).

Hydrogen sensor

A hydrogen sensor is a gas detector that detects the presence of hydrogen. They contain micro-
fabricated point-contact hydrogen sensors and are used to locate leaks. They are considered low-
cost, compact, durable, and easy to maintain as compared to conventional gas detecting
instruments.[1]


Key Issues

There are five key issues with hydrogen detectors:[2]

• Reliability: functionality should be easily verifiable.
• Performance: detection of 0.5% hydrogen in air or better.
• Response time: less than 1 second.
• Lifetime: at least the time between scheduled maintenance.
• Cost: the goal is $5 per sensor and $30 per controller.

Additional requirements

• Measurement range coverage of 0.1%–10.0% concentration[3]
• Operation in temperatures of -30°C to 80°C
• Accuracy within 5% of full scale
• Function in an ambient air gas environment within a 10%–98% relative
humidity range
• Resistance to hydrocarbon and other interference
• Lifetime greater than 10 years

Types of microsensors

There are various types of hydrogen microsensors, which use different mechanisms to detect the
gas. Palladium is used in many of these, because it selectively absorbs hydrogen gas and forms
the compound palladium hydride.[4] Palladium-based sensors have a strong temperature
dependence which makes their response time too large at very low temperatures.[5] Palladium
sensors have to be protected against carbon monoxide, sulfur dioxide and hydrogen sulfide.

Optical fibre hydrogen sensors

Several types of optical fibre surface plasmon resonance (SPR) sensor are used for the point-
contact detection of hydrogen:

• Fiber Bragg grating coated with a palladium layer - Detects hydrogen via the
mechanical strain induced in the grating as the palladium layer expands on
absorbing hydrogen.
• Micromirror - With a palladium thin layer at the cleaved end, detecting
changes in the backreflected light.
• Tapered fibre coated with palladium - Hydrogen changes the refractive index
of the palladium, and consequently the amount of losses in the evanescent
wave.

Other types of hydrogen sensors

• MEMS hydrogen sensor - The combination of nanotechnology and
microelectromechanical systems (MEMS) technology allows the production of
a hydrogen microsensor that functions properly at room temperature. The
hydrogen sensor is coated with a film consisting of nanostructured indium
oxide (In2O3) and tin oxide (SnO2).[6]
• Thin film sensor - A palladium thin film sensor is based on an opposing
property that depends on the nanoscale structures within the thin film. In the
thin film, nanosized palladium particles swell when the hydride is formed, and
in the process of expanding, some of them form new electrical connections
with their neighbors. The resistance decreases because of the increased
number of conducting pathways.[2][7]
• Thick film sensor - Thick film hydrogen sensors rely on the fact that palladium
hydride's electrical resistance is greater than the metal's resistance. The
absorption of hydrogen causes a measurable increase in electrical resistance.
• Chemochromic hydrogen sensor - Reversible and irreversible chemochromic
hydrogen sensors, a smart pigment paint that visually identifies hydrogen
leaks by a change in color. The sensor is also available as tape. [8] [9]
• Diode based Schottky sensor - A Schottky diode-based hydrogen gas sensor
employs a palladium-alloy gate. Hydrogen can be selectively absorbed in the
gate, lowering the Schottky energy barrier.[10] A Pd/InGaP metal-
semiconductor (MS) Schottky diode can detect a concentration of 15 parts
per million (ppm) H2 in air.[11] Silicon carbide semiconductor or silicon
substrates are used.
• Metallic LaMg2Ni, which is electrically conductive, absorbs hydrogen near
ambient conditions, forming the nonmetallic hydride LaMg2NiH7, an
insulator.[12]

Sensors are typically calibrated at the manufacturing factory and are valid for the service life of
the unit.

Enhancement

Siloxane enhances the sensitivity and reaction time of hydrogen sensors.[4] Detection of hydrogen
levels as low as 25 ppm can be achieved, far below hydrogen's lower explosive limit of around
40,000 ppm.

Hydrogen sulfide sensor



A hydrogen sulfide sensor or H2S sensor is a gas sensor for the measurement of hydrogen
sulfide.[1]


Principle

The H2S sensor is a metal oxide semiconductor (MOS) sensor which operates by a reversible
change in resistance caused by adsorption and desorption of hydrogen sulfide in a film of
hydrogen sulfide-sensitive material such as tin oxide thick films and gold thin films. Current
sensors respond to concentrations from 25 ppb to 10 ppm in under one minute.

Applications
This type of sensor has been under constant development because of the toxic and corrosive
nature of hydrogen sulfide:

• The H2S sensor is used to detect hydrogen sulfide in the hydrogen feed
stream of fuel cells to prevent catalyst poisoning and to measure the quality
of guard beds used to remove sulfur from hydrocarbon fuels.[2]

Research

• 2004 - a nanocrystalline SnO2–Ag on ceramic wafer sensor is reported.[3]

Infrared point sensor



An infrared point sensor is a point gas detector based on nondispersive infrared sensor
technology.


Principle
Dual source and dual receivers are used for self compensation of changes in alignment, light
source intensity and component efficiency. The transmitted beams from two infrared sources are
superimposed onto an internal beam splitter. 50% of the overlapping sample and reference signal
is passed through the gas measuring path and reflected back onto the measuring detector. The
presence of combustible gas will reduce the intensity of the sample beam and not the reference
beam, with the difference between these two signals being proportional to the concentration of
gas present in the measuring path. The other 50% of the overlapped signal passes through the
beam splitter and onto the compensation detector. The compensation detector monitors the
intensity of the two infrared sources and automatically compensates for any long term drift.
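
The compensation arithmetic reduces to comparing the two beams after normalising out source drift. A minimal sketch, with assumed signal names and readings:

```python
# Sketch of dual-beam compensation in an infrared point detector: gas
# attenuates only the sample beam, while the compensation detector sees
# source drift common to both beams. All values are illustrative.

def gas_signal(sample_det: float, reference_det: float,
               compensation_det: float) -> float:
    """Drift-corrected difference between reference and sample beams."""
    # Dividing by the compensation reading cancels long-term changes in
    # the intensity of the infrared sources.
    return (reference_det - sample_det) / compensation_det

print(gas_signal(1.00, 1.00, 2.00))   # 0.0: clean air, beams identical
print(gas_signal(0.80, 1.00, 2.00))   # 0.1: sample beam attenuated by gas
```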

Mean time between failures may go up to 15 years.


Micro heaters

Micro heaters can be used to raise the temperature of optical surfaces above ambient to
enhance performance and to prevent condensation on the optical surfaces.

Range

Toxic gases are measured in the low parts per million (ppm) range. Flammable gases are
measured in the 0–100% lower flammable limit (LFL), or lower explosive limit (LEL), range.

Ion selective electrode



An ion-selective electrode (ISE), also known as a specific ion electrode (SIE), is a transducer
(or sensor) that converts the activity of a specific ion dissolved in a solution into an electrical
potential, which can be measured by a voltmeter or pH meter. The voltage is theoretically
dependent on the logarithm of the ionic activity, according to the Nernst equation. The sensing
part of the electrode is usually made as an ion-specific membrane, along with a reference
electrode. Ion-selective electrodes are used in biochemical and biophysical research, where
measurements of ionic concentration in an aqueous solution are required, usually on a real time
basis.
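
The logarithmic Nernst dependence is easy to state numerically: at 25 °C an ideal electrode for a singly-charged ion shifts by about 59.2 mV for every tenfold change in activity. A minimal sketch (the standard potential E0 is an assumed placeholder):

```python
# Ideal Nernstian response of an ion-selective electrode:
#   E = E0 + (R*T / (z*F)) * ln(a)
import math

R = 8.314462     # gas constant, J/(mol K)
F = 96485.332    # Faraday constant, C/mol

def electrode_potential(activity: float, z: int = 1,
                        e0: float = 0.0, temp_k: float = 298.15) -> float:
    """Potential in volts for ion charge z and activity a (E0 assumed)."""
    return e0 + (R * temp_k) / (z * F) * math.log(activity)

# One decade of activity changes the potential by ~59.2 mV when z = 1:
print(electrode_potential(1e-3) - electrode_potential(1e-4))   # ~0.0592 V
```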

Types of ion-selective membrane

There are four main types of ion-selective membrane used in ion-selective electrodes: glass, solid
state, liquid based, and compound electrodes.

Glass membranes

Glass membranes are made from an ion-exchange type of glass (silicate or chalcogenide). This
type of ISE has good selectivity, but only for several single-charged cations; mainly H+, Na+, and
Ag+. Chalcogenide glass also has selectivity for double-charged metal ions, such as Pb2+, and
Cd2+. The glass membrane has excellent chemical durability and can work in very aggressive
media. A very common example of this type of electrode is the pH glass electrode.

Crystalline membranes

Crystalline membranes are made from mono- or polycrystallites of a single substance. They have
good selectivity, because only ions which can introduce themselves into the crystal structure can
interfere with the electrode response. Selectivity of crystalline membranes can be for both cation
and anion of the membrane-forming substance. An example is the fluoride selective electrode
based on LaF3 crystals.

Ion-exchange resin membranes

Ion-exchange resins are based on special organic polymer membranes which contain a specific
ion-exchange substance (resin). This is the most widespread type of ion-specific electrode. Usage
of specific resins allows preparation of selective electrodes for tens of different ions, both single-
atom or multi-atom. They are also the most widespread electrodes with anionic selectivity.
However, such electrodes have low chemical and physical durability as well as "survival time".
An example is the potassium selective electrode, based on valinomycin as an ion-exchange
agent.

Construction

These electrodes are prepared from glass capillary tubing approximately 2 millimeters in
diameter, a large batch at a time. Polyvinyl chloride is dissolved in a solvent and plasticizers
(typically phthalates) added, in the standard fashion used when making something out of vinyl.
In order to provide the ionic specificity, a specific ion channel or carrier is added to the solution;
this allows the ion to pass through the vinyl, which prevents the passage of other ions and water.

One end of a piece of capillary tubing about an inch or two long is dipped into this solution and
removed to let the vinyl solidify into a plug at that end of the tube. Using a syringe and needle,
the tube is filled with salt solution from the other end, and may be stored in a bath of the salt
solution for an indeterminate period. For convenience in use, the open end of the tubing is fitted
through a tight o-ring into a somewhat larger diameter tubing containing the same salt solution,
with a silver or platinum electrode wire inserted. New electrode tips can thus be changed very
quickly by simply removing the older electrode and replacing it with a new one.

Applications

In use, the electrode wire is connected to one terminal of a galvanometer or pH meter, the other
terminal of which is connected to a reference electrode, and both electrodes are immersed in the
solution to be tested. The passage of the ion through the vinyl via the carrier or channel creates
an electrical current, which registers on the galvanometer; by calibrating against standard
solutions of varying concentration, the ionic concentration in the tested solution can be estimated
from the galvanometer reading.

In practice there are several issues which affect this measurement, and different electrodes from
the same batch will differ in their properties. Leakage between the vinyl and the wall of the
capillary, thereby allowing passage of any ions, will cause the meter reading to show little or no
change between the various calibration solutions, and requires that that electrode be discarded.
Similarly, with use the ion-sensitive channels in the vinyl appear to gradually become blocked or
otherwise inactivated, causing the electrode to lose sensitivity. The response of the electrode and
galvanometer is temperature sensitive, and also 'drifts' over time, requiring recalibration
frequently during a series of measurements, ideally at least one calibration sample before and
after each test sample. On the other hand, after immersion in the solution there is a transient
'settling time' which can be five minutes or even longer, before the electrode and galvanometer
equilibrate to a new reading; so that timing of the reading is critical in order to find the most
accurate 'window' after the response has settled, but before it has drifted appreciably.

Enzyme electrodes

Enzyme electrodes are not true ion-selective electrodes, but they are usually considered
within the ion-specific electrode topic. Such an electrode has a "double reaction" mechanism: an
enzyme reacts with a specific substance, and the product of this reaction (usually H+ or OH-) is
detected by a true ion-selective electrode, such as a pH-selective electrode. All these reactions
occur inside a special membrane which covers the true ion-selective electrode, which is why
enzyme electrodes are sometimes considered ion-selective. An example is the glucose selective
electrode.

Interferences
The most serious problem limiting use of ion-selective electrodes is interference from other,
undesired, ions. No ion-selective electrodes are completely ion-specific; all are sensitive to other
ions having similar physical properties, to an extent which depends on the degree of similarity.
Most of these interferences are weak enough to be ignored, but in some cases the electrode may
actually be much more sensitive to the interfering ion than to the desired ion, requiring that the
interfering ion be present only in relatively very low concentrations, or entirely absent. In
practice, the relative sensitivities of each type of ion-specific electrode to various interfering ions
are generally known and should be checked for each case; however, the precise degree of
interference depends on many factors, preventing precise correction of readings. Instead, the
calculation of relative degree of interference from the concentration of interfering ions can only
be used as a guide to determine whether the approximate extent of the interference will allow
reliable measurements, or whether the experiment will need to be redesigned so as to reduce the
effect of interfering ions. The nitrate electrode has various ionic interferences, i.e. perchlorate,
iodide, chloride, and sulfate. These interferences vary markedly in the extent to which they
interfere. Thus, perchlorate gives a response which is about 50,000x as great as an equal amount
of nitrate, while 1000x as much sulfate produces about a 10% error in the reading.[1] Chloride
causes a 10% error when present at about 30x the nitrate level, but can be removed by the
addition of silver sulfate. Alternatively, nitrate can be determined by using an ammonia gas-
sensing electrode. This technique allows the user to determine both ammonium and nitrate ions
sequentially. The procedure makes use of the reducing ability of titanium chloride. Trivalent
titanium reduces any nitrate ion, up to 20 ppm, to ammonium ion (i.e., reverse nitrification). At
pH 12-13, any ammonium ion in the sample is converted to ammonia gas and is ultimately
detected by the electrode.[2]
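
Interference of this kind is commonly quantified with selectivity coefficients in the Nikolsky-Eisenman extension of the Nernst equation, in which the electrode responds to an apparent activity a_i + Σ K_ij·a_j (with a charge-ratio exponent on a_j, omitted here for same-charge ions). The sketch below derives rough coefficients from the error figures quoted above; they are back-of-envelope values, not tabulated constants.

```python
# Rough interference estimate for a nitrate electrode. Selectivity
# coefficients are inferred from the text (e.g. ~30x chloride gives a
# ~10% error, so K ~ 0.1/30); they are illustrative, not authoritative.

SELECTIVITY = {
    "perchlorate": 5.0e4,       # responds ~50,000x as strongly as nitrate
    "chloride":    0.1 / 30,    # ~10% error at 30x the nitrate level
    "sulfate":     0.1 / 1000,  # ~10% error at 1000x the nitrate level
}

def apparent_nitrate(nitrate: float, interferents: dict) -> float:
    """Activity the electrode 'sees' (charge-ratio exponent omitted)."""
    return nitrate + sum(SELECTIVITY[ion] * a
                         for ion, a in interferents.items())

true_no3 = 1.0e-4
seen = apparent_nitrate(true_no3, {"chloride": 30 * true_no3})
print(f"relative error: {(seen - true_no3) / true_no3:.0%}")   # ~10%
```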

Nondispersive infrared sensor



A nondispersive infrared (NDIR) sensor is a simple spectroscopic device often used
as a gas detector.

NDIR-Analyzer with one double tube for CO and another double tube for
hydrocarbons

Principle

The main components are an infrared source (lamp), a sample chamber or light tube, a
wavelength filter, and an infrared detector. The gas is pumped (or diffuses) into the sample
chamber, and the gas concentration is measured electro-optically by its absorption of a specific
wavelength in the infrared (IR). The IR light is directed through the sample chamber towards the
detector. In parallel there is another chamber with an enclosed reference gas, typically nitrogen.
The detector has an optical filter in front of it that eliminates all light except the wavelength that
the selected gas molecules can absorb. Ideally, other gas molecules do not absorb light at this
wavelength and do not affect the amount of light reaching the detector.
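
The electro-optical measurement itself follows the Beer-Lambert law: the detector signal falls exponentially with the amount of absorbing gas in the path. A minimal sketch inverting that relation, with an assumed combined absorptivity-path constant:

```python
# Beer-Lambert sketch for an NDIR channel: I = I0 * exp(-k * C), where
# k lumps absorptivity and path length. The constant is assumed, not
# taken from any particular instrument.
import math

K_PER_PPM = 0.002    # absorptivity x path length, per ppm (assumed)

def concentration_ppm(i_sample: float, i_zero: float) -> float:
    """Recover concentration from attenuation of the filtered signal.

    i_zero is the zero-gas (or reference-chamber) reading and i_sample
    the reading through the sample chamber."""
    return -math.log(i_sample / i_zero) / K_PER_PPM

print(round(concentration_ppm(0.82, 1.00)))    # ~99 ppm for 18% absorption
```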

As many gases absorb well in the IR area, it is often necessary to compensate for interfering
components. For instance, CO2 and H2O often initiate cross sensitivity in the infrared spectrum.
As many measurements in the IR area are cross sensitive to H2O it is difficult to analyse for
instance SO2 and NO2 in low concentrations using the infrared light principle.

The IR signal from the source is usually chopped or modulated so that thermal background
signals can be offset from the desired signal.[1]

Microwave chemistry sensor



Microwave chemistry sensor or Surface acoustic wave (SAW) sensors consist of an input
transducer, a chemically adsorbent polymer film, and an output transducer on a piezoelectric
substrate, which is typically quartz. The input transducer launches an acoustic wave that travels
through the chemical film and is detected by the output transducer. The Sandia-made device runs
at a very high frequency (approximately 525 MHz), and the velocity and attenuation of the signal
are sensitive to the viscoelasticity and mass of the thin film. SAW sensors have been able to
distinguish organophosphates, chlorinated hydrocarbons, ketones, alcohols, aromatic
hydrocarbons, saturated hydrocarbons, and water. The SAW device used in these tests has four
channels—each channel consists of a transmitter and a receiver, separated by a small distance.
Three of the four channels have a polymer deposited on the substrate between the transmitter and
receiver. The purpose of the polymers is to adsorb chemicals of interest, with different polymers
having different affinities to various chemicals. When a chemical is adsorbed, the mass of the
polymer increases, causing a slight change in phase of the acoustic signal relative to the
reference (fourth) channel, which does not contain a polymer. The SAW device also contains
three Application Specific Integrated Circuit chips (ASICs), which contain the electronics to
analyze the signals and provide a DC voltage signal proportional to the phase shift. The SAW
device, containing the transducers and ASICs, is bonded to a piece of quartz glass, which is
placed in a leadless chip carrier (LCC). Wire bonds connect the terminals of the leadless chip
carrier to the SAW circuits.

Applications

The microwave chemistry sensor can detect several chemical materials, including:

• Lead
• Mercury
• Oxygen

Nitrogen oxide sensor



A nitrogen oxide sensor or NOx sensor is typically a high temperature device built to detect
nitrogen oxides in combustion environments such as an automobile or truck tailpipe or a
smokestack.


Availability
Continental Automotive Systems/NGK are in production of a NOx sensor for automotive and
truck applications. Several automobile and related companies such as Delphi, Ford, Chrysler, and
Toyota have also put extensive research into development of NOx sensors. Many academic and
government labs are pushing to develop the sensors as well. The term NOx actually represents
several forms of nitrogen oxides such as NO (nitric oxide), NO2 (nitrogen dioxide) and N2O
(nitrous oxide aka laughing gas). In a gasoline engine NO is the most common form of NOx
being around 93% while NO2 is around 5% and the rest is N2O. There are other forms of NOx
such as N2O4 (the dimer of NO2), which only exists at lower temperatures, and N2O5, for
example. However, owing to lower combustion temperatures, diesel engines produce lower
engine-out NOx emissions than do spark-ignition gasoline engines, but the lack of NOx
after-treatment causes diesel engines to emit significantly more NOx at the tailpipe compared to
a typical gasoline engine with a 3-way catalyst. In addition, the diesel oxidation catalyst
significantly increases the fraction of NO2 in "NOx" by oxidizing over 50% of NO using the
excess oxygen in the diesel exhaust gases.

Motivating factors

The drive to develop a NOx sensor comes from environmental factors. NOx gases can cause
various problems such as smog and acid rain. Many governments around the world have passed
laws to limit their emissions, along with those of other combustion gases such as SOx (oxides of
sulfur), CO (carbon monoxide), CO2 (carbon dioxide) and hydrocarbons. Companies have realized
that one way of minimizing NOx emissions is to first detect them and then employ some sort of
feedback loop in the combustion process, minimizing NOx production by, for example,
combustion optimization or regeneration of NOx traps.

Difficulties

Harsh environment

Due to the high temperature of the combustion environment, only certain types of material can
operate in situ. The majority of NOx sensors developed have been made out of ceramic-type
metal oxides, the most common being yttria-stabilized zirconia (YSZ), which is currently
used in the decades-old oxygen sensor. The YSZ is compacted into a dense ceramic and
conducts oxygen ions (O2-) at the high temperatures of a tailpipe, such as 400 °C and above. To
get a signal from the sensor, a pair of high-temperature electrodes, such as noble metals (platinum,
gold, or palladium) or other metal oxides, are placed onto the surface, and an electrical signal such
as the change in voltage or current is measured as a function of NOx concentration.

High sensitivity and durability required

The levels of NO are around 100-2000 ppm (parts per million) and NO2 20-200 ppm in a range
of 1-10% O2. The sensor has to be very sensitive to pick up these levels.

The main problems that have limited the development of a successful NOx sensor (and which are
typical of many sensors) are selectivity, sensitivity, stability, reproducibility, response time, limit
of detection and cost. In addition, in the harsh combustion environment, the high gas flow rate
can cool the sensor and alter the signal, the electrodes can delaminate over time, and soot
particles can degrade the materials.

Conclusion
The NOx sensing element mentioned above is only a part of the whole device, which also
includes the packaging and electronics. The development of a very good NOx sensor is highly
desirable and has the potential to serve a large market. Thus there is a large push to produce a
robust sensor that works reliably and accurately over a wide temperature range. It is currently
one of the most sought after combustion gas sensors. Degradation by CO2 and SO2 gases,
however, remains a major problem of NOx sensors.
Olfactometer

An olfactometer is an instrument typically used to detect and measure ambient odor dilution.
Olfactometers are utilized in conjunction with human subjects in laboratory settings, most often
in market research, to quantify and qualify human olfaction.[1] Olfactometers are used to gauge
the odor detection threshold of substances. To measure intensity, olfactometers introduce an
odorous gas as a baseline against which other odors are compared.

Alternatively, an olfactometer is a device used for producing aromas in a precise and controlled
manner.

Optode

An optode or optrode is an optical sensor device that optically measures a specific substance
usually with the aid of a chemical transducer.


Construction
An optode requires three components to function: a chemical that responds to an analyte, a
polymer to immobilise the chemical transducer and instrumentation (optical fibre, light source,
detector and other electronics). Optodes usually have the polymer matrix coated onto the tip of
an optical fibre, but in the case of evanescent wave optodes the polymer is coated on a section of
fibre that has been unsheathed.

Operation

Optodes can apply various optical measurement schemes such as reflection, absorption,
evanescent wave, luminescence (fluorescence and phosphorescence), chemiluminescence, and
surface plasmon resonance. By far the most popular methodology is luminescence.

Luminescence in solution obeys the linear Stern-Volmer relationship. The fluorescence of a
molecule is quenched by specific analytes, e.g. ruthenium complexes are quenched by oxygen.
When a fluorophore is immobilised within a polymer matrix, a myriad of micro-environments is
created. The micro-environments reflect varying diffusion coefficients for the analyte. This
leads to a non-linear relationship between the fluorescence and the quencher (analyte). This
relationship is modelled in various ways; the most popular model is the two-site model developed
by James Demas (University of Virginia).
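
Both relations are compact enough to sketch. In solution the ratio I0/I grows linearly with quencher concentration; in a polymer matrix, summing two Stern-Volmer terms (a two-site model) reproduces the observed curvature. All constants below are illustrative assumptions.

```python
# Stern-Volmer quenching: linear in solution, two-site in a polymer.

KSV = 0.05    # Stern-Volmer constant per unit O2 (assumed)

def solution_ratio(o2: float) -> float:
    """Linear Stern-Volmer law in solution: I0/I = 1 + Ksv*[O2]."""
    return 1.0 + KSV * o2

def two_site_intensity(o2: float, f1: float = 0.8,
                       ksv1: float = 0.08, ksv2: float = 0.005) -> float:
    """I/I0 in a polymer: two quencher-accessible site fractions, each
    with its own Stern-Volmer constant (a Demas-style two-site model)."""
    return f1 / (1.0 + ksv1 * o2) + (1.0 - f1) / (1.0 + ksv2 * o2)

for o2 in (0.0, 10.0, 50.0, 100.0):
    print(f"O2 = {o2:5.1f} -> I/I0 = {two_site_intensity(o2):.3f}")
```

The shrinking step between successive outputs mirrors the loss of sensitivity at high oxygen levels described in the next paragraph.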

The signal (fluorescence) to oxygen ratio is not linear, and an optode is most sensitive at low
oxygen concentration, i.e. the sensitivity decreases as oxygen concentration increases. The
optode sensors can however work in the whole region 0–100% oxygen saturation in water, and
the calibration is done the same way as with the Clark type sensor. No oxygen is consumed and
hence the sensor is stirring insensitive, but the signal will stabilize more quickly if the sensor is
stirred after being put into the sample.

Popularity

Optical sensors are growing in popularity due to their low cost, low power requirements and long-
term stability. They provide viable alternatives to electrode-based sensors or more complicated
analytical instrumentation.

Major international conferences are devoted to their development e.g. Europtrode VIII Tübingen
2006, OFS 18, Cancun 2006.

Oxygen sensor


An oxygen sensor, or lambda sensor, is an electronic device that measures the proportion of
oxygen (O2) in the gas or liquid being analyzed. It was developed by Robert Bosch GmbH during
the late 1960s under supervision by Dr. Günter Bauman. The original sensing element is made
with a thimble-shaped zirconia ceramic coated on both the exhaust and reference sides with a
thin layer of platinum and comes in both heated and unheated forms. The planar-style sensor
entered the market in 1998 (also pioneered by Robert Bosch GmbH) and significantly reduced
the mass of the ceramic sensing element as well as incorporating the heater within the ceramic
structure. This resulted in a sensor that both started operating sooner and responded faster. The
most common application is to measure the exhaust gas concentration of oxygen for internal
combustion engines in automobiles and other vehicles. Divers also use a similar device to
measure the partial pressure of oxygen in their breathing gas.

Scientists use oxygen sensors to measure respiration or production of oxygen and use a different
approach. Oxygen sensors are used in oxygen analyzers which find a lot of use in medical
applications such as anesthesia monitors, respirators and oxygen concentrators.

There are many different ways of measuring oxygen and these include technologies such as
zirconia, electrochemical (also known as Galvanic), infrared, ultrasonic and very recently laser.
Each method has its own advantages and disadvantages.

Automotive applications


A three-wire oxygen sensor suitable for use in a Volvo 240 or similar.

Automotive oxygen sensors, colloquially known as O2 sensors, make modern electronic fuel
injection and emission control possible. They help determine, in real time, if the air fuel ratio of a
combustion engine is rich or lean. Since oxygen sensors are located in the exhaust stream, they
do not directly measure the air or the fuel entering the engine. But when information from
oxygen sensors is coupled with information from other sources, it can be used to indirectly
determine the air-to-fuel ratio. Closed-loop feedback-controlled fuel injection varies the fuel
injector output according to real-time sensor data rather than operating with a predetermined
(open-loop) fuel map. In addition to enabling electronic fuel injection to work efficiently, this
emissions control technique can reduce the amounts of both unburnt fuel and oxides of nitrogen
entering the atmosphere. Unburnt fuel is pollution in the form of air-borne hydrocarbons, while
oxides of nitrogen (NOx gases) are a result of combustion chamber temperatures exceeding 1,300
kelvins due to excess air in the fuel mixture and contribute to smog and acid rain. Volvo was the
first automobile manufacturer to employ this technology in the late 1970s, along with the 3-way
catalyst used in the catalytic converter.

The sensor does not actually measure oxygen concentration, but rather the amount of oxygen
needed to completely oxidize any remaining combustibles in the exhaust gas. Rich mixture
causes an oxygen demand. This demand causes a voltage to build up, due to transportation of
oxygen ions through the sensor layer. Lean mixture causes low voltage, since there is an oxygen
excess.

Modern spark-ignited combustion engines use oxygen sensors and catalytic converters in order
to reduce exhaust emissions. Information on oxygen concentration is sent to the engine
management computer or ECU, which adjusts the amount of fuel injected into the engine to
compensate for excess air or excess fuel. The ECU attempts to maintain, on average, a certain
air-fuel ratio by interpreting the information it gains from the oxygen sensor. The primary goal is
a compromise between power, fuel economy, and emissions, and in most cases is achieved by an
air-fuel-ratio close to stoichiometric. For spark-ignition engines (such as those that burn gasoline,
as opposed to diesel), the three types of emissions modern systems are concerned with are:
hydrocarbons (which are released when the fuel is not burnt completely, such as when misfiring
or running rich), carbon monoxide (which is the result of running slightly rich) and NOx (which
dominate when the mixture is lean). Failure of these sensors, either through normal aging, the
use of leaded fuels, or fuel contaminated with silicones or silicates, for example, can lead to
damage of an automobile's catalytic converter and expensive repairs.
Tampering with or modifying the signal that the oxygen sensor sends to the engine computer can
be detrimental to emissions control and can even damage the vehicle. When the engine is under
low-load conditions (such as when accelerating very gently, or maintaining a constant speed), it
is operating in "closed-loop mode." This refers to a feedback loop between the ECU and the
oxygen sensor(s) in which the ECU adjusts the quantity of fuel and expects to see a resulting
change in the response of the oxygen sensor. This loop forces the engine to operate both slightly
lean and slightly rich on successive loops, as it attempts to maintain a mostly stoichiometric ratio
on average. If modifications cause the engine to run moderately lean, there will be a slight
increase in fuel economy, sometimes at the expense of increased NOx emissions, much higher
exhaust gas temperatures, and sometimes a slight increase in power that can quickly turn into
misfires and a drastic loss of power, as well as potential engine damage, at ultra-lean air-to-fuel
ratios. If modifications cause the engine to run rich, then there will be a slight increase in power
to a point (after which the engine starts flooding from too much unburned fuel), but at the cost of
decreased fuel economy, and an increase in unburned hydrocarbons in the exhaust which causes
overheating of the catalytic converter. Prolonged operation at rich mixtures can cause
catastrophic failure of the catalytic converter (see backfire). The ECU also controls the spark
engine timing along with the fuel injector pulse width, so modifications which alter the engine to
operate either too lean or too rich may result in inefficient fuel consumption whenever fuel is
ignited too soon or too late in the combustion cycle.

When an internal combustion engine is under high load (e.g. wide open throttle), the output of
the oxygen sensor is ignored, and the ECU automatically enriches the mixture to protect the
engine, as misfires under load are much more likely to cause damage. This is referred to as an
engine running in 'open-loop mode'. Any changes in the sensor output will be ignored in this
state. In many cars (with the exception of some turbocharged models), inputs from the air flow
meter are also ignored, as they might otherwise lower engine performance due to the mixture
being too rich or too lean, and increase the risk of engine damage due to detonation if the
mixture is too lean.

Function of a lambda probe

Lambda probes are used to reduce vehicle emissions by ensuring that engines burn their fuel
efficiently and cleanly. Robert Bosch GmbH introduced the first automotive lambda probe in
1976,[1] and it was first used by Volvo and Saab that year. The sensors were introduced in the
US from about 1980, and were required on all models of cars in many countries in Europe in
1993.

By measuring the proportion of oxygen in the remaining exhaust gas, and by knowing the
volume and temperature of the air entering the cylinders amongst other things, an ECU can use
look-up tables to determine the amount of fuel required to burn at the stoichiometric ratio (14.7:1
air:fuel by mass for gasoline) to ensure complete combustion.

The probe

The sensor element is a ceramic cylinder plated inside and out with porous platinum electrodes;
the whole assembly is protected by a metal gauze. It operates by measuring the difference in
oxygen between the exhaust gas and the external air, and generates a voltage or changes its
resistance depending on the difference between the two.

The sensors only work effectively when heated to approximately 316 °C (600 °F), so most newer
lambda probes have heating elements encased in the ceramic that bring the ceramic tip up to
temperature quickly. Older probes, without heating elements, would eventually be heated by the
exhaust, but there is a time lag between when the engine is started and when the components in
the exhaust system come to a thermal equilibrium. The length of time required for the exhaust
gases to bring the probe to temperature depend on the temperature of the ambient air and the
geometry of the exhaust system. Without a heater, the process may take several minutes. There
are pollution problems that are attributed to this slow start-up process, including a similar
problem with the working temperature of a catalytic converter.

The probe typically has four wires attached to it: two for the lambda output, and two for the
heater power, although some automakers use a common ground for the sensor element and
heaters, resulting in three wires. Earlier non-electrically-heated sensors had one or two wires.

Operation of the probe

Zirconia sensor

A planar zirconia sensor (schematic picture)

The zirconium dioxide, or zirconia, lambda sensor is based on a solid-state electrochemical fuel
cell called the Nernst cell. Its two electrodes provide an output voltage corresponding to the
quantity of oxygen in the exhaust relative to that in the atmosphere. An output voltage of 0.2 V
(200 mV) DC represents a "lean mixture" of fuel and oxygen, where the amount of oxygen
entering the cylinder is sufficient to fully oxidize the carbon monoxide (CO), produced in
burning the air and fuel, into carbon dioxide (CO2). An output voltage of 0.8 V (800 mV) DC
represents a "rich mixture", one which is high in unburned fuel and low in remaining oxygen.
The ideal setpoint is approximately 0.45 V (450 mV) DC. This is where the quantities of air and
fuel are in the optimum ratio, which is ~0.5% lean of the stoichiometric point, such that the
exhaust output contains minimal carbon monoxide.
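For illustration, the ideal output of such a concentration cell follows the Nernst relation, E = (RT/4F) ln(pO2,reference / pO2,exhaust). The sketch below combines that relation with the voltage bands quoted above; the exhaust oxygen partial pressure and the classification thresholds are illustrative assumptions, not values from the source.

```python
import math

R = 8.314    # J/(mol*K), gas constant
F = 96485.0  # C/mol, Faraday constant

def nernst_voltage(p_o2_ref: float, p_o2_exh: float, temp_k: float) -> float:
    """Ideal output of an O2 concentration (Nernst) cell; four electrons
    are transferred per O2 molecule, hence the factor of 4."""
    return (R * temp_k) / (4 * F) * math.log(p_o2_ref / p_o2_exh)

def classify(voltage: float) -> str:
    """Interpret a narrow-band reading against the setpoints quoted above."""
    if voltage < 0.3:
        return "lean"
    if voltage > 0.6:
        return "rich"
    return "near stoichiometric"

v = nernst_voltage(0.209, 1e-18, 900.0)  # air reference vs. trace exhaust O2 (atm)
print(round(v, 2), classify(v))          # ~0.77 V -> rich
```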

The voltage produced by the sensor is nonlinear with respect to oxygen concentration. The
sensor is most sensitive near the stoichiometric point and less sensitive when either very lean or
very rich.

The engine control unit (ECU) is a control system that uses feedback from the sensor to adjust
the fuel/air mixture. As in all control systems, the time constant of the sensor is important; the
ability of the ECU to control the fuel-air ratio depends upon the response time of the sensor. An
aging or fouled sensor tends to have a slower response time, which can degrade system
performance. The shorter the time period, the higher the so-called "cross count" [2] and the more
responsive the system.
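A hedged illustration of how a cross count could be computed from a sampled sensor trace; the sample values and the use of the 0.45 V setpoint as the crossing level are illustrative assumptions.

```python
# Count crossings of the ~0.45 V stoichiometric setpoint in a voltage trace.

SETPOINT = 0.45

def cross_count(samples: list[float]) -> int:
    """Number of times consecutive samples straddle the setpoint."""
    return sum(1 for prev, cur in zip(samples, samples[1:])
               if (prev - SETPOINT) * (cur - SETPOINT) < 0)

trace = [0.2, 0.6, 0.7, 0.3, 0.2, 0.8, 0.4]  # volts, hypothetical samples
print(cross_count(trace))                     # 4 crossings
```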

The zirconia sensor is of the "narrow band" type, referring to the narrow range of fuel/air ratios
to which it responds.

Wideband zirconia sensor

A planar wideband zirconia sensor (schematic picture)

A variation on the zirconia sensor, called the "wideband" sensor, was introduced by Robert
Bosch in 1994 but is (as of 2006) used in only a few vehicles (such as the Subaru Impreza WRX
when equipped with a manual transmission). It is based on a planar zirconia element, but also
incorporates an electrochemical gas pump. An electronic circuit containing a feedback loop
controls the gas pump current to keep the output of the electrochemical cell constant, so that the
pump current directly indicates the oxygen content of the exhaust gas. This sensor eliminates the
lean-rich cycling inherent in narrow-band sensors, allowing the control unit to adjust the fuel
delivery and ignition timing of the engine much more rapidly. In the automotive industry this
sensor is also called a UEGO (for Universal Exhaust Gas Oxygen) sensor. UEGO sensors are
also commonly used in aftermarket dyno tuning and high-performance driver air-fuel display
equipment. The wideband zirconia sensor is used in stratified fuel injection systems, and can
now also be used in diesel engines to satisfy the forthcoming EURO and ULEV emission limits.

Wideband sensors have three elements:

• Oxygen ion pump
• Narrowband zirconia sensor
• Heating element

The wiring diagram for the wideband sensor typically has six wires:

• resistive heating element (two wires)
• sensor
• pump
• calibration resistor
• common

Titania sensor

A less common type of narrow-band lambda sensor has a ceramic element made of titanium
dioxide (titania). This type does not generate its own voltage, but changes its electrical resistance
in response to the oxygen concentration. The resistance of the titania is a function of both the
oxygen partial pressure and the temperature, so some sensors are paired with a gas temperature
sensor to compensate for the resistance change due to temperature. The resistance change due to
temperature is about 1/1000th of the change due to oxygen concentration. Fortunately, at
lambda = 1 there is a large step in oxygen partial pressure, so the resistance change between rich
and lean is typically a factor of about 1000, depending on the temperature.

As titania is an N-type semiconductor with a structure TiO2-x, the x defects in the crystal lattice
conduct the charge. So, for fuel-rich exhaust the resistance is low, and for fuel-lean exhaust the
resistance is high. The control unit feeds the sensor with a small electrical current and measures
the resulting voltage across the sensor, which varies from near 0 volts to about 5 volts. Like the
zirconia sensor, this type is nonlinear, such that it is sometimes simplistically described as a
binary indicator, reading either "rich" or "lean". Titania sensors are more expensive than zirconia
sensors, but they also respond faster.
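A minimal sketch of this readout scheme, assuming a hypothetical excitation current and threshold; real control units use calibrated, temperature-compensated values.

```python
# Titania readout: drive a small known current, measure the voltage,
# infer resistance via Ohm's law. All numbers are illustrative.

DRIVE_CURRENT_A = 5e-6   # assumed excitation current

def element_resistance(measured_v: float) -> float:
    """Resistance of the titania element from the voltage across it."""
    return measured_v / DRIVE_CURRENT_A

def mixture(measured_v: float, threshold_v: float = 2.5) -> str:
    """Low resistance (low voltage) -> rich; high -> lean. The 2.5 V
    midpoint of the 0-5 V swing is a hypothetical threshold."""
    return "rich" if measured_v < threshold_v else "lean"

print(element_resistance(0.4), mixture(0.4))   # 80000.0 ohms, rich
print(element_resistance(4.6), mixture(4.6))   # 920000.0 ohms, lean
```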

In automotive applications the titania sensor, unlike the zirconia sensor, does not require a
reference sample of atmospheric air to operate properly. This makes the sensor assembly easier
to design against water contamination. While most automotive sensors are submersible, zirconia-
based sensors require a very small supply of reference air from the atmosphere. In theory, the
sensor wire harness and connector are sealed. Air that leaches through the wire harness to the
sensor is assumed to come from an open point in the harness, usually at the ECU, which is housed
in an enclosed space like the trunk or vehicle interior.

Location of the probe in a system

The probe is typically screwed into a threaded hole in the exhaust system, located after the
branches of the exhaust manifold combine and before the catalytic converter. New
vehicles are required to have a sensor before and after the exhaust catalyst to meet U.S.
regulations requiring that all emissions components be monitored for failure. Pre and post-
catalyst signals are monitored to determine catalyst efficiency. Additionally, some catalyst
systems require brief cycles of lean (oxygen-containing) gas to load the catalyst and promote
additional oxidation reduction of undesirable exhaust components.

Sensor surveillance

The air-fuel ratio, and by extension the status of the sensor, can be monitored with an air-fuel
ratio meter that displays the sensor's output voltage.

Sensor failures

Normally, the lifetime of an unheated sensor is about 30,000 to 50,000 miles (50,000 to
80,000 km). Heated sensor lifetime is typically 100,000 miles (160,000 km). Failure of an
unheated sensor is usually caused by the buildup of soot on the ceramic element, which
lengthens its response time and may cause total loss of ability to sense oxygen. For heated
sensors, normal deposits are burned off during operation and failure occurs due to catalyst
depletion, similar to the reason a battery stops producing current. The probe then tends to report
lean mixture, the ECU enriches the mixture, the exhaust gets rich with carbon monoxide and
hydrocarbons, and the mileage worsens.

Leaded gasoline contaminates the oxygen sensors and catalytic converters. Most oxygen sensors
are rated for some service life in the presence of leaded gasoline but sensor life will be shortened
to as little as 15,000 miles, depending on the lead concentration. Lead-damaged sensors typically
have their tips discolored a light rust color.

Another common cause of premature failure of lambda probes is contamination of fuel with
silicones (used in some sealants and greases) or silicates (used as corrosion inhibitors in some
antifreezes). In this case, the deposits on the sensor are colored between shiny white and grainy
light gray.

Leaks of oil into the engine may cover the probe tip with an oily black deposit, with associated
loss of response.

An overly rich mixture causes buildup of black powdery deposit on the probe. This may be
caused by failure of the probe itself, or by a problem elsewhere in the fuel metering system.
Applying an external voltage to the zirconia sensors, e.g. by checking them with some types of
ohmmeter, may damage them.

Symptoms of a failing oxygen sensor include:

• Sensor light on the dash indicating a problem
• Increased tailpipe emissions
• Increased fuel consumption
• Hesitation on acceleration
• Stalling
• Rough idling

Diving applications


Main article: electro-galvanic fuel cell

A diving breathing gas oxygen analyser

The diving type of oxygen sensor, sometimes called an oxygen analyser or ppO2 meter, is used in
scuba diving to measure the oxygen concentration of breathing gas mixes such as nitrox and
trimix.[3] These sensors are also used within the oxygen control mechanisms of closed-circuit
rebreathers to keep the partial pressure of oxygen within safe limits.[4] This type
of sensor operates by measuring the electricity generated by a small electro-galvanic fuel cell.

Scientific applications


In marine biology or limnology, oxygen measurements are usually made in order to measure the
respiration of a community or an organism, but they have also been used to measure the primary
production of algae. The traditional way of measuring oxygen concentration in a water sample
has been to use wet chemistry techniques, e.g. the Winkler titration method. There are, however,
commercially available oxygen sensors that measure the oxygen concentration in liquids with
great accuracy. Two types of oxygen sensor are available: electrodes (electrochemical sensors)
and optodes (optical sensors).

Electrodes

A dissolved oxygen meter for laboratory use.

The Clark-type electrode is the most widely used oxygen sensor for measuring oxygen dissolved
in a liquid. The basic principle is that a cathode and an anode are submersed in an electrolyte.
Oxygen enters the sensor through a permeable membrane by diffusion, and is reduced at the
cathode, creating a measurable electrical current.

There is a linear relationship between the oxygen concentration and the electrical current. With a
two-point calibration (0% and 100% air saturation), it is possible to measure oxygen in the
sample.
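A minimal sketch of that two-point calibration; the calibration currents below are assumed values, not measurements from the source.

```python
# Clark-type electrode: currents at 0% and 100% air saturation define a
# straight line, and an unknown sample is read off that line.

I_ZERO = 0.02e-9   # A, current at 0% air saturation (assumed)
I_FULL = 4.52e-9   # A, current at 100% air saturation (assumed)

def air_saturation_pct(current_a: float) -> float:
    """Percent air saturation from the measured current (linear response)."""
    return 100.0 * (current_a - I_ZERO) / (I_FULL - I_ZERO)

print(round(air_saturation_pct(2.27e-9), 1))   # 50.0 % air saturation
```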

One drawback to this approach is that oxygen is consumed during the measurement at a rate
equal to its diffusion into the sensor. This means that the sensor must be stirred in order to get a
correct measurement and avoid stagnant water. As sensor size increases, the oxygen
consumption increases, and so does the sensitivity to stirring. In large sensors there also tends to be a
drift in the signal over time due to consumption of the electrolyte. However, Clark-type sensors
can be made very small with a tip size of 10 µm. The oxygen consumption of such a microsensor
is so small that it is practically insensitive to stirring and can be used in stagnant media such as
sediments or inside plant tissue.

Optodes

An oxygen optode is a sensor based on optical measurement of the oxygen concentration. A
chemical film is glued to the tip of an optical cable and the fluorescence properties of this film
depend on the oxygen concentration. Fluorescence is at a maximum when there is no oxygen
present. When an O2 molecule comes along it collides with the film and this quenches the
photoluminescence. In a given oxygen concentration there will be a specific number of O2
molecules colliding with the film at any given time, and the fluorescence properties will be
stable.

The signal (fluorescence) to oxygen ratio is not linear, and an optode is most sensitive at low
oxygen concentration. That is, the sensitivity decreases as oxygen concentration increases
following the Stern-Volmer relationship. The optode sensors can, however, work in the whole
region 0% to 100% oxygen saturation in water, and the calibration is done the same way as with
the Clark type sensor. No oxygen is consumed and hence the sensor is insensitive to stirring, but
the signal will stabilize more quickly if the sensor is stirred after being put in the sample. These
types of sensors can be used for in situ and real-time monitoring of oxygen production in
water-splitting reactions, and platinized electrodes can accomplish real-time monitoring of
hydrogen production in a water-splitting device. Calzaferri and his co-workers employed such
electrodes extensively for photoelectrochemical water-splitting research.
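The Stern-Volmer relationship mentioned above, F0/F = 1 + Ksv[O2], can be inverted to recover the oxygen concentration from a fluorescence reading. A sketch with a hypothetical quenching constant:

```python
# F0 is the fluorescence with no oxygen present; F is the measured
# (quenched) fluorescence. Ksv is an assumed calibration constant.

KSV = 0.01   # L/umol, hypothetical Stern-Volmer quenching constant

def oxygen_umol_per_l(f0: float, f: float) -> float:
    """Dissolved O2 from the ratio of unquenched to measured fluorescence."""
    return (f0 / f - 1.0) / KSV

print(oxygen_umol_per_l(1000.0, 250.0))   # 300.0 umol/L for F0/F = 4
```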

Pellistor

A pellistor is a solid-state device[1] used to detect gases which are either combustible or which
have a significant difference in thermal conductivity to that of air. The word "pellistor" is a
combination of pellet and resistor.


Principle
The detecting element consists of small "pellets" of catalyst-loaded ceramic whose resistance
changes in the presence of gas.

History
The pellistor was developed in the early 1960s for use in mining operations as the successor of
the flame safety lamp and the canary. It was invented by English scientist Alan Baker.

Types
Catalytic

The catalytic pellistor, as used in the catalytic bead sensor, works by burning the target gas; the
heat generated produces a change in the resistance of the detecting element of the sensor that is
proportional to the gas concentration.
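In practice, catalytic pellistors are commonly read out in a Wheatstone bridge against a matched, non-catalytic compensator bead, so the bridge imbalance is roughly proportional to gas concentration up to the lower explosive limit (LEL). A minimal sketch of that conversion, with a made-up calibration constant:

```python
# Convert Wheatstone-bridge imbalance to a gas reading.
# The sensitivity below is hypothetical, purely for illustration.

SENSITIVITY_MV_PER_PCT_LEL = 0.5   # mV of imbalance per %LEL (assumed)

def percent_lel(bridge_imbalance_mv: float) -> float:
    """Convert bridge imbalance (mV) to % of lower explosive limit."""
    return bridge_imbalance_mv / SENSITIVITY_MV_PER_PCT_LEL

print(percent_lel(12.5))   # 25.0 %LEL for a 12.5 mV imbalance
```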

Thermal Conductivity

The thermal conductivity (TC) pellistor works by measuring the change in heat loss (and hence
temperature/resistance) of the detecting element in the presence of the target gas.

Glass electrode

A glass electrode is a type of ion-selective electrode made of a doped glass membrane that is
sensitive to a specific ion. It is an important part of the instrumentation for chemical analysis and
physico-chemical studies. In modern practice, widely used membrane ion-selective electrodes
(ISEs, including glass electrodes) are part of a galvanic cell. The electric potential of the
electrode system in solution is sensitive to changes in the content of a certain type of ion, which
is reflected in the dependence of the electromotive force (EMF) of the galvanic cell on the
concentration of these ions.


History

Pioneering publication about GE (1909)

Even the first studies of glass electrodes (GE) found that different glasses have different
sensitivities to changes of the medium's acidity (pH), due to the effect of alkali metal ions.

• 1906 — M. Cremer determined that the electric potential that arises between
parts of a fluid located on opposite sides of a glass membrane is
proportional to the concentration of acid (hydrogen ion concentration) [1].
• 1909 — S. P. L. Sørensen introduced the concept of pH.
• 1909 — F. Haber and Z. Klemensiewicz announced the results of their research
on the glass electrode on January 28, 1909 to The Society of Chemistry in
Karlsruhe (first publication in The Journal of Physical Chemistry, founded by
W. Ostwald and J. H. van 't Hoff, also in 1909) [2].
• 1922 — W. S. Hughes showed that alkali-silicate GEs are similar to the
hydrogen electrode, being reversible with respect to H+ [3].

Applications
Glass electrodes are commonly used for pH measurements. There are also specialized ion-sensitive
glass electrodes used for determining the concentration of lithium, sodium, ammonium, and other
ions. Glass electrodes have been utilized in a wide range of applications, from pure research and
control of industrial processes to the analysis of foods and cosmetics and the monitoring of
environmental indicators for regulatory purposes: microelectrode measurements of the membrane
electrical potential of a biological cell, analysis of soil acidity, etc.

Types
Almost all commercial electrodes respond to singly-charged ions, like H+, Na+, Ag+. The most
common glass electrode is the pH electrode. Only a few chalcogenide glass electrodes are
sensitive to doubly-charged ions, like Pb2+, Cd2+ and some others.

There are two main glass-forming systems:

• silicate matrix based on a molecular network of silicon dioxide (SiO2) with
additions of other metal oxides, such as Na, K, Li, Al, B, Ca, etc.
• chalcogenide matrix based on a molecular network of AsS, AsSe, AsTe.

Interfering ions

A silver chloride reference electrode (left) and glass pH electrode (right)

Because of the ion-exchange nature of the glass membrane, it is possible for some other ions to
concurrently interact with ion-exchange centers of the glass and to distort the linear dependence
of the measured electrode potential on pH or other electrode function. In some cases it is possible
to change the electrode function from one ion to another. For example, some silicate pNa
electrodes can be changed to pAg function by soaking in a silver salt solution.
Interference effects are commonly described by the semiempirical Nicolsky-Eisenman equation
(also known as the Nikolsky-Eisenman equation)[4], an extension to the Nernst equation. It is given
by

E = E_0 + \frac{RT}{z_i F} \ln \left[ a_i + \sum_j k_{ij} \, a_j^{z_i / z_j} \right]

where E is the emf, E0 the standard electrode potential, z the ionic valency including the sign, a
the activity, i the ion of interest, j the interfering ions and kij is the selectivity coefficient. The
smaller the selectivity coefficient, the less is the interference by j.

To see the interfering effect of Na+ on a pH electrode, set i = H+ and j = Na+ (both singly charged):

E = E_0 + \frac{RT}{F} \ln \left[ a_{\mathrm{H^+}} + k_{\mathrm{H^+,Na^+}} \, a_{\mathrm{Na^+}} \right]
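A small numerical sketch of the equation; the selectivity coefficient here is a hypothetical value chosen only to make the alkali error visible.

```python
import math

R, F = 8.314, 96485.0   # gas constant, Faraday constant

def nicolsky_eisenman(e0, z_i, a_i, interferents=(), temp_k=298.15):
    """Electrode potential with interference; interferents is a sequence of
    (selectivity coefficient k_ij, activity a_j, charge z_j) tuples."""
    total = a_i + sum(k * a_j ** (z_i / z_j) for k, a_j, z_j in interferents)
    return e0 + (R * temp_k) / (z_i * F) * math.log(total)

# pH electrode at pH 13 with 0.1 mol/L Na+; k(H+, Na+) = 1e-11 is a
# hypothetical selectivity coefficient.
e_ideal = nicolsky_eisenman(0.0, 1, 1e-13)
e_real = nicolsky_eisenman(0.0, 1, 1e-13, [(1e-11, 0.1, 1)])
print(e_real - e_ideal)   # ~ +0.06 V: reads as if more acidic (alkali error)
```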

Metallic function of GE

Before the 1950s, there was no explanation for some important aspects of the behavior of glass
electrodes (GE), or for the factual reversibility of this behavior. Some authors denied the
existence of a particular metal function of GEs in solutions where they do not behave fully as a
hydrogen electrode, attributing the observed phenomena to an incorrect interpretation of
structural changes in the surface layers of the glass [5]; the changes in the EMF of the cell were
mistakenly attributed to changes in the capacity of the GE, and therefore gave too-large values
of pH [6].

George Eisenman wrote in his retrospective review [7]:

The pioneering studies of Lengiel and Blum were extended by others, who were
primarily interested in the existence of Na+ sensitivity per se (i.e., the Na+
selectivity relative to H+ only) and in establishing whether or not the electrodes
were reversible in the thermodynamic sense. A review of this work is given by
Shultz, whose studies and those of Nicolskii and Tolmacheva are noteworthy. In
fact, Shultz was the first to demonstrate, by direct comparison with a sodium
amalgam electrode, that glasses behave as reversible electrodes for Na+ at neutral
and alkaline pH.

In 1951, Mikhail Schultz first rigorously proved the thermodynamic reversibility of the Na-function
of different glasses in different pH ranges (and later the functions for other metal ions), which
confirmed the validity of one of the key hypotheses of ion-exchange theory, now the Nikolsky-
Shultz-Eisenman [6][8] thermodynamic ion-exchange theory of GE.

This fact is important because the ion-exchange theory «began to work» only after a
thermodynamically rigorous experimental confirmation of the metal function; before that, it could
only be called a hypothesis. This result opened the way for the industrial technology of GEs,
forming ionometry with them and, later, with membrane electrodes. In the context of the
«generalized» theory of glass electrodes, Schultz created a framework for interpreting the
mechanism by which diffusion processes in glasses and resins influence their electrode
properties, giving new quantitative relationships that take into account the dynamic and energetic
characteristics of the ion exchangers. Schultz introduced a thermodynamic consideration of the
processes in the membranes. By considering the different dissociation abilities of the ionogenic
groups of the glasses, his theory connects, in a rigorous analytical way, the electrode properties
of glasses and ion-exchange resins with their chemical characteristics [9].

Range of a pH glass electrode


The pH range at constant concentration can be divided into 3 parts:

Scheme of the typical dependence E-pH for ion-selective electrode

• Complete realization of the general electrode function, where the dependence of
potential on pH is linear and within which the electrode really works as an
ion-selective electrode for pH.

• Alkali error range - at low concentrations of hydrogen ions (high values of pH)
the contributions of interfering alkali metals (like Li, Na, K) are comparable
with that of the hydrogen ions. In this situation the dependence of the potential
on pH becomes non-linear.

The effect is usually noticeable at pH > 12, and concentrations of lithium or sodium ions of 0.1
moles per litre or more. Potassium ions usually cause less error than sodium ions.

• Acidic error range - at very high concentration of hydrogen ions (low values of
pH) the dependence of the electrode on pH becomes non-linear and the
influence of the anions in the solution also becomes noticeable. These effects
usually become noticeable at pH < 1.

There are different types of pH glass electrode, some of which have improved characteristics for
working in alkaline or acidic media. But almost all electrodes have sufficient properties for
working in the most popular pH range, from pH = 2 to pH = 12. Special electrodes should be used
only for working in aggressive conditions.

Most of the text written above is also correct for other ion-exchange electrodes.

Construction

Scheme of typical pH glass electrode

A typical modern pH probe is a combination electrode, which combines both the glass and
reference electrodes into one body. The combination electrode consists of the following parts
(see the drawing):

1. a sensing part of electrode, a bulb made from a specific glass
2. sometimes the electrode contains a small amount of AgCl precipitate inside the glass electrode
3. internal solution, usually 0.1 mol/L HCl for pH electrodes or 0.1 mol/L MeCl for pMe electrodes
4. internal electrode, usually a silver chloride electrode or calomel electrode
5. body of electrode, made from non-conductive glass or plastics
6. reference electrode, usually the same type as 4
7. junction with studied solution, usually made from ceramics or a capillary with asbestos or quartz fiber

The bottom of a pH electrode balloons out into a round thin glass bulb. The pH electrode is best
thought of as a tube within a tube. The innermost tube contains an unchanging saturated KCl
solution and a 0.1 mol/L HCl solution. Also inside the inner tube is the cathode terminus of
the reference probe. The anodic terminus wraps itself around the outside of the inner tube and
ends with the same sort of reference probe as was on the inside of the inner tube. Both the inner
tube and the outer tube contain a reference solution but only the outer tube has contact with the
solution on the outside of the pH probe by way of a porous plug that serves as a salt bridge.



This device is essentially a galvanic cell that can be represented as:

Ag | AgCl | KCl solution || glass membrane || test solution || ceramic junction || KCl solution |
AgCl | Ag

The measuring part of the electrode, the glass bulb on the bottom, is coated both inside and out
with a ~10 nm layer of a hydrated gel. These two layers are separated by a layer of dry glass. The
silica glass structure (that is, the conformation of its atomic structure) is shaped in such a way
that it allows Na+ ions some mobility. The metal cations (Na+) in the hydrated gel diffuse out of
the glass and into solution while H+ from solution can diffuse into the hydrated gel. It is the
hydrated gel that makes the pH electrode an ion-selective electrode.

H+ does not cross through the glass membrane of the pH electrode; it is the Na+ which crosses
and allows for a change in free energy. When an ion diffuses from a region of activity to another
region of activity, there is a free energy change and this is what the pH meter actually measures.
The hydrated gel membrane is connected by Na+ transport and thus the concentration of H+ on
the outside of the membrane is 'relayed' to the inside of the membrane by Na+.

All glass pH electrodes have extremely high electric resistance from 50 to 500 MΩ. Therefore,
the glass electrode can be used only with a high input-impedance measuring device like a pH
meter, or, more generically, a high input-impedance voltmeter which is called an electrometer.
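The reason becomes clear from the voltage divider formed by the electrode resistance and the meter's input impedance: the meter sees only R_in / (R_in + R_electrode) of the true EMF. A quick sketch (the impedance figures are illustrative):

```python
# Fraction of the true electrode EMF actually seen by the meter.

def measured_fraction(r_electrode_ohm: float, r_input_ohm: float) -> float:
    return r_input_ohm / (r_input_ohm + r_electrode_ohm)

R_ELECTRODE = 500e6                           # 500 Mohm glass electrode
print(measured_fraction(R_ELECTRODE, 10e6))   # ~0.02: ordinary voltmeter fails
print(measured_fraction(R_ELECTRODE, 1e12))   # ~0.9995: electrometer works
```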

Storage
Between measurements, any glass or membrane electrode should be kept in a solution of its own
ion (e.g. a pH glass electrode should be kept in 0.1 mol/L HCl or 0.1 mol/L H2SO4). This is
necessary to prevent the glass membrane from drying out.

Potentiometric sensor

A potentiometric sensor is a type of chemical sensor that may be used to determine the
analytical concentration of some components of the analyte gas or solution. These sensors
measure the electrical potential of an electrode when no current is flowing.


Principle
The signal is measured as the potential difference (voltage) between the working electrode and
the reference electrode. The working electrode's potential must depend on the concentration of
the analyte in the gas or solution phase. The reference electrode is needed to provide a defined
reference potential.

Classification of sensors

Potentiometric solid-state gas sensors have been generally classified into three broad groups.

• Type I sensors have an electrolyte containing mobile ions of the chemical
species in the gas phase that is being monitored. The commercial YSZ
oxygen sensor[1] is an example of type I.

• Type II sensors do not have mobile ions of the chemical species to be sensed,
but an ion related to the target gas can diffuse in the solid electrolyte to allow
equilibration with the atmosphere. Type I and type II sensors therefore have
the same design: gas electrodes combined with metal and an electrolyte in
which oxidized or reduced ions can be electrochemically equilibrated through
the electrochemical cell. In the third type of electrochemical sensor, auxiliary
phases are added to the electrodes to enhance the selectivity and stability.

• Type III sensors make the electrode concept even more confusing. With
respect to the design of a solid-state sensor, the auxiliary phase looks like part
of the electrode, but it cannot be an electrode because auxiliary-phase
materials are generally not good electrical conductors. In spite of this
confusion, the type III design offers more flexibility in designing various
sensors with different auxiliary materials and electrolytes.

Working redox electrode



A gold disk working electrode with a Teflon shroud insulating the disk.

The working electrode is the electrode in an electrochemical system on which the reaction of
interest is occurring.[1][2][3] The working electrode is often used in conjunction with an auxiliary
electrode and a reference electrode in a three-electrode system. Depending on whether the
reaction on the electrode is a reduction or an oxidation, the working electrode can be referred to
as either cathodic or anodic. Common working electrodes range from inert metals such as
gold, silver or platinum, to inert carbon such as glassy carbon or pyrolytic carbon, to mercury
drop and film electrodes.


Special types of working electrodes


• Ultramicroelectrode (UME)
• Rotating disk electrode (RDE)
• Rotating ring-disk electrode (RRDE)
• Hanging mercury drop electrode (HMDE)
• Dropping mercury electrode (DME)

Smoke detector

A smoke detector is a device that detects smoke, typically as an indicator of fire. Commercial,
industrial, and mass residential devices issue a signal to a fire alarm system, while household
detectors, known as smoke alarms, generally issue a local audible and/or visual alarm from the
detector itself.

Smoke detectors are typically housed in a disk-shaped plastic enclosure about 150 millimetres
(6 in) in diameter and 25 millimetres (1 in) thick, but the shape can vary by manufacturer or
product line. Most smoke detectors work either by optical detection (photoelectric) or by
physical process (ionization), while others use both detection methods to increase sensitivity to
smoke. Smoke detectors in large commercial, industrial, and residential buildings are usually
powered by a central fire alarm system, which is powered by the building power with a battery
backup. However, in many single-family detached and smaller multiple-family dwellings, a
smoke alarm is often powered only by a single disposable battery.


History
The first automatic electric fire alarm was invented in 1890 by Francis Robbins Upton (US
patent no. 436,961). Upton was an associate of Thomas Edison, but there is no evidence that
Edison contributed to this project.

In the late 1930s the Swiss physicist Walter Jaeger tried to invent a sensor for poison gas. He
expected that gas entering the sensor would bind to ionized air molecules and thereby alter an
electric current in a circuit in the instrument. His device failed: small concentrations of gas had
no effect on the sensor's conductivity. Frustrated, Jaeger lit a cigarette—and was soon surprised
to notice that a meter on the instrument had registered a drop in current. Smoke particles had
apparently done what poison gas could not. Jaeger's experiment was one of the advances that
paved the way for the modern smoke detector.

It was 30 years, however, before progress in nuclear chemistry and solid-state electronics made a
cheap sensor possible. While home smoke detectors were available during most of the 1960s, the
price of these devices was rather high. Before that, alarms were so expensive that only major
businesses and theaters could afford them.

The first truly affordable home smoke detector was invented by Duane D. Pearsall in 1965,
featuring an individual battery powered unit that could be easily installed and replaced. The first
units for mass production came from Duane Pearsall’s company, Statitrol Corporation, in
Lakewood, Colorado.[1][2]

These first units were made from strong, fire-resistant steel and shaped much like a beehive.
The battery was a specialized rechargeable unit created by Gates Energy. The need for a quickly
replaceable battery soon showed itself, and the rechargeable was replaced with a pair of
AA batteries along with a plastic shell encasing the detector. The small assembly line shipped close
to 500 units per day before Statitrol sold its invention to Emerson Electric in 1980, and Sears
retailers picked up full distribution of the 'now required in every home' smoke detector.

The first commercial smoke detectors came to market in 1969. Today they are installed in 93%
of US homes and 85% of UK homes. However, it is estimated that at any given time over 30% of
these alarms don't work, as users remove the batteries or forget to replace them.
Although commonly attributed to NASA, smoke detectors were not invented as a result of the
space program, though a variant with adjustable sensitivity was developed for Skylab.[3]

Design
Optical

Optical Smoke Detector with the cover removed.

Optical Smoke Detector


1: optical chamber
2: cover
3: case moulding
4: photodiode (detector)
5: infrared LED
Inside a basic ionization smoke detector. The black, round structure at the right is
the ionization chamber. The white, round structure at the upper left is the
piezoelectric buzzer that produces the alarm sound.

An optical detector is a light sensor. When used as a smoke detector, it includes a light source
(incandescent bulb or infrared LED), a lens to collimate the light into a beam, and a photodiode
or other photoelectric sensor at an angle to the beam as a light detector. In the absence of smoke,
the light passes in front of the detector in a straight line. When smoke enters the optical chamber
across the path of the light beam, some light is scattered by the smoke particles, directing it at the
sensor and thus triggering the alarm.

Also seen in large rooms, such as a gymnasium or an auditorium, are devices to detect a
projected beam. A unit on the wall sends out a beam, which is either received by a receiver or
reflected back via a mirror. When the beam is less visible to the "eye" of the sensor, it sends an
alarm signal to the fire alarm control panel.

Optical smoke detectors are quick in detecting particulate (smoke) generated by smoldering
(cool, smoky) fires. Many independent tests indicate that optical smoke detectors typically detect
particulates (smoke) from hot, flaming fires approximately 30 seconds later than ionization
smoke alarms.

They are less sensitive than ionization smoke alarms to false alarms from steam or cooking fumes
generated in the kitchen or steam from the bathroom. For this reason, they are often referred to
as 'toast proof' smoke alarms.

Ionization

This type of detector is cheaper than the optical detector; however, it is sometimes rejected
because it is more prone to false (nuisance) alarms than photoelectric smoke detectors [4][5]. It can
detect particles of smoke that are too small to be visible. It includes about 37 kBq or 1 µCi of
radioactive americium-241 (241Am), corresponding to about 0.3 µg of the isotope.[6][7] The
radiation passes through an ionization chamber, an air-filled space between two electrodes, and
permits a small, constant current between the electrodes. Any smoke that enters the chamber
absorbs the alpha particles, which reduces the ionization and interrupts this current, setting off
the alarm.
241Am, an alpha emitter, has a half-life of 432 years. This means that it does not have to be
replaced during the useful life of the detector, and also makes it safe for people at home, since it
is only slightly radioactive. Alpha radiation, as opposed to beta and gamma, is used for two
additional reasons: Alpha particles have high ionization, so sufficient air particles will be ionized
for the current to exist, and they have low penetrative power, meaning they will be stopped by
the plastic of the smoke detector and/or the air. About one percent of the emitted radioactive
energy of 241Am is gamma radiation.

Air-sampling

An air-sampling smoke detector is capable of detecting microscopic particles of smoke. Most air-
sampling detectors are aspirating smoke detectors, which work by actively drawing air through a
network of small-bore pipes laid out above or below a ceiling in parallel runs covering a
protected area. Small holes drilled into each pipe form a matrix of holes (sampling points),
providing an even distribution across the pipe network. Air samples are drawn past a sensitive
optical device, often a solid-state laser, tuned to detect the extremely small particles of
combustion. Air-sampling detectors may be used to trigger an automatic fire response, such as a
gaseous fire suppression system, in high-value or mission-critical areas, such as archives or
computer server rooms.

Most air-sampling smoke detection systems are capable of a higher sensitivity than spot type
smoke detectors and provide multiple levels of alarm threshold, such as Alert, Action, Fire 1 and
Fire 2. Thresholds may be set at levels across a wide range of smoke levels. This provides earlier
notification of a developing fire than spot type smoke detection, allowing manual intervention or
activation of automatic suppression systems before a fire has developed beyond the smoldering
stage, thereby increasing the time available for evacuation and minimizing fire damage.

Carbon monoxide and carbon dioxide detection

Some smoke alarms use a carbon dioxide sensor or carbon monoxide sensor in order to detect
extremely dangerous products of combustion.[8][9] However, not all smoke detectors that are
advertised with such gas sensors are actually able to warn of poisonous levels of those gases in
the absence of a fire.

Performance differences

Optical or "toast-proof" smoke detectors are generally quicker in detecting particulate (smoke)
generated by smoldering (cool, smokey) fires. Ionization smoke detectors are generally quicker
in detecting particulate (smoke) generated by flaming (hot) fires.[10]

According to fire tests conformant to EN 54, normally the CO2 cloud from smoke can be
detected before particulate.[9]
Obscuration is a unit of measurement that has become the standard definition of smoke detector
sensitivity. Obscuration is the effect that smoke has on reducing visibility. Higher concentrations
of smoke result in higher obscuration levels, lowering visibility.
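The per-foot and per-metre figures in the table below are tied together by the optical transmission over the path length: T(1 m) = T(1 ft)^3.2808, so obs/m = 1 - (1 - obs/ft)^3.2808. A quick sanity-check sketch of that conversion (derived from the definition of obscuration as fractional light loss per unit length, not a formula quoted in the source):

```python
# Convert % obscuration per foot to % obscuration per metre.

FT_PER_M = 3.2808

def obs_per_m(obs_per_ft_pct: float) -> float:
    """Fractional light loss compounds over the path length."""
    return 100.0 * (1.0 - (1.0 - obs_per_ft_pct / 100.0) ** FT_PER_M)

print(round(obs_per_m(0.8), 1))   # ~2.6 %obs/m (ionization, low end)
print(round(obs_per_m(4.0), 1))   # ~12.5 %obs/m (photoelectric, high end)
```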

Typical smoke detector obscuration ratings [11]

Type of Detector    Obscuration Level
Ionization          2.6-5.0% obs/m (0.8-1.5% obs/ft)
Photoelectric       6.5-13.0% obs/m (2-4% obs/ft)
Beam                3% obs/m (0.9% obs/ft)
Aspirating          0.005-20.5% obs/m (0.0015-6.25% obs/ft)
Laser               0.06-6.41% obs/m (0.02-2.0% obs/ft) [12]

Commercial smoke detectors

An integrated locking mechanism for commercial building doors. Inside an enclosure
are a locking device, smoke detector and power supply.

Commercial smoke detectors are either conventional or analog addressable, and are wired up to
security monitoring systems or fire alarm control panels (FACP). These are the most common
type of detector, and usually cost a lot more than household smoke alarms. They exist in most
commercial and industrial facilities, such as high-rises, ships and trains. These detectors don't
need built-in alarms, as alarm systems can be controlled by the connected FACP, which
will set off relevant alarms and can also implement complex functions such as a staged
evacuation.

Conventional

The word "conventional" is used to distinguish the method used to communicate with the
control unit from that used by addressable detectors, whose methods were unconventional at the
time of their introduction. So-called "conventional detectors" cannot be individually identified
by the control unit and resemble an electrical switch in their information capacity. These
detectors are connected in parallel to the signaling path (initiating device circuit), so that the
current flow is monitored to indicate a closure of the circuit path by any connected detector when
smoke or a similar environmental stimulus sufficiently influences any detector. The resulting
increase in current flow is interpreted and processed by the control unit as confirmation of the
presence of smoke, and a fire alarm signal is generated.

Addressable

This type of installation gives each detector on a system an individual number, or address. Thus,
addressable detectors allow an FACP, and therefore fire fighters, to know the exact location of
an alarm where the address is indicated on a diagram.

Analog addressable detectors provide information about the amount of smoke in their detection
area, so that the FACP can itself decide whether there is an alarm condition in that area (possibly
considering day/night time and the readings of surrounding areas). These are usually more
expensive than autonomously deciding detectors.[13]

Standalone smoke alarms

The main function of a standalone smoke alarm is to alert persons at risk. Several methods are
used and documented in industry specifications published by Underwriters Laboratories.[14]
Alerting methods include:

• Audible tones
o usually around 3200 Hz due to component constraints (Audio
advancements for persons with hearing impairments have been made;
see External links)
o 85 dBA at 10 feet
• Spoken voice alert
• Visual strobe lights
o 110 candela output
• Tactile stimulation, e.g., bed or pillow shaker (No standards exist as of 2008
for tactile stimulation alarm devices.)

Some models have a hush or temporary-silence feature that allows silencing without removing
the battery. This is especially useful in locations where false alarms can be relatively common
(e.g. due to "toast burning"). Without such a feature, users might remove the battery permanently
to avoid the annoyance of false alarms, which is strongly discouraged.

While current technology is very effective at detecting smoke and fire conditions, the deaf and
hard of hearing community has raised concerns about the effectiveness of the alerting function in
awakening sleeping individuals in certain high risk groups such as the elderly, those with hearing
loss and those who are intoxicated.[15] Between 2005 and 2007, research sponsored by the United
States' National Fire Protection Association (NFPA) focused on understanding the cause of the
higher number of deaths seen in such high-risk groups. Initial research into the effectiveness of
the various alerting methods is sparse. Research findings suggest that a low frequency (520 Hz)
square wave output is significantly more effective at awakening high risk individuals. Wireless
Wi-Safe smoke and carbon monoxide detectors linked to alert mechanisms such as vibrating
pillow pads, strobes and remote warning handsets have been found to support the groups above.
[16]

Batteries

Photoelectric smoke detector equipped with strobe light for the hearing impaired

Most residential smoke detectors run on 9-volt alkaline or carbon-zinc batteries. When these
batteries run down, the smoke detector becomes inactive. Most smoke detectors will signal a
low-battery condition. The alarm may chirp at intervals if the battery is low, though if there is
more than one unit within earshot, it can be hard to locate. It is common, however, for houses to
have smoke detectors with dead batteries. It is estimated, in the UK, that over 30% of smoke
alarms may have dead or removed batteries. As a result, public information campaigns have been
created to remind people to change smoke detector batteries regularly. In Australia, for example,
it is advertised that all smoke alarm batteries should be replaced on the first day of April every
year. In regions using daylight saving time, these campaigns may suggest that people change
their batteries when they change their clocks or on a birthday.

Some detectors are also being sold with a lithium battery that can run for about 7 to 10 years,
though this might actually make it less likely for people to change batteries, since their
replacement is needed so infrequently. By that time, the whole detector may need to be replaced.
Though relatively expensive, user-replaceable 9-volt lithium batteries are also available.

Common NiMH and NiCd rechargeable batteries have a high self-discharge rate, making them
unsuitable for use in smoke detectors. This is true even though they may provide much more
power than alkaline batteries if used soon after charging, such as in a portable stereo. Also, a
problem with rechargeable batteries is a rapid voltage drop at the end of their useful charge. This
is of concern in devices such as smoke detectors, since the battery may transition from "charged"
to "dead" so quickly that the low-battery warning period from the detector is either so brief as to
go unnoticed, or may not occur at all.

The NFPA recommends that home-owners replace smoke detector batteries with a new battery
at least once per year, when the alarm starts chirping (a signal that the charge is low), or when it
fails a test, which the NFPA recommends be carried out at least once per month by pressing the
"test" button on the alarm.[17]

Reliability

In 2004, NIST issued a comprehensive report [5]. The report concludes, among other things, that
"smoke alarms of either the ionization type or the photoelectric type consistently provided time
for occupants to escape from most residential fires", and "consistent with prior findings,
ionization type alarms provided somewhat better response to flaming fires than photoelectric
alarms, and photoelectric alarms provided (often) considerably faster response to smoldering
fires than ionization type alarms".

The NFPA strongly recommends the replacement of home smoke alarms every 10 years. Smoke
alarms become less reliable with time, primarily due to aging of their electronic components,
making them susceptible to nuisance false alarms. In ionization type alarms, decay of the 241Am
radioactive source is a negligible factor, as its half-life is far greater than the expected useful life
of the alarm unit.

Regular cleaning can prevent false alarms caused by the build up of dust or other objects such as
flies, particularly on optical type alarms as they are more susceptible to these factors. A vacuum
cleaner can be used to clean ionization and optical detectors externally and internally. However,
it is not recommended that a lay person clean commercial ionisation detectors internally. To
reduce false alarms caused by cooking fumes, use an optical or 'toast proof' alarm near the
kitchen.[18]
A jury in the United States District Court for the Northern District of New York decided in 2006
that First Alert and its parent company, BRK Brands, were liable for millions of dollars in
damages because the ionization technology in the smoke alarm in the Hackerts' house was
defective, failing to detect the slow-burning fire and choking smoke that filled the home as the
family slept.[19]

Installation and placement

A 2007 U.S. guide to placing smoke detectors, suggesting that one be placed on
every floor of a building, and in each bedroom.

In the United States, most state and local laws regarding the required number and placement of
smoke detectors are based upon standards established in NFPA 72, the National Fire Alarm Code.

Laws governing the installation of smoke detectors vary depending on the locality. Homeowners
with questions or concerns regarding smoke detector placement may contact their local fire
marshal or building inspector for assistance. However, some rules and guidelines for existing
homes are relatively consistent throughout the developed world. For example, Canada and
Australia require a building to have a working smoke detector on every level. The United
States requires smoke detectors on every habitable level and within the vicinity of all
bedrooms. Habitable levels include attics that are tall enough to allow access.
In new construction, minimum requirements are typically more stringent. All smoke detectors
must be hooked directly to the electrical wiring, be interconnected and have a battery backup. In
addition, smoke detectors are required either inside or outside every bedroom, depending on
local codes. Smoke detectors on the outside will detect fires more quickly, assuming the fire does
not begin in the bedroom, but the sound of the alarm will be reduced and may not wake some
people. Some areas also require smoke detectors in stairways, main hallways and garages.

Wired units with a third "interconnect" wire allow a dozen or more detectors to be connected, so
that if one detects smoke, the alarms will sound on all the detectors in the network, improving
the chances that occupants will be alerted, even if they are behind closed doors or if the alarm is
triggered one or two floors from their location. Wired interconnection may only be practical for
use in new construction, especially if the wire needs to be routed in areas that are inaccessible
without cutting open walls and ceilings. As of the mid-2000s, development has begun on
wirelessly networking smoke alarms, using technologies such as ZigBee, which will allow
interconnected alarms to be easily retrofitted in a building without costly wire installations. Some
wireless systems using Wi-Safe technology will also detect smoke or carbon monoxide through
the detectors, which simultaneously alarm themselves with vibrating pads, strobes and remote
warning handsets. As these systems are wireless they can easily be transferred from one property
to another.

In the UK the placement of detectors is similar; however, the installation of smoke alarms in new
builds needs to comply with British Standard BS 5839 Pt.6. BS 5839: Pt.6: 2004 recommends
that a new-build property consisting of no more than 3 floors (less than 200 sqm per floor)
should be fitted with a Grade D, LD2 system. Building Regulations in England, Wales &
Scotland recommend that BS 5839: Pt.6 should be followed, but as a minimum a Grade D, LD3
system should be installed. Building Regulations in Northern Ireland require a Grade D, LD2
system to be installed, with smoke alarms fitted in the escape routes and the main living room
and a heat alarm in the kitchen; this standard also requires all detectors to have a mains supply
and a battery backup.

Zinc oxide nanorod sensor



A zinc oxide nanorod sensor or ZnO nanorod sensor is an electronic device that detects the
presence of certain gas or liquid molecules (e.g. NO, hydrogen[1][2], etc.) in the ambient
atmosphere. The sensor exploits the enhanced surface area (and thus surface activity) intrinsic to
all nano-sized materials, including ZnO nanorods. Adsorption of molecules on the nanorods can
be detected through variation of the nanorods' properties, such as electrical conductivity,
vibration frequency, mass, etc. The simplest and thus most popular way is to pass an electrical
current through the nanorods and observe its changes upon gas exposure.
LIST OF ANALYTICAL INSTRUMENTS

This page introduces our major analytical instruments.

Elemental Analysis
• Organic Elemental Analyzer
• Flame and Flameless Atomic Absorption Spectrometer (AAS/FL-AAS)
• Emission Spectrophotometer
• Inductively Coupled Plasma Emission Spectrometer (ICP-AES)
• X-ray Fluorescence Analyzer (XRF)
• Inductively Coupled Plasma Mass Spectrometer (ICP-MS)
• Ion Chromatograph (IC)
• Molecular Absorption Spectrophotometer
• Particle Analyzer
• Total Nitrogen Analyzer (TN)
• Nitrogen and Carbon Analyzer (NC)

Major uses: These instruments analyze products and materials developed in all industrial fields,
and analyze the purity, components and impurities of materials used in manufacturing processes
and production activities. They analyze the compositions of exhaust gases and wastewater
generated in manufacturing processes. They are helpful for analyzing gases generated by heating.
Related service fields: Environmental Research Services; Electronics Material Evaluation Services

Surface Analysis
• X-ray Photoelectron Spectrometer (XPS/ESCA)
• Auger Electron Spectrometer (AES/SAM)
• Secondary Ion Mass Spectrometer (SIMS)
• Time-of-Flight Secondary Ion Mass Spectrometer (TOF-SIMS)
• Electron Probe X-ray Microanalyzer (EPMA/XMA)
• Field Emission Scanning Electron Microscope (FE-SEM-EDX)
• Low Level Alpha Particle Measuring Instrument
• Thermal Desorption Mass Spectrometer (TDS)

Major uses: These instruments analyze the surface and interface conditions of electronic
materials, metals, ceramics, catalysts and other substances. They are helpful for analyzing
foreign particles on substance surfaces and interfaces.
Related service fields: Electronics Material Evaluation Services

Form Observation
• Transmission Electron Microscope (TEM)
• Scanning Electron Microscope (SEM)
• Atomic Force Microscope (AFM)
• Optical Microscope (OM)
• Field Emission Scanning Microscope (FE-SEM)

Major uses: The surface and interface conditions of substances produced in all industrial fields
can be observed directly through these electron microscopes. The inside of substances can also
be directly observed.
Related service fields: Electronics Material Evaluation Services
Compound Structure Analysis
• Organic Mass Spectrometer (MS)
• Time-of-Flight Mass Spectrometer (TOF-MS)
• Nuclear Magnetic Resonance Analyzer (NMR)
• Visible/Ultraviolet Spectrochemical Analyzer (UV•VIS)
• Fourier Transform Infrared Spectrometer (FT-IR)
• Raman Spectrometer (RSS)
• X-ray Diffraction Analyzer (XRD)
• Electron Spin Resonance Analyzer (ESR)
• Fourier Transform Infrared Microspectrometer (μ-FT-IR)
• Scanning Infrared Microprobe Analyzer (IRμS/SIRM)

Thermal Analysis/Thermal Properties
• Thermogravimetric Analyzer (TG)
• Differential Thermal Analyzer (DTA)
• Differential Scanning Calorimeter (DSC)
• Specific Heat Measuring Instrument
• Reaction Heat Measuring Instrument
• Vaporization Heat Measuring Instrument
• Thermal Expansion Coefficient Measuring Instrument
• Thermal Conductivity Measuring Instrument
Chromatography and Separation Analysis
• Gas Chromatograph (GC)
• Gas Chromatography-Mass Spectrometer (GC-MS)
• Gas Chromatography-Fourier Transform Infrared Spectrometer (GC-IR)
• Gas Chromatography-Atomic Emission Detector (GC-AED)
• Liquid Chromatograph (LC)
• Liquid Chromatography-Mass Spectrometer (LC-MS)
• Liquid Chromatography-Tandem Mass Spectrometer (LC-MS/MS)
• Gel Permeation Chromatograph (GPC)
• Thin Layer Chromatograph (TLC)
• Pyrolysis Gas Chromatograph (PyGC)
• Gas Chromatography-Tandem Mass Spectrometer (GC-MS/MS)
• Instruments for Liquid Chromatography-Mass Spectrometry (LC-MS)
• Gel Permeation Chromatograph - Scattering Method (GPC-LALLS)
• Capillary Electrophoresis (CE)
