
CHAPTER 12

The summary and the improvements of delay testing, FPGA testing, coping with physical failures and
soft errors, high-speed I/O testing, MEMS testing, and RF testing are as follows:

1. Delay Testing:
The objective of delay testing is to detect timing defects and ensure that the design meets the
desired performance specifications. The need for delay testing has grown out of a common
problem faced by the semiconductor industry: designs that function properly at low clock
frequencies may fail at the desired operational speed.

The growing need for delay testing is a result of advances in very-large-scale integration (VLSI)
technology and an increase in design speed. These factors are also changing the target objectives
of delay tests. In the early days, most defects affecting performance could be detected using
tests for gross delay defects. The increase in circuit size has led to fault models that can
detect distributed defects localized to a certain area of the chip. With the introduction of deep-
submicron technology, noise effects are becoming significant contributors to timing failures, and
they call for further adaptation of the fault models and testing strategies.

Test Application Schemes for Testing Delay Defects:


To observe delay defects, it is necessary to create and propagate transitions in the circuit running
at speed (at its specified operating frequency). The test application scheme for combinational
circuits is shown in Figure 12.2.

In normal operation, only one clock (the system clock) is used to control the input and output latches
(in a broader sense, storage elements), and its period is Tc. In this illustration, the input and
output latches are controlled by two different clocks in test mode: the first vector is applied at
time t0 and the second at time t1. Time Ts = t1 - t0 is assumed to be sufficient for all signals in
the circuit to stabilize under the first vector. After the second vector is applied, the circuit is
allowed to settle only until time t2, where t2 - t1 = Tc. At time t2, the primary output values are
observed and compared to a prestored response of a fault-free circuit to determine whether there is
a defect. If the input and output latches use the same clock source in test mode, the scheme
illustrated in Figure 12.2 still applies.
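A minimal sketch of the launch-and-capture timing described above; the clock period and path delays are hypothetical values, not taken from the text:

```python
# Sketch of the two-vector at-speed test: the second vector is launched at t1
# and the outputs are strobed at t2 = t1 + Tc. A path passes only if it settles
# before the capture edge. All numbers below are hypothetical.

def capture_passes(path_delay_ns: float, tc_ns: float) -> bool:
    """Return True if the excited path settles before the capture edge."""
    return path_delay_ns <= tc_ns

Tc = 2.0          # rated clock period in ns (500 MHz), hypothetical
good_path = 1.8   # settles in time  -> passes at speed
slow_path = 2.3   # delay defect     -> fails at speed
print(capture_passes(good_path, Tc))  # True
print(capture_passes(slow_path, Tc))  # False
```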

Delay Fault Models:

Three delay fault models are considered: the transition fault model, the gate-delay fault model, and
the path-delay fault model. It is assumed that in the nominal design each gate has a given fall (rise)
delay from each input to the output pin. Also, the interconnects are assumed to have given rise (fall)
delays. The transition fault model assumes that the delay fault affects only one gate in the
circuit. There are two transition faults associated with each gate: a slow-to-rise fault and a slow-
to-fall fault.

Advantage of the Transition Fault Model:


The main advantage of the transition fault model is that the number of faults in the circuit is
relatively small. Also, stuck-at fault test generation and fault simulation tools can easily be
modified to handle transition faults.

The gate-delay fault model assumes that the delay fault is lumped at one gate in the circuit;
however, unlike the transition fault model, the gate-delay fault model does not assume that the
increased delay will affect performance independent of the propagation path through the fault site.
The gate-delay fault model is a quantitative model, as it takes the circuit delays into account.

Under the path-delay fault model, a circuit is considered faulty if the delay of any of its paths
exceeds a specified limit. A possible cost-effective strategy for delay testing would include:

 Use of functional vectors that could be applied at the system’s operational speed and
should catch some delay defects (functional vectors should be evaluated for transition
fault coverage)
 Application of ATPG tests for undetected transition faults
 Application of ATPG tests for long path-delay faults
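
As an illustration of this strategy, here is a minimal sketch of the flow: fault-simulate the functional vectors for transition-fault coverage, then hand only the faults they miss to top-off ATPG. The fault names and detection sets are hypothetical and the simulator is a stand-in:

```python
# Sketch of the cost-effective flow above: functional vectors first,
# then ATPG top-off for the transition faults they leave undetected.
# Fault names and detection results are hypothetical.

transition_faults = {"g1/slow-to-rise", "g1/slow-to-fall",
                     "g2/slow-to-rise", "g7/slow-to-fall"}

def fault_simulate(vectors, faults):
    """Stand-in for a transition-fault simulator: returns the subset of
    faults that the given vectors detect (hard-coded for illustration)."""
    detected = {"g1/slow-to-rise", "g2/slow-to-rise"}
    return detected & faults

detected_by_functional = fault_simulate(["v1", "v2", "v3"], transition_faults)
coverage = len(detected_by_functional) / len(transition_faults)
print(f"functional-vector transition coverage: {coverage:.0%}")

undetected = transition_faults - detected_by_functional
print("faults handed to top-off ATPG:", sorted(undetected))
```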

Summary:

Delay testing is becoming an increasingly crucial part of the VLSI design testing process.
Continuously increasing circuit operating frequencies result in designs in which performance
specifications can be violated by very small defects. Process variations are now more likely to
cause marginal violations of the performance specifications. The continuous shrinking of device
feature size, the increased number of interconnect layers and gate density, and higher voltage drop
along the power nets give rise to noise faults, such as distributed delay defects, power supply
noise, substrate noise, and crosstalk.
Due to the statistical timing involved, delay along a target path is highly pattern dependent. In
order to produce high-quality patterns, we therefore need to generate test patterns that not only
sensitize the given set of statistical critical paths but also exercise the worst-case delays along
these paths. Also, with statistical timing and statistical delay defect models, the notion of path
sensitization becomes probabilistic; thus, the key challenge is to develop a feasible statistically
constrained ATPG method where statistical timing-sensitization constraints can be employed to
guide the ATPG justification process.
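
A minimal sketch of why path sensitization becomes probabilistic under statistical timing: with Gaussian gate delays along a path, exceeding the cycle time is a probability rather than a yes/no outcome. The per-stage means, sigmas, and clock period below are hypothetical:

```python
import random

# Monte Carlo sketch of statistical path timing: each gate delay on one path
# is drawn from a Gaussian, so a "timing failure" has a probability.
random.seed(0)
stage_delays = [(0.40, 0.05)] * 5   # (mean ns, sigma ns) per gate, hypothetical
Tc = 2.1                            # rated clock period in ns, hypothetical

def sample_path_delay() -> float:
    return sum(random.gauss(mu, sigma) for mu, sigma in stage_delays)

trials = 100_000
fails = sum(sample_path_delay() > Tc for _ in range(trials))
print(f"P(path delay > Tc) ~= {fails / trials:.3%}")
```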

Improvements till 2017:


As mentioned above, the transition and path-delay fault models provide better defect coverage,
increasing production test quality and reducing the defective parts per million (DPM) levels.
Thus, the transition and path-delay fault models have become very popular in the past decade.
Path-delay testing targets the accumulated delay defects on critical paths in a design and
generates test patterns to detect them. Traditionally, static timing analysis (nominal, best-case, or
worst-case) is performed to identify the critical paths in the circuit. As technology continues to
scale down, more and more delay variation is introduced, which can affect the performance of
the target circuit to a large extent. In recent years, several techniques have been proposed, with
most attempts focusing on algorithms that target a delay fault via the longest path.

 In [1], the authors proposed an as-late-as-possible transition fault (ALAPTF) model to
launch one or more transition faults at the fault site as late as possible and detect the faults
through the path with the least slack. This method requires a large CPU run time compared to
traditional ATPG.
 A method to generate the K longest paths per gate for testing transition faults was proposed
in [2]. However, the longest path through a gate may still be a short path for the design,
and thus may not be effective for small-delay defect (SDD) detection. Furthermore, this method
also suffers from high complexity, long CPU run time, and high pattern count.
 The authors in [3] proposed a delay fault coverage metric to detect the longest sensitized
path through a transition delay fault (TDF) site. It is based on the robust path-delay test and
attempts to find the longest sensitizable path passing through the target fault site while generating
a slow-to-rise or slow-to-fall transition. It is impractical to apply this method to large
industrial circuits, since the number of long paths grows exponentially with circuit size.
 The authors in [4] proposed path-based and cone-based metrics for estimating the path
delay under test, which can be used for path-length analysis. This method is not accurate
because of its dependence on gate delay models, unit gate delay, and differential gate delay
models, which are determined by the gate type, the number of fan-in and fan-out nets,
and the transition type at the outputs. Furthermore, this method is also based on static
timing analysis; it does not take into account pattern-induced noise, process variations,
and their impact on path delays.
 The authors in [5] proposed two hybrid methods using 1-detect and timing-aware ATPG
to detect SDDs with a reduced pattern count. These methods first identify a subset of
transition faults that are critical and should be targeted by the timing-aware ATPG. Then
top-off ATPG is run on the faults left undetected by the timing-aware ATPG to meet the fault
coverage requirement. The efficiency of this method is questionable, since it still results
in a pattern count much larger than that of traditional 1-detect ATPG.
 In [6], a static-timing-analysis based method was proposed to generate and select patterns
that sensitize long paths. It finds long paths (LPs), intermediate paths, and short paths to
each observation point using static timing analysis tools. Then intermediate path and
short path observation points are masked in the pattern generation procedure to force the
ATPG tool to generate patterns for LPs. Next, a pattern selection procedure is applied to
ensure the pattern quality.
 The output-deviation-based method was proposed in [7]. This method defines gate-delay
defect probabilities (DDPs) to model delay variations in a design. A Gaussian gate-delay
distribution is assumed, and a delay defect probability matrix (DDPM) is assigned to each
gate. The signal-transition probabilities are then propagated to the outputs to obtain the
output deviations, which are used for pattern evaluation and selection. However, when the
paths contain a large number of gates, the calculated output-deviation metric can saturate,
and similar output deviations (close to 1) are obtained for both long and intermediate paths
(relative to the clock cycle). Since modern designs contain a large number of paths of large
depth in terms of gate count, the output-deviation-based method may not be very effective.
A similar method was developed in [8] to take into account the contribution of interconnect
to the total delay of sensitized paths. Unfortunately, it also suffers from the saturation problem.
 A false-path-aware statistical timing analysis framework was proposed in [9]. It selects
all logically sensitizable long paths using worst-case statistical timing information and
obtains the true timing information of the selected paths. After obtaining the critical long
paths, it uses path delay fault test patterns to target them. This method will be limited by
the same constraints the path delay fault test experiences.

2. COPING WITH PHYSICAL FAILURES, SOFT ERRORS, AND RELIABILITY ISSUES
Defect level is a function of failure rate and manufacturing yield, and failure rate is in turn a function
of fault coverage. Therefore, to cope with the vast number of likely physical failures in nanometer
designs, we need to significantly reduce the defect level to meet the defects per million (DPM) goals.
This can be done by improving the fault coverage of the chips (devices) under test, the manufacturing
yield, or both; however, not all chips that pass manufacturing tests will function correctly in the
field.

Signal Integrity and Power Supply Noise


Signal integrity is the ability of a signal to generate correct responses in a circuit. Informally
speaking, signal integrity indicates how clean or distorted a signal is.

Integrity Loss Fault Model


True characteristics of a signal are reflected in its waveform. In practice, digital electronic
components can tolerate certain levels of voltage swing and transition/propagation delay. Any
portion of a signal that exceeds these levels represents integrity loss (IL).
There are three main requirements in testing VLSI chips for signal integrity:
 Determine the locations at which to sample and monitor IL;
 Carry out pattern generation to stimulate extreme integrity loss;
 Design integrity-loss sensors/detectors and readout circuitry.
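
As an illustration of the integrity-loss model above, a minimal sketch that flags every sample of a waveform falling outside the tolerated voltage band; the thresholds and the sampled waveform are hypothetical:

```python
# Sketch of the integrity-loss (IL) idea: any portion of the waveform that
# exceeds the tolerated voltage levels is flagged as IL.
# Thresholds and samples are hypothetical.

V_OVERSHOOT = 1.15    # tolerated upper bound in volts, hypothetical
V_UNDERSHOOT = -0.15  # tolerated lower bound in volts, hypothetical

waveform = [0.0, 0.4, 1.2, 1.05, 0.98, 1.0, -0.2, 0.1, 0.0]  # sampled volts

integrity_loss = [(i, v) for i, v in enumerate(waveform)
                  if v > V_OVERSHOOT or v < V_UNDERSHOOT]
print("IL samples (index, volts):", integrity_loss)
```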

Pattern Generation
Due to the nature of signal integrity loss (faults) and their intermittent occurrence, integrity
fault testing must be done at speed. Pin and probing limitations further restrict the accurate
observation of signal integrity losses. Conventional pseudo-random pattern generators (PRPGs)
have been tried as a means of stimulating maximum integrity loss on long interconnects.

Sensing and Readout


Because integrity loss is a waveform-related metric, it must be captured (sampled) right after
creation. One published approach offers inexpensive cells, called noise detector (ND) and skew
detector (SD) cells, based on a modified cross-coupled P-channel metal oxide semiconductor
(PMOS) differential sense amplifier; each such sensor can be integrated within a boundary scan
cell to form an observation boundary scan cell (OBSC). A reported PSN monitor circuit is claimed
to capture high-resolution (100-ps) PSN at the power/ground lines. A power supply distribution
model to control PSN has also been reported in the literature; the model identifies the hot spots
on the chip and optimizes power-supply distribution to minimize the noise.

Parametric Defects, Process Variations, and Yield


Defects are physical imperfections that occur during manufacturing and can cause static or timing
failures. Examples of defects include partial or spongy vias and the presence of extra
material between a signal line and a voltage line.
Process variations, such as transistor channel length variation, transistor threshold voltage
variation, metal interconnect thickness variation, and inter-metal-layer dielectric thickness
variation, have a large impact on device speed characteristics. In general, the effect of process
variation shows up first in the most critical paths in the design, those with maximum and
minimum delays.
Random imperfections, such as resistive bridging defects between metal lines, resistive opens on
metal lines, improper via formation, and shallow trench isolation defects, are yet another source
of defects and are referred to as parametric defects. Depending on the parameters of the defect and
the neighboring parasitics, the defect may result in a static or at-speed failure.
Soft Errors
Soft errors are the result of transients induced in a circuit when a radiation particle
strikes it. This radiation can be of cosmic origin (created when stars form and die) or can come
from everyday materials (e.g., lead isotopes). Soft errors can affect all memory and storage
elements. Sometimes they are benign (e.g., when the affected memory elements are not used by the
application); at other times they can cause a system crash or, even worse, silent data corruption
(SDC) if they go undetected. One approach to improving the reliability of a chip is to remove the
source of soft errors. Spatial redundancy relies on the assumption that defects and radiation
particles will hit only one specific device and not another.
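A minimal sketch of one common form of spatial redundancy, triple modular redundancy (TMR): as long as a particle upsets only one of the three copies, a majority vote masks the error. The bit values are illustrative only:

```python
# Sketch of spatial redundancy via TMR: a bitwise majority vote masks a soft
# error as long as the particle strike upsets only one of the three copies.

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies."""
    return (a & b) | (a & c) | (b & c)

golden = 0b1011_0010
upset = golden ^ 0b0000_1000        # one copy hit by a particle (single bit flip)
print(bin(majority_vote(golden, golden, upset)))  # still the golden value
```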

Improvements
 In spite of the good test quality that pseudo-random patterns can achieve, the random
nature of the process prevents any bound on the length of a test session.
 The maximum aggressor (MA) fault model is one of the fault models proposed for
crosstalk.
 Genetic algorithms (with a random basis) have been used to stimulate worst-case PSN.
 A new simulation platform, named TIARA-G4, was introduced for numerical evaluation of
the sensitivity of advanced semiconductor memories (static RAMs) subjected to natural
radiation at ground level.
 TIARA-G4 should be used in the future to investigate more deeply the radiation response
of ultimate MOS circuits and alternative nanoelectronic devices in the natural (terrestrial)
environment.
 Mixed-mode test pattern generation.
 Pseudorandom vector tests followed by deterministic tests.
 Built-in logic block observation (BILBO).
 Practical test compression techniques.
 Elimination of radiation contaminants from packaging materials.

3. FPGA Testing:
Field programmable gate arrays (FPGAs) are generally composed of a two-dimensional array of
programmable logic blocks (PLBs) interconnected by a programmable routing network, with
programmable I/O cells at the periphery of the device. The larger FPGAs, with over 22,000
PLBs and 800 specialized cores, can easily pass the 100-million-transistor mark and pose a
testing challenge in terms of both their size and the diversity of their functions.

Programmability Impact:
The system function performed by the FPGA is controlled by an underlying configuration
memory. In most of the current FPGAs, the configuration memory is a RAM ranging in size
from 32 Kbits to 50 Mbits. The system function can be changed at any time by simply rewriting
the configuration memory with new data, referred to as reconfiguration. Another trend is FPGAs
that support dynamic partial reconfiguration, where a portion of the FPGA can be reconfigured
while the remainder of the FPGA is performing normal system operation, also referred to as
runtime reconfiguration.

The fault models used for testing the routing resources include shorts (bridging faults) and opens
in the wire segments, stuck-at-1 and stuck-at-0 wire segments, and stuck-on and stuck-off
programmable switches, which include the controlling configuration memory bits stuck-at-1 and
stuck-at-0. While the programmable switch stuck-off fault can be detected by a simple continuity
test, stuck-on faults are similar to bridging faults and require opposite logic values to be applied
to the wire segments on both sides of the switch while monitoring both wire segments in order to
detect the stuck-on fault.

When faults are detected, the system function can be reconfigured to avoid the faulty resources if
the faults can be diagnosed (identified and located); therefore, diagnosis of the faulty resources
is an important aspect of FPGA testing in order to take full advantage of the fault and/or defect
tolerant potential of these devices.

Testing Approaches:
Two types of testing approaches have been developed for FPGAs: external testing and built-in
self-test (BIST). In external testing approaches, the FPGA is programmed for a given test
configuration with the application of input test stimuli and the monitoring of output responses
performed by external sources such as a test machine. For FPGAs with boundary scan that
support INTEST capabilities, the input test stimuli can be applied and output responses can be
monitored via the boundary scan interface; otherwise, the FPGA I/O pins must be used, resulting
in package-dependent testing. The basic idea in BIST for FPGAs is to configure some of the
PLBs as test pattern generators (TPGs) and output response analyzers (ORAs). These BIST
resources are then used to detect faults in PLBs, routing resources, and special cores such as
RAMs and DSPs. Once the programmable resources have been tested, the FPGA is reconfigured
for the intended system function without any overhead or performance penalties due to the BIST
circuitry. This facilitates system-level use of the BIST configurations.

Built-In Self-Test of Logic Resources:


In the logic BIST architecture, the programmable logic blocks under test (BUTs) and ORAs are arranged
in alternating columns (or rows), and multiple identical TPGs are used to drive the alternating
columns (or rows) of BUTs. During a given test session, the BUTs are repeatedly reconfigured in
their various modes of operation until they are completely tested. Dynamic partial
reconfiguration can be used, because only the BUTs must be reconfigured while the TPGs,
ORAs, and interconnections remain constant for the test session. Faulty PLBs can be identified
from the BIST results using a diagnostic procedure developed for this logic BIST
architecture.
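
A minimal sketch of one way an ORA in such an architecture can work, assuming a comparison-based ORA: two identically configured BUTs receive the same TPG patterns, so any output mismatch latches a fault indication. The responses below are hypothetical:

```python
# Sketch of a comparison-based output response analyzer (ORA): two BUTs in the
# same mode of operation are driven by identical TPG patterns, so any mismatch
# between their outputs indicates a fault. Responses are hypothetical.

def ora(responses_but_a, responses_but_b) -> bool:
    """Return True if a mismatch (fault indication) was latched."""
    return any(a != b for a, b in zip(responses_but_a, responses_but_b))

but_a = [0, 1, 1, 0, 1]   # fault-free BUT
but_b = [0, 1, 0, 0, 1]   # faulty BUT (differs on the third pattern)
print("fault detected:", ora(but_a, but_b))
```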

Built-In Self-Test of Routing Resources:


Two routing BIST approaches have proven to be effective in testing the programmable
interconnect resources in FPGAs. One is a comparison-based approach. The other approach is
parity based. As in the case of logic BIST, the sets of wires under test are repeatedly
reconfigured to test the various routing resources (wire segments and programmable switches) in
the FPGA. While both routing BIST approaches have been proven to be effective in detecting
faults, the comparison-based approach has been extended to the diagnosis of faults in the
programmable interconnect network for fault tolerant applications [Harris 2002]. By constructing
many small routing BIST circuits consisting of independent TPGs, ORAs, and sets of wires
under test in the FPGA, diagnostic resolution is improved because an ORA indicating the
presence of a fault also identifies the self-test area containing the fault.

Recent Trends:
More recent trends in FPGA testing include delay fault testing and the use of embedded
processor cores for on-chip test configuration generation and application. Testing for delay faults
in FPGAs is important. External test techniques and BIST approaches have been developed to
detect delay faults in FPGAs. The incorporation of embedded microprocessor cores that can
write and read the FPGA configuration memory has facilitated the algorithmic generation of test
configurations from within the FPGA instead of downloading test configurations. Complex
programmable logic devices (CPLDs) are similar to FPGAs in that they contain programmable
logic and routing resources as well as embedded cores such as RAMs and FIFOs. Now that
FPGAs are incorporating embedded cores such as memories, DSPs, and microprocessors,
FPGAs more closely resemble system-on-chip (SOC) implementations. At the same time, SOCs
are incorporating more embedded FPGA cores. As a result, FPGA testing techniques are
becoming increasingly important for a broader range of system applications.

Improvements till Now:


 More than an order of magnitude (roughly 1/13) energy improvement in an FPGA, obtained by
combining low-voltage operation with fine-grained body bias optimization, has been demonstrated
through measurements of the new SOTB implementation of the Flex Power FPGA test chip.
 An improved BIST architecture for all Xilinx 7-Series FPGAs that is scalable to large
arrays. The two primary sources of overhead associated with FPGA BIST, the test time
and the memory required for storing the BIST configurations, are also reduced when
compared to previous FPGA BIST approaches.
 A hierarchical approach is used to define a set of FPGA configurations that enable
interconnect fault detection and diagnosis. This technique enables the detection of
bridging faults involving intra-cluster and extra-cluster interconnect. The
hierarchical structure of a cluster-based tile is exploited to define intra-cluster
configurations separately from extra-cluster configurations, thereby improving the
efficiency of the configuration definition process. The cornerstone of this work is the
concise expression of the detectability conditions of each fault and the distinguishability
conditions of each fault pair.

4. HIGH-SPEED I/O TESTING


System-level performance has been improving steadily with faster transistors and greater integration.
Integration brings together circuitry that used to be on different chips, so the signals travel
shorter distances and encounter smaller loads (within a chip). I/O bandwidth has been the
limiting factor in system-level performance for some time, at least the last 10 to 15 years. Due to
the need to control the cost of the PCB, many physical effects, such as those that cause energy
loss and coupling and contribute to deterministic jitter (DJ) and random jitter (RJ), have
been limiting the signaling rate. Despite this, signaling rates at the board level have improved,
although they still lag far behind what is possible within the chip; for example, a current
mainstream processor runs at 3 to 4 GHz internally, but the front-side bus (FSB) only runs up to
800 Mbps. This has influenced the processor architecture to a great degree.

I/O Interface Technology and Trend

 The most common signaling protocol is the common clock (CC) type.
 The signal is launched off one chip with the system clock and received at another chip at the
following clock edge.
 At the sending end, there is a clock-to-signal delay specification, and at the receiving end there
are setup and hold times on either side of the following clock edge.
 Problem: The clock skew between the sending component and the receiving component can cut
into the cycle time.
 Solution: I/O designers have come up with source synchronous (SS) and clock forwarding (CF)
schemes.
 With such a scheme, the sending component sends not only the signal but also a strobe (similar to
a clock signal) that travels along with it.
 The receiving component uses this strobe to clock the signal.
 Such signaling schemes have allowed the signaling rate to rise gradually from below 10–50 Mbps
to the present 800 Mbps.
 Drawback: Although the new SS signaling scheme has improved system-level performance, a problem
has begun to appear: the reference clock that the parallel interface transfers is completely
cancelled through this “phase differential” signaling technique.

I/O Testing and Challenges

I/O testing is handled by automatic test equipment (ATE) that mimics the other side of the I/O
interfaces. The clock is generated from a tester channel, as are the test stimuli. As the device
under test (DUT) responds with its generated signals, the output responses are strobed by the
ATE with its own timing system, so this approach works perfectly with common clock devices.
With this scheme, ATE performance must rise with the device I/O performance. The advent of SS
technology has turned the whole ATE test methodology upside down.
For the ATE to send SS signals, the scenario is the same as that of CC, and the ATE
generates the strobes and the data. However, when it is time for the DUT to send data, it sends
out strobes and the ATE is supposed to use those strobes to strobe the data. Here, the fixed
timing system of the tester fails.
Challenges:
 The higher-speed serial signaling link approach is more challenging. The test process
involves characterizing the output signals from the DUT and generating worst-case
signals to test the receiver and its associated clock recovery (CR) subsystem. This usually
involves a bench setup consisting of an oscilloscope (real-time digitizer or equivalent-time
sampling), timing interval analyzers (TIAs) or a bit error rate tester (BERT), and signal
generators capable of deterministic and random pattern generation as well as amplitude control
and jitter injection.
 The ATE industry must essentially reproduce the functionalities of this necessary bench
setup with a high-performance interface between the ATE driver/receiver and DUT.
 Serial I/O ATE must handle the asynchronous nature of this interface, which is different
from conventional synchronous ATE.
 Such ATE could be very expensive and misaligned with the end-user cost expectation.

High-Performance I/O Test Solutions

 An I/O structural test methodology called I/O wrap involves applying the transition fault
test methodology to the I/O circuitry. By tying an output to an input, the output data is
launched and latched back into the input buffer on the following clock. Because most
signal pads are I/O in nature, the I/O wrap methodology is very convenient. Input-only or
output-only pads may be handled through DFT or on the test interface unit (TIU).
 By controlling the delay, one can measure the relative delay between the strobes and data,
all without the need for precision timing measurement from the ATE. This is transition fault
testing of the I/O pair with a tighter clock cycle.
 In the SS signaling protocol, the absolute delay of the I/O is not critical; instead, the relative
delays of the strobes and their associated data bits are what matter. These delay timings are
denoted as time valid before (Tvb) and time valid after (Tva). By stressing this combined timing,
we know how much margin there is for the combined pair (see the sketch after this list). If the
induced delay to the clock/strobes is calibrated, we can even obtain a more accurate measurement
of this combined loop time than external instrumentation provides. Because the failure mechanisms
for signal delay and input setup/hold time are different, the probability of aliasing is very
low.
 Defect-based test method: If a data bit behaves substantially differently from the other data bits,
we can conclude that a defect or local process variation exists in that particular bit and
declare it a failure.
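
A minimal sketch of the Tvb/Tva margin idea referenced above: sweep a calibrated on-chip delay on the strobe and record the window over which the wrapped-back data is still captured correctly; the width of that window is the combined timing margin, obtained without precision ATE timing. All timing numbers are hypothetical:

```python
# Sketch of the strobe-delay sweep: the data is valid only within a window
# relative to the strobe, so the passing range of the sweep gives the margin.
# All numbers below are hypothetical.

DATA_VALID_START = 0.10   # ns: strobe delays below this miss the valid window
DATA_VALID_END = 0.55     # ns: strobe delays above this miss the valid window

def capture_ok(strobe_delay_ns: float) -> bool:
    """Data is latched correctly only if the strobe lands in the valid window."""
    return DATA_VALID_START <= strobe_delay_ns <= DATA_VALID_END

sweep = [i * 0.05 for i in range(16)]             # 0.00 .. 0.75 ns in 50-ps steps
passing = [d for d in sweep if capture_ok(d)]
print(f"valid window ~ {passing[0]:.2f} .. {passing[-1]:.2f} ns "
      f"(margin {passing[-1] - passing[0]:.2f} ns)")
```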

Future Challenges

 The methodologies for testing all this high-speed serial signaling must consider both cost and
quality.
 The decision feedback equalizer (DFE) will be widely used in the receiver to reduce the
BER.
 Test methodologies will have to advance to match and mimic future link and silicon
architectures and technologies.
 On-chip and off-chip test solutions will be necessary for acceptable accuracy, fault
coverage, throughput, and cost.
 There is also a need for a link layer where error detection and correction, data flow
control, and so on are handled.
 In addition, there is a need for a protocol layer, which will turn the internal data transfer
into data packets.
 Cross-domain asynchronous clocking will result in nondeterministic chip responses and will
further lead to mismatches and yield loss.

Improvements

In a linear equalizer, the linear filter may introduce additional noise variance at the output,
which can lead to poor performance. This drawback can be avoided with a decision feedback
equalizer at the expense of a more complex model. The decision feedback equalizer predicts the
noise level of the channel through a noise predictor based on previous noise samples. The
predicted noise is then subtracted from the input signal by a feedback filter to reduce the noise
level of the channel.
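
A minimal sketch of a classic decision feedback equalizer (the noise-predictive variant described above follows the same subtract-before-slice idea): previously decided symbols, weighted by the feedback taps, are subtracted from each incoming sample before the decision. The channel taps and bit stream are hypothetical:

```python
# Sketch of a decision feedback equalizer (DFE): ISI predicted from past
# decisions is subtracted from each sample before slicing.
# Channel taps and the bit stream are hypothetical.

postcursor = [0.4, 0.2]     # channel inter-symbol interference taps
fb_taps = [0.4, 0.2]        # DFE feedback taps (assumed matched to the channel)
bits = [1, -1, -1, 1, 1, -1, 1, -1]

# Channel model: each received sample is the current symbol plus ISI from past symbols.
rx = [b + sum(t * bits[i - k - 1] for k, t in enumerate(postcursor) if i - k - 1 >= 0)
      for i, b in enumerate(bits)]

decisions = []
for sample in rx:
    # Subtract the ISI predicted from past decisions, then slice to +/-1.
    isi = sum(t * decisions[-1 - k] for k, t in enumerate(fb_taps) if len(decisions) > k)
    decisions.append(1 if sample - isi >= 0 else -1)

print("recovered bits match:", decisions == bits)
```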

Having I/O loopback modes in the design mitigates the need for costly high-speed testers to
support manufacturing test. Loopback modes can also support silicon characterization with
bench equipment as well as system-level characterization/test. Internal loopbacks are implemented
on-chip to reduce reliance on the load board and to enable in-field system testing. There are four
different schemes for implementing loopback: (i) analog near-end; (ii) digital near-end;
(iii) analog far-end; and (iv) digital far-end.

5. MEMS TESTING
MEMS is the acronym for microelectromechanical systems. The prefix “micro” indicates the
most important feature of MEMS: their extremely small size. The typical size of MEMS
components is in the range of 1 micrometer (µm) to 1 millimeter (mm). This means that
the key feature size of a MEMS device is usually smaller than the diameter of a human hair. For
feature sizes below 1 µm, quantum effects cannot be ignored; such devices belong to the recently
emerged category of nanoelectromechanical systems (NEMS). Thus, MEMS devices primarily
concentrate on feature sizes from 1 to 1000 µm. Further, the electronic and mechanical parts of
a MEMS device interact with each other, so it can be called a “system.” For example, in a
MEMS system, the signals in a mechanical sensor can be sensed by an electronic circuit, while
actuation instructions from the electronic circuit can be carried out by a mechanical
actuator. Thus, MEMS can incorporate environmental data collection, signal processing, and
actuation in the same “smart” system. Compared with conventional electromechanical
products, MEMS have the following specific features and corresponding advantages:
 Small volume, low weight, and high resolution;
 High reliability;
 Low energy consumption and high efficiency;
 Multifunction capability and built-in intelligence;
 Low cost.
Typical examples of commercial MEMS devices are the ADXL series accelerometers, which
have been widely used in the world’s automobile market.

MEMS Built-In Self-Test


In the sensitivity BIST mode, a certain amount of driving voltage Vd can be applied to
the driving plate to mimic the action of a physical stimulus (i.e., test pattern) with
electrostatic force. In Figure 12.11, if voltage Vd is applied to fixed plate F1, and
nominal voltage Vnom is applied to M, an electrostatic attractive force Fd will be
experienced by the central movable mass. The electrostatic force is used to apply the
input stimulus during the BIST mode, and the device response to the electrostatic force is
measured and compared with the good device response to check whether the device is
faulty. This is the basic idea for the sensitivity test mode of a capacitive MEMS device.
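
A minimal sketch of the electrostatic self-test stimulus under a parallel-plate approximation, where the attractive force is F = εA·V²/(2d²); the plate area, gap, and test voltage below are hypothetical:

```python
# Parallel-plate approximation of the electrostatic BIST stimulus:
# F = eps0 * A * V^2 / (2 * d^2).  Geometry and voltage are hypothetical.

EPS0 = 8.854e-12          # F/m, permittivity of free space

def electrostatic_force(area_m2: float, gap_m: float, volts: float) -> float:
    """Attractive force between two parallel plates at the given voltage."""
    return EPS0 * area_m2 * volts ** 2 / (2 * gap_m ** 2)

A = 100e-6 * 100e-6       # 100 um x 100 um plate, hypothetical
d = 2e-6                  # 2 um gap, hypothetical
Vd = 5.0                  # driving voltage applied during BIST, hypothetical

print(f"F = {electrostatic_force(A, d, Vd):.3e} N")
```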

 Symmetry BIST Scheme:
The symmetry BIST scheme is used for capacitive MEMS devices utilizing central mass
partitioning. The following presentation is mainly based on the idea of fixed-plate
partitioning.
 A Dual-Mode BIST Technique:
A dual-mode BIST technique for capacitive MEMS devices can be implemented by
dividing the fixed capacitance plates at each side of the movable microstructure into
three portions: one for electrostatic activation and the other two equal portions for
capacitance sensing.

A BIST Example for MEMS Comb Accelerometers:


A typical surface-micromachined comb accelerometer is shown in the figure. The comb
accelerometer is made of a thin layer of polysilicon on top of a silicon substrate. The
thickness of the polysilicon structural layer is about 2 µm. The fixed portion of the device includes
four anchors and many left and right fixed fingers. The movable portion of the device
includes four tether beams, the central movable mass, and all the movable fingers protruding from
the mass.

Improvements and Innovations through the years:


 In 2015, a test method for MEMS devices was presented in which physical defects are
detected in the frequency domain rather than the time domain. A resonator, which can be
part of a readout circuit, is utilized to test capacitive micro-electro-mechanical systems
(MEMS). The proposed technique is based on the principle of resonant frequency, where
variations of the resonant frequency are observed to detect structural defects. To verify
the validity of the proposed approach, a MEMS comb drive was designed and fabricated.
Measurement and simulation results indicate that the proposed method can be used to
capture common comb-drive defects such as missing or broken fingers, shorted fingers,
and tilted arms.

 Improvements in the MEMS accelerometer through the years (figure: evolution of the MEMS accelerometer).


Improvements in BIST MEMS (2012)
Early MEMS sensors did not have built-in self-test features. Engineers were left to their
own devices and had to physically tilt (in the accelerometer case) or rotate (in the gyroscope
case) the printed circuit board (PCB) with the mounted sensor and measure the sensor’s
output to determine whether the sensor was working according to its specifications. If the
output changed according to the motion applied, the sensor was working properly;
otherwise, the sensor was considered “failed.” Obviously, this ponderous procedure was not
suitable for mass production.

Latest MEMS Testing Techniques


Recently, embedded self-test has become an indispensable feature in MEMS sensors.
The concept of the self-test is to apply an electrostatic force to the mechanical sensing
element of the sensor and simulate the situation as if an external motion or rotation were
applied to the device. If the change in sensor output with and without the self-test feature
enabled is within the specified range, the sensor operates correctly according to its
specifications.
To test MEMS, researchers have come up with a wide variety of techniques, each able to
measure certain quantities. However, no single technology covers everything; each has
strengths as well as weaknesses. [4]

Below is a list with all major and some minor technologies employed in MEMS testing:

 Atomic force microscopy (AFM)
 Confocal microscopy (CM)
 Digital holographic microscopy (DHM)
 Laser Doppler vibrometer (LDV)
 Optical microscopy (OM)
 Scanning electron microscopy (SEM)
 Strobe video microscopy (SVM)
 White light interferometry (WLI)
The following technologies were experimented with but are no longer considered for MEMS testing:

 Beam deflection
 Electronic speckle pattern interferometry (ESPI)
 Ellipsometry
 Light scattering
 Spectroscopy

Recent Applications of MEMS:


 Brain–computer interface
 Cantilever - one of the most common forms of MEMS.
 Electrostatic motors used where coils are difficult to fabricate
 Kelvin probe force microscope
 MEMS sensor generations
 MEMS thermal actuator, MEMS actuation created by thermal expansion
 Micro-opto-electromechanical systems (MOEMS), MEMS including optical elements
 Neural dust - millimeter-sized devices operated as wirelessly powered nerve sensors
 Photoelectrowetting, MEMS optical actuation using photo-sensitive wetting
 Micropower: hydrogen generators, gas turbines, and electrical generators made of etched
silicon.
 Millipede memory, a MEMS technology for non-volatile data storage of more than a
terabit per square inch.
 Nanoelectromechanical systems are similar to MEMS but smaller.
 Scratch Drive Actuator, MEMS actuation using repeatedly applied voltage differences
[3]

Conclusions:
Microelectromechanical systems have achieved tremendous progress in recent decades. Various
MEMS devices based upon different working principles have been developed, and MEMS have also
found broad applications in various areas. With the rapid development of MEMS technology and
its integration into system-on-chip (SOC) designs, MEMS testing (especially BIST) is becoming
an even more important issue. An efficient and robust test solution is urgently needed for
MEMS; however, due to the great diversity of MEMS structures and working principles, the
various defect sources, multiple-field coupling, and their essentially analog nature, MEMS testing
remains very challenging.
6. RF Testing
RF devices operate at very high frequencies (300 MHz and beyond) and are ubiquitous in
the form of cellular phones, laptops with integrated wireless access, mobile PDAs, and various
other wireless devices. Apart from the uses described above, RF circuits are used for numerous
other applications (e.g., medical care, air traffic control in airports, radar, and
satellite and deep-space communications).

Radiofrequency identification
The convenience of radiofrequency identification (RFID) is contributing to its increasing
popularity for applications such as highway tollbooths, supermarkets, and warehouses.

To measure the gain of a device, the DUT is stimulated with a single-tone input, with the power of
the applied tone well within its linear region of operation. The ratio of the output power to the
input power is specified as the gain of the DUT.

The conversion gain (CG) is measured for mixers (both up-conversion and down-conversion) to
specify the gain in signal power when frequency translation of the signal is performed using a
local oscillator (LO) signal. Thus, conversion gain is defined as the ratio of the output
power of the frequency-translated tone to the power of the input tone.
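
As a numerical illustration (the power levels are hypothetical), both gain and conversion gain reduce to a ratio of output power to input power, usually quoted in dB:

```python
import math

# Sketch of the gain / conversion-gain computation above: the ratio of output
# power to input power, expressed in dB. Power levels are hypothetical.

def gain_db(p_out_mw: float, p_in_mw: float) -> float:
    return 10 * math.log10(p_out_mw / p_in_mw)

print(gain_db(p_out_mw=10.0, p_in_mw=0.1))    # amplifier-style gain: 20 dB
print(gain_db(p_out_mw=0.5, p_in_mw=0.05))    # mixer conversion gain: 10 dB
```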

Third-order intercept
To measure the third-order intercept (TOI) point, two tones with the same amplitude and a small
difference in frequency are applied to the DUT, the output of which can easily be viewed
using a spectrum analyzer.
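
As an illustration (the power levels are hypothetical), the output third-order intercept point is commonly extrapolated from a single two-tone measurement as OIP3 = P_fund + ΔP/2, where ΔP is the spacing in dB between the fundamental tones and the third-order products:

```python
# Sketch of the standard two-tone TOI extrapolation: with the fundamental
# output tones at p_fund_dbm and the third-order products at p_im3_dbm,
# OIP3 = p_fund + (p_fund - p_im3) / 2.  Power levels are hypothetical.

def oip3_dbm(p_fund_dbm: float, p_im3_dbm: float) -> float:
    return p_fund_dbm + (p_fund_dbm - p_im3_dbm) / 2

print(oip3_dbm(p_fund_dbm=-10.0, p_im3_dbm=-50.0))   # -> 10.0 dBm
```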

Noise figure
The noise figure (NF) is the noise factor expressed in decibels; it characterizes the noise
performance of a device or a system.

Adjacent channel power ratio


The adjacent channel power ratio (ACPR) specifies the amount of power leakage into adjacent
bands of communication. To test for ACPR, a pseudo-random bitstream is transmitted from the
baseband DSP and the spectrum of the transmitter output is captured. From this captured output
spectral response, the in-band channel power and the out-of-band channel power are measured.
The ratio of these two quantities is specified as the ACPR of the transmitter.

Error vector magnitude


The error vector magnitude (EVM) is a system-level specification measured at baseband. It
describes the quality of the modulation and readily exposes any non-idealities within the system. To
measure the EVM, a set of known bits is modulated in the transmitter baseband to create
constellation symbols.
Alternate test relies on a strong correlation between the response of the DUT to an applied
stimulus and its performance metrics (i.e., specifications). To do so, the method finds a test
input stimulus, using an optimization algorithm, such that the sensitivity of the test response to
the specifications is maximized.
A frequency-domain behavioral simulation framework has also been developed; it has been
demonstrated that by using this method, test generation time can be significantly reduced without
loss of accuracy. Currently, similar methods are in use for test generation purposes.
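
A minimal sketch of how EVM is typically computed at baseband: the RMS magnitude of the error vectors between measured and reference constellation points, normalized to the reference power. The symbol values below are hypothetical (QPSK-like):

```python
import math

# Sketch of a typical EVM computation: RMS error-vector magnitude between the
# measured and ideal constellation symbols, normalized to the reference power
# and quoted in percent. Symbol values are hypothetical.

reference = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
measured = [1.05 + 0.98j, -0.97 + 1.02j, -1.01 - 0.95j, 0.99 - 1.04j]

err_power = sum(abs(m - r) ** 2 for m, r in zip(measured, reference))
ref_power = sum(abs(r) ** 2 for r in reference)
evm_rms = math.sqrt(err_power / ref_power)
print(f"EVM = {evm_rms * 100:.2f} %")
```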

A novel method has been presented in which RF test signals are generated using DSPs. Also,
researchers are trying to minimize RF testing cost by making wafer-probe testing more efficient
so that bad ICs are not packaged, thus saving packaging costs, which can be significant for RF
SOCs. A loopback delay-insertion-based DFT method has been proposed to minimize the overall test
cost.

Improvements

Advances In MEMS Switches For RF Test Applications


Materials, designs, and fabrication techniques have been developed that extend this MEMS switching
technology’s power-handling capability by nearly two orders of magnitude while maintaining
consistent DC-through-RF performance and reliability. This enables MEMS switch devices to serve
as a horizontal platform for applications such as T&M, ATE, handheld electronics, lighting control,
and electrical protection devices such as circuit breakers. Advances in solid-state technology over
the last several decades have produced very good solid-state switches that have enabled significant
improvements in performance and reductions in the size and cost of modern T&M and ATE systems.
Unfortunately, solid-state technology has not been able to completely eliminate the need for
electromechanical (EM) switches in such systems.

Focused Calibration for Advanced RF Test with Embedded RF Detectors


On-chip RF detectors have been used for direct measurements of gain or the 1-dB compression point
(1dBcp) in receivers and transmitters. A measurement technique aimed at the intermodulation
specification IP3, supported by spectral analysis of the RF detector response at low frequency, has
also been presented. Recently, sub-ranging detectors with increased gain and dynamic range
have been proposed, and efforts toward additional measurements with RF detectors have been
demonstrated, such as quadrature mismatch, RF isolation, and intermodulation distortion (IMD).

RF detectors also serve in two-tone measurements aimed at the IP3/IP2 specifications, which are
essential for RF front-end blocks. The measurement technique is supported by a customized
model of the circuit under test (CUT), since an RF detector cannot distinguish among different
spectral components and, for a given test, the contribution of one spectral component can be
obscured by another. To cope with this problem, the interfering components must be carefully
identified so that the measurement can be de-embedded. This comes at the price of more simulations
preceding the test, but the actual test time is not increased. The measurement resolution
must be high enough to distinguish among the signal-level increments introduced by the nonlinear
products (IMD and HD) of the CUT. This is a challenge, since these small differences determine
the quality of the test [2].
Advancing RF Test with Open FPGAs
Software-defined RF test system architectures have become increasingly popular over the past
couple of decades. Almost every commercial off-the-shelf (COTS) automated RF test system
today uses application software to communicate with the instrument through a bus interface. As RF
applications become more complex, engineers are continuously challenged to increase
functionality without increasing test times and, ultimately, test cost. While improvements in test
measurement algorithms, bus speeds, and CPU speeds have reduced test times, further
improvements are necessary to address the continued increase in the complexity of RF test
applications. To address the need for speed and flexibility, COTS RF test instruments
increasingly include field-programmable gate arrays (FPGAs). FPGAs are reprogrammable
silicon chips that can be configured to implement custom hardware functionality through
software development environments. While FPGAs offer an increase in performance, they are
typically closed, with fixed personalities designed for specific purposes, and allow little
customization. User-programmable FPGAs have a significant advantage over closed, fixed-
personality FPGAs because they make it possible to customize RF instrumentation hardware for
specific application needs [3].

RF and Microwave Production Test Requirements for Advanced Mixed-Signal Devices
Test and measurement (T&M) is at the beginning and end of the functional design of all advanced
mixed-signal (AMS) devices. In the semiconductor industry, T&M is a challenge of great importance
for device qualification and characterization in the laboratory and for high-volume mass
production. All semiconductor manufacturers remain preoccupied with the test time and cost of AMS
devices, both to accelerate delivery to customers and to reduce time to market. These constraints
are dictated and imposed by customers and by competitive pressure, especially in wireless
communications systems and applications, which are the economic driver of the semiconductor
industry. Semiconductor manufacturers are also the main providers of AMS devices for the market.
They have to qualify their RF/microwave components on specific correlation benches before
bringing devices to mass-production test. Usually, the correlation benches consist of several
measurement instruments, such as a microwave vector network analyzer (VNA), a spectrum analyzer,
a power meter, and a noise figure meter, and their specifications have to be chosen correctly.
The measurement method is usually based on a removable test fixture, a socket solution, or a
soldered-package solution.

High volume facilities


Introducing a product onto a high-volume test floor is a very delicate step. For a large
range of complex RFIC and RF SOC wireless device applications, the AMS and RF devices
tested are intended for wireless LAN, Bluetooth, set-top box (STB), 3G and 4G cellular,
WiMAX™, and newly emerging standards such as LTE. The main, well-known manufacturers of
automatic test equipment (ATE) can offer suitable performance levels. These testers operate on
test floors in different locations throughout the world. Different ATE-based RF testers are
used for both on-wafer and final tests. In the final test, an automatic handler puts the device
into a socket to carry out the test steps and removes it from the socket before moving on to the
next device. In order to know the intrinsic parameters of the device under test (DUT) with a good
level of precision, a comparison is usually made between experimental results and those obtained
theoretically from a simulation model. At present, AMS devices operate in telecommunications
applications and subsystems at high frequencies and are connected to one another by transmission
lines.

References:
1. P. Gupta, and M. S. Hsiao, “ALAPTF: A new transition fault model and the ATPG
algorithm,” in Proc. Int. Test Conf. (ITC’04), pp. 1053–1060, 2004
2. W. Qiu, J. Wang, D. Walker, D. Reddy, L. Xiang, L. Zhou, W. Shi, and H. Balachandran, “K
Longest Paths Per Gate (KLPG) Test Generation for Scan-Based Sequential Circuits,” in
Proc. IEEE ITC, pp. 223–231, 2004
3. A. K. Majhi, V. D. Agrawal, J. Jacob, L. M. Patnaik, “Line coverage of path delay faults,”
IEEE
Trans. on Very Large Scale Integration (VLSI) Systems, vol. 8, no. 5, pp. 610–614, 2000
4. H. Lee, S. Natarajan, S. Patil, I. Pomeranz, “Selecting High-Quality Delay Tests for
Manufacturing Test and Debug,” in Proc. IEEE International Symposium on Defect and Fault-
Tolerance in VLSI Systems (DFT’06), 2006
5. S. Goel, N. Devta-Prasanna and R. Turakhia, “Effective and Efficient Test pattern Generation
for Small Delay Defects,” IEEE VLSI Test Symposium (VTS’09), 2009
6. N. Ahmed, M. Tehranipoor and V. Jayaram, “Timing-Based Delay Test for Screening Small
Delay Defects,” IEEE Design Automation Conf., pp. 320–325, 2006
7. M. Yilmaz, K. Chakrabarty, and M. Tehranipoor, “Test-Pattern Grading and Pattern Selection
for Small-Delay Defects,” in Proc. IEEE VLSI Test Symposium (VTS’08), 2008
8. M. Yilmaz, K. Chakrabarty, and M. Tehranipoor, “Interconnect-Aware and Layout-Oriented
Test-Pattern Selection for Small-Delay Defects,” in Proc. IEEE Int. Test Conference (ITC’08),
2008
9. R. Mattiuzzo, D. Appello, and C. Allsup, “Small Delay Defect Testing,” http://www.tmworld.com/
article/CA6660051.html, Test & Measurement World, 2009
10. A. Dianat, A. Attaran, R. Rashidzadeh, “Test method for capacitive MEMS devices utilizing
pierce oscillator”, Circuits and Systems (ISCAS), 2015 IEEE International Symposium on 24-27
May 2015.
11. http://ieee-sensors2013.org/sites/ieee-sensors2013.org/files/Serrano_Accels.pdf
12. https://en.wikipedia.org/wiki/Microelectromechanical_systems
13. https://en.wikipedia.org/wiki/MEMS_testing
