This document summarizes, and lists proposed improvements for, delay testing, FPGA testing, coping with physical failures and soft errors, high-speed I/O testing, MEMS testing, and RF testing:
1. Delay Testing:
The objective of delay testing is to detect timing defects and ensure that the design meets the
desired performance specifications. The need for delay testing has evolved from a common
problem faced by the semiconductor industry: designs that function properly at low clock
frequencies might fail at the desired operational speed.
The growing need for delay testing is a result of advances in very-large-scale integration (VLSI)
technology and an increase in design speed. These factors are also changing the target objectives
of delay tests. In the early days, most defects affecting the performance could be detected using
tests for gross delay defects. The increase in circuit size has resulted in fault models that can
detect distributed defects localized to a certain area of the chip. With the introduction of deep
submicron technology, noise effects are becoming significant contributors to timing failures and
they call for further adaptations of the fault models and testing strategies.
In normal operation, only one clock (system clock) is used to control the input and output latches
(in a broader sense, storage elements), and its period is Tc. In this illustration, the input and
output latches are controlled by two different clocks in the test mode: the first vector is applied at
time t0 and the second at time t1. The interval Ts = t1 - t0 is assumed to be sufficient for all signals in the circuit to stabilize under the
first vector. After the second vector is applied, the circuit is allowed to settle down only until
time t2, where t2 - t1 = Tc. At time t2, the primary output values are observed and compared to a
prestored response of a fault-free circuit to determine if there is a defect. If the input and output
latches use the same clock source in the test mode, the scheme illustrated in Figure 12.2 still
applies.
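The pass/fail decision of this two-vector scheme can be sketched in a few lines. This is an illustrative model, not from the text: a defect on the sensitized path is observed at the capture edge only when the extra delay pushes the transition past the clock period Tc.

```python
# Illustrative sketch of the two-vector at-speed test decision: V1 is
# applied at t0 and allowed to settle for Ts = t1 - t0; V2 is launched at
# t1 and the outputs are captured at t2 = t1 + Tc.
def delay_fault_detected(path_delay: float, tc: float) -> bool:
    """A timing defect on the sensitized path is caught at the capture
    edge only if the transition has not settled within the clock period."""
    return path_delay > tc

# A path with 12 ns of delay passes a relaxed 15 ns capture window
# but fails when tested at the 10 ns operational clock period.
assert not delay_fault_detected(12.0, tc=15.0)
assert delay_fault_detected(12.0, tc=10.0)
```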
Three delay fault models are considered: the transition fault model, the gate-delay fault model, and the path-delay fault model. It is assumed that in the nominal design each gate has a given fall (rise) delay
from each input to output pin. Also, the interconnects are assumed to have given rise (fall)
delays. The transition fault model assumes that the delay fault affects only one gate in the
circuit. There are two transition faults associated with each gate: a slow-to-rise fault and a slow-
to-fall fault.
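Because the model associates exactly two faults with each gate, the fault list grows only linearly with gate count, which can be sketched directly (the netlist names here are hypothetical):

```python
# The transition fault model assigns two faults (slow-to-rise and
# slow-to-fall) to each gate, so the fault list is linear in gate count.
def transition_fault_list(gates):
    faults = []
    for g in gates:
        faults.append((g, "slow-to-rise"))
        faults.append((g, "slow-to-fall"))
    return faults

faults = transition_fault_list(["G1", "G2", "G3"])
assert len(faults) == 6  # two faults per gate
```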
Use of functional vectors that could be applied at the system’s operational speed and
should catch some delay defects (functional vectors should be evaluated for transition
fault coverage)
Application of ATPG tests for undetected transition faults
Application of ATPG tests for long path-delay faults
Summary:
Delay testing is becoming an increasingly crucial part of the VLSI design testing process.
Continuously increasing circuit operating frequencies result in designs in which performance
specifications can be violated by very small defects. Process variations are now more likely to
cause marginal violations of the performance specifications. The continuous shrinking of device
feature size, increased number of interconnect layers and gate density, and higher voltage drop
along the power nets give rise to noise faults, such as distributed delay defects, power supply
noise, substrate noise, and crosstalk. The figure below represents the cost of testing.
Due to the statistical timing involved, delay along a target path is highly pattern dependent. In
order to produce high-quality patterns, we therefore need to generate test patterns that not only
sensitize the given set of statistical critical paths but also exercise the worst-case delays along
these paths. Also, with statistical timing and statistical delay defect models, the notion of path
sensitization becomes probabilistic; thus, the key challenge is to develop a feasible statistically
constrained ATPG method where statistical timing-sensitization constraints can be employed to
guide the ATPG justification process.
In [1], the authors proposed an as late as possible transition fault (ALAPTF) model to
launch one or more transition faults at the fault site as late as possible to detect the faults
through the path with least slack. This method requires a large CPU run time compared to
traditional ATPGs.
A method to generate K longest paths per gate for testing transition faults was proposed
in [2]. However, the longest path through a gate may still be a short path for the design,
and thus may not be efficient for SDD detection. Furthermore, this method also suffers
from high complexity, extended CPU run time, and pattern count.
The authors in [3] proposed a delay fault coverage metric to detect the longest sensitized
path affecting a TDF fault site. It is based on the robust path delay test and attempts to
find the longest sensitizable path passing through the target fault site and generating a
slow-to-rise or slow-to-fall transition. It is impractical to apply this method to large
industrial circuits, since the number of long paths increases exponentially with circuit size.
The authors in [4] proposed path-based and cone-based metrics for estimating the path
delay under test, which can be used for path length analysis. This method is not accurate
due to its dependence on gate delay models, unit gate delay, and differential gate delay
models, which were determined by the gate type, the number of fan-in and fan-out nets,
and the transition type at the outputs. Furthermore, this method is also based on static
timing analysis. It does not take into account pattern-induced noise, process variations,
and their impact on path delays.
The authors in [5] proposed two hybrid methods using 1-detect and timing aware ATPGs
to detect SDDs with a reduced pattern count. These methods first identify a subset of
transition faults that are critical and should be targeted by the timing-aware ATPG. Then
top-off ATPG is run on the undetected faults after timing-aware ATPG to meet the fault
coverage requirement. The efficiency of this method is questionable, since it still results
in a pattern count much larger than that of traditional 1-detect ATPG.
In [6], a static-timing-analysis based method was proposed to generate and select patterns
that sensitize long paths. It finds long paths (LPs), intermediate paths, and short paths to
each observation point using static timing analysis tools. Then intermediate path and
short path observation points are masked in the pattern generation procedure to force the
ATPG tool to generate patterns for LPs. Next, a pattern selection procedure is applied to
ensure the pattern quality.
The output-deviation based method was proposed in [7]. This method defines gate-delay
defect probabilities (DDPs) to model delay variations in a design. Gaussian distribution
gate delay is assumed and a delay defect probability matrix (DDPM) is assigned to each
gate. Then, the signal-transition probabilities are propagated to outputs to obtain the
output deviations, which are used for pattern evaluation and selection. However, when there
are many gates along the paths, the calculated output-deviation metric can saturate, and
similar output deviations (close to 1) are obtained for both long and intermediate paths
(relative to the clock cycle). Since modern designs contain a large number of paths with
large depth in terms of gate count, the output
deviation-based method may not be very effective. A similar method was developed in
[8] to take into account the contributions of interconnect to the total delay of sensitized
paths. Unfortunately, it also suffers from the saturation problem.
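The saturation effect can be illustrated with a toy model. This is an assumed formulation, not the exact metric of [7]: if each gate on a sensitized path contributes an independent delay-defect probability p_i, a cumulative deviation of 1 - prod(1 - p_i) approaches 1 as gate count grows, so long and merely intermediate paths become indistinguishable.

```python
# Toy illustration (assumed, not the exact formulation of [7]) of why an
# output-deviation-style metric saturates on deep paths: each gate adds an
# independent defect probability, and the cumulative deviation tends to 1.
def output_deviation(ps):
    prod = 1.0
    for p in ps:
        prod *= (1.0 - p)
    return 1.0 - prod

short_path = output_deviation([0.05] * 10)    # 10 gates
deep_path = output_deviation([0.05] * 100)    # 100 gates
deeper_path = output_deviation([0.05] * 120)  # 120 gates
assert short_path < 0.5
# Both deep paths saturate near 1 and can no longer be ranked apart.
assert deep_path > 0.99 and deeper_path > 0.99
```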
A false-path-aware statistical timing analysis framework was proposed in [9]. It selects
all logically sensitizable long paths using worst-case statistical timing information and
obtains the true timing information of the selected paths. After obtaining the critical long
paths, it uses path delay fault test patterns to target them. This method will be limited by
the same constraints the path delay fault test experiences.
2. Coping with Physical Failures and Soft Errors:
Pattern Generation
Due to the nature of signal integrity loss (fault) and its intermittent occurrence, integrity fault
testing must be done at speed. Pin and probing limitations further restrict the accurate
observation of signal integrity losses. Conventional pseudo-random pattern generators (PRPGs)
have been tried to stimulate maximum integrity loss on long interconnects.
Improvements
In spite of the good test quality that pseudo-random patterns can achieve, the random
nature of the process prevents any test session from having a bound on its length.
The maximum aggressor (MA) fault model is one of the fault models proposed for
crosstalk.
Use a genetic algorithm (with a random basis) to stimulate the worst-case power supply noise (PSN).
A new simulation platform, named TIARA-G4, is introduced that numerically evaluates
the sensitivity of advanced semiconductor memories (static RAMs) subjected to natural
radiation at ground level.
TIARA-G4 should be used in the future to more deeply investigate the radiation response
of ultimate MOS circuits and alternative nanoelectronic devices in the natural (terrestrial)
environment.
Mixed-mode vector pattern generation: pseudorandom vector tests followed by deterministic tests.
Built-In Logic Block Observation
Practical Compression Techniques
Eliminate the radiation contaminant from packaging material
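The maximum aggressor (MA) fault model mentioned above can be sketched as a pattern generator. The encoding here is an assumption for illustration: for each victim line, the victim is held quiet while every other line (the aggressors) switches simultaneously, maximizing the coupled crosstalk noise.

```python
# Sketch of maximum aggressor (MA) pattern generation (assumed encoding):
# hold the victim line at a static value while all aggressors switch
# together between the two vectors, maximizing coupled noise on the victim.
def ma_patterns(n_lines, victim, victim_value=0):
    """Return the two-vector pair (V1, V2) for one MA fault on `victim`."""
    v1 = [0] * n_lines
    v2 = [1] * n_lines          # all aggressors rise together
    v1[victim] = victim_value   # the victim is held quiet on both vectors
    v2[victim] = victim_value
    return v1, v2

# Four interconnect lines, victim on line 2: only line 2 stays static.
v1, v2 = ma_patterns(4, victim=2)
assert v1 == [0, 0, 0, 0] and v2 == [1, 1, 0, 1]
```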
3. FPGA Testing:
Field programmable gate arrays (FPGAs) are generally composed of a two-dimensional array of
programmable logic blocks (PLBs) interconnected by a programmable routing network with
programmable I/O cells at the periphery of the device. The larger FPGAs, with over 22,000
PLBs and 800 specialized cores, could easily reach the 100-million transistor mark and pose a
testing challenge in terms of their size and diversity of functions.
Programmability Impact:
The system function performed by the FPGA is controlled by an underlying configuration
memory. In most of the current FPGAs, the configuration memory is a RAM ranging in size
from 32 Kbits to 50 Mbits. The system function can be changed at any time by simply rewriting
the configuration memory with new data, referred to as reconfiguration. Another trend is FPGAs
that support dynamic partial reconfiguration, where a portion of the FPGA can be reconfigured
while the remainder of the FPGA is performing normal system operation, also referred to as
runtime reconfiguration. The fault models used for testing the routing resources include shorts
(bridging faults) and opens in the wire segments, stuck-at-1 and stuck-at-0 wire segments, and
stuck-on and stuck-off programmable switches, which include the controlling configuration
memory bits stuck-at-1 and stuck-at-0. While the programmable switch stuck-off fault can be
detected by a simple continuity test, stuck-on faults are similar to bridging faults and require
opposite logic values to be applied to the wire segments on both sides of the switch while
monitoring both wire segments in order to detect the stuck-on fault. When faults are detected, the
system function can be reconfigured to avoid the faulty resources if the faults can be diagnosed
(identified and located); therefore, diagnosis of the faulty resources is an important aspect of
FPGA testing in order to take full advantage of the fault and/or defect tolerant potential of these
devices.
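The stuck-on test described above can be sketched as a simple decision rule. This is an illustrative abstraction of the procedure, not an actual FPGA test tool: with the switch configured off, opposite values are driven on the two wire segments and both are monitored; a stuck-on switch behaves like a bridge and pulls one side away from its driven value.

```python
# Illustrative check for a programmable switch configured OFF: drive 0 on
# segment A and 1 on segment B, then monitor both segments. A fault-free
# (truly off) switch leaves both segments at their driven values; a
# stuck-on switch acts as a bridge and disturbs at least one of them.
def stuck_on_detected(observed_a, observed_b):
    return (observed_a, observed_b) != (0, 1)

assert not stuck_on_detected(0, 1)  # switch is truly off
assert stuck_on_detected(1, 1)      # bridge pulled segment A high
```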
Testing Approaches:
Two types of testing approaches have been developed for FPGAs: external testing and built-in
self-test (BIST). In external testing approaches, the FPGA is programmed for a given test
configuration with the application of input test stimuli and the monitoring of output responses
performed by external sources such as a test machine. For FPGAs with boundary scan that
support INTEST capabilities, the input test stimuli can be applied and output responses can be
monitored via the boundary scan interface; otherwise, the FPGA I/O pins must be used, resulting
in package-dependent testing. The basic idea in BIST for FPGAs is to configure some of the
PLBs as test pattern generators (TPGs) and output response analyzers (ORAs). These BIST
resources are then used to detect faults in PLBs, routing resources, and special cores such as
RAMs and DSPs. Once the programmable resources have been tested, the FPGA is reconfigured
for the intended system function without any overhead or performance penalties due to the BIST
circuitry. This facilitates system-level use of the BIST configurations.
Recent Trends:
More recent trends in FPGA testing include delay fault testing and the use of embedded
processor cores for on-chip test configuration generation and application. Testing for delay faults
in FPGAs is important. External test techniques and BIST approaches have been developed to
detect delay faults in FPGAs. The incorporation of embedded microprocessor cores that can
write and read the FPGA configuration memory has facilitated the algorithmic generation of test
configurations from within the FPGA instead of downloading test configurations. Complex
programmable logic devices (CPLDs) are similar to FPGAs in that they contain programmable
logic and routing resources as well as embedded cores such as RAMs and FIFOs. Now that
FPGAs are incorporating embedded cores such as memories, DSPs, and microprocessors,
FPGAs more closely resemble system-on-chip (SOC) implementations. At the same time, SOCs
are incorporating more embedded FPGA cores. As a result, FPGA testing techniques are
becoming increasingly important for a broader range of system applications.
4. High-Speed I/O Testing:
I/O testing is handled by automatic test equipment (ATE) that mimics the other side of the I/O
interfaces. The clock is generated from a tester channel, and so are the test stimuli. As the device
under test (DUT) responds with its generated signals, the output responses are strobed by the
ATE with its own timing system, so this approach works perfectly with common clock (CC) devices.
With this scheme, ATE performance must rise with the device I/O performance. The advent of
source-synchronous (SS) technology has turned the whole ATE test methodology upside down.
For the ATE to send SS signals, the scenario is the same as that of the CC, and the ATE
generates the strobes and the data. However, when it is time for the DUT to send data, it sends
out strobes and the ATE is supposed to use those strobes to strobe the data. Here, the fixed
timing system of the tester fails.
Challenges:
The higher speed serial signaling link approach is more challenging. The test process
involves characterizing the output signals from the DUT and generating worst-case
signals to test the receiver and its associated CR subsystem. This usually involves a
bench setup consisting of an oscilloscope (real-time digitizer or equivalent sampling),
timing interval analyzers (TIAs), or bit error rate tester (BERT) and signal generators that
are capable of deterministic and random pattern generation as well as amplitude control
and jitter injection.
The ATE industry must essentially reproduce the functionalities of this necessary bench
setup with a high-performance interface between the ATE driver/receiver and DUT.
Serial I/O ATE must handle the asynchronous nature of this interface, which is different
from conventional synchronous ATE.
Such ATE could be very expensive and misaligned with the end-user cost expectation.
An I/O structural test methodology called I/O wrap involves applying the transition fault
test methodology to the I/O circuitry. By tying an output to an input, the output data is
launched and latched back into the input buffer on the following clock. Because most
signal pads are I/O in nature, the I/O wrap methodology is very convenient. Input-only or
output-only pads can be handled through DFT or on the test interface unit (TIU).
By controlling the delay, one can measure the relative delay between the strobes and data,
all without the need for precision timing measurement from ATE. This is transition fault
testing of the I/O pair with a tighter clock cycle.
In the SS signaling protocol, the absolute delay of the I/O is not critical; instead, the
relative delay between the strobes and their associated data bits is important. These delay
timings are denoted as time valid before (Tvb) and time valid after (Tva). By stressing
this combined timing, we
know how much margin there is with the combined pair. If the induced delay to the
clock/strobes is calibrated, we can even have more accurate measurement of this
combined loop time than external instrumentation. Because the failure mechanisms for
signal delay and input setup/hold time are different, the probability of aliasing is very
low.
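The margin search described above can be sketched as a sweep of the calibrated delay injected into the strobe path. The function and device model here are hypothetical: the passing window's width approximates the combined Tvb + Tva margin without any precision timing measurement on the ATE side.

```python
# Sketch (names assumed) of measuring the combined timing margin: sweep a
# calibrated delay injected into the strobe path and record the window in
# which the wrapped-back data is still latched correctly.
def timing_margin(passes_at, delays):
    """`passes_at(d)` returns True if the I/O wrap test passes with an
    injected strobe delay d; the result is the width of the passing window."""
    passing = [d for d in delays if passes_at(d)]
    return (max(passing) - min(passing)) if passing else 0.0

# Toy device model: the loop passes while the injected delay is <= 300 ps.
delays = [d * 25.0 for d in range(21)]  # 0..500 ps in 25 ps steps
margin = timing_margin(lambda d: d <= 300.0, delays)
assert margin == 300.0
```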
Defect-based test method: If a data bit is substantially different from the other data bits,
we can conclude that a defect or local process variation exists with that particular bit and
declare that to be a failure.
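A minimal sketch of this defect-based screen: flag any data bit whose measured delay deviates from the lane mean by more than an assumed threshold (here two population standard deviations; the threshold choice is an assumption, not from the text).

```python
# Minimal outlier screen for the defect-based test described above:
# a bit whose delay deviates from the lane mean by more than k standard
# deviations (k assumed here) is declared a failure.
import statistics

def outlier_bits(delays_ps, k=2.0):
    mean = statistics.mean(delays_ps)
    sd = statistics.pstdev(delays_ps)
    return [i for i, d in enumerate(delays_ps) if sd and abs(d - mean) > k * sd]

# Bit 5 is substantially slower than its neighbours -> declared a failure.
lane = [101, 99, 100, 102, 98, 160, 100, 101]
assert outlier_bits(lane) == [5]
```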
Future Challenges
The methodologies for testing all this high-speed serial signaling must consider cost and
quality.
The decision feedback equalizer (DFE) will be widely used in the receiver to reduce the
bit error rate (BER).
Test methodologies will have to advance to match and mimic future link and silicon
architectures and technologies.
On-chip and off-chip test solutions will be necessary for acceptable accuracy, fault
coverage, throughput, and cost.
There is also a need for a link layer where error detection and correction, data flow
control, etc., are handled.
In addition, there is a need for a protocol layer, which will turn the internal data transfer
into data packets.
Cross-domain asynchronous clocking will result in nondeterministic chip responses and
further lead to mismatches and yield loss.
Improvements
In a linear equalizer, the linear filter may introduce additional noise variance at the output
signal, which may lead to poor performance. This drawback can be avoided in a decision
feedback equalizer at the expense of a more complex model. The decision feedback equalizer
predicts the noise level of the channel through a noise predictor based on previous noise
samples. The predicted noise is then subtracted from the input signal by a feedback filter to
reduce the noise level of the channel.
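The feedback principle can be sketched in a few lines. This is a minimal, idealized DFE (coefficients and channel are assumed): the feedback filter reconstructs post-cursor inter-symbol interference from previous symbol decisions and subtracts it, so, unlike a linear equalizer, no input noise is amplified.

```python
# Minimal decision feedback equalizer sketch (taps and channel assumed):
# post-cursor ISI is reconstructed from previously decided symbols and
# subtracted from each sample before the new symbol decision is made.
def dfe(samples, fb_taps):
    decisions = []
    for x in samples:
        # ISI contributed by the most recently decided symbols
        isi = sum(c * d for c, d in zip(fb_taps, reversed(decisions)))
        decisions.append(1 if (x - isi) >= 0 else -1)
    return decisions

# Toy channel with one post-cursor tap of 0.4: each received sample
# carries 0.4 of the previous symbol. The DFE cancels it exactly.
tx = [1, 1, -1, 1, -1, -1, 1]
rx = [tx[i] + (0.4 * tx[i - 1] if i else 0.0) for i in range(len(tx))]
assert dfe(rx, fb_taps=[0.4]) == tx
```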
Having I/O loopback modes in the design mitigates the need for the costly high-speed testers to
support manufacturing test. Loopback modes can also support silicon characterization with
bench equipment and system-level characterization/test. Internal loopbacks are implemented
on-chip to reduce reliance on the load board and to enable in-field system testing. There are
four different schemes for implementing loopback: (i) analog near-end; (ii) digital near-end;
(iii) analog far-end; and (iv) digital far-end.
5. MEMS TESTING
MEMS is the acronym for microelectromechanical system. The prefix “micro” indicates the
most important feature of MEMS: its extremely small size. The typical size of MEMS
components is in the range of 1 micron (µm) to 1 millimeter (mm). This means that the key
feature size of a MEMS device is usually smaller than the diameter of a human hair. For
feature sizes below 1 µm, quantum effects cannot be ignored; such devices belong to the
recently emerged concept of the nanoelectromechanical system (NEMS). Thus, MEMS devices
primarily concentrate on feature sizes from 1 to 1000 µm. Further, the electronic and mechanical parts of
a MEMS device interact with each other, so it can be called a “system.” For example, in a
MEMS system, the signals in a mechanical sensor can be sensed by an electronic circuit, while
the actuation instructions from the electronic circuit can be implemented by a mechanical
actuator. Thus, MEMS can incorporate the environment data collection, signal processing, and
actuation in the same “smart” system. When compared with conventional electromechanical
products, MEMS has the following specific features and corresponding advantages:
Small volume, low weight, and high resolution;
High reliability;
Low energy consumption and high efficiency;
Multifunction capabilities and intelligentization;
Low cost.
Typical examples of commercial MEMS devices are the ADXL series accelerometers, which
have been widely used in the world’s automobile market.
Below is a list with all major and some minor technologies employed in MEMS testing:
Beam deflection
Electronic speckle pattern interferometry (ESPI)
Ellipsometry
Light scattering
Spectroscopy
Conclusions:
Microelectromechanical systems have achieved tremendous progress in recent decades. Various
MEMS devices based upon different working principles have been developed, and MEMS has
found broad applications in various areas. With the rapid development of MEMS technology and
its integration into system-on-chip (SOC) designs, MEMS testing (especially BIST) is becoming
an even more important issue. An efficient and robust test solution is urgently needed for
MEMS; however, due to the great diversity of MEMS structures and their working principles,
various defect sources, multiple field coupling, and their essentially analog features, MEMS
testing remains very challenging.
6. RF Testing
These devices operate at very high frequencies (300 MHz and beyond) and are ubiquitous in
the form of cellular phones, laptops with integrated wireless access, mobile PDAs, and various
other wireless devices. Apart from the uses described above, RF circuits are used for numerous
other applications (e.g., medical care, air traffic control in airports, radar applications, and
satellite and deep space communications).
Radiofrequency identification
The convenience of radiofrequency identification (RFID) is contributing to its increasing
popularity for applications such as highway tollbooths, supermarkets, and warehouses.
Gain
To measure the gain of a device, the DUT is stimulated with a single-tone input, with the power
of the applied tone well within its linear region of operation. The ratio of the output power to the
input power is specified as the gain of the DUT.
The conversion gain (CG) is measured for mixers (both up-conversion and down-conversion) to
specify the gain in signal power while frequency translation of the signal is performed via the use
of a local oscillator (LO) signal. Thus, conversion gain is defined as the ratio of the output
power of the translated frequency tone to the power of the input tone.
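In dB terms, both gain and conversion gain reduce to a subtraction of power readings; the values below are illustrative only.

```python
# Gain (and conversion gain, taken at the translated frequency) is the
# ratio of output power to input power; with dBm readings, a power ratio
# in dB is simply a difference.
def gain_db(p_out_dbm, p_in_dbm):
    return p_out_dbm - p_in_dbm

# An amplifier driven at -30 dBm that outputs -15 dBm has 15 dB of gain;
# a down-conversion mixer is treated the same way at the translated tone.
assert gain_db(-15.0, -30.0) == 15.0
```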
Third-order intercept
To measure the third-order intercept (TOI), two tones with the same amplitude and small
difference in frequency are applied to the DUT, the output of which can be very easily viewed
using a spectrum analyzer.
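From that two-tone spectrum, the output-referred third-order intercept follows from the standard relation between the fundamental and the third-order intermodulation (IM3) products; this is a sketch of that relation, with illustrative numbers.

```python
# Output third-order intercept from a two-tone measurement: the IM3
# products appear at 2*f1 - f2 and 2*f2 - f1, and OIP3 (in dBm) equals
# the fundamental power plus half its margin over the IM3 power.
def oip3_dbm(p_fund_dbm, p_im3_dbm):
    return p_fund_dbm + (p_fund_dbm - p_im3_dbm) / 2.0

# Fundamentals at -10 dBm with IM3 products at -50 dBm give +10 dBm OIP3.
assert oip3_dbm(-10.0, -50.0) == 10.0
```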
Noise figure
The noise figure (NF), the decibel measure of the noise factor, characterizes the noise
performance of a device or a system.
A novel method has been presented where RF test signals were generated using DSPs. Also,
researchers are trying to minimize RF testing cost by making wafer-probe testing more efficient
so bad ICs are not packaged, thus saving packaging costs, which may be significant for RF
SOCs. A loopback delay-insertion-based DFT method was proposed to minimize the overall test
cost.
Improvements
RF detectors can also support two-tone measurements aimed at IP3/IP2 specs, which are
essential for RF front-end blocks. The measurement technique is supported by a customized
model of the circuit under test (CUT) since an RF detector cannot distinguish among different
spectral components and for a given test the contribution of one spectral component can be
obscured by another. To cope with this problem, the interfering components must be carefully
identified for de-embedding the measurement. This comes at a price of more simulations
preceding test, but the actual test time is not increased in this way. The measurement resolution
must be high enough to distinguish among signal level increments introduced by nonlinear
products (IMD and HD) of the CUT. This creates a challenge since the respective small
differences decide the quality of test [2].
Advancing RF Test with Open FPGAs
Software-defined RF test system architectures have become increasingly popular over the past
couple of decades. Almost every commercial off-the-shelf (COTS) automated RF test system
today uses application software to communicate through a bus interface to the instrument. As RF
applications become more complex, engineers are continuously challenged with increasing
functionality without increasing test times, and ultimately test cost. While improvements in test
measurement algorithms, bus speeds, and CPU speeds have reduced test times, further
improvements are necessary to address the continued increase in the complexity of RF test
applications. To address the need for speed and flexibility, COTS RF test instruments
increasingly include field-programmable gate arrays (FPGAs). FPGAs are reprogrammable
silicon chips that can be configured to implement custom hardware functionality through
software development environments. While FPGAs offer an increase in performance, FPGAs
typically are closed with fixed personalities designed for specific purposes and allow little
customization. Use reprogrammable FPGAs have a significant advantage over closed, fixed-
personality FPGAs because they make it possible to customize RF instrumentation hardware for
specific application needs [3].
References:
1. P. Gupta and M. S. Hsiao, “ALAPTF: A new transition fault model and the ATPG
algorithm,” in Proc. Int. Test Conf. (ITC’04), pp. 1053–1060, 2004.
2. W. Qiu, J. Wang, D. Walker, D. Reddy, L. Xiang, L. Zhou, W. Shi, and H. Balachandran,
“K Longest Paths Per Gate (KLPG) Test Generation for Scan-Based Sequential Circuits,” in
Proc. IEEE ITC, pp. 223–231, 2004.
3. A. K. Majhi, V. D. Agrawal, J. Jacob, and L. M. Patnaik, “Line coverage of path delay
faults,” IEEE Trans. on Very Large Scale Integration (VLSI) Systems, vol. 8, no. 5,
pp. 610–614, 2000.
4. H. Lee, S. Natarajan, S. Patil, I. Pomeranz, “Selecting High-Quality Delay Tests for
Manufacturing Test and Debug,” in Proc. IEEE International Symposium on Defect and Fault-
Tolerance in VLSI Systems (DFT’06), 2006
5. S. Goel, N. Devta-Prasanna and R. Turakhia, “Effective and Efficient Test pattern Generation
for Small Delay Defects,” IEEE VLSI Test Symposium (VTS’09), 2009
6. N. Ahmed, M. Tehranipoor and V. Jayaram, “Timing-Based Delay Test for Screening Small
Delay Defects,” IEEE Design Automation Conf., pp. 320–325, 2006
7. M. Yilmaz, K. Chakrabarty, and M. Tehranipoor, “Test-Pattern Grading and Pattern Selection
for Small-Delay Defects,” in Proc. IEEE VLSI Test Symposium (VTS’08), 2008
8. M. Yilmaz, K. Chakrabarty, and M. Tehranipoor, “Interconnect-Aware and Layout-Oriented
Test-Pattern Selection for Small-Delay Defects,” in Proc. IEEE Int. Test Conference (ITC’08),
2008
9. R. Mattiuzzo, D. Appello C. Allsup, “Small Delay Defect Testing,” http://www.tmworld.com/
article/CA6660051.html, Test & Measurement World, 2009
10. A. Dianat, A. Attaran, and R. Rashidzadeh, “Test method for capacitive MEMS devices
utilizing a Pierce oscillator,” in Proc. IEEE Int. Symposium on Circuits and Systems (ISCAS),
May 24–27, 2015.
11. http://ieee-sensors2013.org/sites/ieee-sensors2013.org/files/Serrano_Accels.pdf
12. https://en.wikipedia.org/wiki/Microelectromechanical_systems
13. https://en.wikipedia.org/wiki/MEMS_testing