Slide #1
ece156B, Lecture 7
Functional Verification and Testbench Generation: Direct and Random Testing
Slide #2
Functional Verification Demand
Functional verification cost grows
faster than design complexity
[Figure: functional verification effort (simulation cycles, ~1M to 10B) grows faster than design complexity (gates, ~100K to 100M)]
Slide #3
A simple example of why
If you have a design with 10
flip-flops, the maximum number of
states is 2^10 = 1024
With 11 flip-flops, the number of
states is 2^11 = 2048
So in theory, the verification
space grows exponentially while
the design grows linearly
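As a quick illustration (a minimal sketch; 2^n is only an upper bound on the reachable states, not the count actually reachable):

```python
# Adding one flip-flop adds one bit of design but doubles the upper bound
# on the number of states the verification effort may have to cover.
for n in (10, 11, 12, 20):
    print(f"{n} flip-flops -> up to {2**n:,} states")
```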
Slide #4
Approaches
Exhaustive simulation is impossible
You can simulate every instruction but
not every instruction sequence
Formal verification on everything is
impossible
Formal verification can handle up to a
certain number of states
So, the only feasible solution is to
simulate under certain guidance
Slide #5
Pre-silicon Validation cycles
Not that we don't try
[Figure: Pentium 4 pre-silicon validation, full-chip simulation cycles (millions) per week from late '98 through '99, climbing toward ~6000M; all of it amounts to only ~1/4 second of real-time execution]
Slide #6
Verification Crisis
More than 50% of the project budget
already goes to verification
Simulation and testbench preparation
time already drags the time-to-market
Design complexity grows tremendously
with the use of IP cores
Cost of chip re-spin is high
> $100K for ASIC
> $1M for a complex SOC
Slide #7
Functional Verification: Testbench Simulation
In practice, testbench simulation remains the only
effective method for full-chip functional verification
Issues of simulation
Design specification and abstraction
Handbooks
Test stimuli
Manual construction
Direct and random
Definition of correctness
Manual observation
Simulate another behavior model and compare
Construction of monitors (assertion-based methodology)
Effectiveness of tests
Guided by coverage metrics
Length of simulation
billions of cycles (for microprocessors and complex SOCs)
Slide #8
IBM RTPG
RTPG: Biased Pseudo-Random Test Program
Generation
IBM Journal of R&D, 1992
1st to talk about RTPG for verification of R6000
Design Automation Conference, 1995
1st public talk on RTPG for verifying PowerPC
The RTPG evolves into constrained random
verification (Synopsys Vera, SystemVerilog)
Synopsys's Vera verification environment
Check Vera web page for information
Very important piece of technology
Slide #9
How to verify a microprocessor
A test for a microprocessor is nothing but an
assembly program
You would try a set of assembly programs
In the old days, people had tables of
important programs to be applied
These tables were accumulated over years
Based on previous experience
Disadvantages
You can only apply them when the design is ready
If you are developing a new design, then what?
Slide #10
Random Test Program Generation - RTPG
Basic Ideas
Allow users to specify test program
biasing templates instead of the test
programs themselves
Hide detail from users
The tool ensures the validity of the randomly
generated test programs
Automatic guarantee to satisfy all architectural
constraints (among instructions)
Use biasing seeds and constraints to
guide the random generation process
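To make the biasing idea concrete, here is a minimal sketch in Python; the four opcodes and their weights are invented stand-ins for a user-supplied biasing template, and the fixed seed plays the role of the biasing seed:

```python
import random

rng = random.Random(42)  # fixed biasing seed -> reproducible test programs

INSTRUCTION_BIAS = {"Add": 0.4, "Sub": 0.3, "Load": 0.2, "Branch": 0.1}

def pick_instruction(bias):
    """Pick one opcode with probability proportional to its bias weight."""
    ops, weights = zip(*bias.items())
    return rng.choices(ops, weights=weights, k=1)[0]

# Ten biased draws: roughly four Adds for every Branch, on average.
print([pick_instruction(INSTRUCTION_BIAS) for _ in range(10)])
```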
Slide #11
Architectural constraint
For example, a special register has to be set first
before using a branch instruction
Ex.
CMP R0, A, B (if A>B, R0=0 otherwise R0=1)
JNE addr, R0 (if R0 != 0, goto addr)
Moreover, addr should be a 6-bit constant
Constraints can be defined for one instruction
The instruction format limits what you can use
Or defined among instructions
Architectural constraints (for example)
32 registers (20 general purpose, 12 special purpose)
24-bit addressing
Indirect addressing
Slide #12
Biasing structure
Random bias can be supplied along each pre-
defined dimension
For example, you may want to define
Bias to select an instruction
Bias to select the next instruction based on the current one
Bias to select an operand
Bias to use branch and jump
Bias to cause overflow, underflow
Bias to interrupt and to cause exception
A test template is a skeleton of an assembly
program with some random selection along
the pre-defined dimension
7
7
Slide #13
Example
You may have a test program template that looks
like the following flow
Add <random R1-R4> <random R4-R8>
<random R8-R20>
<0.9 prob to add/ 0.1 prob to sub> R3 R5
<random R4-R7>
<0.1 prob> Jump addr
Add R1, R2, <0.9 to have 0xffffffff/ 0.1 to
randomly generate a number>
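A sketch of how a generator might instantiate this template (the comma-separated operand syntax and leaving addr symbolic are assumptions about the assembly format):

```python
import random

rng = random.Random(7)

def reg(lo, hi):
    """<random Rlo-Rhi>: pick a register in the range uniformly."""
    return f"R{rng.randint(lo, hi)}"

def instantiate():
    lines = [f"Add {reg(1, 4)}, {reg(4, 8)}, {reg(8, 20)}"]
    op = "Add" if rng.random() < 0.9 else "Sub"      # 0.9 add / 0.1 sub
    lines.append(f"{op} R3, R5, {reg(4, 7)}")
    if rng.random() < 0.1:                           # 0.1 prob of a jump
        lines.append("Jump addr")
    imm = 0xFFFFFFFF if rng.random() < 0.9 else rng.getrandbits(32)
    lines.append(f"Add R1, R2, {imm:#010x}")         # 0.9 to be 0xffffffff
    return lines

print("\n".join(instantiate()))
```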
Slide #14
RTPG Methodology for Microprocessors
Billions of cycles are simulated
Verification coverages are
systematically measured based on a
collection of known events
A subset of tests are selected for post-
silicon validation
[Figure: RTPG flow: biasing and constrained test program templates drive the Random Test Pattern Generator (RTPG); a microprocessor architectural reference model supplies expected results; the tests run on the RTL full-chip model in a logic simulator, and the outcomes are compared (Pass?)]
Slide #15
Test Program
A test program consists of three parts
Initialization
Set up all relevant initial states in registers, flags,
tables, caches, and memory locations
Instruction sequence
The actual instructions to be tested
Expected result
The final states that were changed during the
tests (not necessarily at the end of the tests)
Slide #16
Initialization and Observation
The initialization sequence is important
It allows a test program to be run without the
interference from other test programs
You may also want to add a suffix program to
ensure observability of the results
Move register contents to fixed memory locations
for inspection (like a dump operation)
The initialization and observation parts may
have constraints associated with the actual
program content
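A minimal sketch of this three-part structure plus the observation suffix; the assembly syntax and the fixed dump address 0x1000 are invented for illustration:

```python
DUMP_BASE = 0x1000

initialization = ["Move R2, 5", "Move R3, 7"]    # set all relevant state
instructions   = ["Add R2, R3, R1"]              # the sequence under test
observation    = [f"Store {DUMP_BASE:#x}, R1"]   # dump R1 to a fixed address

expected = {DUMP_BASE: 12}                       # final states that changed

print("\n".join(initialization + instructions + observation))
print("expected:", expected)
```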
Slide #17
RTPG design: biasing structure
From architecture, you need to decide what
dimensions can vary randomly
To decide the structure of biasing
More examples:
Random selection of an instruction subset
Random addition of branch or call instructions
Random selection of operand mode among immediate,
register, or memory
Random selection of operand contents
Random selection among a group of pre-determined
instruction sequences
Random enforcement of exceptions or interrupts
Slide #18
RTPG design: expected results
It is important to decide how and where
expected results should be obtained
For example:
Do or do not have an explicit behavior model
Use or do not use assertions
Can or cannot observe at internal registers
Can or cannot observe at internal caches
Can or cannot observe at internal signal lines
Slide #19
Reference Model of a Processor
ISA: Instruction Set Architecture
An abstract model as viewed by the assembly
programmer
Program status words
Instruction address
Conditions
Key machine states
Registers
32 general purpose registers
Special purpose registers
Memory
4K-sized page blocks
Slide #20
Observability: Consistency Checking
Logical registers may not be the same as
physical registers
Register renaming depends on the implementation
Logical cache may not be the same as physical
cache structure
1-level cache vs. 3-level cache
Memory is the only safe observation point
[Figure: instruction sequences enter the microprocessor; its registers and cache are only potential observation points, while memory is the safe observation point]
Slide #21
RTPG
User supplies test templates for test generation
User supplies biases for randomized tests
RTPG ensures
tests are correct
expected results are computed based on the logical
model (or the higher-level model)
[Figure: the pseudo-random test generator (parse, refine, dispatch, simulate) combines processor features with manually supplied symbolic templates and biasing to produce tests]
Slide #22
An Example (DAC '95)
Add op1 op2 base
op1 is one of the 16 registers, 0-15
op2 is an 8-bit displacement, 0x00-0xFF
base is one of the 16 32-bit base registers,
0-15
What the instruction does
[op1] + [op2+[base]] -> op1
[op1] is a 32-bit signed number
op2+[base] points to a virtual memory address
The content of the memory location is a 32-bit
signed number
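A reference-model sketch of this instruction in Python; the flat register file, the sparse memory dictionary, and the uniform generation policy are modeling assumptions:

```python
import random

rng = random.Random(0)

def to_signed32(x):
    """Interpret a 32-bit pattern as a signed number."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x & 0x80000000 else x

def gen_add(regs, mem):
    """Generate one valid Add and its expected result: [op1]+[op2+[base]] -> op1."""
    op1  = rng.randint(0, 15)                  # one of the 16 registers
    op2  = rng.randint(0x00, 0xFF)             # 8-bit displacement
    base = rng.randint(0, 15)                  # one of the 16 base registers
    addr = (op2 + regs[base]) & 0xFFFFFFFF     # effective virtual address
    mem.setdefault(addr, rng.getrandbits(32))  # initialize the memory word
    regs[op1] = to_signed32(regs[op1] + to_signed32(mem[addr]))
    return f"Add {op1}, {op2:#04x}, {base}", regs[op1]

regs = [to_signed32(rng.getrandbits(32)) for _ in range(16)]
print(gen_add(regs, {}))
```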
Slide #23
Instruction tree to help test generation
[Figure: instruction tree for the Add: nodes for op1, op2, base, and the effective address op2+[base], each annotated with L (length), A (address), and D (data content)]
Slide #24
Use the tree to constrain tests
Slide #25
An actual test case
Add 7, 0100, 9
[Figure: the instruction above shown as syntax, with its semantics: the resulting virtual address and the data content at that address]
Slide #26
Syntax vs. Semantics
You need to define the (syntactic and semantic) model
for each instruction
This serves as the basis for biasing
Then, you need to define the model for a pair or more
instructions
Only if they can interact with the given architecture
For example,
One instruction sets the overflow bit and the next instruction tests the
overflow bit
One instruction sets the jump address and the next instruction performs the
actual jump
To begin with, you can provide a set of instruction
sequences where each is considered as a unit
Then, you provide a model for each sequence to specify its
syntax and semantics
Also define where the biases can be supplied and take effect
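As an illustration of modeling a pair as a unit, here is a hedged sketch for the overflow case above; the opcode names and the 0.8 bias are invented:

```python
import random

rng = random.Random(1)

def gen_overflow_pair():
    """Syntax: an Add followed by a branch-on-overflow.
    Semantics: the Add operands are biased so the overflow bit is usually
    set, making the interaction between the two instructions actually
    exercised."""
    if rng.random() < 0.8:                       # bias toward real overflow
        a, b = 0x7FFFFFFF, rng.randint(1, 100)   # signed 32-bit overflow for sure
    else:
        a, b = rng.getrandbits(16), rng.getrandbits(16)  # overflow unlikely
    overflow_expected = (a + b) > 0x7FFFFFFF
    return [f"Add R1, {a:#x}, {b:#x}", "BranchOverflow target"], overflow_expected

print(gen_overflow_pair())
```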
Slide #27
Unexpected Events
Test programs are generated randomly
May result in unexpected events like exceptions
Test template
Instantiate: t1,t2,t3,t4
Repeat 100 times
Add t1, t2, R1
Div t3, R1, t4
Test program
Initialization
R4 = 12, R5 = -1, R3 = 2
Add R3, 10, R1
Div R4, R1, R4
Add R4, R5, R1
Div R3, R1, R3
<exception: the preceding Add made R1 = 0, so this Div divides by zero>
Slide #28
Unexpected Events Cont.
Unexpected events may be avoided by
constraint specification
If one happens, some action may be
required to
Continue the simulation
Stop and correct the template
Some unexpected events may be useful
for exploring design bugs
Slide #29
Event Handler
Test template
Instantiate: t1,t2,t3,t4
Repeat 100 times
Add t1, t2, R1
Div t3, R1, t4
Test program
Initialization
R4 = 12, R5 = -1, R3 = 2
Add R3, 10, R1
Div R4, R1, R4
Add R4, R5, R1
Div R3, R1, R3
Move R1, 100
Store 1, R1
Add R7, R1, R1
Event: R1 = 0
Action:
Move R1, 100
Store 1, R1
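A sketch of how a generator running against a reference model might apply this event/action mechanism to the Add/Div example; the operand order (destination last) and the placement of the action right before the offending Div are assumptions:

```python
def val(regs, x):
    """An operand is either an immediate or a register name."""
    return x if isinstance(x, int) else regs[x]

def run_with_events(program, regs):
    """Execute on the reference model; when R1 == 0 right before a Div,
    emit the slide's corrective action instead of letting the Div trap."""
    emitted = []
    for op, a, b, dst in program:
        if op == "Div" and val(regs, b) == 0:           # event: divisor == 0
            emitted += ["Move R1, 100", "Store 1, R1"]  # action from the slide
            regs["R1"] = 100                            # model effect of the Move
        if op == "Add":
            regs[dst] = val(regs, a) + val(regs, b)
        elif op == "Div":
            regs[dst] = val(regs, a) // val(regs, b)
        emitted.append(f"{op} {a}, {b}, {dst}")
    return emitted, regs

# The slides' program: the last Div would otherwise divide by zero.
regs = {"R4": 12, "R5": -1, "R3": 2}
prog = [("Add", "R3", 10, "R1"), ("Div", "R4", "R1", "R4"),
        ("Add", "R4", "R5", "R1"), ("Div", "R3", "R1", "R3")]
print(run_with_events(prog, regs))
```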
Slide #30
Basic Principles
Mandatory sequences
Single instruction set; Instruction pairs; 3-
instruction set; Selected sequences (experience
+ new ideas); Random sequences
Coverage Measurement
Statement coverage; Toggle coverage; State
transition coverage;
Assertion coverage
[Figure: error discovery rate: # of errors found per week; when the curve flattens, stop and tape out]
Slide #31
More about coverage
The basic coverage can be measured by line
coverage in the code
A line is actually simulated in the event-driven
simulation, i.e., an event occurs at the line
Toggle coverage
Each signal line can be toggled (between 0 and 1)
Finite state machine (FSM) state coverage
Each state has been reached
Design error coverage
Randomly inject an error
Check to see if the error can be detected
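A minimal sketch of error-injection-based coverage on a toy two-gate netlist; the circuit and the particular gate-substitution error are invented:

```python
import itertools

def good(a, b, c):
    return (a & b) | c            # reference netlist: AND feeding OR

def faulty(a, b, c):
    return (a | b) | c            # injected error: AND substituted by OR

tests = list(itertools.product([0, 1], repeat=3))      # exhaustive here
detected = any(good(*t) != faulty(*t) for t in tests)  # any output mismatch?
print("error detected" if detected else "error escaped")
```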
Slide #32
Design Error Modeling
Slide #33
Quality Measurement
Structure-based logic error models provide a good
way (independent of the function) for measuring
the quality of verification
May implicitly help to verify the functional
correctness of a design
[Figure: error models are injected into one design model, and equivalence is checked between design model 1 and design model 2]
Slide #34
Design Error Models
Logic design error models
Simple structural mutations
Extra inverter,
Gate substitution,
Extra wire,
Extra gate,
Missing gate,
Wrong wire, etc.
Have high correlations with manufacturing fault models
Recent studies show that many realistic design errors
can be captured by targeting these simple error models
Slide #35
Reference
Original design error paper (TCAD 1988)
DATE 1998 best paper (to evaluate various
verification approaches)
Also describes how to inject and simulate
errors using a logic simulator
Slide #36
Error Injection (Gate)
N extra select lines allow injection of 2^N - 1 errors
Can use logic simulator to collect results
May be time-consuming
Inject to sensitive areas only
[Figure: a decoder on the extra inputs drives a MUX's select, choosing between the good gate and its faulty variants]
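A behavioral sketch of that structure; the particular gate substitutions are invented, but the counting matches the slide: N select lines give 2^N - 1 injectable errors plus the good machine:

```python
def mux_gate(select, a, b):
    variants = [
        lambda a, b: a & b,        # select 0: good gate (AND)
        lambda a, b: a | b,        # select 1: OR substitution
        lambda a, b: 1 - (a & b),  # select 2: NAND substitution
        lambda a, b: a ^ b,        # select 3: XOR substitution
    ]
    return variants[select](a, b)

# N = 2 select lines -> 2**2 - 1 = 3 injectable errors on this one gate;
# a logic simulator sweeps `select` to collect results for every error.
for sel in range(4):
    print(sel, [mux_gate(sel, a, b) for a in (0, 1) for b in (0, 1)])
```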
Slide #37
Consistent Behavior: Gate vs. Transistor
[Figure: two pie charts of the share of injected errors detected by verification methods A-D plus redundant errors; method B dominates in both: 71.76% at gate level and 69.1% at transistor level]
A, B, C, D are 4 different verification methods
Slide #38
Divide and Conquer
As designs become extremely complex, fixing
a bug becomes more tedious
If we start RTPG directly on the whole chip,
We would spend a lot of time fixing bugs in some units
Yet some other units would receive less attention
Slide #39
You want to start small
Pick a block of interest
Try to verify this block only
You can inject and simulate 1000 errors
on this block and measure coverage
Then, you do it block by block
Slide #40
Pre-RTPG Simulation: UniSim
A chip is usually divided into 6-7 units
Goal: detect 95% of the design errors at unit level and the
remaining 5% at the full-chip level
UniSim allows an individual unit to be verified independently
of the completion of other units
At this level, verification does not need to be complete, nor
100% correct.
[Figure: RTPG's tests and expected results feed UniSim (a C++ system that emulates the I/O behaviors of the other parts of the chip), which produces new tests and new expected results for the unit under verification in the logic simulator (Pass?)]
Slide #41
In our case
You need to dump the internal results at the
boundary of a block and record them as inputs
and expected outputs for the block
This should be recorded on a cycle-by-cycle basis
[Figure: test programs are supplied at the chip level; inputs are captured at the block boundary and results recorded there as expected outputs]
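A sketch of the capture step in Python; the trace format (one dictionary of signal values per cycle) and the signal names are assumptions:

```python
def capture_block_boundary(full_chip_trace, in_sigs, out_sigs):
    """Record, per cycle, the block's inputs (to drive later) and its
    outputs (to serve as expected results)."""
    captured = []
    for cycle, signals in enumerate(full_chip_trace):
        ins  = {s: signals[s] for s in in_sigs}   # drive these when replaying
        outs = {s: signals[s] for s in out_sigs}  # compare against these
        captured.append((cycle, ins, outs))
    return captured

# Replaying: drive `ins` into the stand-alone block each cycle and flag a
# mismatch whenever the block's outputs differ from the recorded `outs`.
trace = [{"req": 1, "data": 5, "ack": 0}, {"req": 0, "data": 0, "ack": 1}]
print(capture_block_boundary(trace, in_sigs=["req", "data"], out_sigs=["ack"]))
```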
Slide #42
An Example (Taylor & Quinn, DAC '98)
Direct and pseudo random testing
Start recording after all units are integrated
After initial tests to set up the system
[Figure: # of bugs found over time]
Slide #43
Excitation of Bugs
More than three quarters of bugs can be
excited via pseudo-random tests
Direct tests are designed to compensate
misses from pseudo random tests
There are bugs that can be excited by
multiple means
[Figure: bar chart of % of total bugs excited by pseudo-random tests (over 75%), direct tests, and other means]
Slide #44
Observation of Bugs
Assertion checkers are manually placed by designers to monitor design
properties
Very effective in practice
Roughly half of the bugs are found by comparing to the reference model
Register comparison
PC comparison
Memory comparison
Assertion Checker: 25%
Register Compare: 22%
Simulation Hang: 15%
PC Compare: 14%
Memory Compare: 8%
Manual Inspection: 6%
Self-Check Tests: 5%
Cache Coherency: 3%
Saves Check: 2%
(% of total bugs)
Slide #45
Types of Bugs
Recall: Unit level verification has been carried out
Current verification is to catch bugs introduced during the
implementation of the architectural SPEC
Implementation Bugs: 78%
Programming Mistakes: 9%
RTL/Schematic Mismatch: 5%
Architectural Conception: 3%
Other: 5%
(% of total bugs)
Slide #46
8 Bugs Missed into 1st Silicon
(Taylor & Quinn, DAC '98)
2: RTL vs. schematics mismatch
1997 results; logic equivalence checking (EC) may have solved these problems
1: electrical issues
We expect more in the future; no good solution so far
1: insufficient randomization of initial state in RTL
2: logic bugs that should be found by more
sophisticated checkers
1: logic bug that is hard to catch by either pseudo
random tests or direct tests
Something is going to be missed