Introduction to Data Mining
by Tan, Steinbach, Kumar
Classification: Definition
O Given a collection of records (the training set), each record contains a set of attributes; one of the attributes is the class.
O Find a model for the class attribute as a function of the values of the other attributes.
O Goal: previously unseen records should be assigned a class as accurately as possible.
  A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
[Figure: Illustrating the classification task. A learning algorithm performs induction on the training set (Tid 1-10, several attributes, known Class) to build a model; the model is then applied (deduction) to the test set (Tid 11-15), whose Class values are unknown.]
Examples of Classification Task
O Predicting tumor cells as benign or malignant
O Classifying credit card transactions as legitimate or fraudulent
O Classifying secondary structures of protein as alpha-helix, beta-sheet, or random coil
O Categorizing news stories as finance, weather, entertainment, sports, etc.
Classification Techniques
O Decision Tree based Methods
O Rule-based Methods
O Memory based reasoning
O Neural Networks
O Naïve Bayes and Bayesian Belief Networks
O Support Vector Machines
Example of a Decision Tree
[Figure: training data (Tid, Refund, Marital Status, Taxable Income, Cheat) and a decision tree induced from it. Splitting attributes: Refund = Yes leads to NO; Refund = No leads to MarSt; MarSt = Married leads to NO; MarSt = Single, Divorced leads to TaxInc; TaxInc < 80K leads to NO; TaxInc > 80K leads to YES.]
Another Example of Decision Tree
[Figure: a different decision tree that fits the same training data, splitting first on MarSt (Married leads to NO; Single, Divorced leads to Refund and then TaxInc).]
There could be more than one tree that fits the same data!
Decision Tree Classification Task
[Figure: the classification task with a decision tree as the model. A tree induction algorithm learns a decision tree from the training set (Tid 1-10); the tree is then applied (deduction) to the test set (Tid 11-15) to assign class labels.]
Apply Model to Test Data
Start from the root of the tree and follow the branch that matches the test record at each node. For the test record (Refund = No, Marital Status = Married):
O Refund = No: follow the right branch to the MarSt node.
O MarSt = Married: follow the Married branch to a leaf labeled NO.
O Assign Cheat to "No".
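The walkthrough above can be expressed as a few nested tests. Below is a minimal sketch in Python, not part of the original slides; the dictionary key names and the 80K income value are illustrative choices, and the tree structure is the example tree shown earlier.

```python
def classify(record):
    """Follow the example tree: Refund -> MarSt -> TaxInc, returning the Cheat label."""
    if record["Refund"] == "Yes":
        return "No"
    if record["MaritalStatus"] == "Married":
        return "No"
    # Single or Divorced records fall through to the Taxable Income test
    return "No" if record["TaxableIncome"] < 80_000 else "Yes"

test_record = {"Refund": "No", "MaritalStatus": "Married", "TaxableIncome": 80_000}
print(classify(test_record))   # -> "No", matching the walkthrough above
```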
Decision Tree Induction
O Many algorithms:
  - Hunt's Algorithm (one of the earliest)
  - CART
  - ID3, C4.5
  - SLIQ, SPRINT
General Structure of Hunt's Algorithm
O Let Dt be the set of training records that reach a node t.
O General procedure (a sketch in Python follows):
  - If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt.
  - If Dt is an empty set, then t is a leaf node labeled by the default class yd.
  - If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset.
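A compact sketch of this recursive procedure, assuming records are (attribute-dict, label) pairs. The function names and the toy test chooser below are hypothetical illustrations; any splitting criterion (Gini, entropy, etc.) could be plugged in as choose_test.

```python
from collections import Counter

def hunt(records, choose_test, default_label=None):
    """records: list of (attributes_dict, label). Returns a nested dict tree."""
    if not records:                          # Dt is empty -> leaf labeled with the default class
        return {"leaf": default_label}
    labels = [y for _, y in records]
    if len(set(labels)) == 1:                # all records belong to the same class -> leaf
        return {"leaf": labels[0]}
    majority = Counter(labels).most_common(1)[0][0]
    test = choose_test(records)
    if test is None:                         # no useful attribute test left -> majority leaf
        return {"leaf": majority}
    attr, groups = test                      # e.g. ("Refund", [["Yes"], ["No"]])
    children = {}
    for values in groups:                    # recursively apply the procedure to each subset
        subset = [r for r in records if r[0][attr] in values]
        children[tuple(values)] = hunt(subset, choose_test, default_label=majority)
    return {"attr": attr, "children": children}

def split_on_refund(records):
    """Toy test chooser used only for illustration: split on Refund while it still varies."""
    if len({r[0]["Refund"] for r in records}) > 1:
        return "Refund", [["Yes"], ["No"]]
    return None

data = [({"Refund": "Yes"}, "No"), ({"Refund": "No"}, "No"), ({"Refund": "No"}, "Yes")]
print(hunt(data, split_on_refund))
```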
Hunt's Algorithm
[Figure: Hunt's algorithm applied to the Refund / Marital Status / Taxable Income training data. Start with a single leaf predicting the majority class, Don't Cheat. First split on Refund (Yes: Don't Cheat; No: Don't Cheat). Then refine the Refund = No branch by splitting on Marital Status (Married: Don't Cheat; Single, Divorced: test Taxable Income). Finally split on Taxable Income (< 80K: Don't Cheat; >= 80K: Cheat).]
Tree Induction
O Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.
O Issues:
  - Determine how to split the records: How to specify the attribute test condition? How to determine the best split?
  - Determine when to stop splitting.
How to Specify the Test Condition?
O Depends on attribute types: nominal, ordinal, continuous.
O Depends on the number of ways to split: 2-way split, multi-way split.
Splitting Based on Nominal Attributes
O Multi-way split: use as many partitions as there are distinct values.
O Binary split: divides values into two subsets; need to find the optimal partitioning.
  Example (CarType): {Sports, Luxury} vs {Family}, or {Family, Luxury} vs {Sports}.
Splitting Based on Ordinal Attributes
O Multi-way split: use as many partitions as there are distinct values.
O Binary split: divides values into two subsets; need to find the optimal partitioning.
  Example (Size): {Small, Medium} vs {Large}, or {Medium, Large} vs {Small}.
O What about a split such as {Small, Large} vs {Medium}? It does not preserve the order of the ordinal attribute.
Splitting Based on Continuous Attributes
[Figure: two test conditions on Taxable Income: a binary split (Taxable Income > 80K? Yes / No) and a multi-way split into ranges such as [25K, 50K), [50K, 80K), > 80K.]
Tree Induction (recap)
O Next issue: how to determine the best split?
How to Determine the Best Split
[Figure: before splitting, the node contains records of two classes, C0 and C1. Different candidate attribute tests produce child nodes with different class distributions, e.g., mixed nodes such as (C0: 4, C1: 6) and (C0: 1, C1: 3) versus nearly pure nodes such as (C0: 8, C1: 0) and (C0: 0, C1: 1). Which test condition is best?]
O Greedy approach: nodes with a homogeneous class distribution are preferred.
  - C0: 5, C1: 5: non-homogeneous, high degree of impurity.
  - C0: 9, C1: 1: homogeneous, low degree of impurity.
O Need a measure of node impurity.
Measures of Node Impurity
O Gini Index
O Entropy
O Misclassification error
How to Find the Best Split
[Figure: compute the impurity M0 of the parent node before splitting. For each candidate attribute test (A?, B?), compute the impurity of the resulting child nodes (N1-N4) and combine them into a weighted measure (M12 for A, M34 for B). Choose the test with the higher gain, i.e., compare M0 - M12 with M0 - M34.]
Measure of Impurity: GINI
O Gini index for a given node t:
$$GINI(t) = 1 - \sum_{j} [\,p(j \mid t)\,]^2$$
where $p(j \mid t)$ is the relative frequency of class $j$ at node $t$.
O Maximum $(1 - 1/n_c)$ when records are equally distributed among all classes, implying the least interesting information.
O Minimum (0.0) when all records belong to one class, implying the most interesting information.
Examples:
  C1 = 0, C2 = 6: Gini = 0.000
  C1 = 1, C2 = 5: Gini = 0.278
  C1 = 2, C2 = 4: Gini = 0.444
  C1 = 3, C2 = 3: Gini = 0.500
Examples for Computing GINI
$$GINI(t) = 1 - \sum_{j} [\,p(j \mid t)\,]^2$$
O C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1. Gini = 1 − 0² − 1² = 0.000
O C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6. Gini = 1 − (1/6)² − (5/6)² = 0.278
O C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6. Gini = 1 − (2/6)² − (4/6)² = 0.444
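A minimal sketch of the Gini computation from per-class counts; the function name is an illustrative choice. It reproduces the example values above.

```python
def gini(counts):
    """GINI(t) = 1 - sum_j p(j|t)^2, computed from the per-class counts at a node."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

for counts in ([0, 6], [1, 5], [2, 4], [3, 3]):
    print(counts, round(gini(counts), 3))   # 0.0, 0.278, 0.444, 0.5
```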
Splitting Based on GINI
O Used in CART, SLIQ, SPRINT.
O When a node p is split into k partitions (children), the quality of the split is computed as
$$GINI_{split} = \sum_{i=1}^{k} \frac{n_i}{n}\, GINI(i)$$
where $n_i$ = number of records at child $i$ and $n$ = number of records at node $p$.
Binary Attributes: Computing the GINI Index
O Splits into two partitions.
O Effect of weighting the partitions: larger and purer partitions are sought.
Example: parent node with C1 = 6, C2 = 6, Gini = 0.500. Splitting on B? gives N1 (C1 = 5, C2 = 2) and N2 (C1 = 1, C2 = 4):
  Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
  Gini(N2) = 1 − (1/5)² − (4/5)² = 0.320
  Gini(split) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
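A short sketch of the weighted split computation, reusing the counts from the B? example above; the function names are illustrative.

```python
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

def gini_split(children_counts):
    """GINI_split = sum_i (n_i / n) * GINI(i) over the child nodes."""
    n = sum(sum(c) for c in children_counts)
    return sum(sum(c) / n * gini(c) for c in children_counts)

# Split B? from the example: N1 = (C1=5, C2=2), N2 = (C1=1, C2=4); parent Gini was 0.500
print(round(gini_split([[5, 2], [1, 4]]), 3))   # ~0.371
```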
Categorical Attributes: Computing the Gini Index
O For each distinct value, gather the counts for each class in the dataset.
O Use the count matrix to make decisions.
Two-way splits on CarType (find the best partition of values):
  {Sports, Luxury} vs {Family}:  C1: 3 | 1,  C2: 2 | 4,  Gini = 0.400
  {Family, Luxury} vs {Sports}:  C1: 2 | 2,  C2: 1 | 5,  Gini = 0.419
(A multi-way split would instead use one partition per distinct CarType value.)
Continuous Attributes: Computing the Gini Index
O Use binary decisions based on one value.
O Several choices for the splitting value: the number of possible splitting values equals the number of distinct values.
O Each splitting value v has a count matrix associated with it: class counts in each of the partitions, A < v and A ≥ v.
O Simple method to choose the best v: for each v, scan the database to gather the count matrix and compute its Gini index. Computationally inefficient! Repetition of work.
Continuous Attributes: Computing the Gini Index (continued)
O For efficient computation, for each attribute:
  - Sort the attribute on its values.
  - Linearly scan these values, each time updating the count matrix and computing the Gini index.
  - Choose the split position that has the least Gini index.
[Table: Taxable Income values sorted as 60, 70, 75, 85, 90, 95, 100, 120, 125, 220 with Cheat labels No, No, No, Yes, Yes, Yes, No, No, No, No; for each candidate split position the (≤, >) class counts and the resulting Gini index are computed (0.420, 0.400, 0.375, 0.343, 0.417, 0.400, 0.300, 0.343, 0.375, 0.400, 0.420); the minimum Gini of 0.300 is obtained at split position 97.]
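A sketch of the sorted-scan idea, using the (income, label) pairs reconstructed from the training data above. Thresholds are taken as midpoints between consecutive distinct values, so the best split is reported at 97.5 rather than the 97 shown in the table, with the same minimum Gini of 0.300; the function names are illustrative.

```python
from math import inf

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

def best_numeric_split(pairs):
    """pairs: list of (value, class_label). Returns (threshold, gini) of the best binary split."""
    pairs = sorted(pairs)
    labels = sorted({c for _, c in pairs})
    left = {c: 0 for c in labels}            # class counts with value <= threshold
    right = {c: 0 for c in labels}           # class counts with value  > threshold
    for _, c in pairs:
        right[c] += 1
    n, best = len(pairs), (None, inf)
    for i in range(n - 1):
        left[pairs[i][1]] += 1               # move one record across the candidate split
        right[pairs[i][1]] -= 1
        if pairs[i][0] == pairs[i + 1][0]:
            continue                         # only consider positions between distinct values
        threshold = (pairs[i][0] + pairs[i + 1][0]) / 2
        g = (i + 1) / n * gini(list(left.values())) + (n - i - 1) / n * gini(list(right.values()))
        if g < best[1]:
            best = (threshold, g)
    return best

# Sorted Taxable Income values (in thousands) and Cheat labels from the training data
data = list(zip([60, 70, 75, 85, 90, 95, 100, 120, 125, 220],
                ["No", "No", "No", "Yes", "Yes", "Yes", "No", "No", "No", "No"]))
print(best_numeric_split(data))              # -> (97.5, 0.3), the minimum-Gini split
```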
Alternative Splitting Criteria based on INFO
O Entropy at a given node t:
$$Entropy(t) = -\sum_{j} p(j \mid t)\, \log p(j \mid t)$$
O Maximum $(\log n_c)$ when records are equally distributed among all classes, implying the least information.
O Minimum (0.0) when all records belong to one class, implying the most information.
Examples for Computing Entropy
$$Entropy(t) = -\sum_{j} p(j \mid t)\, \log_2 p(j \mid t)$$
O C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1. Entropy = −0 log 0 − 1 log 1 = 0
O C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6. Entropy = −(1/6) log₂(1/6) − (5/6) log₂(5/6) = 0.65
O C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6. Entropy = −(2/6) log₂(2/6) − (4/6) log₂(4/6) = 0.92
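A minimal sketch of the entropy computation from per-class counts, reproducing the example values above; the function name is an illustrative choice.

```python
from math import log2

def entropy(counts):
    """Entropy(t) = -sum_j p(j|t) log2 p(j|t), with 0 * log(0) treated as 0."""
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0) if n else 0.0

for counts in ([0, 6], [1, 5], [2, 4]):
    print(counts, round(entropy(counts), 2))   # 0.0, 0.65, 0.92
```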
Splitting Based on INFO: Information Gain
$$GAIN_{split} = Entropy(p) - \sum_{i=1}^{k} \frac{n_i}{n}\, Entropy(i)$$
where the parent node p is split into k partitions and $n_i$ is the number of records in partition i.
O Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
O Used in ID3 and C4.5.
O Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.
Splitting Based on INFO: Gain Ratio
$$GainRATIO_{split} = \frac{GAIN_{split}}{SplitINFO}, \qquad SplitINFO = -\sum_{i=1}^{k} \frac{n_i}{n}\, \log \frac{n_i}{n}$$
where the parent node p is split into k partitions and $n_i$ is the number of records in partition i.
O Adjusts Information Gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
O Used in C4.5.
O Designed to overcome the disadvantage of Information Gain.
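A short sketch computing both quantities for one split; the example reuses the earlier (6, 6) parent with children (5, 2) and (1, 4) purely for illustration, and the function names are assumptions.

```python
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0) if n else 0.0

def gain_and_gain_ratio(parent_counts, children_counts):
    """Returns (GAIN_split, GainRATIO_split) for a split of the parent into the given children."""
    n = sum(parent_counts)
    weights = [sum(c) / n for c in children_counts]
    gain = entropy(parent_counts) - sum(w * entropy(c) for w, c in zip(weights, children_counts))
    split_info = -sum(w * log2(w) for w in weights if w > 0)
    return gain, (gain / split_info if split_info else 0.0)

# Reusing the earlier binary split of a (6, 6) parent into (5, 2) and (1, 4) children
gain, ratio = gain_and_gain_ratio([6, 6], [[5, 2], [1, 4]])
print(round(gain, 3), round(ratio, 3))   # about 0.196 and 0.200
```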
Splitting Criteria based on Classification Error
O Classification error at a node t:
$$Error(t) = 1 - \max_{i} P(i \mid t)$$
O Maximum $(1 - 1/n_c)$ when records are equally distributed among all classes, implying the least interesting information.
O Minimum (0.0) when all records belong to one class, implying the most interesting information.
Examples for Computing Error
$$Error(t) = 1 - \max_{i} P(i \mid t)$$
O C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1. Error = 1 − max(0, 1) = 0
O C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6. Error = 1 − max(1/6, 5/6) = 1 − 5/6 = 1/6
O C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6. Error = 1 − max(2/6, 4/6) = 1 − 4/6 = 1/3
Misclassification Error vs Gini
Example: parent node with C1 = 7, C2 = 3 (Gini = 0.42, Error = 0.3). Splitting on A? gives N1 (C1 = 3, C2 = 0) and N2 (C1 = 4, C2 = 3):
  Gini(N1) = 0, Gini(N2) = 1 − (4/7)² − (3/7)² = 0.489
  Gini(split) = 3/10 × 0 + 7/10 × 0.489 = 0.343, so Gini improves,
  while the misclassification error stays at (0 + 3)/10 = 0.3.
Tree Induction (recap)
O Remaining issue: determine when to stop splitting.
Stopping Criteria for Tree Induction
O Stop expanding a node when all the records belong to the same class.
O Stop expanding a node when all the records have similar attribute values.
O Early termination (to be discussed later).
Decision Tree Based Classification
O Advantages:
  - Inexpensive to construct.
  - Extremely fast at classifying unknown records.
  - Easy to interpret for small-sized trees.
  - Accuracy is comparable to other classification techniques for many simple data sets.
Example: C4.5
O Simple depth-first construction.
O Uses Information Gain.
O Sorts continuous attributes at each node.
O Needs the entire data to fit in memory.
O Unsuitable for large datasets; needs out-of-core sorting.
Practical Issues of Classification
O Underfitting and Overfitting
O Missing Values
O Costs of Classification
Underfitting and Overfitting (Example)
500 circular and 500 triangular data points.
Circular points: $0.5 \le \sqrt{x_1^2 + x_2^2} \le 1$
Triangular points: $\sqrt{x_1^2 + x_2^2} > 1$ or $\sqrt{x_1^2 + x_2^2} < 0.5$
Underfitting: when model is too simple, both training and test errors are large
Lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels of that region: an insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.
Notes on Overfitting
O Overfitting results in decision trees that are more complex than necessary.
O Training error no longer provides a good estimate of how well the tree will perform on previously unseen records.
O Need new ways of estimating errors.
Estimating Generalization Errors
O Re-substitution errors: error on the training set, e(t).
O Generalization errors: error on testing, e′(t).
O Methods for estimating generalization errors:
  - Optimistic approach: e′(t) = e(t).
  - Pessimistic approach: for each leaf node, e′(t) = e(t) + 0.5; total errors e′(T) = e(T) + N × 0.5, where N is the number of leaf nodes. For a tree with 30 leaf nodes and 10 errors on training (out of 1000 instances): training error = 10/1000 = 1%; generalization error = (10 + 30 × 0.5)/1000 = 2.5%.
  - Reduced error pruning (REP): uses a validation data set to estimate the generalization error.
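The pessimistic estimate is simple arithmetic; a one-line sketch (function name is illustrative) reproducing the 30-leaf example above:

```python
def pessimistic_error(training_errors, n_leaves, n_instances, penalty=0.5):
    """e'(T) = (e(T) + penalty * number_of_leaves) / number_of_instances."""
    return (training_errors + penalty * n_leaves) / n_instances

print(pessimistic_error(10, 30, 1000))   # 0.025, i.e. 2.5%, vs. a 1% training error
```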
Occam's Razor
O Given two models with similar generalization errors, one should prefer the simpler model over the more complex model.
O For complex models, there is a greater chance that the model was fitted accidentally by errors in the data.
O Therefore, one should include model complexity when evaluating a model.
Minimum Description Length (MDL)
[Figure: a data table with attributes X1 ... Xn and known class labels y, a second copy whose labels are unknown, and a candidate decision tree used to encode the labels.]
O Cost(Model, Data) = Cost(Data | Model) + Cost(Model), where cost is the number of bits needed for encoding. Search for the least costly model.
O Cost(Data | Model) encodes the misclassification errors.
O Cost(Model) uses node encoding (number of children) plus splitting condition encoding.
Pre-Pruning (Early Stopping Rule)
O Stop the algorithm before it becomes a fully-grown tree.
O Typical stopping conditions for a node:
  - Stop if all instances belong to the same class.
  - Stop if all the attribute values are the same.
  - Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain).
Post-Pruning
O Grow the decision tree to its entirety.
O Trim the nodes of the decision tree in a bottom-up fashion.
O If the generalization error improves after trimming, replace the sub-tree by a leaf node; the class label of the leaf node is determined from the majority class of instances in the sub-tree.
O Can use MDL for post-pruning.
Example of Post-Pruning
Node counts before splitting: Class = Yes: 20, Class = No: 10.
  Training error (before splitting) = 10/30
  Pessimistic error (before splitting) = (10 + 0.5)/30 = 10.5/30
Splitting on A? gives four children with counts (Yes/No): A1: 8/4, A2: 3/4, A3: 4/1, A4: 5/1.
  Training error (after splitting) = 9/30
  Pessimistic error (after splitting) = (9 + 4 × 0.5)/30 = 11/30
Since the pessimistic error increases, PRUNE the sub-tree!
Examples of Post-Pruning
O Case 1: left child C0: 11, C1: 3; right child C0: 2, C1: 4.
O Case 2: left child C0: 14, C1: 3; right child C0: 2, C1: 2.
O Optimistic error? Don't prune in either case.
O Pessimistic error? Don't prune case 1, prune case 2.
Handling Missing Attribute Values
O Missing values affect decision tree construction in three different ways:
  - They affect how impurity measures are computed.
  - They affect how to distribute an instance with a missing value to child nodes.
  - They affect how a test instance with a missing value is classified.
Computing the Impurity Measure with Missing Values
One training record has a missing Refund value; 3 of the 10 records have Class = Yes.
Before splitting: Entropy(Parent) = −0.3 log₂(0.3) − 0.7 log₂(0.7) = 0.8813
Split on Refund (using the 9 records with a known Refund value):
  Entropy(Refund = Yes) = 0
  Entropy(Refund = No) = −(2/6) log₂(2/6) − (4/6) log₂(4/6) = 0.9183
  Entropy(Children) = 0.3 × 0 + 0.6 × 0.9183 = 0.551
  Gain = 0.9 × (0.8813 − 0.551) = 0.297
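A small sketch of this computation; the 0.3 and 0.6 child weights follow the slide's convention of dividing by all 10 records, and the gain is scaled by the 0.9 fraction of records with a known Refund value.

```python
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0) if n else 0.0

parent = [3, 7]               # Class = Yes: 3, Class = No: 7 (all 10 records)
children = [[0, 3], [2, 4]]   # Refund = Yes and Refund = No (the 9 records with known Refund)
n_total, n_known = 10, 9

weighted = sum(sum(c) / n_total * entropy(c) for c in children)   # 0.3 * 0 + 0.6 * 0.9183
gain = (n_known / n_total) * (entropy(parent) - weighted)         # scaled by the known fraction
print(round(weighted, 3), round(gain, 3))                         # 0.551  0.297
```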
Distribute Instances
[Figure: the record with Tid = 10 has a missing Refund value. Among the records with a known Refund value, Refund = Yes contains (Class = Yes: 0, Class = No: 3) and Refund = No contains (Class = Yes: 2, Class = No: 4).]
O Probability(Refund = Yes) = 3/9, Probability(Refund = No) = 6/9.
O Assign the record to the left child (Refund = Yes) with weight 3/9 and to the right child (Refund = No) with weight 6/9, so the Class = Yes counts become 0 + 3/9 and 2 + 6/9.
Classify Instances
New record: Tid = 11, Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?
O At the MarSt node (reached via Refund = No), the weighted counts are: Married 3.67 (= 3 + 6/9), Single + Divorced 3, total 6.67.
O Probability that Marital Status = Married is 3.67/6.67.
O Probability that Marital Status = {Single, Divorced} is 3/6.67.
The record is therefore propagated down both branches with these weights.
Other Issues
O Data Fragmentation
O Search Strategy
O Expressiveness
O Tree Replication
Data Fragmentation
O The number of instances gets smaller as you traverse down the tree.
O The number of instances at the leaf nodes could be too small to make any statistically significant decision.
Search Strategy
O Finding an optimal decision tree is NP-hard.
O The algorithm presented so far uses a greedy, top-down, recursive partitioning strategy to induce a reasonable solution.
O Other strategies? Bottom-up, bi-directional.
Expressiveness
O Decision trees provide an expressive representation for learning discrete-valued functions, but they do not generalize well to certain types of Boolean functions.
O Not expressive enough for modeling continuous variables, particularly when the test condition involves only a single attribute at a time.
Decision Boundary
[Figure: a two-dimensional data set with continuous attributes x and y; the decision tree partitions the plane with axis-parallel lines at its split thresholds.]
O The border line between two neighboring regions of different classes is known as the decision boundary.
O The decision boundary is parallel to the axes because each test condition involves a single attribute at a time.
Oblique Decision Trees
[Figure: a single split on x + y < 1 separates Class = + from Class = −.]
O The test condition may involve multiple attributes.
O More expressive representation.
O Finding the optimal test condition is computationally expensive.
Tree Replication
[Figure: the same sub-tree appears in more than one branch of the decision tree.]
Model Evaluation
O Metrics for Performance Evaluation: how to evaluate the performance of a model?
O Methods for Performance Evaluation: how to obtain reliable estimates?
O Methods for Model Comparison: how to compare the relative performance among competing models?
Metrics for Performance Evaluation: Confusion Matrix

                          PREDICTED CLASS
                          Class=Yes    Class=No
ACTUAL CLASS  Class=Yes   a (TP)       b (FN)
              Class=No    c (FP)       d (TN)

a: TP (true positive), b: FN (false negative), c: FP (false positive), d: TN (true negative)

Accuracy:
$$Accuracy = \frac{a + d}{a + b + c + d} = \frac{TP + TN}{TP + TN + FP + FN}$$
Limitation of Accuracy
O Consider a 2-class problem: number of Class 0 examples = 9990, number of Class 1 examples = 10.
O If the model predicts everything to be Class 0, accuracy is 9990/10000 = 99.9%.
O Accuracy is misleading because the model does not detect any Class 1 example.
Cost Matrix
O A cost matrix C(i | j) gives the cost of misclassifying a class j example as class i; a model can then be judged by its total cost rather than its accuracy.
Computing the Cost of Classification
Cost matrix: C(+|+) = −1, C(−|+) = 100, C(+|−) = 1, C(−|−) = 0.
Model M1 confusion matrix (actual, predicted): (+,+) = 150, (+,−) = 40, (−,+) = 60, (−,−) = 250.
  Accuracy = 80%, Cost = 150(−1) + 40(100) + 60(1) + 250(0) = 3910
Model M2 confusion matrix: (+,+) = 250, (+,−) = 45, (−,+) = 5, (−,−) = 200.
  Accuracy = 90%, Cost = 250(−1) + 45(100) + 5(1) + 200(0) = 4255
M2 has the higher accuracy, but M1 has the lower cost.
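A short sketch of the cost computation using the matrices reconstructed above; the dictionary layout (keys are (actual, predicted) pairs) and function name are illustrative choices.

```python
def total_cost(confusion, cost):
    """confusion and cost are dicts keyed by (actual, predicted) pairs."""
    return sum(count * cost[cell] for cell, count in confusion.items())

cost = {("+", "+"): -1, ("+", "-"): 100, ("-", "+"): 1, ("-", "-"): 0}
m1 = {("+", "+"): 150, ("+", "-"): 40, ("-", "+"): 60, ("-", "-"): 250}
m2 = {("+", "+"): 250, ("+", "-"): 45, ("-", "+"): 5, ("-", "-"): 200}

for name, cm in (("M1", m1), ("M2", m2)):
    accuracy = (cm[("+", "+")] + cm[("-", "-")]) / sum(cm.values())
    print(name, "accuracy =", accuracy, "cost =", total_cost(cm, cost))
# M1: accuracy 0.8, cost 3910;  M2: accuracy 0.9, cost 4255
```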
Cost vs Accuracy
Count matrix (PREDICTED columns Class=Yes, Class=No; ACTUAL rows Class=Yes, Class=No): a, b / c, d.
Cost matrix with the same layout: p, q / q, p (cost p for correct predictions, cost q for errors).
With N = a + b + c + d:
$$Cost = p(a + d) + q(b + c) = p \cdot N \cdot Accuracy + q \cdot N (1 - Accuracy) = N[\,q - (q - p)\,Accuracy\,]$$
so when every error costs q and every correct prediction costs p, cost is a linear function of accuracy (accuracy is proportional to cost).
Cost-Sensitive Measures
$$Precision\ (p) = \frac{a}{a + c}, \qquad Recall\ (r) = \frac{a}{a + b}, \qquad F\text{-}measure\ (F) = \frac{2rp}{r + p} = \frac{2a}{2a + b + c}$$
O Precision is biased towards C(Yes|Yes) and C(Yes|No).
O Recall is biased towards C(Yes|Yes) and C(No|Yes).
O F-measure is biased towards all except C(No|No).
$$Weighted\ Accuracy = \frac{w_1 a + w_4 d}{w_1 a + w_2 b + w_3 c + w_4 d}$$
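A minimal sketch computing these measures from the confusion-matrix cells; the example counts are illustrative, not taken from the slides.

```python
def metrics(a, b, c, d):
    """a = TP, b = FN, c = FP, d = TN, following the confusion-matrix notation above."""
    precision = a / (a + c)
    recall = a / (a + b)
    f_measure = 2 * recall * precision / (recall + precision)   # equals 2a / (2a + b + c)
    accuracy = (a + d) / (a + b + c + d)
    return precision, recall, f_measure, accuracy

print(metrics(a=40, b=10, c=20, d=930))   # illustrative counts, not taken from the slides
```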
Model Evaluation (recap)
O Next: methods for performance evaluation, i.e., how to obtain reliable estimates.
Methods for Performance Evaluation
O How to obtain a reliable estimate of performance?
O Performance of a model may depend on other factors besides the learning algorithm:
  - Class distribution
  - Cost of misclassification
  - Size of the training and test sets
Learning Curve
O A learning curve shows how accuracy changes with varying sample size.
O Requires a sampling schedule for creating the learning curve (e.g., arithmetic or geometric sampling).
Methods of Estimation
O Holdout: reserve 2/3 for training and 1/3 for testing.
O Random subsampling: repeated holdout.
O Cross validation: partition the data into k disjoint subsets; k-fold: train on k−1 partitions, test on the remaining one; leave-one-out: k = n.
O Stratified sampling: oversampling vs. undersampling.
O Bootstrap: sampling with replacement.
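A sketch of k-fold cross-validation with a plug-in train-and-evaluate function; the function names and the toy majority-label "learner" are hypothetical stand-ins for any classifier.

```python
import random

def k_fold_cv(records, k, train_and_score, seed=0):
    """Partition records into k disjoint folds; train on k-1 folds, test on the held-out one."""
    idx = list(range(len(records)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for i, test_idx in enumerate(folds):
        train = [records[j] for f in folds[:i] + folds[i + 1:] for j in f]
        test = [records[j] for j in test_idx]
        scores.append(train_and_score(train, test))
    return sum(scores) / k

def majority_classifier(train, test):
    """Toy stand-in for a learner: predict the majority label of the training fold."""
    labels = [y for _, y in train]
    guess = max(set(labels), key=labels.count)
    return sum(1 for _, y in test if y == guess) / len(test)

data = [({"x": i}, "pos" if i % 3 else "neg") for i in range(30)]
print(k_fold_cv(data, k=5, train_and_score=majority_classifier))
```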
Model Evaluation (recap)
O Next: methods for model comparison, i.e., how to compare the relative performance among competing models.
ROC (Receiver Operating Characteristic) Curve
O Illustration with a 1-dimensional data set containing 2 classes (positive and negative): any point located at x > t is classified as positive.
O Points on the ROC curve, written as (TP, FP):
  - (0, 0): declare everything to be the negative class.
  - (1, 1): declare everything to be the positive class.
  - (1, 0): ideal.
Using ROC for Model Comparison
O No model consistently outperforms the other: M1 is better for small FPR, M2 is better for large FPR.
O Area under the ROC curve (AUC):
  - Ideal: area = 1.
  - Random guess: area = 0.5.
How to Construct an ROC Curve
O Use a classifier that produces a posterior probability P(+|A) for each test instance A.
O Sort the instances according to P(+|A) in decreasing order.
O Apply a threshold at each unique value of P(+|A).
O Count the number of TP, FP, TN, FN at each threshold:
  - TP rate, TPR = TP/(TP + FN)
  - FP rate, FPR = FP/(FP + TN)
Example:
Threshold >= | TP | FP | TN | FN | TPR | FPR
0.25         | 5  | 5  | 0  | 0  | 1   | 1
0.43         | 4  | 5  | 0  | 1  | 0.8 | 1
0.53         | 4  | 4  | 1  | 1  | 0.8 | 0.8
0.85         | 3  | 1  | 4  | 2  | 0.6 | 0.2
0.93         | 2  | 0  | 5  | 3  | 0.4 | 0
0.95         | 1  | 0  | 5  | 4  | 0.2 | 0
1.00         | 0  | 0  | 5  | 5  | 0   | 0
[Figure: the ROC curve obtained by plotting the (FPR, TPR) pairs from the table.]
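A minimal sketch of the construction; the scores and labels in the example are illustrative and are not the exact instances behind the table above, and the function name is an assumption.

```python
def roc_points(scored):
    """scored: list of (P(+|A), true_label) with labels '+' or '-'. Returns (FPR, TPR) points."""
    pos = sum(1 for _, y in scored if y == "+")
    neg = len(scored) - pos
    points = [(0.0, 0.0)]                                   # threshold above every score
    for t in sorted({p for p, _ in scored}, reverse=True):  # sweep thresholds in decreasing order
        tp = sum(1 for p, y in scored if p >= t and y == "+")
        fp = sum(1 for p, y in scored if p >= t and y == "-")
        points.append((fp / neg, tp / pos))
    return points

# Illustrative scores and labels (not the exact instances behind the table above)
scored = [(0.95, "+"), (0.93, "+"), (0.87, "-"), (0.85, "+"),
          (0.76, "-"), (0.53, "+"), (0.43, "-"), (0.25, "+")]
print(roc_points(scored))
```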
Test of Significance
O Given two models with different accuracies observed on test sets of different sizes, can we say that one model is better than the other?
O How much confidence can we place on the accuracy of each model?
O Can the difference in performance be explained as a result of random fluctuations in the test set?
Confidence Interval for Accuracy
O Each prediction can be regarded as a Bernoulli trial, so the number of correct predictions x has a binomial distribution: $x \sim Bin(N, p)$.
  Example: toss a fair coin 50 times; how many heads would turn up? Expected number of heads = N × p = 50 × 0.5 = 25.
O Given x (the number of correct predictions), or equivalently acc = x/N, and N (the number of test instances), can we predict p (the true accuracy of the model)?
Confidence Interval for Accuracy (continued)
O For large test sets, acc has an approximately normal distribution with mean p and variance p(1 − p)/N:
$$P\!\left( Z_{\alpha/2} < \frac{acc - p}{\sqrt{p(1-p)/N}} < Z_{1-\alpha/2} \right) = 1 - \alpha$$
O Solving for p gives the confidence interval:
$$p = \frac{2 \cdot N \cdot acc + Z_{\alpha/2}^2 \pm Z_{\alpha/2}\sqrt{Z_{\alpha/2}^2 + 4 \cdot N \cdot acc - 4 \cdot N \cdot acc^2}}{2\,(N + Z_{\alpha/2}^2)}$$
Example: consider a model that produces an accuracy of 80% when evaluated on 100 test instances:
O N = 100, acc = 0.8
O Let 1 − α = 0.95 (95% confidence)
O From the probability table, Z_{α/2} = 1.96

N        | 50    | 100   | 500   | 1000  | 5000
p(lower) | 0.670 | 0.711 | 0.763 | 0.774 | 0.789
p(upper) | 0.888 | 0.866 | 0.833 | 0.824 | 0.811
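A short sketch of the interval formula above; it reproduces the table (up to rounding), and the function name is an illustrative choice.

```python
from math import sqrt

def acc_confidence_interval(acc, n, z=1.96):     # z = Z_{alpha/2}; 1.96 gives 95% confidence
    centre = 2 * n * acc + z * z
    spread = z * sqrt(z * z + 4 * n * acc - 4 * n * acc * acc)
    denom = 2 * (n + z * z)
    return (centre - spread) / denom, (centre + spread) / denom

for n in (50, 100, 500, 1000, 5000):
    lo, hi = acc_confidence_interval(0.8, n)
    print(n, round(lo, 3), round(hi, 3))         # reproduces the table above up to rounding
```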
Comparing the Performance of Two Models
O Given two models: M1 is tested on D1 (size n1) with error rate e1; M2 is tested on D2 (size n2) with error rate e2. Assume D1 and D2 are independent.
O If n1 and n2 are sufficiently large, then $e_1 \sim N(\mu_1, \sigma_1)$ and $e_2 \sim N(\mu_2, \sigma_2)$.
O Approximate each variance by
$$\hat{\sigma}_i^2 = \frac{e_i (1 - e_i)}{n_i}$$
Testing the Significance of the Difference
O Let d = e1 − e2; d is approximately normal with mean d_t (the true difference) and variance σ_t².
O Since D1 and D2 are independent, their variances add:
$$\hat{\sigma}_t^2 = \hat{\sigma}_1^2 + \hat{\sigma}_2^2 = \frac{e_1(1 - e_1)}{n_1} + \frac{e_2(1 - e_2)}{n_2}$$
O At the (1 − α) confidence level,
$$d_t = d \pm Z_{\alpha/2}\, \hat{\sigma}_t$$
An Illustrative Example
O Given: M1 with n1 = 30, e1 = 0.15; M2 with n2 = 5000, e2 = 0.25.
O d = |e2 − e1| = 0.1 (2-sided test)
$$\hat{\sigma}_d^2 = \frac{0.15\,(1 - 0.15)}{30} + \frac{0.25\,(1 - 0.25)}{5000} = 0.0043$$
O At the 95% confidence level, Z_{α/2} = 1.96:
$$d_t = 0.100 \pm 1.96 \times \sqrt{0.0043} = 0.100 \pm 0.128$$
O The interval contains 0, so the difference may not be statistically significant.
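A minimal sketch of this significance check, reproducing the numbers in the example above; the function name is an illustrative choice.

```python
from math import sqrt

def difference_interval(e1, n1, e2, n2, z=1.96):
    """Confidence interval for the true difference d_t between two error rates."""
    d = abs(e1 - e2)
    var = e1 * (1 - e1) / n1 + e2 * (1 - e2) / n2   # variances add for independent test sets
    return d - z * sqrt(var), d + z * sqrt(var)

lo, hi = difference_interval(e1=0.15, n1=30, e2=0.25, n2=5000)
print(round(lo, 3), round(hi, 3))   # about (-0.028, 0.228); the interval contains 0
```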
Comparing the Performance of Two Algorithms
O If the models are generated on the same test sets D1, D2, ..., Dk (e.g., via k-fold cross-validation):
  - For each test set, compute d_j = e_{1j} − e_{2j}.
  - d_j has mean d_t and variance σ_t².
O Estimate the variance:
$$\hat{\sigma}_t^2 = \frac{\sum_{j=1}^{k} (d_j - \bar{d})^2}{k\,(k - 1)}$$
O Then
$$d_t = \bar{d} \pm t_{1-\alpha,\,k-1}\, \hat{\sigma}_t$$