
Hierarchical Clustering algorithms

Agglomerative (bottom-up):
  Precondition: Start with each document as a separate cluster.
  Postcondition: Eventually all documents belong to the same cluster.

Divisive (top-down):
  Precondition: Start with all documents belonging to the same cluster.
  Postcondition: Eventually each document forms a cluster of its own.

Does not require the number of clusters k in advance.


Needs a termination/readout condition

Hierarchical Agglomerative Clustering (HAC) Algorithm


Start with all instances in their own cluster.
Until there is only one cluster:
  Among the current clusters, determine the two clusters, ci and cj, that are most similar.
  Replace ci and cj with a single cluster ci ∪ cj.
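A minimal sketch of this loop (the helper names and the user-supplied cluster_sim function are illustrative, not from the slides; naive O(n^3) overall):

```python
# Minimal sketch of naive HAC (assumed helper names).
def hac(instances, cluster_sim):
    """instances: list of items; cluster_sim(a, b): similarity of two clusters (lists of items)."""
    clusters = [[x] for x in instances]          # each instance starts in its own cluster
    merges = []                                  # record of merges (the dendrogram)
    while len(clusters) > 1:
        # find the most similar pair of current clusters
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = cluster_sim(clusters[i], clusters[j])
                if best is None or s > best[0]:
                    best = (s, i, j)
        _, i, j = best
        merged = clusters[i] + clusters[j]       # ci ∪ cj
        merges.append((clusters[i], clusters[j]))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return merges
```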


Dendrogram: Document Example


As clusters agglomerate, docs likely to fall into a hierarchy of topics or concepts.

[Dendrogram: d1 and d2 merge into {d1, d2}; d4 and d5 merge into {d4, d5}, which then joins d3 to form {d3, d4, d5}; finally the two groups merge.]


Key notion: cluster representative


We want a notion of a representative point in a cluster, to represent the location of each cluster.
The representative should be some sort of typical or central point in the cluster, e.g.:
  the point inducing the smallest radius to the docs in the cluster
  the point with the smallest sum of squared distances to the docs in the cluster, etc.
  the point that is the average of all docs in the cluster
The centroid, or center of gravity.
Measure intercluster distances by distances of centroids.
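A small sketch of the centroid idea, assuming documents are rows of a numpy array (names are illustrative):

```python
import numpy as np

def centroid(cluster):
    """Centroid (center of gravity) of a cluster given as an array of doc vectors (rows)."""
    return cluster.mean(axis=0)

def centroid_similarity(c_i, c_j):
    """Inter-cluster similarity measured via cosine similarity of the two centroids."""
    mu_i, mu_j = centroid(c_i), centroid(c_j)
    return float(mu_i @ mu_j / (np.linalg.norm(mu_i) * np.linalg.norm(mu_j)))
```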


Example: n=6, k=3, closest pair of centroids

[Figure: six documents d1 ... d6 in the plane. d1 and d2 merge first (centroid marked after the first step); d4 and d5 merge next (centroid marked after the second step).]



Outliers in centroid computation


Can ignore outliers when computing the centroid.
What is an outlier?
  Lots of statistical definitions, e.g., distance (moment) of a point to the centroid > M × some cluster moment, say M = 10.

[Figure: a cluster with its centroid and one distant outlier.]
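One possible reading of this rule, as a sketch; using the mean distance as the "cluster moment" and the factor M are illustrative assumptions, not from the slides:

```python
import numpy as np

def centroid_ignoring_outliers(cluster, M=10.0):
    """Drop points whose distance to the centroid exceeds M times the mean distance, then recompute."""
    mu = cluster.mean(axis=0)
    dists = np.linalg.norm(cluster - mu, axis=1)
    keep = dists <= M * dists.mean()     # points within M x the cluster's mean "moment"
    return cluster[keep].mean(axis=0)
```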


Closest pair of clusters


Many variants of defining the closest pair of clusters:

Single-link
  Similarity of the closest pair of points (the most cosine-similar pair)

Complete-link
  Similarity of the furthest points (the least cosine-similar pair)

Centroid
  Clusters whose centroids (centers of gravity) are the most cosine-similar

Average-link
  Average cosine similarity between pairs of elements
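A sketch of the four criteria for two clusters of documents, assuming the doc vectors are already L2-normalized so that the dot product equals cosine similarity (names are illustrative; the average-link shown averages over cross-cluster pairs, one of the two options discussed on the group-average slide below):

```python
import numpy as np

def pairwise_cos(ci, cj):
    """All pairwise cosine similarities between docs of ci and docs of cj (unit vectors assumed)."""
    return ci @ cj.T

def single_link(ci, cj):      # similarity of the most similar pair
    return pairwise_cos(ci, cj).max()

def complete_link(ci, cj):    # similarity of the least similar pair
    return pairwise_cos(ci, cj).min()

def centroid_link(ci, cj):    # cosine of the two centroids
    mi, mj = ci.mean(axis=0), cj.mean(axis=0)
    return float(mi @ mj / (np.linalg.norm(mi) * np.linalg.norm(mj)))

def average_link(ci, cj):     # average cosine over all cross-cluster pairs
    return pairwise_cos(ci, cj).mean()
```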

Single Link Agglomerative Clustering


Use maximum similarity of pairs:

    sim(ci, cj) = max_{x ∈ ci, y ∈ cj} sim(x, y)

Can result in straggly (long and thin) clusters due to the chaining effect.

After merging ci and cj, the similarity of the resulting cluster to another cluster, ck, is:

    sim((ci ∪ cj), ck) = max(sim(ci, ck), sim(cj, ck))
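A quick check of this update rule on random unit vectors (a sketch; it reuses the single_link helper sketched above):

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_rows(a):
    return a / np.linalg.norm(a, axis=1, keepdims=True)

ci = unit_rows(rng.random((4, 5)))
cj = unit_rows(rng.random((3, 5)))
ck = unit_rows(rng.random((5, 5)))

merged = np.vstack([ci, cj])                          # ci ∪ cj
lhs = single_link(merged, ck)                         # max similarity over all pairs with ck
rhs = max(single_link(ci, ck), single_link(cj, ck))   # update rule from the slide
assert np.isclose(lhs, rhs)
```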



Single Link Example


Complete Link Agglomerative Clustering


Use minimum similarity of pairs:

    sim(ci, cj) = min_{x ∈ ci, y ∈ cj} sim(x, y)

Makes tighter, spherical clusters that are typically preferable.

After merging ci and cj, the similarity of the resulting cluster to another cluster, ck, is:

    sim((ci ∪ cj), ck) = min(sim(ci, ck), sim(cj, ck))
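To see the chaining-vs-compactness contrast in practice, one can run both linkages with SciPy (a sketch; SciPy's linkage works on distances, so cosine distance is used instead of cosine similarity):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
docs = rng.random((30, 10))                  # toy "document" vectors
dists = pdist(docs, metric='cosine')         # cosine distance = 1 - cosine similarity

for method in ('single', 'complete'):
    Z = linkage(dists, method=method)        # agglomerative clustering
    labels = fcluster(Z, t=3, criterion='maxclust')
    # cluster sizes; single-link often yields one large chained cluster plus stragglers
    print(method, np.bincount(labels)[1:])
```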

Complete Link Example


Computational Complexity
In the first iteration, all HAC methods need to compute the similarity of all pairs of n individual instances, which is O(n²).
In each of the subsequent n-2 merging iterations, compute the distance between the most recently created cluster and all other existing clusters.
In order to maintain an overall O(n²) performance, computing the similarity to each cluster must be done in constant time.
Often O(n³) if done naively, or O(n² log n) if done more cleverly.

Group Average Agglomerative Clustering


Use average similarity across all pairs within the merged cluster to measure the similarity of two clusters:

    sim(ci, cj) = (1 / (|ci ∪ cj| · (|ci ∪ cj| - 1))) · Σ_{x ∈ ci ∪ cj} Σ_{y ∈ ci ∪ cj, y ≠ x} sim(x, y)

Compromise between single and complete link.

Two options:
  averaged across all ordered pairs in the merged cluster
  averaged over all pairs between the two original clusters
No clear difference in efficacy.


Computing Group Average Similarity


Assume cosine similarity and normalized vectors with unit length.
Always maintain the sum of vectors in each cluster:

    s(cj) = Σ_{x ∈ cj} x

Compute the similarity of clusters in constant time:

    sim(ci, cj) = ((s(ci) + s(cj)) · (s(ci) + s(cj)) - (|ci| + |cj|)) / ((|ci| + |cj|) · (|ci| + |cj| - 1))
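A sketch verifying this constant-time identity against the brute-force average on random unit vectors (numpy assumed; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
ci = rng.random((5, 8)); ci /= np.linalg.norm(ci, axis=1, keepdims=True)
cj = rng.random((7, 8)); cj /= np.linalg.norm(cj, axis=1, keepdims=True)

# Constant-time form: uses only the maintained vector sums and cluster sizes.
s = ci.sum(axis=0) + cj.sum(axis=0)
n = len(ci) + len(cj)
fast = (s @ s - n) / (n * (n - 1))

# Brute force: average cosine over all ordered pairs x != y in the merged cluster.
merged = np.vstack([ci, cj])
sims = merged @ merged.T
slow = (sims.sum() - np.trace(sims)) / (n * (n - 1))

assert np.isclose(fast, slow)
```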

Medoid as Cluster Representative


The centroid does not have to be a document.
Medoid: a cluster representative that is one of the documents (e.g., the document closest to the centroid).
One reason this is useful:
  Consider the representative of a large cluster (>1K docs).
  The centroid of this cluster will be a dense vector; the medoid of this cluster will be a sparse vector.
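A sketch of picking the medoid as the document closest to the centroid by cosine (assumes unit-length doc vectors in a numpy array; names are illustrative):

```python
import numpy as np

def medoid(cluster):
    """Return the document in the cluster that is most cosine-similar to the centroid."""
    mu = cluster.mean(axis=0)
    mu /= np.linalg.norm(mu)
    sims = cluster @ mu          # cosine similarity of each (unit) doc to the centroid
    return cluster[np.argmax(sims)]
```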


Efficiency: Using approximations


In the standard algorithm, we must find the closest pair of centroids at each step.
Approximation: instead, find a nearly closest pair.
  Use some data structure that makes this approximation easier to maintain.
  Simplistic example: maintain the closest pair based on distances in the projection onto a random line.
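A sketch of the simplistic example (the projection direction and helper name are illustrative assumptions):

```python
import numpy as np

def nearly_closest_pair(centroids, seed=0):
    """Approximate the closest pair of centroids using their order along one random line."""
    rng = np.random.default_rng(seed)
    line = rng.normal(size=centroids.shape[1])
    line /= np.linalg.norm(line)
    proj = centroids @ line                 # 1-D projection of every centroid
    order = np.argsort(proj)
    gaps = np.diff(proj[order])             # adjacent centroids on the line are candidate pairs
    k = int(np.argmin(gaps))
    return int(order[k]), int(order[k + 1])
```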

Major issue - labeling


After a clustering algorithm finds clusters, how can they be useful to the end user?
Need a pithy label for each cluster.
  In search results, say "Animal" or "Car" in the jaguar example.
  In topic trees (Yahoo!), need navigational cues.
  Often done by hand, a posteriori.


How to Label Clusters


Show titles of typical documents
  Titles are easy to scan.
  Authors create them for quick scanning!
  But you can only show a few titles, which may not fully represent the cluster.

Show words/phrases prominent in cluster
  More likely to fully represent the cluster.
  Use distinguishing words/phrases (differential labeling).

Labeling
Common heuristic: list the 5-10 most frequent terms in the centroid vector.
Drop stop-words; stem.
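A minimal sketch of this heuristic, assuming the centroid is available as a term-to-weight dict and stemming happened upstream (names are illustrative):

```python
def cluster_label(centroid_weights, k=10, stop_words=frozenset()):
    """Pick the k heaviest non-stop-word terms of the centroid as the cluster label."""
    terms = (t for t in centroid_weights if t not in stop_words)
    top = sorted(terms, key=lambda t: centroid_weights[t], reverse=True)[:k]
    return ", ".join(top)

# Example
print(cluster_label({"jaguar": 0.9, "car": 0.7, "engine": 0.5, "the": 0.4},
                    k=3, stop_words={"the"}))    # "jaguar, car, engine"
```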

Differential labeling by frequent terms


Within a collection "Computers", all clusters have the word computer as a frequent term.
Discriminant analysis of centroids.

Perhaps better: distinctive noun phrase



What is a Good Clustering?


Internal criterion: A good clustering will produce high quality clusters in which:
the intra-class (that is, intra-cluster) similarity is high
the inter-class similarity is low

The measured quality of a clustering depends on both the document representation and the similarity measure used

External criteria for clustering quality


Quality is measured by the clustering's ability to discover some or all of the hidden patterns or latent classes in gold-standard data.
Assessing a clustering with respect to ground truth requires labeled data.

Assume documents with C gold-standard classes, while our clustering algorithm produces K clusters ω1, ω2, ..., ωK, each with ni members.

External Evaluation of Cluster Quality


Simple measure: purity, the ratio between the size of the dominant class in cluster ωi and the size of cluster ωi:

    Purity(ωi) = (1 / ni) · max_j (nij),   j ∈ C

Others are entropy of classes in clusters (or mutual information between classes and clusters).

Purity example

[Figure: three clusters drawn from three classes (x, o, ◊). Cluster I: 5 x, 1 o, 0 ◊. Cluster II: 1 x, 4 o, 1 ◊. Cluster III: 2 x, 0 o, 3 ◊.]

Cluster I: Purity = (1/6) × max(5, 1, 0) = 5/6
Cluster II: Purity = (1/6) × max(1, 4, 1) = 4/6
Cluster III: Purity = (1/5) × max(2, 0, 3) = 3/5
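The same numbers, checked with a small sketch (the per-cluster class counts are taken from the example above):

```python
def purity(class_counts):
    """Purity of one cluster given its per-class document counts."""
    return max(class_counts) / sum(class_counts)

for name, counts in [("I", [5, 1, 0]), ("II", [1, 4, 1]), ("III", [2, 0, 3])]:
    print(f"Cluster {name}: purity = {purity(counts):.3f}")   # 0.833, 0.667, 0.600
```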

Rand Index
Number of points                    Same class in ground truth    Different classes in ground truth
Same cluster in clustering          A                             B
Different clusters in clustering    C                             D

Rand index, a symmetric measure:

    RI = (A + D) / (A + B + C + D)

Compare with standard precision and recall:

    P = A / (A + B)        R = A / (A + C)

Rand Index example: RI = 0.68

Number of points                    Same class in ground truth    Different classes in ground truth
Same cluster in clustering          20                            20
Different clusters in clustering    24                            72

RI = (20 + 72) / (20 + 20 + 24 + 72) = 92 / 136 ≈ 0.68
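A small sketch checking the example (A, B, C, D taken from the table above):

```python
def rand_index(A, B, C, D):
    """Rand index from the pair-counting contingency table."""
    return (A + D) / (A + B + C + D)

A, B, C, D = 20, 20, 24, 72
print(rand_index(A, B, C, D))        # ≈ 0.676, i.e. 0.68
print(A / (A + B), A / (A + C))      # pairwise precision 0.5, recall ≈ 0.455
```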

Final word and resources


In clustering, clusters are inferred from the data without human input (unsupervised learning).
However, in practice it's a bit less clear: there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, ...

SKIP WHAT FOLLOWS


Term vs. document space


So far, we clustered docs based on their similarities in term space.
For some applications, e.g., topic analysis for inducing navigation structures, we can dualize:
  use docs as axes
  represent (some) terms as vectors
  proximity based on co-occurrence of terms in docs
  now clustering terms, not docs


Term vs. document space


Cosine computation
Constant for docs in term space.
Grows linearly with corpus size for terms in doc space.

Cluster labeling
Clusters have clean descriptions in terms of noun phrase co-occurrence

Application of term clusters



Multi-lingual docs
E.g., Canadian government docs. Every doc in English and equivalent French.
Must cluster by concepts rather than language

Simplest: pad docs in one language with dictionary equivalents in the other
thus each doc has a representation in both languages

Axes are terms in both languages



Feature selection
Which terms to use as axes for the vector space?
IDF is a form of feature selection.
  It can exaggerate noise, e.g., mis-spellings.
Better to use the highest-weight mid-frequency words: the most discriminating terms.
Pseudo-linguistic heuristics, e.g.:
  drop stop-words
  stemming/lemmatization
  use only nouns/noun phrases

Good clustering should figure out some of these.


Evaluation of clustering
Perhaps the most substantive issue in data mining in general: how do you measure goodness?
Most measures focus on computational efficiency:
  time and space
For application of clustering to search:
  measure retrieval effectiveness


Approaches to evaluating
Anecdotal
Ground truth comparison
Cluster retrieval

Purely quantitative measures


Probability of generating clusters found
Average distance between cluster members

Microeconomic / utility


Anecdotal evaluation
Probably the commonest (and surely the easiest)
I wrote this clustering algorithm and look what it found!

No benchmarks, no comparison possible.
Any clustering algorithm will pick up the easy stuff, like partition by languages.
Generally, unclear scientific value.

Ground truth comparison


Take a union of docs from a taxonomy & cluster
Yahoo!, ODP, newspaper sections

Compare clustering results to baseline


e.g., 80% of the clusters found map cleanly to taxonomy nodes (subjective)

But is it the right answer?


There can be several equally right answers

For the docs given, the static prior taxonomy may be incomplete/wrong in places.
The clustering algorithm may have gotten right what the taxonomy missed.

Microeconomic viewpoint
Anything - including clustering - is only as good as the economic utility it provides.
For clustering: net economic gain produced by an approach (vs. another approach).
Strive for a concrete optimization problem.
Examples:
  recommendation systems
  clock time for interactive search (expensive)

Evaluation example: Cluster retrieval


Ad-hoc retrieval:
  Cluster docs in the returned set.
  Identify the best cluster & only retrieve docs from it.
How do various clustering methods affect the quality of what's retrieved?
Concrete measure of quality:
  Precision as measured by user judgements for these queries (done with TREC queries).

Evaluation
Compare two IR algorithms
1. send query, present ranked results
2. send query, cluster results, present clusters

Experiment was simulated (no users)


Results were clustered into 5 clusters.
Clusters were ranked according to percentage of relevant documents.
Documents within clusters were ranked according to similarity to the query.

Latent Semantic Indexing

Adapted from Lectures by Prabhaker Raghavan, Christopher Manning and Thomas Hoffmann


Todays topic
Latent Semantic Indexing
Term-document matrices are very large.
But the number of topics that people talk about is small (in some sense):
  clothes, movies, politics, ...
Can we represent the term-document space by a lower-dimensional latent space?


Linear Algebra Background


Eigenvalues & Eigenvectors


Eigenvectors (for a square m×m matrix S):

    S v = λ v        (v ≠ 0)

Here v is a (right) eigenvector and λ is the corresponding eigenvalue.

How many eigenvalues are there at most?

    S v = λ v   ⇔   (S - λI) v = 0

This only has a non-zero solution v if det(S - λI) = 0.
That is an m-th order equation in λ, which can have at most m distinct solutions (roots of the characteristic polynomial); the solutions can be complex even though S is real.


Matrix-vector multiplication
    S = [[30, 0, 0],
         [ 0, 20, 0],
         [ 0,  0, 1]]

has eigenvalues 30, 20, 1 with corresponding eigenvectors

    v1 = (1, 0, 0)ᵀ,   v2 = (0, 1, 0)ᵀ,   v3 = (0, 0, 1)ᵀ

On each eigenvector, S acts as a multiple of the identity matrix, but as a different multiple on each.

Any vector, say x = (2, 4, 6)ᵀ, can be viewed as a combination of the eigenvectors:

    x = 2 v1 + 4 v2 + 6 v3

Matrix vector multiplication


Thus a matrix-vector multiplication such as Sx (S, x as in the previous slide) can be rewritten in terms of the eigenvalues/vectors:

    S x = S (2 v1 + 4 v2 + 6 v3)
        = 2 S v1 + 4 S v2 + 6 S v3
        = 2 λ1 v1 + 4 λ2 v2 + 6 λ3 v3
        = 60 v1 + 80 v2 + 6 v3


Even though x is an arbitrary vector, the action of S on x is determined by the eigenvalues/vectors.

Matrix vector multiplication


Observation: the effect of small eigenvalues is small.
If we ignored the smallest eigenvalue (1), then instead of (60, 80, 6) we would get (60, 80, 0).
These vectors are similar: close in terms of cosine similarity (or of Euclidean distance).
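A quick numerical check of this observation (a sketch with numpy, using the matrix and vector from the previous slides):

```python
import numpy as np

S = np.diag([30.0, 20.0, 1.0])
x = np.array([2.0, 4.0, 6.0])

full = S @ x                                  # (60, 80, 6)
truncated = np.diag([30.0, 20.0, 0.0]) @ x    # drop the smallest eigenvalue -> (60, 80, 0)

cos = full @ truncated / (np.linalg.norm(full) * np.linalg.norm(truncated))
print(full, truncated, round(float(cos), 4))  # cosine similarity is close to 1
```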

Eigenvalues & Eigenvectors


For symmetric matrices, eigenvectors for distinct eigenvalues are orthogonal:

    S v1 = λ1 v1,  S v2 = λ2 v2,  λ1 ≠ λ2  ⇒  v1 · v2 = 0

All eigenvalues of a real symmetric matrix are real:

    if det(S - λI) = 0 and S = Sᵀ, then λ ∈ ℝ

All eigenvalues of a positive semi-definite matrix are non-negative:

    if wᵀ S w ≥ 0 for all w ∈ ℝⁿ, then S v = λ v  ⇒  λ ≥ 0

Example
Let

    S = [[2, 1],
         [1, 2]]

Real, symmetric.

Then

    det(S - λI) = det([[2 - λ, 1], [1, 2 - λ]]) = (2 - λ)² - 1 = 0

The eigenvalues are 1 and 3 (non-negative, real).
Plug these values in and solve for the eigenvectors.
The eigenvectors are orthogonal (and real): (1, -1)ᵀ and (1, 1)ᵀ.

Eigen/diagonal Decomposition
Let S be a square matrix with m linearly independent eigenvectors (a "non-defective" matrix).

Theorem: there exists an eigen decomposition

    S = U Λ U⁻¹        (Λ diagonal)

(cf. the matrix diagonalization theorem; the decomposition is unique for distinct eigenvalues)

Columns of U are eigenvectors of S.
Diagonal elements of Λ are eigenvalues of S.

Diagonal decomposition: why/how


Let U have the eigenvectors as columns:

    U = [v1  v2  ...  vn]

Then SU can be written

    S U = S [v1 ... vn] = [λ1 v1 ... λn vn] = [v1 ... vn] · diag(λ1, ..., λn) = U Λ

Thus S U = U Λ, or U⁻¹ S U = Λ.

And S = U Λ U⁻¹.

Diagonal decomposition - example


Recall

    S = [[2, 1],
         [1, 2]];    λ1 = 1, λ2 = 3.

The eigenvectors (1, -1)ᵀ and (1, 1)ᵀ form

    U = [[ 1, 1],
         [-1, 1]]

Inverting, we have

    U⁻¹ = [[1/2, -1/2],
           [1/2,  1/2]]

Recall U U⁻¹ = I.

Then,

    S = U Λ U⁻¹ = [[1, 1], [-1, 1]] · [[1, 0], [0, 3]] · [[1/2, -1/2], [1/2, 1/2]]

Example continued
Let's divide U (and multiply U⁻¹) by √2.

Then,

    S = [[1/√2, 1/√2], [-1/√2, 1/√2]] · [[1, 0], [0, 3]] · [[1/√2, -1/√2], [1/√2, 1/√2]]
              Q                              Λ                    Qᵀ        (Q⁻¹ = Qᵀ)

Why? Stay tuned ...
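A sketch verifying both factorizations numerically for this S (numpy assumed; not part of the original slides):

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, U = np.linalg.eig(S)                        # columns of U are eigenvectors
Lam = np.diag(eigvals)
print(np.allclose(S, U @ Lam @ np.linalg.inv(U)))    # S = U Λ U^{-1}

# For symmetric S, eigh returns orthonormal eigenvectors Q with Q^{-1} = Q^T.
w, Q = np.linalg.eigh(S)
print(np.allclose(S, Q @ np.diag(w) @ Q.T))          # S = Q Λ Q^T
print(np.allclose(Q.T @ Q, np.eye(2)))               # Q is orthogonal
```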



Symmetric Eigen Decomposition


If S is a symmetric matrix:

Theorem: there exists a (unique) eigen decomposition

    S = Q Λ Qᵀ

where Q is orthogonal:
  Q⁻¹ = Qᵀ
  Columns of Q are normalized eigenvectors.
  Columns are orthogonal.
  (everything is real)
