Approximate Frequency Counts over Data Streams
The Problem
Stream
Identify all elements whose current frequency exceeds support threshold s = 0.1%.
Related problem
Stream
Identify all subsets of items whose current frequency exceeds s = 0.1%.
Simple, with a low memory footprint. Output is approximate, but the error is guaranteed not to exceed a user-specified parameter. Can be deployed for streams of singleton items and for streams of variable-sized sets of items.
Overview
Introduction
Frequency counting applications
Problem definition
Algorithm for Frequent Items
Algorithm for Frequent Sets of Items
Experimental results
Summary
Introduction
Motivating examples
Iceberg Query
Perform an aggregate function over an attribute and eliminate those below some threshold.
Association Rules
Iceberg Datacubes
Traffic measurement
Existing algorithms compute exact results and attempt to minimize the number of data passes (the best algorithms take two passes).
Problems when adapted to streams: only one pass is allowed; results are expected to be available with short response times; and they fail to provide any a priori guarantee on the quality of their output.
Why Streams?
Streams vs. stored data: the volume of a stream over its lifetime can be huge, and queries over streams require timely answers, so response times need to be small.
Iceberg Queries
Identify aggregates that exceed a user-specified threshold r. One published algorithm computes iceberg queries efficiently using repeated hashing over multiple passes.* The basic idea: in the first pass, a set of counters is maintained, and each incoming item is hashed to one of the counters, which is incremented. These counters are then compressed to a bitmap, with a 1 denoting a large counter value. In the second pass, exact frequencies are maintained only for those elements that hash to a counter whose bitmap value is 1. This algorithm is difficult to adapt to streams because it requires two passes.
* M. Fang, N. Shivakumar, H. Garcia-Molina, R. Motwani, and J. Ullman. Computing iceberg queries efficiently. In Proc. of 24th Intl. Conf. on Very Large Data Bases, pages 299-310, 1998.
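To make the two-pass idea concrete, here is a minimal Python sketch of the coarse-count/bitmap scheme described above; the counter-array size, hash choice, and function names are my own assumptions, not the authors' implementation.

from collections import defaultdict

def iceberg_query(scan, threshold, num_counters=1 << 20):
    # Pass 1: hash every item into a small array of coarse counters.
    counters = [0] * num_counters
    for item in scan():
        counters[hash(item) % num_counters] += 1
    # Compress the counters into a bitmap: 1 marks a large counter value.
    heavy = [c >= threshold for c in counters]
    # Pass 2: exact counts only for items that hash to a heavy counter.
    exact = defaultdict(int)
    for item in scan():
        if heavy[hash(item) % num_counters]:
            exact[item] += 1
    return {item: f for item, f in exact.items() if f >= threshold}

# scan must re-deliver the data, e.g. iceberg_query(lambda: open('items.txt'), 1000)
# -- exactly the second pass that a one-pass stream setting does not allow.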
Association Rules
Definitions
Transaction: a subset of items drawn from I, the universe of all items. An itemset X ⊆ I has support s if X occurs as a subset in at least a fraction s of all transactions. Association rules over a set of transactions are of the form X ⇒ Y, where X and Y are subsets of I such that X ∩ Y = ∅ and X ∪ Y has support exceeding a user-specified threshold s. The confidence of a rule X ⇒ Y is the value support(X ∪ Y) / support(X).
For support = 50% and confidence = 50%, we have the following rules: 1 ⇒ 3 with 50% support and 66% confidence, and 3 ⇒ 1 with 50% support and 100% confidence.
For the rule 1 ⇒ 3: support = support({1, 3}) = 50%; confidence = support({1, 3}) / support({1}) = 66%.
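The transaction table behind this example was lost in extraction; the following Python sketch uses a hypothetical four-transaction dataset chosen to reproduce the same numbers.

transactions = [{1, 3}, {1, 3}, {1, 2}, {2}]   # hypothetical data

def support(itemset):
    # Fraction of transactions that contain the itemset as a subset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(X, Y):
    return support(X | Y) / support(X)

print(support({1, 3}))       # 0.5   -> {1, 3} has 50% support
print(confidence({1}, {3}))  # 0.67  -> rule 1 => 3 with 66% confidence
print(confidence({3}, {1}))  # 1.0   -> rule 3 => 1 with 100% confidence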
Toivonen's algorithm
Based on sampling the data stream: in the first pass, frequencies are computed for a sample of the stream, and in the second pass the validity of these items is determined.
It can be adapted to data streams. Problems: false negatives occur because the error in the frequency counts is two-sided, and for small values of ε the number of samples required is enormous, ~1/ε (100 million samples).
Flow: a sequence of transport-layer packets that share the same source and destination addresses. Estan and Verghese proposed an algorithm for identifying flows that exceed a certain threshold. Their algorithm is a combination of repeated hashing and sampling, similar to the algorithms for iceberg queries. The algorithm presented in this paper is directly applicable to the problem of network flow identification, and it beats their algorithm in terms of space requirements.
Problem definition
Problem Definition
The algorithm accepts two user-specified parameters: a support threshold s ∈ (0,1) and an error parameter ε ∈ (0,1), with ε ≪ s. N: length of the stream (i.e., the number of tuples seen so far). Itemset: a set of items. We write item(set) to mean an item or an itemset. At any point in time, the algorithm can be asked to produce a list of item(set)s along with their estimated frequencies.
Approximation guarantees
All item(set)s whose true frequency exceeds sN are output. There are no false negatives.
No item(set) whose true frequency is less than (s − ε)N is output.
Estimated frequencies are less than the true frequencies by at most εN.
Input Example
s = 0.1%. As a rule of thumb, ε should be set to one-tenth or one-twentieth of s, so ε = 0.01%. By property 1, ALL elements with frequency exceeding 0.1% will be output. By property 2, NO element with frequency below 0.09% will be output. Elements with frequency between 0.09% and 0.1% may or may not be output; those that make their way into the output are false positives. By property 3, all estimated frequencies are less than the true frequencies by at most 0.01%.
An algorithm maintains an ε-deficient synopsis if its output satisfies the aforementioned properties.
Goal: devise algorithms that maintain an ε-deficient synopsis using as little main memory as possible.
Sticky Sampling
[Figure: a stream of elements; counters are created by sampling elements from the stream]
Notations
Data structure S: a set of entries of the form (e, f), where f estimates the frequency of an element e. r: the sampling rate. Sampling an element with rate r means we select the element with probability 1/r.
Initially S is empty and r = 1. For each incoming element e:
  if e exists in S, increment the corresponding f;
  else sample the element with rate r: if sampled, add the entry (e, 1) to S; otherwise ignore it.
Let t = (1/ε) log(s⁻¹ δ⁻¹), where δ is the probability of failure. The first 2t elements are sampled at rate r = 1, the next 2t elements at rate r = 2, the next 4t elements at rate r = 4, and so on.
When the rate changes, for each entry in S we toss an unbiased coin repeatedly, diminishing f by one for every unsuccessful toss, until a toss succeeds; the number of unsuccessful coin tosses follows a geometric distribution. Effectively, after each rate change S is transformed to exactly the state it would have been in if the new rate had been used from the beginning. When a user requests a list of items with threshold s, the output is those entries in S where f ≥ (s − ε)N.
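Putting the pieces of Sticky Sampling together, here is a minimal Python sketch; the class and method names are mine, and δ is the failure probability from the analysis below.

import math, random

class StickySampling:
    def __init__(self, s, eps, delta):
        self.s, self.eps, self.N = s, eps, 0
        self.t = (1.0 / eps) * math.log(1.0 / (s * delta))
        self.r = 1                      # current sampling rate
        self.next_change = 2 * self.t   # first 2t elements sampled at rate 1
        self.S = {}                     # element -> estimated frequency f

    def _adjust(self):
        # After a rate change: for each entry, diminish f once per unsuccessful
        # unbiased coin toss (a geometric number of tosses); drop entries at 0.
        for e in list(self.S):
            while random.random() < 0.5:
                self.S[e] -= 1
                if self.S[e] == 0:
                    del self.S[e]
                    break

    def process(self, e):
        self.N += 1
        if self.N > self.next_change:   # rate doubles: 2t at r=1, 2t at r=2, 4t at r=4, ...
            self.r *= 2
            self.next_change *= 2
            self._adjust()
        if e in self.S:
            self.S[e] += 1
        elif random.random() < 1.0 / self.r:   # sample with probability 1/r
            self.S[e] = 1

    def output(self):
        return {e: f for e, f in self.S.items() if f >= (self.s - self.eps) * self.N}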
Theorem 1
Sticky Sampling computes an ε-deficient synopsis with probability at least 1 − δ using at most (2/ε) log(s⁻¹ δ⁻¹) expected entries.
Theorem 1 - proof
The first 2t elements find their way into S. When r ≥ 2, N = rt + rt′ for some t′ ∈ [1, t), which implies 1/r ≥ t/N. The error in the frequency of an element e corresponds to a sequence of unsuccessful coin tosses during the first few occurrences of e. The probability that this sequence is longer than εN is at most (1 − 1/r)^(εN) < (1 − t/N)^(εN) < e^(−εt). The number of elements with frequency exceeding s is no more than 1/s, so the probability that the estimate for any of them is deficient by εN is at most e^(−εt)/s.
The probability of failure should be at most δ. This yields e^(−εt)/s ≤ δ, i.e., t ≥ (1/ε) log(s⁻¹ δ⁻¹). Since the space requirement is 2t entries, the space bound follows.
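Spelling out that last step as a derivation:

\[
\frac{e^{-\varepsilon t}}{s} \le \delta
\;\Longleftrightarrow\;
e^{-\varepsilon t} \le s\,\delta
\;\Longleftrightarrow\;
t \ge \frac{1}{\varepsilon}\,\ln\frac{1}{s\,\delta}
  = \frac{1}{\varepsilon}\,\log\!\left(s^{-1}\delta^{-1}\right).
\]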
The algorithm is called Sticky Sampling because S sweeps over the stream like a magnet, attracting all elements that already have an entry in S. The space complexity is independent of N. The idea of maintaining samples was first presented by Gibbons and Matias, who used it to solve the top-k problem. This algorithm differs in that the sampling rate r increases logarithmically so as to produce ALL items with frequency exceeding s, not just the top k.
Lossy Counting
[Figure: the stream divided into buckets 1, 2, 3]
Divide the stream into buckets. Keep exact counters for items within the buckets. Prune entries at bucket boundaries.
A deterministic algorithm that computes frequency counts over a stream of single-item transactions, satisfying the guarantees outlined above using at most (1/ε) log(εN) space, where N denotes the current length of the stream. The user specifies two parameters: support s and error ε.
Definitions
The incoming stream is conceptually divided into buckets of width w = ⌈1/ε⌉. Buckets are labeled with bucket ids, starting from 1. Denote the current bucket id by b_current, whose value is ⌈N/w⌉. Denote by f_e the true frequency of an element e in the stream seen so far. The data structure D is a set of entries of the form (e, f, Δ).
The algorithm
Initially D is empty. On receiving an element e: if e exists in D, increment its frequency f by 1; else create a new entry (e, 1, b_current − 1). At a bucket boundary, prune D by the following rule: (e, f, Δ) is deleted if f + Δ ≤ b_current. When the user requests a list of items with threshold s, output those entries in D where f ≥ (s − ε)N.
For an entry (e, f, Δ): f represents the exact frequency count of e ever since it was inserted into D. Δ is the maximum number of times e could have occurred in the first b_current − 1 buckets (this value is exactly b_current − 1 at insertion time). Once an entry is inserted into D, its Δ value remains unchanged.
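A minimal Python sketch of Lossy Counting for single-item streams, following the rules above (names are mine, not the paper's code):

import math

class LossyCounting:
    def __init__(self, s, eps):
        self.s, self.eps = s, eps
        self.w = math.ceil(1.0 / eps)   # bucket width
        self.N = 0
        self.D = {}                     # e -> (f, delta)

    def process(self, e):
        self.N += 1
        b_current = math.ceil(self.N / self.w)
        if e in self.D:
            f, delta = self.D[e]
            self.D[e] = (f + 1, delta)
        else:
            self.D[e] = (1, b_current - 1)
        if self.N % self.w == 0:        # bucket boundary: apply the pruning rule
            self.D = {e: (f, d) for e, (f, d) in self.D.items()
                      if f + d > b_current}

    def output(self):
        return {e: f for e, (f, d) in self.D.items()
                if f >= (self.s - self.eps) * self.N}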
[Figure: worked example — D is empty; the first bucket is processed; pruning occurs and the next bucket is processed]
Lemma 1
Whenever deletions occur, b_current ≤ εN.
Lemma 2
Whenever an entry (e, f, Δ) gets deleted, f_e ≤ b_current.
Proof by induction. Base case: b_current = 1. An entry (e, f, Δ) is deleted only if f ≤ 1; thus f_e ≤ b_current, since f_e = f. Induction step: consider an entry (e, f, Δ) that gets deleted for some b_current > 1. This entry was inserted when bucket Δ + 1 was being processed. An earlier entry for e could have been deleted as late as the time when bucket Δ became full. By the induction hypothesis, the true frequency of e up to that point was no more than Δ. Since f counts the occurrences of e from the moment it was inserted, f_e ≤ f + Δ; combined with the deletion rule f + Δ ≤ b_current, this gives f_e ≤ b_current.
Lemma 3
If e does not appear in D, then f_e ≤ εN.
Proof: If the lemma is true for an element e whenever it gets deleted, it is true for all other N as well. From Lemmas 1 and 2 we infer that f_e ≤ εN whenever e gets deleted.
Lemma 4
If (e, f, Δ) ∈ D, then f ≤ f_e ≤ f + εN.
Proof: If Δ = 0, then f = f_e. Otherwise, e was possibly deleted some time during the first Δ buckets. From Lemma 2, f_e ≤ f + Δ, and Δ ≤ b_current − 1 ≤ εN. Conclusion: f ≤ f_e ≤ f + εN.
Lemma 3 shows that all elements whose true frequency exceeds εN have entries in D. Lemma 4 shows that the estimated frequencies of all such elements are accurate to within εN. Hence D correctly maintains an ε-deficient synopsis.
Theorem 2
Lossy Counting computes an ε-deficient synopsis using at most (1/ε) log(εN) entries.
Theorem 2 - proof
Let B = b_current. Let d_i denote the number of entries in D whose bucket id is B − i + 1, for i ∈ [1, B]. An element e counted by d_i must occur at least i times in buckets B − i + 1 through B; otherwise it would have been deleted. This gives the constraint
Σ_{i=1..j} i·d_i ≤ j·w   for j = 1, 2, ..., B.   (1)
Theorem 2 - proof cont
The following inequality can be proved by induction from (1):
Σ_{i=1..j} d_i ≤ Σ_{i=1..j} w/i   for j = 1, 2, ..., B.
Since |D| = Σ_{i=1..B} d_i, the above inequality gives |D| ≤ Σ_{i=1..B} w/i ≤ (1/ε) log B = (1/ε) log(εN). ∎
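The final bound follows from the harmonic series:

\[
|D| \;=\; \sum_{i=1}^{B} d_i \;\le\; \sum_{i=1}^{B} \frac{w}{i}
\;=\; w\,H_B \;\le\; w\,(1+\ln B),
\qquad B = \lceil N/w \rceil \approx \varepsilon N,
\]
which is \(O\!\big(\tfrac{1}{\varepsilon}\log(\varepsilon N)\big)\).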
[Figure: number of entries vs. stream length N for both algorithms]
Kinks in the curve for Sticky Sampling correspond to re-sampling. Kinks in the curve for Lossy Counting correspond to bucket boundaries.
Number of entries (SS = Sticky Sampling, LC = Lossy Counting), with ε = 0.1s:

                    ε = 0.1%    ε = 0.05%   ε = 0.01%   ε = 0.005%
                    s = 1.0%    s = 0.5%    s = 0.1%    s = 0.05%
SS, worst case        27K         58K         322K        672K
LC, worst case         9K         17K          69K        124K
SS, Zipfian            6K         11K          37K         62K
LC, Zipfian           419         709           2K          4K
SS, all unique        27K         58K         322K        672K
LC, all unique         1K          2K          10K         20K
Lossy Counting is superior by a large factor. Sticky Sampling performs worse because of its tendency to remember every unique element that gets sampled; Lossy Counting is good at pruning low-frequency elements quickly.
[KPS02]: In the first pass, the algorithm maintains 1/ε elements with their frequencies. If a counter exists for an element, it is incremented; if there is a free counter, the element is inserted; otherwise all existing counters are decremented by one. It can be used to maintain an ε-deficient synopsis with exactly 1/ε space. If the input stream is Zipfian, Lossy Counting takes less than 1/ε space: for ε = 0.01%, roughly 2000 entries, about 20% of 1/ε.
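A short sketch of the counter scheme just described (my rendering of the [KPS02] idea):

def kps_counters(stream, eps):
    k = int(1 / eps)                # budget of 1/eps counters
    counters = {}
    for e in stream:
        if e in counters:
            counters[e] += 1        # a counter exists: increment it
        elif len(counters) < k:
            counters[e] = 1         # a free counter: insert the element
        else:                       # no free counter: decrement all, drop zeros
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters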
Frequent Sets of Items
Stream
Identify all subsets of items whose current frequency exceeds s = 0.1%.
Input: a stream of transactions, where each transaction is a set of items from I. N: length of the stream. The user specifies two parameters: support s and error ε. Challenges: handling variable-sized transactions, and avoiding explicit enumeration of all subsets of any transaction.
Notations
Data structure D: a set of entries of the form (set, f, Δ). Transactions are divided into buckets; w = ⌈1/ε⌉ is the number of transactions in each bucket, and b_current is the current bucket id. Transactions are not processed one by one: main memory is filled with as many transactions as possible, and processing is done on a batch of transactions. β denotes the number of buckets in main memory in the current batch being processed.
Update D
UPDATE_SET: for each entry (set, f, Δ) ∈ D, update f by counting the occurrences of set in the current batch. If the updated entry satisfies f + Δ ≤ b_current, delete the entry.
NEW_SET: if a set set has frequency f ≥ β in the current batch and does not occur in D, create a new entry (set, f, b_current − β).
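A naive Python sketch of one batch step. For clarity it enumerates subsets explicitly, up to a hypothetical max_size cap, which is exactly what the SetGen module described later is designed to avoid; helper names are mine.

from itertools import combinations

def process_batch(D, batch, b_current, beta, max_size=3):
    def batch_count(itemset):
        return sum(itemset <= frozenset(t) for t in batch)

    # UPDATE_SET: refresh every existing entry, then apply the deletion rule.
    for itemset in list(D):
        f, delta = D[itemset]
        f += batch_count(itemset)
        if f + delta <= b_current:
            del D[itemset]          # f + delta <= b_current: delete the entry
        else:
            D[itemset] = (f, delta)

    # NEW_SET: count subsets in the batch; insert those with frequency >= beta.
    counts = {}
    for t in batch:
        items = sorted(t)
        for size in range(1, min(len(items), max_size) + 1):
            for sub in combinations(items, size):
                key = frozenset(sub)
                counts[key] = counts.get(key, 0) + 1
    for itemset, f in counts.items():
        if f >= beta and itemset not in D:
            D[itemset] = (f, b_current - beta)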
Algorithm facts
If f_set ≥ εN, the set has an entry in D. If (set, f, Δ) ∈ D, then the true frequency f_set satisfies the inequality f ≤ f_set ≤ f + Δ. When a user requests a list of itemsets with threshold s, output those entries in D where f ≥ (s − ε)N. β needs to be a large number: any subset of I that occurs β + 1 times or more contributes an entry to D.
Three modules
TRIE
maintains the data structure D
SUBSET-GEN
operates on the current batch of transactions
BUFFER
repeatedly reads in batches of transactions into available main memory
Module 1 - Buffer
[Figure: BUFFER holding buckets 1 through 6 in main memory]
Reads a batch of transactions. Transactions are laid out one after the other in a big array, and a bitmap is used to remember transaction boundaries. After reading in a batch, BUFFER sorts each transaction by its item-ids.
Module 2 - TRIE
[Figure: an example trie; nodes are labeled with item-ids]
Nodes are labeled {item-id, f, Δ, level}. The children of any node are ordered by their item-ids, and the root nodes are also ordered by their item-ids. A node represents the itemset consisting of the item-ids in that node and all its ancestors. TRIE is maintained as an array of entries of the form {item-id, f, Δ, level}, in pre-order of the trees; this is equivalent to a lexicographic ordering of the subsets it encodes. There are no pointers: the levels compactly encode the underlying tree structure.
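A tiny sketch of the pointer-free encoding: the trie is a pre-order array of (item-id, f, Δ, level) entries, and the levels alone recover each node's itemset (the example data is hypothetical, not taken from the figure):

# Pre-order array encoding of a two-root trie (hypothetical values).
trie = [
    (30, 40, 0, 0),   # root node: itemset {30}
    (31, 29, 0, 1),   #   child:   itemset {30, 31}
    (32, 45, 0, 1),   #   child:   itemset {30, 32}
    (40, 50, 0, 0),   # root node: itemset {40}
]

def itemsets(trie):
    path = []
    for item_id, f, delta, level in trie:
        path = path[:level] + [item_id]   # ancestors = current path prefix
        yield sorted(path), f, delta

for s, f, d in itemsets(trie):
    print(s, f, d)   # [30] 40 0, [30, 31] 29 0, [30, 32] 45 0, [40] 50 0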
Module 3 - SetGen
[Figure: SetGen operating on the current batch of transactions in BUFFER]
If a subset S does not make its way into TRIE after application of both UPDATE_SET and NEW_SET, then no supersets of S need be considered.
Overall Algorithm
[Figure: overall data flow — BUFFER feeds SUBSET-GEN, which combines with the current TRIE to produce a new TRIE]
Efficient Implementations
Buffer
If item-ids are successive integers from 1 through |I|, and I is small enough (less than 1 million): maintain exact frequency counts for singleton sets, prune away those item-ids whose frequency is less than εN, and then sort the transactions. If |I| = 10⁵, the array size is 0.4 MB.
Efficient Implementations
TRIE
Take advantage of the fact that the sets produced by SetGen arrive in lexicographic order. Maintain TRIE as a set of fairly large chunks of memory instead of one huge array. Instead of modifying the original TRIE, create a new TRIE; chunks from the old TRIE are freed as soon as they are no longer required. By the time SetGen finishes, the chunks of the original TRIE have been discarded.
Efficient Implementations
SetGen
SetGen employs a priority queue called Heap, which initially contains pointers to the smallest item-ids of all transactions in BUFFER. Duplicate members are maintained together and constitute a single entry in the Heap: all such pointers are chained together. The space is derived from BUFFER by replacing item-ids with pointers.
Efficient Implementations
SetGen cont
[Figure: Heap built over the item-ids of the buffered transactions]
Efficient Implementations
SetGen cont
Repeatedly process the smallest item-id in Heap to generate singleton sets. If a singleton belongs to TRIE after UPDATE_SET and NEW_SET, try to generate the next set by extending the current singleton set. This is done by invoking SetGen recursively with a new Heap created out of the successors of the pointers to the item-ids just processed and removed. When the recursive call returns, the smallest entry in Heap is removed and all successors of the currently smallest item-id are added.
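A simplified recursive sketch of the generation order and superset pruning; the Heap/pointer machinery is omitted, and survives() stands for "the set remains in TRIE after UPDATE_SET and NEW_SET" and is assumed to be supplied by the caller.

def setgen(batch, survives, prefix=(), last_item=None):
    # Candidate extensions: item-ids greater than the last item of the prefix.
    candidates = sorted({i for t in batch for i in t
                         if last_item is None or i > last_item})
    for item in candidates:
        itemset = prefix + (item,)
        # Only transactions containing the set can support its supersets.
        narrowed = [t for t in batch if set(itemset) <= set(t)]
        if narrowed and survives(frozenset(itemset)):
            # Recurse only if the set made it into TRIE (superset pruning).
            setgen(narrowed, survives, itemset, item)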
Efficient Implementations
SetGen cont
[Figure: continuation of the Heap example after processing an item-id]
BUFFER scans the incoming stream by memory-mapping the input file, and uses standard qsort to sort transactions. Threading SetGen and BUFFER does not help because SetGen is significantly slower. The rate at which tries are scanned is much smaller than the rate at which sequential disk I/O can be done, so it is possible to maintain TRIE on disk without loss in performance.
Because the size of TRIE is not limited by main memory, this algorithm can function with a small amount of main memory. Since most of the available main memory can be devoted to BUFFER, the algorithm can handle smaller values of ε than other algorithms can.
There is no candidate generation phase. The compact disk-based trie is novel. The algorithm is able to compute frequent itemsets under low-memory conditions and to handle smaller values of the support threshold than previously possible.
Experimental results
Experimental Results
IBM synthetic dataset T10.I4.1000K
N = 1 million, average transaction size = 10, input size = 49 MB
What is studied
Parameters studied: support threshold s, stream length N, BUFFER size B, and running time t, with ε set to 0.1s.
[Figures: running time in seconds vs. support threshold s for BUFFER sizes B = 4, 16, 28, 40 MB; and running time in seconds vs. BUFFER size in MB for several values of s]
IBM test dataset T10.I4.1000K. Kinks occur due to the TRIE optimization on the last batch.
[Figure: running time vs. stream length for s = 0.002 and s = 0.004]
Running time is linearly proportional to the length of the stream. The curve flattens at the end because processing the last batch is faster.
[Table: time and memory usage across six configurations]
Time     99 s    25 s    14 s    13 s    13 s    14 s
Memory   82 MB   53 MB   48 MB   48 MB   48 MB   48 MB
Memory   12 MB   10 MB   7 MB    6 MB    5 MB    5 MB
Memory   45 MB   45 MB   45 MB   45 MB   45 MB   45 MB
Summary
A novel algorithm for computing approximate frequency counts over data streams.
Summary
The algorithms require a provably small main-memory footprint. Each of the motivating examples can now be solved over streaming data. They handle smaller values of the support threshold than previously possible and remain practical in environments with moderate main memory.
Summary cont
The algorithms give an a priori error guarantee and work for variable-sized transactions. With the optimized implementation for frequent itemsets, for the datasets tested, the algorithm runs in one pass and produces exact results, beating previous algorithms in terms of time.
Questions?
More questions/comments can be sent to michal.spivak@sun.com