Universal Lossless Compression of Erased Symbols


5563

Jiming Yu and Sergio Verdú, Fellow, IEEE

Abstract: A source $\mathbf{X}$ goes through an erasure channel whose output is $\mathbf{Z}$. The goal is to losslessly compress $\mathbf{X}$ when the compressor knows $\mathbf{X}$ and $\mathbf{Z}$ and the decompressor knows $\mathbf{Z}$. We propose a universal algorithm based on context-tree weighting (CTW), parameterized by a memory-length parameter $\ell$. We show that if the erasure channel is stationary and memoryless, and $\mathbf{X}$ is stationary and ergodic, then the proposed algorithm achieves a compression rate of $H(X_0 \mid X_{-\ell}^{-1}, X_1^{\ell})$ bits per erasure (the conditional entropy of an erased symbol given its length-$\ell$ bidirectional context).


Index Terms: Context-tree weighting, discrete memoryless erasure channel, entropy, erasure entropy, side information, universal lossless compression.

The optimal rate is the conditional entropy rate $H(\mathbf{X} \mid \mathbf{Z})$, regardless of whether $\mathbf{Z}$ is available at the compressor [7], [8]. If $\mathbf{Z}$ is also available at the encoder, then the source can be recovered strictly losslessly with simpler schemes [1], [6]. In general, evaluating $H(\mathbf{X} \mid \mathbf{Z})$ analytically is a challenging task; obviously, it is upper bounded by the entropy rate $H(\mathbf{X})$. Another information measure related to the conditional entropy rate is the erasure entropy rate, defined in [6] as

$$H^-(\mathbf{X}) = \lim_{n \to \infty} \frac{1}{n} H^-(X^n) \qquad (1)$$

where

I. INTRODUCTION

A. Setup

Efficient algorithms for universal lossless compression with side information known to both the encoder and decoder have been developed in [1], based on the celebrated context-tree weighting (CTW) algorithms [2], [3]. The Lempel-Ziv class of algorithms can also deal with some types of side information [4], [5]. More recently, [6] has analyzed the fundamental limits of data compression when the side information is the output of a memoryless erasure channel driven by the source. The algorithms in [1] can not only be applied to this setup but also achieve the minimum compression rate asymptotically. In this paper, however, we present an algorithm that outperforms (nonasymptotically) the algorithms in [1] by exploiting the special structure of the side information, namely, the original source corrupted by memoryless erasures. The setup is illustrated in Fig. 1, where

$$H^-(X^n) = \sum_{i=1}^{n} H\bigl(X_i \mid X_1^{i-1}, X_{i+1}^{n}\bigr) \qquad (2)$$

Theorem 1 ([6]): If $\mathbf{X}$ is stationary, and the erasure process is independent and identically distributed (i.i.d.) with erasure rate $\delta$ and independent of $\mathbf{X}$, then

$$\lim_{\delta \downarrow 0} \frac{1}{\delta} H(\mathbf{X} \mid \mathbf{Z}) = H^-(\mathbf{X}) \qquad (3)$$

$$\lim_{\delta \uparrow 1} \frac{1}{\delta} H(\mathbf{X} \mid \mathbf{Z}) = H(\mathbf{X}) \qquad (4)$$

Reference [6] also provides a formula for the case when $\mathbf{X}$ is a binary symmetric Markov chain.

Theorem 2 ([6]): If $\mathbf{Z}$ is the output of DME($\delta$)$^1$ with a general first-order Markov chain input $\mathbf{X}$, then $H(\mathbf{X} \mid \mathbf{Z})$ is given by (5), where (6)

$$Z_i = \begin{cases} X_i & \text{if } E_i = 0 \\ e & \text{if } E_i = 1 \end{cases}$$

with $e$ representing the erasure, and the stationary erasure process $\mathbf{E}$ having erasure rate $\delta$. We assume that $X_i$ takes values on a finite alphabet $\mathcal{A}$, so $Z_i$ takes values in the finite alphabet $\mathcal{A} \cup \{e\}$. The goal is to universally and losslessly compress the source sequence when $\mathbf{Z}$ is available at both the encoder and decoder.

B. Fundamental Limit

Generally, the optimal information rate for the encoder to almost losslessly describe the source to the decoder is the conditional entropy rate $H(\mathbf{X} \mid \mathbf{Z})$.

Manuscript received June 20, 2007; revised March 30, 2008. Current version published November 21, 2008. This work was supported by the National Science Foundation under Grant CCF-0728445. The authors are with the Department of Electrical Engineering, Princeton University, Princeton, NJ 08544 USA (e-mail: jimingyu@princeton.edu; verdu@princeton.edu). Communicated by W. Szpankowski, Associate Editor for Source Coding. Color versions of Figures 2-4 in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIT.2008.2006387
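The erasure channel above is straightforward to simulate; the following is a minimal sketch (the function name and the Bernoulli-sampling details are illustrative, not from the paper):

```python
import random

def dme(x, delta, erasure="e", rng=None):
    """Pass sequence x through a discrete memoryless erasure channel DME(delta):
    each symbol is independently replaced by the erasure symbol with prob. delta."""
    rng = rng or random.Random(0)
    return [erasure if rng.random() < delta else s for s in x]

x = list("abracadabra")
z = dme(x, 0.5)
# Non-erased symbols of z always coincide with the corresponding symbols of x.
assert all(zi == xi for zi, xi in zip(z, x) if zi != "e")
```

The coincidence of non-erased symbols with the source is precisely the structural property that CTWE exploits.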

(7), where $h(\cdot)$ is the binary entropy function.

In Section II, we present the compression algorithm CTW for Erased symbols (CTWE), which borrows ideas from both CTW [2], [3] and the lossless compressor with side information of [1]. The algorithm is analyzed in Section III when the erasure channel is memoryless. In Section IV, we give experimental results testing the algorithm on English texts and Markov sources.
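The binary entropy function referenced above has the standard form $h(p) = -p\log_2 p - (1-p)\log_2(1-p)$; a minimal sketch:

```python
import math

def h(p):
    """Binary entropy function in bits; h(0) = h(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# h is maximized at p = 1/2, where a binary symbol is incompressible.
print(h(0.5))  # 1.0
```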

$^1$We denote the discrete memoryless erasure (DME) channel with erasure rate $\delta$ by DME($\delta$).

Authorized licensed use limited to: IEEE Xplore. Downloaded on February 25, 2009 at 12:41 from IEEE Xplore. Restrictions apply.


IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 12, DECEMBER 2008

Fig. 1. Lossless compression with an erased version of the source as side information at encoder and decoder.
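A consequence of the Fig. 1 setup is that the encoder only needs to describe the erased positions: merging the side information with the decoded erased symbols recovers the source exactly. A minimal sketch (the helper names are illustrative):

```python
def erased_positions(z, erasure="e"):
    """Indices where the side information carries no source symbol."""
    return [i for i, s in enumerate(z) if s == erasure]

def reconstruct(z, decoded, erasure="e"):
    """Fill the erasures of z with the decoded symbols, in order."""
    it = iter(decoded)
    return [next(it) if s == erasure else s for s in z]

x = list("abracadabra")
z = ["a", "e", "r", "a", "e", "a", "d", "e", "b", "r", "a"]
payload = [x[i] for i in erased_positions(z)]   # what the encoder must convey
assert reconstruct(z, payload) == x
```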

II. THE CTWE ALGORITHM

A. Motivation

It is easy to verify that if $\mathbf{Z}$ is the response of DME($\delta$) to $\mathbf{X}$, then (8) holds, and consequently

(9) holds as well. Equalities (8) and (9) not only quantify how many bits are required for lossless compression of each erasure, but also suggest encoding each erased symbol according to its conditional distribution given the rest of the data. Thus, a bidirectional model is needed for this cross-bidirectional conditional distribution, i.e., both the past and the future of the erased symbol are conditioned upon. To construct a model for the estimation of this distribution, we restrict it to a finite-memory approximation (10) for some memory length $\ell$. The basis for applying the basic CTW is the following consequence of (8): (11) holds for any $\ell$.
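The cross-bidirectional context of position $i$ with memory length $\ell$ consists of the $\ell$ symbols before and the $\ell$ symbols after position $i$. A minimal sketch of extracting such contexts (the function name is illustrative; in CTWE the past context comes from reconstructed source symbols and the future context from the side information):

```python
def cross_context(past, future, i, ell):
    """Length-ell past and future contexts around position i.
    `past` supplies the symbols before i and `future` the symbols after i."""
    # max(0, ...) avoids Python's negative-index wraparound near the boundary.
    return tuple(past[max(0, i - ell):i]), tuple(future[i + 1:i + 1 + ell])

x = list("abracadabra")
left, right = cross_context(x, x, 5, 2)
assert left == ("a", "c") and right == ("d", "a")
```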

The terminology and notation in this section are those of [2]. In our modified tree structure, the branches of the tree and the nodes they point to are labeled by context symbols. For any cross-bidirectional context (or node; the terms are used interchangeably in this paper), define: the number of occurrences of each symbol in the cross-bidirectional context; the estimated probability (the Krichevsky-Trofimov (KT) estimate [9]) for the node; and the weighted probability for the node. We focus our description on the compressor, with the decompressor proceeding in parallel as the process is recovered. A frequently used subprocedure is the updating rule for the CTW tree as described in [2], which we modify to the cross-bidirectional setting as follows.

Procedure 1: Update

Update the tree according to a symbol in its context as follows.

Step 1: start from the root node and go through the branches labeled by the context symbols

to arrive at the corresponding node.

Step 2: starting from the leaf node reached at the end of Step 1, update each node along the path traversed in Step 1.

Step 2.1: update the estimated probability for the node according to the KT rule for the observed symbol in this context.

B. The Compression Algorithm CTWE

Fix the memory length $\ell$. With the source and side-information realizations given, the aim is to obtain, for each erased position, an estimate of the conditional distribution of the erased symbol based only on the knowledge available at both the encoder and decoder. Because $\mathbf{Z}$ is an erased version of $\mathbf{X}$, the symbols in $\mathbf{Z}$ that are not the erasure must coincide with the corresponding symbols in $\mathbf{X}$: it is this property that we exploit in our initial construction of a cross-bidirectional CTW tree and its updating rules. Later on, as we gain more knowledge of the source sequence (corresponding to the reconstruction of source symbols one by one at the decoder), we can update this tree sequentially to obtain more accurate estimates.
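The estimated and weighted probabilities maintained at each node follow the standard KT/CTW recursions [2], [9]: the KT sequential update multiplies a node's estimated probability by (count + 1/2)/(total + |A|/2), and an internal node's weighted probability averages its own estimate with the product of its children's weighted probabilities. A minimal sketch for a binary alphabet (the class and method names are illustrative, not the paper's code):

```python
class Node:
    """One CTW node: symbol counts, KT estimated probability, children."""
    def __init__(self):
        self.counts = [0, 0]   # occurrences of symbol 0 and symbol 1
        self.pe = 1.0          # KT probability of the symbols seen so far
        self.children = {}

    def kt_update(self, sym):
        # KT sequential update for a binary alphabet: (count + 1/2)/(total + 1).
        total = self.counts[0] + self.counts[1]
        self.pe *= (self.counts[sym] + 0.5) / (total + 1.0)
        self.counts[sym] += 1

    def weighted(self):
        """Weighted probability: the KT estimate at a leaf; at an internal
        node, the average of the KT estimate and the children's product."""
        if not self.children:
            return self.pe
        prod = 1.0
        for child in self.children.values():
            prod *= child.weighted()
        return 0.5 * self.pe + 0.5 * prod

n = Node()
for s in (0, 1, 0, 0):
    n.kt_update(s)
assert abs(n.pe - 5 / 128) < 1e-12  # KT probability of the sequence 0,1,0,0
assert n.weighted() == n.pe         # a childless node acts as a leaf
```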

Step 2.3: update the weighted probability for the node: at maximal depth it equals the estimated probability, and otherwise it equals one half of the estimated probability plus one half of the product of the weighted probabilities over the set of all children of the node.

Step 2.4: if the node is the root node, stop; otherwise, move to the father node of the current node and go to Step 2.1.

Remark 1: Before describing the CTWE algorithm, we notice that after encoding a symbol at the encoder and reconstructing it at the


decoder, both the encoder and the decoder know the symbol; thus, it can be used to update the model at both ends. Essentially, the maximum exploitation of available data is achieved: by knowing the symbol at both the encoder and decoder, we can update the empirical count and estimated/weighted probabilities in its cross-bidirectional context, as well as in any cross-bidirectional context in which the symbol was originally erased, provided the other symbols of that context are not erased (such contexts are available at both the encoder and decoder).

Now we are ready to describe our encoding algorithm CTWE.

Step 1: construct a basic CTW tree with maximal depth according to $\mathbf{Z}$, but during the construction, whenever an erasure is encountered, skip that symbol.

Step 1.1: initially, a complete, balanced $|\mathcal{A}|$-ary tree with the chosen depth is easily constructed. All estimated probabilities and weighted probabilities for the nodes of the tree are initially set to one.

Step 1.2: if there are erasures in the current context, go to Step 1.3; otherwise, execute the Update procedure for the current symbol and its context.

Step 1.3: advance to the next position; if symbols remain, go to Step 1.2, otherwise stop.

Step 2: for each character, if it is not an erasure, then we do not need to encode it; otherwise, do the following to encode it.

Step 2.1: if the position is near the beginning of the sequence, so that the full past context is not yet available, we cannot use the estimated model yet, so do the following:

Step 2.1.1: compress the symbol by using a trivial fixed-length code for the source alphabet, by first conveying to the decoder the maximal depth chosen by the encoder. The decoder can directly decode this symbol.

Step 2.1.2: update the basic CTW tree (notice the symbol is available to both the encoder and decoder now, after reconstruction at the decoder): execute Update for the applicable contexts.

Step 2.2: when the CTW model can be used to encode the symbol:

Step 2.2.1: feed the coding distribution estimated by the CTW tree model into the arithmetic coder that compresses the symbol. This estimation is achieved by the current symmetric bidirectional basic CTW model specified by the current tree and its current statistics.$^2$

Step 2.2.2: update the basic CTW tree as follows (notice the symbol is available to both the encoder and decoder now, after reconstruction at the decoder):

$^2$For details, cf. [10], [11], and notice the obvious modification in our case with cross-bidirectional contexts (namely, the past and future contexts are given by different processes).

Execute Update for the applicable contexts.

Step 2.3: in the remaining case where the CTW model can be used to encode the symbol:

Step 2.3.1: feed the coding distribution estimated by the CTW tree model into the arithmetic coder that compresses the symbol. This estimation is performed as in Step 2.2.1.

Step 2.3.2: update the basic CTW tree (notice the symbol is available to both the encoder and decoder now, after reconstruction at the decoder): execute Update for the applicable contexts.

Step 2.4: if the position is near the end of the sequence, so that the full future context is not available, we cannot use the CTW tree model; compress the symbol by using a trivial fixed-length code for the source alphabet, by first conveying to the decoder the maximal depth chosen by the encoder. The decoder can directly decode this symbol. We do not need to update the CTW tree model further, since there are no more symbols to encode other than the erasures among those last symbols.

Example 1: Take a binary alphabet, and suppose a source sequence to be compressed and its erased version are given. Initially, Step 1 updates according to the bold-face symbol and its length-$\ell$ underlined cross-bidirectional context below:

namely, Update is executed at both the encoder and decoder, as the context is available to both. Similarly, two more updates are executed in Step 1.

Next, in Step 2, the first symbols are encoded using a trivial fixed-length code; the first erasure is encoded according to the estimates from the tree constructed so far, and the decoder learns this symbol from decoding. At this point, the decoder actually knows the following:

and at the encoder and decoder we may use only this information, in order to maintain synchronization. Now Step 2 updates the tree by executing Update for the bold-face symbol and its length-$\ell$ underlined cross-bidirectional context shown above. Following this update, the second erasure is encoded according to the estimates from the tree. After decoding, the following is known at both the encoder and decoder:



The updates at both the encoder and decoder are as follows, executing Update for the bold-face symbols and their cross-bidirectional contexts:

(14)
Now we are in a better position to encode the last erasure, and the maximum exploitation of available data is achieved. The remaining steps can be derived similarly from the algorithm description.

C. Complexity Issues and Implementation Considerations

The basic CTW tree in Section II-B has a fixed number of levels of nodes (determined by the memory length), consuming memory of constant size and linear time to construct; storing the source sequence or the side-information sequence at the encoder or decoder consumes linear memory. Therefore, the time and space complexities are both linear in the data length.

Implementation considerations for our CTWE algorithm are the same as for other CTW-based algorithms; namely, a binary symbol decomposition method is used for a general finite alphabet, cf. [1], [10]-[13].

III. CONSISTENCY RESULTS

We analyze algorithm CTWE of Section II-B under the assumption that the process $\mathbf{Z}$ is obtained by passing $\mathbf{X}$ through the DME($\delta$), though CTWE applies to general erasure channels. The analysis here is similar to those of [1], [10], [11], but to deal with erasures there are a number of important differences (especially in obtaining the formulae for the empirical counts in Proposition 1, and in proving Lemma 1 and Proposition 2 below).

In Step 2 of the algorithm, we encode a symbol and update the basic CTW tree if and only if that symbol is an erasure. We have the following intuitive fact.

Proposition 1: Assuming we have successfully encoded and described to the decoder all the erased symbols so far (the decoder can then reconstruct them), the following updated empirical counts hold for the nodes

, as well as the corresponding estimated and weighted probabilities.

Proof: Obviously, (12) implies (13) and (14) implies (15), but with an additional (unnecessary) dependence that is convenient for the purpose of analyzing the algorithm. Showing (12) and (14) is a lengthy verification with many possible subcases, detailed in [14].

Proposition 1 confirms that CTWE has utilized the data to the largest extent, as expected from our algorithm description. The empirical counts used when encoding in the remaining case (corresponding to Step 2.3) can be obtained similarly, but are not important in our asymptotic analysis.

Note that although we only need the counts for the sample paths actually arising in compression, through (12) and (14) we can define them for any sample path, and we indeed do so in the sequel; however, only those arising in compression are used there.

Notice that every estimate or quantity used in encoding is derived from the empirical counts. In particular, the estimated probabilities and weighted probabilities at each node are deterministic functions of those counts, where the terminology is from [2], [3]. Let us recall from [2], [3] that the estimated probability is the KT estimator [9] for the node, determined by the empirical counts via a two-case relation. Define (16); and for any node for which the relevant condition holds, let us define

(12)

If the complementary case holds, then

(13)


denoting all the leaf nodes of the same depth in the subtree rooted at the node, we finally get (by taking the appropriate limit)

According to Step 2 of the algorithm in Section II-B (cf. [10], [11] for a detailed description of Steps 2.2.1 and 2.3.1, as well as for the following formula), the conditional distribution used for encoding (if necessary) is the probability vector specified by

(17) for any symbol, and the trivial bound

(18)

holds for any node and every finite-length sample path.

Lemma 1: Let $\mathbf{X}$ be stationary and ergodic, and let the node be such that there exist contexts with a different conditional probability; then (22) holds, where the last equality is due to the fact that the corresponding term vanishes. Notice from (17) that a trivial bound for

(23)

Then, for a node in the basic CTW tree of Section II-B, the convergence holds a.s.

Proof: We assume the asymptotic cases of (14) and (15) hold. One has (19), which takes values in a finite set, and the random variables (25). Define (24)

(20)

And by recursively applying the definition for a certain node, as in (20), until we have reached the leaf nodes, with


in both fractions

(27) by Proposition 1. Since the pair process is jointly stationary and ergodic whenever $\mathbf{X}$ is stationary and ergodic, by the ergodic theorem (e.g., [15]), as $n \to \infty$ the first term on the right side of (27) converges almost surely to its limit. Similarly, the second fraction converges to

a.s. (34)

Hence, by (32), almost surely

(28)

(35) (36)

almost surely. Therefore, the second term on the right side of (27) converges almost surely to its limit. Hence, letting the indices grow in (27), almost surely we have (29) for any node, implying (30). Thus, under the stated condition, for large enough $n$, a.s. (31)

by (29) again, where (35) is also due to the fact that the corresponding term vanishes asymptotically. Note that, for any node, almost surely we have

Notice that, for any node, the estimated probability is a KT estimator [9] satisfying [9, Sec. II, eq. (2.6)] (which is based on Stirling's formula for Gamma functions)

(38) where the inequality holds by the concavity of entropy, and becomes an equality if and only if, for every admissible context, we have

(39)

Here, (33) is the empirical estimate of the entropy of the conditional distribution of a symbol given its cross-bidirectional context (namely, the past and the future specifying that context). But the equality cannot hold for all contexts, since this would contradict our assumption; thus, strict inequality in (38) holds. Hence, by (64), almost surely the exponent on the right-hand side of (22) becomes (note that (31) holds for


with (44) and (45)

Now let

(46); such an integer always exists. Thus, it follows that:

Case 1: for each node such that its depth is below the threshold, we have the required convergence; obviously such a choice always satisfies (46).

(47)

(40), where the last sum is over all contexts, and this is a strictly positive constant, because of our assumption and because of the strict inequality in (38). The result then follows from (22) and (23).

Define

(41)

Lemma 2: If $\mathbf{X}$ is stationary and ergodic, and the erasure channel is memoryless, then the convergence holds a.s. as $n \to \infty$ for any node.

Proof: In view of (14) and (15), we may restrict attention to the equivalent case, in which (17) becomes (42)

a.s.

(48)

by the stationarity and ergodicity of $\mathbf{X}$ (cf. (29) and the derivation of (34) in the proof of Lemma 1).

Case 2: for each node such that its depth exceeds the threshold, the condition of Lemma 1 is satisfied, thus implying (49) a.s., and hence convergence to

a.s. (50)

In view of (43), notice that there are only finitely many terms, and the coefficients of those estimated probabilities


a.s.

(51)


Fig. 2. Compression rate per erasure versus erasure rate: Markov source.

as $n \to \infty$ for any node, from the continuity of the logarithm, and the result follows.

Now we are ready to analyze the length of the compressed version of the source given the side information. The total code length does not exceed

(52) where the first term accounts for the binary representation of the memory length [16], [17] with a universal constant; the second term arises because, in the first and last portions of the sequence, each source symbol that is an erasure is directly encoded with a trivial fixed-length code for the alphabet, contributing at most a bounded number of information bits each to the code length; the third and fourth terms reflect the arithmetic coding operations.

Proposition 2: If $\mathbf{X}$ is stationary and ergodic, and $\mathbf{Z}$ is obtained by passing $\mathbf{X}$ through the DME($\delta$), then the stated convergence holds a.s.

Proof: One extreme case reduces to the case without side information; thus, we assume otherwise. Let (53). The first sum on the right side of (59) is equal to

(59)

(60)


TABLE I: COMPRESSION RATES FOR BINARY MARKOV SOURCE WITH TRANSITION PROBABILITY p = 0.1

almost surely, where the inner sum is over all contexts. By Lemma 2, the inner sum converges almost surely as $n \to \infty$. Noticing the bound (18) for the summands (uniform in the node) and the bounded range of the inner sum, we see that each inner sum is bounded (uniformly in the node). Thus, the normalized sum (60) converges almost surely as $n \to \infty$. According to the relevant case distinction, almost surely the second sum in (59) is equal to

by using the same trick as in deriving (63), avoiding the verification of an integrability condition involving the logarithm of the probabilities. The desired result follows from (64) and the fact that (59) converges almost surely.

We can achieve the same compression rate as that of [1, Theorem 1], as shown in Theorem 3. We illustrate in Section IV that our compressor CTWE outperforms both the basic and extended algorithms in [1] in many practical situations. We also remark that results such as [18] indicate that a judiciously chosen memory length leads to convergence to the conditional entropy.

Theorem 3: If $\mathbf{X}$ is stationary and ergodic, and $\mathbf{Z}$ is obtained by passing $\mathbf{X}$ through the DME($\delta$), then the asymptotic compression rate per symbol is, almost surely, the finite-memory conditional entropy appearing in (11).

Proof: It follows from (52), Proposition 2, and (11).

IV. EXPERIMENTAL RESULTS

We experiment with the following algorithms: the CTWE algorithm (cf. Section II-B), the CKV Basic CTW algorithm [1], and the CKV Extended CTW algorithm [1], in the case where the erasure channel is stationary and memoryless. The performance of these algorithms is measured by the compression rate per erasure, that is, the code length divided by the number of erasures. Under the hypotheses of Theorem 1, the compression rate per erasure approaches the erasure entropy as the erasure rate vanishes; similarly (cf. [6]), it approaches the entropy per erasure as the erasure rate approaches one. We will compare the performance of our algorithm CTWE and the algorithms of [1] (normalized by the number of erasures) to the erasure entropy and the entropy per erasure, in the cases when $\delta$ is close to $0$ and when $\delta$ is close to $1$, respectively. For intermediate $\delta$, we are able to compare the algorithms against the theoretical values of the optimal information rate per erasure for binary symmetric Markov chains, thanks to Theorem 2; but in general, we can only compare the algorithms among themselves (normalized by the number of erasures), while providing estimates of the entropy per erasure and the erasure entropy for reference.
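The performance metric used throughout Section IV, the compression rate per erasure, is simply the emitted code length divided by the number of erasures in the side information; a trivial sketch (the function name is illustrative):

```python
def rate_per_erasure(code_length_bits, z, erasure="e"):
    """Code length in bits divided by the number of erasures in z."""
    k = sum(1 for s in z if s == erasure)
    if k == 0:
        raise ValueError("no erasures: nothing was encoded")
    return code_length_bits / k

# 6 bits spent on 3 erasures gives 2 bits per erasure.
assert rate_per_erasure(6, ["a", "e", "b", "e", "e"]) == 2.0
```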
In addition, we list the entropy per symbol. Furthermore, in CTWE we need to specify the parameter $\ell$ used for compression: this is simple; we always choose the parameter that results in the smallest compression rate (similarly for the Cai-Kulkarni-Verdú (CKV) Basic CTW algorithm [1]). While this criterion is heuristic, it is supported by information-theoretic results such as [18].

(61)

(62) (63) where the sums are always understood to be over all admissible indices; (61) follows from the stationarity and ergodicity of $\mathbf{X}$; (62) follows from the fact that $\mathbf{Z}$ is the output of DME($\delta$) with input $\mathbf{X}$. Thus, (59) converges almost surely as $n \to \infty$, according to the facts that (60) converges almost surely and that (63) holds. One more use of the ergodic theorem (e.g., [15]) gives

a.s. (64)

TABLE II: COMPRESSION RATES FOR ENGLISH TEXTS WITH ERASURE RATE = 0.1

TABLE III: COMPRESSION RATES FOR ENGLISH TEXTS WITH ERASURE RATE = 0.9

In the binary Markov source experiment, as the algorithms in [1] already come close to the conditional entropy rate per erasure, CTWE exhibits only small improvements; whereas in the more interesting case of English texts, CTWE shows notable improvements over the algorithms in [1] in most cases. This means that our purpose has been achieved: exploiting the structure of the erasure channel to improve upon the general-purpose algorithms in [1].
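For the binary symmetric Markov source used in the experiments below, the entropy rate and the erasure entropy can be computed by direct enumeration: the entropy rate is $h(p)$, and, for a stationary symmetric chain, the erasure entropy is $H(X_0 \mid X_{-1}, X_1)$. A sketch of that enumeration (not the paper's code; it only uses stationarity and the Markov property):

```python
import math
from itertools import product

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def erasure_entropy_bsmc(p):
    """H(X0 | X_{-1}, X_1) for a stationary binary symmetric Markov chain
    with transition probability p: the erasure entropy of the source."""
    t = lambda a, b: 1 - p if a == b else p   # one-step transition probability
    H = 0.0
    for xm, x1 in product((0, 1), repeat=2):
        # joint probability of the neighbors (x_{-1}, x_1), marginalizing x0
        pj = sum(0.5 * t(xm, x0) * t(x0, x1) for x0 in (0, 1))
        # conditional probability that x0 = 0 given the two neighbors
        q = 0.5 * t(xm, 0) * t(0, x1) / pj
        H += pj * h(q)
    return H

p = 0.1
print(round(h(p), 3), round(erasure_entropy_bsmc(p), 3))  # 0.469 0.258
```

For p = 0.1 the erasure entropy (about 0.258 bits) is well below the entropy rate (about 0.469 bits), illustrating the gap that the per-erasure compression rates of Section IV-A probe.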

A. Binary Markov Source

For the binary Markov source experiment, every data point is obtained by running the algorithms 100 times and recording the average, each point corresponding to a fixed data length. In this case, we can compute the entropy, conditional entropy, and erasure entropy analytically (cf. (5), (7), and [6]). We have experimented with several transition probabilities and erasure rates. Since the remaining cases are similar to the cases shown (Fig. 2 and Table I), the corresponding figures and tables are omitted. From Fig. 2, notice that the CKV


Fig. 3. Compression rate per erasure versus erasure rate: English text.

Fig. 4. Compression rate per erasure versus erasure rate: English text.

algorithms have essentially achieved the conditional entropy per erasure in this finite-length regime; CTWE has some small improvements over the CKV algorithms in most cases. CTWE

works only slightly worse than the CKV algorithms in one case. In Table I, we see that the compression rate per erasure is indeed close to the erasure entropy when the erasure rate


is close to $0$, and is close to the entropy per erasure when the erasure rate is close to $1$. Notice that CTWE and the CKV basic and extended algorithms [1] work quite well, with the compression rate per erasure very close to the corresponding limit.

B. English Text

A more practical case is when the algorithms are applied to English texts and their erased versions. We use the same 15 English texts as in [10], [11], ordered in the same way, with data lengths as listed in Tables II and III. In this case, the entropy, conditional entropy, and erasure entropy are all unknown: the entropy is estimated by actually running an entropy-achieving compressor [3], which is meaningful when we compare the compression rate per erasure obtained without looking at the side information to that obtained when the side information is utilized in compression; all three compressors (our CTWE and the CKV basic and extended CTW [1]) can be used as proxies for the conditional entropy; erasure entropy estimates are the basic or extended CTW estimators from [10], [11], thanks to the related consistency results there.

Two typical curves of the compression rate per erasure versus the erasure rate are drawn in Figs. 3 and 4, corresponding to Novel Number 1 (an English translation of Don Quixote) and Novel Number 2 (David Copperfield). When the erasure rate is not too high, CTWE performs much better than the CKV algorithms. But when the erasure rate is high, CTWE suffers a performance degradation, because the side information then provides too little and too skewed information.

REFERENCES

[1] H. Cai, S. R. Kulkarni, and S. Verdú, "An algorithm for universal lossless compression with side information," IEEE Trans. Inf. Theory, vol. 52, no. 8, pp. 4008-4016, Aug. 2006.

[2] F. M. J. Willems, Y. M. Shtarkov, and T. J. Tjalkens, "The context-tree weighting method: Basic properties," IEEE Trans. Inf. Theory, vol. 41, no. 3, pp. 653-664, May 1995.
[3] F. M. J. Willems, "The context-tree weighting method: Extensions," IEEE Trans. Inf. Theory, vol. 44, no. 2, pp. 792-798, Mar. 1998.
[4] Y. Hershkovits and J. Ziv, "On fixed-database universal data compression with limited memory," IEEE Trans. Inf. Theory, vol. 43, no. 6, pp. 1966-1976, Nov. 1997.
[5] Y. Hershkovits and J. Ziv, "On sliding-window universal data compression with limited memory," IEEE Trans. Inf. Theory, vol. 44, no. 1, pp. 66-78, Jan. 1998.
[6] S. Verdú and T. Weissman, "The information lost in erasures," IEEE Trans. Inf. Theory, vol. 54, no. 11, pp. 5030-5058, Nov. 2008.
[7] D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources," IEEE Trans. Inf. Theory, vol. IT-19, no. 4, pp. 471-480, Jul. 1973.
[8] T. M. Cover, "A proof of the data compression theorem of Slepian and Wolf for ergodic sources," IEEE Trans. Inf. Theory, vol. IT-22, no. 2, pp. 226-228, Mar. 1975.
[9] R. E. Krichevsky and V. K. Trofimov, "The performance of universal encoding," IEEE Trans. Inf. Theory, vol. IT-27, no. 2, pp. 199-207, Mar. 1981.
[10] J. Yu and S. Verdú, "Universal erasure entropy estimation," in Proc. 2006 IEEE Int. Symp. Information Theory, Seattle, WA, Jul. 2006, pp. 2358-2362.
[11] J. Yu and S. Verdú, "Universal estimation of erasure entropy," IEEE Trans. Inf. Theory, vol. 55, no. 1, Jan. 2009, to be published.
[12] F. M. J. Willems and T. J. Tjalkens, "Complexity Reduction of the Context-Tree Weighting Algorithm: A Study for KPN Research," Tech. Univ. Eindhoven, Eindhoven, The Netherlands, EIDMA Rep. RS.97.01, Jan. 1997.
[13] P. Volf, "Weighting Techniques in Data Compression: Theory and Algorithm," Ph.D. dissertation, Tech. Univ. Eindhoven, Eindhoven, The Netherlands, 2002.
[14] J. Yu, "Bidirectional Modeling of Finite-Alphabet Sources," Ph.D. dissertation, Princeton Univ., Princeton, NJ, 2007.
[15] P. T. Maker, "The ergodic theorem for a sequence of functions," Duke Math. J., vol. 6, no. 1, pp. 27-30, 1940.
[16] P. Elias, "Universal codeword sets and representations of the integers," IEEE Trans. Inf. Theory, vol. IT-21, pp. 194-203, Mar. 1975.
[17] A. D. Wyner and J. Ziv, "The sliding-window Lempel-Ziv algorithm is asymptotically optimal," Proc. IEEE, vol. 82, no. 6, pp. 872-877, Jun. 1994.
[18] D. L. Neuhoff and P. C. Shields, "Simplistic universal coding," IEEE Trans. Inf. Theory, vol. 44, no. 2, pp. 778-781, Mar. 1998.
