
SUMMARY OF THE FIRST CHAPTER

We have to take advantage of redundancy, and we have seen multiple types of redundancy in the data, such as coding redundancy and inter-sample redundancy. We also talked about compression as the combination of a model and a coding stage in the compression algorithm: the model is a mathematical model of the correlation between different samples, which we use to take advantage of the inter-pixel redundancy, while the coding stage is an algorithm that usually considers the first-order probabilities of the possible values the data can take.
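As a minimal sketch of what "first-order probabilities" means for the coding stage (the function name and data are illustrative, not from the lecture), the snippet below estimates the empirical first-order distribution of the sample values and the resulting Shannon entropy, which is the lower bound in bits per sample for a coder that ignores inter-sample correlation:

```python
from collections import Counter
import math

def first_order_entropy(samples):
    # Empirical first-order probabilities: relative frequency of each value,
    # ignoring any correlation between neighbouring samples.
    counts = Counter(samples)
    n = len(samples)
    # Shannon entropy in bits per sample.
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A highly redundant signal has a skewed distribution and low entropy.
data = [0, 0, 0, 1, 0, 0, 0, 1]
print(first_order_entropy(data))  # ≈ 0.811 bits/sample, well under 1 bit
```

A coder such as Huffman or arithmetic coding can then approach this bound by assigning shorter codewords to the more probable values.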

The two main elements of any compression scheme are modeling and coding: for the inter-pixel redundancy we need a model to take advantage of the data, and for the coding redundancy we need an encoder to take advantage of that. Since the way we acquire the data is typically very redundant, we usually have both inter-sample redundancy and coding redundancy.

The more interesting part of compression is the inter-pixel, or inter-sample, redundancy, because there is a lot more variety in the techniques that are used. This is a big advantage, because different types of signal each have their own statistical characteristics and their own correlations, and you need to come up with a specific algorithm to take advantage of those. So for images we will use something different from what we will use for video, audio, and so on.

So different data types correspond to different models of inter-sample redundancy, because the model of the inter-sample redundancy is usually linked to the physical phenomenon that generates the data: the data are correlated because of the way they are generated and the way we sample them. Usually, for simplicity, we sample the data in a very naive way, taking a bunch of pixels, probably many more than we would actually need, and those pixels are correlated.

We talked about prediction and how to use traditional linear prediction to take advantage of the inter-sample redundancy. The corresponding formula essentially tells us what we can say about the current sample (without observing it) just from the history of the previous samples. Obviously, the way this is done has to depend on the nature of the data themselves: it is related to how the data evolve over time and to the dimensionality of the data.
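The linear predictor described above can be sketched as follows (the order and the coefficients here are illustrative choices, not the lecture's): the predicted value is a weighted sum of the previous samples.

```python
def predict(history, coeffs):
    # Linear prediction: x_hat[n] = sum_k a_k * x[n-k],
    # where history[-1] is the most recent past sample (k = 1).
    return sum(a * x for a, x in zip(coeffs, reversed(history)))

# Order-2 predictor that extrapolates a straight line:
# x_hat[n] = 2*x[n-1] - x[n-2].
coeffs = [2.0, -1.0]
print(predict([1.0, 3.0, 5.0], coeffs))  # 7.0
```

For a signal that really does evolve linearly over time, this predictor is exact and the residuals are all zero; for real signals the coefficients are chosen (e.g. by least squares) to make the residuals as small as possible.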

If we have such a formula, then at any time "n" we can calculate the difference between the true sample value, which is obviously not used in the prediction, and the predicted value calculated from the past samples. Then we can communicate this residual to the decoder, which can reconstruct the data without knowledge of "x", simply from the sequence of residuals.
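A minimal sketch of this residual loop, assuming purely for illustration an order-2 straight-line predictor (with simple fallbacks while there is not enough history): the encoder sends only the residuals, and the decoder, which never sees "x" directly, reconstructs it exactly by running the same predictor on what it has already decoded.

```python
def predict(past):
    # Illustrative order-2 linear predictor: x_hat[n] = 2*x[n-1] - x[n-2].
    # With too little history, fall back to the last sample, or to 0.
    if not past:
        return 0
    if len(past) == 1:
        return past[-1]
    return 2 * past[-1] - past[-2]

def encode(signal):
    # Residual e[n] = x[n] - x_hat[n]; the true x[n] is not used in its
    # own prediction, only the past samples are.
    return [x - predict(signal[:n]) for n, x in enumerate(signal)]

def decode(residuals):
    # The decoder knows only the residuals, yet reconstructs x exactly,
    # because it can form the same predictions from the samples it has
    # already reconstructed.
    recon = []
    for e in residuals:
        recon.append(predict(recon) + e)
    return recon

x = [3, 5, 7, 9, 12, 15]
e = encode(x)
print(e)               # [3, 2, 0, 0, 1, 0] -> small, low-entropy residuals
print(decode(e) == x)  # True: lossless reconstruction
```

The point of the scheme is visible in the output: the residual sequence is concentrated around zero, so it has much lower first-order entropy than the original samples and the coding stage can compress it far better.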
