
By BUSHRA SHAHEEN

CONTENTS:
- Definition
- Classification of image data
- Need for compression
- Compression algorithms
- Lossy & lossless compression
- JPEG image
- DCT & DFT
- Encoding process
- DCT encoding
- Decoding process
- Conclusion

DEFINITION
Compression is a reversible conversion of data to a format that requires fewer bits, usually performed so that the data can be stored or transmitted more efficiently. Image compression minimizes the size in bytes of a graphic file without degrading the quality of the image to an unacceptable level. It studies the techniques that reduce the amount of data needed to describe the information content of an image.

CLASSIFICATION OF IMAGE DATA:
- An image is represented as a two-dimensional array of coefficients.
- Each coefficient represents the brightness level at that point.
- Technically, the smooth variations in color can be termed low-frequency variations.
- The sharp variations are termed high-frequency variations.

WHY DO WE NEED COMPRESSION?
- Requirements may outstrip the anticipated increase of storage space and bandwidth.
- Compression is needed for data storage and data transmission, e.g. DVD, video conferencing, printing.
- The bit rate of uncompressed digital cinema data exceeds 1 Gbps.

COMPRESSION ALGORITHMS:
Image compression algorithms can be divided into two branches:
- Lossless algorithms: there is no information loss, and the image can be reconstructed exactly as the original. Applications: medical imagery, archiving.
- Lossy algorithms: information loss is tolerable; compression uses many-to-one mappings, e.g. quantization. Applications: commercial distribution (DVD) and rate-constrained environments where lossless methods cannot provide a sufficient compression ratio.

LOSSY & LOSSLESS COMPRESSION

A photo of a cat compressed with successively more lossy compression ratios, from right to left.

An image with lossless compression, where the bytes are reduced from 83,261 to 1,523.

DIFFERENT FILE TYPES:
Among the different file types available, choosing the proper format results in higher-quality images. The most commonly found file types in web design are:
1. PNG (Portable Network Graphics): works well on screenshots, especially of movies, games, or similar content.
2. JPEG (Joint Photographic Experts Group): works with color and grayscale images, e.g., satellite and medical images.
3. GIF (Graphics Interchange Format): works on line art, illustrations, and photos with high contrast.

JPEG
(Intraframe coding)
- First-generation JPEG uses the DCT + run-length and Huffman entropy coding.
- Second-generation JPEG uses the wavelet transform + bit-plane coding + arithmetic entropy coding.
- More modern designs such as JPEG 2000 and JPEG XR exhibit a more graceful degradation of quality as the bit usage decreases, by using transforms with a larger spatial extent for the lower-frequency coefficients and by using overlapping transform basis functions.

Why DCT, Not DFT?
- The DCT is similar to the DFT, but can provide a better approximation with fewer coefficients.
- The coefficients of the DCT are real-valued, instead of complex-valued as in the DFT.
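To make this concrete, here is a minimal sketch (assuming NumPy and SciPy are available; the ramp signal is just an illustrative example, not from the slides) that transforms the same smooth 1-D signal with both transforms and prints the coefficients:

```python
import numpy as np
from scipy.fft import dct, fft

x = np.linspace(0, 1, 8)
signal = 100 + 20 * x                     # a smooth ramp, like a slowly brightening row of pixels

dct_coeffs = dct(signal, norm='ortho')    # real-valued
dft_coeffs = fft(signal)                  # complex-valued

print("DCT:", np.round(dct_coeffs, 2))    # energy piles up in the first few coefficients
print("DFT:", np.round(dft_coeffs, 2))    # complex values, energy split across conjugate pairs
```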

JPEG Compression Example

Original image: 512 x 512 x 8 bits = 2,097,152 bits
JPEG, 27:1 reduction: 77,673 bits

A JPEG COMPRESSED IMAGE

THE ENCODING PROCESS

DISCRETE COSINE TRANSFORM (DCT)
The source image samples are grouped into 8x8 blocks, shifted from unsigned integers to signed integers, and input to the DCT. The following equation is the idealized mathematical definition of the 8x8 DCT:

F(u,v) = \frac{1}{4} C(u)\,C(v) \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y) \cos\frac{(2x+1)u\pi}{16} \cos\frac{(2y+1)v\pi}{16}

where C(u), C(v) = 1/\sqrt{2} for u, v = 0, and C(u), C(v) = 1 otherwise.
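As an illustration of this step, the following sketch (NumPy/SciPy assumed; the step-edge block is a made-up example) performs the level shift and an 8x8 forward DCT. With norm='ortho', SciPy's 2-D DCT matches the 1/4 C(u)C(v) normalization of the definition above:

```python
import numpy as np
from scipy.fft import dctn

block = np.full((8, 8), 128, dtype=np.int16)   # a hypothetical flat 8x8 sample block
block[:, 4:] = 140                              # with a step edge for illustration

shifted = block - 128                           # unsigned [0, 255] -> signed [-128, 127]
F = dctn(shifted, norm='ortho')                 # forward 2-D DCT (the FDCT)

print(np.round(F, 1))                           # F[0, 0] is the DC coefficient
```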

THE 64 (8 X 8) DCT BASIS FUNCTIONS

- Each 8x8 block can be looked at as a weighted sum of these basis functions.
- The process of the 2-D DCT is also the process of finding those weights.
- The DCT transforms an 8x8 block of input values into a linear combination of these 64 patterns. The patterns are referred to as the two-dimensional DCT basis functions, and the output values are referred to as transform coefficients.
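The patterns themselves are easy to generate. The sketch below (NumPy assumed; the function name is ad hoc) builds all 64 unnormalized basis patterns from the cosine terms of the DCT definition; basis[u, v] is the 8x8 pattern weighted by coefficient F(u, v):

```python
import numpy as np

def dct_basis(u, v, n=8):
    x = np.arange(n)
    col = np.cos((2 * x + 1) * u * np.pi / (2 * n))   # cosine pattern along x
    row = np.cos((2 * x + 1) * v * np.pi / (2 * n))   # cosine pattern along y
    return np.outer(col, row)

basis = np.array([[dct_basis(u, v) for v in range(8)] for u in range(8)])
print(basis.shape)    # (8, 8, 8, 8): 64 patterns of size 8x8
print(basis[0, 0])    # the DC basis function is constant
```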

DCT PROCESSING
The DCT is related to the Discrete Fourier Transform (DFT). Some simple intuition for DCT-based compression can be obtained by viewing the FDCT as a harmonic analyzer and the IDCT as a harmonic synthesizer. Each 8x8 block of source image samples is effectively a 64-point discrete signal which is a function of the two spatial dimensions x and y.

The FDCT takes such a signal as its input and decomposes it into 64 orthogonal basis signals. Each contains one of the 64 unique two-dimensional (2-D) spatial frequencies which comprise the input signal's spectrum.
The output of the FDCT is the set of 64 basis-signal amplitudes, or DCT coefficients, whose values are uniquely determined by the particular 64-point input signal.

DCT Image Compression Process:

The DCT coefficient values can thus be regarded as the relative amounts of the 2-D spatial frequencies contained in the 64-point input signal. The coefficient with zero frequency in both dimensions is called the DC coefficient, and the remaining 63 coefficients are called the AC coefficients. Because sample values typically vary slowly from point to point across an image, the FDCT processing step lays the foundation for achieving data compression by concentrating most of the signal in the lower spatial frequencies. For a typical 8x8 sample block from a source image, most of the spatial frequencies have zero or near-zero amplitude and need not be encoded. Because the DCT equations contain transcendental functions, no physical implementation can compute them with perfect accuracy. Besides, JPEG does not specify a unique DCT algorithm in its proposed standard.

QUANTIZATION
To achieve further compression, each of the 64 DCT coefficients is uniformly quantized in conjunction with a 64-element quantization table, which is specified by the application. The purpose of quantization is to discard information which is not visually significant. Because quantization is a many-to-one mapping, it is fundamentally lossy; moreover, it is the principal source of lossiness in a DCT-based encoder. Quantization is defined as the division of each DCT coefficient by its corresponding quantizer step size, followed by rounding to the nearest integer, as in the following equation:

F^{Q}(u,v) = \operatorname{round}\!\left(\frac{F(u,v)}{Q(u,v)}\right)

Each quantization step size should ideally be chosen as the perceptual threshold, to compress the image as much as possible without visible artifacts. It is also a function of the source image characteristics, the display characteristics, and the viewing distance.
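A sketch of this step (NumPy assumed): Q below is the example luminance quantization table from Annex K of the JPEG standard, used here purely for illustration, since any application-supplied table is permitted:

```python
import numpy as np

# Example luminance quantization table (Annex K of the JPEG standard)
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(F, Q):
    # F^Q(u,v) = round(F(u,v) / Q(u,v)) -- the many-to-one, lossy step
    return np.round(F / Q).astype(np.int32)
```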

ENTROPY CODING
The final processing step of the encoder is entropy coding. This step achieves additional compression losslessly by encoding the quantized DCT coefficients more compactly based on their statistical characteristics. It is useful to consider entropy coding as a two-step process. The first step converts the zig-zag sequence of quantized coefficients into an intermediate sequence of symbols. The second step converts the symbols into a data stream in which the symbols no longer have externally identifiable boundaries. The form and definition of the intermediate symbols depend on both the DCT-based mode of operation and the entropy coding method.
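As a sketch of the first of those two steps for the AC coefficients (the normative Huffman/arithmetic tables of the second step are omitted), the function below turns a zig-zag ordered coefficient vector into intermediate (RUNLENGTH, SIZE, AMPLITUDE) symbols, with (0, 0) as the end-of-block marker:

```python
def ac_symbols(zigzag_ac):
    """Intermediate symbols for the 63 AC coefficients of one block (sketch only)."""
    symbols, run = [], 0
    for coeff in zigzag_ac:
        if coeff == 0:
            run += 1
            continue
        while run > 15:                        # runs longer than 15 zeros emit ZRL = (15, 0)
            symbols.append((15, 0, None))
            run -= 16
        size = abs(int(coeff)).bit_length()    # SIZE = number of bits needed for the amplitude
        symbols.append((run, size, coeff))
        run = 0
    symbols.append((0, 0, None))               # EOB (emitted unconditionally here for simplicity)
    return symbols

print(ac_symbols([5, -3, 0, 0, 2] + [0] * 58))
# [(0, 3, 5), (0, 2, -3), (2, 2, 2), (0, 0, None)]
```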

ZIG-ZAG SCAN OF DCT BLOCKS

Why? To group the low-frequency coefficients at the top of the vector. The scan maps the 8x8 block to a 1x64 vector.

The outputs of DPCM (Differential Pulse Code Modulation) and the zig-zag scan can then be encoded separately by entropy coding, which encodes the quantized DCT coefficients more compactly based on their statistical characteristics.
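One way to generate that ordering (a sketch, NumPy assumed) is to walk the anti-diagonals of the block, alternating direction on each one:

```python
import numpy as np

def zigzag(block):
    """Return the 8x8 block as a 1x64 vector in zig-zag order."""
    n = block.shape[0]
    order = sorted(((u, v) for u in range(n) for v in range(n)),
                   key=lambda p: (p[0] + p[1],                           # anti-diagonal index
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))  # alternate direction
    return np.array([block[u, v] for u, v in order])

print(zigzag(np.arange(64).reshape(8, 8))[:10])   # [ 0  1  8 16  9  2  3 10 17 24]
```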

THE DECODING PROCESS

DCT-Based Decoder Processing Steps

Entropy Decoding And Dequantization

ENTROPY DECODING: Similar to entropy coding, entropy decoding can also be considered a two-step process. The first step converts the input bit stream into the intermediate symbols. The second step converts the intermediate symbols into the quantized DCT coefficients. In fact, the output of the second step is the DC difference (the output of the DPCM) and the AC coefficients in zig-zag order. The DC difference is then decoded into the quantized DC coefficient, and the AC coefficients are reordered into their original order.

DEQUANTIZATION: The following step is to dequantize the output of the entropy decoding, returning the result to a representation appropriate for input to the IDCT. The equation is as follows:

F'(u,v) = F^{Q}(u,v) \cdot Q(u,v)

Inverse Discrete Cosine Transform (IDCT)

The last step of the decoder is the IDCT. It takes the 64 dequantized DCT coefficients and reconstructs a 64-point output image signal by summing the basis signals.

JPEG does not specify a unique IDCT algorithm in its standard either. The mathematical definition of the 8x8 IDCT is as follows:

f(x,y) = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} C(u)\,C(v)\,F(u,v) \cos\frac{(2x+1)u\pi}{16} \cos\frac{(2y+1)v\pi}{16}

where C(u), C(v) = 1/\sqrt{2} for u, v = 0, and C(u), C(v) = 1 otherwise.
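Putting the two decoding steps together, here is a minimal sketch (NumPy/SciPy assumed; Q is whatever quantization table the encoder used, e.g. the Annex K table shown earlier) that dequantizes the coefficients, applies the inverse 2-D DCT, and undoes the level shift:

```python
import numpy as np
from scipy.fft import idctn

def decode_block(Fq, Q):
    F = Fq * Q                                  # dequantization: F'(u,v) = F^Q(u,v) * Q(u,v)
    samples = idctn(F, norm='ortho') + 128      # IDCT, then shift back to the unsigned range
    return np.clip(np.round(samples), 0, 255).astype(np.uint8)
```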

PERFORMANCE EVALUATION
The performance of an image compression technique must be evaluated considering three different aspects:
- Compression efficiency (compression ratio/factor, bit rate);
- Image quality (distortion measure);
- Computational cost and transmission time.
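As a sketch of two of these measures (NumPy assumed; PSNR is used here as a common, but not the only, distortion measure):

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def bit_rate(compressed_bits, num_pixels):
    return compressed_bits / num_pixels            # bits per pixel

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# E.g. for the slide's example: 2,097,152 bits compressed to 77,673 bits
print(round(compression_ratio(2_097_152, 77_673)))   # ~27
```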

CONCLUSION:

Compression is done in order to:
- Reduce the consumption of expensive resources such as space on the hard disk.
- Reduce the consumption of transmission bandwidth.
- Serve its use in digital cameras and medical imagery.


ANY QUERIES?
