
Effects of Lossless and Lossy Image Compression and Decompression on Archival Image Quality in a Bone Radiograph and an Abdominal

CT Scan
by

Michael Tobin, M.D., Ph.D.


ABSTRACT
The purpose of this study was to investigate the effects of lossless and lossy compression on two types of archived medical images for which interpretation depends on (1) high spatial resolution, e.g., a bone radiograph, and (2) high contrast resolution, e.g., an abdominal computed tomography (CT) study. The author found that image quality was preserved with lossless compression and relatively low levels of lossy compression (~2.5:1), but that at higher levels of lossy compression, visible image degradation resulted, sooner for wavelet than for JPEG. Because the level of compression that preserves clinically acceptable image quality may depend on the modality, the anatomy, and the pathology, the author recommends lossless wavelet or JPEG compression algorithms for medical image archiving.

KEY WORDS: image compression, lossless, lossy, wavelet, image quality, medical imaging, image archiving

INTRODUCTION
Radiology is undergoing a profound transition from interpretation of images displayed on film to reading images on high-resolution computer monitors. As this new era emerges, radiologists are also starting to see traditional film delivery replaced by electronic transfer of digital files, and film storage rooms replaced by archives of computer files. The medical enterprise depends on a system that makes diagnostic images available for radiologic interpretation, that transmits images to physicians throughout the system, and that efficiently stores images pending retrieval for future medical or legal purposes. Computerized medical imaging generates large, data-rich electronic files. To speed electronic transfer and minimize computer storage space, medical images often undergo compression into smaller digital files. The underlying assumption, that computer-based medical imaging remains sufficiently faithful to film display to maintain diagnostic accuracy, raises some important questions. Does compression of digitally archived images affect image quality? Does the process introduce artifacts or result in lost image detail? Do the effects of compression differ among various types of radiology images?

The level of diagnostic detail needed for clinical interpretation of medical images varies according to modality. In general, nuclear medicine scans require less detail than computed tomography (CT) or magnetic resonance (MR) imaging. Because interpretation of mammography and radiography depends on high spatial resolution, these images demand more detail than CT or MR, which need high contrast resolution for diagnostic interpretation. The same compression process may therefore affect images that require high spatial resolution differently than images that depend on high contrast resolution. To document the effects of compression on diagnostic detail, the author tested two types of radiologic images with various compression algorithms.

MATERIALS AND METHODS


The author selected two radiology images to study under various compression algorithms: (1) a bone radiograph of the hand of a patient with scleroderma (Fig 1); and (2) an abdominal CT scan of a patient with adult-onset polycystic kidney disease (Fig 2). The bone digital image was created by scanning the hand radiograph at 300 dots per inch (dpi) with an Epson 636 scanner equipped with a transparency adapter and connected to an Amiga 1200 computer. The resulting greyscale image was 2136 x 3196 pixels in size and 8 bits (256 grey levels) deep. The CT digital image was created by scanning the CT film at 320 dpi using the Epson scanner/Amiga computer system described above. The resulting greyscale image was 1163 x 1038 pixels in size and 8 bits (256 grey levels) deep. Five board-certified radiologists at the author's institution assessed the degree of image degradation resulting from various types and amounts of compression associated with several different digital image file formats. A qualitative, rather than a quantitative, approach was chosen because radiologists typically evaluate images qualitatively in their day-to-day practice and also because common metrics used for comparing images pre- and post-compression, e.g., mean pixel error, root mean square error, maximum error, etc., may not correlate well with visual assessment of image quality. (1)

File Formats
There are more than 100 image file formats for computerized digital images, which can be divided into those which, in the process of storage, compress the original image and those which do not. Image formats using compression can be further divided into (1) those that maintain full fidelity of the original digital data during the process of compression ("lossless" compression); and (2) those which do not ("lossy" compression). The degree of "lossiness" is usually under operator control.

Non-compressed Format
The TIFF (Tagged Image File Format) format, originally developed by the Aldus Corporation, is well suited for graphics and publishing applications and is widely accepted in business and industry. Files are large, but stored data are not subject to compression/decompression noise or other artifacts. TIFF files accommodate 24-bit images. Non-compressed TIFF images were taken as the standard against which other images were compared.

Lossless Compression Formats

TIFF


As a user option, TIFF files can be losslessly compressed at the time of image storage using lossless compression algorithms such as RLE (run-length encoding) or LZW (Lempel-Ziv-Welch). Data integrity is maintained and the resulting file size is smaller.
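The principle of run-length encoding can be sketched in a few lines of Python. This toy (count, value) scheme is for illustration only; it is not TIFF's actual PackBits variant, and the sample pixel row is invented:

```python
def rle_encode(data: bytes) -> bytes:
    """Encode as (count, value) byte pairs; runs are capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(encoded: bytes) -> bytes:
    """Reverse the encoding: expand each (count, value) pair."""
    out = bytearray()
    for count, value in zip(encoded[::2], encoded[1::2]):
        out += bytes([value]) * count
    return bytes(out)

# A row of mostly-uniform background pixels compresses well and round-trips exactly.
row = bytes([0] * 200 + [180, 181, 182] + [0] * 50)
packed = rle_encode(row)
assert rle_decode(packed) == row   # lossless: the original is restored bit for bit
assert len(packed) < len(row)      # long runs of identical pixels shrink substantially
```

The round-trip assertion is the defining property of lossless compression: decompression restores every pixel value exactly.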

GIF
GIF (Graphics Interchange Format) was developed by CompuServe as a cross-platform image standard for its users on the Internet. Lossless compression is achieved with the LZW (Lempel-Ziv-Welch) algorithm, which is patented by Unisys. GIF files are limited to 256 different colors or shades of grey. GIF files do not store actual greyscale values in the image matrix. Instead, single numbers are used, each of which corresponds to a specific greyscale value in the image. The one-to-one correspondence between a given index number and its greyscale value is kept in a look-up table (LUT), or palette, which is stored with the image. It is the matrix of index numbers that is compressed with the LZW algorithm. (2)
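The index-plus-palette arrangement can be illustrated with a small, hypothetical example in Python; the palette and pixel values below are invented for illustration:

```python
# A palette-indexed image: the matrix holds small index numbers, and a
# look-up table (palette) maps each index to an actual greyscale value.
# In GIF the palette may hold up to 256 entries, and it is the matrix of
# index numbers (not the grey values) that LZW compresses.
palette = [0, 64, 128, 255]  # index -> grey value

# A 3x4 image stored as indices into the palette.
index_matrix = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [3, 3, 3, 3],
]

# Reconstructing the displayable image is a simple table lookup per pixel.
grey_image = [[palette[i] for i in row] for row in index_matrix]

assert grey_image[2] == [255, 255, 255, 255]  # bottom row: all white
assert grey_image[0][:2] == [0, 0]            # top-left corner: black
```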

PNG
The PNG (Portable Network Graphics) format is intended as a replacement for the GIF file format, whose patent holder, Unisys, requests royalty payment from commercial developers for use of the LZW algorithm. PNG uses the lossless "Deflate" algorithm, which is based on LZ77, the predecessor to LZW. PNG's use of the Deflate algorithm can be made more effective by first "filtering" the image so that repeating patterns can be recognized vertically as well as horizontally. Although the details need not concern the radiologist, they do help explain why PNG usually achieves higher compression than GIF. The Deflate algorithm used by PNG is also used by the well-known pkZIP compression program, which itself can be used to compress files losslessly, including image files. The PNG format can accommodate greyscale images up to 16 bits deep (65,536 grey levels), a decided benefit when storing medical images of more than 256 grey levels. PNG and GIF formats are particularly well suited for Internet graphics such as logos, where uniformity of color leads to significant redundancy of data and high degrees of compression. PNG has additional advantages that are beyond the scope of this paper, although it has not had the widespread acceptance initially predicted.
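The effect of filtering can be demonstrated with Python's standard zlib module, which implements Deflate. The sketch below applies a PNG-style "Up" filter (each byte stored as its difference from the byte directly above) to a synthetic smooth gradient; the gradient and the sizes it produces are illustrative assumptions, not measurements from this study:

```python
import zlib

# A smooth vertical gradient: each row differs only slightly from the row
# above it, as in continuous-tone images.
width, height = 64, 64
rows = [bytes((x + y) % 256 for x in range(width)) for y in range(height)]
raw = b"".join(rows)

# PNG-style "Up" filter: store each byte as its difference (mod 256) from
# the byte directly above, turning vertical smoothness into long runs of
# identical values that Deflate compresses very well.
filtered_rows = [rows[0]] + [
    bytes((cur - up) % 256 for cur, up in zip(rows[y], rows[y - 1]))
    for y in range(1, height)
]
filtered = b"".join(filtered_rows)

plain_size = len(zlib.compress(raw, 9))
filtered_size = len(zlib.compress(filtered, 9))
assert filtered_size < plain_size  # filtering first makes Deflate more effective
```

Because the filter is perfectly reversible (add each stored difference back to the row above), the scheme remains lossless.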

BMP
BMP (bitmapped picture) is the Microsoft Windows(tm) device-independent bitmap standard. Users of this format can depend on their images being displayed on any Windows device. BMP supports 24-bit images. Lossless compression is possible using BMP's run-length encoding (RLE) algorithm, but the resulting image file supports only 256 levels of grey. Again, this limitation becomes important for medical images with more than 256 greyscale levels.

Lossy File Formats

JPEG
Developed by the Joint Photographic Experts Group, JPEG is a compression scheme, with JFIF as the associated file format. At present, JPEG compression is based on the DCT (Discrete Cosine Transform) approach. JPEG compression is lossy, although it can be made to operate in lossless mode. With JPEG compression, the degree of lossiness is under operator control. Because the current implementation of JPEG operates on 8 x 8 pixel segments, images can appear blocky at high compression ratios.
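The block-transform idea can be sketched in pure Python. The toy example below applies an orthonormal 2-D DCT to a single 8 x 8 block and discards all but the lowest-frequency coefficients, a crude stand-in for JPEG's quantization step (real JPEG also weights coefficients by a quantization table and entropy-codes the result):

```python
import math

N = 8  # JPEG operates on 8x8 pixel blocks

def dct_1d(v):
    """Orthonormal DCT-II of a length-8 sequence."""
    out = []
    for k in range(N):
        s = sum(v[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def idct_1d(c):
    """Inverse of the orthonormal DCT-II."""
    out = []
    for n in range(N):
        s = 0.0
        for k in range(N):
            scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
            s += scale * c[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
        out.append(s)
    return out

def dct_2d(block):
    """Separable 2-D DCT: transform rows, then columns."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d([rows[i][j] for i in range(N)]) for j in range(N)]
    return [[cols[j][i] for j in range(N)] for i in range(N)]

def idct_2d(coeffs):
    """Separable 2-D inverse DCT: invert columns, then rows."""
    cols = [idct_1d([coeffs[i][j] for i in range(N)]) for j in range(N)]
    return [idct_1d([cols[j][i] for j in range(N)]) for i in range(N)]

# A smooth 8x8 greyscale block (horizontal ramp).
block = [[16 * x for x in range(N)] for _ in range(N)]
coeffs = dct_2d(block)

# "Lossy" step: zero all but the lowest-frequency coefficients, mimicking
# coarse quantization at a low JPEG quality factor.
kept = [[coeffs[u][v] if u + v <= 2 else 0.0 for v in range(N)] for u in range(N)]
approx = idct_2d(kept)

# Reconstruction is close to, but not identical with, the original block.
err = max(abs(approx[i][j] - block[i][j]) for i in range(N) for j in range(N))
assert 0 < err < 20  # small per-pixel error despite discarding most coefficients
```

Because each 8 x 8 block is quantized independently, neighboring blocks can disagree at their shared border, which is the origin of the "block" artifact seen at high compression ratios.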

Wavelet
The wavelet compression algorithm has features similar to, yet different from, the Fourier transform. (3) Although often used for lossy compression, the wavelet algorithm can be operated in a lossless mode. An important point is that wavelet compression operates on an entire image at once, thus avoiding the "blockiness" associated with JPEG methodology. Compression using wavelets may offer advantages over current compression techniques and is anticipated as the basis of JPEG 2000. (4)
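The sub-band idea behind wavelet compression can be sketched with the simplest wavelet, the Haar transform. Production codecs use longer biorthogonal wavelets over several decomposition levels, but the principle is the same; the pixel row below is invented for illustration:

```python
# One level of the Haar wavelet transform: split a signal into a
# low-frequency "average" sub-band and a high-frequency "detail" sub-band.
def haar_forward(signal):
    avg = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    det = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avg, det

def haar_inverse(avg, det):
    out = []
    for s, d in zip(avg, det):
        out += [s + d, s - d]
    return out

row = [52.0, 54.0, 50.0, 48.0, 120.0, 124.0, 118.0, 116.0]
avg, det = haar_forward(row)

# Lossless mode: keeping both sub-bands restores the signal exactly.
assert haar_inverse(avg, det) == row

# Lossy mode: discarding small detail coefficients smooths the signal --
# the source of the "over-smoothed" look at high wavelet compression.
smoothed = haar_inverse(avg, [0.0] * len(det))
assert smoothed != row
assert max(abs(a - b) for a, b in zip(smoothed, row)) <= 2.0
```

Because the transform covers the whole signal rather than fixed 8 x 8 tiles, over-compression shows up as blur rather than as block borders.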

RESULTS
Tables 1 and 2 summarize results of various compression techniques on the hand radiograph and the abdominal CT image.

Hand Radiograph (AP)

TIFF


When stored as an uncompressed 8-bit deep TIFF image, the hand radiograph required 6.82 MB (megabytes) of space on a Syquest Syjet (removable) hard drive. However, when compressed with the lossless LZW algorithm, the same TIFF image required only 4.00 MB, a 1.7:1 compression ratio, with no loss in quality.

BMP
As an uncompressed BMP file, the hand radiograph in Figure 1 required 6.82 MB, the same size -- and quality -- as the uncompressed TIFF file. When the BMP image was losslessly compressed with a run-length encoding (RLE) algorithm, the resulting file size was 5.69 MB, a 1.2:1 compression ratio, while maintaining image fidelity.

GIF
A GIF file of the hand radiograph was 4.46 MB, a 1.5:1 compression ratio relative to the uncompressed TIFF image. Lossless compression was achieved with the LZW algorithm at the time of GIF file creation. As with other formats using lossless compression, digital image quality was fully maintained.

PNG

Using the "Deflate" algorithm, the PNG image format compressed the original, uncompressed TIFF file to 3.69 MB, a 1.8:1 compression ratio, a slight improvement over GIF, with no loss of image quality.

JPEG
At Quality Factor (QF) =100 (least compressed; best image), the hand radiograph file size was 2.82 MB (2.4:1 compression ratio), with image quality visually indistinguishable from the original. At QF=3.2, the digital file size was 0.36 MB (a 19:1 compression ratio). Image quality was still extremely good, with loss of detail evident visually only on magnified images. At QF=1.0, the file size was only 0.17 MB (40:1 compression ratio), but the "block" artifact characteristic of JPEG compression was readily visible. Finally, at QF=0.1, the file size was reduced to 0.08 MB (85:1 compression ratio) but the image became unreadable. (Fig 3)

Wavelet (lwf and others)


Using the evaluation software provided by LuraTech, Inc., the highest quality, lowest compression (Q=100) resulted in a 0.98 MB file (a 7:1 compression ratio) and excellent image quality. However, changing the quality to Q=99 (0.58 MB, compression ratio of 12:1) or Q=98 (0.37 MB, compression ratio 18:1) led to readily visible loss of trabecular detail. (Fig 4) Similar results were obtained with the freely downloadable software from LizardTech, Inc.

Abdominal CT (Single Axial Slice)

TIFF


As an uncompressed TIFF file, the CT image in Figure 2 required 1.21 MB. However, when compressed with the lossless LZW algorithm, the same TIFF image required only 0.90 MB, a 1.3:1 compression ratio. Image quality was maintained.

BMP
The Windows(tm) BMP (uncompressed) CT digital file was the same size (1.21 MB) and quality as the TIFF digital image. With RLE (lossless) compression, the image file was actually larger, 1.41 MB, a compression ratio of 0.86:1 (about a 16% increase in size), but without loss in image quality.

GIF
The GIF CT digital image file, automatically compressed by the LZW algorithm, was 1.02 MB. Compared to the uncompressed TIFF image, this is a 1.2:1 compression ratio, achieved without image degradation.

PNG

The PNG CT digital image file was 0.79 MB, a 1.5:1 compression ratio compared to the uncompressed TIFF image. Image quality was maintained, and the file size was smaller than that of the corresponding GIF image.

JPEG
At QF=100 (least compressed; highest quality), the CT digital image file size was 0.77 MB (1.6:1 compression ratio), with image quality indistinguishable from the original. Using QF=3.2, the CT digital image file was only 0.15 MB -- a compression ratio of 8:1 -- with excellent image quality and no block artifact. Using QF=1.0, the CT digital image file size decreased to 0.06 MB (a compression ratio of 22:1). Digital image quality was very good, only slightly inferior to that produced by QF=3.2, with subtle artifact visible at 2x magnification, primarily in the more continuous-tone areas and in the alphanumerics. Using QF=0.1, the CT digital image file was further reduced to 0.02 MB (a compression ratio of 67:1), but the image was unreadable. (Fig 5)

Wavelet
Using the evaluation software provided by LuraTech, Inc., the highest quality, lowest compression (Q=100) led to a file size of 0.76 MB -- a compression ratio of 1.6:1 -- with excellent image quality. At Q=89, the file size was 0.12 MB, a 10:1 compression ratio. Although image quality was still very good, there was subtle smoothing of liver parenchyma. At Q=85, the file size was 0.07 MB, a 28:1 compression ratio. Artifactual smoothing of liver parenchyma was more obvious. At Q=80, the file size was reduced to 0.03 MB, a 45:1 compression ratio. The image appeared out of focus, although major pathology remained evident (Fig 6). Regardless of whether the algorithm used was from LuraTech, LizardTech, or SPIHT, wavelet compression ratios of less than 10:1 gave excellent images, while smoothing artifacts were present at higher ratios.

DISCUSSION
Images compressed losslessly occupy less space than the originals, but space-saving gains are modest, with compression ratios in the 2.5:1 range. Indeed, as observed with the RLE (run-length encoded) BMP file of the CT image, "compression" can actually increase the file size of complex images. No image data are lost during lossless compression; decompression restores the original image without loss of fidelity. Images stored in the GIF and PNG formats are compressed automatically, whereas for TIFF and BMP files, the user decides whether or not to compress the file. Lossy compression achieves higher compression ratios than lossless, but at the expense of image quality, with the degree of lossiness under user control. The artifact introduced during lossy compression depends on the compression scheme and how it is implemented. (5)
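The file-growth effect is easy to reproduce with any lossless compressor, for example Python's standard zlib module; the byte strings here are synthetic stand-ins for image data, not pixels from this study:

```python
import random
import zlib

# Highly repetitive data (like a uniform radiograph background) shrinks a lot.
repetitive = bytes([0]) * 4096
assert len(zlib.compress(repetitive)) < len(repetitive) // 100

# Incompressible data (like complex, noise-rich pixels) can come back *larger*:
# the compressed stream must still carry its own headers and bookkeeping,
# the same effect seen with RLE on the BMP version of the CT image.
random.seed(42)  # fixed seed so the demonstration is reproducible
noisy = bytes(random.randrange(256) for _ in range(4096))
assert len(zlib.compress(noisy)) > len(noisy)
```

No general-purpose lossless compressor can shrink every input; data with little redundancy always costs at least a small container overhead.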

JPEG has been used on the Internet for many years. (2, 6) Because the standard implementation of JPEG operates on 8 x 8 pixel segments, images can appear blocky at high compression ratios, although in our test images we did not observe this artifact at compression ratios of about 20:1. If the JPEG DCT algorithm were applied to an entire image, rather than to 8 x 8 pixel subunits of it, even higher levels of acceptable compression might be achieved. Toney et al. found that DCT compression ratios of 10:1 were suitable for detection of subtle fractures in the pediatric population. (7) Sayre et al., however, showed that there was no loss in diagnostic accuracy in the detection of subperiosteal resorption at compression ratios of 20:1 when full-frame DCT was used. (8) Wavelet transforms are explained in a highly readable book (3), in a book chapter (9), and in numerous papers. (10-13) When used to achieve high compression, wavelets can cause images to appear smooth, with a wavy appearance sometimes described as looking similar to grains of rice. Because wavelets operate on an entire image at once, they avoid the "blockiness" associated with JPEG methodology. Compression with wavelets may offer advantages over current compression techniques, such as the ability to progressively zoom in on an image. This scalability allows a single wavelet file to produce images with different resolutions for different needs. In the popular press, there is excitement about wavelet compression, with one author enthusing, "Save space! Shrink image files with little or no quality lost!" (14) However, the results of this author's study led to a different conclusion. It was JPEG with DCT, not wavelet, that was able to compress the hand radiograph to high compression ratios without visual loss of quality.
(Fig 7) Our study showed that, when applying the LuraTech wavelet implementation to the hand radiograph, compression ratios greater than 7:1 led to over-smoothed images with loss of trabecular detail. This rapid fall-off in quality, at even modest wavelet compression ratios, was also reported in studies of ultrasound images by Persons et al. (5, 15) There may be good theoretical reasons for their findings. As noted by Erickson et al., the trabecular pattern in bone radiographs, and speckle in ultrasound examinations, are particularly sensitive to blurring from compression because they, like random noise, are characterized by numerous high-frequency coefficients, which wavelet compression removes. (11) Indeed, slightly compressed images, precisely because they tend to have less noise, are sometimes preferred by observers. (1, 16) Because wavelets are a class of functions, it is possible for one implementation to outperform another. (17) However, regardless of whether this author used the wavelet implementation from LuraTech, SPIHT (Set Partitioning in Hierarchical Trees) (18), or LizardTech, only modest image compression could be achieved before loss of detail became visually evident. The consumer version of LizardTech's image compression program is currently limited to files no greater than 1600 x 2100 pixels, smaller than the digitized hand radiograph used in our study, making it less useful than LuraTech's software, which can process images up to 4096 x 4096 pixels.

Comparing Images Compressed with Wavelets vs. JPEG with DCT


Our study found that JPEG with DCT achieved higher compression than wavelets, with less visually apparent loss of detail. (Fig 6) The reasons for this result are not entirely clear. However, Persons et al. suggest that, although wavelet transforms may be more mathematically correct, JPEG with DCT compression is optimized for visual perception and, therefore, may be more visually correct. (10)

Interestingly, it was easier to compress the hand radiograph than the CT slice, regardless of whether wavelet at its least-lossy setting (compression ratio 7:1 vs. 1.6:1) or JPEG at QF=100 (compression ratio 2.4:1 vs. 1.6:1) was used. Erickson obtained similar results when comparing the higher wavelet compressibility of chest radiographs with CT and MR images. (11) Goldberg et al. (19), citing Gillespy et al. (20) and Chan et al. (21), suggest that lower compressibility relates to smaller matrix size and less pixel-to-pixel correlation. These authors would therefore predict that the CT slice, being smaller than the hand radiograph (1.21 MB vs. 6.82 MB), should be less compressible, as observed in this study. A second explanation, specific to the wavelet transform, which filters the original image into high- and low-frequency sub-bands, is that the higher frequencies, which contain progressively more detail, are usually more subject to compression than the low-frequency sub-band, which contains much of the information needed to reconstruct the image. Persons et al. found that the most compressible images have the greatest percentage of their information (energy) in the lowest-frequency sub-band. (22) Chest radiographs, for example, which have 99.69% of their energy in the lowest-frequency sub-band, are the most compressible, whereas CT and MR images, which have, respectively, only 92.12% and 78.03% in the same sub-band, are less compressible. As pointed out by Persons et al., this analysis ignores issues such as the differing levels of compressibility of various structures within an image. Nonetheless, the findings discussed in this paper may account for the different levels of acceptance of, and enthusiasm for, wavelet compression found in the literature, depending on the imaging modality studied and the pathology to be detected. Although studies in the literature differ in the way images are obtained (e.g., analog vs.
digital), the imaging modality evaluated (e.g., chest radiography vs. CT), the medium used for interpretation (e.g., film vs. computer monitor), the specific pathology sought, the endpoint of the study (diagnostic accuracy vs. loss of image quality), etc., there does appear to be general agreement with our results. For example, Slone et al., studying compressed images on a workstation, found not only that JPEG and wavelet compression ratios needed to be in the 8:1 to 10:1 range to be visually lossless, but also reached a conclusion similar to this author's: "Despite expectations for improved performance with wavelet-based algorithms, we found that the JPEG baseline algorithm resulted in performance that was as good as, if not better than performance with the WTCQ at low compression ratios." (23) (WTCQ = wavelet-based trellis-coded quantization)
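The energy-fraction measure used by Persons et al. can be sketched with a one-level orthonormal Haar transform, which preserves total energy exactly so the low-band share is well defined. The signals below are synthetic stand-ins, not data from the cited studies:

```python
import math
import random

def lowpass_energy_fraction(signal):
    """Share of total signal energy in the Haar low-frequency sub-band.
    The orthonormal Haar transform preserves energy, so low + high = total."""
    low = [(a + b) / math.sqrt(2) for a, b in zip(signal[0::2], signal[1::2])]
    total = sum(x * x for x in signal)
    return sum(x * x for x in low) / total

# A smooth, slowly varying signal (a stand-in for a chest radiograph's broad
# density gradients) keeps nearly all of its energy in the low band.
smooth = [100 + 10 * math.sin(i / 40) for i in range(512)]

# The same signal with strong pixel-to-pixel texture (a stand-in for bone
# trabeculae or ultrasound speckle) shifts energy into the high band.
random.seed(0)
textured = [x + random.uniform(-30, 30) for x in smooth]

assert lowpass_energy_fraction(smooth) > 0.999
assert lowpass_energy_fraction(textured) < lowpass_energy_fraction(smooth)
```

On this measure, the textured signal is the "less compressible" one: its detail coefficients carry real energy, so discarding them (as lossy wavelet compression does) visibly blurs the structure.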

CONCLUSION
For archival purposes, lossless wavelet compression of medical images may be acceptable, but high levels of lossy compression are not. This is true both for images that depend on spatial resolution for their interpretation (e.g., radiographs) and for those that depend on contrast resolution (e.g., CT scans). Specific applications, such as teaching or Internet publishing, may allow quite high compression with adequate results. Indeed, when the author compressed the original CT image 40:1 using the LuraTech wavelet algorithm, and then scaled it to 50% of its original size, the resulting image still demonstrated the important clinical findings. Although this may be suitable for educational purposes (Fig 8), smoothing artifact was still present, and one would hesitate to use it for diagnostic purposes. Therefore, when compressing medical images, one must consider not only the uses that were intended but also those that might be made of the images. In response to studies that show that compressed images retain sufficient detail to diagnose a specific pathology, this author would argue that diagnostic quality must be maintained for every possible pathology, now and in the future, which is a much more stringent requirement. This study also suggests that, for greyscale medical images, JPEG compression with DCT may be more maligned than warranted. Nonetheless, this author recommends lossless or minimal compression unless the user conducts the extensive testing required for lossy compression. Blume asks the question, "Are you afraid of data compression?" and concludes that you should not fear it. (24) Based on this study, however, the author is more inclined to agree with Kivijarvi et al. when they state: "There is always the possibility that a vague detail might give a reason to suspect some critical changes in a patient's condition. For this reason, the lossy techniques, which tend to give high compression ratios, such as 1:10 and 1:30, are not acceptable in medical image compression." (25) Readers who would like to participate in an active discussion of image compression are referred to Internet newsgroups, such as comp.compression. Also available are basic introductions to image file formats (2, 26) and image compression. (27, 28)

ACKNOWLEDGEMENT
The author would like to thank Linda E. Ketchum for careful review of this manuscript and helpful advice. September, 2001

Table 1. Hand Radiograph: Results of Compression


File Format    Size (MB)   Mode   Compression Ratio   Perceived Quality
TIFF           6.82        N      1.0:1               *****
TIFF LZW       4.00        LL     1.7:1               *****
BMP            6.82        N      1.0:1               *****
BMP RLE        5.69        LL     1.2:1               *****
GIF            4.46        LL     1.5:1               *****
PNG            3.69        LL     1.8:1               *****
JPEG 100.0     2.82        L      2.4:1               *****
JPEG 3.2       0.36        L      19:1                ****
JPEG 1.0       0.17        L      40:1                ***
JPEG 0.1       0.08        L      85:1                *
WAVELET 100    0.98        L      7:1                 *****
WAVELET 99     0.58        L      12:1                ****
WAVELET 98     0.37        L      18:1                ***

Perceived Quality: ***** = best. Mode: N = no compression; L = lossy compression; LL = lossless compression.

Table 2. CT of the Abdomen: Results of Compression


File Format    Size (MB)   Mode   Compression Ratio   Perceived Quality
TIFF           1.21        N      1.0:1               *****
TIFF LZW       0.90        LL     1.3:1               *****
BMP            1.21        N      1.0:1               *****
BMP RLE        1.41        LL     0.86:1              *****
GIF            1.02        LL     1.2:1               *****
PNG            0.79        LL     1.5:1               *****
JPEG 100.0     0.77        L      1.6:1               *****
JPEG 3.2       0.15        L      8:1                 *****
JPEG 1.0       0.06        L      22:1                ***
JPEG 0.1       0.02        L      67:1                *
WAVELET 100    0.76        L      1.6:1               *****
WAVELET 89     0.12        L      10:1                ****
WAVELET 85     0.07        L      28:1                ***
WAVELET 80     0.03        L      45:1                ***

Perceived Quality: ***** = best. Mode: N = no compression; L = lossy compression; LL = lossless compression.

REFERENCES
1. Erickson BJ, Manduca A, Persons KR, et al. Evaluation of irreversible compression of digitized posterior-anterior chest radiographs. J Digit Imaging 1997; 10(3):97-102.
2. Webster T. Web Designer's Guide to Graphics: PNG, GIF & JPEG. Indianapolis, IN: Hayden Books, 1997.
3. Hubbard BB. The World According to Wavelets: The Story of a Mathematical Technique in the Making, 2nd ed. Natick, MA: A.K. Peters, Ltd., 1998.
4. Johnson RC. JPEG 2000 wavelet spec approved. EETimes.com, 1999 Dec 12. http://www.eetimes.com/story/industry/systems_and_software_news/OEG19991228S0028
5. Persons KR, Palisson PM, Manduca A, et al. Ultrasound grayscale image compression with JPEG and wavelet techniques. J Digit Imaging 2000; 13(1):25-32.
6. JPEG FAQ. http://www.faqs.org/faqs/jpeg-faq/
7. Toney MO, Dominguez R, Dao HN, Simmons G. The effect of lossy discrete cosine transform compression on subtle fractures. J Digit Imaging 1997; 10(4):169-173.
8. Sayre JW, Ho BK, Boechat MI, Hall TR, Huang HK. Subperiosteal resorption: effect of full-frame image compression of hand radiographs on diagnostic accuracy. Radiology 1992 Nov; 185(2):599-603.
9. Goldberg M. Image compression. In: Siegel EL, Kolodner RM, eds. Filmless Radiology. New York: Springer Verlag, 1999: Chap. 15, 295-310.
10. Reiter E. Wavelet compression of medical imagery. Telemed J 1996; 2(2):131-137.
11. Erickson BJ, Manduca A, Palisson P, et al. Wavelet compression of medical images. Radiology 1998 Mar; 206(3):599-607.
12. Schomer DF, Elekes AA, Huffman JC, et al. Introduction to wavelet-based compression of medical images. Radiographics 1998; 18(2):469-481.
13. Strintzis MG. A review of compression methods for medical images in PACS. Int J Med Inf 1998; 52(1-3):159-165.
14. Sawalich W. LuraTech LuraWave. PC Photo 2000; July/August:80.
15. Persons KR, Hangiandreou NJ, Charboneau JW, et al. Clinical evaluation of irreversible compression of ultrasound images using the JPEG algorithm at approximately 9:1. J Digit Imaging 2000; 13(2 Suppl 1):191-192.
16. Smith I, Roszkowski A, Slaughter R, Sterling D. Acceptable levels of digital image compression in chest radiology. Australas Radiol 2000; 44(1):32-35.
17. Mitra S, Yang S, Kustov V. Wavelet-based vector quantization for high-fidelity compression and fast transmission of medical images. J Digit Imaging 1998; 11(4 Suppl 2):24-30.
18. Center for Image Processing Research (CIPR), Rensselaer Polytechnic Institute (RPI), SPIHT.
19. Goldberg MA, Gazelle GS, Boland GW. Focal hepatic lesions: effect of three-dimensional wavelet compression on detection at CT. Radiology 1997 Jan; 202(1):159-165.
20. Gillespy T 3rd, Rowberg AH. Displaying radiologic images on personal computers: image storage and compression: Part 1. J Digit Imaging 1993 Nov; 6(4):197-204.
21. Chan KK, Lou SL, Huang HK. Full-frame transform compression of CT and MR images. Radiology 1989 Jun; 171(3):847-851.
22. Persons KR, Palisson PM, Manduca A, et al. An analytical look at the effects of image compression on medical images. J Digit Imaging 1997; 10(3 Suppl 1):60-65.
23. Slone RP, Foos DH, Whiting BR, et al. Assessment of visually lossless irreversible image compression: comparison of three methods by using an image-comparison workstation. Radiology 2000; 215(2):543-553.
24. Blume H. DICOM (Digital Imaging and Communications in Medicine) state of the nation. Are you afraid of image compression? Adm Radiol J 1996; 15(11):36-40.
25. Kivijarvi J, Ojala T, Kaukoranta T, Kuba A, Nyul L, Nevalainen O. A comparison of lossless compression methods for medical images. Comput Med Imaging Graph 1998; 22(4):323-339.
26. Celeste A. The future of web design, Parts I-III. designer.com
27. Schilling RB, ed. Understanding Compression. Great Falls, VA: Society for Computer Applications in Radiology, 1997.
28. Erickson BJ. Irreversible Compression of Medical Images. Great Falls, VA: Society for Computer Applications in Radiology, 2000.

[SUMMARY]
The author investigated the effects of lossless and lossy compression on two types of archived medical images for which interpretation depends on (1) high spatial resolution, e.g., a bone radiograph, and (2) high contrast resolution, e.g., an abdominal computed tomography (CT) study. A typical bone radiograph (anterior-posterior [AP] view of a hand with scleroderma) and abdominal CT study (7-mm contrast-enhanced transaxial slice from a patient with adult-onset polycystic kidney disease) were compressed and decompressed using commonly available file formats and lossless and lossy algorithms. Different levels of compression were used for lossy algorithms. Resulting image quality was assessed qualitatively by five board-certified radiologists. We found that image quality was preserved with lossless compression (e.g., TIFF, GIF, BMP), with JPEG and wavelet compression in their lossless modes, and with relatively low levels of lossy compression (~2.5:1). Higher levels of lossy JPEG and wavelet compression resulted in visible image degradation, sooner for wavelet than for JPEG. Lossy wavelet compression of the bone radiograph led to loss of trabecular detail, although pathology, such as calcification and erosion, remained visible. Overall, the abdominal CT image was more resistant to compression than the bone radiograph. It is possible that no one lossy compression ratio with currently available algorithms will yield decompressed images of diagnostic quality for all types of radiology images. The level of compression that preserves clinically acceptable image quality may depend on the modality, the anatomy, and the pathology. At present, the author recommends lossless wavelet or JPEG compression algorithms for medical image archiving.


Specific Formats

MPEG File Format Summary


Also Known As: MPG, MPEG-1, MPEG-2
Type Colors Compression Audio/video data storage Up to 24-bits (4:2:0 YCbCr color space) DCT and block-based scheme with motion compensation

Maximum Image 4095x4095x30 frames/second Size Multiple Images Yes (multiple program multiplexing) Per File Numerical Format Originator Platform Supporting Applications See Also NA Motion Picture Experts Group (MPEG) of the International Standards Organization (ISO) All Xing Technologies MPEG player, others JPEG File Interchange Format, Intel DVI

Usage Stores an MPEG-encoded data stream on a digital storage medium. MPEG is used to encode audio, video, text, and graphical data within a single, synchronized data stream. Comments MPEG-1 is a finalized standard in wide use. MPEG-2 is still in the development phase and continues to be revised for a wider base of applications. Currently, there are few stable products available for making practical use of the MPEG standard, but this is changing. Vendor specifications are available for this format. Code fragments are available for this format. Sample images are available for this format. MPEG (pronounced "em-peg") is an acronym for the Motion Picture Experts Group, a working group of the International Standards Organization (ISO) that is responsible for creating standards for digital video and audio compression. Contents: File Organization File Details For Further Information The MPEG specification is a specification for an encoded data stream which contains compressed audio and video information. MPEG was designed specifically to store sound and motion-video data on standard audio Compact Discs (CD) and Digital Audio Tapes (DAT). The main application for MPEG is the storage of audio and video data on CD-ROMs for use in multimedia systems, such as those found on the Apple Macintosh platform and in the Microsoft Windows environment. Such systems require the ability to store and play back high-quality audio and video material for commercial, educational, and recreational applications. The new MPEG-2 standard allows the transmission of MPEG data across television and cable network systems. On most systems, you use special hardware to capture MPEG data from a live video source at a real-time sampling rate of 30 frames per second. Each frame of captured video data is then compressed and stored as an MPEG data stream. 
If an audio source is also being sampled, it too is encoded and multiplexed in with the video stream, along with extra information to synchronize the two streams for playback.

To play back MPEG data, you use either a hardware-assisted or a software-only player. The player reads in the MPEG data stream, decompresses the information, and sends it to the display and audio systems of the computer. Playback speed depends upon how quickly the resources of the computer allow the MPEG data to be read, decompressed, and played; available memory, CPU speed, and disk I/O throughput are all contributing factors. The quality of the MPEG stream is determined during encoding, and there are typically no adjustments available to allow an application to "tweak" the apparent quality of the MPEG output produced during playback.

MPEG is based on the digital television standard (specified in CCIR-601) used in the United States. In its initial form, MPEG is not actually capable of storing CCIR-601 images: the typical resolution of 720x576 requires more bandwidth than the maximum MPEG data rate of 1.86 Mbits/second allows. Standard television images must therefore be decimated by 2:1 into lower-resolution SIF format data (352x240) to be stored. European (PAL and SECAM) and Japanese standards differ in many respects, including the display rate (30 frames/second U.S., 25 frames/second European) and the number of lines per field (240 U.S., 288 European). An MPEG player must therefore be able to recognize the wide variety of variations possible in the encoded video signal itself.

Constrained Parameters Bitstreams (CPB) are a complex aspect of MPEG. CPBs are bitstreams that are limited in terms of picture size, frame rate, and coded bit-rate parameters. These limitations normalize the computational complexity required of both hardware and software, thus guaranteeing a reasonable, nominal subset of MPEG that can be decoded by the widest possible range of applications while still remaining cost-effective. MPEG video bitstreams are limited to 1.86 Mbits/second if they meet the constrained parameters; without them, the MPEG syntax could specify a data rate of more than 100 Mbits/second.
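To see why decimation is necessary, compare raw (uncompressed) bit rates. The sketch below assumes 4:2:0 chroma subsampling (an average of 12 bits per pixel); that figure is an illustrative assumption for the arithmetic, not a number taken from the text above.

```python
# Raw bit rates for full CCIR-601 frames vs. decimated SIF frames,
# assuming 4:2:0 chroma subsampling (~12 bits/pixel on average).

def raw_bitrate_mbits(width, height, fps, bits_per_pixel=12):
    """Uncompressed video bit rate in Mbits/second."""
    return width * height * fps * bits_per_pixel / 1_000_000

ccir601 = raw_bitrate_mbits(720, 576, 25)  # full CCIR-601, European timing
sif = raw_bitrate_mbits(352, 240, 30)      # SIF, U.S. timing

print(f"CCIR-601 raw: {ccir601:.1f} Mbits/s")
print(f"SIF raw:      {sif:.1f} Mbits/s")
# Even the decimated SIF stream must be compressed ~26:1 to reach the
# typical 1.15 Mbits/s video rate mentioned later in this entry.
print(f"compression needed: {sif / 1.15:.0f}:1")
```

Even after 2:1 decimation, the raw stream is an order of magnitude over the constrained-parameter ceiling, which is why the DCT and motion-compensation stages described below are essential.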

File Organization
No actual structured MPEG file format has been defined. Everything required to play back MPEG data is encoded directly in the data stream. Therefore, no header or other type of wrapper is necessary. It is likely that when needed, a multimedia standards committee--perhaps MHEG or the DSM (Digital Storage Medium) MPEG subgroup--will one day define an MPEG file format.

File Details
This section describes the relationship between MPEG, JPEG, and MJPEG, the type of compression used for MPEG files, and the MPEG-2 standard.

Relationship Between MPEG, JPEG, and MJPEG


Some people are confused about the relationship between MPEG and JPEG. The MPEG and JPEG (Joint Photographic Experts Group) committees of the ISO originally started as the same group, but with two different purposes. JPEG focused exclusively on still-image compression, while MPEG focused on the encoding and synchronization of audio and video signals within a single data stream. Although MPEG employs a method of spatial data compression similar to that used by JPEG, they are not the same standard, nor were they designed for the same purpose.

Another acronym you may hear is MJPEG (Motion JPEG). Several companies have come out with an alternative to MPEG--a simpler solution (but not yet a standard) for storing motion video. This solution, called Motion JPEG, simply uses a digital video capture device to sample a video signal, capture frames, and compress each frame in its entirety using the JPEG compression method. A Motion JPEG data stream is then played back by decompressing and displaying each individual frame. A standard audio compression method is usually included in the Motion JPEG data stream. There are several advantages to using Motion JPEG:
Fast, real-time compression rate
No frame-to-frame interpolation (motion compensation) of data is required

(Motion JPEG files are, however, considerably larger than MPEG files.)

But there are also disadvantages:

They are somewhat slower to play back (more information per frame than MPEG)
They exhibit poor video quality if a higher JPEG compression ratio (quality factor) is used

On average, the temporal compression method used by MPEG provides a compression ratio three times that of JPEG for the same perceived picture quality.

MPEG Compression
MPEG uses an asymmetric compression method: compression under MPEG is far more complicated than decompression, making MPEG a good choice for applications that need to write data only once but read it many times. An example of such an application is an archiving system. Systems that require audio and video data to be written many times, such as editing systems, are not good choices for MPEG; they will run more slowly when using the MPEG compression scheme.

MPEG uses two types of compression methods to encode video data: interframe and intraframe encoding. Interframe encoding is based upon both predictive coding and interpolative coding techniques, as described below. When capturing frames at a rapid rate (typically 30 frames/second for real-time video), there will be a lot of identical data contained in any two or more adjacent frames. If a motion compression method is aware of this "temporal redundancy," as many audio and video compression methods are, then it need not encode the entire frame of data, as is done via intraframe encoding. Instead, only the differences (deltas) in information between the frames are encoded. This results in greater compression ratios, with far less data needing to be encoded.

This type of interframe encoding is called predictive encoding. A further reduction in data size may be achieved by the use of bi-directional prediction. Differential predictive encoding encodes only the differences between the current frame and the previous frame. Bi-directional prediction encodes the current frame based on the differences between the current, previous, and next frames of the video data. This type of interframe encoding is called motion-compensated interpolative encoding.

To support both interframe and intraframe encoding, an MPEG data stream contains three types of coded frames:
I-frames (intraframe encoded)
P-frames (predictive encoded)
B-frames (bi-directional encoded)

An I-frame contains a single frame of video data that does not rely on the information in any other frame to be encoded or decoded. Each MPEG data stream starts with an I-frame. A P-frame is constructed by predicting the difference between the current frame and the closest preceding I- or P-frame. A B-frame is constructed from the two closest I- or P-frames; the B-frame must be positioned between these I- or P-frames. A typical sequence of frames in an MPEG stream might look like this:
IBBPBBPBBPBBIBBPBBPBBPBBI

In theory, the number of B-frames that may occur between any two I- and P-frames is unlimited. In practice, however, there are typically twelve P- and B-frames occurring between each I-frame, so one I-frame occurs approximately every 0.4 seconds of video runtime.

Remember that MPEG data is not decoded and displayed in the order that the frames appear within the stream. Because B-frames rely on two reference frames for prediction, both reference frames need to be decoded from the bitstream first, even though the display order may have a B-frame in between the two reference frames. In the previous example, the I-frame is decoded first. But before the two B-frames can be decoded, the P-frame must be decoded and stored in memory with the I-frame. Only then may the two B-frames be decoded from the information found in the decoded I- and P-frames. Assume, in this example, that you are at the start of the MPEG data stream. The first ten frames are stored in the sequence IBBPBBPBBP (0123456789), but are decoded in the sequence:
IPBBPBBPBB (0312645978)

and finally are displayed in the sequence:


IBBPBBPBBP (0123456789)
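The reordering rule described above can be sketched as a small routine: each I- or P-frame must appear in decode order ahead of the B-frames that display before it, because those B-frames need it as a forward reference. This is an illustration of the rule, not real decoder code.

```python
def decode_order(display):
    """Given frame types in display order (e.g. 'IBBPBBPBBP'), return the
    frame indices in bitstream/decode order: each I/P reference frame is
    emitted before the B-frames that display ahead of it."""
    order, pending_b = [], []
    for i, ftype in enumerate(display):
        if ftype == "B":
            pending_b.append(i)       # B-frames wait for their forward reference
        else:
            order.append(i)           # the reference frame is decoded first...
            order.extend(pending_b)   # ...then the B-frames that depend on it
            pending_b = []
    order.extend(pending_b)
    return order

print(decode_order("IBBPBBPBBP"))  # [0, 3, 1, 2, 6, 4, 5, 9, 7, 8]
```

The output reproduces the 0312645978 decode sequence shown above for the 0123456789 display sequence.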

Once an I-, P-, or B-frame is constructed, it is compressed using a DCT compression method similar to JPEG's. Where interframe encoding reduces temporal redundancy (data identical over time), the DCT encoding reduces spatial redundancy (data correlated within a given space). Both the temporal and the spatial encoding information are stored within the MPEG data stream.

By combining spatial and temporal subsampling, the overall bandwidth reduction achieved by MPEG can be considered to be upwards of 200:1. However, with respect to the final input source format, the useful compression ratio tends to be between 16:1 and 40:1. The ratio depends upon what the encoding application deems "acceptable" image quality (higher-quality video results in poorer compression ratios). Beyond these figures, the MPEG method becomes inappropriate for an application. In practice, the sizes of the frames tend to be 150 Kbits for I-frames, around 50 Kbits for P-frames, and 20 Kbits for B-frames. The video data rate is typically constrained to 1.15 Mbits/second, the standard for DATs and CD-ROMs.

The MPEG standard does not mandate the use of P- and B-frames. Many MPEG encoders avoid the extra overhead of B- and P-frames by encoding only I-frames. Each video frame is captured, compressed, and stored in its entirety, in a similar way to Motion JPEG. I-frames are very similar to JPEG-encoded frames; in fact, the JPEG Committee has plans to add MPEG I-frame methods to an enhanced version of JPEG, possibly to be known as JPEG-II. With no delta comparisons to be made, encoding may be performed quickly; with a little hardware assistance, encoding can occur in real time (30 frames/second). Also, random access of the encoded data stream is very fast, because I-frames are not as complex and time-consuming to decode as P- and B-frames. (With P- and B-frames, any reference frame needs to be decoded before it can be used as a reference by another frame.) There are also some disadvantages to this scheme.
The compression ratio of an I-frame-only MPEG file will be lower than the same MPEG file using motion compensation. A one-minute file consisting of 1800 frames would be approximately 2.5Mb in size. The same file encoded using B- and P-frames would be considerably smaller, depending upon the content of the video data. Also, this scheme of MPEG encoding might decompress more slowly on applications that allocate an insufficient amount of buffer space to handle a constant stream of I-frame data.
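The figures quoted above, the typical frame sizes, the twelve-frame spacing between I-frames, and the 1.15 Mbits/second video rate, are mutually consistent, as a few lines of arithmetic show:

```python
# Check the quoted figures: a twelve-frame group IBBPBBPBBPBB using the
# typical frame sizes from the text (I ~150 Kbits, P ~50 Kbits, B ~20 Kbits),
# displayed at 30 frames/second.

FRAME_BITS = {"I": 150_000, "P": 50_000, "B": 20_000}

gop = "IBBPBBPBBPBB"                    # one I-frame per 12 frames
bits = sum(FRAME_BITS[f] for f in gop)  # bits per group
seconds = len(gop) / 30                 # 0.4 seconds of video per group

print(f"{bits / seconds / 1e6:.2f} Mbits/s")  # 1.15 Mbits/s, the CD/DAT rate
```

One such group every 0.4 seconds also matches the I-frame spacing stated earlier.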

MPEG-2
The original MPEG standard is now referred to as MPEG-1. The MPEG-1 video standard is aimed at small-scale systems using CD-ROM storage and small, lower-resolution displays. Its 1.5-Mbit/second data rate, however, excludes MPEG-1 from many high-power applications.

The next phase in MPEG technology development is MPEG-2, a form of digital audio and video designed for the television industry. It will be used primarily as a way to consolidate and unify the needs of cable, satellite, and television broadcasts, as well as computing, optical storage, Ethernet, VCR, CD-I, HDTV, and blue-laser CD-ROM systems. MPEG-2 is an extension of the MPEG-1 specification and therefore shares many of the same design features. The baseline part of MPEG-2 is called the Video Main Profile and provides a minimum definition of data quality. This definition fills the needs of high-quality television program distribution over a wide variety of data networks. Video Main Profile service over cable and satellite systems could possibly start in 1994. Consumers who want such features as interactive television and vision phones will benefit greatly from this service. Features added by MPEG-2 include:
Interlaced video formats
Multiple picture aspect ratios (such as 4:3 and 16:9, as required by HDTV)
Conservation of memory usage (by lowering the picture quality below the Video Main Profile definition)
Increased video quality over MPEG-1 (when coding for the same target bit rates)
Ability to decode MPEG-1 data streams

MPEG-2 can also multiplex audio, video, and other information into a single data stream and provides 2- to 15-Mbits/second data rates while maintaining full CCIR-601 image quality. MPEG-2 achieves this by the use of two types of data streams: the Program stream and the Transport stream. The Program stream is similar to the MPEG-1 System stream, with extensions for encoding program-specific information, such as multiple language audio channels. The Transport stream was newly added to MPEG-2 and is used in broadcasting by multiplexing multiple programs comprised of audio, video, and private data, such as combining standard-definition TV and HDTV signals on the same channel. MPEG-2 supports multi-program broadcasts, storage of programs on VCRs, error detection and correction, and synchronization of data streams over complex networks.

Just as MPEG-1 encoding and decoding hardware has appeared, so will the same hardware for MPEG-2. With its broad range of applications and its toolkit approach, MPEG-2 encoding and decoding is very difficult to implement fully in a single chip. A "do everything" MPEG-2 chipset is not only difficult to design, but also expensive to sell. It is more likely that MPEG-2 hardware designed for specific applications will appear in the near future, with much more extensible chipsets to come in the more distant future.

The compression used on the MPEG audio stream data is based on the European MUSICAM standard, with additional pieces taken from other algorithms. It is similar in conception to the method used to compress MPEG video data. It is a lossy compression scheme, which throws away (or at least assigns fewer bits of resolution to) audio data that humans cannot hear. It is also a temporal-based compression method, compressing the differences between audio samples rather than the samples themselves. At this writing, a publicly available version of the audio code was due to be released by the MPEG audio group.

The typical bandwidth of a CD audio stream is 1.5 Mbits/second. MPEG audio compression can reduce this data down to approximately 256 Kbits/second, for a 6:1 compression ratio with no discernible loss in quality (lower reductions are also possible). The remaining 1.25 Mbits/second of the bandwidth contain the MPEG-1 video and system streams. Using basically the same MPEG-1 audio algorithm, MPEG-2 audio will add discrete surround-sound channels.
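The audio budget quoted above can be checked directly; this sketch uses the round 1.5 Mbits/second and 256 Kbits/second figures from the text.

```python
# Check the audio budget: CD audio at ~1.5 Mbits/s, compressed by MPEG audio
# to ~256 Kbits/s, leaves the rest of the channel for video and system data.

CD_AUDIO = 1_500_000   # bits/second, uncompressed CD audio stream
MPEG_AUDIO = 256_000   # bits/second, MPEG-compressed audio

ratio = CD_AUDIO / MPEG_AUDIO           # the "6:1" figure (about 5.9:1 exactly)
video_budget = CD_AUDIO - MPEG_AUDIO    # bandwidth left on a 1.5 Mbits/s channel

print(f"audio compression ratio: {ratio:.1f}:1")
print(f"left for video/system:   {video_budget / 1e6:.2f} Mbits/s")
```

The remainder comes to about 1.24 Mbits/second, matching the roughly 1.25 Mbits/second the text allots to the MPEG-1 video and system streams.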

For Further Information


For further information about MPEG, see the MPEG Frequently Asked Questions (FAQ) document included on the CD-ROM. Note, however, that this FAQ is included for background only; because it is constantly updated, you should obtain a more recent version. The MPEG FAQ is posted monthly to the USENET newsgroups comp.graphics, comp.compression, and comp.multimedia, and is available by FTP from rtfm.mit.edu in the directories /pub/usenet/comp.graphics and /pub/usenet/comp.compression.

To obtain the full MPEG draft standard, you will have to purchase it from ANSI. The MPEG draft ISO standard is ISO CD 11172, which contains four parts:
11172.1 Synchronization and multiplexing of audio-visual information
11172.2 Video compression
11172.3 Audio compression
11172.4 Conformance testing

Contact ANSI at:

American National Standards Institute
Sales Department
1430 Broadway
New York, NY 10018
Voice: 212-642-4900

Drafts of the MPEG-2 standard are expected to be available soon.

For more information about MPEG, see the following article:

Le Gall, Didier, "MPEG: A Video Compression Standard for Multimedia Applications," Communications of the ACM, vol. 34, no. 4, April 1991, pp. 46-58.

On the CD-ROM you will find several pieces of MPEG software. The ISO MPEG-2 Codec software, which converts uncompressed video frames into MPEG-1 and MPEG-2 video-coded bitstream sequences, and vice versa, is included in source code form and as a precompiled MS-DOS binary. The Sparkle MPEG player is also included for Macintosh platforms.

This page is taken from the Encyclopedia of Graphics File Formats and is licensed by O'Reilly under the Creative Commons Attribution license.


February 2009 Edition - The U.S. Satellite Market


Video Compression That's Future Proof
by Rod Tiede, President + CEO, Broadcast International

The ability to reduce video bandwidth needs has become critical as the demand for video skyrockets among consumers and businesses alike. A new study from TNS and the Conference Board shows that, since 2006, the number of U.S. households watching TV programming online has nearly doubled. A front-page New York Times story warns of the threat posed by video "road hogs" who are jamming up the Internet by uploading and downloading videos. At the same time, traditional TV viewing has not decreased; rather, users are demanding more and higher-quality content. In 2007, Stan Schatt of ABI Research, a leading broadcast industry research firm, warned, "Cable providers are going to get killed on bandwidth as HD programming becomes more commonplace." All media delivery platforms, whether satellite, broadcast TV, IPTV, Internet video, or wireless, face the same challenge: lack of video bandwidth.

Broadcast International has developed CodecSys video compression technology to break the video bandwidth barrier. CodecSys is a family of ultra-high-performance video compression solutions built on the industry's first future-proof open software architecture. The CodecSys software suite ranges from market-specific solutions, such as H.264 encoding for the IPTV and Internet video markets, to the industry's first Video Operating System (VOS), supporting multiple codecs and providing advanced transcoding capabilities for video re-purposing and media management. "These solutions are designed from the ground up to evolve and change with the video delivery infrastructure, while providing the highest quality video at the lowest possible bandwidth," said Rod Tiede, Broadcast International president and CEO.

The key advantage of the CodecSys software in the rapidly changing world of video delivery is its completely open architecture, enabling it to readily accommodate new codecs and standards as they come on the market. This concept is in sharp contrast to competitive solutions, which rely on single codecs and embedded hardware/software architectures that are rendered obsolete as soon as a new codec or standard emerges. For example, there are literally billions of dollars in video compression infrastructure on the market that will become useless when the new generation of H.264 standard codecs is widely adopted. Following closely on the heels of that standard is JPEG 2000. With CodecSys, upgrading to a new codec standard is as simple as downloading a software upgrade. Only the patented open architecture of CodecSys can readily incorporate new standards and technologies, providing an evergreen, future-proof solution to customers.

CodecSys' Patented Multi-Codec Support

CodecSys multi-codec software is patented video compression technology that reduces bandwidth needs by more than 80 percent for HD-quality video over satellite, cable, IP, and wireless networks. By dramatically reducing video bandwidth requirements, CodecSys' multi-codec software will enable a new generation of video applications: live streaming video over the Internet and via mobile devices. For existing applications such as HDTV over satellite or cable, CodecSys currently provides unprecedented price/performance benefits, enabling multiple HDTV channels to be broadcast over the same media that currently support only single channels.
CodecSys achieves its breakthrough performance through a patented architecture that uses artificial intelligence to analyze a video stream and then select, from an entire library of specialist codecs, the codec best suited to a particular video frame sequence. These specialist codecs are designed to handle particular types of high-bandwidth video frames or streams, such as fast-motion sequences in a basketball game or explosions in an action movie. Such video streams are extremely bandwidth-intensive and pose chokepoints to generalist codecs. By selecting the best expert codec for the job, CodecSys is able to eliminate these chokepoints and offer performance several times higher than competitive products based on a single, general-purpose codec for every type of video stream. The graphic below depicts the benefits of switching between multiple codecs.
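The per-sequence codec selection described above can be illustrated with a toy dispatcher. Everything in this sketch, the motion metric, the threshold, and the codec names, is a hypothetical stand-in invented for illustration; the article does not describe Broadcast International's actual selection algorithm.

```python
# Toy illustration of per-sequence codec selection: measure how much a short
# frame sequence changes, then route high-motion sequences to a (hypothetical)
# motion-specialist codec. Frames are modeled as flat lists of pixel values.

def mean_motion(frames):
    """Mean absolute difference between corresponding pixels of adjacent frames."""
    diffs = [abs(a - b)
             for prev, cur in zip(frames, frames[1:])
             for a, b in zip(prev, cur)]
    return sum(diffs) / max(len(diffs), 1)

def pick_codec(frames, threshold=30):
    """Dispatch a sequence to the specialist best suited to its motion content."""
    return "fast_motion_codec" if mean_motion(frames) > threshold else "general_codec"

still = [[10, 10, 10], [11, 10, 10]]    # near-identical frames: low motion
action = [[0, 0, 0], [200, 180, 160]]   # large frame-to-frame change: high motion

print(pick_codec(still))   # general_codec
print(pick_codec(action))  # fast_motion_codec
```

A real system would of course analyze far richer features than a single motion statistic, but the dispatch structure is the point: the stream is segmented and each segment is encoded by the codec predicted to handle it best.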

CodecSys: How It Works

CodecSys is suited to solving the problems posed by video delivery over any platform, whether IPTV, cable, wireless, or Internet. A key feature of the system is its use of a library of video and audio compression codecs in a just-in-time fashion, in order to dynamically leverage the strength of each codec rather than trying to use a one-size-fits-all approach. This produces a multi-codec video stream that exhibits superior compression, quality, security, and adaptability over the traditional single-codec encoded multimedia stream. "The CodecSys software can be applied in any live or on-demand video environment over virtually any delivery platform, whether cable, telco, satellite, wireless, IPTV, or Internet streaming," said Tiede. "With on-demand applications, content can be prerecorded and made available as needed."

The Challenge of Internet Video Delivery

Video compression technology is becoming an increasingly critical requirement for video sent over the Internet. According to a report from Nemertes Research cited in a recent New York Times story, video uploading and downloading from sites such as YouTube consumed as much bandwidth last year as the entire Internet did in 2000. Most experts agree that a bandwidth crisis looms if new technologies are not developed and implemented to alleviate the congestion. The bandwidth crisis will be particularly severe in the U.S., which has dropped from fourth to 15th place in the broadband ranking maintained by the Organization for Economic Cooperation and Development. Michael Kleeman, a senior fellow at the Annenberg Center for Communication at the University of Southern California, has cited video compression as a technology critical to the resolution of the bandwidth crisis: "There is no question that better video compression technology would take the pressure off the Internet, especially as more and more users want to upload video to sites like YouTube and MySpace in order to share experiences."
At present, many content providers are forced to use proprietary streaming solutions that do not solve the fundamental problems associated with Internet video and thus deliver unsatisfactory quality. IP networks impose packet loss on data, which can severely impede the quality of a compressed video and audio stream with interdependencies and can impair sound synchronization. Most current solutions offer only a 25 percent reduction in bandwidth at best, with compromised picture quality. Even new fiber initiatives such as those from Verizon and AT&T will be challenged to deliver the quantity and quality users are going to demand. CodecSys promises a much more effective and longer-range solution, offering bandwidth reductions of up to 80 percent, an open software platform for upgrading compression technology, and scalable hardware to accommodate inevitable volume increases.

The Cable Industry... Staying Competitive

Broadcast television and HD video have long been the stronghold of cable companies, but that may well change as IPTV initiatives promise new alternatives to consumers, delivering HD-quality video over newly tooled IP infrastructures. In order to stay competitive, cable providers need to fight back with new and improved services such as more HD programming, Internet gaming, pay-per-view, even social networking, but the cable infrastructure is not up to the task because of inadequate video compression technology.

A recent press release announcing a new report by CMP market research group Heavy Reading states that surging demand for HDTV, video on demand, time-shifting video services such as digital video recorders, and Internet video is rapidly depleting bandwidth reserves on cable networks and will force cable MSOs to upgrade their networks with new technologies aimed at conserving bandwidth.

"Solutions such as switched digital video (SDV) are currently being explored by cable providers to address the bandwidth crisis," said Tiede. "SDV is an extremely expensive solution involving change-out of end-user devices, and will not address the critical upstream bandwidth issues. It will likely be used with more efficient encoding technologies as well as plant upgrades in order to provide a longer-term solution." Next-generation video compression technology, such as CodecSys, offers a much more effective and economical solution. Currently, the vast majority of video is delivered at the MPEG-2 standard of 19.4 Mbps. However, that number needs to come down by nearly 80 percent for live and pre-recorded video in order to make a real impact on the bandwidth crisis in the cable industry. As CodecSys delivers live HD video at 3 Mbps, it provides a critical solution to that crisis.

Satellite bandwidth is an expensive commodity. The space a company leases for transmission of its video content is fixed; if the company wants to add more channels, or to add bandwidth-intensive HD channels, it has to rent more space, which increases its expenses. The other option is to decrease its bandwidth requirements. The BI CodecSys AVC Encoder/Transcoder, using the patented CodecSys technology, allows for better compression of video without loss of quality. Better compression means more channels on the same bandwidth, or the ability to add HD channels. Both content providers and satellite service providers can benefit from this: content providers can transmit more channels on the same bandwidth space, increasing their ROI, and satellite service providers can offer a better ROI to their customers and serve more customers on the same satellite space.

About the Author

Rod Tiede is the President and CEO of Broadcast International and is responsible for fostering the vision, directing the overall management, and providing progressive leadership for the company. Since 1988, Rod has been instrumental in directing the worldwide reach of Broadcast International, and his forward-looking strategy has made BI a preferred international technology integrator.


Comments:

Posted by: john rawlins - January 26, 2011

Significant breakthroughs have been achieved by AEVD Technology, LLC to address the eigenvalue and eigenvector problem. These encompass computational accuracy improvements to the last half bit of computer precision for a wide range of matrix algebra problem sizes. Eigenvalue solution times have been reduced to approximately the order of the square of the matrix size being solved. These accuracy and speed improvements are the direct result of the mathematical algorithms that have been invented and applied to specific matrix algebra formulations. Eigenvalue convergence is now guaranteed as a consequence of these new algorithms. Eigenvector solution operations have also been reduced by these same ratios. Accuracy improvements are the direct result of fewer required computational operations for a complete solution.

This set of improved methods has been adapted to form the basis of the AEVD solution for the Singular Value Decomposition theorem. Additional applications are set for a variety of system identification formulations, stability analyses, minimization equation sets, and coincident solutions. For each of these applications, the AEVD algorithms have demonstrated the accuracy and speed improvements described above.

A very promising application for AEVD SVD technology is the compression of streaming video.

Here is an extract from the paper "A Hybrid DCT-SVD Video Compression Technique (HDCTSVD)", written in 2005 by K.R. Rao, PhD in Electrical Engineering, and Lin Tong, PhD in Applied Mathematics:

"In the existing video coding standards such as H.264/AVC, H.263, MPEG, etc., discrete cosine transform (DCT) plays the role of transform coding due to its high energy compaction and efficient computation complexity [1]. Singular value decomposition (SVD) is a transform that provides optimal energy compaction for any data [2][4]. However, the high computational complexity associated with computing the eigenvectors and eigenvalues has limited its application."

In other words, the SVD algorithm offers the best compression/loss ratio, but it cannot be used as easily as the DCT algorithm because of its computational cost. This is the issue we intend to address.
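The energy-compaction property the extract refers to can be demonstrated with a truncated SVD: keeping only the k largest singular values gives the best rank-k approximation of an image, whose storage cost is k(m+n+1) values instead of m×n. This is a generic NumPy sketch of the idea, not the AEVD implementation.

```python
# Generic truncated-SVD image compression sketch (NumPy): keep only the
# k largest singular values of a 2-D image array.
import numpy as np

def svd_compress(image, k):
    """Best rank-k approximation of a 2-D array via the SVD."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
img = rng.random((64, 64))   # stand-in for a grayscale frame

m, n, k = 64, 64, 16
approx = svd_compress(img, k)
# Stored values: the U, s, Vt slices take k*(m + n + 1) numbers vs. m*n raw.
print(f"storage ratio: {m * n / (k * (m + n + 1)):.1f}:1")
```

Increasing k lowers the reconstruction error at the cost of a worse storage ratio; the computational bottleneck is the SVD itself, which is exactly the cost the comment claims AEVD reduces.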

According to our preliminary research, the AEVD implementation of the SVD algorithm provides a tenfold decrease in the computing power necessary for computing the eigenvectors and eigenvalues, as well as a dramatic increase in decomposition accuracy. Both of these improvements will translate into either higher image accuracy for the same amount of data transmitted, or a smaller amount of data transmitted for the same image accuracy.

To this effect, our current research efforts have been directed toward the implementation of a video codec using our proprietary SVD technology, based on the open-source XviD codec, for demonstration purposes. Our objective is to solve the problem currently faced by academia regarding the necessary trade-off between accuracy and computational complexity. We believe our approach can maintain current levels of accuracy while drastically reducing the computational complexity, allowing improvements to several existing technologies.

We have identified numerous firms and industry groups that might benefit tremendously from such a breakthrough. Companies like Netflix and YouTube, as well as any organization consuming high bandwidth for video streaming, are among our targeted clients. Our technology could save them astronomical bandwidth and hardware costs while still serving the same high-quality content, which represents a very important improvement to their business model.

Broadcasters and news companies can also benefit from the technology for live broadcast, through higher-quality compression and decompression in real time.

It is our conclusion that our SVD technology has the potential to trigger a revolution across several scientific disciplines using data compression and could represent a major game-changer across the board, especially in tasks related to video compression.

References:

Y. Wongsawat, H. Ochoa, K.R. Rao, and S. Oraintara, "A Modified Hybrid DCT-SVD Image-Coding System for Color Image"
Lin Tong and K.R. Rao, "A Hybrid DCT-SVD Video Compression Technique (HDCTSVD)"

Regards,
John Rawlins
678-776-1343

