Digital Image Compression

G. Motta, F. Rizzo and J. A. Storer

Brandeis University, Volen Center for Complex Systems,

Computer Science Department

Waltham, MA 02254

{gim, frariz, storer}@cs.brandeis.edu

Digital image compression methods reduce the space necessary to encode, store or transmit digital images by changing the way those images are represented. To see why image compression is desirable, consider an 8x10 inch color picture, digitized at a resolution of 600 pixels per inch with one byte per color plane; the number of bytes required to represent the image is:

8 × 10 × 600 × 600 × 3 bytes = 86,400,000 bytes ≈ 82 Mbytes.

On typical current personal computing systems with only a few hundred megabytes to a few gigabytes of disk space, this is problematic, and even in the future with much cheaper storage, the ability to compress images by a factor of 25 or more (typical of standards such as JPEG) will be highly desirable. Similar issues apply to communications.

There are numerous methods for compressing data and each has advantages and disadvantages depending on what kind of image one wishes to compress, how much loss one is willing to tolerate, and what form that loss takes. Methods can also be combined (in fact, most standard compression systems combine more than one technique).

1. Digital Image Representation

The simplest images are bi-level images. This format is often used to encode sharp documents and faxes, where each pixel can assume only one of two values, black or white. Each pixel is usually encoded with one bit (“0” or “1”).

When several intermediate gray tones are present, as in a black-and-white photograph, two values are no longer sufficient to encode all the possible gray intensities, and each pixel must be associated with a numerical value proportional to the brightness of that point. Typical choices for these values fall in the ranges 0 - 15 or 0 - 255 (requiring 4 and 8 bits per pixel, respectively). This second kind of image is referred to as a gray-level picture.

Color images take advantage of the fact that each color can be expressed as a combination of three “primary” colors (Red, Green and Blue or Yellow, Magenta and Cyan, for example); a color picture can therefore be considered the superimposition of three “simpler” pictures (called color planes), each of which encodes the brightness of one primary color. In other words, each color plane of an image can be treated much like a gray-level picture, with a range of values based on the luminosity of that particular color. This sort of representation, called RGB or YMC according to the color planes, is “hardware oriented” (monitors, printers and photographic devices use these color schemes), and it is preferable when dealing with synthetic (computer generated) images.

In many scientific applications, images may have more than 3 planes of information (e.g., multi-spectral images) and may be higher dimensional.

In natural (non-computer generated) images, however, the brightness values of corresponding pixels in different color planes are highly correlated (what is bright in the Red plane, for example, will often be bright in the Green and Blue planes), and this correlation can be easily exploited using an alternative system of color representation. Instead of dividing the image into three color planes, the overall brightness of each pixel can be encoded in a luminosity plane (Y), which makes two color planes sufficient to encode the chromatic variations (these two planes are called Cb and Cr). This “YCbCr” color format is used, with some variations, in broadcast TV transmission (the luminance component alone gives a representation that is backward compatible with B/W pictures).
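
As an illustration, here is a minimal sketch of a per-pixel RGB-to-YCbCr conversion; the weights follow the common ITU-R BT.601 convention, but the exact coefficients, offsets and value ranges vary among standards, so the numbers (and the function name) below are merely illustrative.

# Sketch: RGB -> YCbCr conversion with ITU-R BT.601-style weights.
# Coefficients are illustrative; broadcast and still-image standards
# differ in offsets, ranges and chroma subsampling.
def rgb_to_ycbcr(r, g, b):
    """Convert one pixel (components in 0..255) to Y, Cb, Cr."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b          # luminance
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0  # blue-difference chroma
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128.0  # red-difference chroma
    return y, cb, cr

if __name__ == "__main__":
    print(rgb_to_ycbcr(200, 180, 40))  # a bright, yellowish pixel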

2. Lossless vs. Lossy Methods

A basic distinction among image compression algorithms can be made in terms of reconstruction fidelity. When it is possible to recover the original data exactly from the compressed representation, the algorithm is called “lossless.” Otherwise, when some distortion is introduced in the coding process and part of the original information is irremediably lost, the algorithm is called “lossy.” Lossy compressors, since they can discard part of the information, usually achieve a more compact representation than lossless systems.

A compromise between these two categories is represented by “near-lossless” or “transparent” algorithms, where a small coding error is allowed only when it is semantically irrelevant. These methods are useful, for example, in compressing medical images, where two contrasting needs arise: high compression rates (required by the large amount of data) and the preservation of the diagnostic information.

3. Lossless Methods

Many image and video compression methods rely upon lossless coding as the final step in the compression process. Here we briefly review standard lossless compression methods that are applied to a one-dimensional stream of data. Standards that specifically address lossless compression of images will be discussed later.

Run-Length Coding

A simple and common lossless compression technique is Run Length Encoding (or RLE). It is mainly used with documents and faxes, where long sequences of pixels with the same color are present. A series (or run) of pixels having the same value is coded by the pair (v, l), where v is the recurrent value and l is the length of the run. For example, a sequence such as 5 5 5 5 8 8 8 8 8 8 would be encoded as (5,4)(8,6). The CCITT Group 3 and 4 facsimile standards are mainly based on RLE. For further reading on run-length coding, see the book of Held [1983].
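
A minimal sketch of run-length encoding and decoding of a scan line, with illustrative function names:

# Sketch: run-length encoding of a 1-D sequence of pixel values.
def rle_encode(pixels):
    """Return a list of (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((p, 1))               # start a new run
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

if __name__ == "__main__":
    data = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0]   # e.g., part of a fax scan line
    encoded = rle_encode(data)
    assert rle_decode(encoded) == data
    print(encoded)   # [(0, 4), (1, 2), (0, 6)]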

Huffman Coding

Usually an alphabet is encoded into binary by associating with each character a string of a fixed number of bits. For example, the standard ASCII code associates with each of the 128 characters a string of 7 bits. However, any method for encoding characters as variable length bit strings may be acceptable in practice so long as strings are always uniquely decodable; that is, it is always possible to determine where the code for one character ends and the code for the next one begins. A sufficient (but not necessary) condition for an encoding method to be uniquely decodable is that the code for one character is never a prefix of a code for another; such codes are called binary prefix codes, and are naturally represented as root to leaf paths in a trie. Huffman codes are an optimal way of assigning variable length codes to an alphabet with a given probability distribution for each character. The approach can be generalized to the case when the probability distribution for the next character depends on the previous characters seen (e.g., after seeing elephan, you are pretty sure that a “t” is next), but the size of the data structure grows exponentially with the number of characters upon which the probabilities are maintained. The approach can also be made adaptive by incrementally changing the data structure as each character is processed to reflect the probability distribution of characters seen thus far (this is how the UNIX “compact” utility works). For further reading, see the book of Storer [1988] and the original papers of Huffman [1952] and Gallager [1978].
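
A minimal sketch of the classical Huffman construction (repeatedly merging the two least probable subtrees); this is a textbook version, not the data structure used by any particular utility:

# Sketch: build a Huffman prefix code from symbol probabilities.
import heapq

def huffman_code(probabilities):
    """probabilities: dict symbol -> probability. Returns dict symbol -> bit string."""
    # Each heap entry: (probability, tie_breaker, {symbol: partial_code}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    counter = len(heap)
    if len(heap) == 1:                      # degenerate single-symbol alphabet
        return {s: "0" for s in probabilities}
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)     # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

if __name__ == "__main__":
    print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}))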

Arithmetic Coding

Arithmetic coding is a simple compression method in itself, but it is also useful for encoding the probabilities provided by an adaptive model in predictive methods (to be discussed shortly). The idea is to encode data as a single long binary number that specifies a point on the real line between 0 and 1. Not only do more frequent characters get shorter codes, but “fractions” of a bit can be assigned. Both the encoder and decoder maintain a coding interval between the points low and high; initially high=1 and low=0. Each time a character c is read by the encoder or written by the decoder, they both refine the coding interval by doing:

length = high-low

high = low + (RIGHT(c)*length)

low = low + (LEFT(c)*length)

As more and more characters are encoded, the leading digits of the left and right ends become the same and can be transmitted to the decoder. Assuming binary notation, if the current digit received by the decoder is the i-th digit to the right of the decimal point in the real number r formed by all of the digits received thus far, then the interval from r to r + 2^(-i) is a bounding interval; the left and right ends of the final interval will both begin with the digits of r. There are many details to practical implementations, especially with regard to using limited precision arithmetic and avoiding expensive arithmetic operations. For further reading, see the book of Bell, Cleary, and Witten [1990].
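
The refinement step above can be made concrete as follows, using exact rational arithmetic for clarity (practical coders instead use limited-precision integers and emit leading bits as soon as the two interval ends agree); the three-symbol model is purely illustrative.

# Sketch: arithmetic-coding interval refinement with exact fractions.
# Practical coders use limited-precision integer arithmetic and emit
# leading bits incrementally; this sketch only shows the interval.
from fractions import Fraction

# Illustrative model: cumulative probability bounds for each character.
MODEL = {"a": (Fraction(0), Fraction(1, 2)),
         "b": (Fraction(1, 2), Fraction(3, 4)),
         "c": (Fraction(3, 4), Fraction(1))}

def LEFT(c):  return MODEL[c][0]
def RIGHT(c): return MODEL[c][1]

def encode_interval(message):
    low, high = Fraction(0), Fraction(1)
    for c in message:
        length = high - low
        high = low + RIGHT(c) * length
        low  = low + LEFT(c) * length
    return low, high          # any number in [low, high) identifies the message

if __name__ == "__main__":
    lo, hi = encode_interval("abac")
    print(float(lo), float(hi))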

Textual Substitution / LZ Methods

With textual substitution methods, the encoder and decoder cooperate to maintain identical copies of a dictionary D that is constantly changing. The encoder reads characters from the input stream that form an entry of D, transmits the index of this entry, and updates D with some method that depends only on the current contents of D and the current match. Textual substitution methods are often referred to as “LZ-type” methods due to the work of Lempel and Ziv [1976] and Ziv and Lempel [1977,1978], where “LZ1-type” or “LZ’77-type” methods are those based on matching to the preceding n characters and “LZ2-type” (see also Storer and Szymanski [1978,1982]) or “LZ’78-type” methods are those based on a dynamic dictionary, usually represented by a trie data structure in practice. For example, the UNIX “gzip” utility employs the LZ77 approach, and the UNIX “compress” utility as well as the V.42bis modem compression standard employ the LZ78 approach. For further reading see the book of Storer [1988]. See also Chapter 7 of Storer [1992] for a discussion of parallel hardware and Storer and Reif [1997] for a discussion of error propagation prevention.
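
A minimal sketch of an LZ’78-style encoder that grows a dictionary and emits (index, next character) pairs; real implementations such as compress or V.42bis differ in dictionary representation and management.

# Sketch: LZ'78-style textual substitution.
# Output is a list of (dictionary_index, next_character) pairs,
# where index 0 denotes the empty string.
def lz78_encode(text):
    dictionary = {"": 0}
    output = []
    phrase = ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch                       # keep extending the match
        else:
            output.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                                 # flush a trailing match
        output.append((dictionary[phrase[:-1]], phrase[-1]))
    return output

if __name__ == "__main__":
    print(lz78_encode("abababababa"))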

Prediction and Modeling

Prediction allows a compact representation of data by encoding the difference between the data itself and the information extrapolated or “predicted” by a model. If the predictor works well, the predicted data is very close to the input, and the error is small or negligible and therefore easier to encode. When compressing images, the value of each pixel is predicted from the values of its (already encoded) neighbors, then the difference between the pixel and the predicted value (the prediction error) is encoded. Whether the prediction error is encoded exactly or approximately determines whether the resulting system is lossless or lossy. The original pixel is then reconstructed at the receiver by adding the decoded error to the prediction. Arithmetic coding is often employed with predictive methods due to its ability to incrementally and adaptively encode each token with a fractional number of bits that reflects the current probability distribution determined by the model. For further reading on arithmetic coding, see the book of Bell, Cleary, and Witten [1990].
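
A minimal sketch of lossless predictive coding with the simplest possible predictor, the previously scanned pixel; real coders use neighborhood predictors and feed the residuals to an entropy coder.

# Sketch: lossless predictive coding with a "previous pixel" predictor.
# The residuals would normally be fed to an entropy coder (Huffman/arithmetic).
def predict_encode(row):
    """Return prediction residuals for a scan line of pixel values."""
    residuals = []
    previous = 0                      # assumed initial prediction
    for pixel in row:
        residuals.append(pixel - previous)
        previous = pixel              # the decoder will know this value too
    return residuals

def predict_decode(residuals):
    row, previous = [], 0
    for e in residuals:
        previous = previous + e       # prediction + decoded error
        row.append(previous)
    return row

if __name__ == "__main__":
    line = [100, 101, 103, 103, 102, 180, 181]
    res = predict_encode(line)        # small values, easy to compress
    assert predict_decode(res) == line
    print(res)                        # [100, 1, 2, 0, -1, 78, 1]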

4. Lossy Image Compression

Transform Coding

Transform coding algorithms are based on a different principle: they use a transformation that exploits particular characteristics of the signal, increasing the performance of an entropy coding scheme (Huffman or arithmetic coding, for example).

For natural sources (audio, images and video), the information that is relevant to a human user is better described in the frequency domain. When a pictorial scene is decomposed into its frequency components, the low frequencies (slow luminosity changes) correspond to the scene illumination, and the high frequencies (rapid changes) characterize the objects’ contours.

Representing the input in the frequency domain results in better control of the error introduced by lossy compression; information that is not important for a human viewer can be easily discarded or distorted, so that the image becomes smaller or easier to compress.

Several time-domain to frequency-domain transformations have been proposed for digital image signals; one of the most widely used, the Discrete Cosine Transform (or DCT), has the advantage of good decorrelation of the signal at an acceptable computational cost.

Transformations are not compression methods in the literal sense (sometimes they even increase the input’s size), but when used properly they provide a powerful enhancement of entropy-based compression methods.
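
As an illustration, here is a minimal sketch of the 8-point DCT-II applied to the rows and then the columns of an 8x8 block, computed directly from the definition (practical codecs use fast factorizations; function names here are illustrative).

# Sketch: two-dimensional DCT of an 8x8 block, computed directly from the
# DCT-II definition (no fast algorithm, for clarity only).
import math

N = 8

def dct_1d(v):
    out = []
    for k in range(N):
        s = sum(v[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * s)
    return out

def dct_2d(block):
    rows = [dct_1d(row) for row in block]                              # transform rows
    cols = [dct_1d([rows[r][c] for r in range(N)]) for c in range(N)]  # then columns
    return [[cols[c][r] for c in range(N)] for r in range(N)]

if __name__ == "__main__":
    flat = [[128] * N for _ in range(N)]      # a uniform block ...
    coeffs = dct_2d(flat)
    print(round(coeffs[0][0], 1))             # ... has energy only in the DC term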

For further reading, see the book of Rao and Yip [1990].

Wavelet Compression

Another compression scheme that adopts transform coding is based on wavelet functions. The basic idea of this coding scheme is to process data at different scales or resolutions. If we look at a painting from a certain distance, we notice its macro structure (the subject of the painting); if we look closely, we notice its micro structures (the painter’s brush-strokes). The result of using wavelet functions is to see both the subject and the brush-strokes. This is achieved using two versions at different scales of the same prototype function, the “mother wavelet” or “wavelet basis.” A contracted version of the basis function is used for the analysis in the time domain, while a stretched one is used for the frequency domain. To be effective, the wavelet basis has to be an oscillatory, localized function (the Fourier and Cosine transforms are not of this type).

Localization (both in time and frequency) means that it is always possible to find a particular scale at which a specific detail of the signal can be outlined. It also means that wavelet analysis drastically reduces the amplitude of the transformed signal: roughly half of the data (the detail coefficients) is almost zero and therefore easily compressible, which makes this representation very appealing for image compression.
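
As a sketch of the idea, the following computes one level of a Haar decomposition of a 1-D signal into a coarse (“average”) half and a fine (“detail”) half; applied along rows and columns it yields the four-subband image decomposition of Figure 1 (the Haar basis is simply the easiest mother wavelet to write down).

# Sketch: one level of the (orthonormal) Haar wavelet transform of a 1-D signal.
# The "average" half captures the coarse structure, the "detail" half the
# fine structure; on smooth data most detail coefficients are near zero.
import math

def haar_step(signal):
    assert len(signal) % 2 == 0
    averages, details = [], []
    for i in range(0, len(signal), 2):
        a, b = signal[i], signal[i + 1]
        averages.append((a + b) / math.sqrt(2.0))
        details.append((a - b) / math.sqrt(2.0))
    return averages, details

def haar_inverse(averages, details):
    signal = []
    for s, d in zip(averages, details):
        signal.append((s + d) / math.sqrt(2.0))
        signal.append((s - d) / math.sqrt(2.0))
    return signal

if __name__ == "__main__":
    x = [9.0, 9.0, 10.0, 10.0, 10.0, 12.0, 50.0, 50.0]
    avg, det = haar_step(x)
    print([round(d, 2) for d in det])     # mostly small detail coefficients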

Figure 1. Wavelet decomposition.

For further reading, see the book of Vetterli and Kovacevic [1995].

Vector Quantization

A dictionary (or codebook) is a collection of small, statistically relevant patterns (codewords). An image is encoded by dividing it into blocks and assigning to each block the index of the closest codeword in the dictionary. Matching can be exact or approximate, achieving lossless or lossy compression, respectively. The codebook is sometimes allowed to change and adapt to the input’s peculiarities.

The best-known dictionary method is Vector Quantization; in its simplest form, it is a lossy compression scheme that uses a static dictionary. A set of images (the training set), statistically representative of the source’s behavior, is carefully selected; each image is divided into blocks, and a small set of dictionary codewords is determined.

The codewords are selected so as to minimize the coding error on the training set. Usually, because of its mathematical tractability, the Mean Squared Error is adopted as the error metric and is used to guide both the design and the encoding process.

The compression rate depends both on the size of each block and on the size of the dictionary. A dictionary with N codewords of k pixels each compresses an image digitized with b bits per pixel with a ratio:

(k × b) / ⌈log2 N⌉

since a block of k pixels requires k × b bits uncompressed, but only the ⌈log2 N⌉ bits of a codeword index once encoded.

Once the dictionary is determined, encoding is performed by assigning to each block of the image the index of the closest (i.e., minimum-error) codeword in the dictionary. The decoder, which also knows the dictionary, simply expands the indices into their corresponding blocks.

It is clear that in a Vector Quantizer, encoding and decoding are highly asymmetric processes. Searching for the closest block in the dictionary (encoding) is computationally much more expensive than retrieving the codeword associated with a given index (decoding). Even more expensive is the dictionary design; fortunately, this process is usually performed off-line.
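
A minimal sketch of the encoding and decoding steps, with a brute-force nearest-codeword search under the MSE criterion (codebook design, e.g. by the algorithm of Linde, Buzo, and Gray [1980], is considerably more involved and is not shown):

# Sketch: VQ encoding by exhaustive nearest-codeword search (MSE criterion).
def squared_error(block, codeword):
    return sum((x - y) ** 2 for x, y in zip(block, codeword))

def vq_encode(blocks, codebook):
    """Map each image block (a flat list of pixels) to the index of the closest codeword."""
    return [min(range(len(codebook)),
                key=lambda i: squared_error(block, codebook[i]))
            for block in blocks]

def vq_decode(indices, codebook):
    return [codebook[i] for i in indices]

if __name__ == "__main__":
    # Illustrative 2x2 blocks (flattened) and a tiny hand-made codebook.
    codebook = [[0, 0, 0, 0], [255, 255, 255, 255], [0, 255, 0, 255]]
    blocks = [[10, 5, 0, 8], [250, 251, 255, 240], [3, 240, 1, 250]]
    print(vq_encode(blocks, codebook))     # [0, 1, 2]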

Due to the independent coding of each block, a blocking artifact is frequently present at high compression rates.

Vector Quantization has been proved to be asymptotically optimal as the block size increases; unfortunately, the size of the codebook grows exponentially with the block size.

Many variations have been proposed to speed up the encoding and to simplify the codebook design, many of them based on tree data structures. For further reading see the book of Gersho and Gray [1992]; see also the paper of Constantinescu and Storer [1994] for a discussion of adaptive approaches.

Fractal Compression

Fractal-based algorithms have very good performance and high compression ratios (32 to 1 is not unusual); their use can be limited by the intensive computation required. The basic idea can be described as a “self Vector Quantization,” where an image block is encoded by applying a simple transformation to one of the blocks previously encoded. The transformations frequently used are combinations of scaling, reflections and rotations of another block.

Unlike vector quantization, fractal compressors do not maintain an explicit dictionary, and the search can be long and computationally intensive. Conceptually, each encoded block must be transformed in all possible ways and compared with the current block to determine the best match. Because the blocks are allowed to have different sizes, there is very little “blocking” noise, and the perceived quality of the compressed image is usually very good.

For further reading, see the books of Barnsley [1993] or Fisher [1995].

5. Data Compression Standards

JPEG

The Joint Photographic Experts Group (JPEG) developed and issued, in the early 1990s, a standard for color image compression. The standard was mainly targeted at compressing natural B/W and color images. JPEG, like almost every other transform-coding image compression algorithm, consists of two steps. The first is lossy and involves transformation and quantization; it is used to remove information that is perceptually irrelevant to a human user. The second step is a lossless entropy encoding that eliminates any statistical redundancy still present in the compressed representation.

JPEG assumes a color image divided into color planes and compresses each of them independently. The color planes are represented in terms of luminosity (the Y component) and two chrominances (Cb and Cr). This “YCbCr” color representation takes advantage of the fact that the human visual system is more sensitive to luminosity than to color changes, so it is possible to achieve some compression (even before applying JPEG) just by reducing the resolution of the two chrominance components.

Each color plane is further divided into blocks of 8x8 pixels. This block size was determined as the best compromise between the computational effort and the compression achieved. For each block, a decomposition in the frequency domain is computed using the Discrete Cosine Transform.

The DCT coefficients are quantized using a different scalar quantizer for each frequency component. The values for the quantizers are stored in a “quantization table.” According to experiments on the human visual system, the low-frequency components are better perceived, so JPEG quantizes them more accurately than the high frequencies.

Figure 2. JPEG: DCT, quantization and zigzag scanning of an 8x8 image block.

Quantization results in an 8x8 matrix with a few small numbers and many zeroes. Using a zigzag pattern, the matrix is scanned from the low to the high frequencies and converted into a one-dimensional vector (a sketch of this scan is given after the list below). Run-length encoding is applied to compress the sequences of consecutive zeroes. The result is further compressed using a Huffman or an arithmetic coder. The bit stream can be organized in three different ways:

1. Sequential encoding in which each image component is encoded in a single left-to-right, top-to-bottom scan;

2. Progressive encoding in which the image is encoded in multiple scans and the viewer can see the image build up in multiple coarse-to-clear passes (as in many Internet pictures);

3. Hierarchical encoding in which the image is encoded at multiple resolutions so that lower resolution versions may be accessed without first having to decompress the image at its full resolution.
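
The zigzag scan and the run-length coding of zeroes mentioned above can be sketched as follows; the visiting order is generated by walking the anti-diagonals of the block, while the actual JPEG symbol format (end-of-block codes, Huffman categories) is more elaborate than the simple (run, value) pairs used here.

# Sketch: zigzag scanning of an 8x8 block (low frequencies first), followed by
# run-length coding of zero runs, as done after quantization in JPEG-like coders.
N = 8

def zigzag_order(n=N):
    """Return the (row, col) visiting order of the zigzag scan."""
    order = []
    for s in range(2 * n - 1):                       # anti-diagonals, grouped by r + c
        diag = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        order.extend(diag if s % 2 == 1 else reversed(diag))
    return order

def zigzag_scan(block):
    return [block[r][c] for r, c in zigzag_order(len(block))]

def run_length_zeros(coeffs):
    """Encode as (zero_run, value) pairs, ending with an end-of-block marker."""
    pairs, run = [], 0
    for v in coeffs:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    pairs.append("EOB")
    return pairs

if __name__ == "__main__":
    block = [[0] * N for _ in range(N)]
    block[0][0], block[0][1], block[1][0], block[2][1] = 52, -3, 2, 1
    print(run_length_zeros(zigzag_scan(block)))   # [(0, 52), (0, -3), (0, 2), (5, 1), 'EOB']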

The quality (and hence the size) of a JPEG-compressed image depends on the quantization, which is JPEG’s lossy part. By appropriately scaling the standard quantization tables, it is possible to fine-tune the size and the quality of the output. When high performance is necessary, it is also possible to use special user-designed quantization tables.

For further reading see Pennebaker and Mitchell [1993].

JPEG-LS

As we said, JPEG is a lossy image compression technique, but the JPEG standardization group also introduced a lossless mode that is not DCT-based. Essentially, the lossless mode uses a predictive scheme: a linear combination of the three already-coded nearest neighbors is used to predict the current pixel. The prediction error is then encoded using a Huffman or arithmetic coder.

Lossless JPEG was not the result of a rigorous competitive evaluation, as the selection of the DCT-based method was, and its performance is not particularly good. For this reason, in 1994 the ISO/JPEG group issued a call for contributions for a new standard for lossless and near-lossless compression of continuous-tone images (2 to 16 bpp). The standard, called JPEG-LS, is an amalgamation of the various proposals, even though the backbone of the algorithm is based on the HP proposal (LOCO-I).

JPEG-LS is a predictive coder that uses a so-called error-feedback technique. The encoder classifies the prediction context and keeps track of the mean prediction residual observed in each class. After the prediction step, the encoder locates the class to which the current context belongs and adds this mean error to the prediction. The prediction residual is then computed and entropy coded.

Figure 3. JPEG-LS: schematic diagram.

Besides improving the prediction, the advantage of the error-feedback technique is that errors in different contexts have different probability distributions. This information is used to tune the entropy coder to the specific context.
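
As an illustration of the prediction step, here is a minimal sketch of the median edge detector used as the fixed predictor in LOCO-I/JPEG-LS; it predicts the current pixel from its left (a), upper (b) and upper-left (c) neighbors, while the context modeling and error feedback described above are omitted.

# Sketch: the LOCO-I / JPEG-LS "median edge detector" fixed predictor.
#   c  b
#   a  x     a = left, b = above, c = above-left, x = pixel being predicted
def med_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)          # likely a vertical or horizontal edge
    if c <= min(a, b):
        return max(a, b)
    return a + b - c              # smooth region: planar prediction

if __name__ == "__main__":
    print(med_predict(100, 100, 100))   # 100 (flat area)
    print(med_predict(50, 200, 200))    # 50  (edge above)
    print(med_predict(100, 110, 105))   # 105 (smooth gradient)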

For further reading, see Weinberger, Seroussi, and Sapiro [1996].

JBIG

The Joint Bi-level Image Experts Group (JBIG) defined, in 1991, an innovative lossless compression algorithm for B/W images. It is a predictive coder that uses a pool (context template) of already-coded neighbor pixels to guess the value of the current pixel. The algorithm simply concatenates the values of the template pixels to identify the context in which the current pixel is going to be predicted. The index of the context is used to choose which probability distribution should be used by the arithmetic coder. It is clear that the coder is effective only if different contexts have different probability distributions.

Figure 4. Prediction templates for JBIG (serial mode).

The “A” pixel in the figure above is an adaptive pixel. Its position is allowed to change as the image is processed. The use of the adaptive pixel improves compression by identifying repeated occurrences of the same block of information.
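
Context formation can be sketched as follows: the bits of a few already-coded neighbors are concatenated into an integer index that selects the probability estimate used by the arithmetic coder. The five-pixel template below is an illustrative simplification of the actual JBIG templates of Figure 4.

# Sketch: forming a context index from already-coded neighbor pixels of a
# bi-level image. The 5-pixel template here is illustrative only.
def context_index(image, r, c):
    """Concatenate neighbor bits into an integer used to select a probability model."""
    def px(i, j):
        # Only already-coded pixels (in raster order) are available; others read as 0.
        if 0 <= i < len(image) and 0 <= j < len(image[0]) and (i < r or (i == r and j < c)):
            return image[i][j]
        return 0
    template = [px(r, c - 1), px(r - 1, c - 1), px(r - 1, c), px(r - 1, c + 1), px(r, c - 2)]
    index = 0
    for bit in template:
        index = (index << 1) | bit
    return index

if __name__ == "__main__":
    img = [[0, 0, 1, 1],
           [0, 1, 1, 0]]
    print(context_index(img, 1, 2))   # context index for the pixel at row 1, col 2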

JBIG is also able to operate in progressive mode: an approximate version of the original image is transmitted first and then improved over time. This is achieved by subdividing the image into different layers, each of which is a representation of the original image at a different resolution. The lowest-resolution layer is called the starting layer and is the first to be coded. The other layers (differential layers) are then encoded using information from the previously encoded layer. If the resolution-reduction algorithm used by the progressive mode suits the input image well, this mode of operation can be very effective: most of the pixels in the differential layers may be predicted without error, so no information has to be encoded for them.

Progressive and sequential modes are completely compatible. This compatibility is achieved by subdividing the original image into horizontal stripes. Each stripe is coded separately and then transmitted. An image that has been coded in progressive mode may be decoded in sequential mode by decoding each stripe, to its full resolution, starting from the one at the top of the image. Progressive decoding may be obtained by decoding one layer at a time.

JBIG may also be used successfully to code images with more than one bit per pixel (gray-scale or color images). The image is decomposed into bit-planes (i.e., if the image is 4 bpp, each pixel is represented by the binary string b3b2b1b0, and bit-plane i stores only the bit bi of each pixel), and each plane is coded separately. In principle JBIG is able to work with images of up to 255 bpp, but in practice the algorithm performs well only on images with at most 8 bpp.

For further reading, see Arps and Truong [1994].

6. Performance evaluation

The performance of an image compression algorithm is mainly determined by two characteristics: the compression ratio and the error introduced by the encoding. While a lossless compressor keeps the quality of the encoded image fixed (and equal to the original) while minimizing its size, a lossy algorithm must trade off the two quantities, looking for a good compromise.

A fundamental problem in lossy compression is controlling the error introduced by the encoding process. Several quality metrics are commonly used, and almost all of them are based on the Mean Squared Error (MSE).

The mean squared error between a given image x and its encoded version y is the average of the squared differences between the corresponding samples of the two signals:

MSE = (1/N) Σ_i (x_i - y_i)^2

where N is the number of pixels.

The MSE has a simple mathematical expression and gives a good measure of the random error introduced in the compression; this is enough for many applications, but when encoded images are mainly used by humans, the use of distortion measures based on the MSE may give misleading results. The poor correlation of the MSE with the perceived distortion is due to the fact that the human visual system is more sensitive to structured than to random coding errors.
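
As an illustration, a minimal sketch that computes the MSE and the peak signal-to-noise ratio (PSNR), the logarithmic measure commonly derived from it for 8-bit images:

# Sketch: mean squared error and PSNR between an original and a reconstructed
# 8-bit image, both given as flat lists of pixel values.
import math

def mse(original, reconstructed):
    assert len(original) == len(reconstructed)
    return sum((x - y) ** 2 for x, y in zip(original, reconstructed)) / len(original)

def psnr(original, reconstructed, peak=255.0):
    error = mse(original, reconstructed)
    return float("inf") if error == 0 else 10.0 * math.log10(peak * peak / error)

if __name__ == "__main__":
    orig = [10, 20, 30, 200, 250]
    reco = [12, 19, 33, 198, 247]
    print(round(mse(orig, reco), 2), round(psnr(orig, reco), 2))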

For further reading, see the book of Gonzalez and Wintz [1987].

7. Coding Artifacts

When a very high compression rate has to be achieved, or when a complex image has to be encoded, lossy compression methods sometimes introduce visible artifacts that can make the perceived quality very poor. Commonly observed artifacts in low bit rate image compression are:

1. Blocking: occurs in techniques that involve partitioning the image into blocks, and it appears to be the major visual defect of coded images. It happens when a gradual change in the intensity or color of a region is coarsely quantized. This results in periodic discontinuities, and the image appears segmented into its constituent blocks.

2. Blurring: it appears in different forms, such as smoothed edges due to the loss of high-frequency components, or blurred texture and color due to the loss of resolution.

3. Ringing effect: it is observable as periodic pseudo-edges around the original edges of shapes in the compressed image. Ringing results from the improper truncation of high-frequency components and is also known as the Gibbs effect.

4. Texture deviation: it appears as granular noise or the “dirty window” effect, and it is caused by loss of fidelity in the mid-frequency components.

For further reading, see the book of Woods [1991].

8. Related Subjects

For a general introduction to image compression, see the books of Sayood [1996] and Salomon [1997]. For an introduction to speech compression, see the book of Kondoz [1994]. For an introduction to video compression, see the book of Bhaskaran and Konstantinides [1995], and specifically for the MPEG standard, the books of Haskell, Puri, and Netravali [1997] and Mitchell, Pennebaker, Fogg, and LeGall [1997]. For an introduction to information theory, see the book of Cover and Thomas [1991]. For an introduction to Q-Coding, see Pennebaker, Mitchell, Langdon, and Arps [1988].

9. References

R. Arps [1980]. "Bibliography on Binary Image Compression", Proc. of the IEEE, 68:7, 922-924.

R. B. Arps and T. K. Truong [1994]. “Comparison of International Standards for Lossless Still Image Compression”, Special Issue on Data Compression, J. Storer ed., Proceedings of the IEEE 82:6, 889-899.

M. Barnsley and L. Hurd [1993]. Fractal Image Compression, AK Peters, New York, NY.

T. Bell, J. Cleary, and I. Witten [1990]. Text Compression, Prentice Hall, Englewood Cliffs, NJ.

V. Bhaskaran and K. Konstantinides [1995]. Image and Video Compression Standards, Kluwer Academic Press, Boston, MA.

T. Cover and J. Thomas [1991]. Elements of Information Theory, Wiley-Interscience Publication, Yorktown Heights, NY.

Y. Fisher, Ed. [1995]. Fractal Image Compression: Theory and Application, Springer-Verlag, New York, NY.

R. Gallager [1978]. "Variations on a Theme by Huffman", IEEE Transactions on Information Theory 24:6, 668-674.

R. Gonzalez and P. Wintz [1987]. Digital Image Processing, Addison-Wesley Publishing, Reading, MA.

A. Gersho [1994]. “Advances in Speech and Audio Compression”, Special Issue on Data Compression, J. Storer ed., Proceedings of the IEEE 82:6, 900-918.

A. Gersho and R. M. Gray [1992]. Vector Quantization and Signal Compression, Kluwer Academic Publishers, Norwell, MA.

B. G. Haskell, A. Puri, and A. N. Netravali [1997]. Digital Video: An Introduction to MPEG-2, Chapman and Hall, New York, NY.

D. Huffman [1952]. "A Method for the Construction of Minimum-Redundancy Codes", Proceedings of the IRE 40, 1098-1101.

A. Kondoz [1994]. Digital Speech, John Wiley and Sons, New York, NY.

Y. Linde, A. Buzo, and R. M. Gray [1980]. "An algorithm for vector quantizer design", IEEE Transactions on Communications 28, 84-95.

J. Mitchell, W. Pennebaker, C. Fogg, and D. LeGall [1997]. MPEG Video Compression Standard, Chapman and Hall, New York, NY.

W. Pennebaker and J. Mitchell [1993]. JPEG Still Image Data Compression Standard, Van Nostrand Reinhold, New York, NY.

W. Pennebaker, J. Mitchell, G. Langdon, and R. Arps [1988]. "An overview of the basic principles of the Q-coder", IBM Journal of Research and Development 32:6, 717-726.

K. R. Rao and P. Yip [1990]. Discrete Cosine Transform, Academic Press, San Diego, CA.

D. Salomon [1997]. Data Compression: The Complete Reference, Springer-Verlag, New York, NY.

K. Sayood [1996]. Introduction to Data Compression, Morgan Kaufmann Publishers, San Francisco, CA.

J. A. Storer [1988]. Data Compression: Methods and Theory, Computer Science Press (a subsidiary of W. H. Freeman Press), New York, NY.

J. A. Storer, Ed. [1992]. Image and Text Compression, Kluwer Academic Press, Norwell, MA.

J. A. Storer and T. Szymanski [1978]. "The Macro Model for Data Compression", Proceedings Tenth Annual ACM Symposium on the Theory of Computing, San Diego, CA, 30-39.

J. A. Storer and T. Szymanski [1982]. "Data Compression Via Textual Substitution", Journal of the ACM 29:4 928-951.

J. A. Storer and J. H. Reif [1997]. “Low-Cost Prevention of Error Propagation for Data Compression with Dynamic Dictionaries”, Proceedings IEEE Data Compression Conference, Snowbird, UT, 171-180.

M. Vetterli and J. Kovacevic [1995]. Wavelets and Subband Coding, Prentice Hall, Englewood Cliffs, NJ.

M. J. Weinberger, G. Seroussi, and G. Sapiro [1996]. “LOCO-I: A Low Complexity, Context-Based, Lossless Image Compression Algorithm”, Proceedings IEEE Data Compression Conference, Snowbird, UT, 140-149.

J. Woods [1991]. Subband Image Coding, Kluwer Academic Publishers.
