Archive for November, 2004

Wavelets for Image Compression

Posted on November 12, 2004. Filed under: Computer Science |

Wavelets are widely used for image compression; following are some related links.
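First, though, a quick taste of why wavelets suit compression: below is a one-level 1D Haar transform in C++ (my own minimal sketch, not code from any of the links). A 2D transform applies the same step to rows and then columns; smooth regions produce near-zero detail coefficients, which coders such as EZW and SPIHT encode very cheaply.

    #include <iostream>
    #include <vector>

    // One level of the (unnormalized) Haar transform: each pair (a, b)
    // becomes an average (a+b)/2 and a detail (a-b)/2.
    void haarStep(std::vector<double>& x) {
        std::vector<double> tmp(x.size());
        size_t half = x.size() / 2;
        for (size_t i = 0; i < half; ++i) {
            tmp[i]        = (x[2 * i] + x[2 * i + 1]) / 2.0;  // average (low band)
            tmp[half + i] = (x[2 * i] - x[2 * i + 1]) / 2.0;  // detail (high band)
        }
        x = tmp;
    }

    int main() {
        std::vector<double> row = {8, 8, 8, 8, 6, 6, 2, 2};  // one image row
        haarStep(row);  // averages: 8 8 6 2, details: 0 0 0 0
        for (double v : row) std::cout << v << " ";
        std::cout << "\n";
    }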

Image Compression with Set Partitioning in Hierarchical Trees
http://www.cipr.rpi.edu/research/SPIHT/

Data Compression Bibliography
http://www.ee.washington.edu/research/dcl/biblio.html

Introduction to Wavelets for Image Compression
http://ai.fri.uni-lj.si/aleks/Wavelets/

Wavelet Filter Evaluation for Image Compression
http://www.icsl.ucla.edu/~ipl/Publications/Journal/wavelet_filter_evaluation_for_image_compression.pdf

EZW encoding
http://pagesperso-orange.fr/polyvalens/clemens/ezw/ezw.html

An Implementation of EZW
http://pesona.mmu.edu.my/~msng/EZW.html

Wavelets and Signal Processing
http://www.bearcave.com/misl/misl_tech/wavelets/index.html

VcDemo – Image and Video Compression Learning Tool
http://www-ict.its.tudelft.nl/~inald/vcdemo/

Eduard Kriegler’s EZW Encoder
http://esl.ee.sun.ac.za/~kriegler/index.html

GWIC – GNU Wavelet Image Codec
http://www.jole.fi/research/gwic/

The Wavelet Tutorial
http://engineering.rowan.edu/~polikar/WAVELETS/WTtutorial.html

Wavelet Compression Example
http://www.cs.sfu.ca/CourseCentral/365/li/material/misc/wavelet.html

A brief guide to wavelet sources
http://www-ocean.tamu.edu/~baum/wavelets.html.gz


The Open Compression Toolkit for C++

Posted on November 11, 2004. Filed under: C/C++, Programming |

The Open Compression Toolkit is a set of modular C++ classes and utilities for implementing and testing compression algorithms.

  • Simple interface and skeleton code for creating new compression algorithms.
  • Complete testing framework for validating and comparing new algorithms.
  • Support for algorithms that use external dictionaries/headers.
  • Utility classes and sample code for bitio, frequency counting, etc.

http://octane.sourceforge.net/
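As a rough sketch of the shape such a plug-in interface can take (hypothetical names of my own, not the actual Octane API): a new algorithm implements one small abstract class, and a shared harness can then round-trip test it.

    #include <cstdint>
    #include <string>
    #include <vector>

    // Hypothetical illustration only -- these names are NOT the real
    // Octane API. New algorithms implement this small interface.
    class Compressor {
    public:
        virtual ~Compressor() = default;
        virtual std::string name() const = 0;
        virtual std::vector<uint8_t> compress(const std::vector<uint8_t>& in) = 0;
        virtual std::vector<uint8_t> decompress(const std::vector<uint8_t>& in) = 0;
    };

    // A do-nothing "algorithm" showing how little skeleton code a new
    // codec needs before a test harness can exercise it.
    class NullCompressor : public Compressor {
    public:
        std::string name() const override { return "null"; }
        std::vector<uint8_t> compress(const std::vector<uint8_t>& in) override { return in; }
        std::vector<uint8_t> decompress(const std::vector<uint8_t>& in) override { return in; }
    };

    // The kind of round-trip check a testing framework would run.
    bool roundTrips(Compressor& c, const std::vector<uint8_t>& data) {
        return c.decompress(c.compress(data)) == data;
    }

    int main() {
        NullCompressor null;
        std::vector<uint8_t> data = {1, 2, 2, 3};
        return roundTrips(null, data) ? 0 : 1;
    }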


Data Compression Bibliography

Posted on November 11, 2004. Filed under: Computer Science |

The University of Washington has a nice bibliography here, with pointers to books on Data Compression, VQ, Wavelets, and Information Theory.
http://www.ee.washington.edu/research/dcl/biblio.html


Data Compression Researchers

Posted on November 10, 2004. Filed under: Computer Science |

A listing of data compression researchers from the Google directory.
http://directory.google.com/Top/Computers/Algorithms/Compression/Researchers/


Compression via Arithmetic Coding in Java

Posted on November 10, 2004. Filed under: Computer Science, Java |

Bob Carpenter has created a nice Java package that implements a PPM/arithmetic coding compression system. This page includes links to the source code, javadocs, and a fair amount of tutorial material. Very complete!

http://www.colloquial.com/ArithmeticCoding/


Run Length Encoding

Posted on November 9, 2004. Filed under: Computer Science |

Run Length Encoding (RLE) is a very simple form of data compression. The principle is that every sequence of repeated data values (such a sequence is called a run) is replaced with a count and a single copy of the value. This intuitive principle works best on data in which long runs of repeated values occur; RLE is usually applied to files that contain large numbers of consecutive occurrences of the same byte pattern.
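As a minimal sketch of the principle (my own illustration in C++, not from the links below), every run becomes a (count, value) byte pair, with counts capped at 255 so they fit in one byte. Real RLE formats differ mainly in how they mark runs versus literal bytes.

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    // Encode each run as a (count, value) pair; counts are capped at
    // 255 so they fit in a single byte.
    std::vector<uint8_t> rleEncode(const std::vector<uint8_t>& in) {
        std::vector<uint8_t> out;
        for (size_t i = 0; i < in.size(); ) {
            uint8_t value = in[i];
            size_t run = 1;
            while (i + run < in.size() && in[i + run] == value && run < 255)
                ++run;
            out.push_back(static_cast<uint8_t>(run));
            out.push_back(value);
            i += run;
        }
        return out;
    }

    std::vector<uint8_t> rleDecode(const std::vector<uint8_t>& in) {
        std::vector<uint8_t> out;
        for (size_t i = 0; i + 1 < in.size(); i += 2)
            out.insert(out.end(), in[i], in[i + 1]);  // expand count copies of value
        return out;
    }

    int main() {
        std::string s = "aaaabbbcc";
        std::vector<uint8_t> data(s.begin(), s.end());
        std::vector<uint8_t> packed = rleEncode(data);  // (4,'a') (3,'b') (2,'c')
        std::vector<uint8_t> restored = rleDecode(packed);
        std::cout << packed.size() << " bytes packed, round-trip "
                  << (restored == data ? "ok" : "failed") << "\n";
    }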

Following are some related URLs:
RLE – Run Length Encoding
http://web.archive.org/web/20020214085725/http://www.rasip.fer.hr/research/compress/algorithms/fund/rl/index.html

http://www.arturocampos.com/ac_rle.html

http://www.dcs.ed.ac.uk/home/mxr/gfx/2d/RLE.txt

A Run Length Encoding Scheme For Block Sort Transformed Data
http://www.geocities.com/m99datacompression/papers/rle/rle.html


Arithmetic Coding

Posted on November 8, 2004. Filed under: Computer Science |

Arithmetic coding (AC) is a special kind of entropy coding. Unlike Huffman coding, arithmetic coding doesn’t use a discrete number of bits for each symbol. For nearly every source it approaches the optimum compression in the sense of Shannon’s theorem, and it is well suited to adaptive models. The biggest drawback of arithmetic coding is its low speed, caused by the several multiplications and divisions needed for each symbol. The main idea behind arithmetic coding is to assign each symbol an interval. Starting with the interval [0..1), each interval is divided into several subintervals whose sizes are proportional to the current probabilities of the corresponding symbols of the alphabet. The subinterval of the coded symbol is then taken as the interval for the next symbol. The output is the interval of the last symbol. Implementations write the bits of this interval as soon as they are certain.
A fast variant of arithmetic coding, which uses fewer multiplications and divisions, is the range coder, which works byte-oriented. The compression rate of a range coder is only a little worse than that of pure arithmetic coding, and in many real implementations the difference is not noticeable.
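Here is a toy illustration of the interval narrowing in C++, with an assumed fixed three-symbol model. It only shows the subdivision step; real coders use integer arithmetic with renormalization and emit leading bits of the interval as soon as they are certain.

    #include <iostream>
    #include <map>
    #include <string>

    struct Range { double low, high; };

    int main() {
        // Assumed model: 'a' owns [0.0,0.5), 'b' [0.5,0.8), 'c' [0.8,1.0),
        // i.e. subinterval sizes proportional to the symbol probabilities.
        std::map<char, Range> model = {
            {'a', {0.0, 0.5}}, {'b', {0.5, 0.8}}, {'c', {0.8, 1.0}}};

        double low = 0.0, high = 1.0;
        for (char c : std::string("abca")) {
            double width = high - low;           // size of the current interval
            high = low + width * model[c].high;  // shrink it to the coded
            low  = low + width * model[c].low;   // symbol's subinterval
        }
        // Any number inside [low, high) identifies the whole message.
        std::cout << "message interval: [" << low << ", " << high << ")\n";
    }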

Some useful links follow:

Arithmetic Coding + Statistical Modeling = Data Compression
http://www.dogma.net/markn/articles/arith/part1.htm

The Arithmetic Coding Page
http://www.cs.mu.oz.au/~alistair/arith_coder/

http://www.arturocampos.com/ac_arithmetic.html

http://www.data-compression.info/Algorithms/AC/


Huffman Coding

Posted on November 8, 2004. Filed under: Computer Science |

In computer science and information theory, Huffman coding is an entropy encoding algorithm used for lossless data compression. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file) where the variable-length code table has been derived in a particular way based on the estimated probability of occurrence for each possible value of the source symbol. It was developed by David A. Huffman while he was a Ph.D. student at MIT, and published in the 1952 paper “A Method for the Construction of Minimum-Redundancy Codes”.
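A compact sketch of the construction in C++ (the sample text is an assumption of mine, and nodes are deliberately leaked to keep it short): repeatedly merge the two least frequent nodes, then read each symbol’s code off its root-to-leaf path.

    #include <iostream>
    #include <map>
    #include <queue>
    #include <string>
    #include <vector>

    // One node of the code tree; interior nodes have two children,
    // leaves carry a source symbol.
    struct Node {
        long freq;
        char sym;
        Node *left, *right;
    };

    struct ByFreq {  // orders the priority queue as a min-heap on frequency
        bool operator()(const Node* a, const Node* b) const { return a->freq > b->freq; }
    };

    // Walk the finished tree; the path to each leaf (0 = left, 1 = right)
    // is that symbol's code.
    void assign(const Node* n, const std::string& prefix, std::map<char, std::string>& codes) {
        if (!n->left) { codes[n->sym] = prefix.empty() ? "0" : prefix; return; }
        assign(n->left, prefix + "0", codes);
        assign(n->right, prefix + "1", codes);
    }

    int main() {
        std::string text = "this is an example of a huffman tree";  // assumed sample
        std::map<char, long> freq;
        for (char c : text) ++freq[c];

        std::priority_queue<Node*, std::vector<Node*>, ByFreq> pq;
        for (const auto& [sym, f] : freq) pq.push(new Node{f, sym, nullptr, nullptr});
        while (pq.size() > 1) {  // merge the two least frequent subtrees
            Node* a = pq.top(); pq.pop();
            Node* b = pq.top(); pq.pop();
            pq.push(new Node{a->freq + b->freq, 0, a, b});
        }

        std::map<char, std::string> codes;
        assign(pq.top(), "", codes);
        for (const auto& [sym, code] : codes)  // frequent symbols get short codes
            std::cout << "'" << sym << "' -> " << code << "\n";
    }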

References:
Lossless Compression Algorithms (Entropy Encoding)
http://www.cs.cf.ac.uk/Dave/Multimedia/node207.html

Huffman Coding: A CS2 Assignment
http://www.cs.duke.edu/csed/poop/huff/info/

The Huffman Compression Algorithm
http://www.howtodothings.com/showarticle.asp?article=313

Dynamic Huffman Coder
http://www.geocities.com/malbrain/vitter_c.html

Canonical Huffman Coder Construction
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&threadm=370afe3d.0303041329.23f0affe%40posting.google.com

C Library to search over compressed texts
http://butirro.di.unipi.it/~ferrax/CompressedSearch/

Michael Dipperstein’s Huffman Code Page
http://michael.dipperstein.com/huffman/index.html

libhuffman – Huffman encoder/decoder library
http://huffman.sourceforge.net/

Compression and Encryption Sources
http://www.larkin.lanit.ru/compress/compr.htm

Huffman Coding Class
http://codeproject.com/cpp/HandyHuffmanCoding.asp

Adaptive Huffman coding modifies the table as characters are encoded, which allows the encoder to adapt to changing conditions in the input data. Adaptive decoders don’t need a copy of the table when decoding: they start with the same fixed table and update it as characters are read in.
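A minimal sketch of the encoder side in C++, under the simplifying assumption that the whole code table is rebuilt from the counts after each symbol; real adaptive coders (FGK, Vitter) update the tree incrementally instead. The decoder stays in sync by making the identical count update after decoding each symbol.

    #include <iostream>
    #include <map>
    #include <queue>
    #include <string>
    #include <utility>
    #include <vector>

    // Rebuild a Huffman code table from the current symbol counts.
    std::map<char, std::string> buildCodes(const std::map<char, long>& freq) {
        struct Node { long f; char s; int l, r; };
        std::vector<Node> pool;
        auto cmp = [&](int a, int b) { return pool[a].f > pool[b].f; };
        std::priority_queue<int, std::vector<int>, decltype(cmp)> pq(cmp);
        for (const auto& [s, f] : freq) {
            pool.push_back({f, s, -1, -1});
            pq.push((int)pool.size() - 1);
        }
        while (pq.size() > 1) {  // merge the two least frequent nodes
            int a = pq.top(); pq.pop();
            int b = pq.top(); pq.pop();
            pool.push_back({pool[a].f + pool[b].f, 0, a, b});
            pq.push((int)pool.size() - 1);
        }
        std::map<char, std::string> codes;
        std::vector<std::pair<int, std::string>> stack{{(int)pq.top(), ""}};
        while (!stack.empty()) {  // read codes off root-to-leaf paths
            auto [i, prefix] = stack.back();
            stack.pop_back();
            if (pool[i].l < 0) { codes[pool[i].s] = prefix.empty() ? "0" : prefix; continue; }
            stack.push_back({pool[i].l, prefix + "0"});
            stack.push_back({pool[i].r, prefix + "1"});
        }
        return codes;
    }

    int main() {
        // Fixed initial table: every symbol of the assumed alphabet starts
        // with count 1, so encoder and decoder agree before any data flows.
        std::map<char, long> counts = {{'a', 1}, {'b', 1}, {'c', 1}};
        std::string bits;
        for (char c : std::string("abcaaa")) {
            bits += buildCodes(counts)[c];  // encode with the current table
            ++counts[c];                    // then update; the decoder makes
        }                                   // the same update after decoding c
        std::cout << bits << "\n";
    }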

References:
Design and Analysis of Dynamic Huffman Codes
http://www.cs.duke.edu/~jsv/Papers/catalog/node60.html
Adaptive Huffman Encoding
http://www.xcf.berkeley.edu/~ali/K0D/Algorithms/huff/
http://www.ics.uci.edu/~dan/pubs/DC-Sec4.html

