Assignment 6: Huffman Encoding - Stanford University


Thanks to Owen Astrachan (Duke) and Julie Zelenski. Updates by Keith Schwarz, Stuart Reges, Marty Stepp, Chris Piech, Chris Gregg, Marissa Gemma, and Brahm Capoor.

Due: Wednesday, August 9 at 12:00 PM (noon). You may work in pairs for this assignment.

Assignment Overview and Starter Files

For this assignment, you will build a file compression algorithm that uses binary trees and priority queues. Your program will allow the user to compress and decompress files using the standard Huffman algorithm for encoding and decoding. Along the way, you'll also implement your own hash map, which you'll then put to use in implementing the Huffman encoding.

Huffman encoding is an example of a lossless compression algorithm that works particularly well on text but can, in fact, be applied to any type of file. Using Huffman encoding to compress a file can reduce the storage it requires by a third, half, or even more, in some situations. You'll be impressed with the compression algorithm, and equally impressed that you can implement the core of a tool much like compression utilities you already use.

The starter code for this project is available as a ZIP archive. A demo is available as a JAR. Note that the JAR will look for files in the same directory.

Starter Code

Demo Jar

You must turn in the following files:

1. mymap.cpp: code to implement your hash map
2. mymap.h: header file containing declarations for your map
3. encoding.cpp: code to perform Huffman encoding and decoding
4. secretmessage.huf: a message from you to your section leader, compressed by your algorithm

We provide you with several other support files, but you should not modify them. For example, we provide you with a huffmanmain.cpp that contains the program's overall text menu system; you must implement the functions it calls to perform the various file compression/decompression operations. The section on Implementation details below explains where in the files to program your solution.

Related reading: Huffman coding (Wikipedia); ASCII table


Huffman Encoding

Huffman encoding is an algorithm devised by David A. Huffman of MIT in 1952 for compressing textual data to make a file occupy a smaller number of bytes. Though it is a relatively simple compression algorithm, Huffman is powerful enough that variations of it are still used today in computer networks, fax machines, modems, HDTV, and other areas.

Normally textual data is stored in a standard format of 8 bits per character, using an encoding called ASCII that maps each character to a binary integer value from 0-255. The idea of Huffman encoding is to abandon the rigid 8-bits-per-character requirement, and instead to use binary encodings of different lengths for different characters. The advantage of doing this is that if a character occurs frequently in the file, such as the very common letter 'e', it could be given a shorter encoding (i.e., fewer bits), making the overall file smaller. The tradeoff is that some characters may need to use encodings that are longer than 8 bits, but this is reserved for characters that occur infrequently, so the extra cost is worth it, on the balance.

The table below compares ASCII values of various characters to possible Huffman encodings for some English text. Frequent characters such as space and 'e' have short encodings, while rarer characters (like 'z') have longer ones.

Character | ASCII Value | ASCII Binary | Huffman Binary
----------|-------------|--------------|---------------
' '       | 32          | 00100000     | 10
'a'       | 97          | 01100001     | 0001
'b'       | 98          | 01100010     | 0111010
'c'       | 99          | 01100011     | 001100
'e'       | 101         | 01100101     | 1100
'z'       | 122         | 01111010     | 00100011010

The steps you'll take to perform a Huffman encoding of a given text source file into a destination compressed file are:


1. count frequencies: Examine a source file's contents and count the number of occurrences of each character, and store them in a map using the MyMap class you'll write.

2. build encoding tree: Build a binary tree with a particular structure, where each node represents a character and its count of occurrences in the file. A priority queue is used to help build the tree along the way.

3. build encoding map: Traverse the binary tree to discover the binary encodings of each character.

4. encode data: Re-examine the source file's contents, and for each character, output the encoded binary version of that character to the destination file.

Your program's output format should exactly match example logs of execution here:

Example Run #1, Example Run #2, Example Run #3, Example Run #4, Example Run #5, Example Run #6, Example Run #7, Example Run #8

Encoding a File Step 1: Counting Frequencies

As an example, suppose we have a file named example.txt whose contents are: ab ab cab. In the original file, this text occupies 10 bytes (80 bits) of data, including spaces and a special "end-of-file" (EOF) byte.

In Step 1 of Huffman's algorithm, a count of each character is computed. This frequency table is represented as a map: {' ':2, 'a':3, 'b':3, 'c':1, EOF:1}

Note that all characters must be included in the frequency table, including spaces, any punctuation, and the EOF marker. Below you will find details on implementing the class MyMap, which you must use to store this encoding map as a hash map.
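As a rough sketch of this counting step, the function below builds a frequency table using std::map as a stand-in for the MyMap class you'll write. The function name and the PSEUDO_EOF constant are illustrative choices of ours, not necessarily the names in the starter code:

```cpp
#include <cstdio>     // for the EOF macro
#include <istream>
#include <map>
#include <sstream>

// Illustrative sentinel for the end-of-file marker; stored as an int so it
// can live outside the 0-255 character range. The starter code may use a
// different name for this constant.
const int PSEUDO_EOF = 256;

// Count how many times each character occurs in the input. std::map stands
// in here for the MyMap hash map you will implement yourself.
std::map<int, int> buildFrequencyTable(std::istream& input) {
    std::map<int, int> freq;
    int ch;
    while ((ch = input.get()) != EOF) {
        freq[ch]++;                // one more occurrence of this character
    }
    freq[PSEUDO_EOF] = 1;          // exactly one EOF marker per file
    return freq;
}
```

Running it on the contents "ab ab cab" produces exactly the table shown above: {' ':2, 'a':3, 'b':3, 'c':1, EOF:1}.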


Encoding a File Step 2: Building an Encoding Tree

Step 2 of Huffman's algorithm builds an encoding tree as follows. First, we place our counts into node structs (out of which we will build the binary tree); each node stores a character and a count of its occurrences. Then, we put the nodes into a priority queue, which stores them in prioritized order, where smaller counts have a higher priority. This means that characters with lower counts will come out of the queue sooner, as the figure below shows. (As you will see when you implement it, the priority queue is somewhat arbitrary in how it breaks ties, which is why in this example 'c' can end up before EOF, while 'a' is before 'b'.)

Now, to construct the tree, the algorithm repeatedly removes two nodes from the front of the queue (i.e., the two nodes with the smallest frequencies) and joins them into a new node whose frequency is their sum. The two nodes are positioned as children of the new node; the first node removed becomes the left child, and the second becomes the right. The new node is re-inserted into the queue in sorted order (and we can observe that its priority will now be less urgent, since its frequency is the sum of its children's frequencies). This process is repeated until the queue contains only one node, with all the others as its descendants. This will be the root of our finished Huffman tree. The following diagram illustrates this process. Notice that the nodes with low frequencies end up far down in the tree, and nodes with high frequencies end up near the root of the tree. As we shall see, this structure can be used to create an efficient encoding in the next step.
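The join-and-reinsert loop can be sketched with std::priority_queue. The Node struct and buildEncodingTree name here are our own simplified stand-ins for the provided HuffmanNode and the function you'll write, and this sketch ignores memory cleanup for brevity:

```cpp
#include <queue>
#include <vector>

// Simplified stand-in for the provided HuffmanNode struct.
struct Node {
    int character;   // character at this node (-1 for internal nodes)
    int count;       // frequency; for internal nodes, the sum of the children's
    Node* zero;      // left child, followed on a 0 bit
    Node* one;       // right child, followed on a 1 bit
};

// Order nodes so the smallest count comes out of the queue first.
struct ByCount {
    bool operator()(const Node* a, const Node* b) const {
        return a->count > b->count;   // makes priority_queue a min-heap on count
    }
};

// Repeatedly join the two least-frequent nodes until one tree remains.
Node* buildEncodingTree(const std::vector<Node*>& leaves) {
    std::priority_queue<Node*, std::vector<Node*>, ByCount> pq(leaves.begin(), leaves.end());
    while (pq.size() > 1) {
        Node* left = pq.top(); pq.pop();    // smallest count becomes left child
        Node* right = pq.top(); pq.pop();   // next smallest becomes right child
        pq.push(new Node{-1, left->count + right->count, left, right});
    }
    return pq.top();   // the root of the finished Huffman tree
}
```

For the "ab ab cab" frequencies, the root's count comes out to 10, the total number of bytes (nine characters plus the EOF marker).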


Encoding a File Step 3: Building an Encoding Map

The Huffman code for each character is derived from your binary tree by thinking of each left branch as a bit value of 0 and each right branch as a bit value of 1, as shown in the diagram below:


The code for each character can be determined by traversing the tree. To reach ' ', we go left twice from the root, so the code for ' ' is 00. Similarly, the code for 'c' is 010, the code for EOF is 011, the code for 'a' is 10, and the code for 'b' is 11. By traversing the tree, we can produce a map from characters to their binary representations. Although these representations consist of binary digits, they can have arbitrary lengths, so we will store each one as a string. For this tree, the encoding map would look like this: {' ':"00", 'a':"10", 'b':"11", 'c':"010", EOF:"011"}
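The traversal that discovers these codes is a short recursion: carry the path of 0s and 1s taken so far, and record it whenever a leaf is reached. A minimal sketch, reusing our simplified Node struct (the function name is ours, not necessarily the starter code's):

```cpp
#include <map>
#include <string>

// Simplified stand-in for the provided HuffmanNode struct.
struct Node {
    int character;   // character at this node (-1 for internal nodes)
    int count;       // frequency of that character
    Node* zero;      // left child, followed on a 0 bit
    Node* one;       // right child, followed on a 1 bit
};

// Walk the tree, appending '0' for each left branch and '1' for each right
// branch; on reaching a leaf, record the accumulated bit string as that
// character's code.
void buildEncodingMap(Node* node, const std::string& path,
                      std::map<int, std::string>& out) {
    if (node == nullptr) return;
    if (node->zero == nullptr && node->one == nullptr) {
        out[node->character] = path;   // leaf: this is the character's code
        return;
    }
    buildEncodingMap(node->zero, path + "0", out);
    buildEncodingMap(node->one, path + "1", out);
}
```

Applied to the tree above, this produces exactly the map {' ':"00", 'a':"10", 'b':"11", 'c':"010", EOF:"011"}.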


Encoding a File Step 4: Encoding the Textual Data

Using the encoding map, we can translate the file's text into a shorter binary representation. With the preceding encoding map, the text "ab ab cab" would be encoded as: 1011001011000101011011

The following table breaks down the char-to-binary mapping in more detail. The overall encoded contents of the file require 22 bits, or a little under 3 bytes, compared to the original file size of 10 bytes.
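Given such a map, the encoding step itself is just a loop that concatenates each character's code, followed by the EOF marker's code. A minimal sketch (the function name and the use of 256 as the EOF key are our own illustrative choices):

```cpp
#include <map>
#include <string>

// Look up each character's bit string in the encoding map and concatenate
// them, then append the code for the EOF marker.
std::string encodeText(const std::string& text,
                       const std::map<int, std::string>& codes,
                       int eofMarker) {
    std::string bits;
    for (char ch : text) {
        bits += codes.at(ch);        // this character's Huffman code
    }
    bits += codes.at(eofMarker);     // terminate with the EOF marker's code
    return bits;
}
```

With the example map, encodeText("ab ab cab", ...) yields the 22-bit string 1011001011000101011011 shown above.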

Since the character encodings have different lengths, often the length of a Huffman-encoded file does not come out to an exact multiple of 8 bits. Files are stored as sequences of whole bytes, so in cases like this the remaining bits of the last byte are filled with 0s. You do not need to worry about implementing this; it is handled by the underlying file stream.
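The padding arithmetic is just a ceiling division: 22 bits round up to 3 bytes. As a one-line sketch (the function name is ours, purely for illustration):

```cpp
// A file occupies a whole number of bytes, so a bit count rounds up:
// any leftover bits in the last byte get padded with 0s.
int bytesNeeded(int bits) {
    return (bits + 7) / 8;   // ceiling of bits / 8 using integer division
}
```

For example, the 22-bit encoding above needs 3 bytes, while the original 80-bit file occupied 10.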

It might worry you that the characters are stored without any delimiters between them, since their encodings can be different lengths and characters can cross byte boundaries, as with 'a' at the end of the second byte. But this will not cause problems in decoding the file, because Huffman encodings by definition have a useful prefix property where no character's encoding can ever occur as the start of another's encoding. (If it's not clear to you how this works, trace through the example tree above, or one produced by your own algorithm, to see for yourself.)


Decoding a File

You can use a Huffman tree to decode text that was previously encoded with its binary patterns. The decoding algorithm is to read each bit from the file, one at a time, and use this bit to traverse the Huffman tree. If the bit is a 0, you move left in the tree. If the bit is 1, you move right. You do this until you hit a leaf node. Leaf nodes represent characters, so once you reach a leaf, you output that character. For example, suppose we are given the same encoding tree above, and we are asked to decode a file containing the following bits:

1110010001001010011

Using the Huffman tree, we walk from the root until we find characters, then output them and go back to the root.

(In each line below, the brackets mark the bits consumed at that step.)

- We read a 1 (right), then a 1 (right). We reach 'b' and output b. Back to the root. [11]10010001001010011

- We read a 1 (right), then a 0 (left). We reach 'a' and output a. Back to the root. 11[10]010001001010011

- We read a 0 (left), then a 1 (right), then a 0 (left). We reach 'c' and output c. 1110[010]001001010011

- We read a 0 (left), then a 0 (left). We reach ' ' and output a space. 1110010[00]1001010011

- We read a 1 (right), then a 0 (left). We reach 'a' and output a. 111001000[10]01010011

- We read a 0 (left), then a 1 (right), then a 0 (left). We reach 'c' and output c. 11100100010[010]10011

- We read a 1 (right), then a 0 (left). We reach 'a' and output a. 11100100010010[10]011

- We read a 0, then a 1, then a 1. This is our EOF encoding pattern, so we stop. The overall decoded text is "bac aca".
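The walk above translates directly into code: follow each bit down the tree, emit the character at each leaf, and restart from the root. A minimal sketch, reusing our simplified Node struct and an illustrative EOF marker value of 256 (both our naming, not the starter code's):

```cpp
#include <string>

// Simplified stand-in for the provided HuffmanNode struct.
struct Node {
    int character;   // character at this node (-1 for internal nodes)
    int count;       // frequency of that character
    Node* zero;      // left child, followed on a 0 bit
    Node* one;       // right child, followed on a 1 bit
};

// Follow bits down the tree: '0' goes left, '1' goes right. At each leaf,
// emit that character and jump back to the root; stop at the EOF marker.
std::string decodeBits(const std::string& bits, Node* root, int eofMarker) {
    std::string out;
    Node* node = root;
    for (char bit : bits) {
        node = (bit == '0') ? node->zero : node->one;
        if (node->zero == nullptr && node->one == nullptr) {   // reached a leaf
            if (node->character == eofMarker) break;           // done decoding
            out += static_cast<char>(node->character);
            node = root;   // restart from the root for the next character
        }
    }
    return out;
}
```

Run on the bit string 1110010001001010011 with the example tree, this reproduces "bac aca".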

Provided Code

We provide you with a file HuffmanNode.h, which declares some useful support code including the HuffmanNode structure, which represents a node in a Huffman encoding tree.

struct HuffmanNode {
    int character;   // character being represented by this node
    int count;       // number of occurrences of that character
    ...

