-/* Creates a canonical Huffman code from an array of symbol frequencies.
- *
- * The algorithm used is similar to the well-known algorithm that builds a
- * Huffman tree using a minheap. In that algorithm, the leaf nodes are
- * initialized and inserted into the minheap with the frequency as the key.
- * Repeatedly, the top two nodes (nodes with the lowest frequency) are taken out
- * of the heap and made the children of a new node that has a frequency equal to
- * the sum of its children's frequencies. This new node is inserted back
- * into the heap. The process repeats until only one node remains in the
- * heap; that node is the root of the Huffman tree. The Huffman code for
- * a symbol is given by the
- * path to it in the tree, where each left pointer is mapped to a 0 bit and each
- * right pointer is mapped to a 1 bit.
- *
- * The algorithm used here uses an optimization that removes the need to
- * actually use a heap. The leaf nodes are first sorted by frequency, as
- * opposed to being made into a heap. Note that this sorting step takes O(n log
- * n) time vs. O(n) time for heapifying the array, where n is the number of
- * symbols. However, the heapless method is probably faster overall, since
- * each subsequent minimum extraction is O(1) rather than O(log n). In the
- * heapless method, whenever an intermediate node is
- * created, it is not inserted into the sorted array. Instead, the intermediate
- * nodes are kept in a separate array, which is easily kept sorted because every
- * time an intermediate node is initialized, it will have a frequency at least
- * as high as that of the previous intermediate node that was initialized. So
- * whenever we want the two nodes, leaf or intermediate, that have the lowest
- * frequency, we check the low-frequency ends of both arrays, which is an O(1)
- * operation.
- *
- * The function builds a canonical Huffman code, not just any Huffman code. A
- * Huffman code is canonical if, among the codewords of each length, a
- * lower-numbered symbol's codeword numerically precedes the codewords of
- * all higher-numbered symbols, and additionally, every shorter codeword,
- * 0-extended, numerically precedes every longer codeword. A canonical
- * Huffman code
- * is useful because it can be reconstructed from the codeword lengths
- * alone (equivalently, the depths of the leaves in the tree). See
- * make_huffman_decode_table() for how a canonical Huffman code is
- * reconstructed from only the codeword lengths.
- *
- * @num_syms: The number of symbols in the alphabet.
- *
- * @max_codeword_len: The maximum allowed length of a codeword in the code.
- * Note that if the code being created runs up against
- * this restriction, the code ultimately created will be
- * suboptimal, although there are some advantages to
- * limiting the length of the codewords, such as
- * allowing fixed-size decode tables.
- *
- * @freq_tab: An array of length @num_syms that contains the frequencies
- * of each symbol in the uncompressed data.
- *
- * @lens: An array of length @num_syms into which the lengths of the
- * codewords for each symbol will be written.
- *
- * @codewords: An array of @num_syms short integers into which the
- * codewords for each symbol will be written. The first
- * lens[i] bits of codewords[i] will contain the codeword
- * for symbol i.
- */
-void make_canonical_huffman_code(uint num_syms, uint max_codeword_len,
- const u32 freq_tab[], u8 lens[],
- u16 codewords[])