4 * A decompressor for the LZMS compression format.
8 * Copyright (C) 2013 Eric Biggers
10 * This file is part of wimlib, a library for working with WIM files.
12 * wimlib is free software; you can redistribute it and/or modify it under the
13 * terms of the GNU General Public License as published by the Free
14 * Software Foundation; either version 3 of the License, or (at your option)
17 * wimlib is distributed in the hope that it will be useful, but WITHOUT ANY
18 * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
19 * A PARTICULAR PURPOSE. See the GNU General Public License for more
22 * You should have received a copy of the GNU General Public License
23 * along with wimlib; if not, see http://www.gnu.org/licenses/.
27 * This is a decompressor for the LZMS compression format used by Microsoft.
28 * This format is not documented, but it is one of the formats supported by the
29 * compression API available in Windows 8, and as of Windows 8 it is one of the
30 * formats that can be used in WIM files.
32 * This decompressor only implements "raw" decompression, which decompresses a
33 * single LZMS-compressed block. This behavior is the same as that of
34 * Decompress() in the Windows 8 compression API when using a compression handle
35 * created with CreateDecompressor() with the Algorithm parameter specified as
36 * COMPRESS_ALGORITHM_LZMS | COMPRESS_RAW. Presumably, non-raw LZMS data
37 * is a container format from which the locations and sizes (both compressed and
38 * uncompressed) of the constituent blocks can be determined.
40 * A LZMS-compressed block must be read in 16-bit little endian units from both
41 * directions. One logical bitstream starts at the front of the block and
42 * proceeds forwards. Another logical bitstream starts at the end of the block
43 * and proceeds backwards. Bits read from the forwards bitstream constitute
44 * range-encoded data, whereas bits read from the backwards bitstream constitute
45 * Huffman-encoded symbols or verbatim bits. For both bitstreams, the ordering
46 * of the bits within the 16-bit coding units is such that the first bit is the
47 * high-order bit and the last bit is the low-order bit.
49 * From these two logical bitstreams, an LZMS decompressor can reconstitute the
50 * series of items that make up the LZMS data representation. Each such item
51 * may be a literal byte or a match. Matches may be either traditional LZ77
52 * matches or "delta" matches, either of which can have its offset encoded
53 * explicitly or encoded via a reference to a recently used (repeat) offset.
55 * A traditional LZ match consists of a length and offset; it asserts that the
56 * sequence of bytes beginning at the current position and extending for the
57 * length is exactly equal to the equal-length sequence of bytes at the offset
58 * back in the window. On the other hand, a delta match consists of a length,
59 * raw offset, and power. It asserts that the sequence of bytes beginning at
60 * the current position and extending for the length is equal to the bytewise
61 * sum of the two equal-length sequences of bytes (2**power) and (raw_offset *
62 * 2**power) bytes before the current position, minus bytewise the sequence of
63 * bytes beginning at (2**power + raw_offset * 2**power) bytes before the
64 * current position. Although not generally as useful as traditional LZ
65 * matches, delta matches can be helpful on some types of data. Both LZ and
66 * delta matches may overlap with the current position; in fact, the minimum
67 * offset is 1, regardless of match length.
69 * For LZ matches, up to 3 repeat offsets are allowed, similar to some other
70 * LZ-based formats such as LZX and LZMA. They must updated in a LRU fashion,
71 * except for a quirk: updates to the queue must be delayed by one LZMS item,
72 * except for the removal of a repeat match. As a result, 4 entries are
73 * actually needed in the queue, even though it is only possible to decode
74 * references to the first 3 at any given time. The queue must be initialized
75 * to the offsets {1, 2, 3, 4}.
77 * Repeat delta matches are handled similarly, but for them there are two queues
78 * updated in lock-step: one for powers and one for raw offsets. The power
79 * queue must be initialized to {0, 0, 0, 0}, and the raw offset queue must be
80 * initialized to {1, 2, 3, 4}.
82 * Bits from the range decoder must be used to disambiguate item types. The
83 * range decoder must hold two state variables: the range, which must initially
84 * be set to 0xffffffff, and the current code, which must initially be set to
85 * the first 32 bits read from the forwards bitstream. The range must be
86 * maintained above 0xffff; when it falls below 0xffff, both the range and code
87 * must be left-shifted by 16 bits and the low 16 bits of the code must be
88 * filled in with the next 16 bits from the forwards bitstream.
90 * To decode each bit, the range decoder requires a probability that is
91 * logically a real number between 0 and 1. Multiplying this probability by the
92 * current range and taking the floor gives the bound between the 0-bit region
93 * of the range and the 1-bit region of the range. However, in LZMS,
94 * probabilities are restricted to values of n/64 where n is an integer is
95 * between 1 and 63 inclusively, so the implementation may use integer
96 * operations instead. Following calculation of the bound, if the current code
97 * is in the 0-bit region, the new range becomes the current code and the
98 * decoded bit is 0; otherwise, the bound must be subtracted from both the range
99 * and the code, and the decoded bit is 1. More information about range coding
100 * can be found at https://en.wikipedia.org/wiki/Range_encoding. Furthermore,
101 * note that the LZMA format also uses range coding and has public domain code
104 * The probability used to range-decode each bit must be taken from a table, of
105 * which one instance must exist for each distinct context in which a
106 * range-decoded bit is needed. At each call of the range decoder, the
107 * appropriate probability must be obtained by indexing the appropriate
108 * probability table with the last 4 (in the context disambiguating literals
109 * from matches), 5 (in the context disambiguating LZ matches from delta
110 * matches), or 6 (in all other contexts) bits recently range-decoded in that
111 * context, ordered such that the most recently decoded bit is the low-order bit
114 * Furthermore, each probability entry itself is variable, as its value must be
115 * maintained as n/64 where n is the number of 0 bits in the most recently
116 * decoded 64 bits with that same entry. This allows the compressed
117 * representation to adapt to the input and use fewer bits to represent the most
118 * likely data; note that LZMA uses a similar scheme. Initially, the most
119 * recently 64 decoded bits for each probability entry are assumed to be
120 * 0x0000000055555555 (high order to low order); therefore, all probabilities
121 * are initially 48/64. During the course of decoding, each probability may be
122 * updated to as low as 0/64 (as a result of reading many consecutive 1 bits
123 * with that entry) or as high as 64/64 (as a result of reading many consecutive
124 * 0 bits with that entry); however, probabilities of 0/64 and 64/64 cannot be
125 * used as-is but rather must be adjusted to 1/64 and 63/64, respectively,
126 * before being used for range decoding.
128 * Representations of the LZMS items themselves must be read from the backwards
129 * bitstream. For this, there are 5 different Huffman codes used:
131 * - The literal code, used for decoding literal bytes. Each of the 256
132 * symbols represents a literal byte. This code must be rebuilt whenever
133 * 1024 symbols have been decoded with it.
135 * - The LZ offset code, used for decoding the offsets of standard LZ77
136 * matches. Each symbol represents a position slot, which corresponds to a
137 * base value and some number of extra bits which must be read and added to
138 * the base value to reconstitute the full offset. The number of symbols in
139 * this code is the number of position slots needed to represent all possible
140 * offsets in the uncompressed block. This code must be rebuilt whenever
141 * 1024 symbols have been decoded with it.
143 * - The length code, used for decoding length symbols. Each of the 54 symbols
144 * represents a length slot, which corresponds to a base value and some
145 * number of extra bits which must be read and added to the base value to
146 * reconstitute the full length. This code must be rebuilt whenever 512
147 * symbols have been decoded with it.
149 * - The delta offset code, used for decoding the offsets of delta matches.
150 * Each symbol corresponds to a position slot, which corresponds to a base
151 * value and some number of extra bits which must be read and added to the
152 * base value to reconstitute the full offset. The number of symbols in this
153 * code is equal to the number of symbols in the LZ offset code. This code
154 * must be rebuilt whenever 1024 symbols have been decoded with it.
156 * - The delta power code, used for decoding the powers of delta matches. Each
157 * of the 8 symbols corresponds to a power. This code must be rebuilt
158 * whenever 512 symbols have been decoded with it.
160 * All the LZMS Huffman codes must be built adaptively based on symbol
161 * frequencies. Initially, each code must be built assuming that all symbols
162 * have equal frequency. Following that, each code must be rebuilt whenever a
163 * certain number of symbols has been decoded with it.
165 * In general, multiple valid Huffman codes can be constructed from a set of
166 * symbol frequencies. Like other compression formats such as XPRESS, LZX, and
167 * DEFLATE, the LZMS format solves this ambiguity by requiring that all Huffman
168 * codes be constructed in canonical form. This form requires that same-length
169 * codewords be lexicographically ordered the same way as the corresponding
170 * symbols and that all shorter codewords lexicographically precede longer
173 * Codewords in all the LZMS Huffman codes are limited to 15 bits. If the
174 * canonical code for a given set of symbol frequencies has any codewords longer
175 * than 15 bits, then all frequencies must be divided by 2, rounding up, and the
176 * code construction must be attempted again.
178 * A LZMS-compressed block seemingly cannot have a compressed size greater than
179 * or equal to the uncompressed size. In such cases the block must be stored
182 * After all LZMS items have been decoded, the data must be postprocessed to
183 * translate absolute address encoded in x86 instructions into their original
184 * relative addresses.
186 * Details omitted above can be found in the code. Note that in the absence of
187 * an official specification there is no guarantee that this decompressor
188 * handles all possible cases.
196 #include "wimlib/compress.h"
197 #include "wimlib/decompress.h"
198 #include "wimlib/error.h"
199 #include "wimlib/lzms.h"
200 #include "wimlib/util.h"
205 #define LZMS_DECODE_TABLE_BITS 10
207 /* Structure used for range decoding, reading bits forwards. This is the first
208 * logical bitstream mentioned above. */
209 struct lzms_range_decoder_raw {
210 /* The relevant part of the current range. Although the logical range
211 * for range decoding is a very large integer, only a small portion
212 * matters at any given time, and it can be normalized (shifted left)
213 * whenever it gets too small. */
216 /* The current position in the range encoded by the portion of the input
220 /* Pointer to the next little-endian 16-bit integer in the compressed
221 * input data (reading forwards). */
224 /* Number of 16-bit integers remaining in the compressed input data
225 * (reading forwards). */
226 size_t num_le16_remaining;
229 /* Structure used for reading raw bits backwards. This is the second logical
230 * bitstream mentioned above. */
231 struct lzms_input_bitstream {
232 /* Holding variable for bits that have been read from the compressed
233 * data. The bits are ordered from high-order to low-order. */
234 /* XXX: Without special-case code to handle reading more than 17 bits
235 * at a time, this needs to be 64 bits rather than 32 bits. */
238 /* Number of bits in @bitbuf that are are used. */
239 unsigned num_filled_bits;
241 /* Pointer to the one past the next little-endian 16-bit integer in the
242 * compressed input data (reading backwards). */
245 /* Number of 16-bit integers remaining in the compressed input data
246 * (reading backwards). */
247 size_t num_le16_remaining;
250 /* Probability entry for use by the range decoder when in a specific state. */
251 struct lzms_probability_entry {
253 /* Number of zeroes in the most recent LZMS_PROBABILITY_MAX bits that
254 * have been decoded using this probability entry. This is a cached
255 * value because it can be computed as LZMS_PROBABILITY_MAX minus the
256 * Hamming weight of the low-order LZMS_PROBABILITY_MAX bits of
258 u32 num_recent_zero_bits;
260 /* The most recent LZMS_PROBABILITY_MAX bits that have been decoded
261 * using this probability entry. The size of this variable, in bits,
262 * must be at least LZMS_PROBABILITY_MAX. */
266 /* Structure used for range decoding. This wraps around `struct
267 * lzms_range_decoder_raw' to use and maintain probability entries. */
268 struct lzms_range_decoder {
269 /* Pointer to the raw range decoder, which has no persistent knowledge
270 * of probabilities. Multiple lzms_range_decoder's share the same
271 * lzms_range_decoder_raw. */
272 struct lzms_range_decoder_raw *rd;
274 /* Bits recently decoded by this range decoder. This are used as in
275 * index into @prob_entries. */
278 /* Bitmask for @state to prevent its value from exceeding the number of
279 * probability entries. */
282 /* Probability entries being used for this range decoder. */
283 struct lzms_probability_entry prob_entries[LZMS_MAX_NUM_STATES];
286 /* Structure used for Huffman decoding, optionally using the decoded symbols as
287 * slots into a base table to determine how many extra bits need to be read to
288 * reconstitute the full value. */
289 struct lzms_huffman_decoder {
291 /* Bitstream to read Huffman-encoded symbols and verbatim bits from.
292 * Multiple lzms_huffman_decoder's share the same lzms_input_bitstream.
294 struct lzms_input_bitstream *is;
296 /* Pointer to the slot base table to use. It is indexed by the decoded
297 * Huffman symbol that specifies the slot. The entry specifies the base
298 * value to use, and the position of its high bit is the number of
299 * additional bits that must be read to reconstitute the full value.
301 * This member need not be set if only raw Huffman symbols are being
302 * read using this decoder. */
303 const u32 *slot_base_tab;
305 /* Number of symbols that have been read using this code far. Reset to
306 * 0 whenever the code is rebuilt. */
309 /* When @num_syms_read reaches this number, the Huffman code must be
313 /* Number of symbols in the represented Huffman code. */
316 /* Running totals of symbol frequencies. These are diluted slightly
317 * whenever the code is rebuilt. */
318 u32 sym_freqs[LZMS_MAX_NUM_SYMS];
320 /* The length, in bits, of each symbol in the Huffman code. */
321 u8 lens[LZMS_MAX_NUM_SYMS];
323 /* The codeword of each symbol in the Huffman code. */
324 u16 codewords[LZMS_MAX_NUM_SYMS];
326 /* A table for quickly decoding symbols encoded using the Huffman code.
328 u16 decode_table[(1U << LZMS_DECODE_TABLE_BITS) + 2 * LZMS_MAX_NUM_SYMS]
329 _aligned_attribute(DECODE_TABLE_ALIGNMENT);
332 /* State of the LZMS decompressor. */
333 struct lzms_decompressor {
335 /* Pointer to the beginning of the uncompressed data buffer. */
338 /* Pointer to the next position in the uncompressed data buffer. */
341 /* Pointer to one past the end of the uncompressed data buffer. */
344 /* Range decoder, which reads bits from the beginning of the compressed
345 * block, going forwards. */
346 struct lzms_range_decoder_raw rd;
348 /* Input bitstream, which reads from the end of the compressed block,
349 * going backwards. */
350 struct lzms_input_bitstream is;
352 /* Range decoders. */
353 struct lzms_range_decoder main_range_decoder;
354 struct lzms_range_decoder match_range_decoder;
355 struct lzms_range_decoder lz_match_range_decoder;
356 struct lzms_range_decoder lz_repeat_match_range_decoders[LZMS_NUM_RECENT_OFFSETS - 1];
357 struct lzms_range_decoder delta_match_range_decoder;
358 struct lzms_range_decoder delta_repeat_match_range_decoders[LZMS_NUM_RECENT_OFFSETS - 1];
360 /* Huffman decoders. */
361 struct lzms_huffman_decoder literal_decoder;
362 struct lzms_huffman_decoder lz_offset_decoder;
363 struct lzms_huffman_decoder length_decoder;
364 struct lzms_huffman_decoder delta_power_decoder;
365 struct lzms_huffman_decoder delta_offset_decoder;
367 /* LRU (least-recently-used) queue of LZ match offsets. */
368 u64 recent_lz_offsets[LZMS_NUM_RECENT_OFFSETS + 1];
370 /* LRU (least-recently-used) queue of delta match powers. */
371 u32 recent_delta_powers[LZMS_NUM_RECENT_OFFSETS + 1];
373 /* LRU (least-recently-used) queue of delta match offsets. */
374 u32 recent_delta_offsets[LZMS_NUM_RECENT_OFFSETS + 1];
376 /* These variables are used to delay updates to the LRU queues by one
379 u32 prev_delta_power;
380 u32 prev_delta_offset;
381 u32 upcoming_lz_offset;
382 u32 upcoming_delta_power;
383 u32 upcoming_delta_offset;
386 /* A table that maps position slots to their base values. These are constants
387 * computed at runtime by lzms_compute_slot_bases(). */
388 static u32 lzms_position_slot_base[LZMS_MAX_NUM_OFFSET_SYMS + 1];
390 /* A table that maps length slots to their base values. These are constants
391 * computed at runtime by lzms_compute_slot_bases(). */
392 static u32 lzms_length_slot_base[LZMS_NUM_LEN_SYMS + 1];
395 lzms_decode_delta_rle_slot_bases(u32 slot_bases[],
396 const u8 delta_run_lens[], size_t num_run_lens)
401 for (size_t i = 0; i < num_run_lens; i++) {
402 u8 run_len = delta_run_lens[i];
405 slot_bases[slot++] = base;
411 /* Initialize the global position and length slot tables. */
413 lzms_compute_slot_bases(void)
415 /* If an explicit formula that maps LZMS position and length slots to
416 * slot bases exists, then it could be used here. But until one is
417 * found, the following code fills in the slots using the observation
418 * that the increase from one slot base to the next is an increasing
419 * power of 2. Therefore, run-length encoding of the delta of adjacent
420 * entries can be used. */
421 static const u8 position_slot_delta_run_lens[] = {
422 9, 0, 9, 7, 10, 15, 15, 20,
423 20, 30, 33, 40, 42, 45, 60, 73,
427 static const u8 length_slot_delta_run_lens[] = {
428 27, 4, 6, 4, 5, 2, 1, 1,
429 1, 1, 1, 0, 0, 0, 0, 0,
433 lzms_decode_delta_rle_slot_bases(lzms_position_slot_base,
434 position_slot_delta_run_lens,
435 ARRAY_LEN(position_slot_delta_run_lens));
437 lzms_position_slot_base[LZMS_MAX_NUM_OFFSET_SYMS] = 0x7fffffff;
439 lzms_decode_delta_rle_slot_bases(lzms_length_slot_base,
440 length_slot_delta_run_lens,
441 ARRAY_LEN(length_slot_delta_run_lens));
443 lzms_length_slot_base[LZMS_NUM_LEN_SYMS] = 0x400108ab;
446 /* Initialize the global position length slot tables if not done so already. */
448 lzms_init_slot_bases(void)
450 static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
451 static bool already_computed = false;
453 if (unlikely(!already_computed)) {
454 pthread_mutex_lock(&mutex);
455 if (!already_computed) {
456 lzms_compute_slot_bases();
457 already_computed = true;
459 pthread_mutex_unlock(&mutex);
463 /* Return the position slot for the specified offset. */
465 lzms_get_position_slot_raw(u32 offset)
467 u32 position_slot = 0;
468 while (lzms_position_slot_base[position_slot + 1] <= offset)
470 return position_slot;
473 /* Initialize the input bitstream @is to read forwards from the specified
474 * compressed data buffer @in that is @in_limit 16-bit integers long. */
476 lzms_input_bitstream_init(struct lzms_input_bitstream *is,
477 const le16 *in, size_t in_limit)
480 is->num_filled_bits = 0;
481 is->in = in + in_limit;
482 is->num_le16_remaining = in_limit;
485 /* Ensures that @num_bits bits are buffered in the input bitstream. */
487 lzms_input_bitstream_ensure_bits(struct lzms_input_bitstream *is,
490 while (is->num_filled_bits < num_bits) {
493 LZMS_ASSERT(is->num_filled_bits + 16 <= sizeof(is->bitbuf) * 8);
495 if (unlikely(is->num_le16_remaining == 0))
498 next = le16_to_cpu(*--is->in);
499 is->num_le16_remaining--;
501 is->bitbuf |= next << (sizeof(is->bitbuf) * 8 - is->num_filled_bits - 16);
502 is->num_filled_bits += 16;
508 /* Returns the next @num_bits bits that are buffered in the input bitstream. */
510 lzms_input_bitstream_peek_bits(struct lzms_input_bitstream *is,
513 LZMS_ASSERT(is->num_filled_bits >= num_bits);
514 return is->bitbuf >> (sizeof(is->bitbuf) * 8 - num_bits);
517 /* Removes the next @num_bits bits that are buffered in the input bitstream. */
519 lzms_input_bitstream_remove_bits(struct lzms_input_bitstream *is,
522 LZMS_ASSERT(is->num_filled_bits >= num_bits);
523 is->bitbuf <<= num_bits;
524 is->num_filled_bits -= num_bits;
527 /* Removes and returns the next @num_bits bits that are buffered in the input
530 lzms_input_bitstream_pop_bits(struct lzms_input_bitstream *is,
533 u32 bits = lzms_input_bitstream_peek_bits(is, num_bits);
534 lzms_input_bitstream_remove_bits(is, num_bits);
538 /* Reads the next @num_bits from the input bitstream. */
540 lzms_input_bitstream_read_bits(struct lzms_input_bitstream *is,
543 if (unlikely(lzms_input_bitstream_ensure_bits(is, num_bits)))
545 return lzms_input_bitstream_pop_bits(is, num_bits);
548 /* Initialize the range decoder @rd to read forwards from the specified
549 * compressed data buffer @in that is @in_limit 16-bit integers long. */
551 lzms_range_decoder_raw_init(struct lzms_range_decoder_raw *rd,
552 const le16 *in, size_t in_limit)
554 rd->range = 0xffffffff;
555 rd->code = ((u32)le16_to_cpu(in[0]) << 16) |
556 ((u32)le16_to_cpu(in[1]) << 0);
558 rd->num_le16_remaining = in_limit - 2;
561 /* Ensures the current range of the range decoder has at least 16 bits of
564 lzms_range_decoder_raw_normalize(struct lzms_range_decoder_raw *rd)
566 if (rd->range <= 0xffff) {
568 if (unlikely(rd->num_le16_remaining == 0))
570 rd->code = (rd->code << 16) | le16_to_cpu(*rd->in++);
571 rd->num_le16_remaining--;
576 /* Decode and return the next bit from the range decoder (raw version).
578 * @prob is the chance out of LZMS_PROBABILITY_MAX that the next bit is 0.
581 lzms_range_decoder_raw_decode_bit(struct lzms_range_decoder_raw *rd, u32 prob)
585 /* Ensure the range has at least 16 bits of precision. */
586 lzms_range_decoder_raw_normalize(rd);
588 /* Based on the probability, calculate the bound between the 0-bit
589 * region and the 1-bit region of the range. */
590 bound = (rd->range >> LZMS_PROBABILITY_BITS) * prob;
592 if (rd->code < bound) {
593 /* Current code is in the 0-bit region of the range. */
597 /* Current code is in the 1-bit region of the range. */
604 /* Decode and return the next bit from the range decoder. This wraps around
605 * lzms_range_decoder_raw_decode_bit() to handle using and updating the
606 * appropriate probability table. */
608 lzms_range_decode_bit(struct lzms_range_decoder *dec)
610 struct lzms_probability_entry *prob_entry;
614 /* Load the probability entry corresponding to the current state. */
615 prob_entry = &dec->prob_entries[dec->state];
617 /* Treat the number of zero bits in the most recently decoded
618 * LZMS_PROBABILITY_MAX bits with this probability entry as the chance,
619 * out of LZMS_PROBABILITY_MAX, that the next bit will be a 0. However,
620 * don't allow 0% or 100% probabilities. */
621 prob = prob_entry->num_recent_zero_bits;
622 if (prob == LZMS_PROBABILITY_MAX)
623 prob = LZMS_PROBABILITY_MAX - 1;
627 /* Decode the next bit. */
628 bit = lzms_range_decoder_raw_decode_bit(dec->rd, prob);
630 /* Update the state based on the newly decoded bit. */
631 dec->state = (((dec->state << 1) | bit) & dec->mask);
633 /* Update the recent bits, including the cached count of 0's. */
634 BUILD_BUG_ON(LZMS_PROBABILITY_MAX > sizeof(prob_entry->recent_bits) * 8);
636 if (prob_entry->recent_bits & (1ULL << (LZMS_PROBABILITY_MAX - 1))) {
637 /* Replacing 1 bit with 0 bit; increment the zero count.
639 prob_entry->num_recent_zero_bits++;
642 if (!(prob_entry->recent_bits & (1ULL << (LZMS_PROBABILITY_MAX - 1)))) {
643 /* Replacing 0 bit with 1 bit; decrement the zero count.
645 prob_entry->num_recent_zero_bits--;
648 prob_entry->recent_bits = (prob_entry->recent_bits << 1) | bit;
650 /* Return the decoded bit. */
655 /* Build the decoding table for a new adaptive Huffman code using the alphabet
656 * used in the specified Huffman decoder, with the symbol frequencies
659 lzms_rebuild_adaptive_huffman_code(struct lzms_huffman_decoder *dec)
663 /* XXX: This implementation makes use of code already implemented for
664 * the XPRESS and LZX compression formats. However, since for the
665 * adaptive codes used in LZMS we don't actually need the explicit codes
666 * themselves, only the decode tables, it may be possible to optimize
667 * this by somehow directly building or updating the Huffman decode
668 * table. This may be a worthwhile optimization because the adaptive
669 * codes change many times throughout a decompression run. */
670 LZMS_DEBUG("Rebuilding adaptive Huffman code (num_syms=%u)",
672 make_canonical_huffman_code(dec->num_syms, LZMS_MAX_CODEWORD_LEN,
673 dec->sym_freqs, dec->lens, dec->codewords);
674 ret = make_huffman_decode_table(dec->decode_table, dec->num_syms,
675 LZMS_DECODE_TABLE_BITS, dec->lens,
676 LZMS_MAX_CODEWORD_LEN);
677 LZMS_ASSERT(ret == 0);
680 /* Decode and return the next Huffman-encoded symbol from the LZMS-compressed
681 * block using the specified Huffman decoder. */
683 lzms_decode_huffman_symbol(struct lzms_huffman_decoder *dec)
685 const u8 *lens = dec->lens;
686 const u16 *decode_table = dec->decode_table;
687 struct lzms_input_bitstream *is = dec->is;
689 /* The Huffman codes used in LZMS are adaptive and must be rebuilt
690 * whenever a certain number of symbols have been read. Each such
691 * rebuild uses the current symbol frequencies, but the format also
692 * requires that the symbol frequencies be halved after each code
693 * rebuild. This diminishes the effect of old symbols on the current
694 * Huffman codes, thereby causing the Huffman codes to be more locally
696 if (dec->num_syms_read == dec->rebuild_freq) {
697 lzms_rebuild_adaptive_huffman_code(dec);
698 for (unsigned i = 0; i < dec->num_syms; i++) {
699 dec->sym_freqs[i] >>= 1;
700 dec->sym_freqs[i] += 1;
702 dec->num_syms_read = 0;
705 /* In the following Huffman decoding implementation, the first
706 * LZMS_DECODE_TABLE_BITS of the input are used as an offset into a
707 * decode table. The entry will either provide the decoded symbol
708 * directly, or else a "real" Huffman binary tree will be searched to
709 * decode the symbol. */
711 lzms_input_bitstream_ensure_bits(is, LZMS_MAX_CODEWORD_LEN);
713 u16 key_bits = lzms_input_bitstream_peek_bits(is, LZMS_DECODE_TABLE_BITS);
714 u16 sym = decode_table[key_bits];
716 if (sym < dec->num_syms) {
717 /* Fast case: The decode table directly provided the symbol. */
718 lzms_input_bitstream_remove_bits(is, lens[sym]);
720 /* Slow case: The symbol took too many bits to include directly
721 * in the decode table, so search for it in a binary tree at the
722 * end of the decode table. */
723 lzms_input_bitstream_remove_bits(is, LZMS_DECODE_TABLE_BITS);
725 key_bits = sym + lzms_input_bitstream_pop_bits(is, 1);
726 } while ((sym = decode_table[key_bits]) >= dec->num_syms);
729 /* Tally and return the decoded symbol. */
730 ++dec->sym_freqs[sym];
731 ++dec->num_syms_read;
735 /* Decode a number from the LZMS bitstream, encoded as a Huffman-encoded symbol
736 * specifying a "slot" (whose corresponding value is looked up in a static
737 * table) plus the number specified by a number of extra bits depending on the
740 lzms_decode_value(struct lzms_huffman_decoder *dec)
743 unsigned num_extra_bits;
746 /* Read the slot (position slot, length slot, etc.), which is encoded as
747 * a Huffman symbol. */
748 slot = lzms_decode_huffman_symbol(dec);
750 LZMS_ASSERT(dec->slot_base_tab != NULL);
752 /* Get the number of extra bits needed to represent the range of values
753 * that share the slot. */
754 num_extra_bits = bsr32(dec->slot_base_tab[slot + 1] -
755 dec->slot_base_tab[slot]);
757 /* Read the number of extra bits and add them to the slot to form the
758 * final decoded value. */
759 extra_bits = lzms_input_bitstream_read_bits(dec->is, num_extra_bits);
760 return dec->slot_base_tab[slot] + extra_bits;
763 /* Copy a literal to the output buffer. */
765 lzms_copy_literal(struct lzms_decompressor *ctx, u8 literal)
767 *ctx->out_next++ = literal;
771 /* Validate an LZ match and copy it to the output buffer. */
773 lzms_copy_lz_match(struct lzms_decompressor *ctx, u32 length, u32 offset)
778 if (length > ctx->out_end - ctx->out_next) {
779 LZMS_DEBUG("Match overrun!");
782 if (offset > ctx->out_next - ctx->out_begin) {
783 LZMS_DEBUG("Match underrun!");
787 out_next = ctx->out_next;
788 matchptr = out_next - offset;
790 *out_next++ = *matchptr++;
792 ctx->out_next = out_next;
796 /* Validate a delta match and copy it to the output buffer. */
798 lzms_copy_delta_match(struct lzms_decompressor *ctx, u32 length,
799 u32 power, u32 raw_offset)
801 u32 offset1 = 1U << power;
802 u32 offset2 = raw_offset << power;
803 u32 offset = offset1 + offset2;
809 if (length > ctx->out_end - ctx->out_next) {
810 LZMS_DEBUG("Match overrun!");
813 if (offset > ctx->out_next - ctx->out_begin) {
814 LZMS_DEBUG("Match underrun!");
818 out_next = ctx->out_next;
819 matchptr1 = out_next - offset1;
820 matchptr2 = out_next - offset2;
821 matchptr = out_next - offset;
824 *out_next++ = *matchptr1++ + *matchptr2++ - *matchptr++;
826 ctx->out_next = out_next;
830 /* Decode a (length, offset) pair from the input. */
832 lzms_decode_lz_match(struct lzms_decompressor *ctx)
837 /* Decode the match offset. The next range-encoded bit indicates
838 * whether it's a repeat offset or an explicit offset. */
840 bit = lzms_range_decode_bit(&ctx->lz_match_range_decoder);
842 /* Explicit offset. */
843 offset = lzms_decode_value(&ctx->lz_offset_decoder);
848 for (i = 0; i < LZMS_NUM_RECENT_OFFSETS - 1; i++)
849 if (!lzms_range_decode_bit(&ctx->lz_repeat_match_range_decoders[i]))
852 offset = ctx->recent_lz_offsets[i];
854 for (; i < LZMS_NUM_RECENT_OFFSETS; i++)
855 ctx->recent_lz_offsets[i] = ctx->recent_lz_offsets[i + 1];
858 /* Decode match length, which is always given explicitly (there is no
859 * LRU queue for repeat lengths). */
860 length = lzms_decode_value(&ctx->length_decoder);
862 ctx->upcoming_lz_offset = offset;
864 LZMS_DEBUG("Decoded %s LZ match: length=%u, offset=%u",
865 (bit ? "repeat" : "explicit"), length, offset);
867 /* Validate the match and copy it to the output. */
868 return lzms_copy_lz_match(ctx, length, offset);
871 /* Decodes a "delta" match from the input. */
873 lzms_decode_delta_match(struct lzms_decompressor *ctx)
876 u32 length, power, raw_offset;
878 /* Decode the match power and raw offset. The next range-encoded bit
879 * indicates whether these data are a repeat, or given explicitly. */
881 bit = lzms_range_decode_bit(&ctx->delta_match_range_decoder);
883 power = lzms_decode_huffman_symbol(&ctx->delta_power_decoder);
884 raw_offset = lzms_decode_value(&ctx->delta_offset_decoder);
888 for (i = 0; i < LZMS_NUM_RECENT_OFFSETS - 1; i++)
889 if (!lzms_range_decode_bit(&ctx->delta_repeat_match_range_decoders[i]))
892 power = ctx->recent_delta_powers[i];
893 raw_offset = ctx->recent_delta_offsets[i];
895 for (; i < LZMS_NUM_RECENT_OFFSETS; i++) {
896 ctx->recent_delta_powers[i] = ctx->recent_delta_powers[i + 1];
897 ctx->recent_delta_offsets[i] = ctx->recent_delta_offsets[i + 1];
901 length = lzms_decode_value(&ctx->length_decoder);
903 ctx->upcoming_delta_power = power;
904 ctx->upcoming_delta_offset = raw_offset;
906 LZMS_DEBUG("Decoded %s delta match: length=%u, power=%u, raw_offset=%u",
907 (bit ? "repeat" : "explicit"), length, power, raw_offset);
909 /* Validate the match and copy it to the output. */
910 return lzms_copy_delta_match(ctx, length, power, raw_offset);
914 lzms_decode_match(struct lzms_decompressor *ctx)
916 if (!lzms_range_decode_bit(&ctx->match_range_decoder))
917 return lzms_decode_lz_match(ctx);
919 return lzms_decode_delta_match(ctx);
922 /* Decode a literal byte encoded using the literal Huffman code. */
924 lzms_decode_literal(struct lzms_decompressor *ctx)
926 u8 literal = lzms_decode_huffman_symbol(&ctx->literal_decoder);
927 LZMS_DEBUG("Decoded literal: 0x%02x", literal);
928 return lzms_copy_literal(ctx, literal);
931 /* Decode the next LZMS match or literal. */
933 lzms_decode_item(struct lzms_decompressor *ctx)
937 ctx->upcoming_delta_offset = 0;
938 ctx->upcoming_lz_offset = 0;
939 ctx->upcoming_delta_power = 0;
941 if (lzms_range_decode_bit(&ctx->main_range_decoder))
942 ret = lzms_decode_match(ctx);
944 ret = lzms_decode_literal(ctx);
949 /* Update LRU queues */
950 if (ctx->prev_lz_offset != 0) {
951 for (int i = LZMS_NUM_RECENT_OFFSETS - 1; i >= 0; i--)
952 ctx->recent_lz_offsets[i + 1] = ctx->recent_lz_offsets[i];
953 ctx->recent_lz_offsets[0] = ctx->prev_lz_offset;
956 if (ctx->prev_delta_offset != 0) {
957 for (int i = LZMS_NUM_RECENT_OFFSETS - 1; i >= 0; i--) {
958 ctx->recent_delta_powers[i + 1] = ctx->recent_delta_powers[i];
959 ctx->recent_delta_offsets[i + 1] = ctx->recent_delta_offsets[i];
961 ctx->recent_delta_powers[0] = ctx->prev_delta_power;
962 ctx->recent_delta_offsets[0] = ctx->prev_delta_offset;
965 ctx->prev_lz_offset = ctx->upcoming_lz_offset;
966 ctx->prev_delta_offset = ctx->upcoming_delta_offset;
967 ctx->prev_delta_power = ctx->upcoming_delta_power;
972 lzms_init_range_decoder(struct lzms_range_decoder *dec,
973 struct lzms_range_decoder_raw *rd, u32 num_states)
977 dec->mask = num_states - 1;
978 for (u32 i = 0; i < num_states; i++) {
979 dec->prob_entries[i].num_recent_zero_bits = LZMS_INITIAL_PROBABILITY;
980 dec->prob_entries[i].recent_bits = LZMS_INITIAL_RECENT_BITS;
985 lzms_init_huffman_decoder(struct lzms_huffman_decoder *dec,
986 struct lzms_input_bitstream *is,
987 const u32 *slot_base_tab, unsigned num_syms,
988 unsigned rebuild_freq)
991 dec->slot_base_tab = slot_base_tab;
992 dec->num_syms = num_syms;
993 dec->num_syms_read = rebuild_freq;
994 dec->rebuild_freq = rebuild_freq;
995 for (unsigned i = 0; i < num_syms; i++)
996 dec->sym_freqs[i] = 1;
999 /* Prepare to decode items from an LZMS-compressed block. */
1001 lzms_init_decompressor(struct lzms_decompressor *ctx,
1002 const void *cdata, unsigned clen,
1003 void *ubuf, unsigned ulen)
1005 unsigned num_position_slots;
1007 LZMS_DEBUG("Initializing decompressor (clen=%u, ulen=%u)", clen, ulen);
1009 /* Initialize output pointers. */
1010 ctx->out_begin = ubuf;
1011 ctx->out_next = ubuf;
1012 ctx->out_end = (u8*)ubuf + ulen;
1014 /* Initialize the raw range decoder (reading forwards). */
1015 lzms_range_decoder_raw_init(&ctx->rd, cdata, clen / 2);
1017 /* Initialize the input bitstream for Huffman symbols (reading
1019 lzms_input_bitstream_init(&ctx->is, cdata, clen / 2);
1021 /* Initialize position and length slot bases if not done already. */
1022 lzms_init_slot_bases();
1024 /* Calculate the number of position slots needed for this compressed
1026 num_position_slots = lzms_get_position_slot_raw(ulen - 1) + 1;
1028 LZMS_DEBUG("Using %u position slots", num_position_slots);
1030 /* Initialize Huffman decoders for each alphabet used in the compressed
1031 * representation. */
1032 lzms_init_huffman_decoder(&ctx->literal_decoder, &ctx->is,
1033 NULL, LZMS_NUM_LITERAL_SYMS,
1034 LZMS_LITERAL_CODE_REBUILD_FREQ);
1036 lzms_init_huffman_decoder(&ctx->lz_offset_decoder, &ctx->is,
1037 lzms_position_slot_base, num_position_slots,
1038 LZMS_LZ_OFFSET_CODE_REBUILD_FREQ);
1040 lzms_init_huffman_decoder(&ctx->length_decoder, &ctx->is,
1041 lzms_length_slot_base, LZMS_NUM_LEN_SYMS,
1042 LZMS_LENGTH_CODE_REBUILD_FREQ);
1044 lzms_init_huffman_decoder(&ctx->delta_offset_decoder, &ctx->is,
1045 lzms_position_slot_base, num_position_slots,
1046 LZMS_DELTA_OFFSET_CODE_REBUILD_FREQ);
1048 lzms_init_huffman_decoder(&ctx->delta_power_decoder, &ctx->is,
1049 NULL, LZMS_NUM_DELTA_POWER_SYMS,
1050 LZMS_DELTA_POWER_CODE_REBUILD_FREQ);
1053 /* Initialize range decoders, all of which wrap around the same
1054 * lzms_range_decoder_raw. */
1055 lzms_init_range_decoder(&ctx->main_range_decoder,
1056 &ctx->rd, LZMS_NUM_MAIN_STATES);
1058 lzms_init_range_decoder(&ctx->match_range_decoder,
1059 &ctx->rd, LZMS_NUM_MATCH_STATES);
1061 lzms_init_range_decoder(&ctx->lz_match_range_decoder,
1062 &ctx->rd, LZMS_NUM_LZ_MATCH_STATES);
1064 for (size_t i = 0; i < ARRAY_LEN(ctx->lz_repeat_match_range_decoders); i++)
1065 lzms_init_range_decoder(&ctx->lz_repeat_match_range_decoders[i],
1066 &ctx->rd, LZMS_NUM_LZ_REPEAT_MATCH_STATES);
1068 lzms_init_range_decoder(&ctx->delta_match_range_decoder,
1069 &ctx->rd, LZMS_NUM_DELTA_MATCH_STATES);
1071 for (size_t i = 0; i < ARRAY_LEN(ctx->delta_repeat_match_range_decoders); i++)
1072 lzms_init_range_decoder(&ctx->delta_repeat_match_range_decoders[i],
1073 &ctx->rd, LZMS_NUM_DELTA_REPEAT_MATCH_STATES);
1075 /* Initialize the LRU queue for recent match offsets. */
1076 for (size_t i = 0; i < LZMS_NUM_RECENT_OFFSETS + 1; i++)
1077 ctx->recent_lz_offsets[i] = i + 1;
1079 for (size_t i = 0; i < LZMS_NUM_RECENT_OFFSETS + 1; i++) {
1080 ctx->recent_delta_powers[i] = 0;
1081 ctx->recent_delta_offsets[i] = i + 1;
1083 ctx->prev_lz_offset = 0;
1084 ctx->prev_delta_offset = 0;
1085 ctx->prev_delta_power = 0;
1086 ctx->upcoming_lz_offset = 0;
1087 ctx->upcoming_delta_offset = 0;
1088 ctx->upcoming_delta_power = 0;
1090 LZMS_DEBUG("Decompressor successfully initialized");
1093 /* Decode the series of literals and matches from the LZMS-compressed data.
1094 * Returns 0 on success; nonzero if the compressed data is invalid. */
1096 lzms_decode_items(const u8 *cdata, size_t clen, u8 *ubuf, size_t ulen)
1098 /* XXX: The context could be allocated on the heap. */
1099 struct lzms_decompressor ctx;
1101 /* Initialize the LZMS decompressor. */
1102 lzms_init_decompressor(&ctx, cdata, clen, ubuf, ulen);
1104 /* Decode the sequence of items. */
1105 while (ctx.out_next != ctx.out_end) {
1106 LZMS_DEBUG("Position %u", ctx.out_next - ctx.out_begin);
1107 if (lzms_decode_item(&ctx))
1114 lzms_try_x86_translation(u8 *ubuf, s32 i, s32 num_op_bytes,
1115 s32 *closest_target_usage_p, s32 last_target_usages[],
1116 s32 max_trans_offset)
1120 if (i - *closest_target_usage_p <= max_trans_offset) {
1121 LZMS_DEBUG("Performed x86 translation at position %d "
1122 "(opcode 0x%02x)", i, ubuf[i]);
1123 le32 *p32 = (le32*)&ubuf[i + num_op_bytes];
1124 u32 n = le32_to_cpu(*p32);
1125 *p32 = cpu_to_le32(n - i);
1128 pos = i + le16_to_cpu(*(const le16*)&ubuf[i + num_op_bytes]);
1130 i += num_op_bytes + sizeof(le32) - 1;
1132 if (i - last_target_usages[pos] <= LZMS_X86_MAX_GOOD_TARGET_OFFSET)
1133 *closest_target_usage_p = i;
1135 last_target_usages[pos] = i;
1141 lzms_process_x86_translation(u8 *ubuf, s32 i, s32 *closest_target_usage_p,
1142 s32 last_target_usages[])
1144 /* Switch on first byte of the opcode, assuming it is really an x86
1148 if (ubuf[i + 1] == 0x8b) {
1149 if (ubuf[i + 2] == 0x5 || ubuf[i + 2] == 0xd) {
1150 /* Load relative (x86_64) */
1151 return lzms_try_x86_translation(ubuf, i, 3,
1152 closest_target_usage_p,
1154 LZMS_X86_MAX_TRANSLATION_OFFSET);
1156 } else if (ubuf[i + 1] == 0x8d) {
1157 if ((ubuf[i + 2] & 0x7) == 0x5) {
1158 /* Load effective address relative (x86_64) */
1159 return lzms_try_x86_translation(ubuf, i, 3,
1160 closest_target_usage_p,
1162 LZMS_X86_MAX_TRANSLATION_OFFSET);
1168 if (ubuf[i + 1] == 0x8d) {
1169 if ((ubuf[i + 2] & 0x7) == 0x5) {
1170 /* Load effective address relative (x86_64) */
1171 return lzms_try_x86_translation(ubuf, i, 3,
1172 closest_target_usage_p,
1174 LZMS_X86_MAX_TRANSLATION_OFFSET);
1181 return lzms_try_x86_translation(ubuf, i, 1, closest_target_usage_p,
1183 LZMS_X86_MAX_TRANSLATION_OFFSET / 2);
1190 if (ubuf[i + 1] == 0x83 && ubuf[i + 2] == 0x05) {
1191 /* Lock add relative */
1192 return lzms_try_x86_translation(ubuf, i, 3,
1193 closest_target_usage_p,
1195 LZMS_X86_MAX_TRANSLATION_OFFSET);
1200 if (ubuf[i + 1] == 0x15) {
1202 return lzms_try_x86_translation(ubuf, i, 2,
1203 closest_target_usage_p,
1205 LZMS_X86_MAX_TRANSLATION_OFFSET);
1212 /* Postprocess the uncompressed data by undoing the translation of relative
1213 * addresses embedded in x86 instructions into absolute addresses.
1215 * There does not appear to be any way to check to see if this postprocessing
1216 * actually needs to be done (or to plug in alternate filters, like in LZMA),
1217 * and the corresponding preprocessing seems to be done unconditionally. */
1219 lzms_postprocess_data(u8 *ubuf, s32 ulen)
1221 /* Offset (from beginning of buffer) of the most recent reference to a
1222 * seemingly valid target address. */
1223 s32 closest_target_usage = -LZMS_X86_MAX_TRANSLATION_OFFSET - 1;
1225 /* Offset (from beginning of buffer) of the most recently used target
1226 * address beginning with two bytes equal to the array index.
1228 * XXX: This array could be allocated on the heap. */
1229 s32 last_target_usages[65536];
1230 for (s32 i = 0; i < 65536; i++)
1231 last_target_usages[i] = -LZMS_X86_MAX_GOOD_TARGET_OFFSET - 1;
1233 /* Check each byte in the buffer for an x86 opcode for which a
1234 * translation may be possible. No translations are done on any
1235 * instructions starting in the last 11 bytes of the buffer. */
1236 for (s32 i = 0; i < ulen - 11; )
1237 i = lzms_process_x86_translation(ubuf, i, &closest_target_usage,
1238 last_target_usages);
1241 /* API function documented in wimlib.h */
1243 wimlib_lzms_decompress(const void *cdata, unsigned clen,
1244 void *ubuf, unsigned ulen)
1246 /* The range decoder requires that a minimum of 4 bytes of compressed
1247 * data be initially available. */
1249 LZMS_DEBUG("Compressed length too small (got %u, expected >= 4)",
1254 /* A LZMS-compressed data block should be evenly divisible into 16-bit
1256 if (clen % 2 != 0) {
1257 LZMS_DEBUG("Compressed length not divisible by 2 (got %u)", clen);
1261 /* Handle the trivial case where nothing needs to be decompressed.
1262 * (Necessary because a window of size 0 does not have a valid position
1267 /* The x86 post-processor requires that the uncompressed length fit into
1268 * a signed 32-bit integer. Also, the position slot table cannot be
1269 * searched for a position of INT32_MAX or greater. */
1270 if (ulen >= INT32_MAX) {
1271 LZMS_DEBUG("Uncompressed length too large "
1272 "(got %u, expected < INT32_MAX)", ulen);
1276 /* Decode the literals and matches. */
1277 if (lzms_decode_items(cdata, clen, ubuf, ulen))
1280 /* Postprocess the data. */
1281 lzms_postprocess_data(ubuf, ulen);
1283 LZMS_DEBUG("Decompression successful.");