wimlib tries to compress all files, regardless of file extension or file format, since it cannot predict, in general, whether the data will be compressible. Note that if the data is incompressible, compression runs much faster than it does on moderately compressible data.
In addition, if a file's data does not compress to less than its original size, the data is rewritten uncompressed. This work is not reflected in the progress information.
Furthermore, since a WIM archive stores each distinct file contents only once, it is preferable to know whether a file's contents are actually needed in the archive before compressing and writing them. Consequently, if a file's contents do not have a unique size among all known file contents, wimlib computes the file's SHA-1 message digest and archives the file only if that digest does not match any known SHA-1 message digest. This work is not reflected in the progress information either.
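To make the decision concrete, here is a rough sketch of that dedup logic in Python. This is an illustration of the rule described above, not wimlib's actual code; the function name and the two sets are hypothetical, and the second return value indicates whether a separate checksum pass over the data was needed.

```python
import hashlib

def should_archive(data, sizes_seen, digests_seen):
    """Hypothetical sketch: decide whether `data` must be written.

    Returns (archive, checksum_pass_needed).
    """
    size = len(data)
    if size not in sizes_seen:
        # Unique size: the contents cannot be a duplicate, so archive
        # immediately.  The SHA-1 digest is still recorded, but it can be
        # computed during the same read that compresses the data, rather
        # than in a separate read pass.
        sizes_seen.add(size)
        digests_seen.add(hashlib.sha1(data).hexdigest())
        return True, False
    # Non-unique size: an extra read pass computes the SHA-1 digest.
    digest = hashlib.sha1(data).hexdigest()
    if digest in digests_seen:
        return False, True   # duplicate contents: skip archiving
    digests_seen.add(digest)
    return True, True        # new contents, but a checksum pass was paid
```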
Therefore, in the worst case, archiving a file's contents could consist of the following passes:
1.) Read and checksum
2.) Read and write compressed
3.) Read and write uncompressed
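The three passes can be sketched as one function. This is only a model of the flow above, with zlib standing in for the WIM compressors (wimlib actually uses XPRESS, LZX, or LZMS), and it assumes the checksum pass is always needed:

```python
import hashlib
import zlib

def archive_contents(data, digests_seen):
    """Sketch of the worst-case pass structure; returns the bytes that
    would be written, or None for a duplicate."""
    # Pass 1: read and checksum.
    digest = hashlib.sha1(data).hexdigest()
    if digest in digests_seen:
        return None                      # duplicate: nothing is written
    digests_seen.add(digest)
    # Pass 2: read and write compressed.
    compressed = zlib.compress(data)
    if len(compressed) < len(data):
        return compressed
    # Pass 3: compression did not help; read and write uncompressed.
    return data
```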
Since the progress information is only updated for pass (2) and you are working with very large files, this may explain the confusing progress information you are encountering.
I do not think there is an easy way to report accurate progress information for these large files, since it cannot be predicted ahead of time whether a file will need one, two, or three passes. One idea would be to report progress during each pass, but for only one-third of the file's bytes per pass. This would keep the progress meter moving, but in the common case (one or two passes needed) it would require that the progress suddenly jump forward as each file is completed.
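That idea amounts to giving each potential pass one third of the file's byte budget. A minimal sketch (the function name and signature are hypothetical):

```python
def progress_fraction(bytes_done, file_size, passes_completed):
    """Each of the (up to) three passes owns one third of the file's
    bytes.  The meter keeps moving within every pass, but when a file
    finishes after one or two passes, the remaining thirds are skipped
    and the progress jumps forward."""
    return (passes_completed * file_size + bytes_done) / (3 * file_size)
```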
In theory, pass (1) can be skipped, provided that data later found to be a duplicate is truncated from the output file. However, if there is a significant amount of duplicate data, this would be a net performance loss, since duplicate contents would be compressed and written only to be discarded.
Another idea is a stronger heuristic for skipping compression of files that are likely to be incompressible. This could be helpful, and may seem "obvious" for something like a 20 GB 7z file, but heuristics are not guaranteed to be correct and can be harmful.
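As an illustration of why such a heuristic is risky, here is a naive version (not anything wimlib does): compress a small sample from the start of the file and skip compression if the ratio is poor. The function name and threshold are made up, and the obvious failure mode is a file whose beginning is unrepresentative of the rest.

```python
import zlib

def probably_incompressible(sample, threshold=0.95):
    """Naive sketch of a compressibility heuristic: returns True when a
    sample of the file compresses poorly, suggesting the whole file
    should be stored uncompressed.  Can misjudge mixed-content files."""
    if not sample:
        return False
    return len(zlib.compress(sample)) / len(sample) > threshold
```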
Additional note: the --update-of option being unreliable is a known issue which has been reported on these forums: viewtopic.php?f=1&t=270. Unfortunately, it is caused by a bug in Windows which, to my knowledge, cannot be worked around.