1. Review the encoder and check for lzma improvements under xz.
2. Fix the binary tree matcher.
3. Compare the compression ratio with the xz tool using comparable
   parameters and optimize the parameters.
4. Do some optimizations:
    - rename operation action and make it a simple type of size 8
    - make maxMatches and wordSize parameters
    - stop searching after a match of a certain length is found
      (parameter sweetLen)
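The maxMatches and sweetLen parameters above could work as in the following minimal sketch, not the actual implementation: a match search that inspects at most maxMatches candidates and stops early once a sufficiently long match is found (all function names here are hypothetical).

```go
package main

import "fmt"

// matchLen counts the number of bytes matching at positions a and b (a < b).
func matchLen(data []byte, a, b int) int {
	n := 0
	for b+n < len(data) && data[a+n] == data[b+n] {
		n++
	}
	return n
}

// findMatches sketches a search loop honoring the two parameters from the
// list above: it inspects at most maxMatches candidate positions and stops
// as soon as a match of at least sweetLen bytes has been found.
func findMatches(data []byte, pos int, candidates []int, maxMatches, sweetLen int) (bestPos, bestLen int) {
	bestPos = -1
	for i, c := range candidates {
		if i >= maxMatches {
			break // parameter maxMatches: bound the work per position
		}
		n := matchLen(data, c, pos)
		if n > bestLen {
			bestPos, bestLen = c, n
			if bestLen >= sweetLen {
				break // parameter sweetLen: good enough, stop searching
			}
		}
	}
	return bestPos, bestLen
}

func main() {
	data := []byte("abcabcabcxyz")
	pos, n := findMatches(data, 6, []int{0, 3}, 8, 4)
	fmt.Println(pos, n)
}
```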
2. Do statistical analysis to get linear presets.
3. Test sync.Pool compatibility for the xz and lzma Writer and Reader
   types.
4. Fuzz the optimized code.
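The sync.Pool item above amounts to checking whether the Writer and Reader types can be safely reused across goroutines. A generic sketch of the pattern to be tested, with bytes.Buffer standing in for the real types (pooling an actual Writer would additionally require a way to reset it onto a new underlying stream):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool demonstrates the general sync.Pool reuse pattern. bytes.Buffer
// is a placeholder; the real test would pool the xz/lzma Writer and
// Reader objects instead.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// process borrows a buffer from the pool, uses it, and returns it.
func process(data []byte) []byte {
	buf := bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(buf)
	buf.Reset() // reuse requires resetting all state left by the last user
	buf.Write(data)
	return append([]byte(nil), buf.Bytes()...)
}

func main() {
	fmt.Printf("%q\n", process([]byte("hello")))
}
```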
1. Support parallel goroutines for writing and reading xz files.
2. Support a ReaderAt interface for xz files with small block sizes.
3. Improve compatibility between gxz and xz.
4. Provide a manual page for gxz.
1. Improve documentation.
1. Fully functioning gxz.
2. Add the godoc URL to README.md (godoc.org).
3. Resolve all issues.
4. Define release candidates.
5. Public announcement.
- Rewrite the Encoder into a simple greedy one-op-at-a-time encoder:
  + a simple scan at the dictionary head for the same byte
  + use the killer byte (requiring matches to get longer; the first
    test should be of the byte that would make the match longer)
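The killer-byte idea above can be sketched as follows (a minimal illustration, not the project's code): before comparing a whole candidate prefix, test the single byte that a match would need in order to beat the current best length; most candidates fail this one-byte test, so the full comparison is skipped.

```go
package main

import "fmt"

// bestMatch searches the candidate positions for the longest match at pos.
// The "killer byte" check rejects, with a single comparison, any candidate
// that cannot exceed the current best length.
func bestMatch(data []byte, pos int, candidates []int) (bestPos, bestLen int) {
	bestPos = -1
	for _, c := range candidates {
		// killer byte: the candidate must match at offset bestLen
		// to produce a longer match than the current best.
		if pos+bestLen >= len(data) || data[c+bestLen] != data[pos+bestLen] {
			continue
		}
		n := 0
		for pos+n < len(data) && data[c+n] == data[pos+n] {
			n++
		}
		if n > bestLen {
			bestPos, bestLen = c, n
		}
	}
	return bestPos, bestLen
}

func main() {
	data := []byte("ababcababd")
	fmt.Println(bestMatch(data, 5, []int{0, 2}))
}
```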
- There may be a lot of false sharing in lzma.State; check whether this
  can be improved by reorganizing its internal structure.
- Check whether batching encoding and decoding improves speed.
- Use the full buffer to create a minimal bit-length above the range
  encoder.
  - Might be too slow (see v0.4).
### Different match finders
- hashes with 2 and 3 characters in addition to 4 characters
- binary trees with 2-7 characters (uint64 as key, uint32 as pointers
  into an array)
- red-black trees with 2-7 characters (uint64 as key, uint32 as
  pointers into an array, with bit-stealing for the colors)
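The bit-stealing mentioned for the red-black trees could look like this (a sketch under my own naming, not the project's code): keep the node color in the top bit of the uint32 array index, which halves the addressable node count to 2^31.

```go
package main

import "fmt"

// colorBit is the stolen bit: the top bit of a uint32 node reference
// stores the red/black color, the remaining 31 bits store the array index.
const colorBit = 1 << 31

// makeRef packs an array index and a color into one uint32 reference.
func makeRef(index uint32, red bool) uint32 {
	if red {
		return index | colorBit
	}
	return index
}

// refIndex extracts the array index from a reference.
func refIndex(ref uint32) uint32 { return ref &^ colorBit }

// refRed reports whether the referenced node is red.
func refRed(ref uint32) bool { return ref&colorBit != 0 }

func main() {
	r := makeRef(12345, true)
	fmt.Println(refIndex(r), refRed(r))
}
```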
- Execute goch -l for all packages, probably with a lower param like
  0.5.
- Check orthography with gospell.
- Write release notes in doc/relnotes.
- xb copyright . in the xz directory to ensure all new files have Copyright
- VERSION=<version> go generate github.com/ulikunitz/xz/... to update
- Execute tests for Linux/amd64, Linux/x86 and Windows/amd64.
- Update TODO.md and write a short log entry.
- git checkout master && git merge dev
- git tag -a <version>
Release v0.5.4 fixes issue #15, another problem with the padding size
check for the xz block header. I removed the check completely.
Release v0.5.3 fixes issue #12 regarding the decompression of an empty
XZ stream. Many thanks to Tomasz Kłak, who reported the issue.
Release v0.5.2 became necessary to allow the decoding of xz files with
4-byte padding in the block header. Many thanks to Greg, who reported
Release v0.5.1 became necessary to fix problems with 32-bit platforms.
Many thanks to Bruno Brigas, who reported the issue.
Release v0.5 provides improvements to the compressor and adds support
for the decompression of xz files with multiple xz streams.
Another compression-rate increase came from checking the byte at the
length of the best match first, before checking the whole prefix. This
makes the compressor even faster. We now have a large time budget to
beat the compression ratio of the xz tool: for enwik8 we now have over
40 seconds to reduce the compressed file size by another 7 MiB.
I simplified the encoder. Speed and compression rate increased
dramatically. A high compression rate also affects the decompression
speed. The approach of using the buffer and optimizing for operation
compression rate has not been successful. Going for the maximum length
appears to be the best approach.
Release v0.4 is ready. It provides a working xz implementation, which
is rather slow, but works and is interoperable with the xz tool. It is
an important milestone.
I have the first working implementation of an xz reader and writer. I'm
happy about reaching this milestone.
I'm now ready to implement xz because I have a working LZMA2
implementation. I decided today that v0.4 will use the slow encoder
with the operations buffer, so it can go back if I intend to do so.
I have restarted work on the library. While trying to implement LZMA2,
I discovered that I need to simplify the encoder and decoder functions
again. The option approach is too complicated. Using a limited byte
writer, not caring about written bytes at all, and not trying to handle
uncompressed data simplifies the LZMA encoder and decoder a lot.
Processing uncompressed data and handling limits is a feature of the
LZMA2 format, not of LZMA.
I learned an interesting method from the LZO format: if the last copy
is too far away, the head is moved by 2 bytes instead of 1 to reduce
processing time.
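The LZO trick above can be sketched as an adaptive step size (names and the threshold are mine, not LZO's): when the previous copy is further away than some threshold, advance the head by 2 bytes instead of 1, so incompressible regions are scanned faster at the cost of possibly missing a match.

```go
package main

import "fmt"

// nextHead returns the next head position. A far last copy suggests the
// data is compressing poorly, so the scan skips ahead by 2 bytes.
func nextHead(head, lastCopyDist, farThreshold int) int {
	if lastCopyDist > farThreshold {
		return head + 2 // last copy too far away: move faster
	}
	return head + 1
}

func main() {
	fmt.Println(nextHead(100, 70000, 65535), nextHead(100, 300, 65535))
}
```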
I have now reimplemented the lzma package. The code is reasonably fast,
but can still be optimized. The next step is to implement LZMA2 and then
Created release v0.3. This version is the foundation for a full xz
implementation, which is the target of v0.4.
The gflag package has been developed because I couldn't use flag and
pflag for fully compatible support of gzip's and lzma's options. It
seems to work quite nicely now.
The overflow issue was interesting to research; Henry S. Warren Jr.'s
book Hacker's Delight was very helpful as usual and explained the issue
perfectly. Fefe's information on his website was based on the C FAQ and
quite bad, because it didn't address the issue of -MININT == MININT.
It has been a productive day. I improved the interface of lzma.Reader
and lzma.Writer and fixed the error handling.
By computing the bit length of the LZMA operations I was able to
improve the greedy algorithm implementation. Using an 8 MiB buffer, the
compression rate was not as good as for xz, but already better than
Compression is currently slow, but this is something we will be able to
Checked the license of ogier/pflag. The lzmago binary should include
the license terms for the pflag library.
I added the endorsement clause as used by Google for the Go sources the
The package lzb now contains the basic implementation for creating or
reading LZMA byte streams. It allows support for the implementation of
the DAG shortest-path algorithm for the compression function.
Completed the lzbase classes yesterday. I'm a little concerned that
using the components may require too much code, but on the other hand
there is a lot of flexibility.
Implemented Reader and Writer during the Bayern game against Porto. The
second half gave me enough time.
While showering this morning I discovered that the design for OpEncoder
and OpDecoder doesn't work, because encoding/decoding might depend on
the current state of the dictionary. This is not exactly the right way
Therefore we need to keep the Reader and Writer design. This time
around we simplify it by ignoring size limits; these can be added by
wrappers around the Reader and Writer interfaces. The Parameters type isn't
However, I will implement ReaderState and WriterState types that use
static typing to ensure the right State object is combined with the
right lzbase.Reader and lzbase.Writer.
As a start, I have implemented ReaderState and WriterState to ensure
that the state for reading is only used by readers and WriterState only
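The static-typing idea described here could be sketched as follows (a minimal illustration with hypothetical type layouts, not the actual lzbase code): distinct ReaderState and WriterState wrapper types make it a compile-time error to construct a reader with a writer's state.

```go
package main

import "fmt"

// state stands in for the shared LZMA coder state.
type state struct{ pos int }

// ReaderState and WriterState wrap the same state but are distinct
// types, so the compiler keeps them apart.
type ReaderState struct{ s state }
type WriterState struct{ s state }

type Reader struct{ rs *ReaderState }
type Writer struct{ ws *WriterState }

func NewReader(rs *ReaderState) *Reader { return &Reader{rs: rs} }
func NewWriter(ws *WriterState) *Writer { return &Writer{ws: ws} }

func main() {
	r := NewReader(&ReaderState{})
	w := NewWriter(&WriterState{})
	// NewReader(&WriterState{}) would be a compile-time error.
	fmt.Println(r != nil, w != nil)
}
```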
Today I implemented the OpDecoder and tested OpEncoder and OpDecoder.
Came up with a new simplified design for lzbase. I have already
implemented the type State that replaces OpCodec.
The new lzma package is now fully usable and lzmago is using it now.
The old lzma package has been completely removed.
Implemented lzma.Reader and tested it.
Implemented baseReader by adapting code from lzma.Reader.
The opCodec was copied to lzma2 yesterday. opCodec has a high number of
dependencies on other files in lzma2. Therefore I had to copy almost
all files from lzma.
Removed only a TODO item.
However, Francesco Campoy's presentation "Go for Javaneros
(Javaïstes?)" presents the idea that using an embedded field E, all the
methods of E will be defined on T. If E is an interface, T satisfies E.

https://talks.golang.org/2014/go4java.slide#51

I have never used this, but it seems to be a cool idea.
Finished the type writerDict and wrote a simple test.
I started to implement the writerDict.
After thinking long about the LZMA2 code and several false starts, I
now have a plan to create a self-sufficient lzma2 package that supports
the classic LZMA format as well as LZMA2. The core idea is to provide
baseReader and baseWriter types that support the basic LZMA stream
without any headers. Both types must support the reuse of dictionaries
1. Implemented a simple lzmago tool.
2. Tested the tool against a large 4.4 GB file.
    - compression worked correctly; tested decompression with lzma
    - decompression hit a full-buffer condition
3. Fixed a bug in the compressor and wrote a test for it.
4. Executed the full cycle for the 4.4 GB file; performance can be
   improved ;-)
- Release v0.2 because of the working LZMA encoder and decoder