Overview
Comment: Fixed a rare case of losing data from the decompressor's internal result buffer.
SHA1: 7b74fd57f8c51aeb7459a1a27cc1d0a0
User & Date: spaskalev on 2014-12-21 17:23:23
Context
2014-12-21
19:38  Added debug/pprof to ease basic cpu profiling (check-in: 1a4bdf36e2, user: spaskalev, tags: trunk)
17:23  Fixed a rare case of losing data from the decompressor's internal result buffer. (check-in: 7b74fd57f8, user: spaskalev, tags: trunk)
01:59  Added a function that reverses the bits in a byte. Coverage: 100.0% of statements. (check-in: 2be2ff6bf7, user: spaskalev, tags: trunk)
Changes
Modified src/0dev.org/predictor/predictor.go from [90c8b12e57] to [d2a3bd9d21].
The affected hunk is shown below as it reads after the change (new lines 137-151); the previous handling of the still-unread remainder at this point was replaced by the single buffer-compaction line, with the surrounding lines of the file unchanged:

	// Check whether we have leftover data in the buffer
	if len(ctx.input) > 0 {
		readCount = copy(output, ctx.input)

		// Check whether we still have leftover data in the buffer :)
		if readCount < len(ctx.input) {
			ctx.input = ctx.input[:copy(ctx.input, ctx.input[readCount:])]
		}

		return readCount, nil
	}

	// This is single-iteration only but it is fine according to io.Reader's contract ?!
	// TODO - read all bytes from a block based on the hamming weight of the flag
	// and just shuffle them for predictions instead of bite-sized reads ;)
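The fix hinges on the slice-compaction idiom ctx.input = ctx.input[:copy(ctx.input, ctx.input[readCount:])]: it shifts the bytes that did not fit into output to the front of the internal buffer and re-slices the buffer to exactly that remainder, so a later read still sees them instead of losing them. The following is a minimal sketch of the same idiom in isolation; drainBuffered and its buffers are hypothetical names used only for illustration and are not part of the predictor package.

	package main

	import "fmt"

	// drainBuffered is a small, hypothetical stand-in for leftover-buffer
	// handling. It copies as much buffered data as fits into output and then
	// compacts the unread remainder to the front of the buffer, using the
	// same idiom as the fixed line:
	//
	//	buf = buf[:copy(buf, buf[read:])]
	//
	// so no buffered bytes are dropped between calls.
	func drainBuffered(buf, output []byte) (read int, rest []byte) {
		read = copy(output, buf)
		// copy handles the overlapping source and destination correctly,
		// and its return value is the length of the surviving remainder.
		rest = buf[:copy(buf, buf[read:])]
		return read, rest
	}

	func main() {
		buffered := []byte("abcdef")
		out := make([]byte, 4)

		n, remaining := drainBuffered(buffered, out)
		fmt.Printf("read %q, remaining %q\n", out[:n], remaining) // read "abcd", remaining "ef"

		n, remaining = drainBuffered(remaining, out)
		fmt.Printf("read %q, remaining %q\n", out[:n], remaining) // read "ef", remaining ""
	}

Because copy is defined for overlapping source and destination slices and returns the number of bytes moved, the shift can be done in place without allocating a new buffer, and the re-slice keeps the remainder's length correct for the next call.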