As discussed, we want to remove the old `REDIS_STATIC` flag which is no
longer relevant.
When moving from Redis to Valkey, we renamed all the REDIS flags in the Makefile.
The REDIS_STATIC flag was renamed to SERVER_STATIC, but some files were never
updated to reflect this change.
After discussing it with @madolson and @ranshid, we decided that since
this was introduced 10 years ago, and in many places in the code base we
simply use `static`, we should simplify and remove the flag entirely.
---------
Signed-off-by: Ouri Half <ourih@amazon.com>
I have validated that these settings closely match the existing coding
style with one major exception on `BreakBeforeBraces`, which will be
`Attach` going forward. The mixed `BreakBeforeBraces` styles in the
current codebase are hard to imitate and also very odd IMHO - see below
```
if (a == 1) { /* Attach */
}
```
```
if (a == 1 ||
    b == 2)
{ /* Why? */
}
```
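For comparison, this is roughly how the second example would be formatted once `BreakBeforeBraces: Attach` is enforced (illustrative only):
```
if (a == 1 ||
    b == 2) { /* Attach */
}
```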
Please do NOT merge just yet. Will add the github action next once the
style is reviewed/approved.
---------
Signed-off-by: Ping Xie <pingxie@google.com>
This patch tries to do the following things:
1. Rename the `redis_*` and `REDIS_*` macros defined in config.h to
`valkey_*` / `VALKEY_*`, and update the files that use them (`redis_fstat`,
`redis_fsync`, `REDIS_THREAD_STACK_SIZE`, etc.).
2. Remove the leading double underscores from the guard macro in config.h.
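A rough before/after sketch of the idea (illustrative only; the exact definitions and values in config.h may differ per platform):
```c
/* Illustrative sketch; real config.h definitions may differ. */
#define valkey_fstat fstat /* was: redis_fstat */
#define valkey_fsync fsync /* was: redis_fsync */
#define VALKEY_THREAD_STACK_SIZE (1024 * 1024 * 4) /* was: REDIS_THREAD_STACK_SIZE */

/* Guard macro without the leading double underscores: */
#ifndef CONFIG_H /* was: __CONFIG_H */
#define CONFIG_H
#endif
```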
---------
Signed-off-by: Lipeng Zhu <lipeng.zhu@intel.com>
The test flag `REDIS_TEST` has already been changed to `SERVER_TEST` in
`.github/workflows/daily.yml`; the name in the src directory needs to be
changed as well.
```shell
run: |
sudo apt-get update && sudo apt-get install libc6-dev-i386
make 32bit SERVER_CFLAGS='-Werror -DSERVER_TEST'
```
Signed-off-by: Vitah Lin <vitahlin@gmail.com>
Following #12568
In issue #9357, when inserting an element larger than 1GB, we currently
store it in a plain node instead of a listpack.
Presently, when we insert an element that exceeds the maximum size of a
packed node, it cannot be accommodated in any other node, thus ending
up isolated like a large element,
i.e. a node with only one element, but listpack encoded rather
than a plain buffer.
This PR lowers the threshold for considering an element as 'large' from
1GB to the maximum size of a node.
While this change doesn't completely resolve the bug mentioned in the
previous PR, it does mitigate its potential impact.
As a result of this change, we can now only use LSET to replace an
element with another element that falls below the maximum size
threshold.
In the worst-case scenario, with a fill of -5, the largest packed node
we can create is 2GB (32k * 64k):
* 32k: The smallest element in a listpack is 2 bytes, which allows us to
store up to 32k elements.
* 64k: This is the maximum size for a single quicklist node.
## Others
To fully fix #9357 we need more work. As discussed in #12568, when we
insert an element into a quicklistNode, it may be created in a new node,
put into another node, or merged, and we can't correctly delete the node
that was supposed to be deleted.
I'm not sure it's worth it, since it involves a lot of modifications.
We forgot to call quicklistSetOptions after createQuicklistObject, so in
the SORT STORE scenario we would create a quicklist with the default fill
and compress options.
This PR adds fill and depth parameters to createQuicklistObject, so the
desired options are applied when the quicklist is created.
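A minimal sketch of the idea, assuming the constructor just forwards the options to quicklistNew (the exact signature and config field names may differ):
```c
robj *createQuicklistObject(int fill, int compress_depth) {
    /* options are applied at creation time instead of via a later
     * quicklistSetOptions() call */
    quicklist *l = quicklistNew(fill, compress_depth);
    robj *o = createObject(OBJ_LIST, l);
    o->encoding = OBJ_ENCODING_QUICKLIST;
    return o;
}

/* e.g. the SORT ... STORE path can then pass the configured values directly: */
robj *dst = createQuicklistObject(server.list_max_listpack_size, server.list_compress_depth);
```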
This closes #12871.
release notes:
> Fix lists created by SORT STORE to respect list compression and
packing configs.
Fix two crashes introduced by #12955
When a quicklist node can't be inserted and split, we eventually merge
the current node with its neighboring
nodes after inserting, and compress the current node and its siblings.
1. When the current node is merged with another node, the current node
may become invalid and can no longer be used.
Solution: let `_quicklistMergeNodes()` return the merged nodes.
2. If the current node is an LZF-compressed quicklist node, its recompress flag will be 1.
If the split node can be merged with a sibling node to become head or tail,
recompress may cause the head and tail to be compressed, which is not allowed.
Solution: always reset recompress to 0 after merging.
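A hedged sketch of how the two fixes fit together (names follow quicklist.c, the exact code may differ):
```c
/* 1) Use the node returned by the merge; the original pointer may have been
 *    freed if it was merged into a neighbor. */
node = _quicklistMergeNodes(quicklist, node);

/* 2) A reused, previously LZF-compressed node may still carry recompress == 1.
 *    If it became head or tail after merging, recompressing it would compress
 *    the head/tail, which is not allowed, so always clear the flag. */
if (node) node->recompress = 0;
```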
Fix #12864
The main reason for this crash is that when replacing an element of a
quicklist packed node with the lpReplace() method,
if the final size is larger than 4GB, lpReplace() will fail and return
NULL, causing `node->entry` to be incorrectly set to NULL.
Since the inserted data is not a large element, we can't just replace it
like a large element, i.e. first quicklistInsertAfter()
and then quicklistDelIndex(), because the current node may be merged and
invalidated in quicklistInsertAfter().
The solution of this PR:
When replacing a node fails (listpack exceeds 4GB), split the current
node, create a new node to put in the middle, and try to merge them.
This is the same as inserting a large element.
In the worst case, its size will not exceed 4GB.
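A hedged sketch of the fallback path (the real patch may structure this differently):
```c
unsigned char *newentry = lpReplace(node->entry, &entry->zi, data, sz);
if (newentry) {
    node->entry = newentry;
    quicklistNodeUpdateSz(node);
} else {
    /* The resulting listpack would exceed 4GB: fall back to the large-element
     * path -- split the node at the entry, put the new element in a fresh node
     * in the middle, then try to merge the pieces. In the worst case each
     * resulting node stays below 4GB. */
}
```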
Improve memory efficiency of list keys
## Description of the feature
The new listpack encoding uses the old `list-max-listpack-size` config
to perform the conversion, and we can think of it as a single node of a
quicklist, but without the 80 bytes of overhead (internal fragmentation included)
of the quicklist and quicklistNode structs.
For example, a list key with 5 items of 10 chars each now takes 128 bytes
instead of the 208 it used to take.
## Conversion rules
* Convert listpack to quicklist
When the listpack length or size reaches the `list-max-listpack-size` limit,
it will be converted to a quicklist.
* Convert quicklist to listpack
When a quicklist has only one node, and its length or size is reduced to half
of the `list-max-listpack-size` limit, it will be converted to a listpack.
This is done to avoid frequent conversions when we add or remove at the bounding size or length.
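A hedged sketch of the two checks; the helper names here are illustrative, not necessarily the ones used in t_list.c:
```c
/* Illustrative only: names and exact comparisons may differ. */
static void listTryConversion(robj *o, long max_entries, size_t max_bytes) {
    if (o->encoding == OBJ_ENCODING_LISTPACK) {
        /* Grow: convert to quicklist once either limit is reached. */
        if (lpLength(o->ptr) >= (unsigned long)max_entries || lpBytes(o->ptr) >= max_bytes)
            listTypeConvertToQuicklist(o); /* hypothetical helper */
    } else if (o->encoding == OBJ_ENCODING_QUICKLIST) {
        quicklist *ql = o->ptr;
        /* Shrink: convert back only when a single node drops to half the
         * limits, avoiding ping-pong conversions around the boundary. */
        if (ql->len == 1 &&
            ql->count <= (unsigned long)(max_entries / 2) &&
            ql->head->sz <= max_bytes / 2)
            listTypeConvertToListpack(o); /* hypothetical helper */
    }
}
```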
## Interface changes
1. add list entry param to listTypeSetIteratorDirection
When list encoding is listpack, `listTypeIterator->lpi` points to the next entry of current entry,
so when changing the direction, we need to use the current node (listTypeEntry->p) to
update `listTypeIterator->lpi` to the next node in the reverse direction.
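A hedged sketch of the new signature and the `lpi` adjustment (details in t_list.c may differ):
```c
void listTypeSetIteratorDirection(listTypeIterator *li, listTypeEntry *entry, unsigned char direction) {
    if (li->direction == direction) return;
    li->direction = direction;
    if (li->encoding == OBJ_ENCODING_LISTPACK) {
        /* lpi points to the entry *after* the current one, so when the
         * direction flips we re-derive it from the current entry. */
        unsigned char *lp = li->subject->ptr;
        li->lpi = (direction == LIST_TAIL) ? lpNext(lp, entry->p) : lpPrev(lp, entry->p);
    } else {
        /* for quicklist encoding, flip the direction of the underlying
         * quicklist iterator instead */
    }
}
```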
## Benchmark
### Listpack VS Quicklist with one node
* LPUSH - roughly 0.3% improvement
* LRANGE - roughly 13% improvement
### Both are quicklist
* LRANGE - roughly 3% improvement
* LRANGE without pipeline - roughly 3% improvement
From the benchmark results we can see:
1. When the list is quicklist encoded, LRANGE improves performance by <5%.
2. When the list is listpack encoded, LRANGE improves performance by ~13%;
the main enhancement is brought by `addListListpackRangeReply()`.
## Memory usage
1M lists (key:0 ~ key:1000000) with 5 items of 10 chars ("hellohello") each
show memory usage down by 35.49%, from 214MB to 138MB.
## Note
1. Add conversion callback to support doing some work before conversion
Since the quicklist iterator decompresses the current node when it is released, we can
no longer decompress the quicklist after we convert the list.
Currently, we add -flto to the compile flags only. We are supposed
to add it to the linker flags as well; the Clang build fails because of this.
Added a change to add -flto to REDIS_CFLAGS and REDIS_LDFLAGS
if the build optimization flag is -O3 (the noopt build will not use -flto).
This PR mainly deals with 2 crashes introduced in #9357,
and fix the QUICKLIST-PACKED-THRESHOLD mess in external test mode.
1. Fix crash due to deleting an entry from a compressed quicklistNode
When inserting a large element, we need to create a new quicklistNode first
and then delete its previous element; if the node where the deleted element is
located is compressed, it will cause a crash.
Now we add `dont_compress` to quicklistNode; if we want to use a quicklistNode
after some operation, we can use this flag as follows:
```c
node->dont_compress = 1; /* Prevent this node from being compressed */
some_operation(node); /* This operation might try to compress this node */
some_other_operation(node); /* We can use this node without decompressing it */
node->dont_compress = 0; /* Re-enable compression */
quicklistCompressNode(node);
```
Perhaps in the future, we could just disable the current entry from being
compressed during the iterator loop, but that would require more work.
2. Fix crash due to a wrongly split quicklist
Before #9357, the offset param of _quicklistSplitNode() could never be negative.
Now, when offset is negative, the split extent will be wrong.
For example:
```c
int orig_start = after ? offset + 1 : 0;
int orig_extent = after ? -1 : offset;
int new_start = after ? 0 : offset;
int new_extent = after ? offset + 1 : -1;
/* offset: -2, after: 1, node->count: 2 */
/* current wrong range: [-1,-1] [0,-1] */
/* correct range: [1,-1] [0, 1] */
```
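A hedged sketch of one possible fix: normalize a negative offset against node->count before computing the ranges (the actual patch may do this elsewhere):
```c
/* e.g. offset -2 with node->count 2 becomes offset 0, which yields the
 * correct ranges [1,-1] and [0,1] from the formulas above */
if (offset < 0) offset += node->count;

int orig_start = after ? offset + 1 : 0;
int orig_extent = after ? -1 : offset;
int new_start = after ? 0 : offset;
int new_extent = after ? offset + 1 : -1;
```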
Because only `_quicklistInsert()` splits the quicklistNode, and only
`quicklistInsertAfter()` and `quicklistInsertBefore()` call _quicklistInsert(),
only `quicklistReplaceEntry()` and `listTypeInsert()` can trigger this crash.
But the iterator of `listTypeInsert()` always goes from head to tail (iter->offset is
always positive), so it is not affected.
The final conclusion is that this crash only occurs when we insert a large element
at a negative index into a list, which affects the `LSET` command and the `RM_ListSet`
module API.
3. In external test mode, we need to restore the quicklist packed threshold
at the end of the test.
4. Show `node->count` in quicklistRepr().
5. Add new tcl proc `config_get_set` to support restoring config in tests.
Fix typo: `the largest possible limit is 16k` -> `the largest possible limit is 64k`.
The count field is 16 bits, so the largest possible limit is 64k (2**16).
### Describe
Fix a crash found by CI, introduced by #9849.
When we do any operation on the quicklist, we should make sure that no node
of the quicklist is left in the recompressed state.
### Issues
This PR fixes two issues with incorrect recompression.
1. The current quicklist node is full and the previous node isn't full;
the current node is not recompressed correctly after inserting elements into the previous node.
2. The current quicklist node is full and the next node isn't full;
the current node is not recompressed correctly after inserting elements into the next node.
### Test
Add two tests to cover incorrect compression issues.
### Other
Fix unittest test failure caused by assertion introduced by #9849.
This PR follows #9779.
## Description of the feature
Now when we turn on the `list-compress-depth` configuration, the list will compress
the ziplist between `[list-compress-depth, -list-compress-depth]`.
When we need to use the compressed data, we will first decompress it, then use it,
and finally compress it again.
It's controlled by `quicklistNode->recompress`, which is designed to avoid the need to
re-traverse the entire quicklist for compression after each decompression; we only need
to recompress the quicklistNode being used.
In order to ensure the correctness of recompressing, we should normally let
quicklistDecompressNodeForUse and quicklistCompress appear in pairs; otherwise,
it may lead to the head and tail being compressed, or the middle ziplists not being
compressed correctly, which is exactly the problem this PR needs to solve.
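The intended pairing, as a hedged sketch (the macro names follow quicklist.c):
```c
quicklistDecompressNodeForUse(node); /* decompress and mark node->recompress if it was compressed */
/* ... read or modify the node's ziplist here ... */
quicklistCompress(quicklist, node);  /* recompress only if marked, and never the head/tail */
```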
## Solution
1. Reset `quicklistIter` after insert and replace.
The quicklist node will be compressed in `quicklistInsertAfter`, `quicklistInsertBefore`,
`quicklistReplaceAtIndex`, so we can safely reset the quicklistIter to avoid it being used again
2. `quicklistIndex` will return an iterator that can be used to recompress the current node after use.
## Test
1. In the `Stress Tester for #3343-Similar Errors` test, when the server crashes or a
`valgrind` or `asan` error is detected, print the violating commands.
2. Add a crash test due to wrongly recompressing after `lrem`.
3. Remove `insert before with 0 elements` and `insert after with 0 elements`,
since we now forbid any operation on a NULL quicklistIter.
Part three of implementing #8702, following #8887 and #9366 .
## Description of the feature
1. Replace the ziplist container of quicklist with listpack.
2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.
## Interface changes
1. New `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
2. Replace `debug ziplist` command with `debug listpack`.
## Internal changes
1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
2. Add `lpRepr` to print info about a listpack, used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357).
It represents that a quicklistNode is a packed node, as opposed to a plain node.
4. Remove `createZiplistObject` method, which is never used.
5. Calculate listpack entry size using overhead overestimation in `quicklistAllowInsert`.
We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.
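A hedged sketch of the overestimation idea; the constant here is an assumption, not necessarily the value used in quicklist.c:
```c
#define SIZE_ESTIMATE_OVERHEAD 11 /* assumed upper bound on per-entry listpack overhead */

/* Overestimating the entry size can only make us split slightly early,
 * i.e. end up a few bytes under the lowest 4k size limit, which is harmless. */
static int estimateExceedsLimit(size_t node_sz, size_t elem_sz, size_t limit) {
    return node_sz + elem_sz + SIZE_ESTIMATE_OVERHEAD > limit;
}
```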
## Improvements
1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
2. Optimize `quicklistAppendPlainNode` to avoid a memcpy of the data.
## Bugfix
1. Fix crash in `quicklistRepr` when the ziplist is compressed, introduced in #9366.
## Test
1. Add unittest for `lpMerge`.
2. Modify the old quicklist ziplist corrupt dump test.
Co-authored-by: Oran Agra <oran@redislabs.com>
Recently we started using list-compress-depth in tests (it was completely untested till now).
Turns out this triggered test failures in external mode, since the tests left the setting enabled
and it was then used by other tests (specifically the fuzzer named "Stress tester for #3343-alike bugs").
This PR fixes the issue of the `recompress` flag being left set by mistake, which caused the code to
later compress the head or tail nodes (which should never be compressed).
The solution is to reset the recompress flag where it should have been reset (when it was decided not to compress).
Additionally, we add some assertions and improve the tests in order to catch other similar bugs.
Redis supports inserting data over 4GB into strings (and recently into lists too, see #9357),
but the LZF compression used in RDB files (see the `rdbcompression` config) and in quicklist
(see the `list-compress-depth` config) does not support compressing/decompressing data over
UINT32_MAX, which results in a corrupted RDB after compression.
Internal changes:
1. Modify the `unsigned int` parameters of `lzf_compress/lzf_decompress` to `size_t` (see the prototype sketch below).
2. Modify the variable types in `lzf_compress` involving offsets and lengths to `size_t`.
3. Set LZF_USE_OFFSETS to 0.
When LZF_USE_OFFSETS is 1, lzf stores offsets in `LZF_HSLOT` (32-bit).
Even on 64-bit, `LZF_USE_OFFSETS` defaults to 1, because lzf assumes that it only
compresses and decompresses data smaller than UINT32_MAX.
But now we need to make lzf support 64-bit; keeping `LZF_USE_OFFSETS` on would make
it impossible to store 64-bit offsets or pointers.
BTW, disabling LZF_USE_OFFSETS also brings a few performance improvements.
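A hedged sketch of the updated prototypes in lzf.h (exact qualifiers and parameter names may differ):
```c
size_t lzf_compress(const void *const in_data, size_t in_len,
                    void *out_data, size_t out_len);
size_t lzf_decompress(const void *const in_data, size_t in_len,
                      void *out_data, size_t out_len);
```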
Tests:
1. Add a test for compressing/decompressing a string larger than UINT32_MAX.
2. Add unittest for compress/decompress quicklistNode.
- Added sanitizer support. `address`, `undefined` and `thread` sanitizers are available.
- To build Redis with the desired sanitizer: `make SANITIZER=undefined`
- There were some sanitizer findings; cleaned up the codebase.
- Added tests with address and undefined behavior sanitizers to daily CI.
- Added tests with address sanitizer to the per-PR CI (smoke out mem leaks sooner).
Basically, there are three types of issues:
**1- Unaligned load/store**: Most probably, this issue may cause a crash on a platform that
does not support unaligned access. Redis does unaligned access only on supported platforms.
**2- Signed integer overflow.** Although signed overflow can be problematic from time to time
and change how the compiler generates code, the current findings are mostly about signed shifts or simple
addition overflows. For most platforms Redis can be compiled for, this wouldn't cause any issue
as far as I can tell (checked generated code on godbolt.org).
**3- Minor leak** (redis-cli), **use-after-free** (just before calling exit()).
UB means nothing is guaranteed and it is risky to reason about program behavior, but I don't think any
of the fixes here are worth backporting. As sanitizers are now part of the CI, preventing new issues
will be the real benefit.
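For illustration (not code from the Redis tree), the kind of signed-shift overflow an undefined-behavior sanitizer reports:
```c
#include <stdint.h>

int32_t bad_shift(void) { return 1 << 31; }   /* UB: shifts into the sign bit of a 32-bit int */
uint32_t ok_shift(void) { return 1u << 31; }  /* well defined: unsigned shift */
```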
Redis lists are stored in a quicklist, which is currently a linked list of ziplists.
Ziplists are limited to storing elements no larger than 4GB, so when bigger
items are added they get truncated.
This PR changes quicklists so that they're capable of storing large items
in quicklist nodes that are plain string buffers rather than ziplists.
As part of the PR there were a few other changes in redis:
1. new DEBUG sub-commands:
- QUICKLIST-PACKED-THRESHOLD - set the threshold for the node type to
be plain or ziplist. Default: 1GB.
- QUICKLIST <key> - Shows low level info about the quicklist encoding of <key>
2. rdb format change:
- A new type was added - RDB_TYPE_LIST_QUICKLIST_2.
- container type (packed / plain) was added to the beginning of the rdb object
(before the actual node list).
3. testing:
- Tests that require over 100MB will be skipped by default. A new flag was
added to 'runtest' to run the large memory tests (not used by default).
Co-authored-by: sundb <sundbcn@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
- fix possible heap corruption in ziplist and listpack resulting from trying to
allocate more than the maximum size of 4GB.
- prevent ziplist (hash and zset) from reaching a size above 1GB; it will be
converted to HT encoding, since that's not a useful size.
- prevent listpack (stream) from reaching a size above 1GB.
- XADD will start a new listpack if the new record may cause the previous
listpack to grow over 1GB.
- XADD will respond with an error if a single stream record is over 1GB.
- List type (ziplist in quicklist) was truncating strings that were over 4GB;
now it'll respond with an error.
Co-authored-by: sundb <sundbcn@gmail.com>
List functions operating on elements by index:
* RM_ListGet
* RM_ListSet
* RM_ListInsert
* RM_ListDelete
Iteration is done using a simple for loop over indices.
The index based functions use an internal iterator as an optimization.
This is explained in the docs:
```
* Many of the list functions access elements by index. Since a list is in
* essence a doubly-linked list, accessing elements by index is generally an
* O(N) operation. However, if elements are accessed sequentially or with
* indices close together, the functions are optimized to seek the index from
* the previous index, rather than seeking from the ends of the list.
*
* This enables iteration to be done efficiently using a simple for loop:
*
* long n = RM_ValueLength(key);
* for (long i = 0; i < n; i++) {
* RedisModuleString *elem = RedisModule_ListGet(key, i);
* // Do stuff...
* }
```
Until now, giving a negative index seeks from the end of the list and a
positive one seeks from the beginning. This change makes it seek from
the nearest end, regardless of the sign of the given index.
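A hedged sketch of the nearest-end seek (the real quicklistIndex() differs in details):
```c
/* normalize a negative index, then skip whole nodes from the nearest end */
long long pos = (idx < 0) ? (long long)quicklist->count + idx : idx;
if (pos < 0 || (unsigned long)pos >= quicklist->count) return 0; /* out of range */

int forward = (unsigned long)pos < quicklist->count / 2; /* pick the closer end */
unsigned long seek = forward ? (unsigned long)pos : quicklist->count - 1 - (unsigned long)pos;
quicklistNode *n = forward ? quicklist->head : quicklist->tail;
while (n && seek >= n->count) {
    seek -= n->count;
    n = forward ? n->next : n->prev;
}
```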
quicklistIndex is used by all list commands which operate by index.
LINDEX key 999999 in a list of 1M elements is greatly optimized by
this change. Latency is cut by 75%.
LINDEX key -1000000 in a list of 1M elements, likewise.
LRANGE key -1 -1 is affected by this, since LRANGE converts the
indices to positive numbers before seeking.
The tests for corrupt dumps are updated to make sure the corrupt
data is sought in the same direction as before.
The test tries to test `insert before 1 element`, but it uses
quicklistInsertAfter, a copy-paste typo.
The commit also adds an assert to verify results in some tests
to make sure they are as expected.
Some background:
This fixes a problem in code that used to be dead till now,
but became alive (only in the unit tests, not in redis) when #9113 got merged.
The problem it fixes doesn't actually cause any significant harm,
but that PR also added a test that fails verification because of that.
This test was merged with that problem due to human error, we didn't run it
on the last modified version before merging.
The fix in this PR existed in #8641 (closed because it's just dead code)
and #4674 (still pending but has other changes in it).
Now to the actual fix:
On quicklist insertion, if the insertion offset is -1 or `-(quicklist->count)`,
we can insert into the head of the next node rather than the tail of the
current node. This is especially important when the current node is full,
and adding anything to it will cause it to be split (or go over its fill limit setting).
The bug was that the code attempted to determine that we're adding to
the tail of the current node by matching `offset == node->count`, when in
fact it should have been `offset == node->count-1` (so it never entered that `if`).
Also, since we take negative offsets too, we should match `-1` as well.
The same applies for the head, i.e. `0` and `-count`.
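A hedged sketch of the corrected detection (the exact code in _quicklistInsert() may differ):
```c
/* tail of the node: last element, by positive or negative offset */
int at_tail = after && (entry->offset == node->count - 1 || entry->offset == -1);
/* head of the node: first element, by positive or negative offset */
int at_head = !after && (entry->offset == 0 || entry->offset == -(int)node->count);
```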
The bug will cause the code to attempt inserting into the current node (thinking
we have to insert into the middle of the node rather than head or tail), and
in case the current node is full it'll have to be split (something that also
happens in valid cases).
On top of that, since it calls _quicklistSplitNode with an edge case, it'll actually
split the node in a way that all the entries fall into one split and 0 into the other,
and then still insert the new entry into the first one, causing it to be populated
beyond its intended fill limit.
This problem does not create any bug in redis, because the existing code does
not iterate from tail to head, and the offset never has a negative value when inserting.
The other change this PR makes in the test code is just for some coverage,
insertion at index 0 is tested a lot, so it's nice to test some negative offsets too.
In _quicklistInsert when `at_head` / `at_tail` is true, but `prev` / `next` is NULL,
the code was reaching the last if-else block at the bottom of the function,
and would have unnecessarily executed _quicklistSplitNode, instead of just creating a new node.
This was because the penultimate if-else was checking `node->next && full_next`,
but in fact it was unnecessary to check if `node->next` exists: if we're gonna create one anyway,
we only care that it's full or doesn't exist, so the condition could have been changed to `!node->next || full_next`.
Instead, this PR makes a small refactor to negate `full_next` into a more meaningful variable
`avail_next` that indicates whether the next node is available for pushing additional elements
(this would be true only if it exists and it is non-full).
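A hedged sketch of the refactor (the exact code may differ):
```c
int avail_next = (node->next && !full_next); /* next exists and has room */
int avail_prev = (node->prev && !full_prev); /* prev exists and has room */

/* penultimate branch: the neighbor is unavailable (missing or full), so
 * create a new node instead of falling through to _quicklistSplitNode() */
if (full && ((at_tail && !avail_next && after) ||
             (at_head && !avail_prev && !after))) {
    quicklistNode *new_node = quicklistCreateNode();
    /* ... fill new_node with the new element and link it next to 'node' ... */
}
```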
1. Add `redis-server test all` support to run all tests.
2. Add the redis test to the daily CI.
3. Add `--accurate` option to run slow tests for more iterations (so that
by default we run fewer cycles: shorter time and fewer prints).
4. Move the dict benchmark to REDIS_TEST.
5. Fix some leaks in tests.
6. Make quicklist tests run on a specific set of fill options rather than huge ranges.
7. Move some prints in the quicklist test outside their loops to reduce prints.
8. Remove sds.h from dict.c, since dict.c is now used in both redis-server and
redis-cli (which uses hiredis sds).
When a quicklist has quicklist->compress * 2 nodes and __quicklistCompress is
called, all nodes will be decompressed and the middle
two nodes will be recompressed again. This violates the rule that
quicklist->compress * 2 nodes should be uncompressed. It's harmless
because when visiting a node, we always try to decompress it first.
This only happened when a quicklist had quicklist->compress * 2 + 1
nodes and a node was then deleted. For other scenarios, like inserting a node or
iterating, this will not happen.
When the length of the quicklist is 1 (only one ziplist), the rotate operation will cause
memory overlap when moving an entry from the tail of the ziplist to the head.
quicklistRotate is dead code, so it has no impact on the existing code.
This commit fixes a bug in what's currently dead code in redis.
In quicklistDelRange, when deleting entries from entry.offset to the node tail,
extent only needs to be >= node->count - entry.offset, not node->count.
Co-authored-by: Yoav Steinberg <yoav@redislabs.com>
Avoids memmove and reallocs when replacing a ziplist element of the
same encoded size as the new value.
Affects HSET, HINCRBY, HINCRBYFLOAT (via hashTypeSet) and LSET (via
quicklistReplaceAtIndex).
First, if the ziplist header is surely inside the ziplist, do fast-path
decoding rather than the careful one.
In that case, streamline the encoding if-else chain to be executed only
once, with the encoding validity tested at the end.
Encourage inlining.
Add likely / unlikely hints for speculative execution.
Assertions used _exit(1) to tell the compiler that the code after them is
not reachable and get rid of warnings.
But in some cases assertions are placed inside tight loops, and any
piece of code in them can slow down execution (code cache and other
reasons), so instead we use either abort() or, better yet, the unreachable
builtin.
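A hedged sketch of the idea (the macro names mirror the Redis ones, but the exact definitions may differ):
```c
#if defined(__GNUC__) || defined(__clang__)
#define redis_unreachable __builtin_unreachable
#else
#define redis_unreachable abort
#endif

/* the assert macro can then end with redis_unreachable() instead of _exit(1),
 * keeping the "unreachable" hint without heavy code in the hot path */
#define serverAssert(_e) \
    ((_e) ? (void)0 : (_serverAssert(#_e, __FILE__, __LINE__), redis_unreachable()))
```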
The test creates keys with various encodings, DUMPs them, corrupts the payload
and RESTOREs it.
It utilizes the recently added use-exit-on-panic config to distinguish between
asserts and segfaults.
If the restore succeeds, it runs random commands on the key to attempt to
trigger a crash.
It runs in two modes, one with deep sanitation enabled and one without.
In the first one we don't expect any assertions or segfaults, in the second one
we expect assertions, but no segfaults.
We also check for leaks and invalid reads using valgrind, and if we find them
we print the commands that lead to that issue.
Changes in the code (other than the test):
- Replace a few NPD (null pointer dereference) flows and divisions by zero with an
assertion, so that they don't fail the test (since we set the server to use
`exit` rather than `abort` on assertions).
- Fix quite a lot of flows in rdb.c that could have led to memory leaks in the
RESTORE command (since it now responds with an error rather than panicking).
- Add a DEBUG flag for SET-SKIP-CHECKSUM-VALIDATION so that the test doesn't need
to bother with faking a valid checksum.
- Remove a pile of code in serverLogObjectDebugInfo which is actually unsafe to
run in the crash report (see comments in the code)
- fix a missing boundary check in lzf_decompress
test suite infra improvements:
- be able to run valgrind checks before the process terminates
- rotate log files when restarting servers
When active defrag kicks in and finds a big list, it will create a bookmark to
a node so that it is able to resume iteration from that node later.
The quicklist manages that bookmark, and updates it in case that node is deleted.
This will increase memory usage only on lists of over 1000 (see
active-defrag-max-scan-fields) quicklist nodes (1000 ziplists, not 1000 items)
by 16 bytes.
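For reference, a hedged sketch of the bookmark API declarations (see quicklist.h for the authoritative signatures):
```c
/* Create or update a named bookmark pointing at a node. The quicklist is passed
 * by reference because adding a bookmark may reallocate the struct. */
int quicklistBookmarkCreate(quicklist **ql_ref, const char *name, quicklistNode *node);

/* Look up a bookmark by name; returns NULL if it doesn't exist. */
quicklistNode *quicklistBookmarkFind(quicklist *ql, const char *name);

/* Remove a bookmark by name. */
int quicklistBookmarkDelete(quicklist *ql, const char *name);
```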
In 32-bit builds, this change reduces the maximum effective config of
list-compress-depth and list-max-ziplist-size (from 32767 to 8191).
The quicklist takes a cached version of the ziplist representation size
in bytes. The implementation must update this length every time the
underlying ziplist changes. However quicklistReplaceAtIndex() failed to
fix the length.
During LSET calls, the size of the ziplist blob and the cached size
inside the quicklist diverged. Later, when this size is used in an
authoritative way, for example during nodes splitting in order to copy
the nodes, we end with a duplicated node that may contain random
garbage.
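A hedged sketch of the kind of fix involved: refresh the cached size right after the ziplist is rewritten (the actual commit may differ):
```c
/* inside quicklistReplaceAtIndex(): */
entry.node->zl = ziplistDelete(entry.node->zl, &entry.zi);
entry.node->zl = ziplistInsert(entry.node->zl, entry.zi, data, sz);
quicklistNodeUpdateSz(entry.node); /* keep the cached byte size in sync with the ziplist */
```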
This commit should fix issue #3343, however several problems were found
reviewing the quicklist.c code in search of this bug that should be
addressed sooner or later.
For example:
1. To take a cached ziplist length is fragile since failing to update it
leads to this kind of issues.
2. The node splitting code needs auditing. For example it works only because of
a side effect of ziplistDeleteRange() being able to cope with a wrong
count of elements to remove. The code inside quicklist.c assumes that
-1 means "delete till the end" while actually it's just a count of how
many elements to delete, and is an unsigned count. So -1 gets converted
into the maximum integer, and just by chance the ziplist code stops
deleting elements after there are no more to delete.
3. Node splitting is extremely inefficient: it copies the node and
removes elements from both nodes even when there is actually only a
single entry to move from one node to the other, or when the new resulting node
is completely empty, so there is nothing to copy and we just need to create a new
node.
However at least for Redis 3.2 to introduce fresh code inside
quicklist.c may be even more risky, so instead I'm writing a better
fuzzy tester to stress the internals a bit more in order to anticipate
other possible bugs.
This bug was found using a fuzzy tester written after having some clue
about where the bug could be. The tester eventually created a ~2000
commands sequence able to always crash Redis. I wrote a better version
of the tester that searched for the smallest sequence that could crash
Redis automatically. Later this smaller sequence was minimized by
removing random commands till it still crashed the server. This resulted
in a sequence of 7 commands. With this small sequence it was just a
matter of filling the code with enough printf() to understand enough
state to fix the bug.