Apply the security fixes for the release.
(CVE-2024-31449) Lua library commands may lead to stack overflow and
potential RCE.
(CVE-2024-31227) Potential Denial-of-service due to malformed ACL
selectors.
(CVE-2024-31228) Potential Denial-of-service due to unbounded pattern
matching.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
In RdbLoad, we disable AOF before emptyData and rdbLoad to prevent copy-on-write issues. After rdbLoad completes, AOF should be re-enabled, but the code incorrectly checks server.aof_state, which has been reset to AOF_OFF in stopAppendOnly. This leads to AOF not being re-enabled after being disabled.
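A minimal, self-contained sketch of the corrected control flow (all identifiers below are stand-ins for the real server code, not the actual implementation):
```c
#include <stdio.h>

enum { AOF_OFF, AOF_ON };
static int aof_state = AOF_ON;            /* stand-in for server.aof_state */

static void stopAppendOnly(void)  { aof_state = AOF_OFF; }
static void startAppendOnly(void) { aof_state = AOF_ON; }
static void emptyDataAndLoadRdb(void) { /* emptyData() + rdbLoad() */ }

int main(void) {
    /* Save the state BEFORE disabling; the buggy code instead checked
     * server.aof_state after stopAppendOnly() had reset it to AOF_OFF. */
    int aof_was_enabled = (aof_state != AOF_OFF);
    if (aof_was_enabled) stopAppendOnly();   /* avoid copy-on-write issues */
    emptyDataAndLoadRdb();
    if (aof_was_enabled) startAppendOnly();  /* re-enable based on the saved flag */
    printf("aof_state=%d\n", aof_state);     /* prints 1: AOF is back on */
    return 0;
}
```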
---------
Signed-off-by: Binbin <binloveplay1314@qq.com>
Fix #784
Prior to the change, `CLUSTER SHARDS` command processing might pick a
failed primary node which won't have the slot coverage information, and
the `slots` output in turn would be empty. This change finds an
appropriate node which has the slot coverage information served by a
given shard and correctly displays it as part of the `CLUSTER SHARDS`
output.
Before:
```
1) 1) "slots"
2) (empty array)
3) "nodes"
4) 1) 1) "id"
2) "2936f22a490095a0a851b7956b0a88f2b67a5d44"
...
9) "role"
10) "master"
...
13) "health"
14) "fail"
```
After:
```
1) 1) "slots"
2) 1) 0
2) 5461
3) "nodes"
4) 1) 1) "id"
2) "2936f22a490095a0a851b7956b0a88f2b67a5d44"
...
9) "role"
10) "master"
...
13) "health"
14) "fail"
```
---------
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
Module authentication using a blocking implementation currently gets
rejected with a "cluster is down" error by the client timeout cron job
(`clientsCronHandleTimeout`) when the cluster is down.
This PR exempts clients blocked on module authentication from being
rejected there.
---------
Signed-off-by: KarthikSubbarao <karthikrs2021@gmail.com>
Signed-off-by: Ping Xie <pingxie@google.com>
In markNodeAsFailingIfNeeded we count needed_quorum and failures;
needed_quorum is half of cluster->size plus one, and cluster->size
is the number of primary nodes that own slots, but when counting
failures we did not check whether the reporting primary owns slots.
Only primaries that own slots have the right to vote, so add a new
clusterNodeIsVotingPrimary helper to formalize this concept.
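A rough, self-contained sketch of the concept (the struct and flag value below are stand-ins, not the real cluster definitions):
```c
#include <stdio.h>

#define CLUSTER_NODE_PRIMARY (1 << 0)     /* stand-in flag bit */

typedef struct clusterNode {
    int flags;
    int numslots;                          /* number of slots owned */
} clusterNode;

/* Only a primary that owns at least one slot has the right to vote. */
static int clusterNodeIsVotingPrimary(clusterNode *n) {
    return (n->flags & CLUSTER_NODE_PRIMARY) && n->numslots > 0;
}

int main(void) {
    clusterNode empty_primary = { CLUSTER_NODE_PRIMARY, 0 };
    clusterNode slot_owner    = { CLUSTER_NODE_PRIMARY, 100 };
    /* counting failure reports should only consider voting primaries */
    printf("%d %d\n", clusterNodeIsVotingPrimary(&empty_primary),
                      clusterNodeIsVotingPrimary(&slot_owner)); /* 0 1 */
    return 0;
}
```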
Release notes:
bugfix where nodes not in the quorum group might spuriously mark nodes
as failed
---------
Signed-off-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: Ping Xie <pingxie@outlook.com>
Signed-off-by: Ping Xie <pingxie@google.com>
When there is a link failure while an ongoing MEET request is being sent,
the sending node stops sending further MEETs and starts sending PINGs.
Since every node responds to PINGs from unknown nodes with a PONG, the
receiving node never adds the sending node. But the sending node adds
the receiving node when it sees a PONG. This can lead to asymmetry in
cluster membership. This change makes the sender keep sending MEET
until it sees a PONG, avoiding the asymmetry.
---------
Signed-off-by: Sankar <1890648+srgsanky@users.noreply.github.com>
Signed-off-by: Ping Xie <pingxie@google.com>
In #11012, we changed the way command durations were computed to handle
the same command being executed multiple times. In #11970, we added an
assert if the duration is not properly reset, potentially indicating
that a call to report statistics was missed.
I found an edge case where this happens, easily reproduced by blocking
a client on `XREADGROUP` and migrating the stream's slot. This causes
the engine to process the `XREADGROUP` command twice:
1. The first time, we are blocked on the stream, so we wait for the
unblock to come back to it a second time. In most cases, when we come
back to process the command a second time after unblocking, we process
it normally, which includes recording the duration and then resetting it.
2. After unblocking we come back to process the command, and this is
where we hit the edge case: at this point, we had already migrated the
slot to another node, so we return a `MOVED` response. But when we do
that, we don't reset the duration field.
Fix: also reset the duration when returning a `MOVED` response. I think
this is right, because the client should redirect the command to the
right node, which in turn will calculate the execution duration.
Also wrote a test which reproduces this; it fails without the fix and
passes with it.
---------
Signed-off-by: Nitai Caro <caronita@amazon.com>
Co-authored-by: Nitai Caro <caronita@amazon.com>
Signed-off-by: Ping Xie <pingxie@google.com>
In `beginResultEmission`, -1 means the result length is not known in
advance. But after #12185, if we pass -1 to `zrangeResultBeginStore`, it
will be converted to SIZE_MAX in `zsetTypeCreate` and trigger a
`dictExpand`.
Although `dictExpand` won't succeed because the size overflows, we'd
better avoid this wrong conversion.
This bug can be triggered when the source of `zrangestore` doesn't exist,
or when we use the `zrangestore` command with `byscore` or `bylex`.
The impact is that the destination keys will be created with the skiplist
encoding instead of listpack.
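A small, self-contained illustration of the conversion pitfall and the kind of guard the fix implies (variable names are illustrative only):
```c
#include <stdio.h>
#include <stddef.h>

int main(void) {
    long length_hint = -1;                      /* -1 means "length unknown" */
    size_t as_size = (size_t)length_hint;       /* silently becomes SIZE_MAX */
    printf("unguarded pre-size: %zu\n", as_size);

    /* the guard: only pre-size the destination when the length is known */
    size_t presize = (length_hint >= 0) ? (size_t)length_hint : 0;
    printf("guarded pre-size: %zu\n", presize); /* 0, so no huge dictExpand */
    return 0;
}
```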
Signed-off-by: Ping Xie <pingxie@google.com>
The check in fileIsManifest misjudged the manifest file. For example,
if a RESP AOF contains the word "file", it would be considered a manifest
file and the check would fail:
```
*3
$3
set
$4
file
$4
file
```
As reported in #12951, if the preamble AOF also contains it, the check
will also fail.
Fixes #12951.
The bug occurred whenever the word "file" appeared in the first 1024
lines of the AOF. Now, as soon as a non-comment line is found, the check
stops (whether or not that line contains "file").
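A hedged sketch of the corrected check (simplified; the real fileIsManifest() validates the manifest format in more detail):
```c
#include <stdio.h>
#include <string.h>

/* Decide based only on the first non-comment line: stop scanning as soon
 * as one is found, whether or not it mentions "file". */
static int file_is_manifest(FILE *fp) {
    char buf[1024];
    int is_manifest = 0;
    while (fgets(buf, sizeof(buf), fp) != NULL) {
        if (buf[0] == '#') continue;               /* skip manifest comments */
        is_manifest = (strncmp(buf, "file ", 5) == 0);
        break;                                     /* never look further */
    }
    return is_manifest;
}

int main(void) {
    char aof[] = "*3\r\n$3\r\nset\r\n$4\r\nfile\r\n$4\r\nfile\r\n";
    FILE *fp = fmemopen(aof, sizeof(aof) - 1, "r");
    printf("%d\n", file_is_manifest(fp));          /* 0: a RESP AOF, not a manifest */
    fclose(fp);
    return 0;
}
```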
Signed-off-by: Ping Xie <pingxie@google.com>
Since lua_Number is not explicitly an integer or a double, we need to
make an effort to convert it as an integer when that's possible, since
the string could later be used in a context that doesn't support
scientific notation (e.g. 1e9 instead of 1000000000).
Since fpconv_dtoa converts numbers with the equivalent of `%f` or `%e`,
whichever is shorter, this would break if we try to pass a long integer
number to a command that takes an integer: we'll get an implicit
conversion to string in Lua, and then the parsing in
getLongLongFromObjectOrReply will fail.
```
> eval "redis.call('hincrby', 'key', 'field', '1000000000')" 0
(nil)
> eval "redis.call('hincrby', 'key', 'field', tonumber('1000000000'))" 0
(error) ERR value is not an integer or out of range script: ac99c32e4daf7e300d593085b611de261954a946, on @user_script:1.
```
Switch to using ll2string if the number can be safely represented as a
long long.
The problem was introduced in #10587 (Redis 7.2).
Closes #13113.
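A self-contained sketch of the conversion rule the fix adopts (the server uses ll2string() and its own range check; %lld and the bounds below stand in for them):
```c
#include <stdio.h>

/* Render an integral double as a plain integer string so that commands
 * expecting an integer (e.g. HINCRBY) can parse it; fall back to the
 * shortest double representation otherwise. */
static void number_to_string(double num, char *buf, size_t len) {
    long long ll = (long long)num;
    if (num >= -9007199254740992.0 && num <= 9007199254740992.0 &&
        (double)ll == num) {
        snprintf(buf, len, "%lld", ll);       /* "1000000000" */
    } else {
        snprintf(buf, len, "%.17g", num);     /* may use scientific notation */
    }
}

int main(void) {
    char buf[64];
    number_to_string(1000000000.0, buf, sizeof(buf));
    printf("%s\n", buf);                       /* 1000000000, not 1e+09 */
    return 0;
}
```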
---------
Co-authored-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: debing.sun <debing.sun@redis.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Signed-off-by: Ping Xie <pingxie@google.com>
These tests have all failed in daily CI:
```
*** [err]: Blocking XREADGROUP for stream key that has clients blocked on stream - reprocessing command in tests/unit/type/stream-cgroups.tcl
Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
*** [err]: BLPOP unblock but the key is expired and then block again - reprocessing command in tests/unit/type/list.tcl
Expected '1101' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
*** [err]: BZPOPMIN unblock but the key is expired and then block again - reprocessing command in tests/unit/type/zset.tcl
Expected '1103' to be between to '1000' and '1100' (context: type eval line 23 cmd {assert_range [expr $end-$start] 1000 1100} proc ::test)
```
Increase the range to avoid failures, and improve the comment to be
clearer.
The tests were introduced in #13004.
Signed-off-by: Ping Xie <pingxie@google.com>
Fix two crashes introduced by #12955.
When an element can't be inserted into a quicklist node and the node is
split, we eventually merge the current node with its neighboring
nodes after inserting, and compress the current node and its siblings.
1. When the current node is merged with another node, the current node
may become invalid and can no longer be used.
   Solution: let `_quicklistMergeNodes()` return the merged nodes.
2. If the current node is an LZF quicklist node, its recompress flag will
be 1. If the split node can be merged with a sibling node to become the
head or tail, recompressing may cause the head and tail to be compressed,
which is not allowed.
   Solution: always set recompress to 0 after merging.
Signed-off-by: Ping Xie <pingxie@google.com>
Fix #12864.
The main reason for this crash is that when replacing an element of a
quicklist packed node with the lpReplace() method, if the final size is
larger than 4GB, lpReplace() will fail and return NULL, causing
`node->entry` to be incorrectly set to NULL.
Since the inserted data is not a large element, we can't just replace it
like a large element, i.e. first quicklistInsertAfter() and then
quicklistDelIndex(), because the current node may be merged and
invalidated in quicklistInsertAfter().
The solution of this PR:
When the replacement fails (the listpack would exceed 4GB), split the
current node, create a new node to put in the middle, and try to merge
them. This is the same as inserting a large element.
In the worst case, its size will not exceed 4GB.
Signed-off-by: Ping Xie <pingxie@google.com>
In #11012, we reprocess the command when a client is unblocked on keys.
In some blocking commands, for example in the XREADGROUP BLOCK scenario,
this re-processing recalculates the block timeout, causing the blocking
time to be reset.
This commit adds a new CLIENT_REPROCESSING_COMMAND client flag to
explicitly let the command know that it is being re-processed; later, in
blockForKeys, we will not reset the timeout.
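A rough sketch of how the flag is intended to be used (the flag name comes from this change; the bit value, struct layout, and helper below are illustrative stand-ins):
```c
#include <stdio.h>

#define CLIENT_REPROCESSING_COMMAND (1ULL << 0)   /* illustrative bit value */

typedef struct client {
    unsigned long long flags;
    long long block_deadline_ms;                  /* absolute unblock time */
} client;

/* Only compute a fresh deadline the first time a command blocks; a
 * re-processed command keeps its original deadline. */
static void blockForKeys(client *c, long long now_ms, long long timeout_ms) {
    if (!(c->flags & CLIENT_REPROCESSING_COMMAND))
        c->block_deadline_ms = now_ms + timeout_ms;
}

int main(void) {
    client c = { 0, 0 };
    blockForKeys(&c, 1000, 500);                  /* first processing */
    c.flags |= CLIENT_REPROCESSING_COMMAND;
    blockForKeys(&c, 1200, 500);                  /* re-processing after unblock */
    printf("%lld\n", c.block_deadline_ms);        /* 1500, not reset to 1700 */
    return 0;
}
```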
Affected BLOCK cases:
- list / zset / stream, added test cases for each.
Unaffected cases:
- module (never re-process the commands).
- WAIT / WAITAOF (never re-process the commands).
Fixes #12998.
Signed-off-by: Ping Xie <pingxie@google.com>
The test was introduced in #10745, but we forgot to add it to the
test_helper.tcl, so our CI did not actually run it. This PR adds it
and ensures it passes CI tests.
Signed-off-by: Ping Xie <pingxie@google.com>
* Tested it on local instance. This was originally part of
https://github.com/valkey-io/valkey/pull/288 but I am pushing this
separately, so that we can easily merge it into the upcoming release.
```
lua debugger> server ping
<redis> ping
<reply> "+PONG"
lua debugger> redis ping
<redis> ping
<reply> "+PONG"
```
* I also searched for lua debugger related unit tests to add coverage
for this but did not find any relevant test to modify. Leaving it at
that for now.
---------
Signed-off-by: Parth Patel <661497+parthpatel@users.noreply.github.com>
ValkeyModuleEvent_MasterLinkChange was updated to use more inclusive
language, but the rename had been applied in the compatibility layer
(RedisModuleEvent_) as well.
---------
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Ref: https://github.com/redis/redis/pull/12760
Handle backward compatibility of cluster bus message extensions (enabled
by default) with older Redis clusters (< 7.0, where extensions are not
handled).
With some of the extensions enabled by default in the 7.2 version, new
nodes running 7.2 and above start sending out a larger cluster bus
message payload including the ping extensions. This caused an
incompatibility with nodes running engine versions < 7.0. Old nodes
(< 7.0) receiving the payload from new nodes (>= 7.2) would observe a
payload length (totlen) greater than the estimated length (estlen),
perform an early exit, and not process the message.
This fix introduces a flag `extensions_supported` on the clusterMsg,
indicating that the sender node can handle extensions parsing. Once a
receiving node receives a message with this flag set to 1, it updates the
new clusterNode field extensions_supported and starts sending out
extensions if it has any.
This PR also introduces a DEBUG subcommand,
`process-clustermsg-extensions`, to enable/disable the cluster message
extensions feature.
Note: a successful `PING`/`PONG` exchange with the sender is required
before a given node is marked as `extensions_supported`, and only then
will extensions messages be sent to it. This could cause a slight delay
in receiving the extensions message(s).
Added a TCL test verifying that the cluster state is healthy
irrespective of enabling/disabling the cluster message extensions
feature.
---------
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
Remove trademarked wording on configuration layer.
Following changes for release notes:
1. Rename redis.conf to valkey.conf
2. Pre-filled config in the template config file: Changing pidfile to `/var/run/valkey_6379.pid`
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
`CONFIG SET oom-score-adj handles configuration failures` test failed in
some CI jobs today.
Failed CI: https://github.com/redis/redis/actions/runs/8152519326
Not sure why the GitHub Action's docker image permissions have changed,
but the issue is similar to #12887,
where we can't assume the range of oom_score_adj that a user can change.
## Solution:
Modify the way of determining whether the current user has no privileges
or not,
instead of relying on whether the user id is 0 or not.
Fix #12792.
On Ubuntu 23 (Lunar), non-root users are not allowed to change the
oom_score_adj of a process to a value that is too low.
Since the terminal's default oom_score_adj is 200, if we run the test
from a terminal we won't be able to set the oom_score_adj of the redis
process to 9 or 22, which is too low.
Reproduction on an Ubuntu 23 (Lunar) terminal:
```sh
$ cat /proc/`pgrep redis-server`/oom_score_adj
200
$ echo 100 > /proc/`pgrep redis-server`/oom_score_adj
# success without error
$ echo 99 > /proc/`pgrep redis-server`/oom_score_adj
echo: write error: Permission denied
```
As shown in the output above, we can only lower the oom score of the
redis process to 100 at the minimum.
Modify the test so that oom_score_adj only increases and never decreases.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
This commit does not remove redis.call/pcall just yet. It also does not
rename Redis in error messages such as "Please specify at least one
argument for this redis lib call". This allows users to maintain full
backwards compatibility while introducing an option to use server.call
for new code.
I verified that the unit tests pass. Also manually verified that the
redis-server responds to server.call invocations within lua scripting.
Also verified that function registration works as expected.
```
[ok]: EVAL - is Lua able to call Redis API? (0 ms)
[ok]: EVAL - is Lua able to call Server API? (1 ms)
[ok]: EVAL - No arguments to redis.call/pcall is considered an error (0 ms)
[ok]: EVAL - No arguments to server.call/pcall is considered an error (1 ms)
```
---------
Signed-off-by: Parth Patel <661497+parthpatel@users.noreply.github.com>
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
This commit updates the following fields:
1. server_version -> valkey_version in server info. Since we would like
to advertise specific compatibility, we are making the version specific
to valkey. servername will remain as an optional indicator, and other
valkey compatible stores might choose to advertise something else.
2. We dropped redis-ver from the API. This isn't related to API
compatibility, but we didn't want to "fake" that valkey was creating an
rdb from a Redis version.
3. Renamed server-ver -> valkey_version in rdb info. Same as point one,
we want to explicitly indicate this was created by a valkey server.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
New info fields to be used to determine the valkey versioning info.
Internally, introduce new define values for "SERVER_VERSION", which is
different from the Redis compatibility version, "REDIS_VERSION".
Add two new info fields:
`server_version`: The Valkey server version.
`server_name`: Indicates that the server is valkey.
Add one new RDB field: `server_ver`, which indicates the valkey version
that produced the RDB.
Add 3 new LUA globals: `SERVER_VERSION_NUM`, `SERVER_VERSION`, and
`SERVER_NAME`, which reflect the valkey version instead of the Redis
compatibility version.
Also clean up various places where Redis naming and configuration were
being used unnecessarily.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
When we register a notification or server event in RedisModule_OnLoad,
but RedisModule_OnLoad eventually fails, triggering the notification or
server event will cause the server to crash.
If the loading fails at a later stage of moduleLoad, we do call
moduleUnload, which handles all un-registration, but when it fails on the
RedisModule_OnLoad call, we only un-register several specific things, and
these were missing:
- moduleUnsubscribeNotifications
- moduleUnregisterFilters
- moduleUnsubscribeAllServerEvents
Refactored the code to reuse the code from moduleUnload.
Fixes #12808.
(cherry picked from commit d6f19539d2414bce1b94af9f814ce09adef6d5f2)
Redis 7.2 (#9406) introduced a new modules event, `RedisModuleEvent_Key`.
This new event allows the module to read the key data just before it is removed
from the database (either deleted, expired, evicted, or overwritten).
When the key was removed from the database by active expire or eviction,
the new event was not called as part of an execution unit. This can cause
an issue if the module registers a post notification job inside the
event: this job will not be executed atomically with the
expiration/eviction operation and will not be replicated inside a
MULTI/EXEC. Moreover, the post notification job will be executed right
after the event, where it is still not safe to perform any write
operation; this violates the promise that the post notification job will
be called atomically with the operation that triggered it and **only when
it is safe to write**.
This PR fixes the issue by wrapping each expiration/eviction of a key with an execution
unit. This makes sure the entire operation will run atomically and all the post notification
jobs will be executed at the end where it is safe to write.
Tests were modified to verify the fix.
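A hedged, self-contained sketch of the shape of the fix (the enter/exit helpers and the expire function below are stand-ins for the real server internals):
```c
#include <stdio.h>

static int execution_nesting = 0;                 /* stand-in nesting counter */

static void enterExecutionUnit(void) { execution_nesting++; }
static void exitExecutionUnit(void)  {
    execution_nesting--;
    if (execution_nesting == 0)
        printf("run post-notification jobs now (safe to write)\n");
}

static void deleteExpiredKey(const char *key) {
    /* fires the key-removal module event; any post-notification job the
     * module registers here is deferred until the unit is exited */
    printf("expired: %s\n", key);
}

int main(void) {
    enterExecutionUnit();                         /* wrap the expiration */
    deleteExpiredKey("mykey");
    exitExecutionUnit();                          /* jobs run atomically at the end */
    return 0;
}
```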
(cherry picked from commit 0ffb9d2ea9f5ea7d9840bfb77543b04a63303f9b)
If we set `fsynced_reploff_pending` in `startAppendOnly`, and the fork doesn't start
immediately (e.g. there's another fork active at the time), any subsequent commands
will increment `server.master_repl_offset`, but will not cause a fsync (given they were
executed before the fork started, they just ended up in the RDB part of it)
Therefore, any WAITAOF will wait on the new master_repl_offset, but it will time out
because no fsync will be executed.
Release notes:
```
WAITAOF could timeout in the absence of write traffic in case a new AOF is created and
an AOFRW can't immediately start.
This can happen when the appendonly config is changed at runtime, but also after FLUSHALL,
and replica full sync.
```
(cherry picked from commit bfa3931a04bddec8eb37c91c25d3a77c70e33000)
When a new ACL rule was added, an attempt was made to remove
any "overlapping" rules. However, when a match was found, the search
was not resumed at the right location, but instead after the original
position of the original command.
For example, if the current rules were `-config +config|get` and a rule
`+config` was added, it would identify that `-config` matched, but it
would skip over `+config|get`, leaving the compacted rule
`-config +config`. This would be evaluated safely, but looks weird.
This bug can only be triggered with subcommands, since that is the only
way to have sequential matching rules. Resolves #12470. This is also only
present in 7.2.
I think there was also a minor risk of removing another valid rule, since
it would start the search for the next command at an arbitrary point. I
couldn't find a valid offset that would have caused a match using any of
the existing commands that have subcommands with another command.
Fix the assertion triggered when a busy script (timeout) signals ready
keys (e.g. via LPUSH), and then an arbitrary client's `allow-busy`
command steps into `handleClientsBlockedOnKeys` and tries to wake up
clients blocked on keys (like BLPOP).
Reproduction process:
1. start a redis with aof
`./redis-server --appendonly yes`
2. exec blpop
`127.0.0.1:6379> blpop a 0`
3. use another client to call a busy script, and have this script push to the blocked key
`127.0.0.1:6379> eval "redis.call('lpush','a','b') while(1) do end" 0`
4. use a new client to call an allow-busy command like auth
`127.0.0.1:6379> auth a`
BTW, this issue also breaks the atomicity of the script.
This bug has been around for many years; the old versions only have the
atomicity problem, while only 7.0/7.2 have the assertion problem.
Co-authored-by: Oran Agra <oran@redislabs.com>
This PR mainly fixes a possible integer overflow in `json_append_string()`.
When we use `cjson.encode()` to encode a string larger than 2GB, under specific
compilation flags an integer overflow may occur, leading to truncation, so that
the part of the string larger than 2GB is not encoded.
On the other hand, this overflow doesn't cause any out-of-range read or write, or a segmentation fault.
1) using -O0 for lua_cjson (`make LUA_DEBUG=yes`)
In this case, `i` will overflow, which leads to truncation.
When `i` reaches `INT_MAX+1` it overflows to INT_MIN; when compared to
`len`, `i` (1000000..00) is sign-extended to a 64-bit signed integer (1111111.....000000).
At this point `i` compares greater than `len` (the comparison is unsigned) and jumps out of the loop,
so `for (i = 0; i < len; i++)` will loop up to 2^31 times, and the part larger than 2GB will be truncated.
```asm
`i` => -0x24(%rbp)
<+253>: addl $0x1,-0x24(%rbp) ; overflow if i large than 2^31
<+257>: mov -0x24(%rbp),%eax
<+260>: movslq %eax,%rdx ; move a 32-bit value with sign extension into a 64-bit signed
<+263>: mov -0x20(%rbp),%rax
<+267>: cmp %rax,%rdx ; check `i < len`
<+270>: jb 0x212600 <json_append_string+148>
```
2) using -O2/-O3 for lua_cjson (`make LUA_DEBUG=no`, **the default**)
In this case, because signed integer overflow is undefined behavior, `i` will not overflow.
`i` will be optimized by the compiler and use 64-bit registers for all subsequent instructions.
```asm
<+180>: add $0x1,%rbx ; Using 64-bit register `rbx` for i++
<+184>: lea 0x1(%rdx),%rsi
<+188>: mov %rsi,0x10(%rbp)
<+192>: mov %al,(%rcx,%rdx,1)
<+195>: cmp %rbx,(%rsp) ; check `i < len`
<+199>: ja 0x20b63a <json_append_string+154>
```
3) using 32-bit
Because `strbuf_ensure_empty_length()` preallocates memory of length (len * 6 + 2),
on 32-bit `cjson.encode()` can only handle strings smaller than ((2 ^ 32) - 3) / 6.
So 32-bit is not affected.
Also change `i` in `strbuf_append_string()` to `size_t`.
Since its second argument `str` is taken from the `char2escape` string array, which is never
larger than 6, `strbuf_append_string()` is not at risk of overflow (the bug was unreachable).
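A minimal illustration of the index-type change applied in json_append_string() (escaping and the strbuf_* handling are omitted; only the loop index matters here):
```c
#include <stdio.h>
#include <stddef.h>

/* Using size_t for the index means the loop still covers inputs larger
 * than 2GB; with a 32-bit int the index would overflow at INT_MAX. */
static void append_escaped(char *dst, const char *src, size_t len) {
    size_t j = 0;
    for (size_t i = 0; i < len; i++) {
        dst[j++] = src[i];            /* real code escapes via char2escape[] */
    }
    dst[j] = '\0';
}

int main(void) {
    char out[16];
    append_escaped(out, "hello", 5);
    printf("%s\n", out);              /* hello */
    return 0;
}
```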
When merging a selector, we should check whether the merge
has started (i.e., whether open_bracket_start is -1) every time.
Otherwise, encountering an illegal selector pattern could succeed
and also cause memory leaks, for example:
```
acl setuser test1 (+PING (+SELECT (+DEL )
```
The above used to leak memory and succeed with only DEL being applied;
after the fix it returns an error.
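A toy, self-contained sketch of the rule the fix enforces (the real parsing works on per-argument selectors; this simplifies it to a single string):
```c
#include <stdio.h>

/* Reject a rule if a selector is opened while another is still open, or
 * if a selector is never closed; previously such input could partially
 * apply and leak the half-built selector. */
static int validate_selectors(const char *rule) {
    long open_bracket_start = -1;
    for (long i = 0; rule[i] != '\0'; i++) {
        if (rule[i] == '(') {
            if (open_bracket_start != -1) return -1;   /* nested '(' */
            open_bracket_start = i;
        } else if (rule[i] == ')') {
            if (open_bracket_start == -1) return -1;   /* stray ')' */
            open_bracket_start = -1;
        }
    }
    return open_bracket_start == -1 ? 0 : -1;          /* unclosed selector */
}

int main(void) {
    printf("%d\n", validate_selectors("(+PING (+SELECT (+DEL )"));  /* -1 */
    printf("%d\n", validate_selectors("(+PING +SELECT ) (+DEL )")); /* 0  */
    return 0;
}
```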
Co-authored-by: Oran Agra <oran@redislabs.com>
Additional test coverage for the incr/decr operations.
An integer value could be present in the raw encoding format due to an operation like APPEND. An incr/decr operation following it optimizes the string to the int encoding format.
When the redis server cluster is running in cluster-preferred-endpoint-type unknown-endpoint mode and receives a request that should be redirected to another redis server node, it does not reply with the host IP, but with an empty host like MOVED 3999 :6381.
redis-cli would then try to connect to an address without a host, which causes the issue:
```
127.0.0.1:7002> set bar bar
-> Redirected to slot [5061] located at :7000
Could not connect to Redis at :7000: No address associated with hostname
Could not connect to Redis at :7000: No address associated with hostname
not connected> exit
```
In this case, redis-cli should reuse the previous host IP when no host is provided by the server.
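A hedged sketch of the client-side fallback (function and variable names are illustrative, not the actual redis-cli code):
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* endpoint is the "host:port" taken from the -MOVED/-ASK error; prev_host
 * is the address of the node that sent the redirect. */
static void resolve_redirect(const char *endpoint, const char *prev_host,
                             char *host, size_t hostlen, int *port) {
    const char *colon = strrchr(endpoint, ':');
    size_t n = colon ? (size_t)(colon - endpoint) : strlen(endpoint);
    if (n == 0) {
        /* empty host, e.g. "MOVED 3999 :6381": reuse the previous host */
        snprintf(host, hostlen, "%s", prev_host);
    } else {
        snprintf(host, hostlen, "%.*s", (int)n, endpoint);
    }
    *port = colon ? atoi(colon + 1) : 0;
}

int main(void) {
    char host[64]; int port;
    resolve_redirect(":7000", "127.0.0.1", host, sizeof(host), &port);
    printf("%s:%d\n", host, port);   /* 127.0.0.1:7000 */
    return 0;
}
```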
---------
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Madelyn Olson <madelynolson@gmail.com>
* Fix integer overflows due to using wrong integer size.
* Add assertions / panic when overflow still happens.
* Deletion of dead code to avoid the need to maintain it.
* Some changes are not because of bugs, but rather paranoia.
* Improve cmsgpack and cjson test coverage.
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Process loss of slot ownership in cluster bus
When a node no longer owns a slot, it clears the bit corresponding
to the slot in the cluster bus messages. The receiving nodes
currently don't record the fact that the sender stopped claiming
a slot until some other node in the cluster starts claiming the slot.
This can cause a slot to go missing during slot migration when subjected
to an inopportune race with the addition of new shards or a failover.
This fix forces the receiving nodes to process the loss of ownership
to avoid spreading wrong information.
The test fails on freebsd CI:
```
*** [err]: stats: eventloop metrics in tests/unit/info.tcl
Expected '31777' to be less than '16183' (context: type eval line 17 cmd
{assert_lessthan $el_sum2 [expr $el_sum1+10000] } proc ::test)
```
The test added in #11963 fails on the FreeBSD CI, which is slow.
Increase the tolerance and also add some verbose logs; now we can
see these logs in verbose mode (for a better view):
```
eventloop metrics cycle1: 12, cycle2: 15
eventloop metrics el_sum1: 315, el_sum2: 411
eventloop metrics cmd_sum1: 126, cmd_sum2: 137
[ok]: stats: eventloop metrics (111 ms)
instantaneous metrics instantaneous_eventloop_cycles_per_sec: 8
instantaneous metrics instantaneous_eventloop_duration_usec: 55
[ok]: stats: instantaneous metrics (1603 ms)
[ok]: stats: debug metrics (112 ms)
```
When getKeysUsingKeySpecs processes a command with more than one key-spec,
and is called with a total of more than 256 keys, it'll call getKeysPrepareResult again,
but since numkeys isn't updated, getKeysPrepareResult will not bother to copy key
names from the old result (leaving these slots uninitialized). Furthermore, it did not
consider the keys it had already found when allocating more space.
Co-authored-by: Oran Agra <oran@redislabs.com>
The negative offset check was added in #9052; we realized
that this is a non-mandatory breaking change and we would
like to add it only in 8.0.
This reverts PR #9052; it will be re-introduced later in 8.0.
Optimized the performance of the SCAN command in a few ways:
1. Move the key filtering (by MATCH pattern) into the scan callback,
so as to avoid collecting the keys for later filtering.
2. Reduce memory allocations and copying (use a reference
to the original sds instead of creating an robj, avoiding an excessive
2 mallocs and one string duplication).
3. Compare the TYPE filter directly (as integers), instead of an
inefficient string compare per key.
4. Fixed a small bug: when scanning zset and hash types, maxiterations
now uses a more accurate number to avoid wrongly doubling maxiterations.
Changes **postponed** for a later version (8.0):
1. Prepare to move the TYPE filtering to the scan callback as well. This
was put on hold since it has a side effect that can be considered a
breaking change: we would not attempt to lazily expire (delete) a key
that was filtered out by not matching the TYPE (changing it would mean
the TYPE filter starts behaving the same as the MATCH filter already does
in that respect).
2. When the specified key TYPE filter is an unknown type, the server will
reply with an error immediately instead of doing a full scan that comes
back empty-handed.
Benchmark result:
For different scenarios, we obtained about 30% or more performance improvement.
Co-authored-by: Oran Agra <oran@redislabs.com>