This commit adds tests to make sure that relative and absolute expire commands
are propagated as-is to replicas, and to stop any future attempt to change that
without a proper discussion. See #8327 and #5171.
Additionally, it slightly improves the AOF test that tests the opposite (always
propagating absolute times) by covering more commands and shaving 2
seconds off the test time.
This was a regression from #7625 (only in 6.2 RC2).
This makes it possible again to implement blocking list and zset
commands using the modules API.
This commit also includes a test case for the reverse: A module
unblocks a client blocked on BLPOP by inserting elements using
RedisModule_ListPush(). This already works, but it was untested.
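For reference, here is a minimal sketch of such a module command (module and command names are hypothetical, not part of this commit): it inserts an element with RedisModule_ListPush(), which is exactly the call the new test uses to wake up a client blocked on BLPOP.

    /* Sketch of a module command that pushes into a list via
     * RedisModule_ListPush(); a client blocked on BLPOP for the same key
     * should be unblocked by this push. Module/command names are made up. */
    #include "redismodule.h"

    int MyModLPush_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc != 3) return RedisModule_WrongArity(ctx);
        RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);
        int type = RedisModule_KeyType(key);
        if (type != REDISMODULE_KEYTYPE_EMPTY && type != REDISMODULE_KEYTYPE_LIST) {
            RedisModule_CloseKey(key);
            return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);
        }
        RedisModule_ListPush(key, REDISMODULE_LIST_HEAD, argv[2]);
        RedisModule_CloseKey(key);
        return RedisModule_ReplyWithSimpleString(ctx, "OK");
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "mymod", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        if (RedisModule_CreateCommand(ctx, "mymod.lpush", MyModLPush_RedisCommand,
                                      "write deny-oom", 1, 1, 1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        return REDISMODULE_OK;
    }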
This adds basic coverage for IO threads by running the cluster tests and a few selected Redis test suite tests with IO threads enabled.
Also provides some necessary additional improvements to the test suite:
* Add --config to sentinel/cluster tests for arbitrary configuration.
* Fix --tags whitelisting which was broken.
* Add a `network` tag to some tests that are more network intensive. This is work in progress and more tests should be properly tagged in the future.
Previously, invalid configuration errors were not very specific and were in some cases hard to understand.
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
* Adds ASYNC and SYNC arguments to SCRIPT FLUSH
* Adds SYNC argument to FLUSHDB and FLUSHALL
* Adds a new config to control the default behavior of FLUSHDB, FLUSHALL and SCRIPT FLUSH.
The new behavior is as follows:
* FLUSH[ALL|DB],SCRIPT FLUSH: Determine sync or async according to the
value of lazyfree-lazy-user-flush.
* FLUSH[ALL|DB],SCRIPT FLUSH ASYNC: Always flushes the database in an async manner.
* FLUSH[ALL|DB],SCRIPT FLUSH SYNC: Always flushes the database in a sync manner.
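For example (illustrative session, not taken from the commit itself):

    CONFIG SET lazyfree-lazy-user-flush yes
    FLUSHALL              <- now flushes asynchronously, per the config above
    FLUSHALL SYNC         <- always synchronous, regardless of the config
    SCRIPT FLUSH ASYNC    <- always asynchronous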
The prefix is changed from `RM_` to `module` on the following
internal functions, to prevent them from appearing in the API docs:
RM_LogRaw -> moduleLogRaw
RM_FreeCallReplyRec -> moduleFreeCallReplyRec
RM_ZsetAddFlagsToCoreFlags -> moduleZsetAddFlagsToCoreFlags
RM_ZsetAddFlagsFromCoreFlags -> moduleZsetAddFlagsFromCoreFlags
Fixes markdown formatting errors and some functions not showing
up in the generated documentation at all.
Ruby script (gendoc.rb) fixes:
* Modified automatic insertion of backquotes:
* Don't add backquotes around names which are already preceded by a
backquote. Fixes for example \`RedisModule_Reply\*\`, which would otherwise
turn into \`\`RedisModule_Reply\`\*\` and mess up the formatting.
* Add backquotes around types such as RedisModuleString (in addition
to function names `RedisModule_[A-z()]*` and macro names
`REDISMODULE_[A-z]*`).
* Require 4 spaces indentation for disabling automatic backquotes, i.e.
code blocks. Fixes continuations of list items (indented 2 spaces).
* More permissive extraction of doc comments:
* Allow doc comments starting with `/**`.
* Make space before `*` on each line optional.
* Make space after `/*` and `/**` optional (needed when appearing on
its own line).
Markdown fixes in module.c:
* Fix code blocks not indented enough (4 spaces needed).
* Add a blank line before code blocks and lists where one was missing (needed).
* Enclose special markdown characters `_*^<>` in backticks to prevent them
from messing up formatting.
* Lists with `1)` changed to `1.` for proper markdown lists.
* Remove excessive indentation which causes text to be unintentionally
rendered as code blocks.
* Other minor formatting fixes.
Other fixes in module.c:
* Remove blank lines between doc comment and function definition. A blank
line here makes the Ruby script exclude the function in docs.
* Change zunionInterDiffGenericCommand to use lookupKeyRead if dstkey is null
* Change zrangeGenericCommand to use lookupKeyWrite if dstkey isn't null
ZRANGESTORE, ZUNION, ZINTER and ZDIFF are all new commands (6.2 RC1 and RC2).
In Redis 6.0, ZRANGE was using lookupKeyRead, and ZUNIONSTORE / ZINTERSTORE were using lookupKeyWrite.
So these bugs were introduced in 6.2 and will be resolved before it is released.
The implications of this bug are also not big:
The sole difference between lookupKeyRead and lookupKeyWrite is for commands executed on a replica that are not received from its master client (for the master, and for the master client on the replica, these two functions behave the same).
b640e2944 added a test that now fails with valgrind.
It fails for two reasons:
1) the test samples the used memory and then limits the maxmemory to
that value, but it turns out this is not atomic, and on slow machines
the background cron process that cleans out old query buffers reduces
the memory so that the setting doesn't cause eviction.
2) the dbsize was tested late, after reading some invalidation messages;
by that time more and more keys had been evicted, partially draining the
db. This is not the focus of this fix (still a known limitation).
(cherry picked from commit 080ad5b0f297bc91f38cf49a7ff25605f4fcbe64)
The test was trying to wait for the replica to start loading the rdb
from the master before it kills the master, but it was actually waiting
for ROLE to be in "sync" mode, which corresponds to REPL_STATE_TRANSFER
that starts before the actual loading starts.
Now instead it waits for the loading flag to be set.
Besides, the test was dependent on the previous configuration of the
servers, relying on the fact that the replica is configured to persist
(either RDB or AOF); now it is set explicitly.
(cherry picked from commit 37cf8d8f7faedce704d4c18319e6bf72703e1ddc)
Saving a string of more than 2GB to the RDB file can result in a corrupt RDB, or in a failure in rdbSave.
(cherry picked from commit 3fb4197a742d064236ae4fdccf9dc00ed3b538d3)
This will allow usage like: RedisModule_CreateStringPrintf(ctx, "%s %c %s", "string1", 0, "string2");
On large strings, the previous code would incrementally retry, doubling the output buffer each time.
Now it uses the return value of snprintf and grows to the right size in one step,
and also avoids an excessive strlen in sdscat at the end.
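A minimal standalone sketch of that sizing pattern (the measure-once-then-allocate idea only, not the actual RM_CreateStringPrintf code):

    /* Call vsnprintf() once with a small stack buffer to learn the required
     * length, then allocate exactly once instead of repeatedly doubling the
     * buffer and retrying. */
    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    char *format_once(const char *fmt, ...) {
        char stackbuf[128];
        va_list ap, ap2;
        va_start(ap, fmt);
        va_copy(ap2, ap);
        /* First pass: returns the length the output would need. */
        int needed = vsnprintf(stackbuf, sizeof(stackbuf), fmt, ap);
        va_end(ap);
        if (needed < 0) { va_end(ap2); return NULL; }
        char *out = malloc((size_t)needed + 1);
        if (!out) { va_end(ap2); return NULL; }
        if ((size_t)needed < sizeof(stackbuf)) {
            memcpy(out, stackbuf, (size_t)needed + 1);      /* already fits */
        } else {
            vsnprintf(out, (size_t)needed + 1, fmt, ap2);   /* second pass, exact size */
        }
        va_end(ap2);
        return out;
    }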
(cherry picked from commit 1ad4e18394eac68ab8bae1ef5a2920086ce0f9ba)
The bug occurs when 'callback' re-registers itself to a point
in the future and its execution time is non-negligible:
'now' refers to the time BEFORE the callback was executed and is used
to calculate 'next_period'.
We must get the actual current time when calculating 'next_period'.
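A small self-contained illustration of the scheduling difference (names and numbers are made up; this is not the actual timer code):

    /* The next firing time must be computed from the clock as read *after*
     * the callback returns, since the callback's own execution time is
     * not negligible. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static long long now_ms(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
    }

    static void slow_callback(void) { usleep(50 * 1000); /* pretend to do work */ }

    int main(void) {
        long long period_ms = 100;
        long long before = now_ms();
        slow_callback();
        /* Wrong: schedules relative to the pre-callback timestamp. */
        long long wrong_next = before + period_ms;
        /* Right: re-read the clock after the callback finished. */
        long long right_next = now_ms() + period_ms;
        printf("wrong schedule fires %lld ms too early\n", right_next - wrong_next);
        return 0;
    }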
(cherry picked from commit 9cbdc8dcdbaf96869251dd9728c0876adf1b2492)
The RMAPI_FUNC_SUPPORTED was defined in the wrong place in redismodule.h
and was not visible to modules.
(cherry picked from commit 560d2dc0081bc35b81b6a64d25c4077fa2e69ad9)
Turns out this was broken since version 4.0 when we added sds size
classes.
The cluster code uses sds for the receive buffer, and then casts it to a
struct and accesses a 64 bit variable.
This commit replaces the use of sds with a simple reallocated buffer.
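For context, a tiny sketch of why such a cast is fragile in general (an illustration of the hazard only; the commit itself simply switches to a plain heap-allocated buffer, which is naturally aligned):

    /* Dereferencing a 64-bit field through a pointer cast into an
     * arbitrarily aligned byte buffer is undefined behaviour on
     * strict-alignment platforms; copying the bytes out is always safe.
     * Names are made up. */
    #include <stdint.h>
    #include <string.h>

    uint64_t read_offset(const char *rcvbuf, size_t field_off) {
        uint64_t v;
        memcpy(&v, rcvbuf + field_off, sizeof(v));  /* safe for any alignment */
        return v;
        /* whereas: return *(const uint64_t*)(rcvbuf + field_off);
         * may crash if the address isn't 8-byte aligned. */
    }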
(cherry picked from commit b71d06c269878887eed85b63a731c3d4ad7a8b12)
When client tracking is enabled, signalModifiedKey can increase memory usage;
this can cause the loop in performEvictions to keep running, since it was measuring
the memory usage impact of signalModifiedKey.
The section that measures the memory impact of the eviction should cover just the dbDelete,
excluding keyspace notification, client tracking, and propagation to AOF and replicas.
This resolves part of the problem described in #8069
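A rough sketch of the measured window after the change (approximate, not the exact performEvictions code; it uses internal APIs, so it only makes sense inside the Redis tree):

    /* Only the memory released by deleting the key is counted toward
     * mem_freed; notifications, invalidation and propagation run outside
     * the measured window, so any memory they allocate can no longer keep
     * the eviction loop spinning. */
    #include "server.h"

    static void evictOneKey(redisDb *db, robj *keyobj, long long *mem_freed) {
        long long delta = (long long) zmalloc_used_memory();
        if (server.lazyfree_lazy_eviction)
            dbAsyncDelete(db, keyobj);
        else
            dbSyncDelete(db, keyobj);
        delta -= (long long) zmalloc_used_memory();
        *mem_freed += delta;

        /* Outside the measured window: */
        signalModifiedKey(NULL, db, keyobj);
        notifyKeyspaceEvent(NOTIFY_EVICTED, "evicted", keyobj, db->id);
    }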
p.s. fix took 1 minute, test took about 3 hours to write.
(cherry picked from commit b640e2944e42759412ac67228cf64c43dfbed9c3)
This PR not only fixes the problem that swapdb does not make the
transaction fail, but also optimizes the FLUSHALL and FLUSHDB commands to
set the CLIENT_DIRTY_CAS flag, avoiding unnecessary traversal of clients.
FLUSHDB was changed to first iterate over all watched keys, and then over the
clients watching each key, instead of iterating through all clients and,
for each one, over its watched keys.
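An approximate sketch of the new iteration, based on the description above (not the verbatim multi.c code; it relies on the internal db->watched_keys dict, which maps each watched key to the list of clients watching it):

    #include "server.h"

    static void dirtyAllWatchers(redisDb *emptied_db) {
        dictIterator *di = dictGetSafeIterator(emptied_db->watched_keys);
        dictEntry *de;
        while ((de = dictNext(di)) != NULL) {
            list *clients = dictGetVal(de);    /* clients watching this key */
            listIter li;
            listNode *ln;
            listRewind(clients, &li);
            while ((ln = listNext(&li)) != NULL) {
                client *c = listNodeValue(ln);
                c->flags |= CLIENT_DIRTY_CAS;  /* make their EXEC fail */
            }
        }
        dictReleaseIterator(di);
    }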
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit f571f8467cecb69a1c7c6810a445addfb802d85a)
This isn't a leak, just a warning due to an unreachable
allocation in the fork child.
Problem created by 4192faa
(cherry picked from commit 997c2dc7ec652ac49a6f1a3a5bb79627aff1a545)
Turns out that when the fork child crashes, the crash log was deleting
the pidfile from the disk (although the parent is still running).
Now we set the pidfile of the fork process to NULL so the fork process
never deletes it.
(cherry picked from commit 4192faa9821aa4c19be3bd245d8366a1bc1b0332)
Instead of asking only for the extra new space it wanted, it asked to grow the
string by the size it already had, too.
I.e. a string of 1000 bytes, needing to grow by 10 bytes, would have been
asking for an **additional** 1010 bytes.
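If the growth went through the sds API (an assumption here, for illustration only), the distinction looks like this:

    /* sdsMakeRoomFor() takes the *additional* number of bytes wanted,
     * not the desired final length. */
    #include "sds.h"

    sds grow_by_ten(sds s) {                    /* e.g. s is 1000 bytes long    */
        return sdsMakeRoomFor(s, 10);           /* correct: 10 extra bytes      */
        /* return sdsMakeRoomFor(s, sdslen(s)+10);  the bug: 1010 extra bytes   */
    }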
(cherry picked from commit f2bde2268a5420ed2d481fc0141cb73de0b1f200)
This is a recent problem, introduced by 9b2a426 (redis 6.0).
The implications are:
The sole difference between lookupKeyRead and lookupKeyWrite is for commands
executed on a replica that are not received from its master client (for the master,
and for the master client on the replica, these two functions behave the same).
Since SORT is a write command, this bug only implicates a writable-replica.
And these are its implications:
- SORT STORE will behave as it did before the above mentioned commit (like before
redis 6.0): on a writable replica, an already logically expired key would have
appeared missing (the store dest key would be deleted, instead of being populated
with the data from the already logically expired key).
- SORT (the non-store variant, which in theory could have been executed on a
read-only replica if it weren't for the write flag) will, in redis 6.0, have a new bug
and return the data from the already logically expired key instead of an empty response.
(cherry picked from commit 6230ba081109beaf367d8fe3552bc848dc3896f4)
Turns out the RDB checksum in Redis 6.0 on big-endian is broken.
It always returned 0, so RDB files are generated as if the checksum were
disabled, and will load ok on both little-endian and big-endian.
But a big-endian build will not be able to load RDB files generated on little-endian or by older versions.
Similarly, DUMP and RESTORE will work within the same version (0==0),
but will be unable to exchange dump payloads with little-endian or old versions.
(cherry picked from commit a67621f495762dfe09f9a7e28d12b0729a7b0d12)
getRDB is "designed" to work in two modes: one for redis-cli --rdb and
one for redis-cli --cluster backup.
In the latter case it uses the hiredis connection from the cluster nodes,
and it used to free it without nullifying the context, so a later
attempt to free the context would crash.
I suppose the reason it seems to want to free the hiredis context ASAP
is that it wants to disconnect the replica link, so that replication
buffers will not be accumulated.
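An illustrative pattern for avoiding that kind of double free (not the actual redis-cli code):

    #include <hiredis/hiredis.h>

    /* Clear the pointer after freeing, so any later cleanup path sees NULL
     * instead of a dangling context. */
    static void closeContext(redisContext **ctx) {
        if (*ctx) {
            redisFree(*ctx);
            *ctx = NULL;
        }
    }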
(cherry picked from commit 93b8b139305a99678528fabc9bf9cc5e9133b5a4)
When a Lua script returns a map to redis (a feature which was added in
redis 6 together with RESP3), it would have returned the value first and
the key second.
If the client was using RESP2, it was getting them out of order, and if
the client was in RESP3, it was getting a map of value => key.
This was happening regardless of the Lua script using redis.setresp(3)
or not.
This also affects a case where the script was returning a map which it got
from redis by doing something like: redis.setresp(3); return redis.call()
This fix is a breaking change for redis 6.0 users who happened to rely
on the wrong order (either ones that used redis.setresp(3), or ones that
returned a map explicitly).
This commit also includes two other changes in the tests:
1. The test suite now handles RESP3 maps as dicts rather than nested
lists
2. Remove some redundant (duplicate) tests from tracking.tcl
(cherry picked from commit bcde1d978d74f84cb4a66dc8bbd746842217f8b2)
The crash log attempts to print the current client info, and when it
does that it attempts to check if the first argument happens to be a key
but it did so for commands with no arguments too, which caused the crash
log to crash halfway through and not reach its end.
(cherry picked from commit 318d58192289a471987aa12ca913a3569d4f54d0)