When client tracking is enabled, signalModifiedKey can increase memory usage.
This could cause the loop in performEvictions to keep running, since the loop
was also measuring the memory usage impact of signalModifiedKey.
The section that measures the memory impact of the eviction should cover only
dbDelete, excluding the keyspace notification, client tracking, and propagation
to the AOF and replicas.
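A minimal sketch of the intended shape of the measurement inside the eviction
loop (a paraphrase from memory, not the actual patch; db, keyobj and mem_freed
come from the surrounding loop in evict.c):
```
/* Only the delete itself is measured towards mem_freed. */
long long delta = (long long) zmalloc_used_memory();
if (server.lazyfree_lazy_eviction)
    dbAsyncDelete(db, keyobj);
else
    dbSyncDelete(db, keyobj);
delta -= (long long) zmalloc_used_memory();
mem_freed += delta;

/* Side effects that may allocate memory (tracking invalidation, keyspace
 * notification) now run outside the measured window. */
signalModifiedKey(NULL, db, keyobj);
notifyKeyspaceEvent(NOTIFY_EVICTED, "evicted", keyobj, db->id);
```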
This resolves part of the problem described in #8069
p.s. fix took 1 minute, test took about 3 hours to write.
(cherry picked from commit c4fdf09c0584a3cee32b92f01b7958c72776aedc)
This PR not only fixes the problem that SWAPDB does not make the transaction
fail, but also optimizes how the FLUSHALL and FLUSHDB commands set the
CLIENT_DIRTY_CAS flag, avoiding an unnecessary traversal of clients.
FLUSHDB was changed to first iterate over all watched keys, and then over the
clients watching each key, instead of iterating over all clients and, for each
one, over its watched keys.
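A rough sketch of the new iteration order (a paraphrase, not the actual patch;
it assumes the usual Redis dict/list helpers and that db->watched_keys maps a
key to the list of clients watching it):
```
/* Flag every client watching any key in the flushed db, without walking
 * the full client list. */
dictIterator *di = dictGetSafeIterator(db->watched_keys);
dictEntry *de;
while ((de = dictNext(di)) != NULL) {
    list *clients = dictGetVal(de);   /* clients watching this key */
    listIter li;
    listNode *ln;
    listRewind(clients, &li);
    while ((ln = listNext(&li)) != NULL) {
        client *c = listNodeValue(ln);
        c->flags |= CLIENT_DIRTY_CAS; /* make their MULTI/EXEC fail */
    }
}
dictReleaseIterator(di);
```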
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 10f94b0ab12f9315939dcccf39d64b9388c0c7fa)
When a Lua script returns a map to redis (a feature which was added in
redis 6 together with RESP3), it would have returned the value first and
the key second.
If the client was using RESP2, it was getting them out of order, and if
the client was in RESP3, it was getting a map of value => key.
This was happening regardless of the Lua script using redis.setresp(3)
or not.
This also affects a case where the script was returning a map which it got
from redis by doing something like: redis.setresp(3); return redis.call()
This fix is a breaking change for redis 6.0 users who happened to rely
on the wrong order (either ones that used redis.setresp(3), or ones that
returned a map explicitly).
This commit also includes two other changes in the tests:
1. The test suite now handles RESP3 maps as dicts rather than nested
lists
2. Remove some redundant (duplicate) tests from tracking.tcl
(cherry picked from commit 2017407b4d1d19a91af1e7c0b199f2c1775dbaf9)
Module blocked clients cache the response in a temporary client. The reply
list in this client would be affected by the recent fix in #7202, but when
the reply is later copied into the real client, it bypassed all the output
buffer limit checks, which resulted in both responding with a partial
response to the client and not disconnecting it at all.
(cherry picked from commit 48efc25f749c3620f9245786582ac76cb40e9bf4)
When the replica diskless-load type is swapdb in cluster mode, we didn't back up
the keys-to-slots map, so we would lose it if the sync failed.
Now we back up the keys-to-slots map first, and restore it properly when the sync fails.
This commit includes a refactor/cleanup of the backups mechanism (moving it to db.c and restructuring it a bit).
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit b55a827ea2e19bd6cd48f216e8e6caa34299f9b9)
The bug was introduced by #5021, which only attempted to prevent EXISTS on an
already expired key from returning 1 on a replica.
Before that commit, dbExists was used instead of
lookupKeyRead (which had the undesired effect of "touching" the LRU/LFU).
Other than that, this commit fixes OBJECT to also come up empty handed on
expired keys on a replica,
and DEBUG DIGEST-VALUE to behave like DEBUG OBJECT (get the data from
the key regardless of its expired state).
(cherry picked from commit f8ae991717f10c837c1a76b2954dae56ecb0e6bc)
In redisFork(), we didn't set the child pid, so updateDictResizePolicy()
didn't take effect, which isn't friendly for copy-on-write.
The bug was introduced in Redis 6.0: 56258c6
(cherry picked from commit 89c78a980807aa1de0a6d0ccde6042450333a783)
The tests sometimes fail to find a log message.
Recently I added a print that shows the log files that are searched,
and it shows that the message was indeed there.
The only reason I can think of for this search to fail is that we
happened to read an incomplete line, which didn't match our pattern, and
then on the next iteration we would continue reading from the line after
it.
The fix is to always re-evaluate the previous line.
(cherry picked from commit 4e2e5be201439cae4c0a03cfc8b6a60be4bff625)
This wrong behavior was backed by a test and by documentation, and dates back to 2010.
But it makes no sense to anyone involved, so it was decided to change that.
Note that 20eeddf (invalidate watch on expire on access) was released in 6.0 RC2,
and 2d1968f (invalidate watch when key is evicted) was released in 6.0.0 GA;
both make similar changes.
(cherry picked from commit 556acefe7556443b6d1741d804add92047bf4a8b)
This cleans up and simplifies the API by passing the command name as the
first argument. Previously the command name was specified explicitly,
but was still included in the argv.
(cherry picked from commit 056a43e1a615080cf9ba405be0538a6a979c821e)
* Introduce new APIs: RM_GetContextFlagsAll and
RM_GetKeyspaceNotificationFlagsAll, which return the
full flags mask of each feature. The module writer can
check, based on this value, whether the flags they need
are supported.
* For each flag, introduce a new value in redismodule.h
representing the LAST value. It should be there
as a reminder to update it when a new value is added,
and it is also used in the code to calculate the full
flags mask (assuming flags increase incrementally).
In addition, document that the module writer should not use
the LAST flag directly and should use the *FlagsAll APIs instead.
* Introduce a new API, RM_IsSubEventSupported, that returns, for a given
event and subevent, whether or not the subevent is supported.
* Introduce a new macro RMAPI_FUNC_SUPPORTED(func) that returns whether
or not a function API is supported by comparing it to NULL.
* Introduce a new API, int RM_GetServerVersion(), that returns the
current Redis version in the format 0x00MMmmpp, e.g. 0x00060008.
* Changed unstable version from 999.999.999 to 255.255.255
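A minimal sketch of how a module might use these additions (illustrative only;
it uses the module-facing RedisModule_ names, and the module name "featcheck"
is hypothetical):
```
#include "redismodule.h"

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "featcheck", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;

    /* Older servers leave unresolved API function pointers as NULL, which is
     * exactly what RMAPI_FUNC_SUPPORTED checks. */
    if (RMAPI_FUNC_SUPPORTED(RedisModule_GetContextFlagsAll)) {
        int allflags = RedisModule_GetContextFlagsAll();
        if (!(allflags & REDISMODULE_CTX_FLAGS_MULTI))
            RedisModule_Log(ctx, "warning", "MULTI context flag is not supported");
    }

    if (RMAPI_FUNC_SUPPORTED(RedisModule_GetServerVersion)) {
        int ver = RedisModule_GetServerVersion();  /* e.g. 0x00060008 == 6.0.8 */
        RedisModule_Log(ctx, "notice", "server version 0x%08x", ver);
    }
    return REDISMODULE_OK;
}
```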
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
(cherry picked from commit adc3183cd2b54238424895e4298548df4526f8c3)
This API function makes it possible to retrieve the X.509 certificate
used by clients to authenticate TLS connections.
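A hedged sketch of a module command handler that replies with the calling
client's certificate, assuming the new call is RedisModule_GetClientCertificate
(illustrative only; the exact ownership rules of the returned string should be
checked against the module API reference):
```
#include "redismodule.h"

/* Hypothetical command handler, e.g. registered as MYMOD.CLIENTCERT. */
int ClientCertCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    uint64_t id = RedisModule_GetClientId(ctx);
    /* Expected to return the PEM-encoded certificate, or NULL for non-TLS
     * clients or clients that didn't present one. */
    RedisModuleString *cert = RedisModule_GetClientCertificate(ctx, id);
    if (cert == NULL) return RedisModule_ReplyWithNull(ctx);
    RedisModule_ReplyWithString(ctx, cert);
    RedisModule_FreeString(ctx, cert);
    return REDISMODULE_OK;
}
```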
(cherry picked from commit 0aec98dce2acbd280ad8ff4feac631e8afa833b1)
The main motivation here is to provide a way for modules to create a
single, global context that can be used for logging.
Currently, it is possible to obtain a thread-safe context that is not
attached to any blocked client by using `RM_GetThreadSafeContext`.
However, the context obtained that way is not linked to the module identity,
so log messages produced are not tagged with the module name.
Ideally we'd fix this in `RM_GetThreadSafeContext` itself, but as it
doesn't accept the current context as an argument, there's no way to do
that in a backwards compatible manner.
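A hedged sketch of the intended usage, assuming the new call is named
RedisModule_GetDetachedThreadSafeContext (illustrative only):
```
#include "redismodule.h"

/* Global, module-tagged context usable for logging from any thread. */
static RedisModuleCtx *detached_ctx = NULL;

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "mylogger", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Unlike RedisModule_GetThreadSafeContext(NULL), this context carries the
     * module identity, so log lines are tagged with the module name. */
    detached_ctx = RedisModule_GetDetachedThreadSafeContext(ctx);
    return REDISMODULE_OK;
}

/* Example: called from a background thread owned by the module. */
void LogFromThread(void) {
    RedisModule_Log(detached_ctx, "notice", "background worker is alive");
}
```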
(cherry picked from commit 907da0580b57013c9e5c38b27f351597c72e4e25)
This is essentially the same as calling COMMAND GETKEYS but provides a
more efficient interface that can be used in every context (i.e. not a
Redis command).
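A hedged sketch, assuming the new call is RedisModule_GetCommandKeys and that
it returns an array of key positions the caller frees (illustrative only;
check the module API reference for the exact contract):
```
#include <errno.h>
#include "redismodule.h"

/* Given a command's argv/argc, log the positions of its key arguments. */
void LogCommandKeys(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    int numkeys = 0;
    int *keyidx = RedisModule_GetCommandKeys(ctx, argv, argc, &numkeys);
    if (keyidx == NULL) {
        RedisModule_Log(ctx, "warning", "could not extract keys (errno=%d)", errno);
        return;
    }
    for (int i = 0; i < numkeys; i++)
        RedisModule_Log(ctx, "notice", "key argument at position %d", keyidx[i]);
    RedisModule_Free(keyidx);
}
```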
(cherry picked from commit 7d117d7591656e947f526f5d5f8a022b88b38ad9)
I suppose that it was overlooked, since until recently none of the blocking commands were readonly.
Other changes:
- add a test for the above.
- add better support for additional (and deferring) clients for
cluster tests
- improve a test which left the client in MULTI state.
(cherry picked from commit 216c1106099f7c66c991a5a811c8fbacbb0ab17c)
Track and report memory used by clients' argv.
This is very useful in case clients started sending a command and didn't
complete it, in which case the first args of the command are already
trimmed from the query buffer.
In an effort to avoid cache misses and overheads while keeping track of
these, I avoid calling sdsZmallocSize and instead use the sdslen /
bulk-len, which can at least give some insight into the problem.
This memory is now added to the total clients memory usage, as well as to
the client list.
(cherry picked from commit bea40e6a41e31a52e9e6efee77ea5a4bd873b759)
PROBLEM:
[$rd1 read] reads invalidation messages one by one, so it's never going to see the second invalidation message produced after INCR b, whether or not it exists. Adding another read would block in case no invalidation message is produced.
FIX:
We switch the order of "INCR a" and "INCR b" - now "INCR b" comes first. We still only read the first invalidation message produced. If an invalidation message is wrongly produced for b, it will be produced before the one for a, since "INCR b" comes before "INCR a".
Co-authored-by: Nitai Caro <caronita@amazon.com>
(cherry picked from commit 8fb89a572892e600146bd933c4f8f99d46a519c7)
Before this commit, we would have continued to add replies to the reply buffer even if the client
output buffer limit was reached, so the used memory would keep increasing over the configured limit.
What's more, we shouldn't write any reply to a client that has the 'CLIENT_CLOSE_ASAP' flag set,
because that doesn't conform to its definition, and we close all clients flagged with
'CLIENT_CLOSE_ASAP' in 'beforeSleep'.
Because of code execution order, before this change we might first write part of the replies to
the socket before disconnecting it, but in fact we may not be able to send the full replies to clients
since the OS socket buffer is limited. This unexpected behavior made some commands appear to work,
for instance ACL DELUSER: if the client deletes the current user, we need to send a reply to the client
and close the connection, but before, we closed the client first and then wrote the reply to the reply
buffer. We shouldn't rely on this, even though it works well in most cases.
We add a flag 'CLIENT_CLOSE_AFTER_COMMAND' to mark clients. This flag means we will close the
client after executing commands and sending all of the replies, so that we can write replies to the
reply buffer while executing commands, send the replies to clients, and close them later.
We also fix some implicit problems. If the client output buffer limit is enforced in 'multi/exec',
all commands will be executed completely in Redis, but the client will not read any reply (rather
than a partial reply). Moreover, if the client executes 'ACL deluser' on the user it is currently
using inside 'multi/exec', it will not read the replies after 'ACL deluser', just as if it had
executed 'client kill' on itself inside 'multi/exec'.
We added some tests for output buffer limit breach during multi-exec, using a pipeline of
many small commands rather than one with a big response.
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 57709c4bc663ddcb9313777c551a92dabf07095e)
We're already using bg_unlink in several places to delete the rdb file in the background,
and avoid paying the cost of the deletion from our main thread.
This commit uses bg_unlink to remove the temporary rdb file in the background too.
However, in case we delete that rdb file just before exiting, we don't actually wait for the
background thread or the main thread to delete it, and just let the OS clean up after us,
i.e. we open the file, unlink it, and exit with the fd still open.
Furthermore, rdbRemoveTempFile can be called from a thread and was using snprintf, which is
not async-signal-safe; we now use ll2string instead.
(cherry picked from commit b002d2b4f1415f4db805081bc8f5b85d00f30e33)
Before this commit, the following commands did not show the --tls option:
./runtest-cluster --help
./runtest-sentinel --help
(cherry picked from commit 98c8ac0d705608e866a8c0055b09d4a40655f492)
This test was nearly always failing on MacOS github actions.
This is because of bugs in the test that caused it to nearly always run
all 3 attempts and just look at the last one as the pass/fail criteria,
i.e. the test was nearly always running all 3 attempts and still sometimes
succeeded. This is because the break condition was different from the test
completion condition.
The reason the test succeeded is that the break condition tested the
results of all 3 tests (PSETEX/PEXPIRE/PEXPIREAT), but the success check
at the end was only testing the result of PSETEX.
The reason the PEXPIREAT test nearly always failed is that it was
getting the current time wrong: it got the current second and lost
the sub-second part, so the only chance for it to succeed was if it ran
right when a certain second started.
Because I now get the time from redis, adding another round trip, I
added another 100ms to the PEXPIRE test to make it less fragile, and
also added many more attempts.
Also added many more attempts before failure, to account for slow platforms,
github actions and valgrind.
(cherry picked from commit ed9bfe226289759756126f35583394e5db93048f)
The key save delay is too short and on certain systems the child process
is gone before we have a chance to inspect it.
(cherry picked from commit b2a73c404bf277bac287c72494a4c4cd2ba02f8c)
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 042189fd8707544139337b3ddcf38b5c5fea1bf0)
This is a catch-all test to confirm that the rewrite produces valid
output for all parameters and that this process does not introduce
undesired configuration changes.
(cherry picked from commit a8b7268911803f4ac827ab22e8e5b10134664832)
There was a bug: although cluster replicas would allow read commands,
they would not allow a MULTI-EXEC that's composed solely of read commands.
Adds tests for coverage.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Eran Liberty <eranl@amazon.com>
(cherry picked from commit b120366d48d9e488a406965773e64f29ba2946f7)
If there are nested tests and nested servers, we need to restore the
previous value of cur_test when a test exits.
Example:
```
test {test 1} {
    start_server {
        test {test 1.1 - master only} {
        }
        start_server {
            test {test 1.2 - with replication} {
            }
        }
    }
}
```
When `test 1.1 - master only` exits, we're still inside `test 1`.
(cherry picked from commit 0a1e7341935dbca4bae582de1a4a26d5ed4c652d)
1) cur_test: when restart_server is used, a "no such variable" error occurs
./runtest --single integration/rdb
    test {client freed during loading}
        SET ::cur_test
        restart_server
            kill_server
                test "Check for memory leaks (pid $pid)"
                    SET ::cur_test
                    UNSET ::cur_test
        UNSET ::cur_test // This global variable has already been unset.
2) `ps --ppid` is not available on the macOS platform; it can be replaced with
`pgrep -P pid`.
(cherry picked from commit f22fa9594d536cb53f83ed8e508c03d4278778b0)
This test was failing from time to time, see discussion at the bottom of #7635.
This was probably due to timing: the DEBUG SLEEP executed by redis-cli
didn't sleep for enough time.
This commit changes:
1) use SET-ACTIVE-EXPIRE instead of DEBUG SLEEP
2) reduce many `after` sleeps with retry loops to speed up the test.
3) add many comments explaining the different steps of the test and
its purpose.
4) config appendonly before populating the volatile keys, so that they'll
be part of the AOF command stream rather than the preamble RDB portion.
Other complications: recently kill_instance switched from SIGKILL to
SIGTERM, and this would sometimes fail since there was an AOFRW running
in the background. Now we wait for it to end before attempting the kill.
(cherry picked from commit b491d477c3062d1fb064a794d74643c82d1c4adc)
There is an inherent race condition in port allocation for spawned
servers. If a server fails to start because a port is taken, a new port
is allocated. This fixes a problem where the logs are not truncated and
as a result a large number of unmonitored servers are started.
(cherry picked from commit 2df4cb93acabf10bb0ff39c12030791b0947e719)
2b998de46 added a file for stderr to keep the valgrind log, but I forgot to
add a similar thing when valgrind isn't being used.
The result is that `glob */err.txt` fails.
(cherry picked from commit 42ba7a1b75aa100ba99d2024420bc658439452fb)
- redirect valgrind reports to a dedicated file rather than console
- try to avoid killing instances with SIGKILL so that we get the memory
leak report (killing with SIGTERM before resorting to SIGKILL)
- search for valgrind reports when done, print them and fail the tests
- add --dont-clean option to keep the logs on exit
- fix exit error code when crash is found (would have exited with 0)
changes that affect the normal redis test suite:
- refactor check_valgrind_errors into two functions one to search and
one to report
- move the search half into util.tcl to serve the cluster tests too
- ignore "address range perms" valgrind warnings which seem irrelevant.
(cherry picked from commit 2b998de46078c172c6b19ac3b779318e7992c60a)
In some cases a command that returns an error, possibly due to a timing
issue, causes the tcl code to crash and thus prevents the rest of the
tests from running. This adds an option to make the test proceed despite
the crash.
Maybe it should be the default mode some day.
(cherry picked from commit fe5da2e60d8d6d907062f4789673fbe06fa8773e)
Reduce code duplication in aof.tcl.
Move the creation of clients into the test so that it can be skipped.
(cherry picked from commit 1b7ba44e7917082ac6d5523666d3b4ab210dfbad)
- skip full units
- skip a single test (not just a list of tests)
- when skipping a tag, skip spinning up servers, not just the tests
- skip tags when running against an external server too
- allow using multiple tags (split them)
(cherry picked from commit 677d14c2137ab50fa25c8163d20b14bc563261c7)