PROBLEM:
[$rd1 read] reads invalidation messages one by one, so it will never see a second invalidation message produced after INCR b, whether or not one exists. Adding another read would block in case no invalidation message is produced.
FIX:
We switch the order of "INCR a" and "INCR b" - now "INCR b" comes first. We still read only the first invalidation message produced. If an invalidation message is wrongly produced for b, it will be produced before that of a, since "INCR b" comes before "INCR a".
Co-authored-by: Nitai Caro <caronita@amazon.com>
(cherry picked from commit 8fb89a572892e600146bd933c4f8f99d46a519c7)
When REDISMODULE_EVENT_CLIENT_CHANGE events are delivered, modules may
want to mutate the client state (e.g. perform authentication).
This change links the module context with the real client rather than a
fake client for these events.
(cherry picked from commit 67b43f75e23ba7a3915bc87e7edaa306fd1eee65)
The client pointed to by the module context may in some cases be a fake
client. In this case, RM_Authenticate*() calls would be ineffective but
appear to succeed; this change makes them fail instead, so such cases are
easier to catch.
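A hedged module-side sketch of what this means for callers (the user name
and the NULL callback/client-id arguments here are illustrative, not a
prescribed usage):

```c
#include "redismodule.h"

/* With this change, authenticating through a context attached to a fake
 * client returns REDISMODULE_ERR instead of silently appearing to succeed. */
static int tryAuthDefaultUser(RedisModuleCtx *ctx) {
    if (RedisModule_AuthenticateClientWithACLUser(
            ctx, "default", 7, NULL, NULL, NULL) == REDISMODULE_ERR) {
        return REDISMODULE_ERR; /* e.g. the context's client is fake */
    }
    return REDISMODULE_OK;
}
```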
(cherry picked from commit cfccfbd6f4ad23c1a9feb44ebb91da9cd2a09734)
The tls-ca-cert or tls-ca-cert-dir configuration parameters are only
used when Redis needs to authenticate peer certificates, in one of these
scenarios:
1. Incoming clients or replicas, with `tls-auth-clients` enabled.
2. A replica authenticating the master's peer certificate.
3. Cluster nodes authenticating other nodes when establishing the bus
protocol connection.
(cherry picked from commit 1591e3479d46e7e21a98ae685fab2ccab076d094)
When the slaveof config is "no one", reset any pre-existing config and resume.
Also solve a memory leak when slaveof appears twice in the config,
and fail loading if the port number is out of range or not an integer.
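For illustration, a minimal sketch of the kind of validation described
(hypothetical helper, not the actual config-loading code):

```c
#include <stdlib.h>

/* Returns the port on success, or -1 if the string is not an integer or
 * the value is outside the valid TCP port range. */
static int parsePort(const char *s) {
    char *end;
    long v = strtol(s, &end, 10);
    if (*s == '\0' || *end != '\0') return -1; /* not an integer */
    if (v < 0 || v > 65535) return -1;         /* out of range */
    return (int)v;
}
```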
Co-authored-by: caozhengbin <caozb@yidingyun.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit a295770e32b4bd71ff560c5ec7bc5c0ad12bf068)
There's currently an issue with IO threads and gopher (issuing lookupKey from within the thread).
The simple fix is to just not support it for now.
(cherry picked from commit c9f00bcce2ce7a59a815642674a12c940bbe2207)
We may access and modify these two variables in a signal handler. To
guarantee they are async-signal-safe, we should declare them with the
volatile sig_atomic_t type.
It doesn't look like this could have caused any real issue, and it seems that
signals are handled in the main thread on most platforms. But we want to follow
the C and POSIX standards in signal handler functions.
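A minimal sketch of the pattern (the variable name follows the Redis
convention; the rest is illustrative):

```c
#include <signal.h>

/* volatile sig_atomic_t is the integer type the C standard guarantees can
 * be safely written by a signal handler and read by the interrupted code. */
static volatile sig_atomic_t shutdown_asap = 0;

static void sigtermHandler(int sig) {
    (void)sig;
    shutdown_asap = 1; /* async-signal-safe store */
}
```

The main loop then polls shutdown_asap and shuts down cleanly once it
becomes nonzero.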
(cherry picked from commit f1863a1fe760610cebfd1c121623a3c12a79d600)
Make sure we handle short writes correctly, sync to disk after writing and use
rename to make sure the replacement is actually atomic.
In any case of failure, the old configuration will remain in place.
Also, add some additional logging to make it easier to diagnose rewrite problems.
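A minimal sketch of that write/fsync/rename sequence, assuming a
caller-provided temp path (this is not the actual rewrite code):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int replaceFileAtomically(const char *path, const char *tmp,
                          const char *buf, size_t len) {
    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) return -1;
    while (len > 0) { /* retry short writes */
        ssize_t n = write(fd, buf, len);
        if (n <= 0) { close(fd); unlink(tmp); return -1; }
        buf += n;
        len -= (size_t)n;
    }
    if (fsync(fd) == -1 || close(fd) == -1) { unlink(tmp); return -1; }
    /* rename() is atomic, so on any failure above the old file is intact. */
    return rename(tmp, path);
}
```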
(cherry picked from commit c30bd02c9d251c63b49ee77a6fd456fb82ff1382)
We should sync the temp DB file before renaming it, as rdb_fsync_range does
not use the `SYNC_FILE_RANGE_WAIT_AFTER` flag.
Refer to the `Linux Programmer's Manual`:

    SYNC_FILE_RANGE_WAIT_AFTER
        Wait upon write-out of all pages in the range after performing any write.
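A sketch of why an explicit sync is still needed (Linux-specific; the helper
name is made up):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* SYNC_FILE_RANGE_WRITE only initiates write-out and returns immediately;
 * without SYNC_FILE_RANGE_WAIT_AFTER the data may still be in flight, so
 * an fsync() is required before the file can safely be renamed. */
static int flushBeforeRename(int fd) {
    if (sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE) == -1) return -1;
    return fsync(fd);
}
```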
(cherry picked from commit 0d62caab2113fc12077aadd469e80dd19ca78db2)
When fclose fails, the previous implementation would have attempted to call
fclose again, which can in theory lead to a segfault.
Other changes:
check for a non-zero return value as failure rather than a specific error code.
This doesn't fix a real bug, just a minor cleanup.
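A small sketch of the hazard (illustrative function, not the patched code):

```c
#include <stdio.h>

/* fclose() invalidates the FILE* even when it fails, so it must never be
 * called twice; any non-zero return is treated as failure. */
static int writeAndClose(FILE *fp, const char *buf, size_t len) {
    int ok = fwrite(buf, 1, len, fp) == len;
    if (fclose(fp) != 0) ok = 0; /* do NOT fclose(fp) again here */
    return ok ? 0 : -1;
}
```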
(cherry picked from commit 323029baa6958781ffae5da331dfc918d66a7117)
Before this commit, we would have continued to add replies to the reply buffer even if the client
output buffer limit was reached, so the used memory would keep increasing over the configured limit.
What's more, we shouldn't write any reply to a client that has the 'CLIENT_CLOSE_ASAP' flag
set, because that doesn't conform to the flag's definition and we close all clients flagged with
'CLIENT_CLOSE_ASAP' in 'beforeSleep'.
Because of code execution order, we previously might write part of the replies to
the socket before disconnecting it, but we could not guarantee that the full replies
reached the client, since the OS socket buffer is limited. Still, this unexpected
behavior made some commands appear to work: for instance with ACL DELUSER, if the client
deletes the current user we need to send the reply and then close the connection, but we
used to close the client first and only then write the reply to the reply buffer. We
shouldn't rely on this, despite the fact that it works in most cases.
We add a flag 'CLIENT_CLOSE_AFTER_COMMAND' to mark such clients; it means we will close the
client after executing the command and sending the entire reply. This way we can write replies
to the reply buffer while executing commands, send the replies to the clients, and close them later.
We also fix some implicit problems. If the client output buffer limit is reached inside
'multi/exec', all commands are still executed completely in redis, but the client gets no
reply at all rather than partial replies. Moreover, if the client executes 'ACL deluser'
on the user it is currently using inside 'multi/exec', it will not read the replies after
'ACL deluser', just as if it had executed 'client kill' on itself in 'multi/exec'.
We added tests for an output buffer limit breach during multi-exec, using a pipeline of
many small commands rather than a single command with a big response.
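An illustrative sketch of the resulting reply-path check (the flag names come
from this change; the values and the helper are hypothetical):

```c
#define CLIENT_CLOSE_ASAP          (1 << 0) /* closed in beforeSleep(), no more replies */
#define CLIENT_CLOSE_AFTER_COMMAND (1 << 1) /* closed after the full reply is sent */

typedef struct client { int flags; } client;

/* Returns 0 if a reply may be appended for this client, -1 otherwise. */
static int mayAppendReply(client *c) {
    if (c->flags & CLIENT_CLOSE_ASAP) return -1;
    return 0; /* CLIENT_CLOSE_AFTER_COMMAND clients still receive replies */
}
```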
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 57709c4bc663ddcb9313777c551a92dabf07095e)
This happens only on diskless replicas when attempting to reconnect after
failing to load an RDB file. It is more likely to occur with larger datasets.
After reconnection is initiated, replicationEmptyDbCallback() may get called
and try to write to an unconnected socket. This triggered another issue where
the connection is put into an error state and the connect handler never gets
called. The problem is a regression introduced by commit c17e597.
(cherry picked from commit 1980f639b161f46da2944d60f1c2facaf547dc1a)
redis-check-rdb was unable to parse rdb files containing module aux data.
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 63a05dde462c1be4bd74c32630eca6e794ae440a)
This commit adds a streamIteratorStop call to the rewriteStreamObject function in some of the return statements. The missing calls do not currently cause a memory leak, since a stream ID is only 16 bytes long.
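The underlying pattern, as a self-contained sketch with a hypothetical
iterator (the real one is Redis' streamIterator):

```c
#include <stdlib.h>

typedef struct { int *items; int pos, len; } Iter;

static void iterStart(Iter *it, int len) {
    it->items = calloc((size_t)len, sizeof(int));
    it->pos = 0;
    it->len = len;
}
static void iterStop(Iter *it) { free(it->items); it->items = NULL; }

static int findNegative(Iter *it, int len) {
    iterStart(it, len);
    for (; it->pos < it->len; it->pos++) {
        if (it->items[it->pos] < 0) {
            iterStop(it); /* the fix: also stop before early returns */
            return it->pos;
        }
    }
    iterStop(it);
    return -1;
}
```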
(cherry picked from commit 23b50bccccf4efac4a1672a332e470a82b7bd514)
Refine comment of makeThreadKillable().
This commit can be backported to 5.0 only if we also backport 8b70cb0.
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 647cac5bb469171a5b97eade276335ab9c552edc)
We're already using bg_unlink in several places to delete the rdb file in the background,
and avoid paying the cost of the deletion from our main thread.
This commit uses bg_unlink to remove the temporary rdb file in the background too.
However, if we delete that rdb file just before exiting, we don't actually wait for the
background thread or the main thread to delete it, and just let the OS clean up after us.
That is, we open the file, unlink it and exit with the fd still open.
Furthermore, rdbRemoveTempFile can be called from a thread and was using snprintf, which is
not async-signal-safe; we now use ll2string instead.
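For illustration, a minimal ll2string-style conversion that avoids snprintf
(hypothetical helper, non-negative values only):

```c
#include <stddef.h>

/* Converts v to decimal using only async-signal-safe operations.
 * Returns the number of characters written, or 0 if dst is too small. */
static size_t u2string(char *dst, size_t size, unsigned long v) {
    char buf[32];
    size_t n = 0;
    do { buf[n++] = (char)('0' + v % 10); v /= 10; } while (v);
    if (n + 1 > size) return 0;
    for (size_t i = 0; i < n; i++) dst[i] = buf[n - 1 - i];
    dst[n] = '\0';
    return n;
}
```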
(cherry picked from commit b002d2b4f1415f4db805081bc8f5b85d00f30e33)
If one thread gets SIGSEGV, the sigsegvHandler() function is triggered
and calls bioKillThreads(). But calling pthread_cancel() to cancel itself
would make it block. Also note that if SIGSEGV is caught by a bio thread, it
should kill the main thread in order to give a positive report.
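A hedged sketch of the shape of the fix (the array and names are illustrative):

```c
#include <pthread.h>

#define BIO_NUM_OPS 3
extern pthread_t bio_threads[BIO_NUM_OPS];

/* When killing bio threads from a crash handler, skip the calling thread:
 * pthread_cancel() on oneself (and then waiting for it) would block. */
static void killOtherBioThreads(void) {
    for (int i = 0; i < BIO_NUM_OPS; i++) {
        if (pthread_equal(bio_threads[i], pthread_self())) continue;
        pthread_cancel(bio_threads[i]);
    }
}
```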
(cherry picked from commit 8b70cb0ef8e654d09d0d2974ad72388f46be9fde)
This commit makes stream objects return "stream" as the encoding type in the OBJECT ENCODING subcommand and the DEBUG OBJECT command.
Until now, they returned "unknown".
(cherry picked from commit 6ff741b5cf4d1cf7e23610aa9eef7ed485bd3113)
Before this commit, the following commands did not show the --tls option:
./runtest-cluster --help
./runtest-sentinel --help
(cherry picked from commit 98c8ac0d705608e866a8c0055b09d4a40655f492)
These tests started failing every day with an HTTP 404 (not being able to
install valgrind).
(cherry picked from commit 78a6e5eb2b19adb40b5ec8fc867345b3c5479586)
This test was nearly always failing on MacOS github actions.
This is because of bugs in the test that caused it to nearly always run
all 3 attempts and just look at the last one as the pass/fail criteria.
I.e. the test was nearly always running all 3 attempts and could still
sometimes succeed, because the break condition was different from the test
completion condition.
The reason the test succeeded is that the break condition tested the
results of all 3 tests (PSETEX/PEXPIRE/PEXPIREAT), while the success check
at the end only tested the result of PSETEX.
The reason the PEXPIREAT test nearly always failed is that it was
getting the current time wrong: it took the current second and lost
the sub-second part, so its only chance to succeed was to run
right when a new second started.
Because I now get the time from redis, adding another round trip, I
added another 100ms to the PEXPIRE test to make it less fragile, and
also added many more attempts.
Adding many more attempts before failure to account for slow platforms,
github actions and valgrind.
(cherry picked from commit ed9bfe226289759756126f35583394e5db93048f)
The key save delay is too short and on certain systems the child process
is gone before we have a chance to inspect it.
(cherry picked from commit b2a73c404bf277bac287c72494a4c4cd2ba02f8c)
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 042189fd8707544139337b3ddcf38b5c5fea1bf0)