Currently, when a blocked command is unblocked externally (e.g. because the
relevant slot was migrated or the CLIENT UNBLOCK command was issued), the
command statistics will always update the failed_calls error
statistic. This leads to misalignment with
90b9f08e5d
as well as some inconsistencies. For example, when a key is migrated
during cluster slot migration, clients blocked on XREADGROUP will be
unblocked and update the rejected_calls stat, while clients blocked on
BLPOP will get unblocked updating the failed_calls stat.
In this PR we add an explicit indication to updateStatsOnUnblock that
tells whether the command was rejected or failed.
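A minimal sketch of the idea (stub types and a simplified signature, illustrative rather than the PR's exact code):
```c
/* Stub types standing in for the real server.h definitions. */
typedef struct command { long long failed_calls, rejected_calls; } command;
typedef struct client { command *lastcmd; } client;

typedef enum {
    UNBLOCK_SUCCESS,  /* command completed normally */
    UNBLOCK_FAILED,   /* command ran and failed: bump failed_calls */
    UNBLOCK_REJECTED  /* command never ran (e.g. slot migrated):
                         bump rejected_calls */
} unblockReason;

void updateStatsOnUnblock(client *c, unblockReason reason) {
    if (reason == UNBLOCK_REJECTED)
        c->lastcmd->rejected_calls++;
    else if (reason == UNBLOCK_FAILED)
        c->lastcmd->failed_calls++;
    /* on success, the regular call() path updates the stats */
}
```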
---------
Signed-off-by: ranshid <ranshid@amazon.com>
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
This PR fixes the missing stat update for `total_net_repl_output_bytes`
that was removed during the refactoring in PR #758. The metric was not
being updated when writing to replica connections.
Changes:
- Restored the stat update in postWriteToClient for replica connections
- Added integration test to verify the metric is properly updated
Signed-off-by: Uri Yagelnik <uriy@amazon.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
A new option for diskless replication on the replica side.
After a network failure, the replica may need to perform a full sync.
The other option for a diskless full sync is `swapdb`, but it temporarily uses
twice as much memory. In situations where this is not acceptable,
and where losing data is acceptable, the `flush-before-load` option can be
useful. If the full sync fails, though, the old data is lost. Therefore,
the new option is marked as "dangerous".
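Assuming the new value sits alongside `swapdb` under the existing `repl-diskless-load` config (an assumption about the final config shape), usage would look like:
```
# Dangerous: if the full sync fails after the flush, the old data is lost.
repl-diskless-load flush-before-load
```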
---------
Signed-off-by: kronwerk <ca11e5e22g@gmail.com>
Signed-off-by: kronwerk <kronwerk@users.noreply.github.com>
Co-authored-by: kronwerk <ca11e5e22g@gmail.com>
Call emptyData right before rdbLoad, to prevent a case where an error in
the middle makes us drop the replication stream and leaves us with an
empty database.
The real change is in the disk-based part; the rest is just code movement.
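A hedged sketch of the resulting ordering in the disk-based path (the helper name is hypothetical; the other identifiers follow the surrounding code):
```c
/* Anything that can fail (e.g. opening the received RDB file) happens
 * first; only then do we empty the dataset and immediately load, so a
 * failure before loading can no longer drop the replication stream and
 * leave us with an empty database. */
if (prepareToLoadRdb() != C_OK) {   /* hypothetical helper */
    cancelReplicationHandshake(1);  /* old dataset is untouched */
    return;
}
emptyData(-1, empty_db_flags, replicationEmptyDbCallback);
rdbLoad(server.rdb_filename, &rsi, RDBFLAGS_REPLICATION);
```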
Signed-off-by: Binbin <binloveplay1314@qq.com>
In this PR we introduce the main benefit of dual channel replication by
continuously streaming the COB (client output buffers) in parallel to the
RDB, thus keeping the primary-side COB small AND accelerating the
overall sync process. By streaming the replication data to the replica
during the full sync, we reduce
1. Memory load on the primary node.
2. CPU load on the primary's main process. [Latest performance
tests](#data)
## Motivation
* Reduce primary memory load. We do that by moving the COB tracking to
the replica side. This also decreases the chance for COB overruns. Note
that the primary's input buffer limits at the replica side are less
restricted than the primary's COB, as the replica plays a less critical
part in the replication group. While increasing the primary's COB may end up
with the primary reaching swap and clients suffering, at the replica side
we're more at ease with it. A larger COB means a better chance to sync
successfully.
* Reduce primary main process CPU load. By opening a new, dedicated
connection for the RDB transfer, child processes can have direct access
to the new connection. Due to TLS connection restrictions, this was not
possible using one main connection. We eliminate the need for the child
process to use the primary's child-proc -> main-proc pipeline, thus
freeing up the main process to process client queries.
## Dual Channel Replication high level interface design
- Dual channel replication begins when the replica sends a `REPLCONF
CAPA DUALCHANNEL` to the primary during initial
handshake. This is used to state that the replica is capable of dual
channel sync and that this is the replica's main channel, which is not
used for snapshot transfer.
- When the replica lacks sufficient data for a PSYNC, the primary will send
a `-FULLSYNCNEEDED` response instead
of RDB data. As a next step, the replica creates a new connection
(rdb-channel) and configures it against
the primary with the appropriate capabilities and requirements. The
replica then requests a sync
using the RDB channel.
- Prior to forking, the primary sends the replica the snapshot's end
repl-offset, and attaches the replica
to the replication backlog to keep repl data until the replica requests
psync. The replica uses the main
channel to request a PSYNC starting at the snapshot end offset.
- The primary's main thread sends incremental changes via the main
channel, while the bgsave process
sends the RDB directly to the replica via the rdb-channel. As for the
replica, the incremental
changes are stored on a local buffer, while the RDB is loaded into
memory.
- Once the replica completes loading the rdb, it drops the
rdb-connection and streams the accumulated incremental
changes into memory. Repl steady state continues normally.
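Putting the steps together, a rough sketch of the dual channel flow (command names as described above; the exact wire format is illustrative):
```
main channel: replica -> primary   REPLCONF CAPA DUALCHANNEL
main channel: replica -> primary   PSYNC <replid> <offset>
main channel: primary -> replica   -FULLSYNCNEEDED
rdb channel:  replica opens a second connection and requests the snapshot
main channel: primary sends the snapshot end offset and attaches the
              replica to the replication backlog
rdb channel:  the bgsave child streams the RDB directly to the replica
main channel: replica requests PSYNC from the snapshot end offset and
              buffers the incremental changes locally
replica:      RDB loaded -> drop the rdb channel -> apply buffered changes
```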
## New replica state machine

## Data <a name="data"></a>



## Explanation
These graphs demonstrate performance improvements during full sync
sessions using rdb-channel + streaming rdb directly from the background
process to the replica.
First graph: with at most 50 clients and lightweight commands, we saw a
5%-7.5% improvement in write latency during the sync session.
Two graphs below: full sync was tested during heavy read commands from
the primary (such as sdiff, sunion on large sets). In that case, the
child process writes to the replica without sharing CPU with the loaded
main process. As a result, this not only improves client response time,
but may also shorten sync time by about 50%. The shorter sync time
results in less memory being used to store replication diffs (>60% in
some of the tested cases).
## Test setup
Both primary and replica in the performance tests ran on the same
machine. The RDB size in all tests is 3.7 GB. I generated write load using
valkey-benchmark: `./valkey-benchmark -r 100000 -n 6000000 lpush my_list
__rand_int__`.
---------
Signed-off-by: naglera <anagler123@gmail.com>
Signed-off-by: naglera <58042354+naglera@users.noreply.github.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Ping Xie <pingxie@outlook.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
This PR is 1 of 3 PRs intended to achieve the goal of 1 million requests
per second, as detailed by [dan touitou](https://github.com/touitou-dan)
in https://github.com/valkey-io/valkey/issues/22. This PR modifies the
IO threads to be fully asynchronous, which is a first and necessary step
to allow more work offloading and better utilization of the IO threads.
### Current IO threads state:
Valkey IO threads were introduced in Redis 6.0 to allow better
utilization of multi-core machines. Before this, Redis was
single-threaded and could only use one CPU core for network and command
processing. The introduction of IO threads helps in offloading the IO
operations to multiple threads.
**Current IO Threads flow:**
1. Initialization: When Redis starts, it initializes a specified number
of IO threads. These threads are in addition to the main thread; each
thread starts with an empty list, and the main thread populates that
list in each event loop with pending-read-clients or
pending-write-clients.
2. Read Phase: The main thread accepts incoming connections and reads
requests from clients. The reading of requests is offloaded to the IO
threads. The main thread puts the ready-to-read clients in a list and
sets the global io_threads_op to IO_THREADS_OP_READ; the IO threads pick
the clients up, perform the read operation, and parse the first incoming
command.
3. Command Processing: After reading the requests, command processing is
still single-threaded and handled by the main thread.
4. Write Phase: Similar to the read phase, the write phase is also
offloaded to the IO threads. The main thread prepares the response in the
clients' output buffers, then puts the clients in the list
and sets the global io_threads_op to IO_THREADS_OP_WRITE. The IO
threads then pick the clients up and perform the write operation to send
the responses back to clients.
5. Synchronization: The main thread communicates with the IO threads about
how many jobs are left per thread using an atomic counter. The main thread
doesn't access the clients while they are being handled by the IO threads.
**Issues with current implementation:**
* Underutilized Cores: The current implementation of IO-threads leads to
the underutilization of CPU cores.
* The main thread remains responsible for a significant portion of
IO-related tasks that could be offloaded to IO-threads.
* When the main-thread is processing client’s commands, the IO threads
are idle for a considerable amount of time.
* Notably, the main thread's performance during the IO-related tasks is
constrained by the speed of the slowest IO-thread.
* Limited Offloading: Currently, since the main thread waits
synchronously for the IO threads, the threads perform only read-parse
and write operations, with parsing done only for the first command. If
the threads could work asynchronously, we could offload more work to the
threads, reducing the load on the main thread.
* TLS: Currently, we don't support IO threads with TLS (where offloading
IO would be more beneficial) since TLS read/write operations are not
thread-safe with the current implementation.
### Suggested change
Non-blocking main thread - The main thread and IO threads will operate
in parallel to maximize efficiency. The main thread will not be blocked
by IO operations. It will continue to process commands independently of
the IO thread's activities.
**Implementation details**
**Inter-thread communication.**
* We use a static, lock-free ring buffer of fixed size (2048 jobs) for
the main thread to send jobs and for the IO to receive them. If the ring
buffer fills up, the main thread will handle the task itself, acting as
back pressure (in case IO operations are more expensive than command
processing). A static ring buffer is a better candidate than a dynamic
job queue as it eliminates the need for allocation/freeing per job.
* An IO job will be in the format `[void* function-call-back | void
*data]`, where data is usually a client to read from / write to, and the
function pointer is the function to be called with the data, for example
readQueryFromClient. Using this format, we can later offload
other types of work to the IO threads.
* The ring buffer is one-way, from the main thread to the IO thread. Upon
a read/write event, the main thread will send a read/write job; then, in
beforeSleep, it will iterate over the pending read/write clients,
checking for each client whether the IO thread has already finished
handling it. The IO thread signals that it has finished handling a
client read/write by toggling an atomic flag read_state / write_state
on the client struct.
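A minimal sketch of such a fixed-size, single-producer/single-consumer ring buffer (hypothetical names, assuming C11 atomics; the real implementation details may differ):
```c
#include <stdatomic.h>
#include <stddef.h>

#define RING_SIZE 2048 /* fixed number of job slots */

typedef struct iojob {
    void (*callback)(void *data); /* e.g. readQueryFromClient */
    void *data;                   /* usually the client */
} iojob;

typedef struct ringbuf {
    iojob jobs[RING_SIZE];
    _Atomic size_t head; /* next slot the main thread writes */
    _Atomic size_t tail; /* next slot the IO thread reads */
} ringbuf;

/* Main thread: returns 0 if the ring is full, in which case the caller
 * handles the job itself (the back-pressure case described above). */
int ring_push(ringbuf *r, void (*cb)(void *), void *data) {
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SIZE) return 0; /* full */
    r->jobs[head % RING_SIZE] = (iojob){cb, data};
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 1;
}

/* IO thread: returns 0 if there is no pending job. */
int ring_pop(ringbuf *r, iojob *out) {
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail == head) return 0; /* empty */
    *out = r->jobs[tail % RING_SIZE];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 1;
}
```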
**Thread Safety**
As suggested in this solution, the IO threads are reading from and
writing to the clients' buffers while the main thread may access those
clients.
We must ensure no race conditions or unsafe access occurs while keeping
the Valkey code simple and lock free.
Minimal Action in the IO Threads
The main change is to limit the IO thread operations to the bare
minimum. The IO thread will access only the client's struct and only the
necessary fields in this struct.
The IO threads will be responsible for the following:
* Read Operation: The IO thread will only read and parse a single
command. It will not update the server stats or handle read or parsing
errors. These tasks will be taken care of by the main thread.
* Write Operation: The IO thread will only write the available data. It
will not free the client's replies, handle write errors, or update the
server statistics.
To achieve this without code duplication, the read/write code has been
refactored into smaller, independent components:
* Functions that perform only the read/parse/write calls.
* Functions that handle the read/parse/write results.
This refactor accounts for the majority of the modifications in this PR.
**Client Struct Safe Access**
As we ensure that the IO threads access memory only within the client
struct, we need to ensure thread safety only for the client struct's
shared fields.
* Query Buffer
* Command parsing - The main thread will not try to parse a command from
the query buffer when a client is offloaded to the IO thread.
* Client's memory checks in client-cron - The main thread will not
access the client query buffer if it is offloaded and will handle the
querybuf grow/shrink when the client is back.
* CLIENT LIST command - The main thread will busy-wait for the IO thread
to finish handling the client, falling back to the current behavior
where the main thread waits for the IO thread to finish their
processing.
* Output Buffer
* The IO thread will not change the client's bufpos and won't free the
client's reply lists. These actions will be done by the main thread on
the client's return from the IO thread.
* bufpos / block→used: As the main thread may change the bufpos, the
reply-block→used, or add/delete blocks to the reply list while the IO
thread writes, we add two fields to the client struct: io_last_bufpos
and io_last_reply_block (see the struct sketch after this list). The IO
thread will write until the io_last_bufpos, which was set by the main
thread before sending the client to the IO thread. If more data has been
added to the cob in between, it will be written in the next write-job.
In addition, the main thread will not trim or merge reply blocks while
the client is offloaded.
* Parsing Fields
* Client's cmd, argc, argv, reqtype, etc., are set during parsing.
* The main thread will indicate to the IO thread not to parse a cmd if
the client is not reset. In this case, the IO thread will only read from
the network and won't attempt to parse a new command.
* The main thread won't access c→cmd/c→argv in the CLIENT LIST
command; as stated before, it will busy-wait for the IO threads.
* Client Flags
* c→flags, which may be changed by the main thread in multiple places,
won't be accessed by the IO thread. Instead, the main thread will set
the c→io_flags with the information necessary for the IO thread to know
the client's state.
* Client Close
* On freeClient, the main thread will busy wait for the IO thread to
finish processing the client's read/write before proceeding to free the
client.
* Client's Memory Limits
* The IO thread won't handle the qb/cob limits. In case a client crosses
the qb limit, the IO thread will stop reading for it, letting the main
thread know that the client crossed the limit.
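The struct sketch referenced above (field names from this description; the surrounding definitions are stubs, not the real server.h):
```c
#define PROTO_REPLY_CHUNK_BYTES (16*1024) /* stub; defined in server.h */
struct clientReplyBlock; /* reply-list block, defined elsewhere */

typedef struct client {
    /* ... */
    char buf[PROTO_REPLY_CHUNK_BYTES]; /* static output buffer */
    size_t bufpos;           /* may advance while the IO thread writes */
    size_t io_last_bufpos;   /* snapshot taken by the main thread before
                                offloading; the IO thread writes only up
                                to this position */
    struct clientReplyBlock *io_last_reply_block; /* last reply block the
                                IO thread is allowed to touch */
    /* ... */
} client;
```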
**TLS**
TLS is currently not supported with IO threads for the following
reasons:
1. Pending reads - If SSL has pending data that has already been read
from the socket, there is a risk of not calling the read handler again.
To handle this, a list is used to hold the pending clients. With IO
threads, multiple threads can access the list concurrently.
2. Event loop modification - Currently, the TLS code
registers/unregisters the file descriptor from the event loop depending
on the read/write results. With IO threads, multiple threads can modify
the event loop struct simultaneously.
3. The same client can be sent to 2 different threads concurrently
(https://github.com/redis/redis/issues/12540).
Those issues were handled in the current PR:
1. The IO thread only performs the read operation. The main thread will
check for pending reads after the client returns from the IO thread and
will be the only one to access the pending list.
2. The registering/unregistering of events will be similarly postponed
and handled by the main thread only.
3. Each client is being sent to the same dedicated thread (c→id %
num_of_threads).
**Sending Replies Immediately with IO threads.**
Currently, after processing a command, we add the client to the
pending_writes_list. Only after processing all the clients do we send
all the replies. Since the IO threads are now working asynchronously, we
can send the reply immediately after processing the client’s requests,
reducing the command latency. However, if we are using AOF=always, we
must wait for the AOF buffer to be written, in which case we revert to
the current behavior.
**IO threads dynamic adjustment**
Currently, we use an all-or-nothing approach when activating the IO
threads. The current logic is as follows: if the number of pending write
clients is greater than twice the number of threads (including the main
thread), we enable all threads; otherwise, we enable none. For example,
if 8 IO threads are defined, we enable all 8 threads if there are 16
pending clients; else, we enable none.
It makes more sense to enable partial activation of the IO threads. If
we have 10 pending clients, we will enable 5 threads, and so on. This
approach allows for a more granular and efficient allocation of
resources based on the current workload.
In addition, the user will now be able to change the number of I/O
threads at runtime. For example, when decreasing the number of threads
from 4 to 2, threads 3 and 4 will be closed after flushing their job
queues.
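A sketch of the partial-activation rule (a hypothetical helper matching the example above; the real code may differ):
```c
/* One IO thread per two pending clients, capped at the configured
 * number of threads, instead of the old all-or-nothing rule. */
static int ioThreadsToActivate(int pending_clients, int io_threads_num) {
    int n = pending_clients / 2;
    return n > io_threads_num ? io_threads_num : n;
}
```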
**Tests**
Currently, we run the io-threads tests with 4 IO threads
(443d80f168/.github/workflows/daily.yml (L353)).
This means that we will not activate the IO threads unless there are 8
(threads * 2) pending write clients in a single event loop, which is
unlikely to happen in most tests, meaning the IO threads are not
currently being tested.
To force the main thread to always offload work to the IO threads,
regardless of the number of pending events, we add an
events-per-io-thread configuration with a default value of 2. When set
to 0, this configuration will force the main thread to always offload
work to the IO threads.
When we offload every single read/write operation to the IO threads, the
IO threads run at 100% CPU. When running multiple tests
concurrently, some tests fail as a result of larger-than-expected command
latencies. To address this issue, we had to add some after or wait_for
calls to some of the tests to ensure they pass with IO threads as well.
Signed-off-by: Uri Yagelnik <uriy@amazon.com>
More rebranding of
* Log messages (#252)
* The DENIED error reply
* Internal function names and comments, mainly Lua API
---------
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Updated the procedure redis_deferring_client in the test environment to
valkey_deferring_client.
Signed-off-by: Shivshankar-Reddy <shiva.sheri.github@gmail.com>
The MacOS CI in GitHub Actions often hangs without any logs. GH argues that
it's due to resource utilization (running out of disk space, memory, or CPU
starvation), and thus the runner is terminated.
This PR contains multiple attempts to resolve this:
1. introducing pause_process instead of SIGSTOP, which waits for the process
to stop before resuming the test, possibly resolving race conditions in some tests;
this was a suspect since there was one test that could result in an infinite loop in that
case. In practice this didn't help, but it's still a good idea to keep.
2. disable the `save` config in many tests that don't need it, specifically ones that use
heavy writes and could create large files.
3. change the `populate` proc to use a short pipeline rather than an infinite one.
4. use `--clients 1` in the macos CI so that we don't risk running multiple resource
demanding tests in parallel.
5. enable `--verbose` to be repeated to elevate verbosity and print more info to stdout
when a test or a server starts.
* Make it clear that current_client is the root client that was called by
an external connection
* add executing_client, which is the client that runs the current command
(can be a module or a script)
* Remove script_caller, which was used for commands that have CLIENT_SCRIPT
to get the client that called the script. In most cases that's the current_client,
and in others (when being called from a module) it could be an intermediate
client when we actually want the original one used by the external connection.
bugfixes:
* RM_Call with the C flag should log ACL errors with the requested user rather than
the one used by the original client; this also solves a crash when RM_Call is used
with the C flag from a detached thread-safe context.
* addACLLogEntry would have logged info about the script_caller, but in case the
script was issued by a module command we actually want the current_client. The
exception is when RM_Call is called from a timer event, in which case we don't
have a current_client.
behavior changes:
* client side tracking for scripts now tracks the keys that are read by the script
instead of the keys that are declared by the caller for EVAL
other changes:
* Log both current_client and executing_client in the crash log.
* remove prepareLuaClient and resetLuaClient, being dead code that was forgotten.
* remove scriptTimeSnapshot and snapshot_time and instead add cmd_time_snapshot
that serves all commands and is reset only when execution nesting starts.
* remove code to propagate CLIENT_FORCE_REPL from the executed command
to the script caller since scripts aren't propagated anyway these days and anyway
this flag wouldn't have had an effect since CLIENT_PREVENT_PROP is added by scriptResetRun.
* fix a module GIL violation issue in afterSleep that was introduced in #10300 (unreleased)
Tests on x86 + TLS fail with this error:
```
*** [err]: Slave is able to detect timeout during handshake in tests/integration/replication.tcl
Replica is not able to detect timeout
```
The replica log is:
```
### Starting test Slave is able to detect timeout during handshake in tests/integration/replication.tcl
7681:S 05 Jan 2023 00:21:56.635 * Non blocking connect for SYNC fired the event.
7681:S 05 Jan 2023 00:21:56.638 * Master replied to PING, replication can continue...
7681:S 05 Jan 2023 00:21:56.638 * Trying a partial resynchronization (request ef70638885500aad12dd673c68ca1541116a59fe:1).
7681:S 05 Jan 2023 00:22:56.894 # Failed to read response from the server: error:0A000126:SSL routines::unexpected eof while reading
7681:S 05 Jan 2023 00:22:56.894 # Master did not reply to PSYNC, will try later
```
This is another issue that appeared after #11640 was merged. This PR tries to fix it.
The idea is to make it stable in `wait_bgsave`; for example, it may wait until the
next psync retry in the following situation: `Master did not reply to PSYNC, will try later`
Other than that, the change will make the test more consistent / predictable, since
it'll mean the master is always frozen in the desired state (waiting for repl-diskless-sync-delay
to happen, rather than in earlier stages of the handshake).
Tests on ARM + TLS often fail with this error:
```
*** [err]: Slave is able to detect timeout during handshake in tests/integration/replication.tcl
Replica is not able to detect timeout
```
https://github.com/redis/redis-extra-ci/actions/runs/3727554226/jobs/6321797837
The replica logs show that in this case the replica got a timeout before even getting a response to the PING command (instead of the SYNC command).
It should have shown these:
```
* MASTER <-> REPLICA sync started
* REPLICAOF 127.0.0.1:22112 enabled ....
### Starting test Slave enters handshake in tests/integration/replication.tcl
* Non blocking connect for SYNC fired the event.
```
then:
```
* Master replied to PING, replication can continue...
* Trying a partial resynchronization (request 50da9eff70d774f4e6cb723eb4b091440f215772:1).
```
and then hang for 5 seconds:
```
# Timeout connecting to the MASTER...
* Reconnecting to MASTER 127.0.0.1:21112 after failure
```
but instead it got this (looks like it disconnected too early, and then tried to re-connect):
```
10890:M 19 Dec 2022 01:32:54.794 * Ready to accept connections tls
10890:M 19 Dec 2022 01:32:54.809 - Accepted 127.0.0.1:41047
10890:M 19 Dec 2022 01:32:54.878 - Reading from client: error:0A000126:SSL routines::unexpected eof while reading
10890:M 19 Dec 2022 01:32:54.925 - Accepted 127.0.0.1:39207
10890:S 19 Dec 2022 01:32:55.463 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
10890:S 19 Dec 2022 01:32:55.463 * Connecting to MASTER 127.0.0.1:24126
10890:S 19 Dec 2022 01:32:55.463 * MASTER <-> REPLICA sync started
10890:S 19 Dec 2022 01:32:55.463 * REPLICAOF 127.0.0.1:24126 enabled (user request from 'id=4 addr=127.0.0.1:39207 laddr=127.0.0.1:24125 fd=8 name= age=1 idle=0 flags=N db=9 sub=0 psub=0 ssub=0 multi=-1 qbuf=43 qbuf-free=20431 argv-mem=21 multi-mem=0 rbs=1024 rbp=5 obl=0 oll=0 omem=0 tot-mem=22317 events=r cmd=slaveof user=default redir=-1 resp=2')
### Starting test Slave enters handshake in tests/integration/replication.tcl
10890:S 19 Dec 2022 01:32:55.476 * Non blocking connect for SYNC fired the event.
10890:S 19 Dec 2022 01:33:00.701 # Failed to read response from the server: (null) <- note this!!
10890:S 19 Dec 2022 01:33:00.701 # Master did not respond to command during SYNC handshake
10890:S 19 Dec 2022 01:33:01.002 * Connecting to MASTER 127.0.0.1:24126
10890:S 19 Dec 2022 01:33:01.002 * MASTER <-> REPLICA sync started
### Starting test Slave is able to detect timeout during handshake in tests/integration/replication.tcl
10890:S 19 Dec 2022 01:33:05.497 * Non blocking connect for SYNC fired the event.
10890:S 19 Dec 2022 01:33:05.500 * Master replied to PING, replication can continue...
10890:S 19 Dec 2022 01:33:05.510 * Trying a partial resynchronization (request 947e1956372a0e6c819cfec51c42cc7979b0c221:1).
10890:S 19 Dec 2022 01:34:05.833 # Failed to read response from the server: error:0A000126:SSL routines::unexpected eof while reading
10890:S 19 Dec 2022 01:34:05.833 # Master did not reply to PSYNC, will try later
```
This PR enables the 5-second timeout at a later stage, to try and prevent the early disconnection.
*TL;DR*
---------------------------------------
Following the discussion over issue [#7551](https://github.com/redis/redis/issues/7551),
we decided to refactor the client blocking code to eliminate some of the code duplication
and to rebuild the infrastructure better for future key blocking cases.
*In this PR*
---------------------------------------
1. reprocess the command once a client becomes unblocked on a key (instead of running
custom code for the unblocked path that's different from the one that would have run if
blocking wasn't needed)
2. eliminate some (now) irrelevant code for handling unblocking lists/zsets/streams etc...
3. modify some tests to intercept the error in cases of error on reprocess after unblock (see
details in the notes section below)
4. replace '$' on the client argv with the current stream id, since once we reprocess the stream
XREAD we need to read from the last msg and not wait for a new msg, in order to prevent an
endless block loop.
5. Added statistics to the info "Clients" section to report (see the INFO excerpt after this list):
* `total_blocking_keys` - number of blocking keys
* `total_blocking_keys_on_nokey` - number of blocking keys which have at least 1 client that
would like to be unblocked when the key is deleted.
6. Avoid expiring the unblocked key during unblock. Previously we used to look up the unblocked key,
which might have been expired during the lookup. Now we look up the key using NOTOUCH and
NOEXPIRE to avoid deleting it at this point, so propagating commands in blocked.c is no longer needed.
7. deprecated command flags. We decided to remove CMD_CALL_STATS and CMD_CALL_SLOWLOG
and make an explicit verification in the call() function in order to decide whether a stats update should take place.
This should simplify the logic and also mitigate existing issues: for example, module calls which are
triggered as part of AOF loading might still report stats even though they are called during AOF loading.
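For illustration, the new fields would appear in the INFO output roughly like this (excerpt with illustrative values):
```
127.0.0.1:6379> INFO clients
# Clients
connected_clients:2
blocked_clients:1
total_blocking_keys:1
total_blocking_keys_on_nokey:0
```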
*Behavior changes*
---------------------------------------------------
1. As this implementation avoids writing dedicated code handling unblocked streams/lists/zsets,
and we now re-process the command once the client is unblocked, some errors will be reported differently.
The old implementation used to issue
``UNBLOCKED the stream key no longer exists``
in the following cases:
- The stream key has been deleted (e.g. by calling DEL)
- The stream and group existed, but the key type was changed by overriding it (e.g. with the SET command)
- The key no longer exists after we swapdb with a db which does not contain this key
- After swapdb when the new db has this key but with a different type.
In the new implementation the reported errors will be the same as if the command was processed after the fact:
**NOGROUP** - in case the key no longer exists, or **WRONGTYPE** in case the key was overridden with a different type.
2. Reprocessing the command means that some checks will be reevaluated once the
client is unblocked.
For example, ACL rules might have changed since the command was originally executed,
and it will fail once the client is unblocked.
Another example is the OOM condition check, which might allow the command to run and
block but fail the command reprocess once the client is unblocked.
3. One of the changes in this PR is that no command stats are being updated once the
command is blocked (all stats will be updated once the client is unblocked). This implies
that when we have many clients blocked, users will no longer be able to get that information
from the command stats. However the information can still be gathered from the client list.
**Client blocking**
---------------------------------------------------
the blocking on a key will still be triggered the same way as it is done today.
In order to block the current client on a list of keys, the call to
blockForKeys will still need to be made, and it will perform the same as it does today:
* add the client to the list of blocked clients on each key
* keep the key with a matching list node (position in the global blocking clients list for that key)
in the client's private blocking key dict.
* flag the client with CLIENT_BLOCKED
* update blocking statistics
* register the client on the timeout table
**Key Unblock**
---------------------------------------------------
Unblocking a specific key will be triggered (same as today) by calling signalKeyAsReady.
The implementation in that part will stay the same as today - adding the key to the global readyList.
The reason to maintain the readyList (as opposed to iterating over all clients blocked on the specific key)
is to keep the signal operation as short as possible, since it is called during command processing.
The main change is that instead of going through a dedicated code path that operates the blocked command
we will just call processPendingCommandsAndResetClient.
**ClientUnblock (keys)**
---------------------------------------------------
1. Unblocking clients on keys will be triggered after the command is
processed and during beforeSleep
2. the general schema is, for each key *k* in the readyList:
```
For each client *c* which is blocked on *k*:
    in case either:
    1. *k* exists AND the *k* type matches the current client blocking type
    OR
    2. *k* exists and *c* is blocked on a module command
    OR
    3. *k* does not exist and *c* was blocked with the flag
       unblock_on_deleted_key
    do:
    1. remove the client from the list of clients blocked on this key
    2. remove the blocking list node from the client blocking key dict
    3. remove the client from the timeout list
    4. queue the client on the unblocked_clients list
    5. *NEW*: call processCommandAndResetClient(c);
```
*NOTE:* for module blocked clients we will still call moduleUnblockClientByHandle,
which will queue the client for processing in the moduleUnblockedClients list.
**Process Unblocked clients**
---------------------------------------------------
The processing of all unblocked clients is done in beforeSleep and no change is planned
in that part.
The general schema will be:
For each client *c* in server.unblocked_clients:
* remove the client from server.unblocked_clients
* set back the client's readHandler
* continue processing the pending command and input buffer.
*Some notes regarding the new implementation*
---------------------------------------------------
1. Although it was proposed, it is currently difficult to remove the
read handler from the client while it is blocked.
The reason is that a blocked client should be unblocked when it is
disconnected, or we might consume data into the void.
2. While this PR mainly keeps the current blocking logic as-is, there
might be some future additions to the infrastructure that we would
like to have:
- allow non-preemptive blocking of a client - sometimes we can think
that a new kind of blocking can be expected not to be preempted. For
example, let's imagine we hold some keys on disk and when a command
needs to process them it will block until the keys are uploaded.
In this case we will want the client to not disconnect or be
unblocked until the process is completed (remove the client read
handler, prevent client timeout, disable unblock via debug command, etc...).
- allow generic blocking based on command declared keys - we might
want to add a hook before command processing to check if any of the
declared keys require the command to block. this way it would be
easier to add new kinds of key-based blocking mechanisms.
Co-authored-by: Oran Agra <oran@redislabs.com>
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
There is a race in the test:
```
*** [err]: Diskless load swapdb (async_loading): new database is exposed after swapping in tests/integration/replication.tcl
Expected 'myvalue' to be equal to '' (context: type eval line 3 cmd {assert_equal [$replica GET mykey] ""} proc ::test)
```
When doing `$replica GET mykey`, the replica is using the old database.
The reason may be that when doing `master client kill type replica`,
the replica did not yet realize it got disconnected from the master.
So the check of master_link_status fails, and the replica did not
finish the swapdb and the loading.
In that case, i think the solution is to check the sync_full stat on
the master and wait for it to get incremented from the previous value.
i.e. the way to know that we're done with the full sync is not to check
that our state is up (could be up if we check too early), but rather
check that the sync_full counter got incremented.
During the review, we found another race: in the Aborted testType,
the `$master config set rdb-key-save-delay 10000` is done after we
already initiated the disconnection, so there's a chance that the replica
will attempt to reconnect before that call, in which case if we fork() before
it, the config will not take effect. Move it to above the disconnection.
Co-authored-by: Oran Agra <oran@redislabs.com>
attach_to_replication_stream already stops pings, but it stops them on
the server we connect to, and in this case it's a replica, and we need
to stop them on the real master.
There is a timing issue in the test, happens with valgrind:
```
*** [err]: diskless fast replicas drop during rdb pipe in tests/integration/replication.tcl
log message of '"*Loading DB in memory*"' not found in ./tests/tmp/server.3580.246/stdout after line: 0 till line: 39
```
The server logs:
```
43465:S 03 Dec 2022 01:26:25.664 * Trying a partial resynchronization (request 15155fa24af0539b70428f9b41f4f7129d774560:1).
43465:S 03 Dec 2022 01:26:35.133 * Full resync from master: 8ddf5a3f7c8ca1061c6b29aa84e7c985c5b29c61:680
```
From the logs, we can see it took almost 10s to get the full resync response.
This happens with valgrind, which is extremely slow, so I guess it's just an
insufficient wait_for_condition timeout.
Set the time to 15s, and modify other similar places at the same time.
Fix a few issues with the recent #11463
* use exitFromChild instead of exit
* test should ignore defunct processes since that's what we expect to
happen for these child processes when the parent dies.
* fix typo
Co-authored-by: Binbin <binloveplay1314@qq.com>
During a diskless sync, if the master main process crashes, the child would
have hung in `write`. This fix closes the read fd on the child side, so that if the
parent crashes, the child will get a write error and exit.
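A generic POSIX sketch of why closing the read end matters (not the exact server code): as long as the child keeps a copy of the pipe's read end open, its own writes can never fail with EPIPE, so they block forever once the parent dies; closing it makes the parent the last reader.
```c
#include <unistd.h>

int main(void) {
    int pipefds[2];
    pipe(pipefds);             /* [0] = read end, [1] = write end */
    if (fork() == 0) {
        close(pipefds[0]);     /* child: drop our copy of the read end */
        /* ... write the payload to pipefds[1]; if the parent (now the
         * only reader) dies, write() fails with EPIPE/SIGPIPE instead
         * of blocking forever ... */
    }
    return 0;
}
```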
This change also fixes disk-based replication, BGSAVE and AOFRW.
In those cases the child wouldn't have hung; it would have just kept
running until done, which may be pointless.
There is a certain degree of risk here: in case there's a BGSAVE child that could
maybe succeed and the parent dies for some reason, the old code would have let
the child keep running and maybe succeed and avoid data loss.
On the other hand, if the parent is restarted, it would have loaded an old rdb file
(or none), and then the child could reach the end and rename the rdb file (data
conflicting with what the parent has), or also have a race with another BGSAVE
child that the new parent started.
Note that I removed a comment saying a write error will be ignored in the child
and handled by the parent (this comment was very old and I don't think it's relevant).
Fix the `Test replication with lazy expire` test to not time out the wait command.
This fix allows the test to pass on slow environments and when running with valgrind.
This PR reverts the changes made in #10969.
The revert was triggered by an occasional test failure
that started after the PR was merged.
The issue is that if there is a lazy expire during the command invocation,
the `del` command is added to the replication stream after the command
placeholder. So the logical order on the primary is:
* Delete the key (lazy expiration)
* Command invocation
But the replication stream gets it the other way around:
* Command invocation (because the command is written into the placeholder)
* Delete the key (lazy expiration)
So if the command writes to the key that was just lazily expired, we will get an
inconsistency between the primary and the replica.
One solution we considered is to add another lazy expire replication stream
and write all the lazy expires there. Then when replicating, we will replicate the
lazy expire replication stream first. This would solve this specific test failure, but
we realized that the issues do not end here, and the more we dug the more
problems we found. One of the examples we thought about (that can actually
crash Redis) is as follows:
* User performs SINTERSTORE
* When Redis tries to fetch the second input key it triggers a lazy expire
* The lazy expire triggers module logic that deletes the first input key
* Now Redis holds the robj of the first input key that was actually freed
We believe we took the wrong approach, and we will come up with another
PR that solves the problem differently; for now we revert the changes so we
will not have the test failures.
Notice that not all of the code was reverted; some parts of the PR are changes
that we would like to keep. The changes that **were** reverted are:
* Saving a placeholder for replication at the beginning of the command (`call` function)
* Order of the replication stream on active expire and eviction (we will decide how
to handle it correctly on follow up PR)
* `Spop` changes are no longer needed (because we reverted the placeholder code)
Changes that **were not** reverted:
* On expire/eviction, wrap the `del` and the notification effect in a multi exec.
* `PropagateNow` function can still accept a special dbid, -1, indicating not to replicate select.
* Keep optimisation for reusing the `alsoPropagate` array instead of allocating it each time.
Tests:
* All tests were kept; only a few tests were modified to work correctly with the changes
* A test was added to verify that the revert fixes the issues.
The kill above is sometimes successful and sometimes already too late.
The PING in the psync wrong offset test got rejected by bgsaveerr because
lastbgsave_status is C_ERR.
In theory, using diskless can avoid PING being affected, because when
the replica is dropped, we will kill the child with SIGUSR1, and this
will not affect lastbgsave_status.
Anyway, this kill is not particularly needed here; dropping it
is the best option, since we do have the waitForBgsave, so just let it
take care of the bgsave. No need for fast termination.
## FLUSHALL
We used to restore the dirty counter after `rdbSave` zeroed it if we enabled save;
otherwise FLUSHALL would not be replicated nor put into the AOF.
And then we increment it again below:
without that extra dirty++, when the db was already empty, FLUSHALL
would not be replicated nor put into the AOF.
We're now going to replace all that dirty counter magic with a call
to forceCommandPropagation (REPL and AOF), instead of all the
messing around with the dirty counter.
Added tests to cover all three parts (dirty counter, REPL, AOF).
One benefit, other than cleaner code, is that `rdb_changes_since_last_save` is correct in this case.
## FLUSHDB
FLUSHDB was not replicated nor put into the AOF when the db was already empty.
Unlike DEL on a non-existing key, FLUSHDB always does something, and that's to call the module hook.
So basically FLUSHDB is never a NOP, and thus it should always be propagated.
Not doing that could mean that if a module does something in that hook, and wants to
avoid issues of that hook being missing on the replica if the db is empty, it'll need to do complicated things.
So now FLUSHDB also calls forceCommandPropagation; we will always propagate FLUSHDB.
Always propagating FLUSHDB seems like a safe approach that shouldn't have any drawbacks (other than looking odd).
This was mentioned in #8972
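A hedged sketch of the new shape (forceCommandPropagation and the PROPAGATE_* flags exist in the codebase; the rest of the body is elided):
```c
void flushallCommand(client *c) {
    /* ... parse flags, empty the dataset, maybe save ... */

    /* Always propagate, even if the db was already empty or rdbSave
     * zeroed server.dirty: no more dirty counter magic. */
    forceCommandPropagation(c, PROPAGATE_AOF | PROPAGATE_REPL);
    addReply(c, shared.ok);
}
```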
## Test section:
We actually found it while solving a race condition in the BGSAVE test (other.tcl).
It was found in extra_ci Daily Arm64 (test-libc-malloc).
```
[exception]: Executing test client: ERR Background save already in progress.
ERR Background save already in progress
```
It looks like `r flushdb` triggered (scheduled) a bgsave right after `waitForBgsave r` and before `r save`.
Changing flushdb to flushall helps: FLUSHALL will do a foreground save and then set the dirty counter to 0.
## Move library meta data to be part of the library payload.
Following the discussion on https://github.com/redis/redis/issues/10429 and the intention to add (in the future) library versioning support, we believe that the entire library metadata (like name and engine) should be part of the library payload and not provided by the `FUNCTION LOAD` command. The reasoning behind this is that the programmer who developed the library should be the one who sets those values (name, engine, and in the future also version). **It is not the responsibility of the admin who loads the library into the database.**
The PR moves all the library metadata (engine and function name) to be part of the library payload. The metadata needs to be provided on the first line of the payload using the shebang format (`#!<engine> name=<name>`), example:
```lua
#!lua name=test
redis.register_function('foo', function() return 1 end)
```
The above script will run on the Lua engine and will create a library called `test`.
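For example, a hypothetical redis-cli session (redis-cli expands `\n` inside double quotes):
```
127.0.0.1:6379> FUNCTION LOAD "#!lua name=test\nredis.register_function('foo', function() return 1 end)"
"test"
127.0.0.1:6379> FCALL foo 0
(integer) 1
```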
## API Changes (compare to 7.0 rc2)
* The `FUNCTION LOAD` command was changed: it now simply gets the library payload and extracts the engine and name from the payload. In addition, the command now returns the library name, which can later be used on `FUNCTION DELETE` and `FUNCTION LIST`.
* The description field was completely removed from `FUNCTION LOAD` and `FUNCTION LIST`
## Breaking Changes (compare to 7.0 rc2)
* Library description was removed (we can re-add it in the future either as part of the shebang line or an additional line).
* Loading an AOF file that was generated by either 7.0 rc1 or 7.0 rc2 will fail because the old command syntax is invalid.
## Notes
* Loading an RDB file that was generated by rc1 / rc2 **is** supported; Redis will automatically add the shebang to the libraries' payloads (we can probably delete that code after 7.0.3 or so, since there's no need to keep supporting upgrades from an RC build).
In order to make sure no more commands are processed, we wait for the
'load handlers' to disconnect.
The test by mistake waited on the (last) slave instead of the master.
Added regression tests for #10020 / #10081 / #10243.
The above PRs fixed some crashes caused by an assertion;
see function `clientHasPendingReplies` (introduced in #9166).
This commit adds some tests to cover the above scenarios.
These tests would all fail on #9166 before the fixes; although the bugs are
already fixed, there is value in adding these tests to cover and verify
the changes. They also cover #8868 (verifying the logs).
Other changes:
1. Reduces the wait time in `waitForBgsave` and `waitForBgrewriteaof`
from 1s to 50ms, which should reduce the time for some tests.
2. Improve the test infra to print context when `assert_match` fails.
3. Improve the test infra to print `$error` when `assert_error` fails.
```
Expected an error matching 'ERR*' but got 'OK' (context: type eval line 4 cmd {assert_error "ERR*" {r set a b}} proc ::test)
```
1. enable diskless replication by default
2. add a new config named repl-diskless-sync-max-replicas that enables
replication to start before the full repl-diskless-sync-delay was
reached.
3. put replica online sooner on the master (see below)
4. test suite uses repl-diskless-sync-delay of 0 to be faster
5. a few tests that use multiple replicas on a pre-populated master are
now using the new repl-diskless-sync-max-replicas
6. fix possible timing issues in a few cluster tests (see below)
put replica online sooner on the master
----------------------------------------------------
there were two tests that failed because they needed the master to
realize that the replica is online, but the test code was actually only
waiting for the replica to realize it's online, and in diskless mode that
could have been before the master realized it.
changes include two things:
1. the tests wait on the right thing
2. issues in the master, putting the replica online in two steps.
the master used to put the replica online in 2 steps. The first
step was to mark it as online, and the second step was to enable the
write event (only after getting an ACK), but in fact the first step didn't
contain some of the tasks needed to put it online (like updating the good
slave count, and sending the module event). This meant that if a test was
waiting to see that the replica is online from the point of view of the
master, and then confirmed that the module got an event, or that the
master has enough good replicas, it could fail due to timing issues.
So now the full effect of putting the replica online happens at once,
and only the part about enabling writes is delayed until the ACK.
fix cluster tests
--------------------
I added some code to wait for the replica to sync and avoid race
conditions.
later I realized the sentinel and cluster tests were using the original 5
second delay, so I changed it to 0.
This means the other changes are probably not needed, but I suppose
they're still better (avoiding race conditions)
# Redis Function Libraries
This PR implements Redis Function Libraries as described in: https://github.com/redis/redis/issues/9906.
The purpose of libraries is to provide better code sharing between functions, by allowing the creation of
multiple functions in a single command. Functions that were created together can safely share code with
each other without worrying about compatibility issues and versioning.
Creating a new library is done using 'FUNCTION LOAD' command (full API is described below)
This PR introduces a new struct called libraryInfo, libraryInfo holds information about a library:
* name - name of the library
* engine - engine used to create the library
* code - library code
* description - library description
* functions - the functions exposed by the library
When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo.
Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo.
As a result, the engine will create one or more functions by calling 'libraryCreateFunction'.
The new function will be added to the newly created libraryInfo. So far everything is happening
locally on the libraryInfo, so it is easy to abort the operation (in case of an error) by simply
freeing the libraryInfo. After the library info is fully constructed we start the joining phase, by
which we join the new library to the other libraries that currently exist in Redis.
The joining phase makes sure there is no function collision and adds the library to the
librariesCtx (renamed from functionCtx). LibrariesCtx is used all around the code in the exact
same way as functionCtx was used (with respect to RDB loading, replication, ...).
The only difference is that apart from the function dictionary (mapping function name to functionInfo
object), the librariesCtx also contains a libraries dictionary that maps library name to libraryInfo object.
## New API
### FUNCTION LOAD
`FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
Create a new library with the given parameters:
* ENGINE - Engine name to use to create the library.
* LIBRARY NAME - The new library name.
* REPLACE - If the library already exists, replace it.
* DESCRIPTION - Library description.
* CODE - Library code.
Returns "OK" on success, or an error in the following cases:
* Library name already taken and REPLACE was not used
* Name collision with another existing library (even if REPLACE was used)
* Library registration failed by the engine (usually a compilation error)
## Changed API
### FUNCTION LIST
`FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
The command was modified to also allow getting the libraries' code (so the `FUNCTION INFO` command is no longer
needed and was removed). In addition, the command gets an optional argument, `LIBRARYNAME`, that allows you to
only get libraries that match the given `LIBRARYNAME` pattern. By default, it returns all libraries.
### INFO MEMORY
Added number of libraries to `INFO MEMORY`
### Commands flags
The `DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands
as commands that add new data to the dataset (functions are data) and so we want to disallow
running them on OOM.
## Removed API
* FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
* FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899
## Lua engine changes
When the Lua engine gets the code given in the `FUNCTION LOAD` command, it immediately runs it; we call
this run the loading run. The loading run is not a usual script run: it is not possible to invoke any
Redis command from within it.
Instead there is a new API provided by the `library` object. The new APIs:
* `redis.log` - behaves the same as the regular `redis.log`
* `redis.register_function` - registers a new function in the library
The loading run's purpose is to register functions using the new `redis.register_function` API.
Any attempt to use any other API will result in an error. In addition, the loading run has a time
limit of 500ms; an error is raised on timeout and the entire operation is aborted.
### `redis.register_function`
`redis.register_function(<function_name>, <callback>, [<description>])`
This new API allows users to register a new function that will be linked to the newly created library.
This API can only be called during the load run (see definition above). Any attempt to use it outside
of the load run will result in an error.
The parameters passed to the API are:
* function_name - Function name (must be a Lua string)
* callback - Lua function object that will be called when the function is invoked using FCALL/FCALL_RO
* description - Function description, optional (must be a Lua string).
### Example
The following example creates a library called `lib` with 2 functions, `f1` and `f2`, returning 1 and 2 respectively:
```lua
local function f1(keys, args)
return 1
end
local function f2(keys, args)
return 2
end
redis.register_function('f1', f1)
redis.register_function('f2', f2)
```
Notice: Unlike `eval`, functions inside a library get the KEYS and ARGV as arguments to the
functions and not as globals.
### Technical Details
In the loading run we only want the user to be able to call a whitelist of APIs. This way, in
the future, if new APIs are added, they will not be available to the loading run
unless specifically added to this whitelist. We put the whitelist on the `library` object and
make sure the `library` object is only available to the loading run by using the [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API. This API allows us to set
the `globals` of a function (and all the functions it creates). Before starting the loading run we
create a fresh new Lua table (call it `g`) that only contains the `library` API (we make sure
to set global protection on this table just like the general global protection that already exists
today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv)
to set `g` as the global table of the loading run. After the loading run finishes we update `g`'s
metatable and set the `__index` and `__newindex` functions to be `_G` (Lua default globals);
we also pop out the `library` object as we do not need it anymore.
This way, any function that was created in the loading run (and will be invoked using `fcall`) will
see the default globals as it expects to see them and will not have the `library` API anymore.
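A minimal C sketch of the technique (Lua 5.1 C API; error handling omitted and the population of `g` elided):
```c
#include <lua.h>
#include <lauxlib.h>

/* Run a library payload with a fresh environment table `g`, so the
 * chunk sees only the whitelisted `library` API as its globals. */
static void loadingRun(lua_State *lua, const char *code) {
    luaL_loadstring(lua, code);  /* push the compiled chunk */
    lua_newtable(lua);           /* push g = {} */
    /* ... populate g with the `library` API and global protection ... */
    lua_setfenv(lua, -2);        /* pop g; set it as the chunk's env */
    lua_pcall(lua, 0, 0, 0);     /* the loading run */
}
```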
An important outcome of this new approach is that now we can achieve a distinct global table
for each library (it is not yet like that, but it is very easy to achieve now). In the future we can
decide to remove global protection, because globals on different libraries will not collide, or we
can choose to give a different API to different libraries based on some configuration or input.
Notice that this technique was meant to prevent errors and was not meant to prevent a malicious
user from exploiting it. For example, the loading run can still save the `library` object in some local
variable and then use it in an `fcall` context. To prevent such malicious use, the C code also makes
sure it is running in the right context, and if not, raises an error.
This PR mainly solves the problem that the Redis process cannot exit normally, due to changes in #10003.
When a test uses the `key-load-delay` config to delay loading but does not reset it at the end of
the test, the server will wait for the loading to reach the event
loop (once every 2MB) before actually shutting down.
The issue started failing after #9878 was merged (it made an existing test more sensitive).
Looks like #9982 didn't help; tested this one and it seems to work better.
This commit does two things:
1. reduce the extra delay I added earlier and instead add more keys; the effect on the duration
of replication is the same, but the intervals in which the server is responsive to the tcl client are higher.
2. improve the test infra to print context when assert_error fails.
## background
Till now CONFIG SET was blocked during loading.
(In the not so distant past, GET was disallowed too)
We recently (not released yet) added an async-loading mode, see #9323,
and during that time it'll serve CONFIG SET and any other command.
And now we realized (#9770) that some configs and commands are dangerous
during async-loading.
## changes
* Allow most CONFIG SET during loading (both on async-loading and normal loading)
* Allow CONFIG REWRITE and CONFIG RESETSTAT during loading
* Block a few config during loading (`appendonly`, `repl-diskless-load`, and `dir`)
* Block a few commands during loading (list below)
## the blocked commands:
* SAVE - obviously we don't wanna start a foreground save during loading 8-)
* BGSAVE - we don't mind to schedule one, but we don't wanna fork now
* BGREWRITEAOF - we don't mind to schedule one, but we don't wanna fork now
* MODULE - we obviously don't wanna unload a module during replication / rdb loading
(MODULE HELP and MODULE LIST are not blocked)
* SYNC / PSYNC - we're in the middle of RDB loading from master, must not allow sync
requests now.
* REPLICAOF / SLAVEOF - we're in the middle of replicating; maybe it makes sense to let
the user abort it, but they couldn't do that so far, and I don't wanna take any risk of bugs due to odd state.
* CLUSTER - only allow [HELP, SLOTS, NODES, INFO, MYID, LINKS, KEYSLOT, COUNTKEYSINSLOT,
GETKEYSINSLOT, RESET, REPLICAS, COUNT_FAILURE_REPORTS], for others, preserve the status quo
## other fixes
* processEventsWhileBlocked had an issue when being nested; this could happen with a busy script
during async loading (new), but also with a busy script during AOF loading (old). This led to a crash in
the scenario described in #6988
The Redis Function unit is located inside functions.c
and contains the Redis Function implementation:
1. FUNCTION commands:
* FUNCTION CREATE
* FCALL
* FCALL_RO
* FUNCTION DELETE
* FUNCTION KILL
* FUNCTION INFO
2. Register engine
In addition, this commit introduce the first engine
that uses the Redis Function capabilities, the
Lua engine.
In #9323, when `repl-diskless-load` is enabled and set to `swapdb`,
if the master replication ID hasn't changed, we can load the data-set
asynchronously and serve read commands during the full resync.
In the `diskless loading short read` test, after a successful loading,
we wait for the loading to stop and continue the for loop.
After the introduction of `async_loading`, we also need to check it.
Otherwise the next loop will start too soon and may trigger a timing issue.
In both tests, "diskless loading short read" and "diskless loading short read with module",
the timeout for waiting for the replica to respond to a short read and log it is too short.
Also, add --dump-logs in runtest-moduleapi for valgrind runs.
For diskless replication in swapdb mode, considering we already spend replica memory
keeping a backup of the current db to restore in case of failure, we can get the following benefits
by instead swapping databases only when we succeed in transferring the db from the master
(a sketch of the flow follows the list):
- Avoid `LOADING` response during failed and successful synchronization for cases where the
replica is already up and running with data.
- Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could also be implemented for disk-based replication, with similar benefits, if consumers are willing
to spend the extra memory usage.
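A minimal sketch of the swap-on-success flow, assuming helper names in the spirit of this change (`swapMainDbWithTempDb`, `disklessLoadDiscardTempDb`); treat the exact signatures as illustrative:

```c
/* Load into tempDb and swap only on success: an already-populated replica
 * never serves -LOADING, and a failed transfer leaves the old data intact. */
void disklessLoadFinishSketch(redisDb *tempDb, int load_ok) {
    if (load_ok)
        swapMainDbWithTempDb(tempDb); /* cheap pointer swap into server.db */
    /* After a successful swap tempDb holds the old data; after a failure it
     * holds the partial load. Either way, free it asynchronously. */
    disklessLoadDiscardTempDb(tempDb);
}
```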
General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode will only kick in if the replica is syncing from a master that has the same
repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading` to differentiate from the blocking loading
- Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
and the tempDb that is passed around.
- Because this affects replicas only, we assume that if they are not read-only and take write commands
during replication, those writes are lost after SYNC the same way as before, but we're still denying CONFIG SET
here anyway to avoid complications.
Considerations for review:
- We have many cases where the server.loading flag is used, and even though I tried my best, there may
be cases where async_loading should be checked as well and cases where it shouldn't (this would require
a very good understanding of the whole code)
- Several places that had different behavior depending on the loading flag were actually meant to just
handle commands coming from the AOF client differently than ones coming from real clients; changed
to check CLIENT_ID_AOF instead.
**Additional for Release Notes**
- Bugfix - server.dirty was not incremented for any kind of diskless replication; as an effect it wouldn't
contribute to triggering the next database SAVE
- New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
to allow modules to declare that they support diskless replication with async loading (when absent, we fall
back to disk-based loading).
Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
When repl-diskless-load is enabled, the connection is set to the blocking state.
The connection may be interrupted by a signal during a system call.
This would have resulted in a disconnection and possibly a reconnection loop.
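A minimal sketch of the fix pattern, as a hypothetical helper (retry the syscall on EINTR instead of treating the interruption as an error):

```c
#include <errno.h>
#include <unistd.h>

/* Hypothetical helper: a signal arriving mid-syscall makes read() on a
 * blocking fd fail with EINTR; retry instead of dropping the connection. */
ssize_t readRetryOnInterruptSketch(int fd, void *buf, size_t len) {
    ssize_t n;
    do {
        n = read(fd, buf, len);
    } while (n == -1 && errno == EINTR);
    return n; /* real errors and EOF still propagate to the caller */
}
```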
Co-authored-by: Oran Agra <oran@redislabs.com>
So it looks like sampling set loglines [count_log_lines -2] was
executed too late, and the replication managed to complete before that.
```
*** [err]: diskless no replicas drop during rdb pipe in tests/integration/replication.tcl
log message of '"*Diskless rdb transfer, done reading from pipe, 2 replicas still up*"' not found in ./tests/tmp/server.6124.69/stdout after line: 52 till line: 52
```
Changes:
1. when we search the master log file, we start the search from before we sent the REPLICAOF
command, to prevent a race in which the replication completed before we sampled the log line count.
2. we don't need to sample the replica loglines since it's a fresh replica that's just been started, so the message
we're looking for is the first occurrence in the log; we can start the search from 0.
Co-authored-by: Oran Agra <oran@redislabs.com>
## Background
For a redis master, each replica uses its own copy of the replication buffer; that is a big waste of memory,
more replicas meaning more waste, and allocating/freeing memory for every reply list also costs a lot.
If we set client-output-buffer-limit small and the write traffic is heavy, the master may disconnect
replicas and never finish synchronization with them. If we set client-output-buffer-limit big, the
master may go OOM when there are many replicas, each separately keeping a lot of memory.
Because the replication buffers of the different replica clients are the same, one simple idea is that
all replicas use only one replication buffer, which effectively saves memory.
Since the replication backlog content is the same as the replicas' output buffers, we can
now discard the replication backlog memory and use the global shared replication buffer
to implement the replication backlog mechanism.
## Implementation
I created one global "replication buffer" which contains the content of the replication stream.
The structure of the "replication buffer" is similar to the reply list that exists in every client,
but the list node is `replBufBlock`, which has `id, repl_offset, refcount` fields.
```c
/* The replication buffer is a list of replBufBlock nodes.
*
* +--------------+ +--------------+ +--------------+
* | refcount = 1 | ... | refcount = 0 | ... | refcount = 2 |
* +--------------+ +--------------+ +--------------+
* | / \
* | / \
* | / \
 * Repl Backlog Replica_A Replica_B
*
* Each replica or replication backlog increments only the refcount of the
* 'ref_repl_buf_node' which it points to. So when replica walks to the next
* node, it should first increase the next node's refcount, and when we trim
* the replication buffer nodes, we remove node always from the head node which
* refcount is 0. If the refcount of the head node is not 0, we must stop
* trimming and never iterate the next node. */
/* Similar to 'clientReplyBlock'; it is used for buffers shared between
 * all replica clients and the replication backlog. */
typedef struct replBufBlock {
int refcount; /* Number of replicas or repl backlog using this block. */
long long id; /* The unique incremental number. */
long long repl_offset; /* Start replication offset of the block. */
size_t size, used;
char buf[];
} replBufBlock;
```
So now when we feed the replication stream into the replication backlog and all replicas, we only need
to feed the stream into the replication buffer, via `feedReplicationBuffer`. In this function, we set some fields of
the replication backlog and replicas to references of the global replication buffer blocks. We also
need to check the replicas' output buffers and free clients exceeding `client-output-buffer-limit`, and trim
the replication backlog if it exceeds `repl-backlog-size`.
When sending replies to replicas, we also need to iterate the replication buffer blocks and send their
content; when a block has been fully sent to a replica, we decrease the current node's refcount and
increase the next node's refcount, and then free blocks whose refcount is 0 from the
head of the replication buffer.
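A minimal sketch of that hand-over, using the `ref_repl_buf_node` reference described above and the adlist helpers (the function name and surrounding bookkeeping are illustrative):

```c
/* Advance a replica's reference to the next block once the current one is
 * fully sent: pin the next node before releasing the current one, so the
 * head trimmer never frees a block that is about to be read. */
void advanceReplicaBlockSketch(client *replica) {
    listNode *cur = replica->ref_repl_buf_node;
    listNode *next = listNextNode(cur);
    if (next == NULL) return; /* nothing more to send yet */
    ((replBufBlock *)listNodeValue(next))->refcount++;
    ((replBufBlock *)listNodeValue(cur))->refcount--;
    replica->ref_repl_buf_node = next;
    /* Head blocks whose refcount dropped to 0 are now free to trim. */
}
```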
Since we now use a linked list to manage the replication backlog, it may cost much time to iterate
all the list nodes to find the corresponding replication buffer node. So we create a rax tree to
index some of the nodes, but to avoid the rax tree occupying too much memory, I record
one node per 64 as an index.
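A minimal sketch of that sparse index, using the rax API (`raxInsert`, `raxSeek`); the key encoding and helper names are illustrative:

```c
#define REPL_INDEX_PER_BLOCKS 64

/* Index every 64th block by its start offset; big-endian keys make the
 * rax's lexicographic order match numeric offset order. */
void maybeIndexBlockSketch(rax *index, replBufBlock *o, uint64_t block_id) {
    if (block_id % REPL_INDEX_PER_BLOCKS != 0) return;
    uint64_t key = htonu64(o->repl_offset);
    raxInsert(index, (unsigned char *)&key, sizeof(key), o, NULL);
}

/* Find the closest indexed block at or before 'offset', then walk the
 * linked list forward from there instead of from the head. */
replBufBlock *seekIndexedBlockSketch(rax *index, long long offset) {
    raxIterator ri;
    uint64_t key = htonu64(offset);
    raxStart(&ri, index);
    raxSeek(&ri, "<=", (unsigned char *)&key, sizeof(key));
    replBufBlock *o = raxNext(&ri) ? ri.data : NULL;
    raxStop(&ri);
    return o;
}
```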
Currently, to make partial resynchronization possible as much as we can, we always let the replication
backlog be the last reference to the replication buffer blocks; the backlog size may exceed our setting
when slow replicas reference vast numbers of replication buffer blocks, but this method doesn't increase
memory usage since they share the replication buffer. To avoid freezing the server while freeing
unreferenced replication buffer blocks when the backlog exceeds its size setting,
we trim the backlog incrementally (free 64 blocks per call now), and do it faster in
`beforeSleep` (free 640 blocks).
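A simplified sketch of the incremental trim (the real code also advances the backlog's own reference and its length accounting; names follow the description above):

```c
/* Free at most 'max_blocks' unreferenced blocks from the head per call,
 * stopping at the first block that is still referenced (never skip ahead).
 * Called with 64 from the periodic path and 640 from beforeSleep. */
void incrementalTrimBacklogSketch(size_t max_blocks) {
    while (max_blocks--) {
        listNode *head = listFirst(server.repl_buffer_blocks);
        if (head == NULL) break;
        replBufBlock *o = listNodeValue(head);
        if (o->refcount != 0) break; /* still referenced: stop trimming */
        listDelNode(server.repl_buffer_blocks, head); /* frees the block */
    }
}
```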
### Other changes
- `mem_total_replication_buffers`: we add this field to the INFO command; it reports the total
memory used by replication buffers.
- `mem_clients_slaves`: now, even when a replica is slow to replicate and its output buffer memory
is not 0, this field may still be 0, since the replication backlog and the replicas share one global replication
buffer. Only if the replication buffer memory is more than the repl backlog setting size do we consider
the excess as the replicas' memory. Otherwise, we consider the replication buffer memory the consumption
of the repl backlog.
- Key eviction
Since all replicas and the replication backlog share the global replication buffer, we count only the
part exceeding the backlog size as the extra separate consumption of replicas.
Because we trim the backlog incrementally in the background, the backlog size may exceed our
setting if slow replicas that referenced vast numbers of replication buffer blocks disconnect.
To avoid a massive eviction loop, we don't count the delayed-freed replication backlog into
used memory even if there are no replicas, i.e. we also regard this memory as replicas' memory.
- `client-output-buffer-limit` check for replica clients
It doesn't make sense to set the replica clients' output buffer limit lower than the repl-backlog-size
config (partial sync would succeed and then the replica would get disconnected). Such a configuration is
ignored (the size of repl-backlog-size is used instead); see the sketch after this list. This doesn't have
memory consumption implications since the replica client will share the backlog buffers' memory.
- Drop replication backlog after loading data if needed
We always create a replication backlog if the server is a master; we need it because we put DELs in
it when loading expired keys from the RDB. But if the RDB doesn't have replication info, or there is no RDB,
partial resynchronization is not possible, so to avoid the extra memory of the replication backlog,
we drop it.
- Multi IO threads
Since all replicas and the replication backlog use the global replication buffer, if I/O threads are enabled,
then to guarantee thread-safe data access, we must let the main thread handle sending the output buffer
to all replicas. Before, other IO threads could handle sending the output buffers of all replicas.
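A minimal sketch of the limit clamp described in the `client-output-buffer-limit` item above, with a hypothetical helper name around the `client_obuf_limits` config:

```c
/* Hypothetical helper: the effective hard limit for replica clients is never
 * below repl-backlog-size, since replicas share the backlog's memory anyway. */
unsigned long long replicaObufHardLimitSketch(void) {
    unsigned long long conf =
        server.client_obuf_limits[CLIENT_TYPE_SLAVE].hard_limit_bytes;
    if (conf != 0 && conf < (unsigned long long)server.repl_backlog_size)
        return server.repl_backlog_size; /* too-small config is ignored */
    return conf; /* 0 means "no limit" and is left as-is */
}
```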
## Other optimizations
This solution resolves some other problems:
- When replicas are disconnected from the master for exceeding the output buffer limit, releasing their
output buffers may freeze the server if a big `client-output-buffer-limit` is set for replicas; now
it doesn't cause freezing.
- This implementation also mitigates the cost of copying the reply list (which can also freeze the server)
when one replica has a huge reply buffer and another replica copies that buffer for full synchronization.
Now we just copy the reference info, which is very light.
- If we set the replication backlog size big, it may also cost much time to copy the replication backlog into
a replica's output buffer. This commit eliminates that problem.
- Resizing the replication backlog doesn't empty its current content.
In the past few days I've seen two failures in the valgrind daily test.
*** [err]: slave fails full sync and diskless load swapdb recovers it in tests/integration/replication.tcl
Replica didn't get into loading mode
I can't reproduce it, but I'm hoping it's just too slow (failing to start loading within 5 seconds).
* Delay discarding the cached master during full synchronization
* Don't disconnect replicas before loading the transferred RDB during a full sync
Previously, once a replica needed to start full synchronization with its master,
it would discard the cached master regardless of whether the full
synchronization succeeded or failed.
Now we discard the cached master only when the RDB transfer is finished
and we start to change the data space; this lets the replica start partial
resynchronization with a new master if the current master fails
during full synchronization.
# replication-3.tcl
had a test timeout failure with valgrind on daily CI:
```
*** [err]: SLAVE can reload "lua" AUX RDB fields of duplicated scripts in tests/integration/replication-3.tcl
Replication not started.
```
replication took more than 70 seconds.
https://github.com/redis/redis/runs/2854037905?check_suite_focus=true
On my machine it takes only about 30 seconds, but I can see how 50 seconds isn't enough.
# replication.tcl
loading was over too quickly in freebsd daily CI:
```
*** [err]: slave fails full sync and diskless load swapdb recovers it in tests/integration/replication.tcl
Expected '0' to be equal to '1' (context: type eval line 44 cmd {assert_equal [s -1 loading] 1} proc ::start_server)
```
# rdb.tcl
loading was over too quickly.
increase the time loading takes, and decrease the amount of work we try to achieve in that time.
This PR adds a spell checker CI action that will fail future PRs if they introduce typos and spelling mistakes.
This spell checker is based on a blacklist of common spelling mistakes, so it will not catch everything,
but at least it is also unlikely to cause false positives.
Besides that, the PR also fixes many spelling mistakes and typos, not all of which are a result of the spell checker we use.
Here's a summary of other changes:
1. Scanned the entire source code and fixed all sorts of typos and spelling mistakes (including missing or extra spaces).
2. Outdated function / variable / argument names in comments
3. Fix outdated keyspace masks error log when we check `config.notify-keyspace-events` in loadServerConfigFromString.
4. Trim the white space at the end of line in `module.c`. Check: https://github.com/redis/redis/pull/7751
5. Some outdated https link URLs.
6. Fixed some outdated comments, such as:
- In README: about the RDB, we used to say it creates a `thread`; changed to `process`
- dbRandomKey function comment (about dictGetRandomKey, changed to dictGetFairRandomKey)
- notifyKeyspaceEvent function comment (add type arg)
- Some other minor fixes in comments (most of them incorrectly quoted variable names)
7. Modified the error log so that users can easily distinguish between TCP and TLS in `changeBindAddr`
This commit revives and improves the ability to run the test suite against
external servers, instead of launching and managing `redis-server` processes as
part of the test fixture.
This capability existed in the past, using the `--host` and `--port` options.
However, it was quite limited and mostly useful when running a specific test.
Attempting to run larger chunks of the test suite surfaced many issues:
* Many tests depend on being able to start and control `redis-server` themselves,
and there's no clear distinction between external server compatible and other
tests.
* Cluster mode is not supported (resulting in `CROSSSLOT` errors).
This PR cleans up many things and makes it possible to run the entire test suite
against an external server. It also provides more fine grained controls to
handle cases where the external server supports a subset of the Redis commands,
limited number of databases, cluster mode, etc.
The tests directory now contains a `README.md` file that describes how this
works.
This commit also includes additional cleanups and fixes:
* Tests can now be tagged.
* Tag-based selection is now unified across `start_server`, `tags` and `test`.
* More information is provided about skipped or ignored tests.
* Repeated patterns in tests have been extracted to common procedures, both at a
global level and on a per-test file basis.
* Cleaned up some cases where test setup was based on a previous test executing
(a major anti-pattern that repeats itself in many places).
* Cleaned up some cases where test teardown was not part of a test (in the
future we should have dedicated teardown code that executes even when tests
fail).
* Fixed some tests that were flaky running on external servers.
In diskless replication, we create a read pipe for the RDB, between the child and the parent.
When we close this pipe (fd), the read handler also needs to be removed from the event loop (if it is still registered).
Otherwise, the next time we use the same fd, the registration will fail (panic), because
we will use EPOLL_CTL_MOD (the fd is still registered in the event loop) on an fd that was already removed via epoll_ctl.
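A minimal sketch of the ordering, using `aeDeleteFileEvent` (the wrapper itself is illustrative):

```c
/* Unregister the read handler before closing the RDB pipe fd, so a future
 * user of the same fd number starts from a clean event-loop state. */
void closeRdbPipeSketch(int fd) {
    aeDeleteFileEvent(server.el, fd, AE_READABLE); /* must happen first */
    close(fd); /* the fd number can now be safely reused and re-registered */
}
```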
When a test stops a 'load handler' by killing the process that generates the load,
some commands that are already in the input buffer might still be processed by the server.
This may cause some instability in tests that count on no more commands being
processed after we stop the 'load handler'.
In this commit, a new proc 'wait_load_handlers_disconnected' is added, to verify that no more
commands from any 'load handler' are processed, by checking that the clients that
generate the load are disconnected.
Also, replace the dbsize check with wait_for_ofs_sync before comparing debug digests, as
it would fail in case the last key the workload wrote was an overridden key (not a new one).
Affected tests
Race fix:
- failover command to specific replica works
- Connect multiple replicas at the same time (issue #141), master diskless=$mdl, replica diskless=$sdl
- AOF rewrite during write load: RDB preamble=$rdbpre
Cleanup and speedup:
- Test replication with blocking lists and sorted sets operations
- Test replication with parallel clients writing in different DBs
- Test replication partial resync: $descr (diskless: $mdl, $sdl, reconnect: $reconnect)
In GitHub Actions CI with valgrind, I saw that even the fast replica
(the one that wasn't paused) didn't get to complete the replication fast
enough, and ended up getting disconnected by timeout.
Additionally, due to a typo in uname, we didn't get to actually run the
CPU efficiency part of the test.
Starting with redis 6.0 (part of the TLS feature), a diskless master uses a pipe from the fork
child, so that the parent is the one sending data to the replicas.
This mechanism has an issue in which a hung replica will cause the master to wait
forever for it to read the data sent to it, thus preventing the fork child from terminating
and preventing the creation of any other forks.
This PR adds a timeout mechanism, much like the ACK-based timeout:
we disconnect replicas that aren't reading the RDB file fast enough.
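A minimal sketch of the check, assuming a per-replica last-write timestamp such as the `repl_last_partial_write` field this change tracks (the exact condition is illustrative):

```c
/* Disconnect replicas that haven't accepted RDB pipe data within
 * repl-timeout, mirroring the ACK-based timeout used after the transfer. */
void checkRdbPipeReadTimeoutSketch(void) {
    listIter li;
    listNode *ln;
    listRewind(server.slaves, &li);
    while ((ln = listNext(&li)) != NULL) {
        client *slave = listNodeValue(ln);
        if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END &&
            slave->repl_last_partial_write != 0 &&
            server.unixtime - slave->repl_last_partial_write > server.repl_timeout)
        {
            freeClient(slave); /* unblocks the fork child's pipe writer */
        }
    }
}
```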