This PR is 1 of 3 PRs intended to achieve the goal of 1 million requests
per second, as detailed by [dan touitou](https://github.com/touitou-dan)
in https://github.com/valkey-io/valkey/issues/22. This PR modifies the
IO threads to be fully asynchronous, which is a first and necessary step
to allow more work offloading and better utilization of the IO threads.
### Current IO threads state:
Valkey IO threads were introduced in Redis 6.0 to allow better
utilization of multi-core machines. Before this, Redis was
single-threaded and could only use one CPU core for network and command
processing. The introduction of IO threads helps in offloading the IO
operations to multiple threads.
**Current IO Threads flow:**
1. Initialization: When Redis starts, it initializes a specified number
of IO threads, in addition to the main thread. Each thread starts with
an empty list, which the main thread populates in each event loop with
pending-read clients or pending-write clients.
2. Read Phase: The main thread accepts incoming connections and reads
requests from clients. The reading of requests is offloaded to the IO
threads: the main thread puts the ready-to-read clients in a list and
sets the global io_threads_op to IO_THREADS_OP_READ; the IO threads then
pick the clients up, perform the read operation, and parse the first
incoming command.
3. Command Processing: After reading the requests, command processing is
still single-threaded and handled by the main thread.
4. Write Phase: Similar to the read phase, the write phase is also
offloaded to the IO threads. The main thread prepares the responses in
the clients' output buffers, puts the clients in a list, and sets the
global io_threads_op to IO_THREADS_OP_WRITE. The IO threads then pick
the clients up and perform the write operation to send the responses
back to the clients.
5. Synchronization: The main thread communicates with the IO threads
about how many jobs are left per thread via an atomic counter. The main
thread does not access a client while it is being handled by the IO
threads.
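A minimal sketch of that counter-based synchronization, with hypothetical names (io_threads_pending, handClientsToIOThreads); it only illustrates the fan-out/fan-in described above, not the exact implementation:
```c
#include <stdatomic.h>

#define IO_THREADS_MAX 128

/* Jobs left per IO thread; decremented by the threads as they work. */
atomic_long io_threads_pending[IO_THREADS_MAX];

/* Main thread: publish the per-thread job counts, then spin until every
 * counter reaches zero. This is the synchronous wait the PR removes. */
void handClientsToIOThreads(int num_threads, const long *jobs_per_thread) {
    for (int i = 0; i < num_threads; i++)
        atomic_store(&io_threads_pending[i], jobs_per_thread[i]);
    long pending;
    do {
        pending = 0;
        for (int i = 0; i < num_threads; i++)
            pending += atomic_load(&io_threads_pending[i]);
    } while (pending != 0); /* main thread is blocked here */
}
```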
**Issues with current implementation:**
* Underutilized Cores: The current implementation of IO-threads leads to
the underutilization of CPU cores.
* The main thread remains responsible for a significant portion of
IO-related tasks that could be offloaded to IO-threads.
* When the main thread is processing clients' commands, the IO threads
are idle for a considerable amount of time.
* Notably, the main thread's performance during the IO-related tasks is
constrained by the speed of the slowest IO-thread.
* Limited Offloading: Since the main thread waits synchronously for the
IO threads, the threads perform only read, parse, and write operations,
with parsing done only for the first command. If the threads could work
asynchronously, we could offload more work to them and reduce the load
on the main thread.
* TLS: Currently, we don't support IO threads with TLS (where offloading
IO would be more beneficial) since TLS read/write operations are not
thread-safe with the current implementation.
### Suggested change
Non-blocking main thread - The main thread and IO threads will operate
in parallel to maximize efficiency. The main thread will not be blocked
by IO operations. It will continue to process commands independently of
the IO thread's activities.
**Implementation details**
**Inter-thread communication.**
* We use a static, lock-free ring buffer of fixed size (2048 jobs) for
the main thread to send jobs and for the IO threads to receive them. If
the ring buffer fills up, the main thread will handle the task itself,
acting as back pressure (in case IO operations are more expensive than
command processing). A static ring buffer is a better candidate than a
dynamic job queue as it eliminates the need for allocation/freeing per
job.
* An IO job has the format `[void *function-callback | void *data]`,
where data is typically a client to read from or write to and the
function pointer is the function to call with that data, for example
readQueryFromClient. Using this format, we can later offload other
types of work to the IO threads (see the sketch after this list).
* The ring buffer is one-way, from the main thread to the IO thread.
Upon a read/write event, the main thread sends a read/write job; then,
in beforeSleep, it iterates over the pending read/write clients,
checking for each client whether the IO thread has already finished
handling it. The IO thread signals that it has finished handling a
client's read/write by toggling an atomic flag, read_state /
write_state, on the client struct.
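A minimal sketch of the job format and the single-producer/single-consumer ring buffer described above (identifiers such as ioJob, ioRing, and RING_SIZE are illustrative, not the PR's exact names):
```c
#include <stdatomic.h>
#include <stddef.h>

#define RING_SIZE 2048 /* fixed size: no allocation/free per job */

typedef struct ioJob {
    void (*callback)(void *data); /* e.g. a read or write function */
    void *data;                   /* usually the client to handle */
} ioJob;

typedef struct ioRing {
    ioJob jobs[RING_SIZE];
    atomic_size_t head; /* advanced only by the main thread (producer) */
    atomic_size_t tail; /* advanced only by the IO thread (consumer) */
} ioRing;

/* Main thread: returns 0 when the ring is full, in which case the caller
 * handles the job itself, acting as back pressure. */
int ringPush(ioRing *r, ioJob job) {
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SIZE) return 0; /* full */
    r->jobs[head % RING_SIZE] = job;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 1;
}
```
Since each ring is owned by exactly one producer (the main thread) and one consumer (an IO thread), no locks are needed; two atomic indices are enough.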
**Thread Safety**
As suggested in this solution, the IO threads are reading from and
writing to the clients' buffers while the main thread may access those
clients.
We must ensure no race conditions or unsafe access occurs while keeping
the Valkey code simple and lock free.
**Minimal Action in the IO Threads**
The main change is to limit the IO thread operations to the bare
minimum. The IO thread will access only the client's struct and only the
necessary fields in this struct.
The IO threads will be responsible for the following:
* Read Operation: The IO thread will only read and parse a single
command. It will not update the server stats or handle read errors or
parsing errors. These tasks will be taken care of by the main thread.
* Write Operation: The IO thread will only write the available data. It
will not free the client's replies, handle write errors, or update the
server statistics.
To achieve this without code duplication, the read/write code has been
refactored into smaller, independent components:
* Functions that perform only the read/parse/write calls.
* Functions that handle the read/parse/write results.
This refactor accounts for the majority of the modifications in this PR.
**Client Struct Safe Access**
As we ensure that the IO threads access memory only within the client
struct, we need to ensure thread safety only for the client's struct's
shared fields.
* Query Buffer
* Command parsing - The main thread will not try to parse a command from
the query buffer when a client is offloaded to the IO thread.
* Client's memory checks in client-cron - The main thread will not
access the client's query buffer while the client is offloaded and will
handle the querybuf grow/shrink once the client is back.
* CLIENT LIST command - The main thread will busy-wait for the IO thread
to finish handling the client, falling back to the current behavior in
which the main thread waits for the IO threads to finish their
processing.
* Output Buffer
* The IO thread will not change the client's bufpos and won't free the
client's reply lists. These actions will be done by the main thread on
the client's return from the IO thread.
* bufpos / block->used: As the main thread may change bufpos or a reply
block's `used` field, or add/delete blocks in the reply list, while the
IO thread writes, we add two fields to the client struct:
io_last_bufpos and io_last_reply_block. The IO thread will write only up
to io_last_bufpos, which was set by the main thread before sending the
client to the IO thread (see the sketch after this list). If more data
has been added to the cob in between, it will be written in the next
write job. In addition, the main thread will not trim or merge reply
blocks while the client is offloaded.
* Parsing Fields
* Client's cmd, argc, argv, reqtype, etc., are set during parsing.
* The main thread will indicate to the IO thread not to parse a command
if the client is not reset. In this case, the IO thread will only read
from the network and won't attempt to parse a new command.
* The main thread won't access c->cmd/c->argv in the CLIENT LIST
command; as stated before, it will busy-wait for the IO threads.
* Client Flags
* c->flags, which may be changed by the main thread in multiple places,
won't be accessed by the IO thread. Instead, the main thread will set
c->io_flags with the information the IO thread needs about the client's
state.
* Client Close
* On freeClient, the main thread will busy wait for the IO thread to
finish processing the client's read/write before proceeding to free the
client.
* Client's Memory Limits
* The IO thread won't handle the qb/cob limits. In case a client crosses
the qb limit, the IO thread will stop reading for it, letting the main
thread know that the client crossed the limit.
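As referenced in the output-buffer item above, here is a minimal sketch of the bufpos snapshot taken before offloading a write job (field and constant names are illustrative; io_last_reply_block is handled analogously):
```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct client {
    char buf[16 * 1024];
    size_t bufpos;          /* may keep growing while the client is offloaded */
    size_t io_last_bufpos;  /* the IO thread writes only up to this point */
    atomic_int write_state; /* toggled by the IO thread when it is done */
} client;

enum { CLIENT_IO_DONE, CLIENT_IO_PENDING };

/* Main thread, just before handing the client to an IO thread. */
static void snapshotOutputBuffer(client *c) {
    c->io_last_bufpos = c->bufpos; /* frozen upper bound for this write job */
    atomic_store(&c->write_state, CLIENT_IO_PENDING);
    /* ...enqueue the write job on the ring buffer... */
}
```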
**TLS**
TLS is currently not supported with IO threads for the following
reasons:
1. Pending reads - If SSL has pending data that has already been read
from the socket, there is a risk of not calling the read handler again.
To handle this, a list is used to hold the pending clients. With IO
threads, multiple threads can access the list concurrently.
2. Event loop modification - Currently, the TLS code
registers/unregisters the file descriptor from the event loop depending
on the read/write results. With IO threads, multiple threads can modify
the event loop struct simultaneously.
3. The same client can be sent to 2 different threads concurrently
(https://github.com/redis/redis/issues/12540).
Those issues were handled in the current PR:
1. The IO thread only performs the read operation. The main thread will
check for pending reads after the client returns from the IO thread and
will be the only one to access the pending list.
2. The registering/unregistering of events will be similarly postponed
and handled by the main thread only.
3. Each client is always sent to the same dedicated thread (c->id %
num_of_threads).
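The dedicated-thread mapping in point 3 is simple enough to show directly (a sketch; the real dispatch code differs):
```c
/* A client is always dispatched to the same IO thread, so a single
 * connection can never be handled by two threads concurrently. */
static inline int ioThreadForClient(unsigned long long client_id,
                                    int num_io_threads) {
    return (int)(client_id % (unsigned long long)num_io_threads);
}
```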
**Sending Replies Immediately with IO threads.**
Currently, after processing a command, we add the client to the
pending_writes_list. Only after processing all the clients do we send
all the replies. Since the IO threads are now working asynchronously, we
can send the reply immediately after processing the client’s requests,
reducing the command latency. However, if we are using AOF=always, we
must wait for the AOF buffer to be written, in which case we revert to
the current behavior.
**IO threads dynamic adjustment**
Currently, we use an all-or-nothing approach when activating the IO
threads. The current logic is as follows: if the number of pending write
clients is greater than twice the number of threads (including the main
thread), we enable all threads; otherwise, we enable none. For example,
if 8 IO threads are defined, we enable all 8 threads if there are 16
pending clients; otherwise, we enable none.
It makes more sense to enable partial activation of the IO threads. If
we have 10 pending clients, we will enable 5 threads, and so on. This
approach allows for a more granular and efficient allocation of
resources based on the current workload.
In addition, the user will now be able to change the number of I/O
threads at runtime. For example, when decreasing the number of threads
from 4 to 2, threads 3 and 4 will be closed after flushing their job
queues.
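A sketch of the partial activation logic described above (illustrative; the exact ratio and names may differ in the code):
```c
/* Activate roughly one IO thread per two pending clients, capped at the
 * configured maximum; e.g. 10 pending clients -> 5 active threads. */
static int ioThreadsToActivate(int pending_clients, int io_threads_num) {
    int wanted = pending_clients / 2;
    if (wanted > io_threads_num) wanted = io_threads_num;
    return wanted;
}
```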
**Tests**
Currently, we run the io-threads tests with 4 IO threads
(443d80f168/.github/workflows/daily.yml (L353)).
This means that we will not activate the IO threads unless there are 8
(threads * 2) pending write clients in a single event loop, which is
unlikely to happen in most tests, meaning the IO threads are not
currently being tested.
To force the main thread to offload work to the IO threads regardless of
the number of pending events, we add an events-per-io-thread
configuration with a default value of 2. When set to 0, this
configuration forces the main thread to always offload work to the IO
threads.
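A hedged sketch of how such a configuration could gate offloading (the exact semantics in the PR may differ):
```c
/* events-per-io-thread = 0 means "always offload" (used by the tests);
 * otherwise offloading kicks in once enough events are pending. */
static int shouldOffloadToIOThreads(int pending_events,
                                    int events_per_io_thread) {
    if (events_per_io_thread == 0) return 1;
    return pending_events >= events_per_io_thread;
}
```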
When we offload every single read/write operation to the IO threads, the
IO threads run at 100% CPU, and when multiple tests run concurrently,
some tests fail as a result of larger-than-expected command latencies.
To address this issue, we had to add some after or wait_for calls to
some of the tests to ensure they pass with the IO threads as well.
Signed-off-by: Uri Yagelnik <uriy@amazon.com>
I have validated that these settings closely match the existing coding
style, with one major exception: `BreakBeforeBraces`, which will be
`Attach` going forward. The mixed `BreakBeforeBraces` styles in the
current codebase are hard to imitate and also very odd IMHO - see below
```
if (a == 1) { /*Attach */
}
```
```
if (a == 1 ||
b == 2)
{ /* Why? */
}
```
Please do NOT merge just yet. Will add the github action next once the
style is reviewed/approved.
---------
Signed-off-by: Ping Xie <pingxie@google.com>
Signed-off-by: Daniel House <daniel.house@huawei.com>
Signed-off-by: daniel-house <danny@cs.toronto.edu>
Co-authored-by: Daniel House <daniel.house@huawei.com>
This includes comments used for module API documentation.
* Strategy for replacement: Regex search: `(//|/\*| \*|#).* ("|\()?(r|R)edis( |\.|'|\n|,|-|\)|")(?!nor the names of its contributors)(?!Ltd.)(?!Labs)(?!Contributors.)`
* Don't edit copyright comments
* Replace "Redis version X.X" -> "Redis OSS version X.X" to distinguish
from newly licensed repository
* Replace "Redis Object" -> "Object"
* Exclude markdown for now
* Don't edit Lua scripting comments referring to redis.X API
* Replace "Redis Protocol" -> "RESP"
* Replace redis-benchmark, -cli, -server, -check-aof/rdb with "valkey-"
prefix
* Most other places, I use best judgement to either remove "Redis", or
replace with "the server" or "server"
Fixes #148
---------
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Fix #146
Removed REDISMODULE_ prefixes from the core source code to align with
the new SERVERMODULE_ naming convention. Added a new 'redismodule.h'
header file to ensure full backward compatibility with existing modules.
This compatibility layer maps all legacy REDISMODULE_ prefixed
identifiers to their new SERVERMODULE_ equivalents, allowing existing
Redis modules to function without modification.
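For illustration only, a tiny slice of what such a compatibility shim might look like; the identifier list and the `servermodule.h` header name here are assumptions, not the actual file contents:
```c
/* redismodule.h - hypothetical sketch of the backward-compatibility layer. */
#ifndef REDISMODULE_COMPAT_H
#define REDISMODULE_COMPAT_H

#include "servermodule.h" /* assumed name of the new core header */

/* Map legacy constants onto their new SERVERMODULE_ equivalents. */
#define REDISMODULE_OK SERVERMODULE_OK
#define REDISMODULE_ERR SERVERMODULE_ERR
#define REDISMODULE_APIVER_1 SERVERMODULE_APIVER_1
#define REDISMODULE_CONFIG_DEFAULT SERVERMODULE_CONFIG_DEFAULT
/* ...and so on for every legacy identifier... */

#endif
```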
---------
Signed-off-by: Ping Xie <pingxie@google.com>
There are situations (especially with TLS) in which the engine gets too occupied managing a large number of new connections. Existing connections may time out while the server is processing the initial TLS handshakes of new connections, which may cause those connections to be re-established, perpetuating the problem. To better manage the tradeoff between the new-connection rate and other workloads, this change adds a config to cap the number of new connections accepted per event-loop cycle, instead of using a predetermined number (currently 1000).
This change introduces two new configurations, max-new-connections-per-cycle and max-new-tls-connections-per-cycle. The default value for TCP connections is 10 per cycle and the default value for TLS connections is 1 per cycle.
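A minimal sketch of the capped accept loop (names are illustrative):
```c
#include <stddef.h>
#include <sys/socket.h>

#define DEFAULT_TCP_ACCEPTS_PER_CYCLE 10 /* max-new-connections-per-cycle */
#define DEFAULT_TLS_ACCEPTS_PER_CYCLE 1  /* max-new-tls-connections-per-cycle */

/* Accept at most max_per_cycle connections, then return to the event loop
 * so existing connections keep being served. */
static void acceptHandlerSketch(int listen_fd, int max_per_cycle) {
    for (int accepted = 0; accepted < max_per_cycle; accepted++) {
        int cfd = accept(listen_fd, NULL, NULL);
        if (cfd == -1) break; /* no more pending connections (or an error) */
        /* ...create the client and register its read handler... */
    }
}
```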
---------
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Rather than a fixed iovcnt for connWritev, support maxiov per connection type instead.
A minor change to reduce memory for struct connection.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Technically, declaring a prototype with an empty declaration has been deprecated since the early days of C, but we never got a warning for it. C2x will apparently introduce a breaking change if you are using this type of declarator, so Clang 15 has started issuing a warning with -pedantic. Although not apparently a problem for any of the compilers we build on, it feels like the right thing is to properly adhere to the C standard and use (void).
Introduce an .is_local method to connection, implement it for TCP/TLS/
Unix socket, and drop 'int islocalClient(client *c)'. Then we can hide
the detail inside the specific connection types.
The upper layer tests whether a connection is local via the abstract
method only.
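A sketch of what the plain-socket implementation could look like (illustrative; it assumes the connAddr API introduced elsewhere in this series):
```c
#include <stddef.h>
#include <string.h>

typedef struct connection connection;
/* remote=1 fetches the peer address (see the connAddr commit below). */
int connAddr(connection *conn, char *ip, size_t ip_len, int *port, int remote);

/* A TCP connection is considered local if its peer address is loopback. */
static int connSocketIsLocal(connection *conn) {
    char ip[46]; /* INET6_ADDRSTRLEN */
    if (connAddr(conn, ip, sizeof(ip), NULL, 1) == -1) return -1;
    return !strcmp(ip, "127.0.0.1") || !strcmp(ip, "::1");
}
```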
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Introduce socket `shutdown()` into the connection type, and use it
on normal sockets if a fork is active. This allows us to close
client connections when there are child processes sharing the
file descriptors.
Fixes #10077. The reason is that since the `fork()` child is holding
the file descriptors, the `close` in `unlinkClient -> connClose`
isn't sufficient. The client will not realize that the connection is
disconnected until the child process ends.
Let's try to be conservative and only use shutdown when the fork is active.
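A sketch of the idea with hypothetical declarations (hasActiveChildProcess exists in the server; connShutdown stands in for the new connection-type method):
```c
/* Hypothetical declarations for the sketch. */
typedef struct connection connection;
typedef struct client { connection *conn; } client;
int hasActiveChildProcess(void);     /* exists in the server */
void connShutdown(connection *conn); /* the new connection-type method */
void connClose(connection *conn);

/* While a fork()ed child holds the same fd, close() alone does not tear
 * down the TCP connection, so the peer never notices. shutdown() does. */
void closeClientConnSketch(client *c) {
    if (hasActiveChildProcess()) /* conservative: only while a fork is active */
        connShutdown(c->conn);   /* FIN reaches the peer despite the child's fd */
    connClose(c->conn);
}
```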
* Support BUILD_TLS=module to be loaded as a module via config file or
command line. e.g. redis-server --loadmodule redis-tls.so
* Updates to redismodule.h to allow it to be used side by side with
server.h by defining REDISMODULE_CORE_MODULE
* Changes to server.h, redismodule.h and module.c to avoid repeated
type declarations (gcc 4.8 doesn't like these)
* Add a mechanism for non-ABI-neutral modules (ones that include
server.h) to refuse loading if they detect not being built together with
redis (release.c)
* Fix wrong signature of RedisModuleDefragFunc, this could break
compilation of a module, but not the ABI
* Move initialization of listeners in server.c to be after loading
the modules
* Config TLS after initialization of listeners
* Init cluster after initialization of listeners
* Add TLS module to CI
* Fix a test suite race condition:
Now that the listeners are initialized later, it's not sufficient to
wait for the PID message in the log, we need to wait for the "Server
Initialized" message.
* Fix issues with moduleconfigs test as a result from start_server
waiting for "Server Initialized"
* Fix issues with modules/infra test as a result of an additional module
present
Notes about Sentinel:
Sentinel can't really rely on the tls module, since it uses hiredis to
initiate connections and depends on OpenSSL (won't be able to use any
other connection modules for that), so it was decided that when TLS is
built as a module, sentinel does not support TLS at all.
This means that it keeps using redis_tls_ctx and redis_tls_client_ctx directly.
Example code of config in redis-tls.so (may be used in the future):
```c
RedisModuleString *tls_cfg = NULL;

void tlsInfo(RedisModuleInfoCtx *ctx, int for_crash_report) {
    UNUSED(for_crash_report);
    RedisModule_InfoAddSection(ctx, "");
    RedisModule_InfoAddFieldLongLong(ctx, "var", 42);
}

int tlsCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);
    return RedisModule_ReplyWithString(ctx, argv[1]);
}

RedisModuleString *getStringConfigCommand(const char *name, void *privdata) {
    REDISMODULE_NOT_USED(name);
    REDISMODULE_NOT_USED(privdata);
    return tls_cfg;
}

int setStringConfigCommand(const char *name, RedisModuleString *new, void *privdata, RedisModuleString **err) {
    REDISMODULE_NOT_USED(name);
    REDISMODULE_NOT_USED(err);
    REDISMODULE_NOT_USED(privdata);
    if (tls_cfg) RedisModule_FreeString(NULL, tls_cfg);
    RedisModule_RetainString(NULL, new);
    tls_cfg = new;
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(void *ctx, RedisModuleString **argv, int argc) {
    /* .... */
    if (RedisModule_CreateCommand(ctx, "tls", tlsCommand, "", 0, 0, 0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    if (RedisModule_RegisterStringConfig(ctx, "cfg", "", REDISMODULE_CONFIG_DEFAULT, getStringConfigCommand, setStringConfigCommand, NULL, NULL) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    if (RedisModule_LoadConfigs(ctx) == REDISMODULE_ERR) {
        if (tls_cfg) {
            RedisModule_FreeString(ctx, tls_cfg);
            tls_cfg = NULL;
        }
        return REDISMODULE_ERR;
    }
    /* ... */
}
```
Co-authored-by: zhenwei pi <pizhenwei@bytedance.com>
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Introduce a listen method into the connection type; this avoids
hard-coding the listen logic. Originally, we initialized the server
during startup like this:
```c
if (server.port)
    listenToPort(server.port, &server.ipfd);
if (server.tls_port)
    listenToPort(server.port, &server.tlsfd);
if (server.unixsocket)
    anetUnixServer(... server.unixsocket ...);
...
if (createSocketAcceptHandler(&server.ipfd, acceptTcpHandler) != C_OK)
if (createSocketAcceptHandler(&server.tlsfd, acceptTcpHandler) != C_OK)
if (createSocketAcceptHandler(&server.sofd, acceptTcpHandler) != C_OK)
...
```
...
If a new connection type gets supported, we have to add more hard-coded
logic to set up its listener.
Introduce .listen and refactor the listeners (Unix sockets support this
too); this allows setting up listener arguments and creating the
listeners in a loop.
What's more, '.listen' is defined in connection.h, so we would need to
include server.h to import 'struct socketFds', but server.h already
includes 'connection.h'. To avoid an include loop (and to keep the code
reasonable), define 'struct connListener' in connection.h instead of
using 'struct socketFds' from server.h. This is why this commit contains
more changes.
Since there are more fields in 'struct connListener', it's possible to
simplify changeBindAddr & applyTLSPort() & updatePort() into a single
logic: update the listener config from server.xxx and re-create the
listener.
Because of the new field 'priv' in struct connListener, we pass it to
the accept handler (even though it's not used currently); this may be
used in the future.
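A rough sketch of the listener abstraction this enables (field names are illustrative, not the exact declarations):
```c
typedef struct connListener connListener;

typedef struct ConnectionType {
    int (*listen)(connListener *listener);
    /* ...other methods... */
} ConnectionType;

/* Illustrative sketch; the real struct connListener lives in connection.h. */
struct connListener {
    int fd[16];         /* bound fds, one per bind address */
    int count;          /* number of bound fds */
    char **bindaddr;    /* addresses to bind */
    int bindaddr_count;
    int port;           /* refreshed from server.xxx on rebind */
    ConnectionType *ct; /* provides the .listen method */
    void *priv;         /* passed through to the accept handler */
};

/* Startup becomes a single loop instead of per-type hard coding. */
static int initListenersSketch(connListener *listeners, int count) {
    for (int j = 0; j < count; j++) {
        if (listeners[j].ct == NULL) continue; /* type not configured */
        if (listeners[j].ct->listen(&listeners[j]) != 0) return -1; /* C_ERR */
    }
    return 0; /* C_OK */
}
```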
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Suggested by Oran, use an array to store all the connection types
instead of a linked list, and identify a connection type by its string
name. The index of a connection type is dynamically allocated.
Currently we support a maximum of 8 connection types, including:
- tcp
- unix socket
- tls
RDMA is planned, which leaves another 4 types to support; that should be
enough for a long time.
Introduce 3 functions to get connection type by a fast path:
- connectionTypeTcp()
- connectionTypeTls()
- connectionTypeUnix()
Note that connectionByType() is designed to be used only in unlikely code paths.
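A sketch of how such a fast path can cache the by-name lookup (illustrative; the type name string is an assumption):
```c
typedef struct ConnectionType ConnectionType;
ConnectionType *connectionByType(const char *typename); /* slow registry scan */

/* Hot paths call this instead of scanning the registry every time. */
ConnectionType *connectionTypeTcp(void) {
    static ConnectionType *ct_tcp = NULL;
    if (ct_tcp == NULL) ct_tcp = connectionByType("tcp"); /* slow path, once */
    return ct_tcp;
}
```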
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Abstract the accept handler for socket & TLS, and add a helper function
'connAcceptHandler' to get the accept handler for a specified type.
Also move acceptTcpHandler into socket.c, and move
acceptTLSHandler into tls.c.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Abstract a common interface for connection types, so Redis can hide the
implementation and the upper layer only calls the connection API,
without macros.
```
      uplayer
         |
  connection layer
      /       \
  socket      TLS
```
Currently, for both socket and TLS, all the methods of a connection type
are declared as static functions.
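A condensed sketch of what such a method table might look like (a subset only; the method names follow the ones mentioned across this series):
```c
#include <stddef.h>

struct connection; /* opaque to the upper layer */
struct connListener;

typedef struct ConnectionType {
    /* setup & teardown */
    void (*init)(void);
    void (*cleanup)(void);
    int (*listen)(struct connListener *listener);
    /* create & close */
    struct connection *(*conn_create)(void);
    void (*close)(struct connection *conn);
    /* IO, called by the upper layer without knowing the transport */
    int (*write)(struct connection *conn, const void *data, size_t datalen);
    int (*read)(struct connection *conn, void *buf, size_t buf_len);
    /* TLS-only hooks; NULL for plain sockets */
    int (*has_pending_data)(void);
    int (*process_pending_data)(void);
} ConnectionType;
```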
It's possible to build TLS (or even socket) as a shared library, and
have Redis load it dynamically in the next step.
Also add helper functions connTypeOfCluster() and
connTypeOfReplication() to simplify the code:
```
link->conn = server.tls_cluster ? connCreateTLS() : connCreateSocket();
    ->
link->conn = connCreate(connTypeOfCluster());
```
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Introduce .get_peer_cert, .get_ctx and .get_client_ctx for TLS, and
hide redis_tls_ctx & redis_tls_client_ctx.
Outside code can then access the variables via the connection API only:
- redis_tls_ctx -> connTypeGetCtx(CONN_TYPE_TLS)
- redis_tls_client_ctx -> connTypeGetClientCtx(CONN_TYPE_TLS)
Also remove connTLSGetPeerCert(); use connGetPeerCert() instead.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Introduce .has_pending_data and .process_pending_data for connection
type, and hide tlsHasPendingData() and tlsProcessPendingData(). Also
set .has_pending_data and .process_pending_data as NULL explicitly in
socket.c.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Use connTypeRegister() to register a connection type into redis, and
query a connection type via connectionByType().
With this change, we can hide the TLS-specific methods inside the
connection type:
- void tlsInit(void);
- void tlsCleanup(void);
- int tlsConfigure(redisTLSContextConfig *ctx_config);
- int isTlsConfigured(void);
Merge isTlsConfigured & tlsConfigure, using a *reconfigure* argument
to distinguish them:
tlsConfigure(&server.tls_ctx_config)
-> connTypeConfigure(CONN_TYPE_TLS, &server.tls_ctx_config, 1)
isTlsConfigured() && tlsConfigure(&server.tls_ctx_config)
-> connTypeConfigure(CONN_TYPE_TLS, &server.tls_ctx_config, 0)
Finally, we can remove USE_OPENSSL from config.c. If redis is built
without TLS and is still run with TLS enabled, redis reports:
```
# Missing implement of connection type 1
# Failed to configure TLS. Check logs for more info.
```
The log can be optimized; let's leave that for the future. Maybe we can
use the connection type as a string.
Although uninitialized fields of a static struct are zero, we still
set them to NULL explicitly in socket.c, to keep them clear to read &
maintain:
```c
.init = NULL,
.cleanup = NULL,
.configure = NULL,
```
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Originally, connPeerToString was designed to get the address info from
a socket only (for both TCP & TLS), and the API 'connPeerToString' is
oriented toward operating on an FD, like:
```c
int connPeerToString(connection *conn, char *ip, size_t ip_len, int *port) {
    return anetFdToString(conn ? conn->fd : -1, ip, ip_len, port, FD_TO_PEER_NAME);
}
```
Introduce connAddr and implement the .addr method for socket and TLS,
so that the APIs 'connAddr' and 'connFormatAddr' become oriented toward
a connection, like:
```c
static inline int connAddr(connection *conn, char *ip, size_t ip_len, int *port, int remote) {
    if (conn && conn->type->addr) {
        return conn->type->addr(conn, ip, ip_len, port, remote);
    }
    return -1;
}
```
Also remove 'FD_TO_PEER_NAME' & 'FD_TO_SOCK_NAME'; use a boolean
'remote' argument to get the local/remote address of a connection.
With these changes, it's possible to support other connection types
which do not use sockets (e.g., RDMA).
Thanks to Oran for suggestions!
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Reorder methods for CT_Socket & CT_TLS, also add comments to make the
methods clear.
Also move the CT_TLS to the end of file, other methods can be static
in the next step.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Use SSL_shutdown(), in a best-effort manner, when closing a TLS
connection. This change better supports OpenSSL 3.x clients that will
not silently ignore the socket-level EOF.
There are scenarios that result in many small objects in the reply list,
such as commands heavily using deferred array replies (`addReplyDeferredLen`).
E.g. what the COMMAND command and CLUSTER SLOTS used to do (see #10056, #7123),
but also a transaction or a pipeline of commands that use just one deferred array reply.
With the current code, we have to run multiple loops along with multiple calls to `write()`
to send data back to the peer; by means of `writev()`, we can gather those scattered
objects in the reply list, include the static reply buffer as well, and send everything
with one system call, which ought to achieve higher performance.
In the case of TLS, we simply check and concatenate the buffers into one big buffer and send it
away with one call to `connTLSWrite()`; if the total amount of all buffers exceeds `NET_MAX_WRITES_PER_EVENT`,
we invoke `connTLSWrite()` multiple times to avoid a huge amount of memory copies.
Note that aside from reducing system calls, this change will also reduce the number of
small TCP packets sent.
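A self-contained sketch of the gathering step (types and names are illustrative; the real code also tracks partial writes and respects the per-connection maxiov):
```c
#include <stddef.h>
#include <sys/uio.h>
#include <unistd.h>

typedef struct replyBlock {
    struct replyBlock *next;
    size_t used;
    char data[];
} replyBlock;

/* Gather the static reply buffer plus the reply-list blocks into a single
 * writev() call instead of one write() per buffer. */
static ssize_t writeRepliesSketch(int fd, char *staticbuf, size_t bufpos,
                                  replyBlock *head, int maxiov) {
    struct iovec iov[maxiov];
    int cnt = 0;
    if (bufpos > 0) {
        iov[cnt].iov_base = staticbuf;
        iov[cnt].iov_len = bufpos;
        cnt++;
    }
    for (replyBlock *b = head; b != NULL && cnt < maxiov; b = b->next) {
        if (b->used == 0) continue;
        iov[cnt].iov_base = b->data;
        iov[cnt].iov_len = b->used;
        cnt++;
    }
    if (cnt == 0) return 0;
    return writev(fd, iov, cnt); /* one syscall for many small buffers */
}
```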
* Drop obsolete initialization calls.
* Use decoder API for DH parameters.
* Enable auto DH parameters if not explicitly used, which should be the
preferred configuration going forward.
We can now do: `config set maxmemory 10m repl-backlog-size 5m`
## Basic algorithm to support "transaction like" config sets:
1. Backup all relevant current values (via get).
2. Run "verify" and "set" on everything, if we fail run "restore".
3. Run "apply" on everything (optional optimization: skip functions already run). If we fail run "restore".
4. Return success.
### restore
1. Run set on everything in backup. If we fail log it and continue (this puts us in an undefined
state but we decided it's better than the alternative of panicking). This indicates either a bug
or some unsupported external state.
2. Run apply on everything in backup (optimization: skip functions already run). If we fail log
it (see comment above).
3. Return error.
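For orientation, a compact sketch of the overall flow with hypothetical helper names (the real implementation lives in config.c):
```c
typedef struct configPair configPair; /* name/value pair (hypothetical) */
static void backupCurrentValues(configPair *pairs, int n);
static int verifyAndSetAll(configPair *pairs, int n, const char **err);
static int applyAll(configPair *pairs, int n, const char **err);
static void restoreFromBackup(configPair *pairs, int n);

static int configSetManySketch(configPair *pairs, int n, const char **err) {
    backupCurrentValues(pairs, n);         /* 1. backup via get          */
    if (!verifyAndSetAll(pairs, n, err) || /* 2. verify and set          */
        !applyAll(pairs, n, err)) {        /* 3. apply                   */
        restoreFromBackup(pairs, n);       /* best effort: log and go on */
        return 0;
    }
    return 1;                              /* 4. success                 */
}
```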
## Implementation/design changes:
* Apply functions are idempotent (have no effect if they are run more than once for the same config).
* No indication in set functions if we're reading the config or running from the `CONFIG SET` command
(removed `update` argument).
* A set function should set some config variable and assume an (optional) apply function will use it
later to apply the change. If we know a setting can be safely applied immediately, can always be reverted,
and doesn't depend on any other configuration, we can apply it immediately from within the set function
(and not store the setting anywhere). This is the case for the `dir` config, for example, which has no
apply function. No apply function is needed either when setting the variable in the `server` struct
is all that needs to be done to make the configuration take effect. Note that the original concept of
`update_fn`, which received the old and new values, was removed and replaced by the optional apply function.
* Apply functions use settings written to the `server` struct and don't receive any inputs.
* I take care that for the generic (non-special) configs if there's no change I avoid calling the setter (possible
optimization: avoid calling the apply function as well).
* Passing the same config parameter more than once to `config set` will fail. You can't do `config set my-setting
value1 my-setting value2`.
Note that getting `save` in the context of the conf file parsing to work here as before was a pain.
The conf file supports an aggregate `save` definition, where each `save` line is added to the server's
save params. This is unlike any other line in the config file where each line overwrites any previous
configuration. Since we now support passing multiple save params in a single line (see top comments
about `save` in https://github.com/redis/redis/pull/9644) we should deprecate the aggregate nature of
this config line and perhaps reduce this ugly code in the future.
* Clean up EINTR handling so EINTR will not change connection state to begin with.
* On TLS, catch EINTR and return it as-is before going through OpenSSL error handling (which seems to not distinguish it from EAGAIN).
This PR adds a spell checker CI action that will fail future PRs if they introduce typos and spelling mistakes.
This spell checker is based on blacklist of common spelling mistakes, so it will not catch everything,
but at least it is also unlikely to cause false positives.
Besides that, the PR also fixes many spelling mistakes and typos, not all of which are a result of the spell checker we use.
Here's a summary of other changes:
1. Scanned the entire source code and fixed all sorts of typos and spelling mistakes (including missing or extra spaces).
2. Outdated function / variable / argument names in comments
3. Fix outdated keyspace masks error log when we check `config.notify-keyspace-events` in loadServerConfigFromString.
4. Trim the white space at the end of line in `module.c`. Check: https://github.com/redis/redis/pull/7751
5. Some outdated https link URLs.
6. Fix some outdated comments, such as:
- In the README: about the RDB, we used to say it creates a `thread`; changed to `process`
- dbRandomKey function comment (about dictGetRandomKey, changed to dictGetFairRandomKey)
- notifyKeyspaceEvent function comment (add the type arg)
- Some other minor fixes in comments (most of them incorrectly quoted variable names)
7. Modified the error log so that users can easily distinguish between TCP and TLS in `changeBindAddr`
This adds new `tls-client-cert-file` and `tls-client-key-file`
configuration directives, which make it possible to use different
certificates for the TLS-server and TLS-client functions of Redis.
These directives are optional. If they are not specified, the
`tls-cert-file` and `tls-key-file` directives are used for the TLS
client functions as well.
Also, `utils/gen-test-certs.sh` now creates additional server-only and client-only certs and will skip intensive operations if target files already exist.
The tls-ca-cert or tls-ca-cert-dir configuration parameters are only
used when Redis needs to authenticate peer certificates, in one of these
scenarios:
1. Incoming clients or replicas, with `tls-auth-clients` enabled.
2. A replica authenticating the master's peer certificate.
3. Cluster nodes authenticating other nodes when establishing the bus
protocol connection.