Introduce a new hidden server configuration, `enable-debug-assert`, which
allows selectively enabling or disabling, at runtime, expensive or risky
assertions used primarily for debugging and testing.
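As a hedged illustration only (the macro and config field names here are hypothetical, not the actual implementation), such a runtime-gated assertion could look like this:
```
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the server config struct in server.h. */
struct {
    int enable_debug_assert; /* set from the hidden config at runtime */
} server = {0};

/* Only evaluate the (potentially expensive) condition when the hidden
 * config is enabled; otherwise the assertion is a no-op. */
#define debugServerAssert(cond)                                      \
    do {                                                             \
        if (server.enable_debug_assert && !(cond)) {                 \
            fprintf(stderr, "DEBUG ASSERT FAILED: %s (%s:%d)\n",     \
                    #cond, __FILE__, __LINE__);                      \
            abort();                                                 \
        }                                                            \
    } while (0)

int main(void) {
    server.enable_debug_assert = 1; /* e.g. CONFIG SET enable-debug-assert yes */
    debugServerAssert(1 + 1 == 2);  /* passes, so nothing happens */
    return 0;
}
```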
Fix #569
---------
Signed-off-by: Ping Xie <pingxie@google.com>
Although I think this improves the readability of individual configs,
the fact there are now 1k more lines of configs makes this overall much
harder to parse. So reverting it back to the way it was before.
Replaced `,\n [ ]+` with `, `.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
I have validated that these settings closely match the existing coding
style with one major exception on `BreakBeforeBraces`, which will be
`Attach` going forward. The mixed `BreakBeforeBraces` styles in the
current codebase are hard to imitate and also very odd IMHO - see below
```
if (a == 1) { /* Attach */
}
```
```
if (a == 1 ||
b == 2)
{ /* Why? */
}
```
Please do NOT merge just yet. Will add the github action next once the
style is reviewed/approved.
---------
Signed-off-by: Ping Xie <pingxie@google.com>
This commit adds logic to cache the `CLUSTER SLOTS` response for reduced
latency and also updates the cache when a change in the cluster is
detected.
Historically, the `CLUSTER SLOTS` command was deprecated, however all the
server clients have been using `CLUSTER SLOTS` and have not migrated to
`CLUSTER SHARDS`. In the future this logic can be added to any other
commands to improve the performance of the engine.
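A rough sketch of the caching idea (the helper and variable names below are illustrative, not the ones used in the commit): the serialized reply is built once, reused until the cluster topology changes, and invalidated when a change is detected.
```
#include <stdlib.h>
#include <string.h>

/* Illustrative cache for a preserialized CLUSTER SLOTS reply. */
static char *cached_slots_reply = NULL;
static size_t cached_slots_len = 0;

/* Called whenever a cluster topology change is detected. */
void clusterInvalidateSlotsCache(void) {
    free(cached_slots_reply);
    cached_slots_reply = NULL;
    cached_slots_len = 0;
}

/* Stand-in for the expensive reply generation. */
static char *buildSlotsReply(size_t *len) {
    const char *reply = "*0\r\n"; /* placeholder payload */
    *len = strlen(reply);
    char *copy = malloc(*len + 1);
    memcpy(copy, reply, *len + 1);
    return copy;
}

/* Return the cached reply, rebuilding it on the first call after an
 * invalidation. */
const char *clusterGetSlotsReply(size_t *len) {
    if (cached_slots_reply == NULL)
        cached_slots_reply = buildSlotsReply(&cached_slots_len);
    *len = cached_slots_len;
    return cached_slots_reply;
}

int main(void) {
    size_t len;
    clusterGetSlotsReply(&len);    /* builds and caches the reply */
    clusterInvalidateSlotsCache(); /* e.g. on a slot migration */
    return 0;
}
```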
---------
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Default value for the "syslog-ident" config changed from "redis" to
"valkey".
Fixes #301.
---------
Signed-off-by: Karthick Ariyaratnam <karthyuom@gmail.com>
This patch tries to do the following things:
1. Rename `redis_*` and `REDIS_*` macros defined in config.h to
`valkey_*`, `VALKEY_*` and update associated used files. (`redis_fstat`,
`redis_fsync`, `REDIS_THREAD_STACK_SIZE`, etc.)
2. Remove the leading double underscore for guard macro in config.h.
---------
Signed-off-by: Lipeng Zhu <lipeng.zhu@intel.com>
New config 'extended-redis-compatibility' (yes/no) default no
* When yes:
* Use "Redis" in the following error replies:
- `-LOADING Redis is loading the dataset in memory`
- `-BUSY Redis is busy`...
- `-MISCONF Redis is configured to`...
* Use `=== REDIS BUG REPORT` in the crash log delimiters (START and
END).
* The HELLO command returns `"server" => "redis"` and `"version" =>
"7.2.4"` (our Redis OSS compatibility version).
* The INFO field for mode is called `"redis_mode"`.
* When no:
* Use "Valkey" instead of "Redis" in the mentioned errors and crash log
delimiters.
* The HELLO command returns `"server" => "valkey"` and the Valkey
version for `"version"`.
* The INFO field for mode is called `"server_mode"`.
* Documentation added in valkey.conf:
> Valkey is largely compatible with Redis OSS, apart from a few cases where
> Redis OSS compatibility mode makes Valkey pretend to be Redis. Enable this
> only if you have problems with tools or clients. This is a temporary
> configuration added in Valkey 8.0 and is scheduled to have no effect in
> Valkey 9.0 and be completely removed in Valkey 10.0.
* A test case for the config is added. It is designed to fail if the
config is not deprecated (has no effect) in Valkey 9 and deleted in
Valkey 10.
* Other test cases are adjusted to work regardless of this config.
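As a simplified sketch of the behavior (the config flag and helper names below are illustrative, not the exact ones in the code), the server just picks between the two name sets based on the config:
```
#include <stdio.h>

/* Illustrative flag backing 'extended-redis-compatibility'. */
static int extended_redis_compat = 0;

/* Product name used in HELLO and the INFO mode field. */
static const char *serverName(void) {
    return extended_redis_compat ? "redis" : "valkey";
}

/* Wording used in the -LOADING error reply. */
static const char *loadingError(void) {
    return extended_redis_compat
        ? "-LOADING Redis is loading the dataset in memory"
        : "-LOADING Valkey is loading the dataset in memory";
}

int main(void) {
    printf("HELLO server => %s\n", serverName());
    printf("%s\n", loadingError());
    extended_redis_compat = 1; /* e.g. CONFIG SET extended-redis-compatibility yes */
    printf("HELLO server => %s\n", serverName());
    return 0;
}
```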
Fixes #274, fixes #61
---------
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
This macro is used to add a rewrite string in src/config.c, and removing
"redis" from it will not affect logs or output.
Signed-off-by: Shivshankar-Reddy <shiva.sheri.github@gmail.com>
Low-risk error replies containing "Redis" are changed.
In most cases, the word "Redis" is simply removed from the error message,
such as in "This Redis instance is not configured to use an ACL file. (...)",
the message is changed to "This instance is not configured to use an ACL
file. (...)".
Additionally, error replies from `redis.call` in a Lua script are
affected, such as
* "Please specify at least one argument for this redis lib call"
* "Wrong number of args calling Redis command from script"
* "Unknown Redis command called from script"
* "Invalid command passed to redis.acl_check_cmd()"
The name Redis is simply removed from these error messages. In the last
one above, "redis.acl_check_cmd()" is replaced by
"server.acl_check_cmd()" in the error message.
The following error replies are considered to carry a high risk of causing
problems for clients, so they are not changed in this commit:
* (not in scope) "-MISCONF Redis is configured to save RDB snapshots
(...)"
* (not in scope) "-LOADING Redis is loading the dataset in memory"
* (not in scope) "-BUSY Redis is busy running a script (...)"
Fixes #204
---------
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
This includes comments used for module API documentation.
* Strategy for replacement: Regex search: `(//|/\*| \*|#).* ("|\()?(r|R)edis( |\.
|'|\n|,|-|\)|")(?!nor the names of its contributors)(?!Ltd.)(?!Labs)(?!Contributors.)`
* Don't edit copyright comments
* Replace "Redis version X.X" -> "Redis OSS version X.X" to distinguish
from newly licensed repository
* Replace "Redis Object" -> "Object"
* Exclude markdown for now
* Don't edit Lua scripting comments referring to redis.X API
* Replace "Redis Protocol" -> "RESP"
* Replace redis-benchmark, -cli, -server, -check-aof/rdb with "valkey-"
prefix
* Most other places, I use best judgement to either remove "Redis", or
replace with "the server" or "server"
Fixes #148
---------
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
This commit updates the following fields:
1. server_version -> valkey_version in server info. Since we would like
to advertise specific compatibility, we are making the version specific
to valkey. servername will remain as an optional indicator, and other
valkey compatible stores might choose to advertise something else.
1. We dropped redis-ver from the API. This isn't related to API
compatibility, but we didn't want to "fake" that valkey was creating an
rdb from a Redis version.
1. Renamed server-ver -> valkey_version in rdb info. Same as point one,
we want to explicitly indicate this was created by a valkey server.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Fix #146
Removed REDISMODULE_ prefixes from the core source code to align with
the new SERVERMODULE_ naming convention. Added a new 'redismodule.h'
header file to ensure full backward compatibility with existing modules.
This compatibility layer maps all legacy REDISMODULE_ prefixed
identifiers to their new SERVERMODULE_ equivalents, allowing existing
Redis modules to function without modification.
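A minimal, self-contained sketch of how such a compatibility layer can map legacy names onto the new ones (the constants below are illustrative examples, not the full set):
```
/* redismodule.h compatibility sketch. In the real header the new
 * SERVERMODULE_* names come from the renamed module API header. */
#define SERVERMODULE_OK 0
#define SERVERMODULE_ERR 1

/* Legacy names mapped onto the new ones so existing modules keep
 * compiling and running without modification. */
#define REDISMODULE_OK SERVERMODULE_OK
#define REDISMODULE_ERR SERVERMODULE_ERR

int main(void) {
    /* A legacy module checking REDISMODULE_OK still works. */
    return REDISMODULE_OK;
}
```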
---------
Signed-off-by: Ping Xie <pingxie@google.com>
New INFO information to be used to determine the valkey versioning info.
Internally, introduce new define values for "SERVER_VERSION", which is
different from the Redis compatibility version, "REDIS_VERSION".
Add two new info fields:
`server_version`: The Valkey server version
`server_name`: Indicates that the server is valkey.
Add one new RDB field: `server_ver`, which indicates the valkey version
that produced the RDB.
Add 3 new LUA globals: `SERVER_VERSION_NUM`, `SERVER_VERSION`, and
`SERVER_NAME`, which reflect the valkey version instead of the Redis
compatibility version.
Also clean up various places where the Redis name and configuration were
being used that are no longer necessary.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Remove trademarked wording on configuration layer.
Following changes for release notes:
1. Rename redis.conf to valkey.conf
2. Pre-filled config in the template config file: Changing pidfile to `/var/run/valkey_6379.pid`
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
Currently, once active defrag starts, we can not adjust active_defrag_running
downwards. This is because active_defrag_running is dynamically computed
based on the fragmentation, and we think we should not lower the effort when
the fragmentation drops.
However, we need to note that active_defrag_running is also dynamically
computed based on configurations. In this case, we are not respecting
cycle-min or cycle-max. Some people may realize halfway through that defrag
consumes a lot and want to adjust it.
Previously we could only turn off activedefrag and then turn it on again to
adjust active_defrag_running downwards. So in this PR, when an active defrag
configuration change is made, we will re-compute it (see the sketch after
this list). These configuration items are:
- active-defrag-cycle-min
- active-defrag-cycle-max
- active-defrag-threshold-upper
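A hedged sketch of the re-computation on config change (names approximate the real ones; the actual computation also factors in the measured fragmentation):
```
/* Sketch: when an active-defrag config changes at runtime, recompute
 * the current effort immediately so cycle-min/cycle-max are respected
 * without having to toggle activedefrag off and on. */
static int active_defrag_running = 0;   /* current CPU effort, percent */
static int active_defrag_cycle_min = 1;
static int active_defrag_cycle_max = 25;

static void computeDefragCycles(int desired_effort) {
    if (desired_effort < active_defrag_cycle_min) desired_effort = active_defrag_cycle_min;
    if (desired_effort > active_defrag_cycle_max) desired_effort = active_defrag_cycle_max;
    active_defrag_running = desired_effort;
}

/* Hypothetical "apply" callback invoked by CONFIG SET for the three
 * configs listed above. */
static int updateDefragConfiguration(void) {
    if (active_defrag_running) computeDefragCycles(active_defrag_running);
    return 1; /* success */
}

int main(void) {
    active_defrag_running = 50;  /* effort picked before the config change */
    updateDefragConfiguration(); /* e.g. CONFIG SET active-defrag-cycle-max 25 */
    return active_defrag_running == 25 ? 0 : 1;
}
```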
We have not limited the value of maxmemory-samples in the past, so it can
be set very large. If it is set very large, we will have a stack overflow
in evictionPoolPopulate when we trigger key eviction.
There is no reason for this config to be set too high, so just limit its
range to [1,64].
There are situations (especially in TLS) in which the engine gets too occupied managing a large number of new connections. Existing connections may time out while the server is processing the new connections' initial TLS handshakes, which may cause further new connections to be established, perpetuating the problem. To better manage the tradeoff between new connection rate and other workloads, this change adds a new config to manage the maximum number of new connections per event loop cycle, instead of using a predetermined number (currently 1000).
This change introduces two new configurations, max-new-connections-per-cycle and max-new-tls-connections-per-cycle. The default value for TCP connections is 10 per cycle and the default value for TLS connections is 1 per cycle.
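A hedged sketch of the accept-loop capping idea (the config names follow the change; the accept helper and handler here are stand-ins, not the real functions):
```
#include <stdbool.h>

/* Defaults introduced by this change. */
static int max_new_conns_per_cycle = 10;    /* max-new-connections-per-cycle */
static int max_new_tls_conns_per_cycle = 1; /* max-new-tls-connections-per-cycle */

/* Stand-in for a non-blocking accept; returns false when no more
 * pending connections are available. */
static bool tryAcceptOne(void) { return false; }

/* Accept at most N new connections per event-loop cycle instead of a
 * hard-coded 1000, so a burst of new connections cannot starve
 * existing clients; the rest are handled on the next cycle. */
static void acceptHandlerSketch(int budget) {
    while (budget-- > 0) {
        if (!tryAcceptOne()) break; /* nothing left to accept this cycle */
        /* ... create the client and register its read handler ... */
    }
}

int main(void) {
    acceptHandlerSketch(max_new_conns_per_cycle);     /* TCP listener */
    acceptHandlerSketch(max_new_tls_conns_per_cycle); /* TLS listener */
    return 0;
}
```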
---------
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
If fopen() is successful and redis_fstat determines that the file is 0
bytes, the file handle stored in fp will leak. This change closes the
filehandle stored in fp if the file is 0 bytes.
Second attempt at fixing Coverity 390029
This is a follow-up to #12796
In #11489, we considered the ACL username to be sensitive information,
considered ACL GETUSER a sensitive command, and removed it
from the redis-cli history file.
This PR redacts username information in ACL GETUSER and ACL DELUSER
from SLOWLOG, and also removes ACL DELUSER from the redis-cli history file.
This PR also marks tls-key-file-pass and tls-client-key-file-pass
as sensitive configs; they will be redacted from SLOWLOG and also
removed from the redis-cli history file.
The following four configurations are renamed to align with Redis style:
1. server_cpulist renamed to server-cpulist
2. bio_cpulist renamed to bio-cpulist
3. aof_rewrite_cpulist renamed to aof-rewrite-cpulist
4. bgsave_cpulist renamed to bgsave-cpulist
The original names are retained as aliases to ensure compatibility with
old configuration files. We recommend that users gradually transition to
the new configuration names to maintain consistency in style.
If fopen() is successful, but redis_fstat() fails, the file handle
stored in fp will leak. This change closes the filehandle stored in fp
if redis_fstat() fails.
Fixes Coverity 390029
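The shape of the fix, sketched with a plain fstat() standing in for the redis_fstat wrapper (the function below is illustrative, not the actual code path):
```
#include <stdio.h>
#include <sys/stat.h>

/* Sketch: close the FILE* on the early-return path so the handle does
 * not leak when fstat() fails (or, per the follow-up fix, when the
 * file is 0 bytes). */
int loadFileSketch(const char *path) {
    FILE *fp = fopen(path, "r");
    if (fp == NULL) return -1;

    struct stat sb;
    if (fstat(fileno(fp), &sb) == -1 || sb.st_size == 0) {
        fclose(fp); /* previously missing: fp leaked on this path */
        return -1;
    }

    /* ... normal loading would continue here ... */
    fclose(fp);
    return 0;
}

int main(void) {
    return loadFileSketch("/nonexistent") == -1 ? 0 : 1;
}
```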
Fixed some usages of tabs which caused weird indentation in the code. Tried to find all of the places so there was one PR. I ignored all of the usages of tabs which don't really affect readability.
This PR's purpose is to make the crash report process thread safe.
Main changes include:
1. `setupSigSegvHandler()` is introduced to initialize the signal handler.
This function first initializes the signal handler mutex (if not initialized yet)
and then registers the process to the signal handler.
2. **sigsegvHandler** flags:
SA_NODEFER - don't add the signal to the process signal mask. We use this
flag because we want to be able to handle a second call to the signal manually.
Removed SA_RESETHAND: this flag resets the signal handler function upon the first
entrance to the registered function. The reason to use this flag is to protect from
recursively entering the signal handler by the same thread. But, it also means
that if a second thread crashes while handling a signal, the process will be
terminated immediately and we won't get the crash report.
In this PR we discard this flag. The purpose of the signal handler guard described
below is to solve the above issues.
3. Add a **signal handler lock** with ERRORCHECK attributes.
The lock's purpose is to ensure that only one thread generates a crash report.
Once a second thread enters the signal handler it will be blocked.
We use the ERRORCHECK lock in order to protect from possible deadlock in
case the thread handling the crash gets a signal. In the latter scenario, we log
what we have collected until the handler crashed.
At the end of the crash report we reset the signal handler to SIG_DFL, with no flags, and
re-raise the signal to generate a core dump (if enabled) and exit the process.
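A hedged sketch of the ERRORCHECK-mutex guard described above (heavily simplified; the real handler also collects and logs the crash report):
```
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t signal_handler_lock;

/* Initialize the guard with ERRORCHECK attributes so a re-lock by the
 * same thread returns an error instead of deadlocking. */
static void setupSigSegvHandlerSketch(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&signal_handler_lock, &attr);
    /* ... sigaction() registration with SA_NODEFER would go here ... */
}

static void sigsegvHandlerSketch(int sig) {
    (void)sig;
    if (pthread_mutex_lock(&signal_handler_lock) != 0) {
        /* The same thread crashed again while reporting: log what we
         * have so far and bail out instead of deadlocking. */
        _exit(1);
    }
    /* Only one thread at a time reaches this point and writes the
     * crash report; other crashing threads block on the mutex. */
    /* ... generate report, reset handler to SIG_DFL, re-raise ... */
}

int main(void) {
    setupSigSegvHandlerSketch();
    (void)sigsegvHandlerSketch; /* would be registered via sigaction() */
    return 0;
}
```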
During the work on this PR we wanted to understand the historical reasons for
how crashes are handled.
With respect to the choice of the flag, we believe the **SA_RESETHAND** was not
added for any specific purpose.
**SA_ONSTACK** which is removed here from bugReportEnd(), was originally also
set in the initial registration to signal handler, but removed in 3ada43e73. In addition,
it was removed from another location in deee2c1ef with the following description,
which is also relevant to why it should be removed from bugReportEnd:
> it seems to be some valgrind bug with SA_ONSTACK.
> SA_ONSTACK seems unneeded since WD is not recursive (SA_NODEFER was removed),
> also, not sure if it's even valid without a call to sigaltstack()
aof-disable-auto-gc was created for testing purposes,
to check if certain AOF files were actually generated
and if they were deleted correctly during testing.
So hiding it; see #12249 for more discussion.
Observed that the sanitizer reported a memory leak, as cleanup is not done
before process termination in the following negative cases:
**- when we pass '--invalid' as an option to redis-server.**
```
vm:~/mem-leak-issue/redis$ ./src/redis-server --invalid
*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 2
>>> 'invalid'
Bad directive or wrong number of arguments
=================================================================
==865778==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 8 byte(s) in 1 object(s) allocated from:
#0 0x7f0985f65867 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x558ec86686ec in ztrymalloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:117
#2 0x558ec86686ec in ztrymalloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:135
#3 0x558ec86686ec in ztryrealloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:276
#4 0x558ec86686ec in zrealloc /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:327
#5 0x558ec865dd7e in sdssplitargs /home/ubuntu/mem-leak-issue/redis/src/sds.c:1172
#6 0x558ec87a1be7 in loadServerConfigFromString /home/ubuntu/mem-leak-issue/redis/src/config.c:472
#7 0x558ec87a13b3 in loadServerConfig /home/ubuntu/mem-leak-issue/redis/src/config.c:718
#8 0x558ec85e6f15 in main /home/ubuntu/mem-leak-issue/redis/src/server.c:7258
#9 0x7f09856e5d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
SUMMARY: AddressSanitizer: 8 byte(s) leaked in 1 allocation(s).
```
**- when we pass '--port' as an option and miss adding the port number to redis-server.**
```
vm:~/mem-leak-issue/redis$ ./src/redis-server --port
*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 2
>>> 'port'
wrong number of arguments
=================================================================
==865846==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 8 byte(s) in 1 object(s) allocated from:
#0 0x7fdcdbb1f867 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x557e8b04f6ec in ztrymalloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:117
#2 0x557e8b04f6ec in ztrymalloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:135
#3 0x557e8b04f6ec in ztryrealloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:276
#4 0x557e8b04f6ec in zrealloc /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:327
#5 0x557e8b044d7e in sdssplitargs /home/ubuntu/mem-leak-issue/redis/src/sds.c:1172
#6 0x557e8b188be7 in loadServerConfigFromString /home/ubuntu/mem-leak-issue/redis/src/config.c:472
#7 0x557e8b1883b3 in loadServerConfig /home/ubuntu/mem-leak-issue/redis/src/config.c:718
#8 0x557e8afcdf15 in main /home/ubuntu/mem-leak-issue/redis/src/server.c:7258
#9 0x7fdcdb29fd8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
Indirect leak of 10 byte(s) in 1 object(s) allocated from:
#0 0x7fdcdbb1fc18 in __interceptor_realloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:164
#1 0x557e8b04f9aa in ztryrealloc_usable_internal /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:287
#2 0x557e8b04f9aa in ztryrealloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:317
#3 0x557e8b04f9aa in zrealloc_usable /home/ubuntu/mem-leak-issue/redis/src/zmalloc.c:342
#4 0x557e8b033f90 in _sdsMakeRoomFor /home/ubuntu/mem-leak-issue/redis/src/sds.c:271
#5 0x557e8b033f90 in sdsMakeRoomFor /home/ubuntu/mem-leak-issue/redis/src/sds.c:295
#6 0x557e8b033f90 in sdscatlen /home/ubuntu/mem-leak-issue/redis/src/sds.c:486
#7 0x557e8b044e1f in sdssplitargs /home/ubuntu/mem-leak-issue/redis/src/sds.c:1165
#8 0x557e8b188be7 in loadServerConfigFromString /home/ubuntu/mem-leak-issue/redis/src/config.c:472
#9 0x557e8b1883b3 in loadServerConfig /home/ubuntu/mem-leak-issue/redis/src/config.c:718
#10 0x557e8afcdf15 in main /home/ubuntu/mem-leak-issue/redis/src/server.c:7258
#11 0x7fdcdb29fd8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
SUMMARY: AddressSanitizer: 18 byte(s) leaked in 2 allocation(s).
```
As part of the analysis, it was found that sdsfreesplitres is not called when these condition checks are hit.
Output after the fix:
```
vm:~/mem-leak-issue/redis$ ./src/redis-server --invalid
*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 2
>>> 'invalid'
Bad directive or wrong number of arguments
vm:~/mem-leak-issue/redis$
===========================================
vm:~/mem-leak-issue/redis$ ./src/redis-server --jdhg
*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 2
>>> 'jdhg'
Bad directive or wrong number of arguments
---------------------------------------------------------------------------
vm:~/mem-leak-issue/redis$ ./src/redis-server --port
*** FATAL CONFIG FILE ERROR (Redis 255.255.255) ***
Reading the configuration file, at line 2
>>> 'port'
wrong number of arguments
```
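The missing cleanup on the error path, as a hedged sketch (simplified stand-ins for the sds helpers; the real fix lives in the config loading code):
```
#include <stdlib.h>
#include <string.h>

typedef char *sds; /* stand-in for the real sds type */

/* Simplified stand-in for sdsfreesplitres(). */
static void sdsfreesplitres_stub(sds *tokens, int count) {
    for (int i = 0; i < count; i++) free(tokens[i]);
    free(tokens);
}

/* Simplified stand-in for sdssplitargs(): always returns one token. */
static sds *sdssplitargs_stub(const char *line, int *argc) {
    sds *argv = malloc(sizeof(sds));
    size_t len = strlen(line);
    argv[0] = malloc(len + 1);
    memcpy(argv[0], line, len + 1);
    *argc = 1;
    return argv;
}

int parseConfigLineSketch(const char *line) {
    int argc;
    sds *argv = sdssplitargs_stub(line, &argc);
    if (argc < 2) {
        /* Error path ("Bad directive or wrong number of arguments"):
         * previously we bailed out without freeing the split result,
         * which LeakSanitizer reported; free it before returning. */
        sdsfreesplitres_stub(argv, argc);
        return -1;
    }
    sdsfreesplitres_stub(argv, argc);
    return 0;
}

int main(void) {
    return parseConfigLineSketch("invalid") == -1 ? 0 : 1;
}
```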
Co-authored-by: Oran Agra <oran@redislabs.com>
This PR adds a human readable name to a node in clusters, visible as part of error logs. This is useful so that admins and operators of a Redis cluster have better visibility into failures without having to cross-reference the generated ID with some logical identifier (such as pod ID or EC2 instance ID). This is mentioned in #8948. Specific node names can be set using the variable cluster-announce-human-nodename. The nodename is gossiped using the cluster bus extension in #9530.
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Users can record logs of different levels by setting the `loglevel`.
However, sometimes there are many logs even at the warning level,
which can affect the performance of Redis.
For example, when a user accesses the tls-port using a non-encrypted link,
Redis will log lots of "# Error accepting a client connection: ...".
We can provide the ability to disable logging so that users can temporarily turn
off logging and turn it back on after the problem is resolved.
Technically, declaring a prototype with an empty declaration has been deprecated since the early days of C, but we never got a warning for it. C2x will apparently be introducing a breaking change if you are using this type of declarator, so Clang 15 has started issuing a warning with -pedantic. Although not apparently a problem for any of the compilers we build on, it feels like the right thing is to properly adhere to the C standard and use (void).
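For illustration (the function name below is made up, not one from the codebase):
```
/* Empty parameter list: "unspecified arguments". Deprecated, and
 * Clang 15 now warns about it with -pedantic. */
int getCountOld();

/* Standard-conforming: explicitly takes no arguments. This is what
 * the codebase switches to. */
int getCountNew(void);

int main(void) { return 0; }
```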
Fix the following config file error
```
*** FATAL CONFIG FILE ERROR (Redis 6.2.7) ***
Reading the configuration file, at line 152
>>> 'sentinel known-replica XXXX 127.0.0.1 5001'
Duplicate hostname and port for replica.
```
that is happening when a user uses the legacy key "known-slave" in
the config file and a config rewrite occurs. The config rewrite logic won't
replace the old line "sentinel known-slave XXXX 127.0.0.1 5001" and
would add a new line with "sentinel known-replica XXXX 127.0.0.1 5001"
which results in the error above "Duplicate hostname and port for replica."
example:
Current sentinel.conf
```
...
sentinel known-slave XXXX 127.0.0.1 5001
sentinel example-random-option X
...
```
after the config rewrite logic runs:
```
....
sentinel known-slave XXXX 127.0.0.1 5001
sentinel example-random-option X
# Generated by CONFIG REWRITE
sentinel known-replica XXXX 127.0.0.1 5001
```
This bug only exists in Redis versions >=6.2 because prior to that it was hidden
by the effects of this bug https://github.com/redis/redis/issues/5388 that was fixed
in https://github.com/redis/redis/pull/8271 and was released in versions >=6.2
Implementing the WAITAOF functionality which would allow the user to
block until a specified number of Redises have fsynced all previous write
commands to the AOF.
Syntax: `WAITAOF <num_local> <num_replicas> <timeout>`
Response: Array containing two elements: num_local, num_replicas
num_local is always either 0 or 1 representing the local AOF on the master.
num_replicas is the number of replicas that acknowledged the replication
offset of the last write being fsynced to the AOF.
Returns an error when called on replicas, or when called with non-zero
num_local on a master with AOF disabled; in all other cases the response
just contains the number of fsync copies.
Main changes:
* Added code to keep track of replication offsets that are confirmed to have
been fsynced to disk.
* Keep advancing master_repl_offset even when replication is disabled (and
there's no replication backlog, only if there's an AOF enabled).
This way we can use this command and it's mechanisms even when replication
is disabled.
* Extend REPLCONF ACK to `REPLCONF ACK <ofs> FACK <ofs>`; the FACK
will be appended only if there's an AOF on the replica, and is already ignored on
old masters (thus backwards compatible)
* WAIT now no longer waits for the replication offset after your last command, but
rather the replication offset after your last write (or read command that caused
propagation, e.g. lazy expiry).
Unrelated changes:
* WAIT command respects CLIENT_DENY_BLOCKING (not just CLIENT_MULTI)
Implementation details:
* Add an atomic var named `fsynced_reploff_pending` that's updated
(usually by the bio thread) and later copied to the main `fsynced_reploff`
variable (only if the AOF base file exists).
I.e. during the initial AOF rewrite it will not be used as the fsynced offset
since the AOF base is still missing.
* Replace the close+fsync bio job with a new BIO_CLOSE_AOF (AOF specific)
job that will also update the fsynced offset field.
* Handle all AOF jobs (BIO_CLOSE_AOF, BIO_AOF_FSYNC) in the same bio
worker thread, to impose ordering on their execution. This solves a
race condition where a job could set `fsynced_reploff_pending` to a higher
value than another pending fsync job, resulting in indicating an offset
for which parts of the data have not yet actually been fsynced.
Imposing an ordering on the jobs guarantees that fsync jobs are executed
in increasing order of replication offset.
* Drain bio jobs when switching `appendfsync` to "always"
This should prevent a write race between updates to `fsynced_reploff_pending`
in the main thread (`flushAppendOnlyFile` when set to ALWAYS fsync), and
those done in the bio thread.
* Drain the pending fsync when starting over a new AOF to avoid race conditions
with the previous AOF offsets overriding the new one (e.g. after switching to
replicate from a new master).
* Make sure to update the fsynced offset at the end of the initial AOF rewrite.
This is a must in case there are no additional writes that trigger a periodic fsync,
specifically for a replica that does a full sync.
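A hedged sketch of the fsynced-offset bookkeeping (C11 atomics stand in for the server's atomicvar helpers; names approximate the ones mentioned above):
```
#include <stdatomic.h>

/* Written by the bio thread after each AOF fsync completes. */
static _Atomic long long fsynced_reploff_pending = 0;

/* Read/written only by the main thread; the authoritative offset used
 * to release WAITAOF clients. */
static long long fsynced_reploff = 0;

/* Bio thread: record the replication offset covered by the fsync. */
void bioFsyncDoneSketch(long long fsynced_up_to) {
    atomic_store(&fsynced_reploff_pending, fsynced_up_to);
}

/* Main thread: publish the pending offset, but only once the AOF base
 * file exists, i.e. not during the initial AOF rewrite. */
void updateFsyncedReploffSketch(int aof_base_exists) {
    if (!aof_base_exists) return;
    fsynced_reploff = atomic_load(&fsynced_reploff_pending);
    /* WAITAOF clients whose target offset is <= fsynced_reploff can
     * now be unblocked. */
}

int main(void) {
    bioFsyncDoneSketch(42);        /* bio thread finished an fsync */
    updateFsyncedReploffSketch(1); /* main thread picks it up */
    return fsynced_reploff == 42 ? 0 : 1;
}
```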
Limitations:
It is possible to write a module or a Lua script that propagates to the AOF but doesn't
propagate to the replication stream. See REDISMODULE_ARGV_NO_REPLICAS and luaRedisSetReplCommand.
These features are incompatible with the WAITAOF command, and can result
in two bad cases. The scenario is that the user executes a command that only
propagates to the AOF, then immediately
issues a WAITAOF, and there are no further writes on the replication stream after that.
1. If the last thing that happened on the replication stream is a PING
(which increased the replication offset but won't trigger an fsync on the replica),
then the client would hang forever (it will wait for a FACK that the replica will never
send since it doesn't trigger any fsyncs).
2. If the last thing that happened is a write command that got propagated properly,
then WAITAOF will be released immediately, without waiting for an fsync (since
the offset didn't change).
Refactoring:
* Plumbing to allow bio worker to handle multiple job types
This introduces infrastructure necessary to allow BIO workers to
not have a 1-1 mapping of worker to job-type. This allows in the
future to assign multiple job types to a single worker, either as
a performance/resource optimization, or as a way of enforcing
ordering between specific classes of jobs.
Co-authored-by: Oran Agra <oran@redislabs.com>
Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845
Since ironing the details of the reply schema of each and every command can take a long time, we
would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch.
Meanwhile the changes of this PR are internal, they are part of the repo, but do not affect the produced build.
### Background
In #9656 we add a lot of information about Redis commands, but we are missing information about the replies
### Motivation
1. Documentation. This is the primary goal.
2. It should be possible, based on the output of COMMAND, to be able to generate client code in typed
languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing
testsuite, see the "Testing" section)
### Schema
The idea is to supply some sort of schema for the various replies of each command.
The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3.
Note that the reply structure itself may change, depending on the arguments (e.g. `XINFO STREAM`, with
and without the `FULL` modifier)
We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.
Example for `BZPOPMIN`:
```
"reply_schema": {
"oneOf": [
{
"description": "Timeout reached and no elements were popped.",
"type": "null"
},
{
"description": "The keyname, popped member, and its score.",
"type": "array",
"minItems": 3,
"maxItems": 3,
"items": [
{
"description": "Keyname",
"type": "string"
},
{
"description": "Member",
"type": "string"
},
{
"description": "Score",
"type": "number"
}
]
}
]
}
```
#### Notes
1. It is ok that some commands' reply structure depends on the arguments and it's the caller's responsibility
to know which is the relevant one. This comes after looking at other request-reply systems like OpenAPI,
where the reply schema can also be oneOf and the caller is responsible for knowing which schema is the relevant one.
2. The reply schemas will describe RESP3 replies only. Even though RESP3 is structured, we want to use the reply
schema for documentation (and possibly to create a fuzzer that validates the replies).
3. For documentation, the description field will include an explanation of the scenario in which the reply is sent,
including any relation to arguments. For example, for `ZRANGE`'s two schemas we will need to state that one
is with `WITHSCORES` and the other is without.
4. For documentation, there will be another optional field "notes" in which we will add a short description of
the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat
array, for example)
Given the above:
1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/)
(given that "description" and "notes" are comprehensive enough)
2. We can generate a client in a strongly typed language (but the return type could be a conceptual
`union` and the caller needs to know which schema is relevant). see the section below for RESP2 support.
3. We can create a fuzzer for RESP3.
### Limitations (because we are using the standard json-schema)
The problem is that Redis' replies are more diverse than what the json format allows. This means that,
when we convert the reply to a json (in order to validate the schema against it), we lose information (see
the "Testing" section below).
The other option would have been to extend the standard json-schema (and json format) to include stuff
like sets, bulk-strings, error-string, etc. but that would mean also extending the schema-validator - and that
seemed like too much work, so we decided to compromise.
Examples:
1. We cannot tell the difference between an "array" and a "set"
2. We cannot tell the difference between simple-string and bulk-string
3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the
case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems`
compares (member,score) tuples and not just the member name.
### Testing
This commit includes some changes inside Redis in order to verify the schemas (existing and future ones)
are indeed correct (i.e. describe the actual response of Redis).
To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands
it executed and their replies.
For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with
`--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with
`--log-req-res --force-resp3`)
You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate
`.reqres` files (same dir as the `stdout` files) which contain request-response pairs.
These files are later on processed by `./utils/req-res-log-validator.py` which does:
1. Goes over req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)
2. For each request-response pair, it validates the response against the request's reply_schema
(obtained from the extended COMMAND DOCS)
3. In order to get good coverage of the Redis commands, and all their different replies, we chose to use
the existing redis test suite, rather than attempt to write a fuzzer.
#### Notes about RESP2
1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that, it's time to
accept RESP3 as the future RESP)
2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3
so that we can validate it, we will need to know how to convert the actual reply to the one expected.
- number and boolean are always strings in RESP2 so the conversion is easy
- objects (maps) are always a flat array in RESP2
- others (nested array in RESP3's `ZRANGE` and others) will need some special per-command
handling (so the client will not be totally auto-generated)
Example for ZRANGE:
```
"reply_schema": {
"anyOf": [
{
"description": "A list of member elements",
"type": "array",
"uniqueItems": true,
"items": {
"type": "string"
}
},
{
"description": "Members and their scores. Returned in case `WITHSCORES` was used.",
"notes": "In RESP2 this is returned as a flat array",
"type": "array",
"uniqueItems": true,
"items": {
"type": "array",
"minItems": 2,
"maxItems": 2,
"items": [
{
"description": "Member",
"type": "string"
},
{
"description": "Score",
"type": "number"
}
]
}
}
]
}
```
### Other changes
1. Some tests that behave differently depending on the RESP are now being tested for both RESP,
regardless of the special log-req-res mode ("Pub/Sub PING" for example)
2. Update the history field of CLIENT LIST
3. Added basic tests for commands that were not covered at all by the testsuite
### TODO
- [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args. e.g.
when `SET` return NULL, the condition is `arguments.get||arguments.condition`, for `OK` the condition
is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
- [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
- [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
- [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output
of the tests - https://github.com/redis/redis/issues/11897
- [x] (probably a separate PR) add all missing schemas
- [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
- [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to
fight with the tcl including mechanism a bit)
- [x] issue: module API - https://github.com/redis/redis/issues/11898
- [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Shaya Potter <shaya@redislabs.com>
We have cases where we print information (might be important but by
no means an error indicator) with the LL_WARNING level.
Demoting these to LL_NOTICE:
- oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
- User requested shutdown...
This is also true for cases that we encounter a rare but normal situation.
Demoting to LL_NOTICE. Examples:
- AOF was enabled but there is already another background operation. An AOF background was scheduled to start when possible.
- Connection with master lost.
Based on yoav-steinberg's https://github.com/redis/redis/pull/10650#issuecomment-1112280554
and yossigo's https://github.com/redis/redis/pull/10650#pullrequestreview-967677676
In cluster-mode, only DB0 is supported, so all data must reside in that database. There is a single check that validates that data loaded from an RDB all resides in DB0. This check is performed after all the data is loaded, which makes it difficult to identify where the non-DB0 data resides and also does a bunch of unnecessary work to load incompatible data. This change overrides the databases config at startup to 1 so that an error is thrown when attempting to add data to a database other than DB0.
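Conceptually the startup override is just (a sketch; field names are stand-ins for the real config struct):
```
/* Sketch: in cluster mode only DB0 is valid, so force a single
 * database at startup; loading data into any other DB then fails
 * immediately instead of after the whole RDB was read. */
static int cluster_enabled = 1;
static int dbnum = 16; /* the "databases" config default */

void overrideConfigForClusterSketch(void) {
    if (cluster_enabled) dbnum = 1;
}

int main(void) {
    overrideConfigForClusterSketch();
    return dbnum == 1 ? 0 : 1;
}
```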
Co-authored-by: Eran Liberty <eranl@amazon.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Introduce the .is_local method to connection, and implement it for TCP/TLS/
Unix socket; also drop 'int islocalClient(client *c)'. Then we can
hide the detail in the specific connection types.
The upper layer tests whether a connection is local through the abstract method only.
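Roughly, the connection type gains an optional method and a generic wrapper, along these lines (a simplified sketch, not the exact struct layout):
```
#include <stddef.h>

typedef struct connection connection;

typedef struct ConnectionType {
    /* ... other methods (read, write, listen, ...) ... */
    int (*is_local)(connection *conn); /* new abstract method */
} ConnectionType;

struct connection {
    ConnectionType *type;
};

/* The TCP/TLS implementations would compare the peer address against
 * loopback; the Unix-socket implementation can simply return 1. */
static int connUnixIsLocal(connection *conn) {
    (void)conn;
    return 1;
}

/* Upper layer: ask the connection type, no per-type knowledge needed
 * (this replaces the old islocalClient(client *c) helper). */
int connIsLocal(connection *conn) {
    if (conn->type->is_local == NULL) return 0;
    return conn->type->is_local(conn);
}

static ConnectionType CT_Unix = {.is_local = connUnixIsLocal};

int main(void) {
    connection c = {.type = &CT_Unix};
    return connIsLocal(&c) ? 0 : 1;
}
```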
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
## Issue
During the client input/output buffer processing, the memory usage is
incrementally updated to keep track of clients going beyond a certain
threshold `maxmemory-clients` to be evicted. However, this additional
tracking activity leads to unnecessary CPU cycles wasted when no
client-eviction is required. It is applicable in two cases.
* `maxmemory-clients` is set to `0` which equates to no client eviction
(applicable to all clients)
* `CLIENT NO-EVICT` flag is set to `ON` which equates to a particular
client not applicable for eviction.
## Solution
* Disable client memory usage tracking during the read/write flow when
`maxmemory-clients` is set to `0` or `client no-evict` is `on`.
The memory usage is tracked only during the `clientCron` i.e. it gets
periodically updated.
* Cleanup the clients from the memory usage bucket when client eviction
is disabled.
* When the maxmemory-clients config is enabled or disabled at runtime,
we immediately update the memory usage buckets for all clients (tested:
scanning 80000 clients took some 20ms).
Benchmarks have shown that this can improve performance by about 5% in
certain situations.
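A hedged sketch of the skip condition (flag and field names only approximate the real ones):
```
/* Sketch: skip per-command client memory accounting when client
 * eviction can never happen, and leave it to clientCron(). */

#define CLIENT_NO_EVICT (1ULL << 0) /* stand-in for the real flag bit */

typedef struct client {
    unsigned long long flags;
} client;

static unsigned long long maxmemory_clients = 0; /* 0 = eviction disabled */

static int clientEvictionAllowed(client *c) {
    if (maxmemory_clients == 0) return 0;     /* eviction globally off */
    if (c->flags & CLIENT_NO_EVICT) return 0; /* CLIENT NO-EVICT ON */
    return 1;
}

void updateClientMemUsageSketch(client *c) {
    if (!clientEvictionAllowed(c)) return; /* avoid wasted CPU cycles */
    /* ... recompute memory usage and move the client between buckets ... */
}

int main(void) {
    client c = {.flags = 0};
    updateClientMemUsageSketch(&c); /* returns early: no tracking needed */
    return 0;
}
```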
Co-authored-by: Oran Agra <oran@redislabs.com>
The cluster-announce-port/cluster-announce-bus-port/cluster-announce-tls-port should take effect at runtime
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Small sets with not only integer elements are listpack encoded, by default
up to 128 elements, max 64 bytes per element, new config `set-max-listpack-entries`
and `set-max-listpack-value`. This saves memory for small sets compared to using a hashtable.
Sets with only integers, even very small sets, are still intset encoded (up to 1G
limit, etc.). Larger sets are hashtable encoded.
This PR increments the RDB version, and has an effect on OBJECT ENCODING.
Possible conversions when elements are added:
intset -> listpack
listpack -> hashtable
intset -> hashtable
Note: No conversion happens when elements are deleted. If all elements are
deleted and then added again, the set is deleted and recreated, thus implicitly
converted to a smaller encoding.
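The conversion decision on insert, as a hedged sketch (the thresholds correspond to the new configs; the helper name and encoding constants are illustrative):
```
#include <string.h>

/* New configs introduced by this change. */
static size_t set_max_listpack_entries = 128;
static size_t set_max_listpack_value = 64;

/* Illustrative encodings. */
enum { ENC_INTSET, ENC_LISTPACK, ENC_HASHTABLE };

/* Decide the target encoding when adding a non-integer element to a
 * set that currently holds `numelems` elements. */
int setTargetEncodingSketch(int cur_enc, size_t numelems, const char *elem) {
    if (cur_enc == ENC_HASHTABLE) return ENC_HASHTABLE; /* never converts back */
    if (numelems + 1 > set_max_listpack_entries ||
        strlen(elem) > set_max_listpack_value)
        return ENC_HASHTABLE;  /* too big for a listpack */
    return ENC_LISTPACK;       /* intset or listpack -> listpack */
}

int main(void) {
    /* A small set gaining a short string element becomes listpack encoded. */
    return setTargetEncodingSketch(ENC_INTSET, 3, "abc") == ENC_LISTPACK ? 0 : 1;
}
```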
This PR introduces a couple of changes to improve cluster test stability:
1. Increase the cluster node timeout to 3 seconds, which is similar to the
normal cluster tests, but introduce a new mechanism to increase the ping
period so that the tests are still fast. This new config is a debug config.
2. Set `cluster-replica-no-failover yes` on a wider array of tests which are
sensitive to failovers. This was occurring on the ARM CI.
Adds a number of user management/ACL validation/command execution functions to improve a
Redis module's ability to enforce ACLs correctly and easily.
* RM_SetContextUser - sets a RedisModuleUser on the context, which RM_Call will use to both
validate ACLs (if requested and set) as well as assign to the client so that scripts executed via
RM_Call will have proper ACL validation.
* RM_SetModuleUserACLString - Enables one to pass an entire ACL string, not just a single OP
and have it applied to the user
* RM_GetModuleUserACLString - returns a stringified version of the user's ACL (same format as dump
and list). Contains an optimization to cache the stringified version until the underlying ACL is modified.
* Slightly re-purpose the "C" flag to RM_Call from just being about the ACL check before calling the
command, to actually running the command with the right user, so that it also affects commands
inside EVAL scripts. See #11231
Introduce a listen method into the connection type, so the listen logic
no longer needs to be hard coded. Originally, we initialize the server during
startup like this:
if (server.port)
listenToPort(server.port,&server.ipfd);
if (server.tls_port)
listenToPort(server.tls_port,&server.tlsfd);
if (server.unixsocket)
anetUnixServer(...server.unixsocket...);
...
if (createSocketAcceptHandler(&server.ipfd, acceptTcpHandler) != C_OK)
if (createSocketAcceptHandler(&server.tlsfd, acceptTLSHandler) != C_OK)
if (createSocketAcceptHandler(&server.sofd, acceptUnixHandler) != C_OK)
...
If a new connection type gets supported, we have to add more hard-coded
logic to set up its listener.
Introduce .listen and refactor the listeners; the Unix socket supports this too.
This allows setting up the listener arguments and creating the listeners in a loop.
What's more, '.listen' is defined in connection.h, so we would need to include
server.h to import 'struct socketFds', but server.h already includes
'connection.h'. To avoid an include loop (and to keep the code reasonable),
define 'struct connListener' in connection.h instead of using 'struct socketFds'
from server.h. This leads this commit to contain more changes.
There are more fields in 'struct connListener', hence it's possible to
simplify changeBindAddr & applyTLSPort() & updatePort() into a single
logic: update the listener config from server.xxx, and re-create
the listener.
Because of the new field 'priv' in struct connListener, we expect to pass
this to the accept handler (even though it's not used currently); this may be
used in the future.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Suggested by Oran, use an array to store all the connection types
instead of a linked list, and use a string as the connection name. The index
of a connection type is dynamically allocated.
Currently we support a maximum of 8 connection types, including:
- tcp
- unix socket
- tls
RDMA is in the plan, and that still leaves another 4 types to support; it
should be enough for a long time.
Introduce 3 functions to get a connection type via a fast path:
- connectionTypeTcp()
- connectionTypeTls()
- connectionTypeUnix()
Note that connectionByType() is designed to be used only in unlikely code paths.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Abstract accept handler for socket&TLS, and add helper function
'connAcceptHandler' to get accept handler by specified type.
Also move acceptTcpHandler into socket.c, and move
acceptTLSHandler into tls.c.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Use connTypeRegister() to register a connection type into redis, and
query a connection type via connectionByType().
With this change, we can hide TLS specified methods into connection
type:
- void tlsInit(void);
- void tlsCleanup(void);
- int tlsConfigure(redisTLSContextConfig *ctx_config);
- int isTlsConfigured(void);
Merge isTlsConfigured & tlsConfigure, use an argument *reconfigure*
to distinguish:
tlsConfigure(&server.tls_ctx_config)
-> connTypeConfigure(CONN_TYPE_TLS, &server.tls_ctx_config, 1)
isTlsConfigured() && tlsConfigure(&server.tls_ctx_config)
-> connTypeConfigure(CONN_TYPE_TLS, &server.tls_ctx_config, 0)
Finally, we can remove USE_OPENSSL from config.c. If redis is built
without TLS but is still run with TLS enabled, then redis reports:
# Missing implement of connection type 1
# Failed to configure TLS. Check logs for more info.
The log can be optimized; let's leave that for the future. Maybe we can
use the connection type as a string.
Although uninitialized fields of a static struct are zero, we still
set them to NULL explicitly in socket.c, to keep them clear to read & maintain:
.init = NULL,
.cleanup = NULL,
.configure = NULL,
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>