Add aof_rewrites and rdb_snapshots counters to info.
This is useful to figure out whether a rewrite or snapshot happened since the last check.
This was part of the (ongoing) effort to provide a safe backup solution for multipart-aof backups.
This is an enhancement for the INFO command. Previously INFO only supported one argument
naming an info section; if a user wanted information from more categories, they had to either run
INFO all / default or call INFO multiple times.
**Description of the feature**
The goal of adding this feature is to let the user retrieve multiple categories via the INFO
command, and still avoid emitting the same section twice.
A use case for this is Redis Sentinel, which periodically calls the INFO command to refresh
info from the monitored Master/Slaves, but only uses the Server and Replication categories for
parsing. If the INFO command can return just the categories the client side
needs, it saves a lot of client-side parsing time as well as network bandwidth.
**Implementation**
To share code between redis, sentinel, and other users of INFO (DEBUG and modules),
we have a new `genInfoSectionDict` function that returns a dict and some boolean flags
(e.g. `all`) to the caller (built from user input).
Sentinel later purges unwanted sections from that dict, which is then forwarded to `genRedisInfoString`.
**Usage Examples**
INFO Server Replication
INFO CPU Memory
INFO default commandstats
Co-authored-by: Oran Agra <oran@redislabs.com>
- add COMMAND GETKEYSANDFLAGS sub-command
- add RM_KeyAtPosWithFlags and GetCommandKeysWithFlags (see the sketch after this list)
- RM_KeyAtPos and RM_CreateCommand set flags requiring full access for keys
- RM_CreateCommand set VARIABLE_FLAGS
- expose `variable_flags` flag in COMMAND INFO key-specs
- getKeysFromCommandWithSpecs prefers key-specs over getkeys-api
- add tests for all of these
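A minimal, hedged sketch of the getkeys-api side of this: the command name and key positions below are hypothetical, but it shows how a module handler could report keys together with access flags via RM_KeyAtPosWithFlags.

```c
#include "redismodule.h"

/* Hypothetical "mymod.mget2 <key1> <key2>" command. When Redis asks for key
 * positions (e.g. for COMMAND GETKEYSANDFLAGS or ACL checks), report both keys
 * as read-only and accessed, since their content is returned to the caller. */
int MGet2_Command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (RedisModule_IsKeysPositionRequest(ctx)) {
        if (argc == 3) {
            RedisModule_KeyAtPosWithFlags(ctx, 1,
                REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS);
            RedisModule_KeyAtPosWithFlags(ctx, 2,
                REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS);
        }
        return REDISMODULE_OK;
    }
    REDISMODULE_NOT_USED(argv);
    if (argc != 3) return RedisModule_WrongArity(ctx);
    /* ... the normal execution path would read and reply with both keys ... */
    return RedisModule_ReplyWithNull(ctx);
}
```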
If summary or since is empty, we used to return NULL in
COMMAND DOCS. Currently all native redis commands have these
two fields, but for module commands summary and since are optional
in RM_SetCommandInfo. With the change in #10043, if a module
command doesn't have summary or since, redis-cli will
crash (see #10250).
In this commit, COMMAND DOCS avoids adding summary or since
when they are missing.
Adds RM_SetCommandInfo, allowing modules to provide the following command info:
* summary
* complexity
* since
* history
* hints
* arity
* key specs
* args
This information affects the output of `COMMAND`, `COMMAND INFO` and `COMMAND DOCS`,
is used by Cluster and ACL, and is used to filter commands with the wrong number of arguments before
the call reaches the module code.
The recently added API functions for key specs (never released) are removed.
A minimalist example would look like so:
```c
RedisModuleCommand *mycmd = RedisModule_GetCommand(ctx,"mymodule.mycommand");
RedisModuleCommandInfo mycmd_info = {
    .version = REDISMODULE_COMMAND_INFO_VERSION,
    .arity = -5,
    .summary = "some description",
};
if (RedisModule_SetCommandInfo(mycmd, &mycmd_info) == REDISMODULE_ERR)
    return REDISMODULE_ERR;
```
Notes:
* All the provided information (including strings) is copied, not keeping references to the API input data.
* The version field is actually a static struct that contains the sizes of the structs used in arrays,
so we can extend these in the future and an old version will still be able to take the part it can support.
Change the sentinel config file to a directory in SENTINEL SET test.
So it will now fail on the `rename` in `rewriteConfigOverwriteFile`.
The test used to set the sentinel config file permissions to `000` to
simulate failure. But it fails on centos7 / freebsd / alpine. (introduced in #10151)
Other changes:
1. More error messages after the config rewrite failure.
2. Modify arg name `force_all` in `rewriteConfig` to `force_write`. (was renamed in #9304)
3. Fix a typo in debug quicklist-packed-threshold, then -> than. (#9357)
As Sentinel relies upon a consensus algorithm, all sentinel instances
randomize the time at which they initiate their next attempt to become the
leader of the group. But time after time, they all raffled the same value.
The problem is in the line `srand(time(NULL)^getpid())`: all spun-up
containers get the same time (in seconds) and the same pid,
which is always 1. Mix `tv_usec` into the seed and verify that even
consecutive calls produce different values and make a difference.
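An illustrative sketch of the idea (not the exact upstream change): mix microsecond-level entropy into the seed so containers that share the same start second and pid=1 still diverge.

```c
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/time.h>

static void initRandomSeed(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    /* time() alone is identical across containers started in the same second,
     * and getpid() is always 1 inside a container; tv_usec breaks the tie. */
    srand((unsigned int)(time(NULL) ^ getpid() ^ tv.tv_usec));
}
```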
Add optional `notes` to keyspecs.
Other changes:
1. Remove the "incomplete" flag from SORT and SORT_RO: it is misleading since "incomplete" means "this spec may not return all the keys it describes" but SORT and SORT_RO's specs (except the input key) do not return any keys at all.
So basically:
If a spec's begin_search is "unknown" you should not use it at all, you must use COMMAND GETKEYS;
if a spec itself is "incomplete", you can use it to get a partial list of keys, but if you want all of them you must use COMMAND GETKEYS;
otherwise, the spec will return all the keys
2. `getKeysUsingKeySpecs` handles incomplete specs internally
In #10025 we added a mechanism for flagging certain properties for Redis Functions.
This led us to think we'd like to "port" this mechanism to Redis Scripts (`EVAL`) as well.
One good reason for this, other than the added functionality, is that it addresses the
poor behavior we currently have in `EVAL` when the script performs a (non DENY_OOM) write operation
during an OOM state. See #8478 (and a previous attempt to handle it via #10093) for details.
Note that in Redis Functions **all** write operations (including DEL) will return an error during OOM state
unless the function is flagged as `allow-oom` in which case no OOM checking is performed at all.
This PR:
- Enables setting `EVAL` (and `SCRIPT LOAD`) script flags as defined in #10025.
- Provides a syntactical framework via a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) first line (e.g. `#!lua flags=no-writes,allow-stale`) for
  additional script annotations and even engine selection (instead of just lua) for scripts.
- Provides backwards compatibility so scripts without the new annotations will behave as they did before.
- Appropriate tests.
- Changes `EVAL[SHA]/_RO` to be flagged as `STALE` commands. This makes it possible to flag individual
scripts as `allow-stale` or not flag them as such. In backwards compatibility mode these commands will
return the `MASTERDOWN` error as before.
- Changes `SCRIPT LOAD` to be flagged as a `STALE` command. This is mainly to make it logically
compatible with the change to `EVAL` in the previous point. It enables loading a script on a stale server,
which is technically okay since it doesn't relate directly to the server's dataset. Running the script does, but that
won't work unless the script is explicitly marked as `allow-stale`.
Note that even though the Lua syntax doesn't support hash tag comments, `.lua` files do support a shebang
line at the top so they can be executed on Unix systems like any shell script. Lua's `luaL_loadfile` handles
this as part of the Lua library. In the case of `luaL_loadbuffer`, which is what Redis uses, I needed to fix the
input script in case of a shebang manually. I did this the same way `luaL_loadfile` does, by replacing the
first line with a single line feed character.
Summary of changes:
1. Rename `redisCommand->name` to `redisCommand->declared_name`, it is a
const char * for native commands and SDS for module commands.
2. Store the [sub]command fullname in `redisCommand->fullname` (sds).
3. List subcommands in `ACL CAT`
4. List subcommands in `COMMAND LIST`
5. `moduleUnregisterCommands` now will also free the module subcommands.
6. RM_GetCurrentCommandName returns full command name
Other changes:
1. Add `addReplyErrorArity` and `addReplyErrorExpireTime`
2. Remove the `getFullCommandName` function that is now useless.
3. Some cleanups around `fullname` since it is now an SDS.
4. Delete the now-useless `populateSingleCommand` function from server.h.
5. Added tests to cover this change.
6. Add some module unload tests and fix the leaks
7. Make error messages uniform, make sure they always contain the full command
name and that it's quoted.
8. Fix some typos
see the history in #9504, fixes #10124
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: guybe7 <guy.benoish@redislabs.com>
Recently we added extensive support for sub-commands for redis 7.0,
this meant that the old ACL mechanism for
sub-commands wasn't needed, or actually was improved (to handle both include
and exclude control, like for commands), but only for real sub-commands.
The old mechanism in ACL was renamed to first-arg, and was able to match the
first argument of any command (including sub-commands).
We now realized that we might wanna completely delete that first-arg feature some
day, so the first step was not to give it new capabilities in 7.0 that it didn't have before.
Changes:
1. ACL: Block the first-arg mechanism on subcommands (we keep it for non-subcommands
for backward compatibility)
2. COMMAND: When looking up a command, insist the command name doesn't contain
extra words. Example: When a user issues `GET key` we want `lookupCommand` to return
`getCommand`, but when COMMAND calls `lookupCommand` with `get|key` we want it to fail.
Other changes:
1. ACLSetUser: prevent a redundant command lookup
* Implemented selectors which provide multiple different sets of permissions to users
* Implemented key based permissions
* Added a new ACL dry-run command to test permissions before execution
* Updated module APIs to support checking key based permissions
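A hedged sketch of how a module might use the key-based permission check: the command, key position and flag combination below are hypothetical, assuming the RM_ACLCheckKeyPermissions / RM_GetModuleUserFromUserName APIs.

```c
#include "redismodule.h"

/* Hypothetical handler: refuse to write argv[1] unless the current user's ACL
 * selectors grant insert/update access to that key. */
int GuardedSet_Command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 3) return RedisModule_WrongArity(ctx);

    RedisModuleString *username = RedisModule_GetCurrentUserName(ctx);
    RedisModuleUser *user = RedisModule_GetModuleUserFromUserName(username);
    RedisModule_FreeString(ctx, username);
    if (!user) return RedisModule_ReplyWithError(ctx, "ERR unknown user");

    int allowed = RedisModule_ACLCheckKeyPermissions(user, argv[1],
        REDISMODULE_CMD_KEY_UPDATE | REDISMODULE_CMD_KEY_INSERT) == REDISMODULE_OK;
    RedisModule_FreeModuleUser(user);
    if (!allowed)
        return RedisModule_ReplyWithError(ctx, "NOPERM this user has no access to the key");

    /* ... perform the actual write here ... */
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}
```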
Co-authored-by: Oran Agra <oran@redislabs.com>
Adding command tips (see https://redis.io/topics/command-tips) to commands.
Breaking changes:
1. Removed the "random" and "sort_for_script" flags. They are now command tips.
(this isn't affecting redis behavior since #9812, but could affect some client applications
that rely on COMMAND command flags)
Summary of changes:
1. add BLOCKING flag (new flag) for all commands that could block. The ACL category with
the same name is now implicit.
2. move RANDOM flag to a `nondeterministic_output` tip
3. move SORT_FOR_SCRIPT flag to `nondeterministic_output_order` tip
4. add REQUEST_POLICY and RESPONSE_POLICY where appropriate as documented in the tips
5. deprecate (ignore) the `random` flag for RM_CreateCommand
Other notes:
1. Proxies need to send `RANDOMKEY` to all shards and then select one key randomly.
The other option is to pick a random shard and transfer `RANDOMKEY` to it, but that scheme
fails if this specific shard is empty
2. Remove CMD_RANDOM from `XACK` (i.e. XACK does not have RANDOM_OUTPUT)
It was added in 9e4fb96ca12476b1c7468b143efca86b478bfb4a, I guess by mistake.
Also from `(P)EXPIRETIME` (new command, was flagged "random" by mistake).
3. Add `nondeterministic_output` to `OBJECT ENCODING` (for the same reason `XTRIM` has it:
the reply may differ depending on the internal representation in memory)
4. RANDOM on `HGETALL` was wrong (it was there due to a limitation of the old script sorting logic); now
it's `nondeterministic_output_order`
5. Unrelated: Hide CMD_PROTECTED from COMMAND
Some modules might perform a long-running logic in different stages of Redis lifetime, for example:
* command execution
* RDB loading
* thread safe context
During this long-running logic Redis is not responsive.
This PR offers
1. An API to process events while a busy command is running (`RM_Yield`; see the sketch after this list)
2. A new flag (`ALLOW_BUSY`) to mark the commands that should be handled during busy
jobs which can also be used by modules (`allow-busy`)
3. In slow commands and thread safe contexts, this flag will start rejecting commands with -BUSY only
after `busy-reply-threshold`
4. During loading (`rdb_load` callback), it'll process events right away (not wait for `busy-reply-threshold`),
but either way, the processing is throttled to the server hz rate.
5. Allow modules to Yield to redis background tasks, but not to client commands
* rename `script-time-limit` to `busy-reply-threshold` (an alias to the pre-7.0 `lua-time-limit`)
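A hedged sketch of the `RM_Yield` usage pattern (the command and loop below are hypothetical): a long-running module command periodically yields so Redis can process events and, with the CLIENTS flag, serve allow-busy commands.

```c
#include "redismodule.h"

/* Hypothetical CPU-heavy command that yields every million iterations. */
int SlowSum_Command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    long long sum = 0;
    for (long long i = 0; i < 100000000; i++) {
        sum += i;
        if (i % 1000000 == 0) {
            /* Let Redis handle events; clients only see the busy reply after
             * busy-reply-threshold is reached. */
            RedisModule_Yield(ctx, REDISMODULE_YIELD_FLAG_CLIENTS,
                              "Still crunching numbers...");
        }
    }
    return RedisModule_ReplyWithLongLong(ctx, sum);
}
```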
Co-authored-by: Oran Agra <oran@redislabs.com>
The new ACL key based permissions in #9974 require the key-specs (#8324) to have more
explicit flags rather than just READ and WRITE. See discussion in #10040
This PR defines two groups of flags:
One about how redis internally handles the key (mutually-exclusive).
The other is about the logical operation done from the user's point of view (3 mutually exclusive
write flags, and one read flag, all optional).
In both groups, if we can't explicitly flag something as read-only, delete-only, or
insert-only, we flag it as `RW` or `UPDATE` (for example, GET's key is flagged `RO` plus `ACCESS`,
while a generic write falls back to `RW` plus `UPDATE`).
Here's the definition from the code:
```
/* Key-spec flags *
* -------------- */
/* The following refer what the command actually does with the value or metadata
* of the key, and not necessarily the user data or how it affects it.
 * Each key-spec must have exactly one of these. Any operation that's not
 * distinctly deletion, overwrite or read-only would be marked as RW. */
#define CMD_KEY_RO (1ULL<<0) /* Read-Only - Reads the value of the key, but
 * doesn't necessarily return it. */
#define CMD_KEY_RW (1ULL<<1) /* Read-Write - Modifies the data stored in the
* value of the key or its metadata. */
#define CMD_KEY_OW (1ULL<<2) /* Overwrite - Overwrites the data stored in
* the value of the key. */
#define CMD_KEY_RM (1ULL<<3) /* Deletes the key. */
/* The following refer to user data inside the value of the key, not the metadata
 * like LRU, type, cardinality. It refers to the logical operation on the user's
 * data (actual input strings / TTL) being used / returned / copied / changed.
 * It doesn't refer to modification or returning of metadata (like type, count,
 * presence of data). Any write that's not INSERT or DELETE would be an UPDATE.
 * Each key-spec may have one of the writes with or without access, or none: */
#define CMD_KEY_ACCESS (1ULL<<4) /* Returns, copies or uses the user data from
* the value of the key. */
#define CMD_KEY_UPDATE (1ULL<<5) /* Updates data to the value, new value may
* depend on the old value. */
#define CMD_KEY_INSERT (1ULL<<6) /* Adds data to the value with no chance of
 * modification or deletion of existing data. */
#define CMD_KEY_DELETE (1ULL<<7) /* Explicitly deletes some content
* from the value of the key. */
```
Unrelated changes:
- generate-command-code.py is only compatible with python3 (modified the shebang)
- generate-command-code.py prints the file name on a json parsing error
- rename `shard_channel` key-spec flag to just `channel`.
- add INCOMPLETE flag in input spec of SORT and SORT_RO
Modules can now register sockets/pipe to the Redis main thread event loop and do network operations asynchronously. Previously, modules had to maintain an event loop and another thread for asynchronous network operations.
Also, if a module is calling API functions after doing some network operations, it had to synchronize its event loop thread's access with Redis main thread by locking the GIL, causing contention on the lock. After this commit, no synchronization is needed as module can operate in Redis main thread context. So, this commit may improve the performance for some use cases.
Added three functions to the module API:
* RedisModule_EventLoopAdd(int fd, int mask, RedisModuleEventLoopFunc func, void *user_data)
* RedisModule_EventLoopDel(int fd, int mask)
* RedisModule_EventLoopAddOneShot(RedisModuleEventLoopOneShotFunc func, void *user_data) - This function can be called from other threads to trigger callback on Redis main thread. Callback will be triggered only once. If Redis main thread is sleeping, this call will wake up the Redis main thread.
Event loop callbacks are called by Redis main thread after locking the GIL. Inside callbacks, modules can operate as if they are holding the GIL.
Added REDISMODULE_EVENT_EVENTLOOP event with two subevents:
* REDISMODULE_SUBEVENT_EVENTLOOP_BEFORE_SLEEP
* REDISMODULE_SUBEVENT_EVENTLOOP_AFTER_SLEEP
These events are for modules that want to participate in the before and after sleep actions, e.g. it might be useful to implement batching: read data from the network, then write it all to a file in one go on the BEFORE_SLEEP event.
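A hedged sketch of the new event loop API (the pipe setup and handler below are hypothetical): a worker thread writes results to a pipe, and the module registers the read end so the data is handled on the Redis main thread without taking the GIL.

```c
#include "redismodule.h"
#include <unistd.h>

/* Called on the Redis main thread whenever the pipe becomes readable; module
 * API calls are safe here since we run in the main thread's context. */
static void OnPipeReadable(int fd, void *user_data, int mask) {
    REDISMODULE_NOT_USED(user_data);
    REDISMODULE_NOT_USED(mask);
    char buf[128];
    while (read(fd, buf, sizeof(buf)) > 0) {
        /* ... consume results produced by the worker thread ... */
    }
}

/* During module initialization, register the read end of a (non-blocking)
 * pipe created by the module. */
static int RegisterPipe(int pipe_read_fd) {
    return RedisModule_EventLoopAdd(pipe_read_fd, REDISMODULE_EVENTLOOP_READABLE,
                                    OnPipeReadable, NULL);
}
```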
Since `info commandstats` already shows sub-commands, we should do the same in `info latencystats`.
similarly, the LATENCY HISTOGRAM command now shows sub-commands (with their full name) when:
* asking for all commands
* asking for a specific container command
* asking for a specific sub-command
Co-authored-by: Oran Agra <oran@redislabs.com>
# Redis Functions Flags
Following the discussion on #10025 Added Functions Flags support.
The PR is divided to 2 sections:
* Add named argument support to `redis.register_function` API.
* Add support for function flags
## `redis.register_function` named argument support
The first part of the PR adds support for named argument on `redis.register_function`, example:
```
redis.register_function{
function_name='f1',
callback=function()
return 'hello'
end,
description='some desc'
}
```
The positional arguments version is also kept, which means that it is still possible to write:
```
redis.register_function('f1', function() return 'hello' end)
```
But notice that it is no longer possible to pass the optional description argument on the positional
argument version. The positional version was changed to allow passing only the mandatory arguments
(function name and callback). To pass more arguments the user must use the named argument version.
As with positional arguments, the `function_name` and `callback` are mandatory and an error will be
raised if they are missing. Also, an error will be raised if an unknown argument name is given or an
argument's type is wrong.
Tests were added to verify the new syntax.
## Functions Flags
The second part of the PR is adding functions flags support.
Flags are given to Redis when the engine calls `functionLibCreateFunction`, supported flags are:
* `no-writes` - indicates that the function performs no writes, which means that it is OK to run it on:
* read-only replica
* Using FCALL_RO
* If disk error detected
It will not be possible to run a function in those situations unless the function turns on the `no-writes` flag
* `allow-oom` - indicates that it's OK to run the function even if Redis is in OOM state. If the function does
not turn on this flag it will not be possible to run it once OOM is reached (even if the function declares `no-writes`
and even if `fcall_ro` is used). If this flag is set, any command will be allowed on OOM (even those that are
marked with CMD_DENYOOM). The assumption is that this flag is for advanced users who know its
meaning and understand what they are doing, and Redis trusts them to not increase the memory usage.
(e.g. it could be an INCR or a modification of an existing key, or a DEL command)
* `allow-stale` - indicates that it's OK to run the function on a stale replica; in this case we will also make
sure the function only performs `stale` commands and raise an error if not.
* `no-cluster` - indicates that running the function is disallowed if cluster is enabled.
Default behaviour of functions (if no flags are given):
1. Allow functions to read and write
2. Do not run functions on OOM
3. Do not run functions on stale replica
4. Allow functions on cluster
### Lua API for functions flags
On Lua engine, it is possible to give functions flags as `flags` named argument:
```
redis.register_function{function_name='f1', callback=function() return 1 end, flags={'no-writes', 'allow-oom'}, description='description'}
```
The function flags argument must be a Lua table that contains all the requested flags. The following
will result in an error:
* Unknown flag
* Wrong flag type
Default behaviour is the same as if no flags are used.
Tests were added to verify all flags functionality
## Additional changes
* mark FCALL and FCALL_RO with CMD_STALE flag (unlike EVAL), so that they can run if the function was
registered with the `allow-stale` flag.
* Verify `CMD_STALE` on `scriptCall` (`redis.call`), so it will not be possible to call commands from a script while
stale unless the command is marked with the `CMD_STALE` flag. This way, even if the function is allowed while
stale, we do not allow it to bypass the `CMD_STALE` flag of commands.
* Flags section was added to `FUNCTION LIST` command to provide the set of flags for each function:
```
> FUNCTION list withcode
1) 1) "library_name"
2) "test"
3) "engine"
4) "LUA"
5) "description"
6) (nil)
7) "functions"
8) 1) 1) "name"
2) "f1"
3) "description"
4) (nil)
5) "flags"
6) (empty array)
9) "library_code"
10) "redis.register_function{function_name='f1', callback=function() return 1 end}"
```
* Added an API to get the Redis version from within a script. The redis version can be obtained using:
1. `redis.REDIS_VERSION` - string representation of the redis version in the format of MAJOR.MINOR.PATCH
2. `redis.REDIS_VERSION_NUM` - number representation of the redis version in the format of `0x00MMmmpp`
(`MM` - major, `mm` - minor, `pp` - patch). The number version can be used to check if one version is greater or less
than another. The string version can be used to return to the user or print in logs.
This new API is provided to eval scripts and functions, and it is also possible to use it during the function loading phase.
Use `getFullCommandName` to get the full name of the command.
It can also get the full name of the subcommand, like "script|help".
Before:
```
> SCRIPT HELP
(error) NOPERM this user has no permissions to run the 'help' command or its subcommand
> ACL LOG
7) "object"
8) "help"
```
After:
```
> SCRIPT HELP
(error) NOPERM this user has no permissions to run the 'script|help' command
> ACL LOG
7) "object"
8) "script|help"
```
Fix #10094
Added a pool for temporary client objects to reuse in module operations.
By reusing temporary clients, we are avoiding expensive createClient()/freeClient()
calls and improving performance of RM_BlockClient() and RM_GetThreadSafeContext() calls.
This commit contains two optimizations:
1 - RM_BlockClient() and RM_GetThreadSafeContext() calls create temporary clients and they are freed in
RM_UnblockClient() and RM_FreeThreadSafeContext() calls respectively. Creating/destroying a client object
takes quite some time. To avoid that, added a pool of temporary clients. The pool expands when more clients are needed.
Also, added a cron function to shrink the pool and free unused clients after some time. The pool starts with zero
clients in it. It does not have a max size and can grow unbounded as needed. We will keep a minimum of 8
temporary clients in the pool once created. Keeping a small number of clients avoids client allocation costs
if temporary clients are required after some idle period.
2 - After unblocking a client (RM_UnblockClient()), one byte is written to pipe to wake up Redis main thread.
If there are many clients that will be unblocked, each operation requires one write() call which is quite expensive.
Changed code to avoid subsequent calls if possible.
There are a few more places that need temporary client objects (e.g. RM_Call()). These are now using the same
temporary client pool to make things more centralized.
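A rough, illustrative sketch of the pooling idea described above (not the upstream implementation; the type and function names are made up): clients are taken from a free list, pushed back on release, and a cron keeps only a small minimum around.

```c
#include <stdlib.h>

#define POOL_MIN_KEEP 8   /* minimum clients retained once the pool has been used */

typedef struct TempClient { struct TempClient *next; /* ... real client state ... */ } TempClient;

static TempClient *pool_head = NULL;
static int pool_size = 0;

TempClient *acquireTempClient(void) {
    if (pool_head) {                            /* reuse a pooled client */
        TempClient *c = pool_head;
        pool_head = c->next;
        pool_size--;
        return c;
    }
    return calloc(1, sizeof(TempClient));      /* pool empty: fall back to allocation */
}

void releaseTempClient(TempClient *c) {
    c->next = pool_head;                        /* push back onto the free list */
    pool_head = c;
    pool_size++;
}

void cronShrinkTempClientPool(void) {
    while (pool_size > POOL_MIN_KEEP) {         /* periodically free clients above the minimum */
        TempClient *c = pool_head;
        pool_head = c->next;
        pool_size--;
        free(c);
    }
}
```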
Syntax:
`COMMAND DOCS [<command name> ...]`
Background:
Apparently old versions of hiredis (and thus also redis-cli) can't
support more than 7 levels of multi-bulk nesting.
The solution is to move all the doc related metadata from COMMAND to a
new COMMAND DOCS sub-command.
The new DOCS sub-command returns a map of commands (not an array like in COMMAND),
and the same goes for the `subcommands` field inside it (also a map).
Besides that, the remaining new fields of COMMAND (hints, key-specs, and
sub-commands) are placed in the outer array rather than a nested map.
This was done mainly for consistency with the old format.
Other changes:
---
* Allow COMMAND INFO with no arguments, which returns all commands, so that we can some day deprecate
the plain COMMAND (no args)
* Reduce the amount of deferred replies from both COMMAND and COMMAND
DOCS, especially in the inner loops, since these create many small
reply objects, which lead to many small write syscalls and many small
TCP packets.
To make this easier, when populating the command table, we count the
history, args, and hints so we later know their size in advance.
Additionally, the movablekeys flag was moved into the flags register.
* Update generate-commands-json.py to take the data from both COMMAND and COMMAND DOCS; it
now executes redis-cli directly, instead of taking input from stdin.
* Sub-commands in both COMMAND (and COMMAND INFO), and also COMMAND DOCS,
show their full name, i.e. CONFIG GET will be shown as `config|get` rather than just `get`.
This will be visible both when asking for `COMMAND INFO config` and `COMMAND INFO config|get`, but is
especially important for the latter,
i.e. imagine someone doing `COMMAND INFO slowlog|get config|get` not being able to distinguish between the two
items in the array response.
The following erroneous commands will crash redis-server:
```
> get|
Error: Server closed the connection
> get|set
Error: Server closed the connection
> get|other
```
The reason is that in #9504, we use `lookupCommandBySds` to find the
container command, and it splits the command (argv[0]) on `|`.
If we input something like `get|other`, after the split `get`
becomes a valid command name, passes the `ERR unknown command`
check, and finally crashes in `addReplySubcommandSyntaxError`.
In this case we do not need to split the command name on `|`;
we can just look in the commands dict to find whether `argv[0]` is a
container command.
So this commit introduces a new function called `isContainerCommandBySds`
that returns true if a command name is a container command.
Also, with the old code there is an incorrect error message:
```
> config|get set
(error) ERR Unknown subcommand or wrong number of arguments for 'set'. Try CONFIG|GET HELP.
```
The crash was reported in #10070.
# Redis Function Libraries
This PR implements Redis Functions Libraries as described in: https://github.com/redis/redis/issues/9906.
The purpose of libraries is to provide better code sharing between functions by allowing multiple
functions to be created in a single command. Functions that were created together can safely share code with
each other without worrying about compatibility issues and versioning.
Creating a new library is done using 'FUNCTION LOAD' command (full API is described below)
This PR introduces a new struct called libraryInfo, libraryInfo holds information about a library:
* name - name of the library
* engine - engine used to create the library
* code - library code
* description - library description
* functions - the functions exposed by the library
When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo.
Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo.
As a result, the engine will create one or more functions by calling 'libraryCreateFunction'.
The new function will be added to the newly created libraryInfo. So far everything is happening
locally on the libraryInfo, so it is easy to abort the operation (in case of an error) by simply
freeing the libraryInfo. After the library info is fully constructed we start the joining phase, in
which we join the new library to the other libraries that currently exist on Redis.
The joining phase makes sure there are no function collisions and adds the library to the
librariesCtx (renamed from functionCtx). LibrariesCtx is used all around the code in the exact
same way as functionCtx was used (with respect to RDB loading, replication, ...).
The only difference is that apart from function dictionary (maps function name to functionInfo
object), the librariesCtx contains also a libraries dictionary that maps library name to libraryInfo object.
## New API
### FUNCTION LOAD
`FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
Create a new library with the given parameters:
* ENGINE - Engine name to use to create the library.
* LIBRARY NAME - The new library name.
* REPLACE - If the library already exists, replace it.
* DESCRIPTION - Library description.
* CODE - Library code.
Return "OK" on success, or an error in the following cases:
* Library name already taken and REPLACE was not used
* Name collision with another existing library (even if REPLACE was used)
* Library registration failed in the engine (usually a compilation error)
## Changed API
### FUNCTION LIST
`FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
The command was modified to also allow getting library code (so the `FUNCTION INFO` command is no longer
needed and was removed). In addition the command gets an optional argument, `LIBRARYNAME`, which allows you to
only get libraries that match the given `LIBRARYNAME` pattern. By default, it returns all libraries.
### INFO MEMORY
Added number of libraries to `INFO MEMORY`
### Commands flags
`DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands
as commands that add new data to the dataset (functions are data) and so we want to disallow
running them on OOM.
## Removed API
* FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
* FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899
## Lua engine changes
When the Lua engine gets the code given in the `FUNCTION LOAD` command, it immediately runs it; we call
this run the loading run. The loading run is not a usual script run: it is not possible to invoke any
Redis command from within the load run.
Instead there is a new API provided by the `library` object. The new APIs are:
* `redis.log` - behaves the same as the regular `redis.log`
* `redis.register_function` - registers a new function to the library
The loading run's purpose is to register functions using the new `redis.register_function` API.
Any attempt to use any other API will result in an error. In addition, the load run has a time
limit of 500ms; an error is raised on timeout and the entire operation is aborted.
### `redis.register_function`
`redis.register_function(<function_name>, <callback>, [<description>])`
This new API allows users to register a new function that will be linked to the newly created library.
This API can only be called during the load run (see definition above). Any attempt to use it outside
of the load run will result in an error.
The parameters passed to the API are:
* function_name - Function name (must be a Lua string)
* callback - Lua function object that will be called when the function is invoked using fcall/fcall_ro
* description - Function description, optional (must be a Lua string).
### Example
The following example creates a library called `lib` with 2 functions, `f1` and `f2`, returning 1 and 2 respectively:
```
local function f1(keys, args)
return 1
end
local function f2(keys, args)
return 2
end
redis.register_function('f1', f1)
redis.register_function('f2', f2)
```
Notice: Unlike `eval`, functions inside a library get KEYS and ARGV as arguments to the
functions and not as globals.
### Technical Details
On the load run we only want the user to be able to call a whitelist of APIs. This way, in
the future, if new APIs are added, they will not be available to the load run
unless specifically added to this whitelist. We put the whitelist on the `library` object and
make sure the `library` object is only available to the load run by using the [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API. This API allows us to set
the `globals` of a function (and all the functions it creates). Before starting the load run we
create a fresh new Lua table (call it `g`) that only contains the `library` API (we make sure
to set global protection on this table just like the general global protection that already exists
today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv)
to set `g` as the global table of the load run. After the load run finishes we update `g`'s
metatable and set its `__index` and `__newindex` functions to be `_G` (Lua's default globals);
we also pop out the `library` object as we do not need it anymore.
This way, any function that was created on the load run (and will be invoked using `fcall`) will
see the default globals as it expects to see them and will not have the `library` API anymore.
An important outcome of this new approach is that now we can achieve a distinct global table
for each library (it is not yet like that, but it is very easy to achieve now). In the future we can
decide to remove global protection because globals on different libraries will not collide, or we
can choose to give a different API to different libraries based on some configuration or input.
Notice that this technique was meant to prevent errors and was not meant to prevent a malicious
user from exploiting it. For example, the load run can still save the `library` object in some local
variable and then use it in an `fcall` context. To prevent such malicious use, the C code also makes
sure it is running in the right context and raises an error if not.
Callers of redisFork() log `strerror(errno)` on failure.
`errno` is not set when there is already a child process, so the
current value of errno, set before the `redisFork()` call, gets printed.
Set errno to EEXIST on this failure to provide a more meaningful error message.
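An illustrative sketch of the fix (not the exact upstream code): fail early with errno set when a child already exists, so strerror(errno) in the caller's log is meaningful.

```c
#include <errno.h>

/* Sketch of the early-exit path: if a child process already exists, fail and
 * set errno explicitly so callers that log strerror(errno) report a
 * meaningful reason instead of a stale value. */
static int fork_for_purpose(int has_active_child) {
    if (has_active_child) {
        errno = EEXIST;  /* "File exists" reads naturally as "a child already exists" */
        return -1;
    }
    /* ... fork() and child bookkeeping would go here ... */
    return 0;
}
```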
# Short description
The Redis extended latency stats track per command latencies and enables:
- exporting the per-command percentile distribution via the `INFO LATENCYSTATS` command.
**( percentile distribution is not mergeable between cluster nodes ).**
- exporting the per-command cumulative latency distributions via the `LATENCY HISTOGRAM` command.
Using the cumulative distribution of latencies we can merge several stats from different cluster nodes
to calculate aggregate metrics.
By default, the extended latency monitoring is enabled since the overhead of keeping track of the
command latency is very small.
If you don't want to track extended latency metrics, you can easily disable it at runtime using the command:
- `CONFIG SET latency-tracking no`
By default, the exported latency percentiles are the p50, p99, and p999.
You can alter them at runtime using the command:
- `CONFIG SET latency-tracking-info-percentiles "0.0 50.0 100.0"`
## Some details:
- The total size per histogram should sit around 40 KiB. We only allocate those 40KiB when a command
was called for the first time.
- With regards to the WRITE overhead: there is no measurable overhead on the achievable
ops/sec or the full latency spectrum on the client, as measured with redis-benchmark comparing unstable
vs. this branch.
- We track from 1 nanosecond to 1 second ( everything above 1 second is considered +Inf )
## `INFO LATENCYSTATS` exposition format
- Format: `latency_percentiles_usec_<CMDNAME>:p0=XX,p50....`
## `LATENCY HISTOGRAM [command ...]` exposition format
Return a cumulative distribution of latencies in the format of a histogram for the specified command names.
The histogram is composed of a map of time buckets:
- Each representing a latency range, between 1 nanosecond and roughly 1 second.
- Each bucket covers twice the previous bucket's range.
- Empty buckets are not printed.
- Everything above 1 sec is considered +Inf.
- At max there will be log2(1000000000)=30 buckets
We reply a map for each command in the format:
`<command name> : { `calls`: <total command calls> , `histogram` : { <bucket 1> : latency , < bucket 2> : latency, ... } }`
Co-authored-by: Oran Agra <oran@redislabs.com>
This makes redis-cli --replica much faster and reduces COW/fork risks on server side.
This commit also improves the RDB filtering via REPLCONF rdb-filter-only to support no "include" specifiers at all.
Creating fork (or even a foreground SAVE) during a transaction breaks the atomicity of the transaction.
In addition to that, it could mess up the propagated transaction to the AOF file.
This change blocks SAVE, PSYNC, SYNC and SHUTDOWN from being executed inside MULTI-EXEC.
It does that by adding a command flag, so that modules can flag their commands with that flag too.
Besides that, it changes BGSAVE, BGREWRITEAOF, and CONFIG SET appendonly to turn on the
scheduled flag instead of forking right away.
Other changes:
* expose `protected`, `no-async-loading`, and `no_multi` flags in COMMAND command
* add a test to validate propagation of FLUSHALL inside a transaction.
* add a test to validate how CONFIG SET that errors reacts in a transaction
Co-authored-by: Oran Agra <oran@redislabs.com>
Implement Multi-Part AOF mechanism to avoid overheads during AOFRW.
Introducing a folder with multiple AOF files tracked by a manifest file.
The main issues with the original AOFRW mechanism are:
* buffering of commands that are processed during rewrite (consuming a lot of RAM)
* freezes of the main process when the AOFRW completes to drain the remaining part of the buffer and fsync it.
* double disk IO for the data that arrives during AOFRW (had to be written to both the old and new AOF files)
The main modifications of this PR:
1. Remove the AOF rewrite buffer and related code.
2. Divide the AOF into multiple files. They are classified into two types: one is the `BASE` type,
which represents the full amount of data (maybe AOF or RDB format) after each AOFRW; there is
at most one `BASE` file. The second is the `INCR` type, of which there may be more than one; they represent the
incremental commands since the last AOFRW.
3. Use an AOF manifest file to record and manage the AOF files mentioned above.
4. The original configuration of `appendfilename` will be the base part of the new file name, for example:
`appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`
5. Add manifest-related TCL tests, and modified some existing tests that depend on the `appendfilename`
6. Remove the `aof_rewrite_buffer_length` field in info.
7. Add `aof-disable-auto-gc` configuration. By default we automatically delete HISTORY type AOFs;
this option gives users the opportunity to preserve the history AOFs (just for testing use for now).
8. Add AOFRW limiting measure. When the AOFRW failures reaches the threshold (3 times now),
we will delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it will be
delayed by 2 minutes. The next is 4, 8, 16, the maximum delay is 60 minutes (1 hour). During the limit
period, we can still use the 'bgrewriteaof' command to execute AOFRW immediately.
9. Support upgrading (loading) data from old versions of redis.
10. Add `appenddirname` configuration, as the directory name of the append only files. All AOF files and
manifest file will be placed in this directory.
11. Only the last AOF file (BASE or INCR) can be truncated. Otherwise redis will exit even if
`aof-load-truncated` is enabled.
Co-authored-by: Oran Agra <oran@redislabs.com>
This commit implements a sharded pubsub implementation based off of shard channels.
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
To avoid data loss, this commit adds a grace period for lagging replicas to
catch up the replication offset.
Done:
* Wait for replicas when shutdown is triggered by SIGTERM and SIGINT.
* Wait for replicas when shutdown is triggered by the SHUTDOWN command. A new
blocked client type BLOCKED_SHUTDOWN is introduced, allowing multiple clients
to call SHUTDOWN in parallel.
Note that they don't expect a response unless an error happens and shutdown is aborted.
* Log warning for each replica lagging behind when finishing shutdown.
* CLIENT_PAUSE_WRITE while waiting for replicas.
* Configurable grace period 'shutdown-timeout' in seconds (default 10).
* New flags for the SHUTDOWN command:
- NOW disables the grace period for lagging replicas.
- FORCE ignores errors writing the RDB or AOF files which would normally
prevent a shutdown.
- ABORT cancels ongoing shutdown. Can't be combined with other flags.
* New field in the output of the INFO command: 'shutdown_in_milliseconds'. The
value is the remaining maximum time to wait for lagging replicas before
finishing the shutdown. This field is present in the Server section **only**
during shutdown.
Not directly related:
* When shutting down, if there is an AOF saving child, it is killed **even** if AOF
is disabled. This can happen if BGREWRITEAOF is used when AOF is off.
* Client pause now has end time and type (WRITE or ALL) per purpose. The
different pause purposes are *CLIENT PAUSE command*, *failover* and
*shutdown*. If clients are unpaused for one purpose, it doesn't affect client
pause for other purposes. For example, the CLIENT UNPAUSE command doesn't
affect client pause initiated by the failover or shutdown procedures. A completed
failover or a failed shutdown doesn't unpause clients paused by the CLIENT
PAUSE command.
Notes:
* DEBUG RESTART doesn't wait for replicas.
* We already have a warning logged when a replica disconnects. This means that
if any replica connection is lost during the shutdown, it is either logged as
disconnected or as lagging at the time of exit.
Co-authored-by: Oran Agra <oran@redislabs.com>
This is needed in order to ease the deployment of functions for ephemeral cases, where a user
needs to spin up a server with functions pre-loaded.
#### Details:
* Added `--functions-rdb` option to _redis-cli_.
* A functions-only rdb via `REPLCONF rdb-filter-only functions`. This is a placeholder for a space
separated inclusion filter for the RDB. In the future this can be `REPLCONF rdb-filter-only
"functions db:3 key-pattern:user*"` and a complementing `rdb-filter-exclude` `REPLCONF`
can also be added.
* Handle a "slave requirements" specification in the RDB saving code so we can use the same RDB
when different slaves express the same requirements (like functions-only) and not share the
RDB when their requirements differ. This is currently just a flags `int`, but can be extended to
a more complex structure with various filter fields.
* Make sure to support filters only in diskless replication mode (so as not to override the persistence file);
we do that by forcing diskless replication (even if disabled by config).
other changes:
* some refactoring in rdb.c (extract portion of a big function to a sub-function)
* rdb_key_save_delay used in AOFRW too
* sendChildInfo takes the number of updated keys (incremental, rather than absolute)
Co-authored-by: Oran Agra <oran@redislabs.com>
In the signal handler we now enable the server.shutdown_asap flag instead of just calling `exit()`,
and then catch it in whileBlockedCron(), where we prepare for shutdown correctly.
This is a more organized and safe termination; the old approach was missing these, for example:
1. removal of the pidfile
2. shutdown event to modules
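A generic, illustrative sketch of the pattern (not the actual Redis code): the handler only raises a flag, and a periodic cron later performs the orderly shutdown steps that a bare exit() would skip.

```c
#include <signal.h>
#include <stdlib.h>

static volatile sig_atomic_t shutdown_asap = 0;

/* Signal handler: do almost nothing here, just raise a flag. */
static void sigShutdownHandler(int sig) {
    (void) sig;
    shutdown_asap = 1;
}

/* Called periodically (e.g. from a cron while blocked on loading): perform
 * the orderly shutdown steps a bare exit() would skip, such as removing the
 * pidfile and firing the shutdown event to modules. */
static void cronCheckShutdown(void) {
    if (shutdown_asap) {
        /* ... remove pidfile, notify modules, flush/close files ... */
        _Exit(0);
    }
}
```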
Following #9656, this script generates a "commands.json" file from the output
of the new COMMAND. The output of this script is used in redis/redis-doc#1714
and by redis/redis-io#259. This also converts a couple of rogue dashes (in
'key-specs' and 'multiple-token' flags) to underscores (continues #9959).
The mess:
Some parts use alsoPropagate for late propagation, others use an immediate one (propagate()),
causing edge cases, ugly/hacky code, and a tendency for bugs.
The basic idea is that all commands are propagated via alsoPropagate (i.e. added to a list) and the
top-most call() is responsible for going over that list and actually propagating them (and wrapping
them in MULTI/EXEC if there's more than one command). This is done in the new function,
propagatePendingCommands.
Callers to propagatePendingCommands:
1. top-most call() (we want all nested call()s to add to the also_propagate array and just the top-most
one to propagate them) - via `afterCommand`
2. handleClientsBlockedOnKeys: it is out of call() context and it may propagate stuff - via `afterCommand`.
3. handleClientsBlockedOnKeys edge case: if the looked-up key is already expired, we will propagate the
expire but will not unblock any client so `afterCommand` isn't called. in that case, we have to propagate
the deletion explicitly.
4. cron stuff: active-expire and eviction may also propagate stuff
5. modules: the module API allows to propagate stuff from just about anywhere (timers, keyspace notifications,
threads). I could have tried to catch all the out-of-call-context places but it seemed easier to handle it in one
place: when we free the context. in the spirit of what was done in call(), only the top-most freeing of a module
context may cause propagation.
6. modules: when using a thread-safe ctx it's not clear when/if the ctx will be freed. we do know that the module
must lock the GIL before calling RM_Replicate/RM_Call so we propagate the pending commands when
releasing the GIL.
A "known limitation", which was actually a bug, was fixed because of this commit (see propagate.tcl):
When using a mix of RM_Call with `!` and RM_Replicate, the command would propagate out-of-order:
first all the commands from RM_Call, and then the ones from RM_Replicate
Another thing worth mentioning is that if, in the past, a client would issue a MULTI/EXEC with just one
write command the server would blindly propagate the MULTI/EXEC too, even though it's redundant.
not anymore.
This commit renames propagate() to propagateNow() in order to cause conflicts in pending PRs.
propagatePendingCommands is the only caller of propagateNow, which is now a static, internal helper function.
Optimizations:
1. alsoPropagate will not add stuff to also_propagate if there's no AOF and no replicas
2. alsoPropagate reallocs also_propagate exponentially, to save calls to memmove
Bugfixes:
1. CONFIG SET can create evictions, sending notifications which can cause dirty++ with modules;
we need to prevent it from propagating to AOF/replicas
2. We need to set current_client in RM_Call. buggy scenario:
- CONFIG SET maxmemory, eviction notifications, module hook calls RM_Call
- assertion in lookupKey crashes, because current_client has CONFIG SET, which isn't CMD_WRITE
3. minor: in eviction, call propagateDeletion after notification, like active-expire and all commands
(we always send a notification before propagating the command)
## background
Till now CONFIG SET was blocked during loading.
(In the not so distant past, GET was disallowed too)
We recently (not released yet) added an async-loading mode, see #9323,
and during that time it'll serve CONFIG SET and any other command.
And now we realized (#9770) that some configs, and commands are dangerous
during async-loading.
## changes
* Allow most CONFIG SET during loading (both on async-loading and normal loading)
* Allow CONFIG REWRITE and CONFIG RESETSTAT during loading
* Block a few config during loading (`appendonly`, `repl-diskless-load`, and `dir`)
* Block a few commands during loading (list below)
## the blocked commands:
* SAVE - obviously we don't wanna start a foreground save during loading 8-)
* BGSAVE - we don't mind to schedule one, but we don't wanna fork now
* BGREWRITEAOF - we don't mind to schedule one, but we don't wanna fork now
* MODULE - we obviously don't wanna unload a module during replication / rdb loading
(MODULE HELP and MODULE LIST are not blocked)
* SYNC / PSYNC - we're in the middle of RDB loading from master, must not allow sync
requests now.
* REPLICAOF / SLAVEOF - we're in the middle of replicating, maybe it makes sense to let
the user abort it, but they couldn't do that so far; I don't wanna take any risk of bugs due to odd state.
* CLUSTER - only allow [HELP, SLOTS, NODES, INFO, MYID, LINKS, KEYSLOT, COUNTKEYSINSLOT,
GETKEYSINSLOT, RESET, REPLICAS, COUNT_FAILURE_REPORTS], for others, preserve the status quo
## other fixes
* processEventsWhileBlocked had an issue when being nested. This could happen with a busy script
during async loading (new), but also with a busy script during AOF loading (old). This led to a crash in
the scenario described in #6988.
# Background
The main goal of this PR is to remove relevant logics on Lua script verbatim replication,
only keeping effects replication logic, which has been set as default since Redis 5.0.
As a result, Lua in Redis 7.0 would be acting the same as Redis 6.0 with default
configuration from users' point of view.
There are lots of reasons to remove verbatim replication.
Antirez has listed some of the benefits in Issue #5292:
>1. No longer need to explain to users side effects into scripts.
They can do whatever they want.
>2. No need for a cache about scripts that we sent or not to the slaves.
>3. No need to sort the output of certain commands inside scripts
(SMEMBERS and others): this both simplifies and gains speed.
>4. No need to store scripts inside the RDB file in order to startup correctly.
>5. No problems about evicting keys during the script execution.
When looking back at Redis 5.0, antirez and the core team decided to set the config
`lua-replicate-commands yes` by default instead of removing verbatim replication
directly, in case some bad situations happened. Three years later, before Redis 7.0,
it's time to remove it formally.
# Changes
- configuration for lua-replicate-commands removed
- created config file stub for backward compatibility
- Replication script cache removed
- this is useless under script effects replication
- relevant statistics also removed
- script persistence in RDB files is also removed
- Propagation of SCRIPT LOAD and SCRIPT FLUSH to replica / AOF removed
- Deterministic execution logic in scripts removed (i.e. don't run write commands
after random ones, and sorting output of commands with random order)
- the flags indicating which commands have non-deterministic results are kept as hints to clients.
- `redis.replicate_commands()` & `redis.set_repl()` changed
- now `redis.replicate_commands()` does nothing and returns 1
- ...and then `redis.set_repl()` can be issued before `redis.replicate_commands()` now
- Relevant TCL cases adjusted
- DEBUG lua-always-replicate-commands removed
# Other changes
- Fix a recent bug comparing CLIENT_ID_AOF to original_client->flags instead of id. (introduced in #9780)
Co-authored-by: Oran Agra <oran@redislabs.com>
Recent PRs have introduced some failures; this commit
tries to fix these CI failures. Here are the changes:
1. Enable debug-command in sentinel test.
```
Master reboot in very short time: ERR DEBUG command not allowed. If the
enable-debug-command option is set to "local", you can run it from a
local connection, otherwise you need to set this option in the
configuration file, and then restart the server.
```
2. Enable protected-config in sentinel test.
```
SDOWN is triggered by misconfigured instance replying with errors: ERR
CONFIG SET failed (possibly related to argument 'dir') - can't set
protected config
```
3. Enable debug-command in cluster test.
```
Verify slaves consistency: ERR DEBUG command not allowed. If the
enable-debug-command option is set to "local", you can run it from a
local connection, otherwise you need to set this option in the
configuration file, and then restart the server.
```
4. quicklist fill should be signed int.
The reason for the modification is to eliminate the warning.
Modify `int fill: QL_FILL_BITS` to `signed int fill: QL_FILL_BITS`
The first three were introduced at #9920 (same issue).
And the last one was introduced at #9962.
Block sensitive configs and commands by default.
* `enable-protected-configs` - block modification of configs with the new `PROTECTED_CONFIG` flag.
Currently we add this flag to `dbfilename`, and `dir` configs,
all of which are non-mutable configs that can set a file redis will write to.
* `enable-debug-command` - block the `DEBUG` command
* `enable-module-command` - block the `MODULE` command
These have a default value set to `no`, so that these features are not
exposed by default to client connections, and can only be set by modifying the config file.
Users can change each of these to either `yes` (allow all access), or `local` (allow access from
local TCP connections and unix domain connections)
Note that this is a **breaking change** (specifically the part about MODULE command being disabled by default).
I.e. we don't consider the DEBUG command being blocked as an issue (people shouldn't have been using it),
and the few configs we protected are unlikely to have been set at runtime anyway.
On the other hand, it's reasonable to assume that some users who use modules load them from the config file anyway.
Note that's the whole point of this PR, for redis to be more secure by default and reduce the attack surface on
innocent users, so secure defaults will necessarily mean a breaking change.
Some languages can build a json-like object by parsing textual json,
but it works poorly when attributes contain hyphens.
example in JS:
```
let j = JSON.parse(json)
j['key-name'] <- works
j.key-name <= illegal syntax
```
Introduce memory management on cluster link buffers:
* Introduce a new `cluster-link-sendbuf-limit` config that caps memory usage of cluster bus link send buffers.
* Introduce a new `CLUSTER LINKS` command that displays current TCP links to/from peers.
* Introduce a new `mem_cluster_links` field under `INFO` command output, which displays the overall memory usage by all current cluster links.
* Introduce a new `total_cluster_links_buffer_limit_exceeded` field under `CLUSTER INFO` command output, which displays the accumulated count of cluster links freed due to `cluster-link-sendbuf-limit`.