/*
 * Copyright (c) 2009-2012, Redis Ltd.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 *   * Redistributions of source code must retain the above copyright notice,
 *     this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of Redis nor the names of its contributors may be used
 *     to endorse or promote products derived from this software without
 *     specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

#include "server.h"
#include "cluster.h"
#include "latency.h"
#include "script.h"
#include "functions.h"
#include "io_threads.h"

#include <signal.h>
#include <ctype.h>

/*-----------------------------------------------------------------------------
 * C-level DB API
 *----------------------------------------------------------------------------*/

/* Flags for expireIfNeeded */
#define EXPIRE_FORCE_DELETE_EXPIRED 1
#define EXPIRE_AVOID_DELETE_EXPIRED 2

/* Return values for expireIfNeeded */
typedef enum {
    KEY_VALID = 0, /* Could be volatile and not yet expired, non-volatile, or even non-existing key. */
    KEY_EXPIRED,   /* Logically expired but not yet deleted. */
    KEY_DELETED    /* The key was deleted now. */
} keyStatus;
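
/* Usage note (sketch): most call sites only need to distinguish "still usable"
 * from "gone", e.g.
 *
 *   if (expireIfNeeded(db, key, 0) != KEY_VALID) { ... treat the key as missing ... }
 *
 * The KEY_EXPIRED vs. KEY_DELETED distinction matters mainly on replicas, where
 * a key can be reported as expired while its actual deletion is driven by the
 * primary. */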

keyStatus expireIfNeededWithDictIndex(serverDb *db, robj *key, int flags, int dict_index);
keyStatus expireIfNeeded(serverDb *db, robj *key, int flags);
int keyIsExpiredWithDictIndex(serverDb *db, robj *key, int dict_index);
int keyIsExpired(serverDb *db, robj *key);
static void dbSetValue(serverDb *db, robj *key, robj *val, int overwrite, dictEntry *de);
static int getKVStoreIndexForKey(sds key);
dictEntry *dbFindExpiresWithDictIndex(serverDb *db, void *key, int dict_index);

/* Update LFU when an object is accessed.
 * Firstly, decrement the counter if the decrement time is reached.
 * Then logarithmically increment the counter, and update the access time. */
void updateLFU(robj *val) {
    unsigned long counter = LFUDecrAndReturn(val);
    counter = LFULogIncr(counter);
    val->lru = (LFUGetTimeInMinutes() << 8) | counter;
}
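
/* Decoding sketch: the packing above keeps the 8-bit logarithmic counter in the
 * low byte of val->lru and the LFU minutes clock in the bits above it, so
 * readers (e.g. LFUDecrAndReturn()) recover the fields roughly as:
 *
 *   unsigned long lfu_counter = val->lru & 255;
 *   unsigned long access_time_minutes = val->lru >> 8;
 */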

/* Lookup a key for read or write operations, or return NULL if the key is not
 * found in the specified DB. This function implements the functionality of
 * lookupKeyRead(), lookupKeyWrite() and their ...WithFlags() variants.
 *
 * Side-effects of calling this function:
 *
 * 1. A key gets expired if it reached its TTL.
 * 2. The key's last access time is updated.
 * 3. The global keys hits/misses stats are updated (reported in INFO).
 * 4. If keyspace notifications are enabled, a "keymiss" notification is fired.
 *
 * Flags change the behavior of this function:
 *
 *  LOOKUP_NONE (or zero): No special flags are passed.
 *  LOOKUP_NOTOUCH: Don't alter the last access time of the key.
 *  LOOKUP_NONOTIFY: Don't trigger keyspace event on key miss.
 *  LOOKUP_NOSTATS: Don't increment key hits/misses counters.
 *  LOOKUP_WRITE: Prepare the key for writing (delete expired keys even on
 *                replicas, use separate keyspace stats and events (TODO)).
 *  LOOKUP_NOEXPIRE: Perform expiration check, but avoid deleting the key,
 *                   so that we don't have to propagate the deletion.
 *
 * Note: this function also returns NULL if the key is logically expired but
 * still existing, when this is a replica and the LOOKUP_WRITE flag is not set.
 * Even though key expiry is primary-driven, we can correctly report a key as
 * expired on replicas even if the primary is lagging in expiring it via DELs
 * over the replication link. */
robj *lookupKey(serverDb *db, robj *key, int flags) {
    dictEntry *de = dbFind(db, key->ptr);
    robj *val = NULL;
    if (de) {
        val = dictGetVal(de);
        /* Forcing deletion of expired keys on a replica makes the replica
         * inconsistent with the primary. We forbid it on readonly replicas, but
         * we have to allow it on writable replicas to make write commands
         * behave consistently.
         *
         * It's possible that the WRITE flag is set even during a readonly
         * command, since the command may trigger events that cause modules to
         * perform additional writes. */
        int is_ro_replica = server.primary_host && server.repl_replica_ro;
        int expire_flags = 0;
        if (flags & LOOKUP_WRITE && !is_ro_replica) expire_flags |= EXPIRE_FORCE_DELETE_EXPIRED;
        if (flags & LOOKUP_NOEXPIRE) expire_flags |= EXPIRE_AVOID_DELETE_EXPIRED;
        if (expireIfNeeded(db, key, expire_flags) != KEY_VALID) {
            /* The key is no longer valid. */
            val = NULL;
        }
    }

    if (val) {
        /* Update the access time for the ageing algorithm.
         * Don't do it if we have a saving child, as this will trigger
         * a copy on write madness. */
        /* Respect the client's no-touch setting (CLIENT NO-TOUCH), except for
         * the TOUCH command itself. */
        if (server.current_client && server.current_client->flag.no_touch &&
            server.current_client->cmd->proc != touchCommand)
            flags |= LOOKUP_NOTOUCH;
        if (!hasActiveChildProcess() && !(flags & LOOKUP_NOTOUCH)) {
            /* Shared objects can't carry per-key LRU/LFU information, so when
             * the eviction policy needs that information we unshare the value
             * first. */
            if (!canUseSharedObject() && val->refcount == OBJ_SHARED_REFCOUNT) {
                val = dupStringObject(val);
                kvstoreDictSetVal(db->keys, getKVStoreIndexForKey(key->ptr), de, val);
            }
            if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
                updateLFU(val);
            } else {
                val->lru = LRU_CLOCK();
            }
        }

        if (!(flags & (LOOKUP_NOSTATS | LOOKUP_WRITE))) server.stat_keyspace_hits++;
        /* TODO: Use separate hits stats for WRITE */
    } else {
        if (!(flags & (LOOKUP_NONOTIFY | LOOKUP_WRITE))) notifyKeyspaceEvent(NOTIFY_KEY_MISS, "keymiss", key, db->id);
        if (!(flags & (LOOKUP_NOSTATS | LOOKUP_WRITE))) server.stat_keyspace_misses++;
        /* TODO: Use separate misses stats and notify event for WRITE */
    }

    return val;
}

/* Lookup a key for read operations, or return NULL if the key is not found
 * in the specified DB.
 *
 * This API should not be used when we write to the key after obtaining
 * the object linked to the key, but only for read-only operations.
 *
 * This function is equivalent to lookupKey(). The point of using this function
 * rather than lookupKey() directly is to indicate that the purpose is to read
 * the key. */
robj *lookupKeyReadWithFlags(serverDb *db, robj *key, int flags) {
    serverAssert(!(flags & LOOKUP_WRITE));
    return lookupKey(db, key, flags);
}
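
/* Flag-combination sketch: introspection paths that must not disturb LRU/LFU
 * state or fire "keymiss" notifications (OBJECT-style lookups) typically use
 * something like:
 *
 *   robj *o = lookupKeyReadWithFlags(c->db, key, LOOKUP_NOTOUCH | LOOKUP_NONOTIFY);
 */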

/* Like lookupKeyReadWithFlags(), but does not use any flag, which is the
 * common case. */
robj *lookupKeyRead(serverDb *db, robj *key) {
    return lookupKeyReadWithFlags(db, key, LOOKUP_NONE);
}

/* Lookup a key for write operations, and as a side effect, if needed, expires
 * the key if its TTL is reached. It's equivalent to lookupKey() with the
 * LOOKUP_WRITE flag added.
 *
 * Returns the linked value object if the key exists or NULL if the key
 * does not exist in the specified DB. */
robj *lookupKeyWriteWithFlags(serverDb *db, robj *key, int flags) {
    return lookupKey(db, key, flags | LOOKUP_WRITE);
}

robj *lookupKeyWrite(serverDb *db, robj *key) {
    return lookupKeyWriteWithFlags(db, key, LOOKUP_NONE);
}

robj *lookupKeyReadOrReply(client *c, robj *key, robj *reply) {
    robj *o = lookupKeyRead(c->db, key);
    if (!o) addReplyOrErrorObject(c, reply);
    return o;
}

robj *lookupKeyWriteOrReply(client *c, robj *key, robj *reply) {
    robj *o = lookupKeyWrite(c->db, key);
    if (!o) addReplyOrErrorObject(c, reply);
    return o;
}
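
/* Usage sketch (illustrative, mirroring the common pattern used by command
 * implementations elsewhere in the codebase): a read command looks the key up,
 * lets the helper send the miss reply, and bails out early, e.g.
 *
 *   robj *o = lookupKeyReadOrReply(c, c->argv[1], shared.null[c->resp]);
 *   if (o == NULL || checkType(c, o, OBJ_STRING)) return;
 */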

/* Add the key to the DB.
 *
 * A copy of `key` is stored in the kvstore; the caller must ensure that `key`
 * is properly freed.
 *
 * It's up to the caller to increment the reference counter of the value if
 * needed.
 *
 * If the update_if_existing argument is false, the program is aborted if the
 * key already exists, otherwise it falls back to dbOverwrite. */
static void dbAddInternal(serverDb *db, robj *key, robj *val, int update_if_existing) {
    dictEntry *existing;
    int dict_index = getKVStoreIndexForKey(key->ptr);
    dictEntry *de = kvstoreDictAddRaw(db->keys, dict_index, key->ptr, &existing);
    /* If the key already exists and updates are allowed, overwrite the value
     * in place. */
    if (update_if_existing && existing) {
        dbSetValue(db, key, val, 1, existing);
        return;
    }
    serverAssertWithInfo(NULL, key, de != NULL);
    /* Brand new key: initialize its LRU/LFU metadata, store the value, mark
     * the key as ready for any blocked clients, and fire the "new" keyspace
     * event. */
    initObjectLRUOrLFU(val);
    kvstoreDictSetVal(db->keys, dict_index, de, val);
    signalKeyAsReady(db, key, val->type);
    notifyKeyspaceEvent(NOTIFY_NEW, "new", key, db->id);
}

void dbAdd(serverDb *db, robj *key, robj *val) {
    dbAddInternal(db, key, val, 0);
}
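
/* Ownership sketch (hypothetical caller): the key sds is copied into the
 * kvstore, so the caller still owns and must free its robj *key, while a
 * freshly created value (refcount 1) is effectively handed over to the DB
 * since dbAdd() does not increment its refcount:
 *
 *   robj *val = createStringObject("bar", 3);
 *   dbAdd(db, key, val);   // asserts (aborts) if the key already exists
 */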

/* Returns which dict index should be used with kvstore for a given key. */
static int getKVStoreIndexForKey(sds key) {
    return server.cluster_enabled ? getKeySlot(key) : 0;
}
|
|
|
|
|
Optimize the per slot dictionary by checking for cluster mode earlier (#995)
While doing some profiling, I noticed that getKeySlot() was a fairly
large part (~0.7%) of samples doing perf with high pipeline during
standalone. I think this is because we do a very late check for
server.cluster_mode, we first call getKeySlot() and then call
calculateKeySlot(). (calculateKeySlot was surprisingly not automatically
inlined, we were doing a jump into it and then immediately returning
zero). We then also do useless work in the form of caching zero in
client->slot, which will further mess with cache lines.
So, this PR tries to accomplish a few things things.
1) The usage of the `slot` name made a lot more sense before the
introduction of the kvstore. Now with kvstore, we call this the database
index, so all the references to slot in standalone are no longer really
accurate.
2) Pull the cluster mode check all the way out of getKeySlot(), so
hopefully a bit more performant.
3) Remove calculateKeySlot() as independent from getKeySlot().
calculateKeySlot used to have 3 call sites outside of db.c, which
warranted it's own function. It's now only called in two places,
pubsub.c and networking.c.
I ran some profiling, and saw about ~0.3% improvement, but don't really
trust it because you'll see a much higher (~2%) variance in test runs
just by how the branch predictions will get changed with a new memory
layout. Running perf again showed no samples in getKeySlot() and a
reduction in samples in lookupKey(), so maybe this will help a little
bit.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Ping Xie <pingxie@google.com>
2024-09-11 09:53:42 -07:00
|
|
|
/* Returns the cluster hash slot for a given key, trying to use the cached slot that
|
|
|
|
* stored on the server.current_client first. If there is no cached value, it will compute the hash slot
|
|
|
|
* and then cache the value.*/
|
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, that are currently used to create a linked list between entries in the same slot. Main idea is splitting main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time, in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also instead of rehashing a single dictionary, cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key, from the random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this search we introduced binary index tree). With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using same binary index that is used for random key selection, this index allows us to find a slot for a specific key index. For example if there are 10 keys in the slot 0, then we can quickly find a slot that contains 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to not only save position within the dictionary but also the slot id. In this change we append slot id into LSB of the cursor so it can be passed around between client and the server. This has interesting side effect, now you'll be able to start scanning specific slot by simply providing slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth nothing the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple multiple times, hence we are relying on cached slot id in the client during the command executions. All operations that access random keys, either should pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly, while loading RDB, it's not enough to know total number of keys (of course we could approximate number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains number of keys in each slot, which can be used as a hint during loading.
* DB size - besides `DBSIZE` API, we need to know size of the DB in many places want, in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep DBSIZE operation O(1). This is also kept for O(1) expires computation as well.
## Performance
This change improves SET performance in cluster mode by ~5%, most of the gains come from us not having to maintain linked lists for keys in slot, non-cluster mode has same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead for finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` to `MEMORY STATS`
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
|
|
|
int getKeySlot(sds key) {
|
Optimize the per slot dictionary by checking for cluster mode earlier (#995)
While doing some profiling, I noticed that getKeySlot() was a fairly
large part (~0.7%) of samples doing perf with high pipeline during
standalone. I think this is because we do a very late check for
server.cluster_mode, we first call getKeySlot() and then call
calculateKeySlot(). (calculateKeySlot was surprisingly not automatically
inlined, we were doing a jump into it and then immediately returning
zero). We then also do useless work in the form of caching zero in
client->slot, which will further mess with cache lines.
So, this PR tries to accomplish a few things things.
1) The usage of the `slot` name made a lot more sense before the
introduction of the kvstore. Now with kvstore, we call this the database
index, so all the references to slot in standalone are no longer really
accurate.
2) Pull the cluster mode check all the way out of getKeySlot(), so
hopefully a bit more performant.
3) Remove calculateKeySlot() as independent from getKeySlot().
calculateKeySlot used to have 3 call sites outside of db.c, which
warranted it's own function. It's now only called in two places,
pubsub.c and networking.c.
I ran some profiling, and saw about ~0.3% improvement, but don't really
trust it because you'll see a much higher (~2%) variance in test runs
just by how the branch predictions will get changed with a new memory
layout. Running perf again showed no samples in getKeySlot() and a
reduction in samples in lookupKey(), so maybe this will help a little
bit.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Ping Xie <pingxie@google.com>
2024-09-11 09:53:42 -07:00
|
|
|
serverAssert(server.cluster_enabled);
|
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, that are currently used to create a linked list between entries in the same slot. Main idea is splitting main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time, in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also instead of rehashing a single dictionary, cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key, from the random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this search we introduced binary index tree). With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using same binary index that is used for random key selection, this index allows us to find a slot for a specific key index. For example if there are 10 keys in the slot 0, then we can quickly find a slot that contains 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to not only save position within the dictionary but also the slot id. In this change we append slot id into LSB of the cursor so it can be passed around between client and the server. This has interesting side effect, now you'll be able to start scanning specific slot by simply providing slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth nothing the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple multiple times, hence we are relying on cached slot id in the client during the command executions. All operations that access random keys, either should pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly, while loading RDB, it's not enough to know total number of keys (of course we could approximate number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains number of keys in each slot, which can be used as a hint during loading.
* DB size - besides `DBSIZE` API, we need to know size of the DB in many places want, in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep DBSIZE operation O(1). This is also kept for O(1) expires computation as well.
## Performance
This change improves SET performance in cluster mode by ~5%, most of the gains come from us not having to maintain linked lists for keys in slot, non-cluster mode has same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead for finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` to `MEMORY STATS`
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
|
|
|
/* This is performance optimization that uses pre-set slot id from the current command,
|
|
|
|
* in order to avoid calculation of the key hash.
|
2024-07-26 19:06:16 -04:00
|
|
|
*
|
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, that are currently used to create a linked list between entries in the same slot. Main idea is splitting main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time, in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also instead of rehashing a single dictionary, cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key, from the random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this search we introduced binary index tree). With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using same binary index that is used for random key selection, this index allows us to find a slot for a specific key index. For example if there are 10 keys in the slot 0, then we can quickly find a slot that contains 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to not only save position within the dictionary but also the slot id. In this change we append slot id into LSB of the cursor so it can be passed around between client and the server. This has interesting side effect, now you'll be able to start scanning specific slot by simply providing slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth nothing the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple multiple times, hence we are relying on cached slot id in the client during the command executions. All operations that access random keys, either should pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly, while loading RDB, it's not enough to know total number of keys (of course we could approximate number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains number of keys in each slot, which can be used as a hint during loading.
* DB size - besides `DBSIZE` API, we need to know size of the DB in many places want, in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep DBSIZE operation O(1). This is also kept for O(1) expires computation as well.
## Performance
This change improves SET performance in cluster mode by ~5%, most of the gains come from us not having to maintain linked lists for keys in slot, non-cluster mode has same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead for finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` to `MEMORY STATS`
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
|
|
|
* This optimization is only used when current_client flag `CLIENT_EXECUTING_COMMAND` is set.
|
|
|
|
* It only gets set during the execution of command under `call` method. Other flows requesting
|
Optimize the per slot dictionary by checking for cluster mode earlier (#995)
While doing some profiling, I noticed that getKeySlot() was a fairly
large part (~0.7%) of samples doing perf with high pipeline during
standalone. I think this is because we do a very late check for
server.cluster_mode, we first call getKeySlot() and then call
calculateKeySlot(). (calculateKeySlot was surprisingly not automatically
inlined, we were doing a jump into it and then immediately returning
zero). We then also do useless work in the form of caching zero in
client->slot, which will further mess with cache lines.
So, this PR tries to accomplish a few things things.
1) The usage of the `slot` name made a lot more sense before the
introduction of the kvstore. Now with kvstore, we call this the database
index, so all the references to slot in standalone are no longer really
accurate.
2) Pull the cluster mode check all the way out of getKeySlot(), so it is
hopefully a bit more performant (see the sketch after this message).
3) Remove calculateKeySlot() as independent from getKeySlot().
calculateKeySlot used to have 3 call sites outside of db.c, which
warranted its own function. It's now only called in two places,
pubsub.c and networking.c.
I ran some profiling, and saw about ~0.3% improvement, but don't really
trust it because you'll see a much higher (~2%) variance in test runs
just by how the branch predictions will get changed with a new memory
layout. Running perf again showed no samples in getKeySlot() and a
reduction in samples in lookupKey(), so maybe this will help a little
bit.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Ping Xie <pingxie@google.com>
2024-09-11 09:53:42 -07:00
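As a hedged illustration of item 2 above (a sketch only, assuming the surrounding server.h declarations; the real helper in this file is getKVStoreIndexForKey, shown further below), the shape of the change is simply to test cluster mode before any hashing:

```
/* Sketch: in standalone mode there is a single dictionary, so index 0 is
 * returned without computing a hash; only cluster mode pays for the slot
 * calculation (which may also reuse the slot cached in current_client). */
static inline int keyDictIndexSketch(sds key) {
    if (!server.cluster_mode) return 0; /* early out, no hashing */
    return getKeySlot(key);
}
```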
|
|
|
* the key slot would fall back to keyHashSlot.
|
2024-07-26 19:06:16 -04:00
|
|
|
*
|
|
|
|
* Modules and scripts executed on the primary may get replicated as multi-execs that operate on multiple slots,
|
|
|
|
* so we must always recompute the slot for commands coming from the primary.
|
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be acceptable. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time, in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also instead of rehashing a single dictionary, cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time (see the sketch after this message).
* Iteration efficiency - when iterating dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using same binary index that is used for random key selection, this index allows us to find a slot for a specific key index. For example if there are 10 keys in the slot 0, then we can quickly find a slot that contains 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to not only save the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server. This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as the cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Slot calculation optimization - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the slot multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of the `key_count`. This way we can keep the DBSIZE operation O(1). The same is done for O(1) expires computation.
## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
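The getRandomKey bullet above only names the binary index tree; the following is a generic, hedged sketch of such a structure over per-slot key counts (not the kvstore's actual code), showing how a uniformly chosen key index is mapped to its slot with binary search over prefix sums, matching the O(log^2(slot count)) bound mentioned above.

```
#include <stdint.h>

/* Generic binary index (Fenwick) tree over per-slot key counts.
 * bit[] is 1-based and sized n+1; n is the number of slots. */
typedef struct { uint64_t *bit; int n; } slotCountIndex;

/* Add 'delta' keys to 'slot' (0-based). */
static void slotCountAdd(slotCountIndex *t, int slot, int64_t delta) {
    for (int i = slot + 1; i <= t->n; i += i & -i) t->bit[i] += (uint64_t)delta;
}

/* Number of keys in slots [0, slot_exclusive). */
static uint64_t slotCountPrefix(slotCountIndex *t, int slot_exclusive) {
    uint64_t sum = 0;
    for (int i = slot_exclusive; i > 0; i -= i & -i) sum += t->bit[i];
    return sum;
}

/* Map a global key index (0 <= key_index < total keys) to the slot that
 * contains it: binary search over prefix sums, O(log^2(slot count)). */
static int slotCountFindSlot(slotCountIndex *t, uint64_t key_index) {
    int lo = 0, hi = t->n - 1;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (slotCountPrefix(t, mid + 1) > key_index) hi = mid;
        else lo = mid + 1;
    }
    return lo;
}
```

Counts are updated on every add/delete; a fair random key is then drawn by picking key_index uniformly in [0, total keys) and descending into the returned slot's dictionary.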
|
|
|
*/
|
2024-07-26 19:06:16 -04:00
|
|
|
if (server.current_client && server.current_client->slot >= 0 && server.current_client->flag.executing_command &&
|
|
|
|
!server.current_client->flag.primary) {
|
2024-09-11 09:53:42 -07:00
|
|
|
debugServerAssertWithInfo(server.current_client, NULL,
|
|
|
|
(int)keyHashSlot(key, (int)sdslen(key)) == server.current_client->slot);
|
2023-10-14 23:58:26 -07:00
|
|
|
return server.current_client->slot;
|
|
|
|
}
|
2024-09-11 09:53:42 -07:00
|
|
|
int slot = keyHashSlot(key, (int)sdslen(key));
|
2024-07-26 19:06:16 -04:00
|
|
|
/* For the case of replicated commands from primary, getNodeByQuery() never gets called,
|
|
|
|
* and thus c->slot never gets populated. That said, if this command ends up accessing a key,
|
|
|
|
* we are able to backfill c->slot here, where the key's hash calculation is made. */
|
|
|
|
if (server.current_client && server.current_client->flag.primary) {
|
|
|
|
server.current_client->slot = slot;
|
|
|
|
}
|
|
|
|
return slot;
|
2023-10-14 23:58:26 -07:00
|
|
|
}
|
|
|
|
|
2020-04-09 16:21:48 +02:00
|
|
|
/* This is a special version of dbAdd() that is used only when loading
|
|
|
|
* keys from the RDB file: the key is passed as an SDS string that is
|
2024-07-02 15:45:37 -07:00
|
|
|
* copied by the function and freed by the caller.
|
2020-04-09 16:21:48 +02:00
|
|
|
*
|
|
|
|
* Moreover this function will not abort if the key is already busy, to
|
|
|
|
* give more control to the caller, nor will it signal the key as ready
|
|
|
|
* since it is not useful in this context.
|
|
|
|
*
|
2024-07-02 15:45:37 -07:00
|
|
|
* The function returns 1 if the key was added to the database, otherwise 0 is returned.
|
|
|
|
*
|
|
|
|
* In this case a copy of `key` is stored in the kvstore; the caller must ensure the original `key` is properly freed.
|
|
|
|
*/
|
2024-04-03 10:02:43 +07:00
|
|
|
int dbAddRDBLoad(serverDb *db, sds key, robj *val) {
|
2024-10-16 17:40:11 -07:00
|
|
|
int dict_index = getKVStoreIndexForKey(key);
|
2024-09-11 09:53:42 -07:00
|
|
|
dictEntry *de = kvstoreDictAddRaw(db->keys, dict_index, key, NULL);
|
2021-08-31 08:25:36 +02:00
|
|
|
if (de == NULL) return 0;
|
2023-05-24 14:40:11 +08:00
|
|
|
initObjectLRUOrLFU(val);
|
2024-09-11 09:53:42 -07:00
|
|
|
kvstoreDictSetVal(db->keys, dict_index, de, val);
|
2020-04-09 16:21:48 +02:00
|
|
|
return 1;
|
|
|
|
}
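A hedged usage sketch for the helper above (the wrapper name is illustrative, not the actual rdb.c code): the loading path is expected to keep ownership of the key sds and only drop the value if the key turned out to be busy.

```
/* Illustrative only: expected calling pattern for dbAddRDBLoad() while
 * loading keys. 'key' is an sds owned by the caller, 'val' a fresh robj. */
static void rdbLoadAddKeySketch(serverDb *db, sds key, robj *val) {
    if (!dbAddRDBLoad(db, key, val)) {
        /* Key already existed: nothing was stored, so drop the value. */
        decrRefCount(val);
    }
    /* The function copies the key, so the caller frees it in either case. */
    sdsfree(key);
}
```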
|
|
|
|
|
2011-06-14 15:34:27 +02:00
|
|
|
/* Overwrite an existing key with a new value. Incrementing the reference
|
|
|
|
* count of the new value is up to the caller.
|
|
|
|
* This function does not modify the expire time of the existing key.
|
|
|
|
*
|
2022-11-30 17:56:36 +08:00
|
|
|
* The 'overwrite' flag is an indication whether this is done as part of a
|
|
|
|
* complete replacement of the key, which can be thought of as a deletion and
|
|
|
|
* replacement (in which case we need to emit deletion signals), or just an
|
|
|
|
* update of a value of an existing key (when false).
|
|
|
|
*
|
2023-05-28 10:58:29 +03:00
|
|
|
* The dictEntry input is optional, can be used if we already have one.
|
|
|
|
*
|
2011-06-14 15:34:27 +02:00
|
|
|
* The program is aborted if the key was not already present. */
|
2024-04-03 10:02:43 +07:00
|
|
|
static void dbSetValue(serverDb *db, robj *key, robj *val, int overwrite, dictEntry *de) {
|
2024-09-11 09:53:42 -07:00
|
|
|
int dict_index = getKVStoreIndexForKey(key->ptr);
|
|
|
|
if (!de) de = kvstoreDictFind(db->keys, dict_index, key->ptr);
|
2015-07-26 15:29:53 +02:00
|
|
|
serverAssertWithInfo(NULL, key, de != NULL);
|
2018-07-31 12:07:57 +08:00
|
|
|
robj *old = dictGetVal(de);
|
2023-05-24 14:40:11 +08:00
|
|
|
|
|
|
|
val->lru = old->lru;
|
|
|
|
|
2022-11-30 17:56:36 +08:00
|
|
|
if (overwrite) {
|
|
|
|
/* RM_StringDMA may call dbUnshareStringValue which may free val, so we
|
|
|
|
* need to incr to retain old */
|
|
|
|
incrRefCount(old);
|
|
|
|
/* Although the key is not really deleted from the database, we regard
|
|
|
|
* overwrite as two steps of unlink+add, so we still need to call the unlink
|
|
|
|
* callback of the module. */
|
|
|
|
moduleNotifyKeyUnlink(key, old, db->id, DB_FLAG_KEY_OVERWRITE);
|
|
|
|
/* We want to try to unblock any module clients or clients using a blocking XREADGROUP */
|
|
|
|
signalDeletedKeyAsReady(db, key, old->type);
|
|
|
|
decrRefCount(old);
|
|
|
|
/* Because of RM_StringDMA, old may be changed, so we need to get old again */
|
|
|
|
old = dictGetVal(de);
|
|
|
|
}
|
Optimize the per slot dictionary by checking for cluster mode earlier (#995)
While doing some profiling, I noticed that getKeySlot() was a fairly
large part (~0.7%) of samples doing perf with high pipeline during
standalone. I think this is because we do a very late check for
server.cluster_mode, we first call getKeySlot() and then call
calculateKeySlot(). (calculateKeySlot was surprisingly not automatically
inlined, we were doing a jump into it and then immediately returning
zero). We then also do useless work in the form of caching zero in
client->slot, which will further mess with cache lines.
So, this PR tries to accomplish a few things things.
1) The usage of the `slot` name made a lot more sense before the
introduction of the kvstore. Now with kvstore, we call this the database
index, so all the references to slot in standalone are no longer really
accurate.
2) Pull the cluster mode check all the way out of getKeySlot(), so
hopefully a bit more performant.
3) Remove calculateKeySlot() as independent from getKeySlot().
calculateKeySlot used to have 3 call sites outside of db.c, which
warranted it's own function. It's now only called in two places,
pubsub.c and networking.c.
I ran some profiling, and saw about ~0.3% improvement, but don't really
trust it because you'll see a much higher (~2%) variance in test runs
just by how the branch predictions will get changed with a new memory
layout. Running perf again showed no samples in getKeySlot() and a
reduction in samples in lookupKey(), so maybe this will help a little
bit.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Ping Xie <pingxie@google.com>
2024-09-11 09:53:42 -07:00
|
|
|
kvstoreDictSetVal(db->keys, dict_index, de, val);
|
Io thread work offload (#763)
### IO-Threads Work Offloading
This PR is the 2nd of 3 PRs intended to achieve the goal of 1M requests
per second.
(1st PR: https://github.com/valkey-io/valkey/pull/758)
This PR offloads additional work to the I/O threads, beyond the current
read-parse/write operations, to better utilize the I/O threads and
reduce the load on the main thread.
It contains the following 3 commits:
### Poll Offload
Currently, the main thread is responsible for executing the poll-wait
system call, while the IO threads wait for tasks from the main thread.
The poll-wait operation is expensive and can consume up to 30% of the
main thread's time. We could have let the IO threads do the poll-wait by
themselves, with each thread listening to some of the clients and
notifying the main thread when a client's command is ready to execute.
However, the current approach, where the main thread listens for events
from the network, has several benefits. The main thread remains in
charge, allowing it to know the state of each client
(idle/read/write/close) at any given time. Additionally, it makes the
threads flexible, enabling us to drain an IO thread's job queue and stop
a thread when the load is light without modifying the event loop and
moving its clients to a different IO thread. Furthermore, with this
approach, the IO threads don't need to wait for both messages from the
network and from the main thread; instead, the threads wait only for
tasks from the main thread.
To enjoy the benefits of both the main thread remaining in charge and
the poll being offloaded, we propose offloading the poll-wait as a
single-time, non-blocking job to one of the IO threads. The IO thread
will perform a poll-wait non-blocking call while the main thread
processes the client commands. Later, in `aeProcessEvents`, instead of
sleeping on the poll, we check for the IO thread's poll-wait results.
The poll-wait will be offloaded in `beforeSleep` only when there are
ready events for the main thread to process. If no events are pending,
the main thread will revert to the current behavior and sleep on the
poll by itself.
**Implementation Details**
A new callback, `custompoll`, was added to the `aeEventLoop`; when not set
to `NULL`, the ae will call the `custompoll` callback instead of
`aeApiPoll`.
When the poll is offloaded we will set `custompoll` to
`getIOThreadPollResults` and send a poll job to the thread. The thread
will take a mutex and make a non-blocking (timeout 0) call to `aePoll`,
which will populate the fired events array. The IO thread will set
`server.io_fired_events` to the number of returned `numevents`;
later the main thread, in `custompoll`, will return
`server.io_fired_events` and set `custompoll` back to `NULL`.
To ensure thread safety when accessing server.el, all functions that
modify the eventloop events were wrapped with a mutex to ensure mutual
exclusion when modifying the events.
### Command Lookup Offload
As the IO thread parses the command from the client's Querybuf, it can
perform a command lookup in the commands dictionary, which can consume
up to ~5% of the main-thread runtime.
**Implementation details**
The IO thread will store the looked-up command in the client's new
`io_parsed_cmd` field. We can't use `c->cmd` for that since we use
`c->cmd` to check if a command was reprocessed or not.
To ensure thread safety when accessing the command dictionary, we make
sure the main thread isn't changing the dictionary while IO threads are
accessing it. This is accomplished by introducing a new flag called
`no_incremental_rehash` for the `dictType` commands. When performing
`dictResize`, we will rehash the entire dictionary in place rather than
deferring the process.
### Free Offload
Since the command arguments are allocated by the I/O thread, it would be
beneficial if they were also freed by the same thread. If the main
thread frees objects allocated by the I/O thread, two issues arise:
1. During the freeing process, the main thread needs to access the SDS
pointed to by the object to get its length.
2. With Jemalloc, each thread manages a thread-local pool (`tcache`) of
buffers for quick reallocation without accessing the arena. If the main
thread constantly frees objects allocated by other threads, those
threads will have to frequently access the shared arena to obtain new
memory allocations
**Implementation Details**
When freeing the client's argv, we will send the argv array to the
thread that allocated it. The thread will be identified by the client
ID. When freeing an object during `dbOverwrite`, we will offload the
object free as well. We will extend this to offload the free during
`dbDelete` in a future PR, as its effects on defrag/memory evictions
need to be studied.
---------
Signed-off-by: Uri Yagelnik <uriy@amazon.com>
2024-07-19 05:21:45 +03:00
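As a hedged sketch of the "Free Offload" idea from the commit message above (illustrative only; the actual routing lives in tryOffloadFreeObjToIOThreads, and the modulo mapping here is an assumption): the free job is directed back to the IO thread that allocated the object, identified by the owning client's id, so each jemalloc tcache keeps freeing its own allocations.

```
#include <stdint.h>

/* Sketch: pick the IO thread that should free an object allocated on behalf
 * of a given client; -1 means "no IO threads, free inline on the main thread". */
static inline int freeOffloadTargetThread(uint64_t owner_client_id, int io_threads_num) {
    if (io_threads_num <= 1) return -1;
    return (int)(owner_client_id % (uint64_t)io_threads_num);
}
```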
|
|
|
/* For efficiency, let the I/O thread that allocated an object also deallocate it. */
|
|
|
|
if (tryOffloadFreeObjToIOThreads(old) == C_OK) {
|
|
|
|
/* OK */
|
|
|
|
} else if (server.lazyfree_lazy_server_del) {
|
2021-06-16 14:45:49 +08:00
|
|
|
freeObjAsync(key, old, db->id);
|
2023-01-11 09:57:10 +01:00
|
|
|
} else {
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
decrRefCount(old);
|
2016-07-18 13:49:31 +02:00
|
|
|
}
|
2010-06-22 00:07:48 +02:00
|
|
|
}
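The kvstore refactor described in the commit message above treats a database as an array of dictionaries; a minimal structural sketch follows (field names are assumptions, and the real kvstore keeps more metadata such as the rehashing list and the key-count index):

```
/* Structural sketch only: an "array of dictionaries" keyspace. */
typedef struct kvstoreSketch {
    dict **dicts;                 /* one dict per slot: 16384 in cluster mode, 1 otherwise */
    int num_dicts;
    unsigned long long key_count; /* total keys across all dicts, kept for O(1) DBSIZE */
} kvstoreSketch;
```

Per note 3 above, the individual dicts can be allocated lazily when their first key arrives.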
|
|
|
|
|
2022-11-30 17:56:36 +08:00
|
|
|
/* Replace an existing key with a new value; we just replace the value and don't
|
|
|
|
* emit any events */
|
2024-04-03 10:02:43 +07:00
|
|
|
void dbReplaceValue(serverDb *db, robj *key, robj *val) {
|
2023-05-28 10:58:29 +03:00
|
|
|
dbSetValue(db, key, val, 0, NULL);
|
2022-11-30 17:56:36 +08:00
|
|
|
}
|
|
|
|
|
2011-06-14 15:34:27 +02:00
|
|
|
/* High level Set operation. This function can be used in order to set
|
|
|
|
* a key, whether it already existed or not, to a new object.
|
2010-06-22 00:07:48 +02:00
|
|
|
*
|
2011-06-14 15:34:27 +02:00
|
|
|
* 1) The ref count of the value object is incremented.
|
|
|
|
* 2) clients WATCHing for the destination key are notified.
|
2019-12-18 11:58:02 +01:00
|
|
|
* 3) The expire time of the key is reset (the key is made persistent),
|
2021-11-03 14:12:33 +02:00
|
|
|
* unless 'SETKEY_KEEPTTL' is enabled in flags.
|
|
|
|
* 4) The key lookup can take place outside this interface; the outcome will be
|
|
|
|
* delivered with 'SETKEY_ALREADY_EXIST' or 'SETKEY_DOESNT_EXIST'
|
Replication: fix the infamous key leakage of writable slaves + EXPIRE.
BACKGROUND AND USE CASE
Redis slaves are normally read only; however, they support a "writable"
mode which is very handy when scaling reads on slaves that actually
need write operations in order to access data. For instance, imagine
having slaves replicating certain Sets keys from the master. When
accessing the data on the slave, we want to perform intersections between
such Sets values. However we don't want to intersect each time: to cache
the intersection for some time often is a good idea.
To do so, it is possible to setup a slave as a writable slave, and
perform the intersection on the slave side, perhaps setting a TTL on the
resulting key so that it will expire after some time.
THE BUG
Problem: in order to have a consistent replication, expiring of keys in
Redis replication is up to the master, which synthesizes DEL operations to
send in the replication stream. However slaves logically expire keys
by hiding them from read attempts from clients, so that if the master did
not promptly send a DEL, the client still sees logically expired keys
as non existing.
Because slaves don't actively expire keys by actually evicting them but
just mask them from the POV of read operations, if a key is created in a
writable slave, and an expire is set, the key will be leaked forever:
1. No DEL will be received from the master, which does not know about
such a key at all.
2. No eviction will be performed by the slave, since it needs to disable
eviction because it's up to masters, otherwise consistency of data is
lost.
THE FIX
In order to fix the problem, the slave should be able to tag keys that
were created in the slave side and have an expire set in some way.
My solution involved using an unique additional dictionary created by
the writable slave only if needed. The dictionary is obviously keyed by
the key name that we need to track: all the keys that are set with an
expire directly by a client writing to the slave are tracked.
The value in the dictionary is a bitmap of all the DBs where such a key
name needs to be tracked, so that we can use a single dictionary to track
keys in all the DBs used by the slave (actually this limits the solution
to the first 64 DBs, but the default with Redis is to use 16 DBs).
This solution keeps both the added complexity and the CPU penalty small,
and the cost is actually zero when the feature is not used. The slave-side
eviction is encapsulated in code which is not coupled with the rest of
the Redis core, if not for the hook to track the keys.
TODO
I'm doing the first smoke tests to see if the feature works as expected:
so far so good. Unit tests should be added before merging into the
4.0 branch.
2016-12-13 10:20:06 +01:00
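A hedged sketch of the tracking structure described in THE FIX above (names are illustrative and the dict value handling is simplified): one dictionary keyed by key name whose value is a 64-bit bitmap of the DB ids in which a writable replica set an expire on that key.

```
#include <stdint.h>

/* Sketch: remember that 'keyname' got an expire on a writable replica in DB
 * 'dbid', by OR-ing the DB bit into a per-keyname 64-bit bitmap. */
static void trackReplicaExpireSketch(dict *tracked, sds keyname, int dbid) {
    if (dbid >= 64) return; /* only the first 64 DBs fit in the bitmap */
    dictEntry *de = dictFind(tracked, keyname);
    if (de == NULL) {
        de = dictAddRaw(tracked, sdsdup(keyname), NULL);
        dictSetUnsignedIntegerVal(de, 0);
    }
    dictSetUnsignedIntegerVal(de, dictGetUnsignedIntegerVal(de) | (1ULL << dbid));
}
```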
|
|
|
*
|
2020-04-21 10:51:46 +02:00
|
|
|
* All the new keys in the database should be created via this interface.
|
|
|
|
* The client 'c' argument may be set to NULL if the operation is performed
|
|
|
|
* in a context where there is no clear client performing the operation. */
|
2024-04-03 10:02:43 +07:00
|
|
|
void setKey(client *c, serverDb *db, robj *key, robj *val, int flags) {
|
2021-11-03 14:12:33 +02:00
|
|
|
int keyfound = 0;
|
|
|
|
|
|
|
|
if (flags & SETKEY_ALREADY_EXIST)
|
|
|
|
keyfound = 1;
|
2023-05-28 10:58:29 +03:00
|
|
|
else if (flags & SETKEY_ADD_OR_UPDATE)
|
|
|
|
keyfound = -1;
|
2021-11-03 14:12:33 +02:00
|
|
|
else if (!(flags & SETKEY_DOESNT_EXIST))
|
|
|
|
keyfound = (lookupKeyWrite(db, key) != NULL);
|
|
|
|
|
|
|
|
if (!keyfound) {
|
2011-06-14 15:34:27 +02:00
|
|
|
dbAdd(db, key, val);
|
2023-05-28 10:58:29 +03:00
|
|
|
} else if (keyfound < 0) {
|
|
|
|
dbAddInternal(db, key, val, 1);
|
2010-06-22 00:07:48 +02:00
|
|
|
} else {
|
2023-05-28 10:58:29 +03:00
|
|
|
dbSetValue(db, key, val, 1, NULL);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
2011-06-14 15:34:27 +02:00
|
|
|
incrRefCount(val);
|
2021-11-03 14:12:33 +02:00
|
|
|
if (!(flags & SETKEY_KEEPTTL)) removeExpire(db, key);
|
|
|
|
if (!(flags & SETKEY_NO_SIGNAL)) signalModifiedKey(c, db, key);
|
2019-12-18 11:58:02 +01:00
|
|
|
}
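As a small usage sketch of setKey() (the helper name below is illustrative; the flag is the one documented above, and notifyKeyspaceEvent is the standard keyspace notification call): a SET-style path that wants to preserve an existing TTL passes SETKEY_KEEPTTL.

```
/* Illustrative only: store a string value while keeping any existing expire. */
static void setStringKeepTTLSketch(client *c, robj *key, robj *val) {
    /* setKey() increments val's refcount, signals watchers of 'key', and,
     * because of SETKEY_KEEPTTL, leaves any existing expire untouched. */
    setKey(c, c->db, key, val, SETKEY_KEEPTTL);
    notifyKeyspaceEvent(NOTIFY_STRING, "set", key, c->db->id);
}
```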
|
|
|
|
|
2024-04-09 01:24:03 -07:00
|
|
|
/* Return a random key, in form of an Object.
|
2010-06-22 00:07:48 +02:00
|
|
|
* If there are no keys, NULL is returned.
|
|
|
|
*
|
|
|
|
* The function makes sure to return keys not already expired. */
|
2024-04-03 10:02:43 +07:00
|
|
|
robj *dbRandomKey(serverDb *db) {
|
2014-03-20 16:20:37 +01:00
|
|
|
dictEntry *de;
|
2018-06-14 18:08:05 +02:00
|
|
|
int maxtries = 100;
|
2024-02-05 22:21:35 +07:00
|
|
|
int allvolatile = kvstoreSize(db->keys) == kvstoreSize(db->expires);
|
2010-06-22 00:07:48 +02:00
|
|
|
|
|
|
|
while (1) {
|
|
|
|
sds key;
|
|
|
|
robj *keyobj;
|
2024-10-16 17:40:11 -07:00
|
|
|
int randomDictIndex = kvstoreGetFairRandomDictIndex(db->keys);
|
|
|
|
de = kvstoreDictGetFairRandomKey(db->keys, randomDictIndex);
|
2010-06-22 00:07:48 +02:00
|
|
|
if (de == NULL) return NULL;
|
|
|
|
|
2011-11-08 17:07:55 +01:00
|
|
|
key = dictGetKey(de);
|
2010-06-22 00:07:48 +02:00
|
|
|
keyobj = createStringObject(key, sdslen(key));
|
2024-10-16 17:40:11 -07:00
|
|
|
if (dbFindExpiresWithDictIndex(db, key, randomDictIndex)) {
|
2018-06-14 18:08:05 +02:00
|
|
|
if (allvolatile && server.primary_host && --maxtries == 0) {
|
|
|
|
/* If the DB is composed only of keys with an expire set,
|
|
|
|
* it could happen that all the keys are already logically
|
|
|
|
* expired in the replica, so the function cannot stop because
|
|
|
|
* expireIfNeeded() is false, nor can it stop because
|
2021-06-10 20:39:33 +08:00
|
|
|
* dictGetFairRandomKey() returns NULL (there are keys to return).
|
2018-06-14 18:08:05 +02:00
|
|
|
* To prevent an infinite loop we make a limited number of tries, but if there
|
|
|
|
* are the conditions for an infinite loop, eventually we
|
|
|
|
* return a key name that may be already expired. */
|
|
|
|
return keyobj;
|
|
|
|
}
|
2024-10-16 17:40:11 -07:00
|
|
|
if (expireIfNeededWithDictIndex(db, keyobj, 0, randomDictIndex) != KEY_VALID) {
|
2010-06-22 00:07:48 +02:00
|
|
|
decrRefCount(keyobj);
|
|
|
|
continue; /* search for another key. This expired. */
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return keyobj;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2024-10-16 17:40:11 -07:00
|
|
|
int dbGenericDeleteWithDictIndex(serverDb *db, robj *key, int async, int flags, int dict_index) {
|
2022-11-30 17:56:36 +08:00
|
|
|
dictEntry **plink;
|
|
|
|
int table;
|
2024-09-11 09:53:42 -07:00
|
|
|
dictEntry *de = kvstoreDictTwoPhaseUnlinkFind(db->keys, dict_index, key->ptr, &plink, &table);
|
2020-11-16 16:34:04 +08:00
|
|
|
if (de) {
|
|
|
|
robj *val = dictGetVal(de);
|
2022-11-30 17:56:36 +08:00
|
|
|
/* RM_StringDMA may call dbUnshareStringValue which may free val, so we
|
|
|
|
* need to incr to retain val */
|
|
|
|
incrRefCount(val);
|
2020-11-16 16:34:04 +08:00
|
|
|
/* Tells the module that the key has been unlinked from the database. */
|
2022-11-30 17:56:36 +08:00
|
|
|
moduleNotifyKeyUnlink(key, val, db->id, flags);
|
2022-10-18 18:50:02 +02:00
|
|
|
/* We want to try to unblock any module clients or clients using a blocking XREADGROUP */
|
|
|
|
signalDeletedKeyAsReady(db, key, val->type);
|
2022-11-30 17:56:36 +08:00
|
|
|
/* We should call decr before freeObjAsync. If not, the refcount may be
|
|
|
|
* greater than 1, so freeObjAsync doesn't work */
|
|
|
|
decrRefCount(val);
|
2021-10-01 14:49:33 +02:00
|
|
|
if (async) {
|
2022-11-30 17:56:36 +08:00
|
|
|
/* Because of dbUnshareStringValue, the val in de may change. */
|
|
|
|
freeObjAsync(key, dictGetVal(de), db->id);
|
2024-09-11 09:53:42 -07:00
|
|
|
kvstoreDictSetVal(db->keys, dict_index, de, NULL);
|
2021-10-01 14:49:33 +02:00
|
|
|
}
|
2022-11-30 17:56:36 +08:00
|
|
|
/* Deleting an entry from the expires dict will not free the sds of
|
2023-12-06 09:37:24 +08:00
|
|
|
* the key, because it is shared with the main dictionary. */
|
2024-09-11 09:53:42 -07:00
|
|
|
kvstoreDictDelete(db->expires, dict_index, key->ptr);
|
2024-02-05 22:21:35 +07:00
|
|
|
|
2024-09-11 09:53:42 -07:00
|
|
|
kvstoreDictTwoPhaseUnlinkFree(db->keys, dict_index, de, plink, table);
|
2011-04-28 19:00:33 +02:00
|
|
|
return 1;
|
|
|
|
} else {
|
|
|
|
return 0;
|
|
|
|
}
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2024-10-16 17:40:11 -07:00
|
|
|
/* Helper for sync and async delete. */
|
|
|
|
int dbGenericDelete(serverDb *db, robj *key, int async, int flags) {
|
|
|
|
int dict_index = getKVStoreIndexForKey(key->ptr);
|
|
|
|
return dbGenericDeleteWithDictIndex(db, key, async, flags, dict_index);
|
|
|
|
}
|
|
|
|
|
2021-10-01 14:49:33 +02:00
|
|
|
/* Delete a key, value, and associated expiration entry if any, from the DB */
|
2024-04-03 10:02:43 +07:00
|
|
|
int dbSyncDelete(serverDb *db, robj *key) {
|
2022-11-30 17:56:36 +08:00
|
|
|
return dbGenericDelete(db, key, 0, DB_FLAG_KEY_DELETED);
|
2021-10-01 14:49:33 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Delete a key, value, and associated expiration entry if any, from the DB. If
|
|
|
|
* the value consists of many allocations, it may be freed asynchronously. */
|
2024-04-03 10:02:43 +07:00
|
|
|
int dbAsyncDelete(serverDb *db, robj *key) {
|
2022-11-30 17:56:36 +08:00
|
|
|
return dbGenericDelete(db, key, 1, DB_FLAG_KEY_DELETED);
|
2021-10-01 14:49:33 +02:00
|
|
|
}
|
|
|
|
|
2024-04-09 01:24:03 -07:00
|
|
|
/* This is a wrapper whose behavior depends on the server lazy free
|
2015-07-30 11:46:31 +02:00
|
|
|
* configuration. Deletes the key synchronously or asynchronously. */
|
2024-04-03 10:02:43 +07:00
|
|
|
int dbDelete(serverDb *db, robj *key) {
|
2022-11-30 17:56:36 +08:00
|
|
|
return dbGenericDelete(db, key, server.lazyfree_lazy_server_del, DB_FLAG_KEY_DELETED);
|
2015-07-30 11:46:31 +02:00
|
|
|
}
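/* Illustrative usage note (not part of the original source): command
 * implementations normally call dbDelete() and let the
 * lazyfree-lazy-server-del configuration pick the sync or async path, e.g.
 *
 *     if (dbDelete(c->db, key)) server.dirty++;
 *
 * dbSyncDelete() and dbAsyncDelete() are for callers that must force one
 * specific behavior regardless of that configuration. */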
|
|
|
|
|
2014-03-30 18:32:17 +02:00
|
|
|
/* Prepare the string object stored at 'key' to be modified destructively
|
|
|
|
* to implement commands like SETBIT or APPEND.
|
|
|
|
*
|
|
|
|
* An object is usually ready to be modified unless one of the two conditions
|
|
|
|
* is true:
|
|
|
|
*
|
|
|
|
* 1) The object 'o' is shared (refcount > 1), we don't want to affect
|
|
|
|
* other users.
|
|
|
|
* 2) The object encoding is not "RAW".
|
|
|
|
*
|
|
|
|
* If the object is found to be in one of the above conditions (or both) by the
|
|
|
|
* function, an unshared / not-encoded copy of the string object is stored
|
|
|
|
* at 'key' in the specified 'db'. Otherwise the object 'o' itself is
|
|
|
|
* returned.
|
|
|
|
*
|
|
|
|
* USAGE:
|
|
|
|
*
|
|
|
|
* The object 'o' is what the caller already obtained by looking up 'key'
|
|
|
|
* in 'db', the usage pattern looks like this:
|
|
|
|
*
|
|
|
|
* o = lookupKeyWrite(db,key);
|
2015-07-26 15:28:00 +02:00
|
|
|
* if (checkType(c,o,OBJ_STRING)) return;
|
2014-03-30 18:32:17 +02:00
|
|
|
* o = dbUnshareStringValue(db,key,o);
|
|
|
|
*
|
|
|
|
* At this point the caller is ready to modify the object, for example
|
|
|
|
* using an sdscat() call to append some data, or anything else.
|
|
|
|
*/
|
2024-04-03 10:02:43 +07:00
|
|
|
robj *dbUnshareStringValue(serverDb *db, robj *key, robj *o) {
|
2015-07-26 15:29:53 +02:00
|
|
|
serverAssert(o->type == OBJ_STRING);
|
2015-07-26 15:28:00 +02:00
|
|
|
if (o->refcount != 1 || o->encoding != OBJ_ENCODING_RAW) {
|
2014-03-30 18:32:17 +02:00
|
|
|
robj *decoded = getDecodedObject(o);
|
|
|
|
o = createRawStringObject(decoded->ptr, sdslen(decoded->ptr));
|
|
|
|
decrRefCount(decoded);
|
2022-11-30 17:56:36 +08:00
|
|
|
dbReplaceValue(db, key, o);
|
2014-03-30 18:32:17 +02:00
|
|
|
}
|
|
|
|
return o;
|
|
|
|
}
|
|
|
|
|
2020-12-02 19:56:11 +08:00
|
|
|
/* Remove all keys from the database(s) structure. The dbarray argument
|
Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323)
For diskless replication in swapdb mode, considering we already spend replica memory
having a backup of current db to restore in case of failure, we can have the following benefits
by instead swapping database only in case we succeeded in transferring db from master:
- Avoid `LOADING` response during failed and successful synchronization for cases where the
replica is already up and running with data.
- Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could be implemented also for disk replication with similar benefits if consumers are willing
to spend the extra memory usage.
General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode will only kick in if the replica is syncing from a master that has the same
repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading` to differentiate from the blocking loading
- Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
and the tempDb that is passed around.
- Because this affects replicas only, we assume that if they are not read-only and accept write commands
during replication, those writes are lost after SYNC the same way as before, but we're still denying CONFIG SET
here anyway to avoid complications.
Considerations for review:
- We have many cases where server.loading flag is used and even though I tried my best, there may
be cases where async_loading should be checked as well and cases where it shouldn't (would require
very good understanding of whole code)
- Several places that had different behavior depending on the loading flag were actually meant to just
handle commands coming from the AOF client differently than ones coming from real clients; these were changed
to check CLIENT_ID_AOF instead.
**Additional for Release Notes**
- Bugfix - server.dirty was not incremented for any kind of diskless replication; as an effect it wouldn't
contribute to triggering the next database SAVE
- New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
to allow modules to declare they support the diskless replication with async loading (when absent, we fall
back to disk-based loading).
Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2021-11-04 09:46:50 +01:00
|
|
|
* might not be the server's main DBs (it could be a temporary DB).
|
2020-12-02 19:56:11 +08:00
|
|
|
*
|
|
|
|
* The dbnum can be -1 if all the DBs should be emptied, or the specified
|
|
|
|
* DB index if we want to empty only a single database.
|
|
|
|
* The function returns the number of keys removed from the database(s). */
|
2024-04-03 10:02:43 +07:00
|
|
|
long long emptyDbStructure(serverDb *dbarray, int dbnum, int async, void(callback)(dict *)) {
|
2020-12-02 19:56:11 +08:00
|
|
|
long long removed = 0;
|
|
|
|
int startdb, enddb;
|
|
|
|
|
|
|
|
if (dbnum == -1) {
|
|
|
|
startdb = 0;
|
|
|
|
enddb = server.dbnum - 1;
|
|
|
|
} else {
|
|
|
|
startdb = enddb = dbnum;
|
|
|
|
}
|
|
|
|
|
|
|
|
for (int j = startdb; j <= enddb; j++) {
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
2024-02-05 22:21:35 +07:00
|
|
|
removed += kvstoreSize(dbarray[j].keys);
|
2020-12-02 19:56:11 +08:00
|
|
|
if (async) {
|
|
|
|
emptyDbAsync(&dbarray[j]);
|
|
|
|
} else {
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
2024-02-05 22:21:35 +07:00
|
|
|
kvstoreEmpty(dbarray[j].keys, callback);
|
|
|
|
kvstoreEmpty(dbarray[j].expires, callback);
|
2020-12-02 19:56:11 +08:00
|
|
|
}
|
|
|
|
/* Because all keys of database are removed, reset average ttl. */
|
|
|
|
dbarray[j].avg_ttl = 0;
|
|
|
|
dbarray[j].expires_cursor = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
return removed;
|
|
|
|
}
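/* Illustrative cross-reference (not part of the original source): one caller
 * of this helper is emptyData() below, which runs
 *
 *     removed = emptyDbStructure(server.db, dbnum, async, callback);
 *
 * after firing the FLUSHDB start event and signaling WATCHed keys. */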
|
|
|
|
|
Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953)
The issue with MAY_REPLICATE is that all automatic mechanisms to handle
write commands will not work. This requires special treatment for:
* Not allowing those commands to be executed on a RO replica.
* Allowing those commands to be executed on a RO replica from the primary connection.
* Allowing those commands to be executed on the RO replica from the AOF.
By setting those commands as WRITE commands we get all those properties from Redis.
A test was added to verify that those properties work as expected.
In addition, rearrange when and where functions are flushed. Before this PR functions were
flushed manually in `rdbLoadRio` and cleaned manually on failure. This contradicts the
assumption that functions are data and need to be created/deleted alongside the
data. As a side effect, for example, `debug reload noflush` did not flush the data but
did flush the functions, and `debug loadaof` flushed the data but not the functions.
This PR moves function deletion into `emptyDb`. `emptyDb` (renamed to `emptyData`) will
now accept an additional flag, `NOFUNCTIONS`, which specifically indicates that we do not
want to flush the functions (in all other cases, functions will be flushed). The new flag is used
on FLUSHALL and FLUSHDB only! Tests were added to `debug reload` and `debug loadaof`
to verify that functions behave the same as the data.
Notice that because functions are now deleted alongside the data we cannot allow
`CLUSTER RESET` to be called from within a function (it would cause the function to be released
while running), so this PR adds the `NO_SCRIPT` flag to `CLUSTER RESET`, making it impossible
to call it from within a function. The other cluster commands are allowed from within a
function (there are use cases that use `GETKEYSINSLOT` to iterate over all the keys in a
given slot). A test was added to verify that `CLUSTER RESET` is denied from within a script.
Another small change in this PR is that `RDBFLAGS_ALLOW_DUP` also applies to functions.
When loading functions, if this flag is set, we will replace old functions with new ones on collisions.
2021-12-21 16:13:29 +02:00
|
|
|
/* Remove all data (keys and functions) from all the databases in a
|
2024-04-09 01:24:03 -07:00
|
|
|
* server. If a callback is given, it is called from
|
Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953)
2021-12-21 16:13:29 +02:00
|
|
|
* time to time to signal that work is in progress.
|
2015-09-28 10:47:45 +02:00
|
|
|
*
|
2018-07-01 13:24:50 +08:00
|
|
|
* The dbnum can be -1 if all the DBs should be flushed, or the specified
|
2024-04-09 01:24:03 -07:00
|
|
|
* DB number if we want to flush only a single database number.
|
2015-09-28 10:47:45 +02:00
|
|
|
*
|
|
|
|
* Flags can be EMPTYDB_NO_FLAGS if no special flags are specified, or
|
2020-12-02 19:56:11 +08:00
|
|
|
* EMPTYDB_ASYNC if we want the memory to be freed in a different thread
|
Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953)
2021-12-21 16:13:29 +02:00
|
|
|
* and the function to return ASAP. EMPTYDB_NOFUNCTIONS can also be set
|
|
|
|
* to specify that we do not want to delete the functions.
|
2015-09-28 10:47:45 +02:00
|
|
|
*
|
Squash merging 125 typo/grammar/comment/doc PRs (#7773)
List of squashed commits or PRs
===============================
commit 66801ea
Author: hwware <wen.hui.ware@gmail.com>
Date: Mon Jan 13 00:54:31 2020 -0500
typo fix in acl.c
commit 46f55db
Author: Itamar Haber <itamar@redislabs.com>
Date: Sun Sep 6 18:24:11 2020 +0300
Updates a couple of comments
Specifically:
* RM_AutoMemory completed instead of pointing to docs
* Updated link to custom type doc
commit 61a2aa0
Author: xindoo <xindoo@qq.com>
Date: Tue Sep 1 19:24:59 2020 +0800
Correct errors in code comments
commit a5871d1
Author: yz1509 <pro-756@qq.com>
Date: Tue Sep 1 18:36:06 2020 +0800
fix typos in module.c
commit 41eede7
Author: bookug <bookug@qq.com>
Date: Sat Aug 15 01:11:33 2020 +0800
docs: fix typos in comments
commit c303c84
Author: lazy-snail <ws.niu@outlook.com>
Date: Fri Aug 7 11:15:44 2020 +0800
fix spelling in redis.conf
commit 1eb76bf
Author: zhujian <zhujianxyz@gmail.com>
Date: Thu Aug 6 15:22:10 2020 +0800
add a missing 'n' in comment
commit 1530ec2
Author: Daniel Dai <764122422@qq.com>
Date: Mon Jul 27 00:46:35 2020 -0400
fix spelling in tracking.c
commit e517b31
Author: Hunter-Chen <huntcool001@gmail.com>
Date: Fri Jul 17 22:33:32 2020 +0800
Update redis.conf
Co-authored-by: Itamar Haber <itamar@redislabs.com>
commit c300eff
Author: Hunter-Chen <huntcool001@gmail.com>
Date: Fri Jul 17 22:33:23 2020 +0800
Update redis.conf
Co-authored-by: Itamar Haber <itamar@redislabs.com>
commit 4c058a8
Author: 陈浩鹏 <chenhaopeng@heytea.com>
Date: Thu Jun 25 19:00:56 2020 +0800
Grammar fix and clarification
commit 5fcaa81
Author: bodong.ybd <bodong.ybd@alibaba-inc.com>
Date: Fri Jun 19 10:09:00 2020 +0800
Fix typos
commit 4caca9a
Author: Pruthvi P <pruthvi@ixigo.com>
Date: Fri May 22 00:33:22 2020 +0530
Fix typo eviciton => eviction
commit b2a25f6
Author: Brad Dunbar <dunbarb2@gmail.com>
Date: Sun May 17 12:39:59 2020 -0400
Fix a typo.
commit 12842ae
Author: hwware <wen.hui.ware@gmail.com>
Date: Sun May 3 17:16:59 2020 -0400
fix spelling in redis conf
commit ddba07c
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date: Sat May 2 23:25:34 2020 +0100
Correct a "conflicts" spelling error.
commit 8fc7bf2
Author: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
Date: Thu Apr 30 10:25:27 2020 +0900
docs: fix EXPIRE_FAST_CYCLE_DURATION to ACTIVE_EXPIRE_CYCLE_FAST_DURATION
commit 9b2b67a
Author: Brad Dunbar <dunbarb2@gmail.com>
Date: Fri Apr 24 11:46:22 2020 -0400
Fix a typo.
commit 0746f10
Author: devilinrust <63737265+devilinrust@users.noreply.github.com>
Date: Thu Apr 16 00:17:53 2020 +0200
Fix typos in server.c
commit 92b588d
Author: benjessop12 <56115861+benjessop12@users.noreply.github.com>
Date: Mon Apr 13 13:43:55 2020 +0100
Fix spelling mistake in lazyfree.c
commit 1da37aa
Merge: 2d4ba28 af347a8
Author: hwware <wen.hui.ware@gmail.com>
Date: Thu Mar 5 22:41:31 2020 -0500
Merge remote-tracking branch 'upstream/unstable' into expiretypofix
commit 2d4ba28
Author: hwware <wen.hui.ware@gmail.com>
Date: Mon Mar 2 00:09:40 2020 -0500
fix typo in expire.c
commit 1a746f7
Author: SennoYuki <minakami1yuki@gmail.com>
Date: Thu Feb 27 16:54:32 2020 +0800
fix typo
commit 8599b1a
Author: dongheejeong <donghee950403@gmail.com>
Date: Sun Feb 16 20:31:43 2020 +0000
Fix typo in server.c
commit f38d4e8
Author: hwware <wen.hui.ware@gmail.com>
Date: Sun Feb 2 22:58:38 2020 -0500
fix typo in evict.c
commit fe143fc
Author: Leo Murillo <leonardo.murillo@gmail.com>
Date: Sun Feb 2 01:57:22 2020 -0600
Fix a few typos in redis.conf
commit 1ab4d21
Author: viraja1 <anchan.viraj@gmail.com>
Date: Fri Dec 27 17:15:58 2019 +0530
Fix typo in Latency API docstring
commit ca1f70e
Author: gosth <danxuedexing@qq.com>
Date: Wed Dec 18 15:18:02 2019 +0800
fix typo in sort.c
commit a57c06b
Author: ZYunH <zyunhjob@163.com>
Date: Mon Dec 16 22:28:46 2019 +0800
fix-zset-typo
commit b8c92b5
Author: git-hulk <hulk.website@gmail.com>
Date: Mon Dec 16 15:51:42 2019 +0800
FIX: typo in cluster.c, onformation->information
commit 9dd981c
Author: wujm2007 <jim.wujm@gmail.com>
Date: Mon Dec 16 09:37:52 2019 +0800
Fix typo
commit e132d7a
Author: Sebastien Williams-Wynn <s.williamswynn.mail@gmail.com>
Date: Fri Nov 15 00:14:07 2019 +0000
Minor typo change
commit 47f44d5
Author: happynote3966 <01ssrmikururudevice01@gmail.com>
Date: Mon Nov 11 22:08:48 2019 +0900
fix comment typo in redis-cli.c
commit b8bdb0d
Author: fulei <fulei@kuaishou.com>
Date: Wed Oct 16 18:00:17 2019 +0800
Fix a spelling mistake of comments in defragDictBucketCallback
commit 0def46a
Author: fulei <fulei@kuaishou.com>
Date: Wed Oct 16 13:09:27 2019 +0800
fix some spelling mistakes of comments in defrag.c
commit f3596fd
Author: Phil Rajchgot <tophil@outlook.com>
Date: Sun Oct 13 02:02:32 2019 -0400
Typo and grammar fixes
Redis and its documentation are great -- just wanted to submit a few corrections in the spirit of Hacktoberfest. Thanks for all your work on this project. I use it all the time and it works beautifully.
commit 2b928cd
Author: KangZhiDong <worldkzd@gmail.com>
Date: Sun Sep 1 07:03:11 2019 +0800
fix typos
commit 33aea14
Author: Axlgrep <axlgrep@gmail.com>
Date: Tue Aug 27 11:02:18 2019 +0800
Fixed eviction spelling issues
commit e282a80
Author: Simen Flatby <simen@oms.no>
Date: Tue Aug 20 15:25:51 2019 +0200
Update comments to reflect prop name
In the comments the prop is referenced as replica-validity-factor,
but it is really named cluster-replica-validity-factor.
commit 74d1f9a
Author: Jim Green <jimgreen2013@qq.com>
Date: Tue Aug 20 20:00:31 2019 +0800
fix comment error, the code is ok
commit eea1407
Author: Liao Tonglang <liaotonglang@gmail.com>
Date: Fri May 31 10:16:18 2019 +0800
typo fix
fix cna't to can't
commit 0da553c
Author: KAWACHI Takashi <tkawachi@gmail.com>
Date: Wed Jul 17 00:38:16 2019 +0900
Fix typo
commit 7fc8fb6
Author: Michael Prokop <mika@grml.org>
Date: Tue May 28 17:58:42 2019 +0200
Typo fixes
s/familar/familiar/
s/compatiblity/compatibility/
s/ ot / to /
s/itsef/itself/
commit 5f46c9d
Author: zhumoing <34539422+zhumoing@users.noreply.github.com>
Date: Tue May 21 21:16:50 2019 +0800
typo-fixes
typo-fixes
commit 321dfe1
Author: wxisme <850885154@qq.com>
Date: Sat Mar 16 15:10:55 2019 +0800
typo fix
commit b4fb131
Merge: 267e0e6 3df1eb8
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date: Fri Feb 8 22:55:45 2019 +0200
Merge branch 'unstable' of antirez/redis into unstable
commit 267e0e6
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date: Wed Jan 30 21:26:04 2019 +0200
Minor typo fix
commit 30544e7
Author: inshal96 <39904558+inshal96@users.noreply.github.com>
Date: Fri Jan 4 16:54:50 2019 +0500
remove an extra 'a' in the comments
commit 337969d
Author: BrotherGao <yangdongheng11@gmail.com>
Date: Sat Dec 29 12:37:29 2018 +0800
fix typo in redis.conf
commit 9f4b121
Merge: 423a030 e504583
Author: BrotherGao <yangdongheng@xiaomi.com>
Date: Sat Dec 29 11:41:12 2018 +0800
Merge branch 'unstable' of antirez/redis into unstable
commit 423a030
Merge: 42b02b7 46a51cd
Author: 杨东衡 <yangdongheng@xiaomi.com>
Date: Tue Dec 4 23:56:11 2018 +0800
Merge branch 'unstable' of antirez/redis into unstable
commit 42b02b7
Merge: 68c0e6e b8febe6
Author: Dongheng Yang <yangdongheng11@gmail.com>
Date: Sun Oct 28 15:54:23 2018 +0800
Merge pull request #1 from antirez/unstable
update local data
commit 714b589
Author: Christian <crifei93@gmail.com>
Date: Fri Dec 28 01:17:26 2018 +0100
fix typo "resulution"
commit e23259d
Author: garenchan <1412950785@qq.com>
Date: Wed Dec 26 09:58:35 2018 +0800
fix typo: segfauls -> segfault
commit a9359f8
Author: xjp <jianping_xie@aliyun.com>
Date: Tue Dec 18 17:31:44 2018 +0800
Fixed REDISMODULE_H spell bug
commit a12c3e4
Author: jdiaz <jrd.palacios@gmail.com>
Date: Sat Dec 15 23:39:52 2018 -0600
Fixes hyperloglog hash function comment block description
commit 770eb11
Author: 林上耀 <1210tom@163.com>
Date: Sun Nov 25 17:16:10 2018 +0800
fix typo
commit fd97fbb
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date: Fri Nov 23 17:14:01 2018 +0100
Correct "unsupported" typo.
commit a85522d
Author: Jungnam Lee <jungnam.lee@oracle.com>
Date: Thu Nov 8 23:01:29 2018 +0900
fix typo in test comments
commit ade8007
Author: Arun Kumar <palerdot@users.noreply.github.com>
Date: Tue Oct 23 16:56:35 2018 +0530
Fixed grammatical typo
Fixed typo for word 'dictionary'
commit 869ee39
Author: Hamid Alaei <hamid.a85@gmail.com>
Date: Sun Aug 12 16:40:02 2018 +0430
fix documentations: (ThreadSafeContextStart/Stop -> ThreadSafeContextLock/Unlock), minor typo
commit f89d158
Author: Mayank Jain <mayankjain255@gmail.com>
Date: Tue Jul 31 23:01:21 2018 +0530
Updated README.md with some spelling corrections.
Made correction in spelling of some misspelled words.
commit 892198e
Author: dsomeshwar <someshwar.dhayalan@gmail.com>
Date: Sat Jul 21 23:23:04 2018 +0530
typo fix
commit 8a4d780
Author: Itamar Haber <itamar@redislabs.com>
Date: Mon Apr 30 02:06:52 2018 +0300
Fixes some typos
commit e3acef6
Author: Noah Rosamilia <ivoahivoah@gmail.com>
Date: Sat Mar 3 23:41:21 2018 -0500
Fix typo in /deps/README.md
commit 04442fb
Author: WuYunlong <xzsyeb@126.com>
Date: Sat Mar 3 10:32:42 2018 +0800
Fix typo in readSyncBulkPayload() comment.
commit 9f36880
Author: WuYunlong <xzsyeb@126.com>
Date: Sat Mar 3 10:20:37 2018 +0800
replication.c comment: run_id -> replid.
commit f866b4a
Author: Francesco 'makevoid' Canessa <makevoid@gmail.com>
Date: Thu Feb 22 22:01:56 2018 +0000
fix comment typo in server.c
commit 0ebc69b
Author: 줍 <jubee0124@gmail.com>
Date: Mon Feb 12 16:38:48 2018 +0900
Fix typo in redis.conf
Fix `five behaviors` to `eight behaviors` in [this sentence ](antirez/redis@unstable/redis.conf#L564)
commit b50a620
Author: martinbroadhurst <martinbroadhurst@users.noreply.github.com>
Date: Thu Dec 28 12:07:30 2017 +0000
Fix typo in valgrind.sup
commit 7d8f349
Author: Peter Boughton <peter@sorcerersisle.com>
Date: Mon Nov 27 19:52:19 2017 +0000
Update CONTRIBUTING; refer doc updates to redis-doc repo.
commit 02dec7e
Author: Klauswk <klauswk1@hotmail.com>
Date: Tue Oct 24 16:18:38 2017 -0200
Fix typo in comment
commit e1efbc8
Author: chenshi <baiwfg2@gmail.com>
Date: Tue Oct 3 18:26:30 2017 +0800
Correct two spelling errors of comments
commit 93327d8
Author: spacewander <spacewanderlzx@gmail.com>
Date: Wed Sep 13 16:47:24 2017 +0800
Update the comment for OBJ_ENCODING_EMBSTR_SIZE_LIMIT's value
The value of OBJ_ENCODING_EMBSTR_SIZE_LIMIT is 44 now instead of 39.
commit 63d361f
Author: spacewander <spacewanderlzx@gmail.com>
Date: Tue Sep 12 15:06:42 2017 +0800
Fix <prevlen> related doc in ziplist.c
According to the definition of ZIP_BIG_PREVLEN and other related code,
the guard of single byte <prevlen> should be 254 instead of 255.
commit ebe228d
Author: hanael80 <hanael80@gmail.com>
Date: Tue Aug 15 09:09:40 2017 +0900
Fix typo
commit 6b696e6
Author: Matt Robenolt <matt@ydekproductions.com>
Date: Mon Aug 14 14:50:47 2017 -0700
Fix typo in LATENCY DOCTOR output
commit a2ec6ae
Author: caosiyang <caosiyang@qiyi.com>
Date: Tue Aug 15 14:15:16 2017 +0800
Fix a typo: form => from
commit 3ab7699
Author: caosiyang <caosiyang@qiyi.com>
Date: Thu Aug 10 18:40:33 2017 +0800
Fix a typo: replicationFeedSlavesFromMaster() => replicationFeedSlavesFromMasterStream()
commit 72d43ef
Author: caosiyang <caosiyang@qiyi.com>
Date: Tue Aug 8 15:57:25 2017 +0800
fix a typo: servewr => server
commit 707c958
Author: Bo Cai <charpty@gmail.com>
Date: Wed Jul 26 21:49:42 2017 +0800
redis-cli.c typo: conut -> count.
Signed-off-by: Bo Cai <charpty@gmail.com>
commit b9385b2
Author: JackDrogon <jack.xsuperman@gmail.com>
Date: Fri Jun 30 14:22:31 2017 +0800
Fix some spell problems
commit 20d9230
Author: akosel <aaronjkosel@gmail.com>
Date: Sun Jun 4 19:35:13 2017 -0500
Fix typo
commit b167bfc
Author: Krzysiek Witkowicz <krzysiekwitkowicz@gmail.com>
Date: Mon May 22 21:32:27 2017 +0100
Fix #4008 small typo in comment
commit 2b78ac8
Author: Jake Clarkson <jacobwclarkson@gmail.com>
Date: Wed Apr 26 15:49:50 2017 +0100
Correct typo in tests/unit/hyperloglog.tcl
commit b0f1cdb
Author: Qi Luo <qiluo-msft@users.noreply.github.com>
Date: Wed Apr 19 14:25:18 2017 -0700
Fix typo
commit a90b0f9
Author: charsyam <charsyam@naver.com>
Date: Thu Mar 16 18:19:53 2017 +0900
fix typos
fix typos
fix typos
commit 8430a79
Author: Richard Hart <richardhart92@gmail.com>
Date: Mon Mar 13 22:17:41 2017 -0400
Fixed log message typo in listenToPort.
commit 481a1c2
Author: Vinod Kumar <kumar003vinod@gmail.com>
Date: Sun Jan 15 23:04:51 2017 +0530
src/db.c: Correct "save" -> "safe" typo
commit 586b4d3
Author: wangshaonan <wshn13@gmail.com>
Date: Wed Dec 21 20:28:27 2016 +0800
Fix typo they->the in helloworld.c
commit c1c4b5e
Author: Jenner <hypxm@qq.com>
Date: Mon Dec 19 16:39:46 2016 +0800
typo error
commit 1ee1a3f
Author: tielei <43289893@qq.com>
Date: Mon Jul 18 13:52:25 2016 +0800
fix some comments
commit 11a41fb
Author: Otto Kekäläinen <otto@seravo.fi>
Date: Sun Jul 3 10:23:55 2016 +0100
Fix spelling in documentation and comments
commit 5fb5d82
Author: francischan <f1ancis621@gmail.com>
Date: Tue Jun 28 00:19:33 2016 +0800
Fix outdated comments about redis.c file.
It should now refer to server.c file.
commit 6b254bc
Author: lmatt-bit <lmatt123n@gmail.com>
Date: Thu Apr 21 21:45:58 2016 +0800
Refine the comment of dictRehashMilliseconds func
SLAVECONF->REPLCONF in comment - by andyli029
commit ee9869f
Author: clark.kang <charsyam@naver.com>
Date: Tue Mar 22 11:09:51 2016 +0900
fix typos
commit f7b3b11
Author: Harisankar H <harisankarh@gmail.com>
Date: Wed Mar 9 11:49:42 2016 +0530
Typo correction: "faield" --> "failed"
Typo correction: "faield" --> "failed"
commit 3fd40fc
Author: Itamar Haber <itamar@redislabs.com>
Date: Thu Feb 25 10:31:51 2016 +0200
Fixes a typo in comments
commit 621c160
Author: Prayag Verma <prayag.verma@gmail.com>
Date: Mon Feb 1 12:36:20 2016 +0530
Fix typo in Readme.md
Spelling mistakes -
`eviciton` > `eviction`
`familar` > `familiar`
commit d7d07d6
Author: WonCheol Lee <toctoc21c@gmail.com>
Date: Wed Dec 30 15:11:34 2015 +0900
Typo fixed
commit a4dade7
Author: Felix Bünemann <buenemann@louis.info>
Date: Mon Dec 28 11:02:55 2015 +0100
[ci skip] Improve supervised upstart config docs
This mentions that "expect stop" is required for supervised upstart
to work correctly. See http://upstart.ubuntu.com/cookbook/#expect-stop
for an explanation.
commit d9caba9
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:30:03 2015 +1100
README: Remove trailing whitespace
commit 72d42e5
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:29:32 2015 +1100
README: Fix typo. th => the
commit dd6e957
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:29:20 2015 +1100
README: Fix typo. familar => familiar
commit 3a12b23
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:28:54 2015 +1100
README: Fix typo. eviciton => eviction
commit 2d1d03b
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:21:45 2015 +1100
README: Fix typo. sever => server
commit 3973b06
Author: Itamar Haber <itamar@garantiadata.com>
Date: Sat Dec 19 17:01:20 2015 +0200
Typo fix
commit 4f2e460
Author: Steve Gao <fu@2token.com>
Date: Fri Dec 4 10:22:05 2015 +0800
Update README - fix typos
commit b21667c
Author: binyan <binbin.yan@nokia.com>
Date: Wed Dec 2 22:48:37 2015 +0800
delete redundancy color judge in sdscatcolor
commit 88894c7
Author: binyan <binbin.yan@nokia.com>
Date: Wed Dec 2 22:14:42 2015 +0800
the example output shoule be HelloWorld
commit 2763470
Author: binyan <binbin.yan@nokia.com>
Date: Wed Dec 2 17:41:39 2015 +0800
modify error word keyevente
Signed-off-by: binyan <binbin.yan@nokia.com>
commit 0847b3d
Author: Bruno Martins <bscmartins@gmail.com>
Date: Wed Nov 4 11:37:01 2015 +0000
typo
commit bbb9e9e
Author: dawedawe <dawedawe@gmx.de>
Date: Fri Mar 27 00:46:41 2015 +0100
typo: zimap -> zipmap
commit 5ed297e
Author: Axel Advento <badwolf.bloodseeker.rev@gmail.com>
Date: Tue Mar 3 15:58:29 2015 +0800
Fix 'salve' typos to 'slave'
commit edec9d6
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date: Wed Jun 12 14:12:47 2019 +0200
Update README.md
Co-Authored-By: Qix <Qix-@users.noreply.github.com>
commit 692a7af
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date: Tue May 28 14:32:04 2019 +0200
grammar
commit d962b0a
Author: Nick Frost <nickfrostatx@gmail.com>
Date: Wed Jul 20 15:17:12 2016 -0700
Minor grammar fix
commit 24fff01aaccaf5956973ada8c50ceb1462e211c6 (typos)
Author: Chad Miller <chadm@squareup.com>
Date: Tue Sep 8 13:46:11 2020 -0400
Fix faulty comment about operation of unlink()
commit 3cd5c1f3326c52aa552ada7ec797c6bb16452355
Author: Kevin <kevin.xgr@gmail.com>
Date: Wed Nov 20 00:13:50 2019 +0800
Fix typo in server.c.
From a83af59 Mon Sep 17 00:00:00 2001
From: wuwo <wuwo@wacai.com>
Date: Fri, 17 Mar 2017 20:37:45 +0800
Subject: [PATCH] falure to failure
From c961896 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=B7=A6=E6=87=B6?= <veficos@gmail.com>
Date: Sat, 27 May 2017 15:33:04 +0800
Subject: [PATCH] fix typo
From e600ef2 Mon Sep 17 00:00:00 2001
From: "rui.zou" <rui.zou@yunify.com>
Date: Sat, 30 Sep 2017 12:38:15 +0800
Subject: [PATCH] fix a typo
From c7d07fa Mon Sep 17 00:00:00 2001
From: Alexandre Perrin <alex@kaworu.ch>
Date: Thu, 16 Aug 2018 10:35:31 +0200
Subject: [PATCH] deps README.md typo
From b25cb67 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 10:55:37 +0300
Subject: [PATCH 1/2] fix typos in header
From ad28ca6 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 11:02:36 +0300
Subject: [PATCH 2/2] fix typos
commit 34924cdedd8552466fc22c1168d49236cb7ee915
Author: Adrian Lynch <adi_ady_ade@hotmail.com>
Date: Sat Apr 4 21:59:15 2015 +0100
Typos fixed
commit fd2a1e7
Author: Jan <jsteemann@users.noreply.github.com>
Date: Sat Oct 27 19:13:01 2018 +0200
Fix typos
Fix typos
commit e14e47c1a234b53b0e103c5f6a1c61481cbcbb02
Author: Andy Lester <andy@petdance.com>
Date: Fri Aug 2 22:30:07 2019 -0500
Fix multiple misspellings of "following"
commit 79b948ce2dac6b453fe80995abbcaac04c213d5a
Author: Andy Lester <andy@petdance.com>
Date: Fri Aug 2 22:24:28 2019 -0500
Fix misspelling of create-cluster
commit 1fffde52666dc99ab35efbd31071a4c008cb5a71
Author: Andy Lester <andy@petdance.com>
Date: Wed Jul 31 17:57:56 2019 -0500
Fix typos
commit 204c9ba9651e9e05fd73936b452b9a30be456cfe
Author: Xiaobo Zhu <xiaobo.zhu@shopee.com>
Date: Tue Aug 13 22:19:25 2019 +0800
fix typos
Squashed commit of the following:
commit 1d9aaf8
Author: danmedani <danmedani@gmail.com>
Date: Sun Aug 2 11:40:26 2015 -0700
README typo fix.
Squashed commit of the following:
commit 32bfa7c
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date: Mon Jul 6 21:15:08 2015 +0200
Fixed grammer
Squashed commit of the following:
commit b24f69c
Author: Sisir Koppaka <sisir.koppaka@gmail.com>
Date: Mon Mar 2 22:38:45 2015 -0500
utils/hashtable/rehashing.c: Fix typos
Squashed commit of the following:
commit 4e04082
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date: Mon Mar 23 08:22:21 2015 +0000
Small config file documentation improvements
Squashed commit of the following:
commit acb8773
Author: ctd1500 <ctd1500@gmail.com>
Date: Fri May 8 01:52:48 2015 -0700
Typo and grammar fixes in readme
commit 2eb75b6
Author: ctd1500 <ctd1500@gmail.com>
Date: Fri May 8 01:36:18 2015 -0700
fixed redis.conf comment
Squashed commit of the following:
commit a8249a2
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Date: Fri Dec 11 11:39:52 2015 +0530
Revise correction of typos.
Squashed commit of the following:
commit 3c02028
Author: zhaojun11 <zhaojun11@jd.com>
Date: Wed Jan 17 19:05:28 2018 +0800
Fix typos include two code typos in cluster.c and latency.c
Squashed commit of the following:
commit 9dba47c
Author: q191201771 <191201771@qq.com>
Date: Sat Jan 4 11:31:04 2020 +0800
fix function listCreate comment in adlist.c
Update src/server.c
commit 2c7c2cb536e78dd211b1ac6f7bda00f0f54faaeb
Author: charpty <charpty@gmail.com>
Date: Tue May 1 23:16:59 2018 +0800
server.c typo: modules system dictionary type comment
Signed-off-by: charpty <charpty@gmail.com>
commit a8395323fb63cb59cb3591cb0f0c8edb7c29a680
Author: Itamar Haber <itamar@redislabs.com>
Date: Sun May 6 00:25:18 2018 +0300
Updates test_helper.tcl's help with undocumented options
Specifically:
* Host
* Port
* Client
commit bde6f9ced15755cd6407b4af7d601b030f36d60b
Author: wxisme <850885154@qq.com>
Date: Wed Aug 8 15:19:19 2018 +0800
fix comments in deps files
commit 3172474ba991532ab799ee1873439f3402412331
Author: wxisme <850885154@qq.com>
Date: Wed Aug 8 14:33:49 2018 +0800
fix some comments
commit 01b6f2b6858b5cf2ce4ad5092d2c746e755f53f0
Author: Thor Juhasz <thor@juhasz.pro>
Date: Sun Nov 18 14:37:41 2018 +0100
Minor fixes to comments
Found some parts a little unclear on a first read, which prompted me to have a better look at the file and fix some minor things I noticed.
Fixing minor typos and grammar. There are no changes to configuration options.
These changes are only meant to help the user better understand the explanations to the various configuration options
2020-09-10 13:43:38 +03:00
|
|
|
* On success the function returns the number of keys removed from the
|
2015-09-28 10:47:45 +02:00
|
|
|
* database(s). Otherwise -1 is returned in the specific case the
|
|
|
|
* DB number is out of range, and errno is set to EINVAL. */
|
Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953)
2021-12-21 16:13:29 +02:00
|
|
|
long long emptyData(int dbnum, int flags, void(callback)(dict *)) {
|
2018-07-25 18:13:34 +08:00
|
|
|
int async = (flags & EMPTYDB_ASYNC);
|
Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953)
2021-12-21 16:13:29 +02:00
|
|
|
int with_functions = !(flags & EMPTYDB_NOFUNCTIONS);
|
2024-04-05 16:59:55 -07:00
|
|
|
ValkeyModuleFlushInfoV1 fi = {VALKEYMODULE_FLUSHINFO_VERSION, !async, dbnum};
|
2010-06-22 00:07:48 +02:00
|
|
|
long long removed = 0;
|
|
|
|
|
2015-09-28 10:47:45 +02:00
|
|
|
if (dbnum < -1 || dbnum >= server.dbnum) {
|
|
|
|
errno = EINVAL;
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2020-12-02 19:56:11 +08:00
|
|
|
/* Fire the flushdb modules event. */
|
2024-04-05 16:59:55 -07:00
|
|
|
moduleFireServerEvent(VALKEYMODULE_EVENT_FLUSHDB, VALKEYMODULE_SUBEVENT_FLUSHDB_START, &fi);
|
2020-02-03 17:19:00 +05:30
|
|
|
|
2020-12-02 19:56:11 +08:00
|
|
|
/* Make sure the WATCHed keys are affected by the FLUSH* commands.
|
|
|
|
* Note that we need to call the function while the keys are still
|
|
|
|
* there. */
|
2020-12-23 19:13:12 -08:00
|
|
|
signalFlushedDb(dbnum, async);
|
2019-07-30 11:20:51 +02:00
|
|
|
|
2024-04-09 01:24:03 -07:00
|
|
|
/* Empty the database structure. */
|
2020-12-02 19:56:11 +08:00
|
|
|
removed = emptyDbStructure(server.db, dbnum, async, callback);
|
|
|
|
|
|
|
|
if (dbnum == -1) flushReplicaKeysWithExpireList();
|
|
|
|
|
Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953)
2021-12-21 16:13:29 +02:00
|
|
|
if (with_functions) {
|
|
|
|
serverAssert(dbnum == -1);
|
Redis Function Libraries (#10004)
# Redis Function Libraries
This PR implements Redis Function Libraries as described in: https://github.com/redis/redis/issues/9906.
The purpose of libraries is to provide better code sharing between functions by allowing multiple
functions to be created in a single command. Functions that were created together can safely share code
with each other without worrying about compatibility issues and versioning.
Creating a new library is done using the 'FUNCTION LOAD' command (the full API is described below).
This PR introduces a new struct called libraryInfo; libraryInfo holds information about a library:
* name - name of the library
* engine - engine used to create the library
* code - library code
* description - library description
* functions - the functions exposed by the library
When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo.
Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo.
As a result, the engine will create one or more functions by calling 'libraryCreateFunction'.
The new function will be added to the newly created libraryInfo. So far everything happens
locally on the libraryInfo, so it is easy to abort the operation (in case of an error) by simply
freeing the libraryInfo. After the library info is fully constructed we start the joining phase, in
which we join the new library to the other libraries that currently exist on Redis.
The joining phase makes sure there is no function collision and adds the library to the
librariesCtx (renamed from functionCtx). LibrariesCtx is used all around the code in the exact
same way as functionCtx was used (with respect to RDB loading, replication, ...).
The only difference is that apart from the function dictionary (which maps function name to functionInfo
object), the librariesCtx also contains a libraries dictionary that maps library name to libraryInfo object.
## New API
### FUNCTION LOAD
`FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
Create a new library with the given parameters:
* ENGINE - Engine name to use to create the library.
* LIBRARY NAME - The new library name.
* REPLACE - If the library already exists, replace it.
* DESCRIPTION - Library description.
* CODE - Library code.
Return "OK" on success, or error on the following cases:
* Library name already taken and REPLACE was not used
* Name collision with another existing library (even if replace was uses)
* Library registration failed by the engine (usually compilation error)
## Changed API
### FUNCTION LIST
`FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
The command was modified to also allow getting library code (so the `FUNCTION INFO` command is no longer
needed and was removed). In addition, the command gets an optional argument, `LIBRARYNAME`, that allows you to
only get libraries that match the given `LIBRARYNAME` pattern. By default, it returns all libraries.
### INFO MEMORY
Added number of libraries to `INFO MEMORY`
### Command flags
The `DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands
as commands that add new data to the dataset (functions are data) and so we want to disallow
running those commands on OOM.
## Removed API
* FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
* FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899
## Lua engine changes
When the Lua engine gets the code given to the `FUNCTION LOAD` command, it immediately runs it; we call
this run the loading run. The loading run is not a usual script run: it is not possible to invoke any
Redis command from within the load run.
Instead there is a new API provided by the `library` object. The new APIs:
* `redis.log` - behaves the same as `redis.log`
* `redis.register_function` - register a new function to the library
The loading run's purpose is to register functions using the new `redis.register_function` API.
Any attempt to use any other API will result in an error. In addition, the load run has a time
limit of 500ms; an error is raised on timeout and the entire operation is aborted.
### `redis.register_function`
`redis.register_function(<function_name>, <callback>, [<description>])`
This new API allows users to register a new function that will be linked to the newly created library.
This API can only be called during the load run (see definition above). Any attempt to use it outside
of the load run will result in an error.
The parameters passed to the API are:
* function_name - Function name (must be a Lua string)
* callback - Lua function object that will be called when the function is invoked using fcall/fcall_ro
* description - Function description, optional (must be a Lua string).
### Example
The following example creates a library called `lib` with 2 functions, `f1` and `f2`, returning 1 and 2 respectively:
```
local function f1(keys, args)
return 1
end
local function f2(keys, args)
return 2
end
redis.register_function('f1', f1)
redis.register_function('f2', f2)
```
Notice: Unlike `eval`, functions inside a library get the KEYS and ARGV as arguments to the
functions and not as globals.
### Technical Details
During the load run we only want the user to be able to call a white list of APIs. This way, in
the future, if new APIs are added, they will not be available to the load run
unless specifically added to this white list. We put the white list on the `library` object and
make sure the `library` object is only available to the load run by using the [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API. This API allows us to set
the `globals` of a function (and all the functions it creates). Before starting the load run we
create a fresh Lua table (call it `g`) that only contains the `library` API (we make sure
to set global protection on this table just like the general global protection that already exists
today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv)
to set `g` as the global table of the load run. After the load run finishes we update `g`'s
metatable and set its `__index` and `__newindex` functions to be `_G` (Lua default globals);
we also pop out the `library` object as we do not need it anymore.
This way, any function that was created during the load run (and will be invoked using `fcall`) will
see the default globals as it expects to see them and will not have the `library` API anymore.
An important outcome of this new approach is that now we can achieve a distinct global table
for each library (it is not yet like that, but it is very easy to achieve now). In the future we can
decide to remove global protection because globals of different libraries will not collide, or we
can choose to give a different API to different libraries based on some configuration or input.
Notice that this technique was meant to prevent errors and was not meant to prevent a malicious
user from exploiting it. For example, the load run can still save the `library` object in some local
variable and then use it in an `fcall` context. To prevent such a malicious use, the C code also makes
sure it is running in the right context and, if not, raises an error.
2022-01-06 13:39:38 +02:00
|
|
|
functionsLibCtxClearCurrent(async);
|
Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953)
2021-12-21 16:13:29 +02:00
|
|
|
}
|
|
|
|
|
2020-12-02 19:56:11 +08:00
|
|
|
/* Also fire the end event. Note that this event will fire almost
|
|
|
|
* immediately after the start event if the flush is asynchronous. */
|
2024-04-05 16:59:55 -07:00
|
|
|
moduleFireServerEvent(VALKEYMODULE_EVENT_FLUSHDB, VALKEYMODULE_SUBEVENT_FLUSHDB_END, &fi);
|
2020-12-02 19:56:11 +08:00
|
|
|
|
|
|
|
return removed;
|
|
|
|
}
|
|
|
|
|
Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323)
For diskless replication in swapdb mode, considering we already spend replica memory
on a backup of the current db to restore in case of failure, we can get the following benefits
by instead swapping databases only once the db has been successfully transferred from the master:
- Avoid the `LOADING` response during failed and successful synchronization for cases where the
replica is already up and running with data.
- Faster total time of diskless replication, because we move from Transfer + Flush + Load
to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could also be implemented for disk-based replication with similar benefits if consumers are willing
to spend the extra memory.
General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode only kicks in if the replica is syncing from a master that has the same
repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading`, to differentiate it from blocking loading.
- Slot to Key mapping is now a field of `redisDb`, as it's more natural to access it from both server.db
and the tempDb that is passed around.
- Because this only affects replicas, we assume that if they are not read-only and accept write commands
during replication, those writes are lost after SYNC the same way as before, but we're still denying CONFIG SET
here anyway to avoid complications.
Considerations for review:
- We have many cases where the server.loading flag is used, and even though I tried my best, there may
be cases where async_loading should be checked as well and cases where it shouldn't (this would require
a very good understanding of the whole code).
- Several places that had different behavior depending on the loading flag were actually meant to just
handle commands coming from the AOF client differently than ones coming from real clients; these were changed
to check CLIENT_ID_AOF instead.
**Additional for Release Notes**
- Bugfix: server.dirty was not incremented for any kind of diskless replication, so it wouldn't
contribute to triggering the next database SAVE.
- New flag for the RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING.
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
to allow modules to declare they support diskless replication with async loading (when absent, we fall
back to disk-based loading).
Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2021-11-04 09:46:50 +01:00
|
|
|
/* Initialize temporary db on replica for use during diskless replication. */
|
2024-04-03 10:02:43 +07:00
|
|
|
serverDb *initTempDb(void) {
|
Fix dictionary use-after-free in active expire and make kvstore iter to respect EMPTY flag (#13135)
After #13072, there is a use-after-free error: in expireScanCallback we
delete the dict, and then dictScan continues to use it, for example when we
call `dictResumeRehashing(d)` at the end, and this caused an error.
In this PR, in freeDictIfNeeded, if the dict's pauserehash is set, we don't
delete the dict yet; instead, when the scan returns, we try to delete it again
(a sketch of this idea follows this message).
At the same time, we noticed there are similar problems with iterators.
We may also delete elements during the iteration process, causing the dict
to be deleted, so the iterator-related part of the PR has also been modified.
dictResetIterator was also missing from the previous kvstoreIteratorNextDict;
we currently have no scenario where elements are deleted during a kvstoreIterator
run, but it is handled together to avoid future problems. Some simple
tests were added to verify the changes.
In addition, the modification in #13072 omitted initTempDb and emptyDbAsync,
and they were also added. This PR also removes the slow flag from the expire
test (it consumes 1.3s) so that problems can be found in CI in the future.
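A rough illustration of the deferral idea described above (a hedged sketch: `tryFreeEmptyDictSketch` is a hypothetical helper, not the actual freeDictIfNeeded code): an empty dict is only released when no scan or iterator has rehashing paused on it; otherwise the caller retries after the scan returns.
```
#include "dict.h"

/* Hypothetical sketch: release an empty dict only when it is safe to do so.
 * Returns 1 if the dict was freed, 0 if it was kept (either non-empty, or a
 * scan/iterator has rehashing paused on it and may still reference it). */
static int tryFreeEmptyDictSketch(dict *d) {
    if (dictSize(d) != 0) return 0;       /* still holds entries, keep it       */
    if (d->pauserehash != 0) return 0;    /* a scan/iterator is using it: defer */
    dictRelease(d);
    return 1;
}
```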
2024-03-18 23:41:54 +08:00
|
|
|
int slot_count_bits = 0;
|
|
|
|
int flags = KVSTORE_ALLOCATE_DICTS_ON_DEMAND;
|
|
|
|
if (server.cluster_enabled) {
|
|
|
|
slot_count_bits = CLUSTER_SLOT_MASK_BITS;
|
|
|
|
flags |= KVSTORE_FREE_EMPTY_DICTS;
|
|
|
|
}
|
2024-04-03 10:02:43 +07:00
|
|
|
serverDb *tempDb = zcalloc(sizeof(serverDb) * server.dbnum);
|
2020-12-02 19:56:11 +08:00
|
|
|
for (int i = 0; i < server.dbnum; i++) {
|
Set the correct id for tempDb (#12947)
background: some modules need to know the `dbid` information, such as
the function used during RDB loading:
```
robj *rdbLoadObject(int rdbtype, rio *rdb, sds key, int dbid, int *error) {
....
moduleInitIOContext(io,mt,rdb,&keyobj,dbid);
```
However, during replication, the "tempDb" created for diskless RDB
loading is not correctly set with the dbid. This leads to passing the
wrong dbid to the `rdbLoadObject` function (as tempDb uses zcalloc, all
ids are 0).
```
disklessLoadInitTempDb()->rdbLoadRioWithLoadingCtx()->
/* Read value */
val = rdbLoadObject(type,rdb,key,db->id,&error);
```
To fix it, set the correct ID (relative index) for the tempdb.
2024-01-22 11:47:51 +08:00
|
|
|
tempDb[i].id = i;
|
Use metadata to handle the reference relationship between kvstore and dict (#966)
Feature `one-dict-per-slot` refactors the database, and part of it
involved splitting the rehashing list from the global level back to the
database level, or more specifically, the kvstore level. This change is
fine, and it also simplifies the process of swapping databases, which is
good. And it should not have a major impact on the efficiency of
incremental rehashing.
To implement the kvstore-level rehashing list, each `dict` under the
`kvstore` needs to know which `kvstore` it belongs to. However, kvstore did
not insert the reference relationship into the `dict` itself, instead,
it placed it in the `dictType`. In my view, this is a somewhat odd way.
Theoretically, `dictType` is just a collection of function handles, a
kind of virtual type that can be referenced globally, not an entity. But
now the `dictType` is instantiated, with each `kvstore` owning an actual
`dictType`, which in turn holds a reverse reference to the `kvstore`'s
resource pointer. This design is somewhat uncomfortable for me.
I think the `dictType` should not be instantiated. The references
between actual resources (`kvstore` and `dict`) should occur between
specific objects, rather than force materializing the `dictType`, which
is supposed to be virtual.
---------
Signed-off-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
2024-09-02 22:35:24 +08:00
|
|
|
tempDb[i].keys = kvstoreCreate(&kvstoreKeysDictType, slot_count_bits, flags);
|
|
|
|
tempDb[i].expires = kvstoreCreate(&kvstoreExpiresDictType, slot_count_bits, flags);
|
2015-09-28 10:47:45 +02:00
|
|
|
}
|
2020-02-03 17:19:00 +05:30
|
|
|
|
Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323)
2021-11-04 09:46:50 +01:00
|
|
|
return tempDb;
|
2020-12-02 19:56:11 +08:00
|
|
|
}
|
|
|
|
|
Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323)
2021-11-04 09:46:50 +01:00
|
|
|
/* Discard tempDb, this can be slow (similar to FLUSHALL), but it's always async. */
|
2024-04-03 10:02:43 +07:00
|
|
|
void discardTempDb(serverDb *tempDb, void(callback)(dict *)) {
|
Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323)
2021-11-04 09:46:50 +01:00
|
|
|
int async = 1;
|
2019-10-23 10:22:46 +02:00
|
|
|
|
Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323)
2021-11-04 09:46:50 +01:00
|
|
|
/* Release temp DBs. */
|
|
|
|
emptyDbStructure(tempDb, -1, async, callback);
|
2020-12-02 19:56:11 +08:00
|
|
|
for (int i = 0; i < server.dbnum; i++) {
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
kvstoreRelease(tempDb[i].keys);
|
|
|
|
kvstoreRelease(tempDb[i].expires);
|
2020-12-02 19:56:11 +08:00
|
|
|
}
|
|
|
|
|
Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323)
2021-11-04 09:46:50 +01:00
|
|
|
zfree(tempDb);
|
2019-07-01 15:22:29 +03:00
|
|
|
}
|
|
|
|
|
2015-07-26 15:20:46 +02:00
|
|
|
int selectDb(client *c, int id) {
|
2010-06-22 00:07:48 +02:00
|
|
|
if (id < 0 || id >= server.dbnum) return C_ERR;
|
|
|
|
c->db = &server.db[id];
|
2015-07-26 23:17:55 +02:00
|
|
|
return C_OK;
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2023-05-02 17:31:32 -07:00
|
|
|
long long dbTotalServerKeyCount(void) {
|
2019-07-01 15:22:29 +03:00
|
|
|
long long total = 0;
|
|
|
|
int j;
|
|
|
|
for (j = 0; j < server.dbnum; j++) {
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
2024-02-05 22:21:35 +07:00
|
|
|
total += kvstoreSize(server.db[j].keys);
|
2019-07-01 15:22:29 +03:00
|
|
|
}
|
|
|
|
return total;
|
|
|
|
}
|
|
|
|
|
2010-12-29 19:39:42 +01:00
|
|
|
/*-----------------------------------------------------------------------------
|
|
|
|
* Hooks for key space changes.
|
|
|
|
*
|
|
|
|
* Every time a key in the database is modified the function
|
|
|
|
* signalModifiedKey() is called.
|
|
|
|
*
|
|
|
|
* Every time a DB is flushed the function signalFlushDb() is called.
|
|
|
|
*----------------------------------------------------------------------------*/
|
|
|
|
|
2020-04-21 10:51:46 +02:00
|
|
|
/* Note that the 'c' argument may be NULL if the key was modified out of
|
|
|
|
* a context of a client. */
|
2024-04-03 10:02:43 +07:00
|
|
|
void signalModifiedKey(client *c, serverDb *db, robj *key) {
|
2010-12-29 19:39:42 +01:00
|
|
|
touchWatchedKey(db, key);
|
2021-10-07 20:13:42 +08:00
|
|
|
trackingInvalidateKey(c, key, 1);
|
2010-12-29 19:39:42 +01:00
|
|
|
}
|
|
|
|
|
2020-12-23 19:13:12 -08:00
|
|
|
void signalFlushedDb(int dbid, int async) {
|
2021-01-04 20:48:28 +08:00
|
|
|
int startdb, enddb;
|
|
|
|
if (dbid == -1) {
|
|
|
|
startdb = 0;
|
|
|
|
enddb = server.dbnum - 1;
|
|
|
|
} else {
|
|
|
|
startdb = enddb = dbid;
|
|
|
|
}
|
|
|
|
|
|
|
|
for (int j = startdb; j <= enddb; j++) {
|
2022-10-18 18:50:02 +02:00
|
|
|
scanDatabaseForDeletedKeys(&server.db[j], NULL);
|
2021-01-04 20:48:28 +08:00
|
|
|
touchAllWatchedKeysInDb(&server.db[j], NULL);
|
|
|
|
}
|
|
|
|
|
2020-12-23 19:13:12 -08:00
|
|
|
trackingInvalidateKeysOnFlush(async);
|
Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323)
2021-11-04 09:46:50 +01:00
|
|
|
|
|
|
|
/* Changes in this method may take place in swapMainDbWithTempDb as well,
|
|
|
|
* where we execute similar calls, but with subtle differences as it's
|
|
|
|
* not simply flushing db. */
|
2010-12-29 19:39:42 +01:00
|
|
|
}
|
|
|
|
|
2010-06-22 00:07:48 +02:00
|
|
|
/*-----------------------------------------------------------------------------
|
|
|
|
* Type agnostic commands operating on the key space
|
|
|
|
*----------------------------------------------------------------------------*/
|
|
|
|
|
2023-10-12 20:34:08 +08:00
|
|
|
/* Return the set of flags to use for the emptyData() call for FLUSHALL
|
2015-09-28 11:05:39 +02:00
|
|
|
* and FLUSHDB commands.
|
|
|
|
*
|
2021-01-15 21:32:58 +08:00
|
|
|
* sync: flushes the database in a sync manner.
|
|
|
|
* async: flushes the database in an async manner.
|
|
|
|
* no option: determine sync or async according to the value of lazyfree-lazy-user-flush.
|
2015-09-28 11:05:39 +02:00
|
|
|
*
|
|
|
|
* On success C_OK is returned and the flags are stored in *flags, otherwise
|
|
|
|
* C_ERR is returned and the function sends an error to the client. */
|
|
|
|
int getFlushCommandFlags(client *c, int *flags) {
|
|
|
|
/* Parse the optional ASYNC option. */
|
2021-01-15 21:32:58 +08:00
|
|
|
if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr, "sync")) {
|
|
|
|
*flags = EMPTYDB_NO_FLAGS;
|
|
|
|
} else if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr, "async")) {
|
2015-09-28 11:05:39 +02:00
|
|
|
*flags = EMPTYDB_ASYNC;
|
2021-01-15 21:32:58 +08:00
|
|
|
} else if (c->argc == 1) {
|
|
|
|
*flags = server.lazyfree_lazy_user_flush ? EMPTYDB_ASYNC : EMPTYDB_NO_FLAGS;
|
2015-09-28 11:05:39 +02:00
|
|
|
} else {
|
2021-01-15 21:32:58 +08:00
|
|
|
addReplyErrorObject(c, shared.syntaxerr);
|
|
|
|
return C_ERR;
|
2015-09-28 11:05:39 +02:00
|
|
|
}
|
|
|
|
return C_OK;
|
|
|
|
}
|
|
|
|
|
2019-11-03 17:35:35 +02:00
|
|
|
/* Flushes the whole server data set. */
|
|
|
|
void flushAllDataAndResetRDB(int flags) {
|
Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953)
2021-12-21 16:13:29 +02:00
|
|
|
server.dirty += emptyData(-1, flags, NULL);
|
Refactor fork child related infra, unify child pid
This is a refactoring commit and isn't supposed to have any actual impact.
It does the following:
- keep just one server struct fork child pid variable instead of 3
- have one server struct variable indicating the purpose of the current fork
child.
- redisFork is now responsible for updating the server struct with the pid,
which means it can be the one that calls updateDictResizePolicy
- move child info pipe handling into redisFork instead of having it
repeated outside
- there are two classes of fork purposes: a mutually exclusive group (AOF, RDB,
Module), and one that can create several forks to coexist in parallel (LDB,
but maybe Modules some day too; the Module API allows for that).
- minor fix to killRDBChild:
unlike killAppendOnlyChild and TerminateModuleForkChild, killRDBChild
doesn't clear the pid variable or call wait4, so checkChildrenDone does
the cleanup for it.
This commit removes the explicit calls to rdbRemoveTempFile, closeChildInfoPipe and
updateDictResizePolicy, which didn't do any harm but were unnecessary.
2020-12-16 15:14:04 +02:00
|
|
|
if (server.child_type == CHILD_TYPE_RDB) killRDBChild();
|
2019-11-03 17:35:35 +02:00
|
|
|
if (server.saveparamslen > 0) {
|
|
|
|
rdbSaveInfo rsi, *rsiptr;
|
|
|
|
rsiptr = rdbPopulateSaveInfo(&rsi);
|
Reclaim page cache of RDB file (#11248)
# Background
The RDB file is usually generated and used once and seldom used again, but its content resides in the page cache until the OS evicts it. A potential problem is that once free memory is exhausted, the OS has to reclaim memory from the page cache or swap anonymous pages out, which may cause jitter in the Redis service.
Suppose a concrete scenario: a high-capacity machine hosts many redis instances, and we're upgrading the Redis instances together. The page cache on the host machine grows as RDBs are generated. Once free memory drops to the low watermark (which is more likely to happen on an older Linux kernel like 3.10; before [watermark_scale_factor](https://lore.kernel.org/lkml/1455813719-2395-1-git-send-email-hannes@cmpxchg.org/) was introduced, the `low watermark` is linear to the `min watermark`, and there is not much buffer space for `kswapd` to be woken up to reclaim memory), a `direct reclaim` happens, which means the process stalls waiting for memory allocation.
# What the PR does
The PR introduces the capability to reclaim the cache when the RDB is operated on. Generally there are two cases, reading and writing the RDB. For reads it's a little messy to do the reclaim incrementally, so the reclaim is done in one go in the background after the load is finished, to avoid blocking the work thread. For writes, incremental reclaim amortizes the work, so there is no need to put it in the background, and the peak cache watermark can be reduced this way.
Two cases are addressed specially, replication and restart, for both of which the cache is leveraged to speed up processing, so the reclaim is postponed to the right time. To do this, a flag is added to `rdbSave` and `rdbLoad` to control whether the cache needs to be kept, with the default value false.
# Something worth noting
1. Though `posix_fadvise` is a POSIX standard, only a few platforms support it, e.g. Linux, FreeBSD 10.0.
2. On Linux `posix_fadvise` only takes effect on written-back pages, so a `sync` (or `fsync`, `fdatasync`) is needed to flush dirty pages before `posix_fadvise` if we reclaim the write cache (a sketch of this sequence follows this message).
# About tests
A unit test is added to verify the effect of `posix_fadvise`.
In the integration test the overall cache increase is checked, as well as the cache backed by the RDB, and a specific TCL test is executed in an isolated Github Actions job.
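To make point 2 concrete, here is a minimal sketch of the write-back-then-advise sequence (`reclaimRdbCacheSketch` is a hypothetical helper, not the code used by `rdbSave`/`rdbLoad`):
```
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical helper (sketch only): drop the page cache backing an RDB
 * file we just wrote through 'fd'. On Linux, POSIX_FADV_DONTNEED only
 * affects clean pages, so dirty pages are written back first. */
static int reclaimRdbCacheSketch(int fd) {
    if (fdatasync(fd) == -1) return -1;            /* flush dirty pages */
    /* offset 0 and len 0 mean "advise over the whole file". */
    return posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
}
```
In the load (read) case the cached pages are already clean, so the advise alone is enough; the write-back step matters when reclaiming the cache of a file that was just written.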
2023-02-12 15:23:29 +08:00
|
|
|
rdbSave(REPLICA_REQ_NONE, server.rdb_filename, rsiptr, RDBFLAGS_NONE);
|
2019-11-03 17:35:35 +02:00
|
|
|
}
|
2020-12-15 15:30:24 +08:00
|
|
|
|
2019-11-03 17:35:35 +02:00
|
|
|
#if defined(USE_JEMALLOC)
|
|
|
|
/* jemalloc 5 doesn't release pages back to the OS when there's no traffic.
|
|
|
|
* for large databases, flushdb blocks for long anyway, so a bit more won't
|
2021-10-31 22:01:54 +08:00
|
|
|
* harm and this way the flush and purge will be synchronous. */
|
2019-11-03 17:35:35 +02:00
|
|
|
if (!(flags & EMPTYDB_ASYNC)) jemalloc_purge();
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2015-09-28 11:05:39 +02:00
|
|
|
/* FLUSHDB [ASYNC]
|
|
|
|
*
|
2024-04-09 01:24:03 -07:00
|
|
|
* Flushes the currently SELECTed DB. */
|
2015-07-26 15:20:46 +02:00
|
|
|
void flushdbCommand(client *c) {
|
2015-09-28 11:05:39 +02:00
|
|
|
int flags;
|
|
|
|
|
|
|
|
if (getFlushCommandFlags(c, &flags) == C_ERR) return;
|
Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953)
2021-12-21 16:13:29 +02:00
|
|
|
/* flushdb should not flush the functions */
|
|
|
|
server.dirty += emptyData(c->db->id, flags | EMPTYDB_NOFUNCTIONS, NULL);
|
FLUSHDB and FLUSHALL add call forceCommandPropagation / FLUSHALL reset dirty counter to 0 if we enable save (#10691)
## FLUSHALL
We used to restore the dirty counter after `rdbSave` zeroed it if save is enabled,
otherwise FLUSHALL would not be replicated nor put into the AOF,
and then we would increment it again below.
Without that extra dirty++, when the db was already empty, FLUSHALL
would not be replicated nor put into the AOF.
We now replace all that dirty counter magic with a call
to forceCommandPropagation (REPL and AOF), instead of all the
messing around with the dirty counter.
Added tests to cover three parts (dirty counter, REPL, AOF).
One benefit other than cleaner code is that `rdb_changes_since_last_save` is correct in this case.
## FLUSHDB
FLUSHDB was not replicated nor put into the AOF when the db was already empty.
Unlike DEL on a non-existing key, FLUSHDB always does something, and that's to call the module hook.
So basically FLUSHDB is never a NOP, and thus it should always be propagated.
Not doing that could mean that if a module does something in that hook, and wants to
avoid issues of that hook being missing on the replica if the db is empty, it would need to do complicated things.
So now FLUSHDB calls forceCommandPropagation, and we always propagate FLUSHDB.
Always propagating FLUSHDB seems like a safe approach that shouldn't have any drawbacks (other than looking odd).
This was mentioned in #8972
## Test section:
We actually found it while solving a race condition in the BGSAVE test (other.tcl).
It was found in extra_ci Daily Arm64 (test-libc-malloc).
```
[exception]: Executing test client: ERR Background save already in progress.
ERR Background save already in progress
```
It looks like `r flushdb` triggers (schedules) a bgsave right after `waitForBgsave r` and before `r save`.
After changing flushdb to flushall, FLUSHALL does a foreground save and then sets the dirty counter to 0.
2022-05-11 16:21:16 +08:00
|
|
|
|
|
|
|
/* Without the forceCommandPropagation, when DB was already empty,
|
|
|
|
* FLUSHDB will not be replicated nor put into the AOF. */
|
|
|
|
forceCommandPropagation(c, PROPAGATE_REPL | PROPAGATE_AOF);
|
|
|
|
|
2010-06-22 00:07:48 +02:00
|
|
|
addReply(c, shared.ok);
|
FLUSHDB and FLUSHALL add call forceCommandPropagation / FLUSHALL reset dirty counter to 0 if we enable save (#10691)
2022-05-11 16:21:16 +08:00
|
|
|
|
2019-05-30 12:51:32 +03:00
|
|
|
#if defined(USE_JEMALLOC)
|
|
|
|
/* jemalloc 5 doesn't release pages back to the OS when there's no traffic.
|
|
|
|
* for large databases, flushdb blocks for long anyway, so a bit more won't
|
2021-10-31 22:01:54 +08:00
|
|
|
* harm and this way the flush and purge will be synchronous. */
|
2019-05-30 12:51:32 +03:00
|
|
|
if (!(flags & EMPTYDB_ASYNC)) jemalloc_purge();
|
|
|
|
#endif
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2015-09-28 11:05:39 +02:00
|
|
|
/* FLUSHALL [ASYNC]
|
|
|
|
*
|
|
|
|
* Flushes the whole server data set. */
|
2015-07-26 15:20:46 +02:00
|
|
|
void flushallCommand(client *c) {
|
2015-09-28 11:05:39 +02:00
|
|
|
int flags;
|
|
|
|
if (getFlushCommandFlags(c, &flags) == C_ERR) return;
|
Change FUNCTION CREATE, DELETE and FLUSH to be WRITE commands instead of MAY_REPLICATE. (#9953)
2021-12-21 16:13:29 +02:00
|
|
|
/* flushall should not flush the functions */
|
|
|
|
flushAllDataAndResetRDB(flags | EMPTYDB_NOFUNCTIONS);
|
FLUSHDB and FLUSHALL add call forceCommandPropagation / FLUSHALL reset dirty counter to 0 if we enable save (#10691)
2022-05-11 16:21:16 +08:00
|
|
|
|
|
|
|
/* Without the forceCommandPropagation, when DBs were already empty,
|
|
|
|
* FLUSHALL will not be replicated nor put into the AOF. */
|
|
|
|
forceCommandPropagation(c, PROPAGATE_REPL | PROPAGATE_AOF);
|
|
|
|
|
2010-06-22 00:07:48 +02:00
|
|
|
addReply(c, shared.ok);
|
|
|
|
}
|
|
|
|
|
2022-12-20 13:21:50 +05:30
|
|
|
/* This command implements DEL and UNLINK. */
|
2015-07-30 11:46:31 +02:00
|
|
|
void delGenericCommand(client *c, int lazy) {
|
|
|
|
int numdel = 0, j;
|
2010-06-22 00:07:48 +02:00
|
|
|
|
|
|
|
for (j = 1; j < c->argc; j++) {
|
2024-02-26 18:50:04 +08:00
|
|
|
if (expireIfNeeded(c->db, c->argv[j], 0) == KEY_DELETED) continue;
|
2015-07-30 11:46:31 +02:00
|
|
|
int deleted = lazy ? dbAsyncDelete(c->db, c->argv[j]) : dbSyncDelete(c->db, c->argv[j]);
|
|
|
|
if (deleted) {
|
2020-04-21 10:51:46 +02:00
|
|
|
signalModifiedKey(c, c->db, c->argv[j]);
|
2015-07-27 09:41:48 +02:00
|
|
|
notifyKeyspaceEvent(NOTIFY_GENERIC, "del", c->argv[j], c->db->id);
|
2010-06-22 00:07:48 +02:00
|
|
|
server.dirty++;
|
2015-07-30 11:46:31 +02:00
|
|
|
numdel++;
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
}
|
2015-07-30 11:46:31 +02:00
|
|
|
addReplyLongLong(c, numdel);
|
|
|
|
}
|
|
|
|
|
|
|
|
void delCommand(client *c) {
|
2019-07-17 11:00:51 +08:00
|
|
|
delGenericCommand(c, server.lazyfree_lazy_user_del);
|
2015-07-30 11:46:31 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
void unlinkCommand(client *c) {
|
|
|
|
delGenericCommand(c, 1);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2015-07-13 18:06:24 +02:00
|
|
|
/* EXISTS key1 key2 ... key_N.
|
|
|
|
* Return value is the number of keys existing. */
|
2015-07-26 15:20:46 +02:00
|
|
|
void existsCommand(client *c) {
|
2015-07-13 18:06:24 +02:00
|
|
|
long long count = 0;
|
|
|
|
int j;
|
|
|
|
|
|
|
|
for (j = 1; j < c->argc; j++) {
|
2020-11-18 10:16:21 +01:00
|
|
|
if (lookupKeyReadWithFlags(c->db, c->argv[j], LOOKUP_NOTOUCH)) count++;
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
2015-07-13 18:06:24 +02:00
|
|
|
addReplyLongLong(c, count);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2015-07-26 15:20:46 +02:00
|
|
|
void selectCommand(client *c) {
|
Improve dbid range check for SELECT, MOVE, COPY (#8085)
SELECT used to read the index into a `long` variable, and then pass it to a function
that takes an `int`, possibly causing an overflow before the range check.
Now all these commands use a better and cleaner range check, and that also results in
a slight change of the error response in case of an invalid database index.
SELECT:
in the past it would have returned either `-ERR invalid DB index` (if not a number),
or `-ERR DB index is out of range` (if not between 1..16 or alike).
now it'll return either `-ERR value is out of range` (if not a number), or
`-ERR value is out of range, value must between -2147483648 and 2147483647`
(if not in the range for an int), or `-ERR DB index is out of range`
(if not between 0..16 or alike)
MOVE:
in the past it would only fail with `-ERR index out of range` no matter the reason.
now it returns the same errors as the new ones for SELECT mentioned above.
(i.e. unlike for SELECT even for a value like 17 we changed the error message)
COPY:
it doesn't really matter how it behaved in the past (it's a new command); the new behavior is
like the above two.
2020-12-02 03:41:26 +08:00
|
|
|
int id;
|
2012-09-11 10:32:04 +02:00
|
|
|
|
Improve dbid range check for SELECT, MOVE, COPY (#8085)
2020-12-02 03:41:26 +08:00
|
|
|
if (getIntFromObjectOrReply(c, c->argv[1], &id, NULL) != C_OK) return;
|
2010-06-22 00:07:48 +02:00
|
|
|
|
2011-05-05 18:10:02 +02:00
|
|
|
if (server.cluster_enabled && id != 0) {
|
2011-03-29 17:51:15 +02:00
|
|
|
addReplyError(c, "SELECT is not allowed in cluster mode");
|
|
|
|
return;
|
|
|
|
}
|
2015-07-26 23:17:55 +02:00
|
|
|
if (selectDb(c, id) == C_ERR) {
|
SWAPDB command.
This new command swaps two Redis databases, so that immediately all the
clients connected to a given DB will see the data of the other DB, and
the other way around. Example:
SWAPDB 0 1
This will swap DB 0 with DB 1. All the clients connected with DB 0 will
immediately see the new data, exactly like all the clients connected
with DB 1 will see the data that was formerly of DB 0.
MOTIVATION AND HISTORY
---
The command was recently demanded by Pedro Melo, but was suggested in
the past multiple times, and always refused by me.
The reason why it was asked: Imagine you have clients operating in DB 0.
At the same time, you create a new version of the dataset in DB 1.
When the new version of the dataset is available, you immediately want
to swap the two views, so that the clients will transparently use the
new version of the data. At the same time you'll likely destroy the
DB 1 dataset (that contains the old data) and start to build a new
version, to repeat the process.
This is an interesting pattern, but the reason why I always opposed to
implement this, was that FLUSHDB was a blocking command in Redis before
Redis 4.0 improvements. Now we have FLUSHDB ASYNC that releases the
old data in O(1) from the point of view of the client, to reclaim memory
incrementally in a different thread.
At this point, the pattern can really be supported without latency
spikes, so I'm providing this implementation for the users to comment.
If a very compelling argument is made against this new command,
it may be removed.
BEHAVIOR WITH BLOCKING OPERATIONS
---
If a client is blocking for a list in a given DB, after the swap it will
still be blocked in the same DB ID, since this is the most logical thing
to do: if I was blocked for a list push to list "foo", even after the
swap I still want an LPUSH to reach the key "foo" in the same DB in order
to unblock.
However an interesting thing happens when a client is, for instance,
blocked waiting for new elements in list "foo" of DB 0. Then the DB
0 and 1 are swapped with SWAPDB. However the DB 1 happened to have
a list called "foo" containing elements. When this happens, this
implementation can correctly unblock the client.
It is possible that there are subtle corner cases that are not covered
in the implementation, but since the command is self-contained from the
POV of the implementation and the Redis core, it cannot cause anything
bad if not used.
Tests and documentation are yet to be provided.
2016-10-14 15:28:04 +02:00
|
|
|
addReplyError(c, "DB index is out of range");
|
2010-06-22 00:07:48 +02:00
|
|
|
} else {
|
|
|
|
addReply(c, shared.ok);
|
|
|
|
}
|
|
|
|
}
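As the range-check commit message above explains, reading the index into a long and then passing it to a function taking an int could truncate the value before any range check ran. A standalone sketch of the safer pattern (parse wide, reject anything outside int range, only then narrow); parse_db_index here is a hypothetical helper for illustration, not the server's getIntFromObjectOrReply:

```
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Parse a decimal string into an int, rejecting junk and anything that
 * does not fit in an int, so nothing is silently truncated. */
static int parse_db_index(const char *s, int *out) {
    errno = 0;
    char *end;
    long long v = strtoll(s, &end, 10);
    if (errno != 0 || end == s || *end != '\0') return -1; /* not a number */
    if (v < INT_MIN || v > INT_MAX) return -1;             /* out of int range */
    *out = (int)v;
    return 0;
}

int main(void) {
    int id;
    /* 4294967296 == 2^32: truncating it to int would yield 0 and silently
     * select DB 0 instead of failing the range check. */
    if (parse_db_index("4294967296", &id) != 0)
        puts("rejected: value out of int range");
    return 0;
}
```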
|
|
|
|
|
2015-07-26 15:20:46 +02:00
|
|
|
void randomkeyCommand(client *c) {
|
2010-06-22 00:07:48 +02:00
|
|
|
robj *key;
|
|
|
|
|
|
|
|
if ((key = dbRandomKey(c->db)) == NULL) {
|
2018-11-30 09:41:54 +01:00
|
|
|
addReplyNull(c);
|
2010-06-22 00:07:48 +02:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
addReplyBulk(c, key);
|
|
|
|
decrRefCount(key);
|
|
|
|
}
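In cluster mode, picking a random key fairly means first picking a random slot weighted by how many keys each slot holds; the slot-dictionaries commit message later in this file describes doing this with a binary index tree. The sketch below illustrates the technique only (a Fenwick tree over per-slot key counts plus a descent that finds the slot holding the k-th key); it is not the server's kvstore code, and the commit message's O(log^2) binary search reaches the same answer as the O(log n) descent used here.

```
#include <stdio.h>

#define NSLOTS 16384
static long long bit[NSLOTS + 1]; /* 1-based Fenwick tree of per-slot key counts */

static void bit_add(int slot, long long delta) {
    for (int i = slot + 1; i <= NSLOTS; i += i & -i) bit[i] += delta;
}

/* Return the 0-based slot containing the k-th key (k is 1-based and must
 * not exceed the total number of keys). */
static int bit_find_kth(long long k) {
    int pos = 0;
    for (int step = 1 << 14; step; step >>= 1) { /* 1<<14 == NSLOTS */
        if (pos + step <= NSLOTS && bit[pos + step] < k) {
            pos += step;
            k -= bit[pos];
        }
    }
    return pos;
}

int main(void) {
    bit_add(5, 3);   /* slot 5 holds 3 keys */
    bit_add(100, 2); /* slot 100 holds 2 keys */
    printf("3rd key lives in slot %d\n", bit_find_kth(3)); /* -> 5 */
    printf("4th key lives in slot %d\n", bit_find_kth(4)); /* -> 100 */
    return 0;
}
```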
|
|
|
|
|
2015-07-26 15:20:46 +02:00
|
|
|
void keysCommand(client *c) {
|
2010-06-22 00:07:48 +02:00
|
|
|
dictEntry *de;
|
|
|
|
sds pattern = c->argv[1]->ptr;
|
2023-12-05 22:21:50 +08:00
|
|
|
int plen = sdslen(pattern), allkeys, pslot = -1;
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it to a new data structure, `kvstore`. i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
unsigned long numkeys = 0;
|
2018-11-09 12:59:00 +01:00
|
|
|
void *replylen = addReplyDeferredLen(c);
|
2020-01-07 13:55:26 -06:00
|
|
|
allkeys = (pattern[0] == '*' && plen == 1);
|
2023-12-07 19:48:15 +08:00
|
|
|
if (server.cluster_enabled && !allkeys) {
|
2023-12-05 22:21:50 +08:00
|
|
|
pslot = patternHashSlot(pattern, plen);
|
|
|
|
}
|
2024-02-07 20:53:50 +08:00
|
|
|
kvstoreDictIterator *kvs_di = NULL;
|
2024-02-05 22:21:35 +07:00
|
|
|
kvstoreIterator *kvs_it = NULL;
|
2023-12-07 19:48:15 +08:00
|
|
|
if (pslot != -1) {
|
2024-02-05 22:21:35 +07:00
|
|
|
if (!kvstoreDictSize(c->db->keys, pslot)) {
|
|
|
|
/* Requested slot is empty */
|
|
|
|
setDeferredArrayLen(c, replylen, 0);
|
|
|
|
return;
|
|
|
|
}
|
2024-02-07 20:53:50 +08:00
|
|
|
kvs_di = kvstoreGetDictSafeIterator(c->db->keys, pslot);
|
2023-12-05 22:21:50 +08:00
|
|
|
} else {
|
2024-02-05 22:21:35 +07:00
|
|
|
kvs_it = kvstoreIteratorInit(c->db->keys);
|
2023-12-05 22:21:50 +08:00
|
|
|
}
|
2023-04-16 18:02:47 +08:00
|
|
|
robj keyobj;
|
2024-02-07 20:53:50 +08:00
|
|
|
while ((de = kvs_di ? kvstoreDictIteratorNext(kvs_di) : kvstoreIteratorNext(kvs_it)) != NULL) {
|
2011-11-08 17:07:55 +01:00
|
|
|
sds key = dictGetKey(de);
|
2010-06-22 00:07:48 +02:00
|
|
|
|
2010-08-30 11:51:45 +02:00
|
|
|
if (allkeys || stringmatchlen(pattern, plen, key, sdslen(key), 0)) {
|
2023-04-16 18:02:47 +08:00
|
|
|
initStaticStringObject(keyobj, key);
|
|
|
|
if (!keyIsExpired(c->db, &keyobj)) {
|
|
|
|
addReplyBulkCBuffer(c, key, sdslen(key));
|
2010-06-22 00:07:48 +02:00
|
|
|
numkeys++;
|
|
|
|
}
|
|
|
|
}
|
2024-07-01 02:33:10 +08:00
|
|
|
if (c->flag.close_asap) break;
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
2024-02-07 20:53:50 +08:00
|
|
|
if (kvs_di) kvstoreReleaseDictIterator(kvs_di);
|
2024-02-05 22:21:35 +07:00
|
|
|
if (kvs_it) kvstoreIteratorRelease(kvs_it);
|
2018-11-09 12:59:00 +01:00
|
|
|
setDeferredArrayLen(c, replylen, numkeys);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
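The kvstore refactor described in the commit message above models the keyspace as an array of per-slot dictionaries with incrementally maintained counters; that is what lets KEYS jump straight to one slot's dictionary when the pattern pins a hash slot, and keeps DBSIZE O(1). A toy model of that bookkeeping, with made-up names and none of the real kvstore API:

```
#include <stdio.h>
#include <stdlib.h>

#define NSLOTS 16384

typedef struct {
    size_t per_slot[NSLOTS]; /* number of keys stored in each slot */
    size_t key_count;        /* maintained incrementally, so DBSIZE is O(1) */
} toy_kvstore;

static void toy_add(toy_kvstore *kv, int slot)    { kv->per_slot[slot]++; kv->key_count++; }
static void toy_remove(toy_kvstore *kv, int slot) { kv->per_slot[slot]--; kv->key_count--; }

int main(void) {
    toy_kvstore *kv = calloc(1, sizeof(*kv));
    if (!kv) return 1;
    toy_add(kv, 866);
    toy_add(kv, 866);
    toy_add(kv, 12182);
    printf("dbsize=%zu, slot 866 holds %zu keys\n", kv->key_count, kv->per_slot[866]);
    toy_remove(kv, 12182);
    printf("dbsize=%zu\n", kv->key_count);
    free(kv);
    return 0;
}
```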
|
|
|
|
|
2023-06-27 21:43:46 +08:00
|
|
|
/* Data used by the dict scan callback. */
|
|
|
|
typedef struct {
|
|
|
|
list *keys; /* elements collected from the dict */
|
|
|
|
robj *o; /* o must be a hash/set/zset object, NULL means current db */
|
|
|
|
long long type; /* the particular type when scanning the db */
|
|
|
|
sds pattern; /* pattern string, NULL means no pattern */
|
|
|
|
long sampled; /* cumulative number of keys sampled */
|
2024-05-07 14:39:28 +08:00
|
|
|
int only_keys; /* set to 1 means to return keys only */
|
2023-06-27 21:43:46 +08:00
|
|
|
} scanData;
|
|
|
|
|
|
|
|
/* Helper function to compare key type in scan commands */
|
|
|
|
int objectTypeCompare(robj *o, long long target) {
|
|
|
|
if (o->type != OBJ_MODULE) {
|
|
|
|
if (o->type != target)
|
|
|
|
return 0;
|
|
|
|
else
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
/* module type compare */
|
2024-04-05 16:59:55 -07:00
|
|
|
long long mt = (long long)VALKEYMODULE_TYPE_SIGN(((moduleValue *)o->ptr)->type->id);
|
2023-06-27 21:43:46 +08:00
|
|
|
if (target != -mt)
|
|
|
|
return 0;
|
|
|
|
else
|
|
|
|
return 1;
|
|
|
|
}
|
2013-10-28 11:11:34 +01:00
|
|
|
/* This callback is used by scanGenericCommand in order to collect elements
|
|
|
|
* returned by the dictionary iterator into a list. */
|
2012-07-09 01:00:26 -07:00
|
|
|
void scanCallback(void *privdata, const dictEntry *de) {
|
2023-06-27 21:43:46 +08:00
|
|
|
scanData *data = (scanData *)privdata;
|
|
|
|
list *keys = data->keys;
|
|
|
|
robj *o = data->o;
|
|
|
|
sds val = NULL;
|
|
|
|
sds key = NULL;
|
|
|
|
data->sampled++;
|
|
|
|
|
|
|
|
/* o and typename can not have values at the same time. */
|
|
|
|
serverAssert(!((data->type != LLONG_MAX) && o));
|
|
|
|
|
|
|
|
/* Filter an element if it isn't the type we want. */
|
|
|
|
if (!o && data->type != LLONG_MAX) {
|
|
|
|
robj *rval = dictGetVal(de);
|
|
|
|
if (!objectTypeCompare(rval, data->type)) return;
|
2024-05-17 13:35:31 +02:00
|
|
|
}
|
2023-06-27 21:43:46 +08:00
|
|
|
|
|
|
|
/* Filter element if it does not match the pattern. */
|
|
|
|
sds keysds = dictGetKey(de);
|
|
|
|
if (data->pattern) {
|
|
|
|
if (!stringmatchlen(data->pattern, sdslen(data->pattern), keysds, sdslen(keysds), 0)) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
2013-10-28 11:11:34 +01:00
|
|
|
|
|
|
|
if (o == NULL) {
|
2023-06-27 21:43:46 +08:00
|
|
|
key = keysds;
|
2015-07-26 15:28:00 +02:00
|
|
|
} else if (o->type == OBJ_SET) {
|
2023-06-27 21:43:46 +08:00
|
|
|
key = keysds;
|
2015-07-26 15:28:00 +02:00
|
|
|
} else if (o->type == OBJ_HASH) {
|
2023-06-27 21:43:46 +08:00
|
|
|
key = keysds;
|
2024-05-07 14:39:28 +08:00
|
|
|
if (!data->only_keys) {
|
|
|
|
val = dictGetVal(de);
|
|
|
|
}
|
2015-07-26 15:28:00 +02:00
|
|
|
} else if (o->type == OBJ_ZSET) {
|
2023-06-27 21:43:46 +08:00
|
|
|
key = sdsdup(keysds);
|
2024-05-07 14:39:28 +08:00
|
|
|
if (!data->only_keys) {
|
|
|
|
char buf[MAX_LONG_DOUBLE_CHARS];
|
|
|
|
int len = ld2string(buf, sizeof(buf), *(double *)dictGetVal(de), LD_STR_AUTO);
|
|
|
|
val = sdsnewlen(buf, len);
|
|
|
|
}
|
2013-10-28 11:11:34 +01:00
|
|
|
} else {
|
2015-07-27 09:41:48 +02:00
|
|
|
serverPanic("Type not handled in SCAN callback.");
|
2013-10-28 11:11:34 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
listAddNodeTail(keys, key);
|
2024-05-07 14:39:28 +08:00
|
|
|
if (val) listAddNodeTail(keys, val);
|
2012-07-09 01:00:26 -07:00
|
|
|
}
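Both KEYS above and the MATCH filtering in scanCallback go through stringmatchlen. A simplified, standalone matcher that supports only '*' and '?' (the real stringmatchlen also handles character classes, escaping and a case-insensitive mode) shows the semantics:

```
#include <stdio.h>

/* Simplified glob match supporting only '*' and '?'. */
static int simple_match(const char *pat, const char *str) {
    const char *star_pat = NULL, *star_str = NULL;
    while (*str) {
        if (*pat == '?' || *pat == *str) {
            pat++; str++;
        } else if (*pat == '*') {
            star_pat = pat++; /* remember the '*' and first try to match zero chars */
            star_str = str;
        } else if (star_pat) {
            pat = star_pat + 1; /* backtrack: let the '*' absorb one more char */
            str = ++star_str;
        } else {
            return 0;
        }
    }
    while (*pat == '*') pat++;
    return *pat == '\0';
}

int main(void) {
    printf("%d\n", simple_match("user:*:name", "user:42:name")); /* 1 */
    printf("%d\n", simple_match("user:?", "user:12"));           /* 0 */
    return 0;
}
```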
|
|
|
|
|
2013-11-05 17:23:11 +01:00
|
|
|
/* Try to parse a SCAN cursor stored at object 'o':
|
2013-11-05 15:47:50 +01:00
|
|
|
 * if the cursor is valid, store it as an unsigned integer into *cursor and
|
2015-07-26 23:17:55 +02:00
|
|
|
 * return C_OK. Otherwise return C_ERR and send an error to the
|
2013-11-05 15:47:50 +01:00
|
|
|
* client. */
|
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time, in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also instead of rehashing a single dictionary, cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. To address this we introduced a binary index tree; with that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using same binary index that is used for random key selection, this index allows us to find a slot for a specific key index. For example if there are 10 keys in the slot 0, then we can quickly find a slot that contains 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server (see the packing sketch after parseScanCursorOrReply below). This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as the cursor value. The plan is to not document this as defined behavior, however. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly, while loading RDB, it's not enough to know total number of keys (of course we could approximate number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. To avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). The same is kept for O(1) expires computation as well.
## Performance
This change improves SET performance in cluster mode by ~5%, most of the gains come from us not having to maintain linked lists for keys in slot, non-cluster mode has same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead for finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
|
|
|
int parseScanCursorOrReply(client *c, robj *o, unsigned long long *cursor) {
|
|
|
|
if (!string2ull(o->ptr, cursor)) {
|
2013-11-05 15:47:50 +01:00
|
|
|
addReplyError(c, "invalid cursor");
|
2015-07-26 23:17:55 +02:00
|
|
|
return C_ERR;
|
2013-11-05 15:47:50 +01:00
|
|
|
}
|
2015-07-26 23:17:55 +02:00
|
|
|
return C_OK;
|
2013-11-05 15:47:50 +01:00
|
|
|
}
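The slot-dictionaries commit message above notes that the SCAN cursor now also carries a slot id in its low bits. Purely as an illustration of that packing idea (the exact bit layout used by the server may differ), here is a sketch that reserves 14 bits for the 16384 cluster slots:

```
#include <stdio.h>

#define SLOT_BITS 14 /* 2^14 = 16384 cluster slots */
#define SLOT_MASK ((1ull << SLOT_BITS) - 1)

static unsigned long long pack(unsigned long long dict_cursor, unsigned slot) {
    return (dict_cursor << SLOT_BITS) | (slot & SLOT_MASK);
}
static unsigned long long dict_cursor_of(unsigned long long c) { return c >> SLOT_BITS; }
static unsigned slot_of(unsigned long long c) { return (unsigned)(c & SLOT_MASK); }

int main(void) {
    unsigned long long c = pack(3, 866);
    printf("cursor=%llu -> slot=%u, inner cursor=%llu\n",
           c, slot_of(c), dict_cursor_of(c));
    return 0;
}
```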
|
|
|
|
|
2023-06-27 21:43:46 +08:00
|
|
|
char *obj_type_name[OBJ_TYPE_MAX] = {"string", "list", "set", "zset", "hash", NULL, /* module type is special */
|
|
|
|
"stream"};
|
|
|
|
|
|
|
|
/* Helper function to get type from a string in scan commands */
|
|
|
|
long long getObjectTypeByName(char *name) {
|
|
|
|
for (long long i = 0; i < OBJ_TYPE_MAX; i++) {
|
|
|
|
if (obj_type_name[i] && !strcasecmp(name, obj_type_name[i])) {
|
|
|
|
return i;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
moduleType *mt = moduleTypeLookupModuleByNameIgnoreCase(name);
|
2024-04-05 16:59:55 -07:00
|
|
|
if (mt != NULL) return -(VALKEYMODULE_TYPE_SIGN(mt->id));
|
2023-06-27 21:43:46 +08:00
|
|
|
|
|
|
|
return LLONG_MAX;
|
|
|
|
}
|
|
|
|
|
|
|
|
char *getObjectTypeName(robj *o) {
|
|
|
|
if (o == NULL) {
|
|
|
|
return "none";
|
|
|
|
}
|
|
|
|
|
|
|
|
serverAssert(o->type >= 0 && o->type < OBJ_TYPE_MAX);
|
|
|
|
|
|
|
|
if (o->type == OBJ_MODULE) {
|
|
|
|
moduleValue *mv = o->ptr;
|
|
|
|
return mv->type->name;
|
|
|
|
} else {
|
|
|
|
return obj_type_name[o->type];
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-10-28 11:11:34 +01:00
|
|
|
/* This command implements SCAN, HSCAN and SSCAN commands.
|
2019-05-22 16:39:04 +01:00
|
|
|
* If object 'o' is passed, then it must be a Hash, Set or Zset object, otherwise
|
2013-10-28 11:11:34 +01:00
|
|
|
* if 'o' is NULL the command will operate on the dictionary associated with
|
|
|
|
* the current database.
|
|
|
|
*
|
|
|
|
* When 'o' is not NULL the function assumes that the first argument in
|
|
|
|
* the client arguments vector is a key so it skips it before iterating
|
|
|
|
* in order to parse options.
|
|
|
|
*
|
2013-12-05 16:35:32 +01:00
|
|
|
* In the case of a Hash object the function returns both the field and value
|
2013-10-28 11:11:34 +01:00
|
|
|
* of every element on the Hash. */
|
2023-10-14 23:58:26 -07:00
|
|
|
void scanGenericCommand(client *c, robj *o, unsigned long long cursor) {
|
2012-07-09 01:00:26 -07:00
|
|
|
int i, j;
|
2023-06-27 21:43:46 +08:00
|
|
|
listNode *node;
|
2013-10-28 11:11:34 +01:00
|
|
|
long count = 10;
|
2014-12-12 12:02:39 +09:00
|
|
|
sds pat = NULL;
|
2019-05-22 16:39:04 +01:00
|
|
|
sds typename = NULL;
|
2023-06-27 21:43:46 +08:00
|
|
|
long long type = LLONG_MAX;
|
2024-05-07 14:39:28 +08:00
|
|
|
int patlen = 0, use_pattern = 0, only_keys = 0;
|
2013-10-28 11:11:34 +01:00
|
|
|
dict *ht;
|
|
|
|
|
|
|
|
/* Object must be NULL (to iterate key names), or the type of the object
|
|
|
|
* must be Set, Sorted Set, or Hash. */
|
2015-07-26 15:29:53 +02:00
|
|
|
serverAssert(o == NULL || o->type == OBJ_SET || o->type == OBJ_HASH || o->type == OBJ_ZSET);
|
2013-10-28 11:11:34 +01:00
|
|
|
|
|
|
|
/* Set i to the first option argument. The previous one is the cursor. */
|
|
|
|
i = (o == NULL) ? 2 : 3; /* Skip the key argument if needed. */
|
2012-07-09 01:00:26 -07:00
|
|
|
|
2013-10-28 11:11:34 +01:00
|
|
|
/* Step 1: Parse options. */
|
2012-07-09 01:00:26 -07:00
|
|
|
while (i < c->argc) {
|
|
|
|
j = c->argc - i;
|
|
|
|
if (!strcasecmp(c->argv[i]->ptr, "count") && j >= 2) {
|
2013-10-25 12:01:49 +02:00
|
|
|
if (getLongFromObjectOrReply(c, c->argv[i + 1], &count, NULL) != C_OK) {
|
2023-06-27 21:43:46 +08:00
|
|
|
return;
|
2012-07-09 01:00:26 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
if (count < 1) {
|
2020-12-23 19:06:25 -08:00
|
|
|
addReplyErrorObject(c, shared.syntaxerr);
|
2023-06-27 21:43:46 +08:00
|
|
|
return;
|
2012-07-09 01:00:26 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
i += 2;
|
2013-10-25 11:45:32 +02:00
|
|
|
} else if (!strcasecmp(c->argv[i]->ptr, "match") && j >= 2) {
|
2012-07-09 01:00:26 -07:00
|
|
|
pat = c->argv[i + 1]->ptr;
|
|
|
|
patlen = sdslen(pat);
|
|
|
|
|
2013-10-31 10:35:56 +01:00
|
|
|
/* The pattern always matches if it is exactly "*", so it is
|
|
|
|
* equivalent to disabling it. */
|
2021-11-16 23:03:24 +08:00
|
|
|
use_pattern = !(patlen == 1 && pat[0] == '*');
|
2012-07-09 01:00:26 -07:00
|
|
|
|
|
|
|
i += 2;
|
2019-05-22 16:39:04 +01:00
|
|
|
} else if (!strcasecmp(c->argv[i]->ptr, "type") && o == NULL && j >= 2) {
|
|
|
|
/* SCAN for a particular type only applies to the db dict */
|
|
|
|
typename = c->argv[i + 1]->ptr;
|
2023-06-27 21:43:46 +08:00
|
|
|
type = getObjectTypeByName(typename);
|
|
|
|
if (type == LLONG_MAX) {
|
|
|
|
addReplyErrorFormat(c, "unknown type name '%s'", typename);
|
2024-05-17 13:35:31 +02:00
|
|
|
return;
|
2023-06-27 21:43:46 +08:00
|
|
|
}
|
2019-05-22 16:39:04 +01:00
|
|
|
i += 2;
|
2024-01-31 02:32:58 +08:00
|
|
|
} else if (!strcasecmp(c->argv[i]->ptr, "novalues")) {
|
|
|
|
if (!o || o->type != OBJ_HASH) {
|
|
|
|
addReplyError(c, "NOVALUES option can only be used in HSCAN");
|
|
|
|
return;
|
|
|
|
}
|
2024-05-07 14:39:28 +08:00
|
|
|
only_keys = 1;
|
|
|
|
i++;
|
|
|
|
} else if (!strcasecmp(c->argv[i]->ptr, "noscores")) {
|
|
|
|
if (!o || o->type != OBJ_ZSET) {
|
|
|
|
addReplyError(c, "NOSCORES option can only be used in ZSCAN");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
only_keys = 1;
|
2024-01-31 02:32:58 +08:00
|
|
|
i++;
|
2012-07-09 01:00:26 -07:00
|
|
|
} else {
|
2020-12-23 19:06:25 -08:00
|
|
|
addReplyErrorObject(c, shared.syntaxerr);
|
2023-06-27 21:43:46 +08:00
|
|
|
return;
|
2012-07-09 01:00:26 -07:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-10-28 11:11:34 +01:00
|
|
|
/* Step 2: Iterate the collection.
|
|
|
|
*
|
2021-11-24 19:34:13 +08:00
|
|
|
* Note that if the object is encoded with a listpack, intset, or any other
|
2013-12-05 16:35:32 +01:00
|
|
|
* representation that is not a hash table, we are sure that it is also
|
2013-10-28 11:11:34 +01:00
|
|
|
* composed of a small number of elements. So to avoid taking state we
|
|
|
|
* just return everything inside the object in a single call, setting the
|
|
|
|
* cursor to zero to signal the end of the iteration. */
|
|
|
|
|
2013-12-05 16:35:32 +01:00
|
|
|
/* Handle the case of a hash table. */
|
2013-10-28 11:11:34 +01:00
|
|
|
ht = NULL;
|
|
|
|
if (o == NULL) {
|
2023-10-14 23:58:26 -07:00
|
|
|
ht = NULL;
|
2015-07-26 15:28:00 +02:00
|
|
|
} else if (o->type == OBJ_SET && o->encoding == OBJ_ENCODING_HT) {
|
2013-10-28 11:11:34 +01:00
|
|
|
ht = o->ptr;
|
2015-07-26 15:28:00 +02:00
|
|
|
} else if (o->type == OBJ_HASH && o->encoding == OBJ_ENCODING_HT) {
|
2013-10-28 11:11:34 +01:00
|
|
|
ht = o->ptr;
|
2015-07-26 15:28:00 +02:00
|
|
|
} else if (o->type == OBJ_ZSET && o->encoding == OBJ_ENCODING_SKIPLIST) {
|
2013-10-28 11:11:34 +01:00
|
|
|
zset *zs = o->ptr;
|
|
|
|
ht = zs->dict;
|
2023-06-27 21:43:46 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
list *keys = listCreate();
|
|
|
|
/* Set a free callback for the contents of the collected keys list.
|
|
|
|
* For the main keyspace dict, and when we scan a key that's dict encoded
|
|
|
|
 * (we have 'ht'), we don't need to define a free method because the strings
|
|
|
|
* in the list are just a shallow copy from the pointer in the dictEntry.
|
|
|
|
* When scanning a key with other encodings (e.g. listpack), we need to
|
|
|
|
* free the temporary strings we add to that list.
|
|
|
|
* The exception to the above is ZSET, where we do allocate temporary
|
|
|
|
* strings even when scanning a dict. */
|
|
|
|
if (o && (!ht || o->type == OBJ_ZSET)) {
|
|
|
|
listSetFreeMethod(keys, (void (*)(void *))sdsfree);
|
2013-10-28 11:11:34 +01:00
|
|
|
}
|
|
|
|
|
2023-10-14 23:58:26 -07:00
|
|
|
/* For main dictionary scan or data structure using hashtable. */
|
|
|
|
if (!o || ht) {
|
2014-09-09 17:53:30 +08:00
|
|
|
/* We set the max number of iterations to ten times the specified
|
|
|
|
* COUNT, so if the hash table is in a pathological state (very
|
|
|
|
 * sparsely populated) we avoid blocking for too much time at the cost
|
|
|
|
* of returning no or very few elements. */
|
|
|
|
long maxiterations = count * 10;
|
2013-10-28 11:11:34 +01:00
|
|
|
|
2023-06-27 21:43:46 +08:00
|
|
|
/* We pass a scanData struct to the callback; it carries:
|
|
|
|
* 1. data.keys: the list to which it will add new elements;
|
|
|
|
* 2. data.o: the object containing the dictionary so that
|
|
|
|
* it is possible to fetch more data in a type-dependent way;
|
|
|
|
 * 3. data.type: the type to scan for in the db, LLONG_MAX means
|
|
|
|
 * type matching is not needed;
|
2024-01-31 02:32:58 +08:00
|
|
|
* 4. data.pattern: the pattern string;
|
2023-06-27 21:43:46 +08:00
|
|
|
 * 5. data.sampled: the maxiterations limit is there in case we're
|
|
|
|
* working on an empty dict, one with a lot of empty buckets, and
|
|
|
|
 * for the buckets that are not empty, we need to limit the sampled number
|
2024-01-31 02:32:58 +08:00
|
|
|
* to prevent a long hang time caused by filtering too many keys;
|
2024-05-07 14:39:28 +08:00
|
|
|
* 6. data.only_keys: to control whether values will be returned or
|
2024-01-31 02:32:58 +08:00
|
|
|
* only keys are returned. */
|
2023-06-27 21:43:46 +08:00
|
|
|
scanData data = {
|
|
|
|
.keys = keys,
|
|
|
|
.o = o,
|
|
|
|
.type = type,
|
|
|
|
.pattern = use_pattern ? pat : NULL,
|
|
|
|
.sampled = 0,
|
2024-05-07 14:39:28 +08:00
|
|
|
.only_keys = only_keys,
|
2023-06-27 21:43:46 +08:00
|
|
|
};
|
2023-11-01 08:06:49 +01:00
|
|
|
|
|
|
|
/* A pattern may restrict all matching keys to one cluster slot. */
|
2024-02-05 22:21:35 +07:00
|
|
|
int onlydidx = -1;
|
2023-11-01 08:06:49 +01:00
|
|
|
if (o == NULL && use_pattern && server.cluster_enabled) {
|
2024-02-05 22:21:35 +07:00
|
|
|
onlydidx = patternHashSlot(pat, patlen);
|
2023-11-01 08:06:49 +01:00
|
|
|
}
|
2013-10-28 11:11:34 +01:00
|
|
|
do {
|
2023-10-14 23:58:26 -07:00
|
|
|
/* In cluster mode there is a separate dictionary for each slot.
|
|
|
|
* If cursor is empty, we should try exploring next non-empty slot. */
|
|
|
|
if (o == NULL) {
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they were allocated in advance), but they won't release them when
the last key is deleted (see the sketch after this message).
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
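As an illustration of the idea only (not the real kvstore API; all names below are invented, and a trivial placeholder stands in for the real dict), a minimal sketch of an array-of-dicts structure with lazily allocated dicts and a cached total key count:
```
/* Simplified sketch of the kvstore idea: an array of dictionaries with
 * lazily allocated slots and a cached total key count. All names here are
 * invented, and a trivial placeholder stands in for the real dict. */
#include <stdlib.h>

typedef struct { unsigned long used; } dict;            /* Placeholder dict. */
static dict *dictPlaceholderCreate(void) { return calloc(1, sizeof(dict)); }

typedef struct {
    dict **dicts;        /* One dict per index; NULL until the first key is added. */
    int num_dicts;       /* e.g. 16384 in cluster mode, 1 otherwise.               */
    long long key_count; /* Cached total size of all dicts; keeps DBSIZE O(1).     */
} kvstoreSketch;

static kvstoreSketch *kvstoreSketchCreate(int num_dicts) {
    kvstoreSketch *kvs = calloc(1, sizeof(*kvs));
    kvs->dicts = calloc(num_dicts, sizeof(dict *));
    kvs->num_dicts = num_dicts;
    return kvs;
}

/* Return the dict at 'didx', allocating it lazily on first use (note 3
 * above); dicts are not released again when their last key is deleted. */
static dict *kvstoreSketchGetDict(kvstoreSketch *kvs, int didx) {
    if (kvs->dicts[didx] == NULL) kvs->dicts[didx] = dictPlaceholderCreate();
    return kvs->dicts[didx];
}

/* Add/delete paths bump the cached count so the total size stays O(1). */
static void kvstoreSketchBumpKeyCount(kvstoreSketch *kvs, long long delta) {
    kvs->key_count += delta;
}
```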
2024-02-05 22:21:35 +07:00
|
|
|
cursor = kvstoreScan(c->db->keys, cursor, onlydidx, scanCallback, NULL, &data);
|
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from the random bucket, but also to select a random dictionary first. Fairness is a major concern here, as keys can be unevenly distributed across the slots. To address this we introduced a binary index tree over the per-slot key counts. With that data structure we can efficiently find a random slot using binary search, in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index tree that is used for random key selection: the index allows us to find the slot that holds a specific key index. For example, if there are 10 keys in slot 0, we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSBs of the cursor so it can be passed around between the client and the server. This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as the cursor value. The plan, however, is to not document this as defined behavior. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although in practice we don't believe this is an issue.
* Checksum calculation optimizations - during command execution we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the slot id cached in the client during command execution. All operations that access random keys should either pass in the known slot or recompute it.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (we could approximate the number of keys per slot, but it wouldn't be precise). To address this we've added additional metadata to the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. To avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This keeps the `DBSIZE` operation O(1), and the same is done for O(1) computation of the number of expires.
## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is only similar (not improved), because of the extra overhead of finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information is stored in it.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
|
|
|
} else {
|
|
|
|
cursor = dictScan(ht, cursor, scanCallback, &data);
|
|
|
|
}
|
2023-06-27 21:43:46 +08:00
|
|
|
} while (cursor && maxiterations-- && data.sampled < count);
|
Fix zuiFind crash / RM_ScanKey hang on SET object listpack encoding (#11581)
In #11290, we added listpack encoding for SET object.
But forgot to support it in zuiFind, causing ZINTER, ZINTERSTORE,
ZINTERCARD, ZDIFF, ZDIFFSTORE to crash.
And forgot to support it in RM_ScanKey, causing it to hang.
This PR adds support for SET listpack in zuiFind and in RM_ScanKey,
and adds tests for related commands to cover this case.
Other changes:
- There is no reason for zuiFind to go into the internals of the SET.
It can simply use setTypeIsMember and not care about the encoding.
- Remove the `#include "intset.h"` from server.h to reduce the chance of
accidental intset API use.
- Move setTypeAddAux, setTypeRemoveAux and setTypeIsMemberAux
interfaces to the header.
- In scanGenericCommand, use setTypeInitIterator and setTypeNext
to handle OBJ_SET scan.
- In RM_ScanKey, improve hash scan mode to use lpGetValue like zset,
so they can share code and get better performance.
The zuiFind part fixes #11578
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2022-12-09 23:08:01 +08:00
|
|
|
} else if (o->type == OBJ_SET) {
|
|
|
|
char *str;
|
2023-06-27 21:43:46 +08:00
|
|
|
char buf[LONG_STR_SIZE];
|
Fix zuiFind crash / RM_ScanKey hang on SET object listpack encoding (#11581)
In #11290, we added listpack encoding for SET object.
But forgot to support it in zuiFind, causing ZINTER, ZINTERSTORE,
ZINTERCARD, ZDIFF, ZDIFFSTORE to crash.
And forgot to support it in RM_ScanKey, causing it to hang.
This PR adds support for SET listpack in zuiFind and in RM_ScanKey,
and adds tests for related commands to cover this case.
Other changes:
- There is no reason for zuiFind to go into the internals of the SET.
It can simply use setTypeIsMember and not care about the encoding.
- Remove the `#include "intset.h"` from server.h to reduce the chance of
accidental intset API use.
- Move setTypeAddAux, setTypeRemoveAux and setTypeIsMemberAux
interfaces to the header.
- In scanGenericCommand, use setTypeInitIterator and setTypeNext
to handle OBJ_SET scan.
- In RM_ScanKey, improve hash scan mode to use lpGetValue like zset,
so they can share code and get better performance.
The zuiFind part fixes #11578
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2022-12-09 23:08:01 +08:00
|
|
|
size_t len;
|
|
|
|
int64_t llele;
|
|
|
|
setTypeIterator *si = setTypeInitIterator(o);
|
|
|
|
while (setTypeNext(si, &str, &len, &llele) != -1) {
|
|
|
|
if (str == NULL) {
|
2023-06-27 21:43:46 +08:00
|
|
|
len = ll2string(buf, sizeof(buf), llele);
|
|
|
|
}
|
|
|
|
char *key = str ? str : buf;
|
|
|
|
if (use_pattern && !stringmatchlen(pat, sdslen(pat), key, len, 0)) {
|
|
|
|
continue;
|
Fix zuiFind crash / RM_ScanKey hang on SET object listpack encoding (#11581)
In #11290, we added listpack encoding for SET object.
But forgot to support it in zuiFind, causing ZINTER, ZINTERSTORE,
ZINTERCARD, ZDIFF, ZDIFFSTORE to crash.
And forgot to support it in RM_ScanKey, causing it to hang.
This PR adds support for SET listpack in zuiFind and in RM_ScanKey,
and adds tests for related commands to cover this case.
Other changes:
- There is no reason for zuiFind to go into the internals of the SET.
It can simply use setTypeIsMember and not care about the encoding.
- Remove the `#include "intset.h"` from server.h to reduce the chance of
accidental intset API use.
- Move setTypeAddAux, setTypeRemoveAux and setTypeIsMemberAux
interfaces to the header.
- In scanGenericCommand, use setTypeInitIterator and setTypeNext
to handle OBJ_SET scan.
- In RM_ScanKey, improve hash scan mode to use lpGetValue like zset,
so they can share code and get better performance.
The zuiFind part fixes #11578
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2022-12-09 23:08:01 +08:00
|
|
|
}
|
2023-06-27 21:43:46 +08:00
|
|
|
listAddNodeTail(keys, sdsnewlen(key, len));
|
Fix zuiFind crash / RM_ScanKey hang on SET object listpack encoding (#11581)
In #11290, we added listpack encoding for SET object.
But forgot to support it in zuiFind, causing ZINTER, ZINTERSTORE,
ZINTERCARD, ZDIFF, ZDIFFSTORE to crash.
And forgot to support it in RM_ScanKey, causing it to hang.
This PR adds support for SET listpack in zuiFind and in RM_ScanKey,
and adds tests for related commands to cover this case.
Other changes:
- There is no reason for zuiFind to go into the internals of the SET.
It can simply use setTypeIsMember and not care about the encoding.
- Remove the `#include "intset.h"` from server.h to reduce the chance of
accidental intset API use.
- Move setTypeAddAux, setTypeRemoveAux and setTypeIsMemberAux
interfaces to the header.
- In scanGenericCommand, use setTypeInitIterator and setTypeNext
to handle OBJ_SET scan.
- In RM_ScanKey, improve hash scan mode to use lpGetValue like zset,
so they can share code and get better performance.
The zuiFind part fixes #11578
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2022-12-09 23:08:01 +08:00
|
|
|
}
|
|
|
|
setTypeReleaseIterator(si);
|
2013-11-05 15:32:21 +01:00
|
|
|
cursor = 0;
|
Fix zuiFind crash / RM_ScanKey hang on SET object listpack encoding (#11581)
In #11290, we added listpack encoding for SET object.
But forgot to support it in zuiFind, causing ZINTER, ZINTERSTORE,
ZINTERCARD, ZDIFF, ZDIFFSTORE to crash.
And forgot to support it in RM_ScanKey, causing it to hang.
This PR adds support for SET listpack in zuiFind and in RM_ScanKey,
and adds tests for related commands to cover this case.
Other changes:
- There is no reason for zuiFind to go into the internals of the SET.
It can simply use setTypeIsMember and not care about the encoding.
- Remove the `#include "intset.h"` from server.h to reduce the chance of
accidental intset API use.
- Move setTypeAddAux, setTypeRemoveAux and setTypeIsMemberAux
interfaces to the header.
- In scanGenericCommand, use setTypeInitIterator and setTypeNext
to handle OBJ_SET scan.
- In RM_ScanKey, improve hash scan mode to use lpGetValue like zset,
so they can share code and get better performance.
The zuiFind part fixes #11578
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2022-12-09 23:08:01 +08:00
|
|
|
} else if ((o->type == OBJ_HASH || o->type == OBJ_ZSET) && o->encoding == OBJ_ENCODING_LISTPACK) {
|
2021-08-10 14:18:49 +08:00
|
|
|
unsigned char *p = lpFirst(o->ptr);
|
2023-06-27 21:43:46 +08:00
|
|
|
unsigned char *str;
|
|
|
|
int64_t len;
|
2021-08-10 14:18:49 +08:00
|
|
|
unsigned char intbuf[LP_INTBUF_SIZE];
|
|
|
|
|
|
|
|
while (p) {
|
2023-06-27 21:43:46 +08:00
|
|
|
str = lpGet(p, &len, intbuf);
|
|
|
|
/* point to the value */
|
|
|
|
p = lpNext(o->ptr, p);
|
|
|
|
if (use_pattern && !stringmatchlen(pat, sdslen(pat), (char *)str, len, 0)) {
|
|
|
|
/* jump to the next key/val pair */
|
|
|
|
p = lpNext(o->ptr, p);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
/* add key object */
|
|
|
|
listAddNodeTail(keys, sdsnewlen(str, len));
|
|
|
|
/* add value object */
|
2024-05-07 14:39:28 +08:00
|
|
|
if (!only_keys) {
|
2024-01-31 02:32:58 +08:00
|
|
|
str = lpGet(p, &len, intbuf);
|
|
|
|
listAddNodeTail(keys, sdsnewlen(str, len));
|
|
|
|
}
|
2023-06-27 21:43:46 +08:00
|
|
|
p = lpNext(o->ptr, p);
|
2021-08-10 14:18:49 +08:00
|
|
|
}
|
|
|
|
cursor = 0;
|
2013-10-28 11:11:34 +01:00
|
|
|
} else {
|
2015-07-27 09:41:48 +02:00
|
|
|
serverPanic("Not handled encoding in SCAN.");
|
2013-10-28 11:11:34 +01:00
|
|
|
}
|
2012-07-09 01:00:26 -07:00
|
|
|
|
2023-06-27 21:43:46 +08:00
|
|
|
/* Step 3: Filter the expired keys */
|
|
|
|
if (o == NULL && listLength(keys)) {
|
|
|
|
robj kobj;
|
|
|
|
listIter li;
|
|
|
|
listNode *ln;
|
|
|
|
listRewind(keys, &li);
|
|
|
|
while ((ln = listNext(&li))) {
|
|
|
|
sds key = listNodeValue(ln);
|
|
|
|
initStaticStringObject(kobj, key);
|
2024-02-26 18:50:04 +08:00
|
|
|
if (expireIfNeeded(c->db, &kobj, 0) != KEY_VALID) {
|
2023-06-27 21:43:46 +08:00
|
|
|
listDelNode(keys, ln);
|
2013-10-28 11:11:34 +01:00
|
|
|
}
|
2012-07-09 01:00:26 -07:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-10-28 11:11:34 +01:00
|
|
|
/* Step 4: Reply to the client. */
|
2018-11-09 12:59:00 +01:00
|
|
|
addReplyArrayLen(c, 2);
|
2014-08-13 11:44:38 +02:00
|
|
|
addReplyBulkLongLong(c, cursor);
|
2012-07-09 01:00:26 -07:00
|
|
|
|
2018-11-09 12:59:00 +01:00
|
|
|
addReplyArrayLen(c, listLength(keys));
|
2013-10-25 11:54:37 +02:00
|
|
|
while ((node = listFirst(keys)) != NULL) {
|
2023-06-27 21:43:46 +08:00
|
|
|
sds key = listNodeValue(node);
|
|
|
|
addReplyBulkCBuffer(c, key, sdslen(key));
|
2013-10-25 11:54:37 +02:00
|
|
|
listDelNode(keys, node);
|
2012-07-09 01:00:26 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
listRelease(keys);
|
|
|
|
}
|
|
|
|
|
2013-10-28 11:11:34 +01:00
|
|
|
/* The SCAN command completely relies on scanGenericCommand. */
|
2015-07-26 15:20:46 +02:00
|
|
|
void scanCommand(client *c) {
|
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from the random bucket, but also to select a random dictionary first. Fairness is a major concern here, as keys can be unevenly distributed across the slots. To address this we introduced a binary index tree over the per-slot key counts. With that data structure we can efficiently find a random slot using binary search, in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index tree that is used for random key selection: the index allows us to find the slot that holds a specific key index. For example, if there are 10 keys in slot 0, we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSBs of the cursor so it can be passed around between the client and the server. This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as the cursor value. The plan, however, is to not document this as defined behavior. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although in practice we don't believe this is an issue.
* Checksum calculation optimizations - during command execution we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the slot id cached in the client during command execution. All operations that access random keys should either pass in the known slot or recompute it.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (we could approximate the number of keys per slot, but it wouldn't be precise). To address this we've added additional metadata to the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. To avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This keeps the `DBSIZE` operation O(1), and the same is done for O(1) computation of the number of expires.
## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is only similar (not improved), because of the extra overhead of finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information is stored in it.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
|
|
|
unsigned long long cursor;
|
2015-07-26 23:17:55 +02:00
|
|
|
if (parseScanCursorOrReply(c, c->argv[1], &cursor) == C_ERR) return;
|
2013-11-05 15:47:50 +01:00
|
|
|
scanGenericCommand(c, NULL, cursor);
|
2013-10-28 11:11:34 +01:00
|
|
|
}
|
|
|
|
|
2015-07-26 15:20:46 +02:00
|
|
|
void dbsizeCommand(client *c) {
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they were allocated in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
addReplyLongLong(c, kvstoreSize(c->db->keys));
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2015-07-26 15:20:46 +02:00
|
|
|
void lastsaveCommand(client *c) {
|
2010-09-02 14:30:56 +02:00
|
|
|
addReplyLongLong(c, server.lastsave);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2019-06-10 17:41:44 +01:00
|
|
|
void typeCommand(client *c) {
|
|
|
|
robj *o;
|
|
|
|
o = lookupKeyReadWithFlags(c->db, c->argv[1], LOOKUP_NOTOUCH);
|
2019-07-08 11:04:37 +01:00
|
|
|
addReplyStatus(c, getObjectTypeName(o));
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2024-04-02 22:36:53 +09:00
|
|
|
/* SHUTDOWN [[NOSAVE | SAVE] [NOW] [FORCE] | ABORT] */
|
2015-07-26 15:20:46 +02:00
|
|
|
void shutdownCommand(client *c) {
|
2022-01-02 08:50:15 +01:00
|
|
|
int flags = SHUTDOWN_NOFLAGS;
|
|
|
|
int abort = 0;
|
|
|
|
for (int i = 1; i < c->argc; i++) {
|
|
|
|
if (!strcasecmp(c->argv[i]->ptr, "nosave")) {
|
2015-07-27 09:41:48 +02:00
|
|
|
flags |= SHUTDOWN_NOSAVE;
|
2022-01-02 08:50:15 +01:00
|
|
|
} else if (!strcasecmp(c->argv[i]->ptr, "save")) {
|
2015-07-27 09:41:48 +02:00
|
|
|
flags |= SHUTDOWN_SAVE;
|
2022-01-02 08:50:15 +01:00
|
|
|
} else if (!strcasecmp(c->argv[i]->ptr, "now")) {
|
|
|
|
flags |= SHUTDOWN_NOW;
|
|
|
|
} else if (!strcasecmp(c->argv[i]->ptr, "force")) {
|
|
|
|
flags |= SHUTDOWN_FORCE;
|
|
|
|
} else if (!strcasecmp(c->argv[i]->ptr, "abort")) {
|
|
|
|
abort = 1;
|
2011-11-18 14:10:48 +01:00
|
|
|
} else {
|
2020-12-23 19:06:25 -08:00
|
|
|
addReplyErrorObject(c, shared.syntaxerr);
|
2011-11-18 14:10:48 +01:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
2022-01-02 08:50:15 +01:00
|
|
|
if ((abort && flags != SHUTDOWN_NOFLAGS) || (flags & SHUTDOWN_NOSAVE && flags & SHUTDOWN_SAVE)) {
|
|
|
|
/* Illegal combo. */
|
|
|
|
addReplyErrorObject(c, shared.syntaxerr);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (abort) {
|
|
|
|
if (abortShutdown() == C_OK)
|
|
|
|
addReply(c, shared.ok);
|
|
|
|
else
|
|
|
|
addReplyError(c, "No shutdown in progress.");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2024-07-01 02:33:10 +08:00
|
|
|
if (!(flags & SHUTDOWN_NOW) && c->flag.deny_blocking) {
|
2022-01-02 08:50:15 +01:00
|
|
|
addReplyError(c, "SHUTDOWN without NOW or ABORT isn't allowed for DENY BLOCKING client");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
Fix broken protocol in MISCONF error, RM_Yield bugs, RM_Call(EVAL) OOM check bug, and new RM_Call checks. (#10786)
* Fix broken protocol when redis can't persist to RDB (general commands, not
modules), excessive newline. regression of #10372 (7.0 RC3)
* Fix broken protocol when Redis can't persist to AOF (modules and
scripts), missing newline.
* Fix bug in OOM check of EVAL scripts called from RM_Call.
set the cached OOM state for scripts before executing module commands too,
so that it can serve scripts that are executed by modules.
i.e. in the past EVAL executed by RM_Call could have either falsely
failed or falsely succeeded because of a wrong cached OOM state flag.
* Fix bugs with RM_Yield:
1. SHUTDOWN should only accept the NOSAVE mode
2. Avoid eviction during yield command processing.
3. Avoid processing master client commands while yielding from another client
* Add two more checks to RM_Call script mode.
1. READONLY You can't write against a read only replica
2. MASTERDOWN Link with MASTER is down and `replica-serve-stale-data` is set to `no`
* Add new RM_Call flag to let redis automatically refuse `deny-oom` commands
while over the memory limit.
* Add tests to cover various errors from Scripts, Modules, Modules
calling scripts, and Modules calling commands in script mode.
Add tests:
* Looks like the MISCONF error was completely uncovered by the tests,
add tests for it, including from scripts, and modules
* Add tests for NOREPLICAS from scripts
* Add tests for the various errors in module RM_Call, including RM_Call that
calls EVAL, and RM_call in "eval mode". that includes:
NOREPLICAS, READONLY, MASTERDOWN, MISCONF
2022-06-01 13:04:22 +03:00
|
|
|
if (!(flags & SHUTDOWN_NOSAVE) && isInsideYieldingLongCommand()) {
|
2022-01-02 08:50:15 +01:00
|
|
|
/* Script timed out. Shutdown allowed only with the NOSAVE flag. See
|
|
|
|
* also processCommand where these errors are returned. */
|
2022-01-20 09:05:53 +02:00
|
|
|
if (server.busy_module_yield_flags && server.busy_module_yield_reply) {
|
|
|
|
addReplyErrorFormat(c, "-BUSY %s", server.busy_module_yield_reply);
|
|
|
|
} else if (server.busy_module_yield_flags) {
|
|
|
|
addReplyErrorObject(c, shared.slowmoduleerr);
|
|
|
|
} else if (scriptIsEval()) {
|
2022-01-02 08:50:15 +01:00
|
|
|
addReplyErrorObject(c, shared.slowevalerr);
|
2022-01-20 09:05:53 +02:00
|
|
|
} else {
|
2022-01-02 08:50:15 +01:00
|
|
|
addReplyErrorObject(c, shared.slowscripterr);
|
2022-01-20 09:05:53 +02:00
|
|
|
}
|
2022-01-02 08:50:15 +01:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
reprocess command when client is unblocked on keys (#11012)
*TL;DR*
---------------------------------------
Following the discussion over the issue [#7551](https://github.com/redis/redis/issues/7551)
We decided to refactor the client blocking code to eliminate some of the code duplications
and to rebuild the infrastructure better for future key blocking cases.
*In this PR*
---------------------------------------
1. reprocess the command once a client becomes unblocked on a key (instead of running
custom code for the unblocked path that's different from the one that would have run if
blocking wasn't needed)
2. eliminate some (now) irrelevant code for handling unblocking lists/zsets/streams etc...
3. modify some tests to intercept the error in cases of error on reprocess after unblock (see
details in the notes section below)
4. replace '$' on the client argv with current stream id. Since once we reprocess the stream
XREAD we need to read from the last msg and not wait for new msg in order to prevent
endless block loop.
5. Added statistics to the info "Clients" section to report the:
* `total_blocking_keys` - number of blocking keys
* `total_blocking_keys_on_nokey` - number of blocking keys which have at least 1 client
which would like
to be unblocked on when the key is deleted.
6. Avoid expiring unblocked key during unblock. Previously we used to lookup the unblocked key
which might have been expired during the lookup. Now we lookup the key using NOTOUCH and
NOEXPIRE to avoid deleting it at this point, so propagating commands in blocked.c is no longer needed.
7. deprecated command flags. We decided to remove the CMD_CALL_STATS and CMD_CALL_SLOWLOG
and make an explicit verification in the call() function in order to decide if stats update should take place.
This should simplify the logic and also mitigate existing issues: for example module calls which are
triggered as part of AOF loading might still report stats even though they are called during AOF loading.
*Behavior changes*
---------------------------------------------------
1. As this implementation prevents writing dedicated code handling unblocked streams/lists/zsets,
since we now re-process the command once the client is unblocked some errors will be reported differently.
The old implementation used to issue
``UNBLOCKED the stream key no longer exists``
in the following cases:
- The stream key has been deleted (ie. calling DEL)
- The stream and group existed but the key type was changed by overriding it (ie. with set command)
- The key no longer exists after we swapdb with a db which does not contain this key
- After swapdb when the new db has this key but with a different type.
In the new implementation the reported errors will be the same as if the command was processed after effect:
**NOGROUP** - in case key no longer exists, or **WRONGTYPE** in case the key was overridden with a different type.
2. Reprocessing the command means that some checks will be reevaluated once the
client is unblocked.
For example, ACL rules might change since the command originally was executed and
will fail once the client is unblocked.
Another example is OOM condition checks which might enable the command to run and
block but fail the command reprocess once the client is unblocked.
3. One of the changes in this PR is that no command stats are being updated once the
command is blocked (all stats will be updated once the client is unblocked). This implies
that when we have many clients blocked, users will no longer be able to get that information
from the command stats. However the information can still be gathered from the client list.
**Client blocking**
---------------------------------------------------
the blocking on key will still be triggered the same way as it is done today.
in order to block the current client on list of keys, the call to
blockForKeys will still need to be made which will perform the same as it is today:
* add the client to the list of blocked clients on each key
* keep the key with a matching list node (position in the global blocking clients list for that key)
in the client private blocking key dict.
* flag the client with CLIENT_BLOCKED
* update blocking statistics
* register the client on the timeout table
**Key Unblock**
---------------------------------------------------
Unblocking a specific key will be triggered (same as today) by calling signalKeyAsReady.
the implementation in that part will stay the same as today - adding the key to the global readyList.
The reason to maintain the readyList (as opposed to iterating over all clients blocked on the specific key)
is in order to keep the signal operation as short as possible, since it is called during the command processing.
The main change is that instead of going through a dedicated code path that operates the blocked command
we will just call processPendingCommandsAndResetClient.
**ClientUnblock (keys)**
---------------------------------------------------
1. Unblocking clients on keys will be triggered after command is
processed and during the beforeSleep
2. the general schema is:
3. For each key *k* in the readyList:
```
For each client *c* which is blocked on *k*:
in case either:
1. *k* exists AND the *k* type matches the current client blocking type
OR
2. *k* exists and *c* is blocked on module command
OR
3. *k* does not exists and *c* was blocked with the flag
unblock_on_deleted_key
do:
1. remove the client from the list of clients blocked on this key
2. remove the blocking list node from the client blocking key dict
3. remove the client from the timeout list
4. queue the client on the unblocked_clients list
5. *NEW*: call processCommandAndResetClient(c);
```
*NOTE:* for module blocked clients we will still call the moduleUnblockClientByHandle
which will queue the client for processing in moduleUnblockedClients list.
**Process Unblocked clients**
---------------------------------------------------
The process of all unblocked clients is done in the beforeSleep and no change is planned
in that part.
The general schema will be:
For each client *c* in server.unblocked_clients:
* remove client from the server.unblocked_clients
* set back the client readHandler
* continue processing the pending command and input buffer.
*Some notes regarding the new implementation*
---------------------------------------------------
1. Although it was proposed, it is currently difficult to remove the
read handler from the client while it is blocked.
The reason is that a blocked client should be unblocked when it is
disconnected, or we might consume data into void.
2. While this PR mainly keep the current blocking logic as-is, there
might be some future additions to the infrastructure that we would
like to have:
- allow non-preemptive blocking of client - sometimes we can think
that a new kind of blocking can be expected not to be preempted. For
example lets imagine we hold some keys on disk and when a command
needs to process them it will block until the keys are uploaded.
in this case we will want the client to not disconnect or be
unblocked until the process is completed (remove the client read
handler, prevent client timeout, disable unblock via debug command etc...).
- allow generic blocking based on command declared keys - we might
want to add a hook before command processing to check if any of the
declared keys require the command to block. this way it would be
easier to add new kinds of key-based blocking mechanisms.
Co-authored-by: Oran Agra <oran@redislabs.com>
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
2023-01-01 23:35:42 +02:00
|
|
|
blockClientShutdown(c);
|
2024-09-08 16:26:56 +08:00
|
|
|
if (prepareForShutdown(c, flags) == C_OK) exit(0);
|
2022-01-02 08:50:15 +01:00
|
|
|
/* If we're here, then shutdown is ongoing (the client is still blocked) or
|
|
|
|
* failed (the client has received an error). */
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2015-07-26 15:20:46 +02:00
|
|
|
void renameGenericCommand(client *c, int nx) {
|
2010-06-22 00:07:48 +02:00
|
|
|
robj *o;
|
2011-11-09 16:51:19 +01:00
|
|
|
long long expire;
|
2015-02-23 11:24:24 +01:00
|
|
|
int samekey = 0;
|
2010-06-22 00:07:48 +02:00
|
|
|
|
2015-02-23 11:24:24 +01:00
|
|
|
/* When source and dest key is the same, no operation is performed,
|
|
|
|
* if the key exists, however we still return an error on unexisting key. */
|
|
|
|
if (sdscmp(c->argv[1]->ptr, c->argv[2]->ptr) == 0) samekey = 1;
|
2010-06-22 00:07:48 +02:00
|
|
|
|
|
|
|
if ((o = lookupKeyWriteOrReply(c, c->argv[1], shared.nokeyerr)) == NULL) return;
|
|
|
|
|
2015-02-23 11:24:24 +01:00
|
|
|
if (samekey) {
|
|
|
|
addReply(c, nx ? shared.czero : shared.ok);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2010-06-22 00:07:48 +02:00
|
|
|
incrRefCount(o);
|
2011-10-10 15:21:19 +02:00
|
|
|
expire = getExpire(c->db, c->argv[1]);
|
2011-06-14 15:34:27 +02:00
|
|
|
if (lookupKeyWrite(c->db, c->argv[2]) != NULL) {
|
2010-06-22 00:07:48 +02:00
|
|
|
if (nx) {
|
|
|
|
decrRefCount(o);
|
|
|
|
addReply(c, shared.czero);
|
|
|
|
return;
|
|
|
|
}
|
2013-01-23 16:44:45 +01:00
|
|
|
/* Overwrite: delete the old key before creating the new one
|
|
|
|
* with the same name. */
|
2011-10-10 15:21:19 +02:00
|
|
|
dbDelete(c->db, c->argv[2]);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
2011-10-10 15:21:19 +02:00
|
|
|
dbAdd(c->db, c->argv[2], o);
|
Replication: fix the infamous key leakage of writable slaves + EXPIRE.
BACKGROUND AND USE CASE
Redis slaves are normally write only, however they support a "writable"
mode which is very handy when scaling reads on slaves that actually
need write operations in order to access data. For instance imagine
having slaves replicating certain Sets keys from the master. When
accessing the data on the slave, we want to perform intersections between
such Sets values. However we don't want to intersect each time: to cache
the intersection for some time often is a good idea.
To do so, it is possible to setup a slave as a writable slave, and
perform the intersection on the slave side, perhaps setting a TTL on the
resulting key so that it will expire after some time.
THE BUG
Problem: in order to have a consistent replication, expiring of keys in
Redis replication is up to the master, that synthesize DEL operations to
send in the replication stream. However slaves logically expire keys
by hiding them from read attempts from clients so that if the master did
not promptly sent a DEL, the client still see logically expired keys
as non existing.
Because slaves don't actively expire keys by actually evicting them but
just masking from the POV of read operations, if a key is created in a
writable slave, and an expire is set, the key will be leaked forever:
1. No DEL will be received from the master, which does not know about
such a key at all.
2. No eviction will be performed by the slave, since it needs to disable
eviction because it's up to masters, otherwise consistency of data is
lost.
THE FIX
In order to fix the problem, the slave should be able to tag keys that
were created in the slave side and have an expire set in some way.
My solution involved using a unique additional dictionary created by
the writable slave only if needed. The dictionary is obviously keyed by
the key name that we need to track: all the keys that are set with an
expire directly by a client writing to the slave are tracked.
The value in the dictionary is a bitmap of all the DBs where such a key
name needs to be tracked, so that we can use a single dictionary to track
keys in all the DBs used by the slave (actually this limits the solution
to the first 64 DBs, but the default with Redis is to use 16 DBs).
This solution allows paying only a small complexity and CPU penalty,
which is actually zero when the feature is not used. The slave-side
eviction is encapsulated in code which is not coupled with the rest of
the Redis core, if not for the hook to track the keys.
TODO
I'm doing the first smoke tests to see if the feature works as expected:
so far so good. Unit tests should be added before merging into the
4.0 branch.
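A minimal sketch of the tracking structure described above (illustrative only: all names are invented, the real implementation lives in expire.c and uses a Redis dict rather than this toy list). The key name maps to a bitmap of DB ids, limited to the first 64 DBs:
```
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Toy stand-in for the tracking dictionary: maps a key name to a bitmap of
 * the DB ids (first 64 DBs only) in which a writable slave set an expire. */
typedef struct trackedKey {
    char *name;
    uint64_t dbs;            /* Bit i set => slave-side expire exists in DB i. */
    struct trackedKey *next; /* Toy chaining; the real code uses a Redis dict. */
} trackedKey;

static trackedKey *tracked_head = NULL;

/* Called when a client writing to the slave sets an expire on 'name' in 'dbid'. */
void trackSlaveExpire(const char *name, int dbid) {
    if (dbid >= 64) return;  /* The single bitmap only covers the first 64 DBs. */
    for (trackedKey *t = tracked_head; t; t = t->next) {
        if (strcmp(t->name, name) == 0) { t->dbs |= 1ULL << dbid; return; }
    }
    trackedKey *t = malloc(sizeof(*t));
    t->name = strdup(name);
    t->dbs = 1ULL << dbid;
    t->next = tracked_head;
    tracked_head = t;
}

/* The slave-side expire cycle only considers keys present in this structure. */
int keyHasSlaveExpire(const char *name, int dbid) {
    for (trackedKey *t = tracked_head; t; t = t->next)
        if (strcmp(t->name, name) == 0) return (int)((t->dbs >> dbid) & 1);
    return 0;
}
```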
2016-12-13 10:20:06 +01:00
|
|
|
if (expire != -1) setExpire(c, c->db, c->argv[2], expire);
|
2010-06-22 00:07:48 +02:00
|
|
|
dbDelete(c->db, c->argv[1]);
|
2020-04-21 10:51:46 +02:00
|
|
|
signalModifiedKey(c, c->db, c->argv[1]);
|
|
|
|
signalModifiedKey(c, c->db, c->argv[2]);
|
2015-07-27 09:41:48 +02:00
|
|
|
notifyKeyspaceEvent(NOTIFY_GENERIC, "rename_from", c->argv[1], c->db->id);
|
|
|
|
notifyKeyspaceEvent(NOTIFY_GENERIC, "rename_to", c->argv[2], c->db->id);
|
2010-06-22 00:07:48 +02:00
|
|
|
server.dirty++;
|
|
|
|
addReply(c, nx ? shared.cone : shared.ok);
|
|
|
|
}
|
|
|
|
|
2015-07-26 15:20:46 +02:00
|
|
|
void renameCommand(client *c) {
|
2010-06-22 00:07:48 +02:00
|
|
|
renameGenericCommand(c, 0);
|
|
|
|
}
|
|
|
|
|
2015-07-26 15:20:46 +02:00
|
|
|
void renamenxCommand(client *c) {
|
2010-06-22 00:07:48 +02:00
|
|
|
renameGenericCommand(c, 1);
|
|
|
|
}
|
|
|
|
|
2015-07-26 15:20:46 +02:00
|
|
|
void moveCommand(client *c) {
|
2010-06-22 00:07:48 +02:00
|
|
|
robj *o;
|
2024-04-03 10:02:43 +07:00
|
|
|
serverDb *src, *dst;
|
Improve dbid range check for SELECT, MOVE, COPY (#8085)
SELECT used to read the index into a `long` variable, and then pass it to a function
that takes an `int`, possibly causing an overflow before the range check.
Now all these commands use a better and cleaner range check, which also results in
a slight change of the error response in case of an invalid database index.
SELECT:
in the past it would have returned either `-ERR invalid DB index` (if not a number),
or `-ERR DB index is out of range` (if not between 1..16 or alike).
now it'll return either `-ERR value is out of range` (if not a number), or
`-ERR value is out of range, value must between -2147483648 and 2147483647`
(if not in the range for an int), or `-ERR DB index is out of range`
(if not between 0..16 or alike)
MOVE:
in the past it would only fail with `-ERR index out of range` no matter the reason.
now return the same errors as the new ones for SELECT mentioned above.
(i.e. unlike for SELECT even for a value like 17 we changed the error message)
COPY:
doesn't really matter how it behaved in the past (new command), new behavior is
like the above two.
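A simplified illustration of the difference (invented names, not the actual command code): reading the index into a long and truncating it to int can wrap before the range check, while checking the int range up front rejects such values immediately.
```
/* Simplified illustration of the overflow described above; not the actual
 * SELECT/MOVE/COPY code. */
#include <limits.h>

/* Old style: the caller parsed a long and the check happens after truncation. */
int dbIndexValidOld(long index, int num_dbs) {
    int id = (int)index;               /* May wrap for values outside int range... */
    return id >= 0 && id < num_dbs;    /* ...and then wrongly pass this check.     */
}

/* New style: reject anything outside the int range before using it. */
int dbIndexValidNew(long long index, int num_dbs) {
    if (index < INT_MIN || index > INT_MAX) return 0; /* "value is out of range"  */
    int id = (int)index;
    return id >= 0 && id < num_dbs;    /* otherwise "DB index is out of range"    */
}
```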
2020-12-02 03:41:26 +08:00
|
|
|
int srcid, dbid;
|
|
|
|
long long expire;
|
2010-06-22 00:07:48 +02:00
|
|
|
|
2011-03-29 17:51:15 +02:00
|
|
|
if (server.cluster_enabled) {
|
|
|
|
addReplyError(c, "MOVE is not allowed in cluster mode");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2010-06-22 00:07:48 +02:00
|
|
|
/* Obtain source and target DB pointers */
|
|
|
|
src = c->db;
|
|
|
|
srcid = c->db->id;
|
2014-08-01 14:55:24 -04:00
|
|
|
|
Improve dbid range check for SELECT, MOVE, COPY (#8085)
SELECT used to read the index into a `long` variable, and then pass it to a function
that takes an `int`, possibly causing an overflow before the range check.
Now all these commands use a better and cleaner range check, which also results in
a slight change of the error response in case of an invalid database index.
SELECT:
in the past it would have returned either `-ERR invalid DB index` (if not a number),
or `-ERR DB index is out of range` (if not between 1..16 or alike).
now it'll return either `-ERR value is out of range` (if not a number), or
`-ERR value is out of range, value must between -2147483648 and 2147483647`
(if not in the range for an int), or `-ERR DB index is out of range`
(if not between 0..16 or alike)
MOVE:
in the past it would only fail with `-ERR index out of range` no matter the reason.
now return the same errors as the new ones for SELECT mentioned above.
(i.e. unlike for SELECT even for a value like 17 we changed the error message)
COPY:
doesn't really matter how it behaved in the past (new command), new behavior is
like the above two.
2020-12-02 03:41:26 +08:00
|
|
|
if (getIntFromObjectOrReply(c, c->argv[2], &dbid, NULL) != C_OK) return;
|
|
|
|
|
|
|
|
if (selectDb(c, dbid) == C_ERR) {
|
|
|
|
addReplyError(c, "DB index is out of range");
|
2010-06-22 00:07:48 +02:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
dst = c->db;
|
|
|
|
selectDb(c, srcid); /* Back to the source DB */
|
|
|
|
|
|
|
|
/* If the user is moving using as target the same
|
|
|
|
* DB as the source DB it is probably an error. */
|
|
|
|
if (src == dst) {
|
2020-12-23 19:06:25 -08:00
|
|
|
addReplyErrorObject(c, shared.sameobjecterr);
|
2010-06-22 00:07:48 +02:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Check if the element exists and get a reference */
|
|
|
|
o = lookupKeyWrite(c->db, c->argv[1]);
|
|
|
|
if (!o) {
|
|
|
|
addReply(c, shared.czero);
|
|
|
|
return;
|
|
|
|
}
|
2015-09-14 12:28:22 +02:00
|
|
|
expire = getExpire(c->db, c->argv[1]);
|
2010-06-22 00:07:48 +02:00
|
|
|
|
2011-06-14 15:34:27 +02:00
|
|
|
/* Return zero if the key already exists in the target DB */
|
|
|
|
if (lookupKeyWrite(dst, c->argv[1]) != NULL) {
|
2010-06-22 00:07:48 +02:00
|
|
|
addReply(c, shared.czero);
|
|
|
|
return;
|
|
|
|
}
|
2011-06-14 15:34:27 +02:00
|
|
|
dbAdd(dst, c->argv[1], o);
|
Replication: fix the infamous key leakage of writable slaves + EXPIRE.
BACKGROUND AND USE CASE
Redis slaves are normally write only, however they support a "writable"
mode which is very handy when scaling reads on slaves that actually
need write operations in order to access data. For instance imagine
having slaves replicating certain Sets keys from the master. When
accessing the data on the slave, we want to perform intersections between
such Sets values. However we don't want to intersect each time: to cache
the intersection for some time often is a good idea.
To do so, it is possible to setup a slave as a writable slave, and
perform the intersection on the slave side, perhaps setting a TTL on the
resulting key so that it will expire after some time.
THE BUG
Problem: in order to have a consistent replication, expiring of keys in
Redis replication is up to the master, which synthesizes DEL operations to
send in the replication stream. However slaves logically expire keys
by hiding them from read attempts from clients, so that if the master did
not promptly send a DEL, the client still sees logically expired keys
as non existing.
Because slaves don't actively expire keys by actually evicting them but
just mask them from the POV of read operations, if a key is created in a
writable slave, and an expire is set, the key will be leaked forever:
1. No DEL will be received from the master, which does not know about
such a key at all.
2. No eviction will be performed by the slave, since it needs to disable
eviction because it's up to masters, otherwise consistency of data is
lost.
THE FIX
In order to fix the problem, the slave should be able to tag keys that
were created in the slave side and have an expire set in some way.
My solution involved using a unique additional dictionary created by
the writable slave only if needed. The dictionary is obviously keyed by
the key name that we need to track: all the keys that are set with an
expire directly by a client writing to the slave are tracked.
The value in the dictionary is a bitmap of all the DBs where such a key
name needs to be tracked, so that we can use a single dictionary to track
keys in all the DBs used by the slave (actually this limits the solution
to the first 64 DBs, but the default with Redis is to use 16 DBs).
This solution allows paying only a small complexity and CPU penalty,
which is actually zero when the feature is not used. The slave-side
eviction is encapsulated in code which is not coupled with the rest of
the Redis core, if not for the hook to track the keys.
TODO
I'm doing the first smoke tests to see if the feature works as expected:
so far so good. Unit tests should be added before merging into the
4.0 branch.
2016-12-13 10:20:06 +01:00
|
|
|
if (expire != -1) setExpire(c, dst, c->argv[1], expire);
|
2010-06-22 00:07:48 +02:00
|
|
|
incrRefCount(o);
|
|
|
|
|
|
|
|
/* OK! key moved, free the entry in the source DB */
|
|
|
|
dbDelete(src, c->argv[1]);
|
2020-04-21 10:51:46 +02:00
|
|
|
signalModifiedKey(c, src, c->argv[1]);
|
|
|
|
signalModifiedKey(c, dst, c->argv[1]);
|
2019-11-19 00:02:45 -05:00
|
|
|
notifyKeyspaceEvent(NOTIFY_GENERIC, "move_from", c->argv[1], src->id);
|
|
|
|
notifyKeyspaceEvent(NOTIFY_GENERIC, "move_to", c->argv[1], dst->id);
|
|
|
|
|
2010-06-22 00:07:48 +02:00
|
|
|
server.dirty++;
|
|
|
|
addReply(c, shared.cone);
|
|
|
|
}
|
|
|
|
|
2020-11-17 19:03:05 +09:00
|
|
|
void copyCommand(client *c) {
|
|
|
|
robj *o;
|
2024-04-03 10:02:43 +07:00
|
|
|
serverDb *src, *dst;
|
Improve dbid range check for SELECT, MOVE, COPY (#8085)
SELECT used to read the index into a `long` variable, and then pass it to a function
that takes an `int`, possibly causing an overflow before the range check.
Now all these commands use a better and cleaner range check, which also results in
a slight change of the error response in case of an invalid database index.
SELECT:
in the past it would have returned either `-ERR invalid DB index` (if not a number),
or `-ERR DB index is out of range` (if not between 1..16 or alike).
now it'll return either `-ERR value is out of range` (if not a number), or
`-ERR value is out of range, value must between -2147483648 and 2147483647`
(if not in the range for an int), or `-ERR DB index is out of range`
(if not between 0..16 or alike)
MOVE:
in the past it would only fail with `-ERR index out of range` no matter the reason.
now return the same errors as the new ones for SELECT mentioned above.
(i.e. unlike for SELECT even for a value like 17 we changed the error message)
COPY:
doesn't really matter how it behaved in the past (new command), new behavior is
like the above two.
2020-12-02 03:41:26 +08:00
|
|
|
int srcid, dbid;
|
|
|
|
long long expire;
|
2020-11-17 19:03:05 +09:00
|
|
|
int j, replace = 0, delete = 0;
|
|
|
|
|
|
|
|
/* Obtain source and target DB pointers
|
|
|
|
* Default target DB is the same as the source DB
|
|
|
|
* Parse the REPLACE option and targetDB option. */
|
|
|
|
src = c->db;
|
|
|
|
dst = c->db;
|
|
|
|
srcid = c->db->id;
|
|
|
|
dbid = c->db->id;
|
|
|
|
for (j = 3; j < c->argc; j++) {
|
|
|
|
int additional = c->argc - j - 1;
|
|
|
|
if (!strcasecmp(c->argv[j]->ptr, "replace")) {
|
|
|
|
replace = 1;
|
|
|
|
} else if (!strcasecmp(c->argv[j]->ptr, "db") && additional >= 1) {
|
Improve dbid range check for SELECT, MOVE, COPY (#8085)
SELECT used to read the index into a `long` variable, and then pass it to a function
that takes an `int`, possibly causing an overflow before the range check.
Now all these commands use a better and cleaner range check, which also results in
a slight change of the error response in case of an invalid database index.
SELECT:
in the past it would have returned either `-ERR invalid DB index` (if not a number),
or `-ERR DB index is out of range` (if not between 1..16 or alike).
now it'll return either `-ERR value is out of range` (if not a number), or
`-ERR value is out of range, value must between -2147483648 and 2147483647`
(if not in the range for an int), or `-ERR DB index is out of range`
(if not between 0..16 or alike)
MOVE:
in the past it would only fail with `-ERR index out of range` no matter the reason.
now return the same errors as the new ones for SELECT mentioned above.
(i.e. unlike for SELECT even for a value like 17 we changed the error message)
COPY:
doesn't really matter how it behaved in the past (new command), new behavior is
like the above two.
2020-12-02 03:41:26 +08:00
|
|
|
if (getIntFromObjectOrReply(c, c->argv[j + 1], &dbid, NULL) != C_OK) return;
|
|
|
|
|
|
|
|
if (selectDb(c, dbid) == C_ERR) {
|
|
|
|
addReplyError(c, "DB index is out of range");
|
2020-11-17 19:03:05 +09:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
dst = c->db;
|
|
|
|
selectDb(c, srcid); /* Back to the source DB */
|
|
|
|
j++; /* Consume additional arg. */
|
|
|
|
} else {
|
2020-12-25 18:27:30 +02:00
|
|
|
addReplyErrorObject(c, shared.syntaxerr);
|
2020-11-17 19:03:05 +09:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if ((server.cluster_enabled == 1) && (srcid != 0 || dbid != 0)) {
|
|
|
|
addReplyError(c, "Copying to another database is not allowed in cluster mode");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* If the user select the same DB as
|
|
|
|
* the source DB and using newkey as the same key
|
|
|
|
* it is probably an error. */
|
|
|
|
robj *key = c->argv[1];
|
|
|
|
robj *newkey = c->argv[2];
|
|
|
|
if (src == dst && (sdscmp(key->ptr, newkey->ptr) == 0)) {
|
2020-12-23 19:06:25 -08:00
|
|
|
addReplyErrorObject(c, shared.sameobjecterr);
|
2020-11-17 19:03:05 +09:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Check if the element exists and get a reference */
|
2021-11-28 10:26:28 +01:00
|
|
|
o = lookupKeyRead(c->db, key);
|
2020-11-17 19:03:05 +09:00
|
|
|
if (!o) {
|
|
|
|
addReply(c, shared.czero);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
expire = getExpire(c->db, key);
|
|
|
|
|
|
|
|
/* Return zero if the key already exists in the target DB.
|
|
|
|
* If REPLACE option is selected, delete newkey from targetDB. */
|
|
|
|
if (lookupKeyWrite(dst, newkey) != NULL) {
|
|
|
|
if (replace) {
|
|
|
|
delete = 1;
|
|
|
|
} else {
|
|
|
|
addReply(c, shared.czero);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Duplicate object according to object's type. */
|
|
|
|
robj *newobj;
|
|
|
|
switch (o->type) {
|
|
|
|
case OBJ_STRING: newobj = dupStringObject(o); break;
|
|
|
|
case OBJ_LIST: newobj = listTypeDup(o); break;
|
|
|
|
case OBJ_SET: newobj = setTypeDup(o); break;
|
|
|
|
case OBJ_ZSET: newobj = zsetDup(o); break;
|
|
|
|
case OBJ_HASH: newobj = hashTypeDup(o); break;
|
|
|
|
case OBJ_STREAM: newobj = streamDup(o); break;
|
|
|
|
case OBJ_MODULE:
|
2021-06-16 14:45:49 +08:00
|
|
|
newobj = moduleTypeDupOrReply(c, key, newkey, dst->id, o);
|
2020-12-09 20:22:45 +02:00
|
|
|
if (!newobj) return;
|
|
|
|
break;
|
2020-11-17 19:03:05 +09:00
|
|
|
default: addReplyError(c, "unknown type object"); return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (delete) {
|
|
|
|
dbDelete(dst, newkey);
|
|
|
|
}
|
|
|
|
|
|
|
|
dbAdd(dst, newkey, newobj);
|
|
|
|
if (expire != -1) setExpire(c, dst, newkey, expire);
|
|
|
|
|
|
|
|
/* OK! key copied */
|
|
|
|
signalModifiedKey(c, dst, c->argv[2]);
|
|
|
|
notifyKeyspaceEvent(NOTIFY_GENERIC, "copy_to", c->argv[2], dst->id);
|
|
|
|
|
|
|
|
server.dirty++;
|
|
|
|
addReply(c, shared.cone);
|
|
|
|
}
|
|
|
|
|
SWAPDB command.
This new command swaps two Redis databases, so that immediately all the
clients connected to a given DB will see the data of the other DB, and
the other way around. Example:
SWAPDB 0 1
This will swap DB 0 with DB 1. All the clients connected with DB 0 will
immediately see the new data, exactly like all the clients connected
with DB 1 will see the data that was formerly of DB 0.
MOTIVATION AND HISTORY
---
The command was recently demanded by Pedro Melo, but was suggested in
the past multiple times, and always refused by me.
The reason why it was asked: Imagine you have clients operating in DB 0.
At the same time, you create a new version of the dataset in DB 1.
When the new version of the dataset is available, you immediately want
to swap the two views, so that the clients will transparently use the
new version of the data. At the same time you'll likely destroy the
DB 1 dataset (that contains the old data) and start to build a new
version, to repeat the process.
This is an interesting pattern, but the reason why I always opposed to
implement this, was that FLUSHDB was a blocking command in Redis before
Redis 4.0 improvements. Now we have FLUSHDB ASYNC that releases the
old data in O(1) from the point of view of the client, to reclaim memory
incrementally in a different thread.
At this point, the pattern can really be supported without latency
spikes, so I'm providing this implementation for the users to comment.
If a very compelling argument is made against this new command,
it may be removed.
BEHAVIOR WITH BLOCKING OPERATIONS
---
If a client is blocking for a list in a given DB, after the swap it will
still be blocked in the same DB ID, since this is the most logical thing
to do: if I was blocked for a list push to list "foo", even after the
swap I still want an LPUSH to reach the key "foo" in the same DB in order
to unblock.
However an interesting thing happens when a client is, for instance,
blocked waiting for new elements in list "foo" of DB 0. Then the DB
0 and 1 are swapped with SWAPDB. However the DB 1 happened to have
a list called "foo" containing elements. When this happens, this
implementation can correctly unblock the client.
It is possible that there are subtle corner cases that are not covered
in the implementation, but since the command is self-contained from the
POV of the implementation and the Redis core, it cannot cause anything
bad if not used.
Tests and documentation are yet to be provided.
2016-10-14 15:28:04 +02:00
|
|
|

/* Helper function for dbSwapDatabases(): scans the list of keys that have
 * one or more blocked clients for B[LR]POP or other blocking commands
 * and signals the keys as ready if they are of the right type. See the
 * comment where the function is used for more info. */
void scanDatabaseForReadyKeys(serverDb *db) {
    dictEntry *de;
    dictIterator *di = dictGetSafeIterator(db->blocking_keys);
    while ((de = dictNext(di)) != NULL) {
        robj *key = dictGetKey(de);
        dictEntry *kde = dbFind(db, key->ptr);
        if (kde) {
            robj *value = dictGetVal(kde);
            signalKeyAsReady(db, key, value->type);
        }
    }
    dictReleaseIterator(di);
}

/* Since we are unblocking XREADGROUP clients in the event the
 * key was deleted/overwritten we must do the same in case the
 * database was flushed/swapped. */
void scanDatabaseForDeletedKeys(serverDb *emptied, serverDb *replaced_with) {
    dictEntry *de;
    dictIterator *di = dictGetSafeIterator(emptied->blocking_keys);
    while ((de = dictNext(di)) != NULL) {
        robj *key = dictGetKey(de);
        int existed = 0, exists = 0;
        int original_type = -1, curr_type = -1;

        dictEntry *kde = dbFind(emptied, key->ptr);
        if (kde) {
            robj *value = dictGetVal(kde);
            original_type = value->type;
            existed = 1;
        }

        if (replaced_with) {
            kde = dbFind(replaced_with, key->ptr);
            if (kde) {
                robj *value = dictGetVal(kde);
                curr_type = value->type;
                exists = 1;
            }
        }

        /* We want to try to unblock any client using a blocking XREADGROUP */
        if ((existed && !exists) || original_type != curr_type) signalDeletedKeyAsReady(emptied, key, original_type);
    }
    dictReleaseIterator(di);
}
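
/* Note (illustrative, not part of the original source): 'replaced_with' may
 * be NULL. A flush-style caller could invoke the helper as, e.g.:
 *
 *     scanDatabaseForDeletedKeys(db, NULL);
 *
 * in which case every key that previously had blocked clients is signalled
 * as deleted, since nothing replaces it. The swap path below passes both
 * databases, in both directions, so type changes are detected as well. */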

/* Swap two databases at runtime so that all clients will magically see
 * the new database even if already connected. Note that the client
 * structure c->db points to a given DB, so we need to be smarter and
 * swap the underlying referenced structures, otherwise we would need
 * to fix all the references to the DB structure.
 *
 * Returns C_ERR if at least one of the DB ids is out of range, otherwise
 * C_OK is returned. */
int dbSwapDatabases(int id1, int id2) {
    if (id1 < 0 || id1 >= server.dbnum || id2 < 0 || id2 >= server.dbnum) return C_ERR;
    if (id1 == id2) return C_OK;
    serverDb aux = server.db[id1];
    serverDb *db1 = &server.db[id1], *db2 = &server.db[id2];

    /* Swapping the databases should make transactions fail if there is any
     * client watching keys. */
    touchAllWatchedKeysInDb(db1, db2);
    touchAllWatchedKeysInDb(db2, db1);

    /* Try to unblock any XREADGROUP clients if the key no longer exists. */
    scanDatabaseForDeletedKeys(db1, db2);
    scanDatabaseForDeletedKeys(db2, db1);

    /* Swap hash tables. Note that we don't swap blocking_keys,
     * ready_keys and watched_keys, since we want clients to
     * remain in the same DB they were. */
    db1->keys = db2->keys;
    db1->expires = db2->expires;
    db1->avg_ttl = db2->avg_ttl;
    db1->expires_cursor = db2->expires_cursor;

    db2->keys = aux.keys;
    db2->expires = aux.expires;
    db2->avg_ttl = aux.avg_ttl;
    db2->expires_cursor = aux.expires_cursor;

    /* Now we need to handle clients blocked on lists: as an effect
     * of swapping the two DBs, a client that was waiting for list
     * X in a given DB, may now actually be unblocked if X happens
     * to exist in the new version of the DB, after the swap.
     *
     * However normally we only do this check for efficiency reasons
     * in dbAdd() when a list is created. So here we need to rescan
     * the list of clients blocked on lists and signal lists as ready
     * if needed. */
    scanDatabaseForReadyKeys(db1);
    scanDatabaseForReadyKeys(db2);

    return C_OK;
}
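
/* A minimal, illustrative sketch of a caller (hypothetical name, not the
 * actual SWAPDB command implementation): parse the two database indexes from
 * the client arguments, delegate the swap to dbSwapDatabases(), and reply
 * according to its C_OK / C_ERR result. */
void swapdbCommandSketch(client *c) {
    int id1, id2;

    /* Both arguments must parse as integers. */
    if (getIntFromObjectOrReply(c, c->argv[1], &id1,
        "invalid first DB index") != C_OK) return;
    if (getIntFromObjectOrReply(c, c->argv[2], &id2,
        "invalid second DB index") != C_OK) return;

    /* dbSwapDatabases() rejects indexes that are out of range of server.dbnum. */
    if (dbSwapDatabases(id1, id2) == C_ERR) {
        addReplyError(c, "DB index is out of range");
        return;
    }

    /* The dataset changed from the point of view of both DBs. */
    server.dirty++;
    addReply(c, shared.ok);
}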
/* Logically, this discards (flushes) the old main database, and apply the newly loaded
|
|
|
|
* database (temp) as the main (active) database, the actual freeing of old database
|
|
|
|
* (which will now be placed in the temp one) is done later. */
|
2024-04-03 10:02:43 +07:00
|
|
|
void swapMainDbWithTempDb(serverDb *tempDb) {
|
Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323)
For diskless replication in swapdb mode, considering we already spend replica memory
having a backup of current db to restore in case of failure, we can have the following benefits
by instead swapping database only in case we succeeded in transferring db from master:
- Avoid `LOADING` response during failed and successful synchronization for cases where the
replica is already up and running with data.
- Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could be implemented also for disk replication with similar benefits if consumers are willing
to spend the extra memory usage.
General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode will only kick in if the replica is syncing from a master that has the same
repl-id the one it had before. i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading` to differentiate from the blocking loading
- Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
and the tempDb that is passed around.
- Because this is affecting replicas only, we assume that if they are not readonly and write commands
during replication, they are lost after SYNC same way as before, but we're still denying CONFIG SET
here anyways to avoid complications.
Considerations for review:
- We have many cases where server.loading flag is used and even though I tried my best, there may
be cases where async_loading should be checked as well and cases where it shouldn't (would require
very good understanding of whole code)
- Several places that had different behavior depending on the loading flag where actually meant to just
handle commands coming from the AOF client differently than ones coming from real clients, changed
to check CLIENT_ID_AOF instead.
**Additional for Release Notes**
- Bugfix - server.dirty was not incremented for any kind of diskless replication, as effect it wouldn't
contribute on triggering next database SAVE
- New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
to allow modules to declare they support the diskless replication with async loading (when absent, we fall
back to disk-based loading).
Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2021-11-04 09:46:50 +01:00
|
|
|
for (int i = 0; i < server.dbnum; i++) {
|
2024-04-03 10:02:43 +07:00
|
|
|
serverDb aux = server.db[i];
|
|
|
|
serverDb *activedb = &server.db[i], *newdb = &tempDb[i];
|
Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323)
For diskless replication in swapdb mode, considering we already spend replica memory
having a backup of current db to restore in case of failure, we can have the following benefits
by instead swapping database only in case we succeeded in transferring db from master:
- Avoid `LOADING` response during failed and successful synchronization for cases where the
replica is already up and running with data.
- Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could be implemented also for disk replication with similar benefits if consumers are willing
to spend the extra memory usage.
General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode will only kick in if the replica is syncing from a master that has the same
repl-id the one it had before. i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading` to differentiate from the blocking loading
- Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
and the tempDb that is passed around.
- Because this is affecting replicas only, we assume that if they are not readonly and write commands
during replication, they are lost after SYNC same way as before, but we're still denying CONFIG SET
here anyways to avoid complications.
Considerations for review:
- We have many cases where server.loading flag is used and even though I tried my best, there may
be cases where async_loading should be checked as well and cases where it shouldn't (would require
very good understanding of whole code)
- Several places that had different behavior depending on the loading flag where actually meant to just
handle commands coming from the AOF client differently than ones coming from real clients, changed
to check CLIENT_ID_AOF instead.
**Additional for Release Notes**
- Bugfix - server.dirty was not incremented for any kind of diskless replication, as effect it wouldn't
contribute on triggering next database SAVE
- New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
to allow modules to declare they support the diskless replication with async loading (when absent, we fall
back to disk-based loading).
Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2021-11-04 09:46:50 +01:00
|
|
|
|
2022-02-22 11:09:46 +01:00
|
|
|
/* Swapping databases should make transaction fail if there is any
|
|
|
|
* client watching keys. */
|
|
|
|
touchAllWatchedKeysInDb(activedb, newdb);
|
|
|
|
|
2022-03-08 17:10:36 +02:00
|
|
|
/* Try to unblock any XREADGROUP clients if the key no longer exists. */
|
2022-10-18 18:50:02 +02:00
|
|
|
scanDatabaseForDeletedKeys(activedb, newdb);
|
2022-03-08 17:10:36 +02:00
|
|
|
|
    /* Swap hash tables. Note that we don't swap blocking_keys,
     * ready_keys and watched_keys, since clients
     * remain in the same DB they were. */
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness; the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning up some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server).
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`.
3. The keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, when
they were allocated in advance), but they won't release them when
the last key is deleted.
Worth mentioning that, due to this change, the reply of DEBUG
HTSTATS changed in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
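To make the shape of the new structure concrete, here is a rough conceptual sketch; the field names are illustrative assumptions drawn from the description above, not the actual kvstore definition:
```
/* Conceptual sketch only; the real kvstore differs in detail. */
struct kvstoreSketch {
    dict **dicts;                 /* array of dictionaries: 1 in standalone, 16384 in cluster mode */
    int num_dicts;                /* how many entries 'dicts' holds */
    unsigned long long key_count; /* total keys across all dicts, so DBSIZE stays O(1) */
    list *rehashing;              /* dicts currently being incrementally rehashed */
};
```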
    activedb->keys = newdb->keys;
    activedb->expires = newdb->expires;
    activedb->avg_ttl = newdb->avg_ttl;
    activedb->expires_cursor = newdb->expires_cursor;

    newdb->keys = aux.keys;
    newdb->expires = aux.expires;
    newdb->avg_ttl = aux.avg_ttl;
    newdb->expires_cursor = aux.expires_cursor;

    /* Now we need to handle clients blocked on lists: as an effect
     * of swapping the two DBs, a client that was waiting for list
     * X in a given DB, may now actually be unblocked if X happens
     * to exist in the new version of the DB, after the swap.
     *
     * However normally we only do this check for efficiency reasons
     * in dbAdd() when a list is created. So here we need to rescan
     * the list of clients blocked on lists and signal lists as ready
     * if needed. */
    scanDatabaseForReadyKeys(activedb);
    }

    trackingInvalidateKeysOnFlush(1);
    flushReplicaKeysWithExpireList();
}

SWAPDB command.
This new command swaps two Redis databases, so that immediately all the
clients connected to a given DB will see the data of the other DB, and
the other way around. Example:
SWAPDB 0 1
This will swap DB 0 with DB 1. All the clients connected with DB 0 will
immediately see the new data, exactly like all the clients connected
with DB 1 will see the data that was formerly of DB 0.
MOTIVATION AND HISTORY
---
The command was recently demanded by Pedro Melo, but was suggested in
the past multiple times, and always refused by me.
The reason why it was asked: Imagine you have clients operating in DB 0.
At the same time, you create a new version of the dataset in DB 1.
When the new version of the dataset is available, you immediately want
to swap the two views, so that the clients will transparently use the
new version of the data. At the same time you'll likely destroy the
DB 1 dataset (that contains the old data) and start to build a new
version, to repeat the process.
This is an interesting pattern, but the reason why I always opposed
implementing this was that FLUSHDB was a blocking command in Redis before
the Redis 4.0 improvements. Now we have FLUSHDB ASYNC that releases the
old data in O(1) from the point of view of the client, to reclaim memory
incrementally in a different thread.
At this point, the pattern can really be supported without latency
spikes, so I'm providing this implementation for users to comment on.
In case a very compelling argument is made against this new command,
it may be removed.
BEHAVIOR WITH BLOCKING OPERATIONS
---
If a client is blocked on a list in a given DB, after the swap it will
still be blocked in the same DB ID, since this is the most logical thing
to do: if I was blocked for a list push to list "foo", even after the
swap I still want an LPUSH to reach the key "foo" in the same DB in order
to unblock.
However an interesting thing happens when a client is, for instance,
blocked waiting for new elements in list "foo" of DB 0. Then DB
0 and DB 1 are swapped with SWAPDB, and DB 1 happens to have
a list called "foo" containing elements. When this happens, this
implementation can correctly unblock the client.
It is possible that there are subtle corner cases that are not covered
by the implementation, but since the command is self-contained from the
POV of the implementation and the Redis core, it cannot cause anything
bad if not used.
Tests and documentation are yet to be provided.
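As a sketch of the pattern described above, a client (here using hiredis; names and error handling are illustrative only, not part of this repository) could rebuild the dataset in DB 1 and then flip the views:
```
#include <hiredis/hiredis.h>

/* Sketch only: rebuild the dataset in DB 1, swap it with DB 0, then reclaim
 * the old data asynchronously, as described above. */
int publishNewDataset(redisContext *ctx) {
    redisReply *r;

    r = redisCommand(ctx, "SELECT 1");        /* build the new view in DB 1 */
    if (r == NULL) return -1;
    freeReplyObject(r);

    r = redisCommand(ctx, "SET dataset:version 2"); /* ... populate DB 1 here ... */
    if (r == NULL) return -1;
    freeReplyObject(r);

    r = redisCommand(ctx, "SWAPDB 0 1");      /* clients on DB 0 now see the new data */
    if (r == NULL) return -1;
    freeReplyObject(r);

    r = redisCommand(ctx, "FLUSHDB ASYNC");   /* this connection is on DB 1, which now holds the old data */
    if (r == NULL) return -1;
    freeReplyObject(r);
    return 0;
}
```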
/* SWAPDB db1 db2 */
void swapdbCommand(client *c) {
    int id1, id2;

    /* Not allowed in cluster mode: we have just DB 0 there. */
    if (server.cluster_enabled) {
        addReplyError(c, "SWAPDB is not allowed in cluster mode");
        return;
    }

    /* Get the two DBs indexes. */
    if (getIntFromObjectOrReply(c, c->argv[1], &id1, "invalid first DB index") != C_OK) return;
    if (getIntFromObjectOrReply(c, c->argv[2], &id2, "invalid second DB index") != C_OK) return;

    /* Swap... */
    if (dbSwapDatabases(id1, id2) == C_ERR) {
        addReplyError(c, "DB index is out of range");
        return;
    } else {
        ValkeyModuleSwapDbInfo si = {VALKEYMODULE_SWAPDBINFO_VERSION, id1, id2};
        moduleFireServerEvent(VALKEYMODULE_EVENT_SWAPDB, 0, &si);
        server.dirty++;
        addReply(c, shared.ok);
    }
}

/*-----------------------------------------------------------------------------
 * Expires API
 *----------------------------------------------------------------------------*/

int removeExpire(serverDb *db, robj *key) {
Optimize the per slot dictionary by checking for cluster mode earlier (#995)
While doing some profiling, I noticed that getKeySlot() was a fairly
large part (~0.7%) of samples when running perf with a high-pipeline
standalone workload. I think this is because we do a very late check for
server.cluster_mode: we first call getKeySlot() and then call
calculateKeySlot(). (calculateKeySlot was surprisingly not automatically
inlined; we were doing a jump into it and then immediately returning
zero.) We then also do useless work in the form of caching zero in
client->slot, which will further mess with cache lines.
So, this PR tries to accomplish a few things.
1) The usage of the `slot` name made a lot more sense before the
introduction of the kvstore. Now with kvstore, we call this the database
index, so all the references to slot in standalone are no longer really
accurate.
2) Pull the cluster mode check all the way out of getKeySlot(), so
hopefully a bit more performant (see the hedged sketch of
getKVStoreIndexForKey() after removeExpire() below).
3) Remove calculateKeySlot() as independent from getKeySlot().
calculateKeySlot used to have 3 call sites outside of db.c, which
warranted its own function. It's now only called in two places,
pubsub.c and networking.c.
I ran some profiling and saw about ~0.3% improvement, but don't really
trust it, because you'll see a much higher (~2%) variance in test runs
just from how the branch predictions change with a new memory
layout. Running perf again showed no samples in getKeySlot() and a
reduction in samples in lookupKey(), so maybe this will help a little
bit.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Ping Xie <pingxie@google.com>
    return kvstoreDictDelete(db->expires, getKVStoreIndexForKey(key->ptr), key->ptr) == DICT_OK;
}

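The commit message above refers to getKVStoreIndexForKey(); as a hedged sketch of the early cluster-mode check it describes (not the actual implementation), the idea amounts to something like:
```
/* Sketch only, following the description above: in standalone mode every key
 * maps to dict index 0; in cluster mode the index is the key's hash slot. */
static int getKVStoreIndexForKeySketch(sds key) {
    if (!server.cluster_enabled) return 0;          /* early check, no hashing needed */
    return (int)keyHashSlot(key, (int)sdslen(key)); /* CRC16-based slot, 0..16383 */
}
```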
Replication: fix the infamous key leakage of writable slaves + EXPIRE.
BACKGROUND AND USE CASE
Redis slaves are normally read only, however they support a "writable"
mode which is very handy when scaling reads on slaves that actually
need write operations in order to access data. For instance imagine
having slaves replicating certain Sets keys from the master. When
accessing the data on the slave, we want to perform intersections between
such Sets values. However we don't want to intersect each time: caching
the intersection for some time is often a good idea.
To do so, it is possible to set up a slave as a writable slave, and
perform the intersection on the slave side, perhaps setting a TTL on the
resulting key so that it will expire after some time.
THE BUG
Problem: in order to have a consistent replication, expiring of keys in
Redis replication is up to the master, which synthesizes DEL operations to
send in the replication stream. However slaves logically expire keys
by hiding them from read attempts from clients, so that if the master did
not promptly send a DEL, clients still see logically expired keys
as non existing.
Because slaves don't actively expire keys by actually evicting them but
just mask them from the POV of read operations, if a key is created in a
writable slave, and an expire is set, the key will be leaked forever:
1. No DEL will be received from the master, which does not know about
such a key at all.
2. No eviction will be performed by the slave, since it needs to disable
eviction because it's up to masters, otherwise consistency of data is
lost.
THE FIX
In order to fix the problem, the slave should be able to tag keys that
were created on the slave side and have an expire set in some way.
My solution involved using a unique additional dictionary created by
the writable slave only if needed. The dictionary is obviously keyed by
the key name that we need to track: all the keys that are set with an
expire directly by a client writing to the slave are tracked.
The value in the dictionary is a bitmap of all the DBs where such a key
name needs to be tracked, so that we can use a single dictionary to track
keys in all the DBs used by the slave (actually this limits the solution
to the first 64 DBs, but the default with Redis is to use 16 DBs).
This solution costs only a small complexity and CPU penalty, which is
actually zero when the feature is not used. The slave-side
eviction is encapsulated in code which is not coupled with the rest of
the Redis core, except for the hook to track the keys.
TODO
I'm doing the first smoke tests to see if the feature works as expected:
so far so good. Unit tests should be added before merging into the
4.0 branch.
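A minimal sketch of the tracking scheme described above; the helper name is an assumption, and the real logic is reached from setExpire() below via rememberReplicaKeyWithExpire():
```
/* Sketch only: one dict keyed by key name (assuming a dict type without
 * keyDup); the value is a bitmap of the DB ids (0..63) in which that key
 * has a replica-side expire to remember. */
static void rememberKeyWithExpireSketch(dict *tracked, serverDb *db, robj *key) {
    if (db->id > 63) return; /* the scheme above only covers the first 64 DBs */

    dictEntry *de = dictAddOrFind(tracked, key->ptr);
    if (dictGetKey(de) == key->ptr) {
        /* Newly created entry: own a copy of the key name, start with an empty bitmap. */
        dictSetKey(tracked, de, sdsdup(key->ptr));
        dictSetUnsignedIntegerVal(de, 0);
    }
    uint64_t dbids = dictGetUnsignedIntegerVal(de) | ((uint64_t)1 << db->id);
    dictSetUnsignedIntegerVal(de, dbids);
}
```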
/* Set an expire to the specified key. If the expire is set in the context
 * of a user calling a command, 'c' is the client, otherwise 'c' is set
 * to NULL. The 'when' parameter is the absolute unix time in milliseconds
 * after which the key will no longer be considered valid. */
void setExpire(client *c, serverDb *db, robj *key, long long when) {
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode that are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be relatively small. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time; in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find a slot for a specific key index. For example if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between the client and the server. This has an interesting side effect: you'll now be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Slot calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the slot multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places; in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1); the same is kept for O(1) computation of the number of expires as well.
## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from us not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead for finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
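As a rough sketch of the cursor layout mentioned above (the bit width and helper names are assumptions for illustration, not the actual encoding in the source):
```
/* Sketch only: pack the slot (dict) id into the low bits of the SCAN cursor,
 * as described above. 16384 slots fit in 14 bits. */
#define CURSOR_SLOT_BITS 14

static unsigned long long packScanCursor(unsigned long long dict_cursor, int slot) {
    return (dict_cursor << CURSOR_SLOT_BITS) | (unsigned long long)slot;
}

static int scanCursorSlot(unsigned long long cursor) {
    return (int)(cursor & ((1ULL << CURSOR_SLOT_BITS) - 1));
}
```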
    dictEntry *kde, *de, *existing;

    /* Reuse the sds from the main dict in the expire dict */
    int dict_index = getKVStoreIndexForKey(key->ptr);
    kde = kvstoreDictFind(db->keys, dict_index, key->ptr);
    serverAssertWithInfo(NULL, key, kde != NULL);
    de = kvstoreDictAddRaw(db->expires, dict_index, dictGetKey(kde), &existing);
    if (existing) {
        dictSetSignedIntegerVal(existing, when);
    } else {
        dictSetSignedIntegerVal(de, when);
    }

    int writable_replica = server.primary_host && server.repl_replica_ro == 0;
    if (c && writable_replica && !c->flag.primary) rememberReplicaKeyWithExpire(db, key);
}

/* Return the expire time of the specified key, or -1 if no expire
 * is associated with this key (i.e. the key is non volatile) */
long long getExpireWithDictIndex(serverDb *db, robj *key, int dict_index) {
    dictEntry *de;

    if ((de = dbFindExpiresWithDictIndex(db, key->ptr, dict_index)) == NULL) return -1;

    return dictGetSignedIntegerVal(de);
}
|
|
|
|
|
2024-10-16 17:40:11 -07:00
|
|
|
/* Return the expire time of the specified key, or -1 if no expire
|
|
|
|
* is associated with this key (i.e. the key is non volatile) */
|
|
|
|
long long getExpire(serverDb *db, robj *key) {
|
|
|
|
int dict_index = getKVStoreIndexForKey(key->ptr);
|
|
|
|
return getExpireWithDictIndex(db, key, dict_index);
|
|
|
|
}
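For illustration only, a caller could turn the absolute expire time returned by getExpire() into a remaining TTL; a minimal sketch follows (the helper name is made up, and the real TTL/PTTL path is more involved).

```c
/* Sketch: remaining TTL in milliseconds, or -1 for a non-volatile key.
 * Relies on getExpire() above and commandTimeSnapshot() for "now". */
static long long remainingTtlMsSketch(serverDb *db, robj *key) {
    long long when = getExpire(db, key);
    if (when < 0) return -1;                  /* no expire associated with the key */
    long long ttl = when - commandTimeSnapshot();
    return ttl < 0 ? 0 : ttl;                 /* clamp keys that already expired */
}
```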
|
|
|
|
|
|
|
|
void deleteExpiredKeyAndPropagateWithDictIndex(serverDb *db, robj *keyobj, int dict_index) {
|
2021-08-05 16:09:24 +08:00
|
|
|
mstime_t expire_latency;
|
|
|
|
latencyStartMonitor(expire_latency);
|
2024-10-16 17:40:11 -07:00
|
|
|
dbGenericDeleteWithDictIndex(db, keyobj, server.lazyfree_lazy_expire, DB_FLAG_KEY_EXPIRED, dict_index);
|
2021-08-05 16:09:24 +08:00
|
|
|
latencyEndMonitor(expire_latency);
|
|
|
|
latencyAddSampleIfNeeded("expire-del", expire_latency);
|
2022-08-24 12:51:36 +03:00
|
|
|
notifyKeyspaceEvent(NOTIFY_EXPIRED, "expired", keyobj, db->id);
|
2021-08-05 16:09:24 +08:00
|
|
|
signalModifiedKey(NULL, db, keyobj);
|
Sort out mess around propagation and MULTI/EXEC (#9890)
The mess:
Some parts use alsoPropagate for late propagation, while others use an immediate one (propagate()),
causing edge cases, ugly/hacky code, and a tendency for bugs
The basic idea is that all commands are propagated via alsoPropagate (i.e. added to a list) and the
top-most call() is responsible for going over that list and actually propagating them (and wrapping
them in MULTI/EXEC if there's more than one command). This is done in the new function,
propagatePendingCommands.
Callers to propagatePendingCommands:
1. top-most call() (we want all nested call()s to add to the also_propagate array and just the top-most
one to propagate them) - via `afterCommand`
2. handleClientsBlockedOnKeys: it is out of call() context and it may propagate stuff - via `afterCommand`.
3. handleClientsBlockedOnKeys edge case: if the looked-up key is already expired, we will propagate the
expire but will not unblock any client so `afterCommand` isn't called. in that case, we have to propagate
the deletion explicitly.
4. cron stuff: active-expire and eviction may also propagate stuff
5. modules: the module API allows to propagate stuff from just about anywhere (timers, keyspace notifications,
threads). I could have tried to catch all the out-of-call-context places but it seemed easier to handle it in one
place: when we free the context. in the spirit of what was done in call(), only the top-most freeing of a module
context may cause propagation.
6. modules: when using a thread-safe ctx it's not clear when/if the ctx will be freed. we do know that the module
must lock the GIL before calling RM_Replicate/RM_Call so we propagate the pending commands when
releasing the GIL.
A "known limitation", which were actually a bug, was fixed because of this commit (see propagate.tcl):
When using a mix of RM_Call with `!` and RM_Replicate, the command would propagate out-of-order:
first all the commands from RM_Call, and then the ones from RM_Replicate
Another thing worth mentioning is that if, in the past, a client would issue a MULTI/EXEC with just one
write command the server would blindly propagate the MULTI/EXEC too, even though it's redundant.
not anymore.
This commit renames propagate() to propagateNow() in order to cause conflicts in pending PRs.
propagatePendingCommands is the only caller of propagateNow, which is now a static, internal helper function.
Optimizations:
1. alsoPropagate will not add stuff to also_propagate if there's no AOF and no replicas
2. alsoPropagate reallocs also_propagate exponentially, to save calls to memmove
Bugfixes:
1. CONFIG SET can create evictions, sending notifications which can cause dirty++ with modules.
we need to prevent it from propagating to AOF/replicas
2. We need to set current_client in RM_Call. buggy scenario:
- CONFIG SET maxmemory, eviction notifications, module hook calls RM_Call
- assertion in lookupKey crashes, because current_client has CONFIG SET, which isn't CMD_WRITE
3. minor: in eviction, call propagateDeletion after notification, like active-expire and all commands
(we always send a notification before propagating the command)
2021-12-22 23:03:48 +01:00
|
|
|
propagateDeletion(db, keyobj, server.lazyfree_lazy_expire);
|
2021-08-05 16:09:24 +08:00
|
|
|
server.stat_expiredkeys++;
|
|
|
|
}
|
|
|
|
|
2024-10-16 17:40:11 -07:00
|
|
|
/* Delete the specified expired key and propagate expire. */
|
|
|
|
void deleteExpiredKeyAndPropagate(serverDb *db, robj *keyobj) {
|
|
|
|
int dict_index = getKVStoreIndexForKey(keyobj->ptr);
|
|
|
|
deleteExpiredKeyAndPropagateWithDictIndex(db, keyobj, dict_index);
|
|
|
|
}
|
|
|
|
|
2024-08-31 22:39:07 +08:00
|
|
|
/* Delete the specified expired key that is being overwritten and propagate the DEL or UNLINK. */
|
|
|
|
void deleteExpiredKeyFromOverwriteAndPropagate(client *c, robj *keyobj) {
|
|
|
|
int deleted = dbGenericDelete(c->db, keyobj, server.lazyfree_lazy_expire, DB_FLAG_KEY_EXPIRED);
|
|
|
|
serverAssertWithInfo(c, keyobj, deleted);
|
|
|
|
server.dirty++;
|
|
|
|
|
|
|
|
/* Replicate/AOF this as an explicit DEL or UNLINK. */
|
|
|
|
robj *aux = server.lazyfree_lazy_expire ? shared.unlink : shared.del;
|
|
|
|
rewriteClientCommandVector(c, 2, aux, keyobj);
|
|
|
|
signalModifiedKey(c, c->db, keyobj);
|
|
|
|
notifyKeyspaceEvent(NOTIFY_GENERIC, "del", keyobj, c->db->id);
|
|
|
|
}
|
|
|
|
|
2023-10-24 05:10:03 -05:00
|
|
|
/* Propagate an implicit key deletion into replicas and the AOF file.
|
|
|
|
* When a key was deleted in the primary by eviction, expiration or a similar
|
|
|
|
 * mechanism, a DEL/UNLINK operation for this key is sent
|
|
|
|
* to all the replicas and the AOF file if enabled.
|
2010-08-02 18:13:39 +02:00
|
|
|
*
|
2023-10-24 05:10:03 -05:00
|
|
|
* This way the key deletion is centralized in one place, and since both
|
|
|
|
* AOF and the replication link guarantee operation ordering, everything
|
|
|
|
* will be consistent even if we allow write operations against deleted
|
2021-12-22 23:03:48 +01:00
|
|
|
* keys.
|
|
|
|
*
|
|
|
|
* This function may be called from:
|
|
|
|
* 1. Within call(): Example: Lazy-expire on key access.
|
|
|
|
* In this case the caller doesn't have to do anything
|
|
|
|
* because call() handles server.also_propagate(); or
|
2023-10-24 05:10:03 -05:00
|
|
|
* 2. Outside of call(): Example: Active-expire, eviction, slot ownership changed.
|
2021-12-22 23:03:48 +01:00
|
|
|
 * In this case the caller must remember to call
|
2023-02-14 08:33:21 +01:00
|
|
|
* postExecutionUnitOperations, preferably just after a
|
2023-10-24 05:10:03 -05:00
|
|
|
* single deletion batch, so that DEL/UNLINK will NOT be wrapped
|
2021-12-22 23:03:48 +01:00
|
|
|
* in MULTI/EXEC */
|
2024-04-03 10:02:43 +07:00
|
|
|
void propagateDeletion(serverDb *db, robj *key, int lazy) {
|
2010-08-02 18:13:39 +02:00
|
|
|
robj *argv[2];
|
|
|
|
|
2015-10-02 15:27:57 +02:00
|
|
|
argv[0] = lazy ? shared.unlink : shared.del;
|
2010-08-02 18:13:39 +02:00
|
|
|
argv[1] = key;
|
2012-02-04 08:58:37 +01:00
|
|
|
incrRefCount(argv[0]);
|
|
|
|
incrRefCount(argv[1]);
|
2010-08-02 18:13:39 +02:00
|
|
|
|
2023-10-24 05:10:03 -05:00
|
|
|
/* If the primary decided to delete a key we must propagate it to replicas no matter what.
|
2021-04-19 16:16:02 +02:00
|
|
|
* Even if module executed a command without asking for propagation. */
|
|
|
|
int prev_replication_allowed = server.replication_allowed;
|
|
|
|
server.replication_allowed = 1;
|
2021-12-22 23:03:48 +01:00
|
|
|
alsoPropagate(db->id, argv, 2, PROPAGATE_AOF | PROPAGATE_REPL);
|
2021-04-19 16:16:02 +02:00
|
|
|
server.replication_allowed = prev_replication_allowed;
|
2010-08-02 18:13:39 +02:00
|
|
|
|
2010-08-02 21:37:39 +02:00
|
|
|
decrRefCount(argv[0]);
|
|
|
|
decrRefCount(argv[1]);
|
2010-08-02 18:13:39 +02:00
|
|
|
}
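Per the comment above, a caller deleting keys outside of call() has to flush the queued propagation itself. Below is a hedged sketch of that pattern; the batch array and function name are illustrative, not the actual active-expire code.

```c
/* Sketch of the "outside of call()" pattern: each deletion queues a DEL/UNLINK
 * via alsoPropagate() (through propagateDeletion above), and the caller flushes
 * it right away so the commands are not wrapped together in MULTI/EXEC. */
static void expireKeysOutsideCallSketch(serverDb *db, robj **expired, int count) {
    for (int i = 0; i < count; i++) {
        deleteExpiredKeyAndPropagate(db, expired[i]);
        postExecutionUnitOperations(); /* propagate this deletion now */
    }
}
```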
|
|
|
|
|
2024-10-16 17:40:11 -07:00
|
|
|
int keyIsExpiredWithDictIndex(serverDb *db, robj *key, int dict_index) {
|
2022-10-18 18:55:49 +08:00
|
|
|
/* Don't expire anything while loading. It will be done later. */
|
|
|
|
if (server.loading) return 0;
|
|
|
|
|
2024-10-16 17:40:11 -07:00
|
|
|
mstime_t when = getExpireWithDictIndex(db, key, dict_index);
|
2019-11-06 09:57:29 +01:00
|
|
|
mstime_t now;
|
2018-10-19 18:00:57 +08:00
|
|
|
|
|
|
|
if (when < 0) return 0; /* No expire for this key */
|
|
|
|
|
2022-10-09 13:18:34 +08:00
|
|
|
now = commandTimeSnapshot();
|
2018-10-19 18:00:57 +08:00
|
|
|
|
2019-11-06 09:57:29 +01:00
|
|
|
/* The key expired if the current (virtual or real) time is greater
|
|
|
|
* than the expire time of the key. */
|
2018-10-19 18:00:57 +08:00
|
|
|
return now > when;
|
|
|
|
}
|
|
|
|
|
2024-10-16 17:40:11 -07:00
|
|
|
/* Check if the key is expired. */
|
|
|
|
int keyIsExpired(serverDb *db, robj *key) {
|
|
|
|
int dict_index = getKVStoreIndexForKey(key->ptr);
|
|
|
|
return keyIsExpiredWithDictIndex(db, key, dict_index);
|
|
|
|
}
|
|
|
|
|
|
|
|
keyStatus expireIfNeededWithDictIndex(serverDb *db, robj *key, int flags, int dict_index) {
|
2024-02-26 18:50:04 +08:00
|
|
|
if (server.lazy_expire_disabled) return KEY_VALID;
|
2024-10-16 17:40:11 -07:00
|
|
|
if (!keyIsExpiredWithDictIndex(db, key, dict_index)) return KEY_VALID;
|
2018-10-24 12:26:27 +02:00
|
|
|
|
2021-11-28 10:26:28 +01:00
|
|
|
/* If we are running in the context of a replica, instead of
|
2018-10-24 12:26:27 +02:00
|
|
|
* evicting the expired key from the database, we return ASAP:
|
2021-11-28 10:26:28 +01:00
|
|
|
* the replica key expiration is controlled by the primary that will
|
|
|
|
* send us synthesized DEL operations for expired keys. The
|
|
|
|
* exception is when write operations are performed on writable
|
|
|
|
* replicas.
|
2018-10-24 12:26:27 +02:00
|
|
|
*
|
|
|
|
* Still we try to return the right information to the caller,
|
2024-02-26 18:50:04 +08:00
|
|
|
* that is, KEY_VALID if we think the key should still be valid,
|
|
|
|
* KEY_EXPIRED if we think the key is expired but don't want to delete it at this time.
|
2021-11-28 10:26:28 +01:00
|
|
|
*
|
|
|
|
* When replicating commands from the primary, keys are never considered
|
|
|
|
* expired. */
|
|
|
|
if (server.primary_host != NULL) {
|
2024-07-01 02:33:10 +08:00
|
|
|
if (server.current_client && (server.current_client->flag.primary)) return KEY_VALID;
|
2024-02-26 18:50:04 +08:00
|
|
|
if (!(flags & EXPIRE_FORCE_DELETE_EXPIRED)) return KEY_EXPIRED;
|
2021-11-28 10:26:28 +01:00
|
|
|
}
|
2010-06-22 00:07:48 +02:00
|
|
|
|
2022-08-24 19:39:15 +03:00
|
|
|
/* In some cases we're explicitly instructed to return an indication of a
|
|
|
|
* missing key without actually deleting it, even on primaries. */
|
|
|
|
if (flags & EXPIRE_AVOID_DELETE_EXPIRED) return KEY_EXPIRED;
|
|
|
|
|
2022-10-27 11:57:04 +03:00
|
|
|
/* If 'expire' action is paused, for whatever reason, then don't expire any key.
|
|
|
|
* Typically, at the end of the pause we will properly expire the key OR we
|
|
|
|
* will have failed over and the new primary will send us the expire. */
|
2024-02-26 18:50:04 +08:00
|
|
|
if (isPausedActionsWithUpdate(PAUSE_ACTION_EXPIRE)) return KEY_EXPIRED;
|
2021-01-07 23:36:54 -08:00
|
|
|
|
2023-06-27 21:43:46 +08:00
|
|
|
/* The key needs to be converted from static to heap before being deleted. */
|
|
|
|
int static_key = key->refcount == OBJ_STATIC_REFCOUNT;
|
|
|
|
if (static_key) {
|
|
|
|
key = createStringObject(key->ptr, sdslen(key->ptr));
|
|
|
|
}
|
2010-06-22 00:07:48 +02:00
|
|
|
/* Delete the key */
|
2024-10-16 17:40:11 -07:00
|
|
|
deleteExpiredKeyAndPropagateWithDictIndex(db, key, dict_index);
|
2023-06-27 21:43:46 +08:00
|
|
|
if (static_key) {
|
|
|
|
decrRefCount(key);
|
|
|
|
}
|
2024-02-26 18:50:04 +08:00
|
|
|
return KEY_DELETED;
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2024-10-16 17:40:11 -07:00
|
|
|
/* This function is called when we are going to perform some operation
|
|
|
|
 * on a given key, but the key may already be logically expired even if
|
|
|
|
* it still exists in the database. The main way this function is called
|
|
|
|
 * is via the lookupKey*() family of functions.
|
|
|
|
*
|
|
|
|
* The behavior of the function depends on the replication role of the
|
|
|
|
* instance, because by default replicas do not delete expired keys. They
|
|
|
|
* wait for DELs from the primary for consistency matters. However even
|
|
|
|
* replicas will try to have a coherent return value for the function,
|
|
|
|
 * so that read commands executed on the replica side will be able to
|
|
|
|
 * behave as if the key is expired even if still present (because the
|
|
|
|
* primary has yet to propagate the DEL).
|
|
|
|
*
|
|
|
|
 * On the primary, as a side effect of finding a key which is expired, such
|
|
|
|
* key will be evicted from the database. Also this may trigger the
|
|
|
|
* propagation of a DEL/UNLINK command in AOF / replication stream.
|
|
|
|
*
|
|
|
|
* On replicas, this function does not delete expired keys by default, but
|
|
|
|
* it still returns KEY_EXPIRED if the key is logically expired. To force deletion
|
|
|
|
* of logically expired keys even on replicas, use the EXPIRE_FORCE_DELETE_EXPIRED
|
|
|
|
* flag. Note though that if the current client is executing
|
|
|
|
* replicated commands from the primary, keys are never considered expired.
|
|
|
|
*
|
|
|
|
* On the other hand, if you just want expiration check, but need to avoid
|
|
|
|
* the actual key deletion and propagation of the deletion, use the
|
|
|
|
* EXPIRE_AVOID_DELETE_EXPIRED flag.
|
|
|
|
*
|
|
|
|
* The return value of the function is KEY_VALID if the key is still valid.
|
|
|
|
* The function returns KEY_EXPIRED if the key is expired BUT not deleted,
|
|
|
|
* or returns KEY_DELETED if the key is expired and deleted. */
|
|
|
|
keyStatus expireIfNeeded(serverDb *db, robj *key, int flags) {
|
|
|
|
int dict_index = getKVStoreIndexForKey(key->ptr);
|
|
|
|
return expireIfNeededWithDictIndex(db, key, flags, dict_index);
|
|
|
|
}
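For illustration, a caller that only wants to know whether a key should be treated as missing, without triggering deletion or propagation, could use the flag documented above; the helper below is hypothetical.

```c
/* Sketch: pure expiration check via EXPIRE_AVOID_DELETE_EXPIRED; nothing is
 * deleted and nothing is propagated, we only look at the returned keyStatus. */
static int keyLooksMissingSketch(serverDb *db, robj *key) {
    keyStatus s = expireIfNeeded(db, key, EXPIRE_AVOID_DELETE_EXPIRED);
    return s != KEY_VALID; /* KEY_EXPIRED (or KEY_DELETED) => treat as missing */
}
```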
|
|
|
|
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it to a new data structure, `kvstore`. i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
/* CB passed to kvstoreExpand.
|
|
|
|
* The purpose is to skip expansion of unused dicts in cluster mode (all
|
|
|
|
* dicts not mapped to *my* slots) */
|
|
|
|
static int dbExpandSkipSlot(int slot) {
|
|
|
|
return !clusterNodeCoversSlot(getMyClusterNode(), slot);
|
|
|
|
}
|
|
|
|
|
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time, in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also instead of rehashing a single dictionary, cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from the random bucket, but also to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using same binary index that is used for random key selection, this index allows us to find a slot for a specific key index. For example if there are 10 keys in the slot 0, then we can quickly find a slot that contains 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to not only save the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between the client and the server. This has an interesting side effect: you'll now be able to start scanning a specific slot by simply providing its slot id as the cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly, while loading RDB, it's not enough to know total number of keys (of course we could approximate number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). The same is kept for O(1) expires computation as well.
## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot, while non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead for finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
|
|
|
/*
|
|
|
|
 * This function increases the size of the main/expires db to match the desired number.
|
|
|
|
* In cluster mode resizes all individual dictionaries for slots that this node owns.
|
|
|
|
*
|
|
|
|
* Based on the parameter `try_expand`, appropriate dict expand API is invoked.
|
|
|
|
* if try_expand is set to 1, `dictTryExpand` is used else `dictExpand`.
|
|
|
|
* The return code is either `DICT_OK`/`DICT_ERR` for both the API(s).
|
|
|
|
 * `DICT_OK` response is for successful expansion. However, a `DICT_ERR` response signifies an allocation failure in the
|
|
|
|
 * `dictTryExpand` call, while for a `dictExpand` call it signifies that no expansion was performed.
|
|
|
|
*/
|
2024-02-05 22:21:35 +07:00
|
|
|
static int dbExpandGeneric(kvstore *kvs, uint64_t db_size, int try_expand) {
|
|
|
|
int ret;
|
2023-10-14 23:58:26 -07:00
|
|
|
if (server.cluster_enabled) {
|
2023-11-22 17:16:06 +08:00
|
|
|
/* We don't know exact number of keys that would fall into each slot, but we can
|
|
|
|
 * approximate it: assuming an even distribution, divide it by the number of slots. */
|
2023-12-07 14:30:48 +08:00
|
|
|
int slots = getMyShardSlotCount();
|
2023-11-22 17:16:06 +08:00
|
|
|
if (slots == 0) return C_OK;
|
|
|
|
db_size = db_size / slots;
|
2024-02-05 22:21:35 +07:00
|
|
|
ret = kvstoreExpand(kvs, db_size, try_expand, dbExpandSkipSlot);
|
2023-10-14 23:58:26 -07:00
|
|
|
} else {
|
2024-02-05 22:21:35 +07:00
|
|
|
ret = kvstoreExpand(kvs, db_size, try_expand, NULL);
|
2023-10-14 23:58:26 -07:00
|
|
|
}
|
2024-02-05 22:21:35 +07:00
|
|
|
|
|
|
|
return ret ? C_OK : C_ERR;
|
|
|
|
}
|
|
|
|
|
2024-04-03 10:02:43 +07:00
|
|
|
int dbExpand(serverDb *db, uint64_t db_size, int try_expand) {
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
return dbExpandGeneric(db->keys, db_size, try_expand);
|
|
|
|
}
|
|
|
|
|
2024-04-03 10:02:43 +07:00
|
|
|
int dbExpandExpires(serverDb *db, uint64_t db_size, int try_expand) {
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
return dbExpandGeneric(db->expires, db_size, try_expand);
|
|
|
|
}
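A hedged usage sketch for the two expand helpers above, e.g. a bulk loader that knows roughly how many keys (and how many volatile keys) are coming; the caller and its parameters are hypothetical:
```
/* Hypothetical caller: pre-size the keyspace and expires tables before a bulk
 * load so most incremental rehashing is avoided. try_expand=1 requests a
 * best-effort resize that may fail (e.g. under memory pressure), in which
 * case the dicts simply keep growing on demand. */
static void examplePreSizeDb(serverDb *db, uint64_t expected_keys, uint64_t expected_volatile_keys) {
    if (dbExpand(db, expected_keys, 1) != C_OK) {
        /* Not fatal: fall back to incremental growth. */
    }
    if (expected_volatile_keys && dbExpandExpires(db, expected_volatile_keys, 1) != C_OK) {
        /* Same here. */
    }
}
```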
|
|
|
|
|
2024-10-16 17:40:11 -07:00
|
|
|
dictEntry *dbFindWithDictIndex(serverDb *db, void *key, int dict_index) {
|
|
|
|
return kvstoreDictFind(db->keys, dict_index, key);
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
}
|
|
|
|
|
2024-04-03 10:02:43 +07:00
|
|
|
dictEntry *dbFind(serverDb *db, void *key) {
|
2024-10-16 17:40:11 -07:00
|
|
|
int dict_index = getKVStoreIndexForKey(key);
|
|
|
|
return dbFindWithDictIndex(db, key, dict_index);
|
|
|
|
}
|
|
|
|
|
|
|
|
dictEntry *dbFindExpiresWithDictIndex(serverDb *db, void *key, int dict_index) {
|
|
|
|
return kvstoreDictFind(db->expires, dict_index, key);
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
}
|
|
|
|
|
2024-04-03 10:02:43 +07:00
|
|
|
dictEntry *dbFindExpires(serverDb *db, void *key) {
|
2024-10-16 17:40:11 -07:00
|
|
|
int dict_index = getKVStoreIndexForKey(key);
|
|
|
|
return dbFindExpiresWithDictIndex(db, key, dict_index);
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
}
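A hedged usage sketch combining the two lookups above; `key` is whatever pointer type the dict hash/compare callbacks expect for key names (typically an sds string):
```
/* Returns 1 if the key exists and has a TTL, 0 otherwise (sketch only). */
static int exampleKeyIsVolatile(serverDb *db, void *key) {
    if (dbFind(db, key) == NULL) return 0;  /* key does not exist at all */
    return dbFindExpires(db, key) != NULL;  /* exists; volatile iff in expires */
}
```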
|
|
|
|
|
2024-04-03 10:02:43 +07:00
|
|
|
unsigned long long dbSize(serverDb *db) {
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
return kvstoreSize(db->keys);
|
|
|
|
}
|
|
|
|
|
2024-04-03 10:02:43 +07:00
|
|
|
unsigned long long dbScan(serverDb *db, unsigned long long cursor, dictScanFunction *scan_cb, void *privdata) {
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
return kvstoreScan(db->keys, cursor, -1, scan_cb, NULL, privdata);
|
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, that are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be relatively small. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced an auxiliary data structure (a binary index tree). With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find the slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSBs of the cursor so it can be passed around between the client and the server. This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as the cursor value. The plan is to not document this as defined behavior, however. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way the DBSIZE operation stays O(1). The same is done for expires, so their count is O(1) as well.
## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from no longer having to maintain linked lists of keys per slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar, as the savings are offset by the extra overhead of finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* The SCAN API now requires 64 bits to store the cursor, even on 32 bit systems, as the slot information is stored in it.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
|
|
|
}
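A hedged sketch of driving dbScan to completion: a callback that only counts entries, and a loop that feeds the returned cursor back in until it wraps to 0. The dictScanFunction signature (privdata, entry) is assumed from dict.h:
```
/* Callback invoked once per dict entry; here it just bumps a counter. */
static void exampleCountEntryCallback(void *privdata, const dictEntry *de) {
    UNUSED(de);
    (*(unsigned long long *)privdata)++;
}

static unsigned long long exampleCountKeysByScanning(serverDb *db) {
    unsigned long long count = 0;
    unsigned long long cursor = 0;
    do {
        cursor = dbScan(db, cursor, exampleCountEntryCallback, &count);
    } while (cursor != 0);
    return count; /* matches dbSize(db) if the db is not mutating concurrently */
}
```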
|
|
|
|
|
2011-03-23 18:09:17 +01:00
|
|
|
/* -----------------------------------------------------------------------------
|
|
|
|
* API to get key arguments from commands
|
|
|
|
* ---------------------------------------------------------------------------*/
|
2020-10-05 17:03:17 +03:00
|
|
|
|
|
|
|
/* Prepare the getKeysResult struct to hold numkeys, either by using the
|
|
|
|
* pre-allocated keysbuf or by allocating a new array on the heap.
|
|
|
|
*
|
|
|
|
* This function must be called at least once before starting to populate
|
|
|
|
* the result, and can be called repeatedly to enlarge the result array.
|
|
|
|
*/
|
2022-01-20 13:05:27 -08:00
|
|
|
keyReference *getKeysPrepareResult(getKeysResult *result, int numkeys) {
|
2024-06-14 08:42:00 -07:00
|
|
|
/* initGetKeysResult initializes keys to NULL, point it to the pre-allocated stack
|
2020-10-05 17:03:17 +03:00
|
|
|
* buffer here. */
|
|
|
|
if (!result->keys) {
|
|
|
|
serverAssert(!result->numkeys);
|
|
|
|
result->keys = result->keysbuf;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Resize if necessary */
|
|
|
|
if (numkeys > result->size) {
|
|
|
|
if (result->keys != result->keysbuf) {
|
|
|
|
/* We're not using a static buffer, just (re)alloc */
|
2022-01-20 13:05:27 -08:00
|
|
|
result->keys = zrealloc(result->keys, numkeys * sizeof(keyReference));
|
2020-10-05 17:03:17 +03:00
|
|
|
} else {
|
|
|
|
/* We are using a static buffer, copy its contents */
|
2022-01-20 13:05:27 -08:00
|
|
|
result->keys = zmalloc(numkeys * sizeof(keyReference));
|
|
|
|
if (result->numkeys) memcpy(result->keys, result->keysbuf, result->numkeys * sizeof(keyReference));
|
2020-10-05 17:03:17 +03:00
|
|
|
}
|
|
|
|
result->size = numkeys;
|
|
|
|
}
|
|
|
|
|
|
|
|
return result->keys;
|
|
|
|
}
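A hedged usage sketch of the helper above, assuming the usual `GETKEYS_RESULT_INIT` initializer and `getKeysFreeResult()` cleanup helper declared in the headers (both assumed here, not shown in this file):
```
static void exampleCollectKeyPositions(void) {
    getKeysResult result = GETKEYS_RESULT_INIT;             /* assumed: keys == NULL, numkeys == 0 */
    keyReference *keys = getKeysPrepareResult(&result, 2);  /* ensure room for 2 keys */

    keys[result.numkeys].pos = 1;   /* argv index of the first key */
    keys[result.numkeys].flags = 0;
    result.numkeys++;
    keys[result.numkeys].pos = 2;   /* argv index of the second key */
    keys[result.numkeys].flags = 0;
    result.numkeys++;

    /* ... consume result.keys / result.numkeys ... */
    getKeysFreeResult(&result); /* assumed helper: frees the heap array if one was allocated */
}
```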
|
2011-03-23 18:09:17 +01:00
|
|
|
|
2022-01-20 13:05:27 -08:00
|
|
|
/* Returns a bitmask with all the flags found in any of the key specs of the command.
|
|
|
|
* The 'inv' argument means we'll return a mask with all flags that are missing in at least one spec. */
|
2024-04-03 23:54:33 +07:00
|
|
|
int64_t getAllKeySpecsFlags(struct serverCommand *cmd, int inv) {
|
2022-01-20 13:05:27 -08:00
|
|
|
int64_t flags = 0;
|
|
|
|
for (int j = 0; j < cmd->key_specs_num; j++) {
|
|
|
|
keySpec *spec = cmd->key_specs + j;
|
|
|
|
flags |= inv ? ~spec->flags : spec->flags;
|
|
|
|
}
|
|
|
|
return flags;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Fetch the keys based on the provided key specs. Returns the number of keys found, or -1 on error.
|
|
|
|
* There are several flags that can be used to modify how this function finds keys in a command.
|
|
|
|
*
|
2022-02-22 01:00:03 -08:00
|
|
|
* GET_KEYSPEC_INCLUDE_NOT_KEYS: Return 'fake' keys as if they were keys.
|
2022-01-20 13:05:27 -08:00
|
|
|
* GET_KEYSPEC_RETURN_PARTIAL: Skips invalid and incomplete keyspecs but returns the keys
|
|
|
|
* found in other valid keyspecs.
|
|
|
|
*/
|
2024-04-03 23:54:33 +07:00
|
|
|
int getKeysUsingKeySpecs(struct serverCommand *cmd, robj **argv, int argc, int search_flags, getKeysResult *result) {
|
2023-07-03 12:45:18 +03:00
|
|
|
int j, i, last, first, step;
|
2022-01-20 13:05:27 -08:00
|
|
|
keyReference *keys;
|
2023-07-06 13:17:25 +08:00
|
|
|
serverAssert(result->numkeys == 0); /* caller should initialize or reset it */
|
2022-01-20 13:05:27 -08:00
|
|
|
|
|
|
|
for (j = 0; j < cmd->key_specs_num; j++) {
|
|
|
|
keySpec *spec = cmd->key_specs + j;
|
|
|
|
serverAssert(spec->begin_search_type != KSPEC_BS_INVALID);
|
2022-02-22 01:00:03 -08:00
|
|
|
/* Skip specs that represent 'fake' keys */
|
|
|
|
if ((spec->flags & CMD_KEY_NOT_KEY) && !(search_flags & GET_KEYSPEC_INCLUDE_NOT_KEYS)) {
|
2022-01-20 13:05:27 -08:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
first = 0;
|
|
|
|
if (spec->begin_search_type == KSPEC_BS_INDEX) {
|
|
|
|
first = spec->bs.index.pos;
|
|
|
|
} else if (spec->begin_search_type == KSPEC_BS_KEYWORD) {
|
|
|
|
int start_index =
|
|
|
|
spec->bs.keyword.startfrom > 0 ? spec->bs.keyword.startfrom : argc + spec->bs.keyword.startfrom;
|
|
|
|
int end_index = spec->bs.keyword.startfrom > 0 ? argc - 1 : 1;
|
|
|
|
for (i = start_index; i != end_index; i = start_index <= end_index ? i + 1 : i - 1) {
|
|
|
|
if (i >= argc || i < 1) break;
|
|
|
|
if (!strcasecmp((char *)argv[i]->ptr, spec->bs.keyword.keyword)) {
|
|
|
|
first = i + 1;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
/* keyword not found */
|
|
|
|
if (!first) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
/* unknown spec */
|
|
|
|
goto invalid_spec;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (spec->find_keys_type == KSPEC_FK_RANGE) {
|
|
|
|
step = spec->fk.range.keystep;
|
|
|
|
if (spec->fk.range.lastkey >= 0) {
|
|
|
|
last = first + spec->fk.range.lastkey;
|
|
|
|
} else {
|
|
|
|
if (!spec->fk.range.limit) {
|
|
|
|
last = argc + spec->fk.range.lastkey;
|
|
|
|
} else {
|
|
|
|
serverAssert(spec->fk.range.lastkey == -1);
|
|
|
|
last = first + ((argc - first) / spec->fk.range.limit + spec->fk.range.lastkey);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} else if (spec->find_keys_type == KSPEC_FK_KEYNUM) {
|
|
|
|
step = spec->fk.keynum.keystep;
|
|
|
|
long long numkeys;
|
|
|
|
if (spec->fk.keynum.keynumidx >= argc) goto invalid_spec;
|
|
|
|
|
|
|
|
sds keynum_str = argv[first + spec->fk.keynum.keynumidx]->ptr;
|
|
|
|
if (!string2ll(keynum_str, sdslen(keynum_str), &numkeys) || numkeys < 0) {
|
|
|
|
/* Unable to parse the numkeys argument or it was invalid */
|
|
|
|
goto invalid_spec;
|
|
|
|
}
|
|
|
|
|
|
|
|
first += spec->fk.keynum.firstkey;
|
|
|
|
last = first + (int)numkeys - 1;
|
|
|
|
} else {
|
|
|
|
/* unknown spec */
|
|
|
|
goto invalid_spec;
|
|
|
|
}
|
|
|
|
|
|
|
|
int count = ((last - first) + 1);
|
2023-07-03 12:45:18 +03:00
|
|
|
keys = getKeysPrepareResult(result, result->numkeys + count);
|
2022-01-20 13:05:27 -08:00
|
|
|
|
|
|
|
/* First or last is out of bounds, which indicates a syntax error */
|
|
|
|
if (last >= argc || last < first || first >= argc) {
|
|
|
|
goto invalid_spec;
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = first; i <= last; i += step) {
|
|
|
|
if (i >= argc || i < first) {
|
|
|
|
/* Modules commands, and standard commands with a not fixed number
|
|
|
|
* of arguments (negative arity parameter) do not have dispatch
|
|
|
|
* time arity checks, so we need to handle the case where the user
|
|
|
|
* passed an invalid number of arguments here. In this case we
|
|
|
|
* return no keys and expect the command implementation to report
|
|
|
|
* an arity or syntax error. */
|
|
|
|
if (cmd->flags & CMD_MODULE || cmd->arity < 0) {
|
|
|
|
continue;
|
|
|
|
} else {
|
2024-05-08 15:17:32 -04:00
|
|
|
serverPanic("%s built-in command declared keys positions"
|
|
|
|
" not matching the arity requirements.",
|
|
|
|
server.extended_redis_compat ? "Redis" : "Valkey");
|
2022-01-20 13:05:27 -08:00
|
|
|
}
|
|
|
|
}
|
2023-07-03 12:45:18 +03:00
|
|
|
keys[result->numkeys].pos = i;
|
|
|
|
keys[result->numkeys].flags = spec->flags;
|
|
|
|
result->numkeys++;
|
2022-01-20 13:05:27 -08:00
|
|
|
}
|
|
|
|
|
2022-01-30 11:00:03 +01:00
|
|
|
/* Handle incomplete specs (only after we added the current spec
|
|
|
|
* to `keys`, just in case GET_KEYSPEC_RETURN_PARTIAL was given) */
|
|
|
|
if (spec->flags & CMD_KEY_INCOMPLETE) {
|
|
|
|
goto invalid_spec;
|
|
|
|
}
|
|
|
|
|
2022-01-20 13:05:27 -08:00
|
|
|
/* Done with this spec */
|
|
|
|
continue;
|
|
|
|
|
|
|
|
invalid_spec:
|
|
|
|
if (search_flags & GET_KEYSPEC_RETURN_PARTIAL) {
|
|
|
|
continue;
|
|
|
|
} else {
|
|
|
|
result->numkeys = 0;
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-07-03 12:45:18 +03:00
|
|
|
return result->numkeys;
|
2022-01-20 13:05:27 -08:00
|
|
|
}
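As a concrete trace of the KSPEC_FK_KEYNUM branch above, using the ZUNION spec described in the key-specs commit message further down (begin_search of type index with pos=1, find_keys of type keynum with [keynumidx=0, firstkey=1, keystep=1]):
```
/*
 * argv = {"ZUNION", "3", "a", "b", "c"},  argc = 5
 *
 * begin_search (KSPEC_BS_INDEX, pos = 1)          -> first = 1
 * keynum_str = argv[first + keynumidx] = argv[1]  -> numkeys = 3
 * first += fk.keynum.firstkey (1)                 -> first = 2
 * last   = first + numkeys - 1                    -> last  = 4
 * step   = 1                                      -> keys at argv[2], argv[3], argv[4] ("a", "b", "c")
 */
```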
|
|
|
|
|
|
|
|
/* Return all the arguments that are keys in the command passed via argc / argv.
|
|
|
|
* This function will eventually replace getKeysFromCommand.
|
|
|
|
*
|
|
|
|
* The command returns the positions of all the key arguments inside the array,
|
|
|
|
* so the actual return value is a heap allocated array of integers. The
|
|
|
|
* length of the array is returned by reference into *numkeys.
|
|
|
|
*
|
|
|
|
* Along with the position, this command also returns the flags that are
|
2024-04-09 01:24:03 -07:00
|
|
|
* associated with how the server will access the key.
|
2022-01-20 13:05:27 -08:00
|
|
|
*
|
2024-04-03 23:54:33 +07:00
|
|
|
* 'cmd' must point to the corresponding entry in the serverCommand
|
2022-02-08 10:01:35 +02:00
|
|
|
* table, according to the command name in argv[0]. */
|
2024-04-03 23:54:33 +07:00
|
|
|
int getKeysFromCommandWithSpecs(struct serverCommand *cmd,
|
|
|
|
robj **argv,
|
|
|
|
int argc,
|
|
|
|
int search_flags,
|
|
|
|
getKeysResult *result) {
|
2022-02-22 01:00:03 -08:00
|
|
|
/* The command has at least one key-spec not marked as NOT_KEY */
|
|
|
|
int has_keyspec = (getAllKeySpecsFlags(cmd, 1) & CMD_KEY_NOT_KEY);
|
2022-02-08 10:01:35 +02:00
|
|
|
/* The command has at least one key-spec marked as VARIABLE_FLAGS */
|
|
|
|
int has_varflags = (getAllKeySpecsFlags(cmd, 0) & CMD_KEY_VARIABLE_FLAGS);
|
|
|
|
|
|
|
|
/* We prefer key-specs if there are any, and their flags are reliable. */
|
|
|
|
if (has_keyspec && !has_varflags) {
|
|
|
|
int ret = getKeysUsingKeySpecs(cmd, argv, argc, search_flags, result);
|
|
|
|
if (ret >= 0) return ret;
|
|
|
|
/* If the specs returned with an error (probably an INVALID or INCOMPLETE spec),
|
|
|
|
* fallback to the callback method. */
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Resort to getkeys callback methods. */
|
2022-01-20 13:05:27 -08:00
|
|
|
if (cmd->flags & CMD_MODULE_GETKEYS) return moduleGetCommandKeysViaAPI(cmd, argv, argc, result);
|
2022-02-08 10:01:35 +02:00
|
|
|
|
|
|
|
/* We use native getkeys as a last resort, since not all these native getkeys provide
|
|
|
|
* flags properly (only the ones that correspond to INVALID, INCOMPLETE or VARIABLE_FLAGS do). */
|
|
|
|
if (cmd->getkeys_proc) return cmd->getkeys_proc(cmd, argv, argc, result);
|
|
|
|
return 0;
|
2022-01-20 13:05:27 -08:00
|
|
|
}
|
|
|
|
|
|
|
|
/* This function is a quick sanity check that returns non-zero if the command may have keys. */
|
2024-04-03 23:54:33 +07:00
|
|
|
int doesCommandHaveKeys(struct serverCommand *cmd) {
|
2022-05-10 13:56:12 +02:00
|
|
|
return cmd->getkeys_proc || /* has getkeys_proc (non modules) */
|
2022-01-20 13:05:27 -08:00
|
|
|
(cmd->flags & CMD_MODULE_GETKEYS) || /* module with GETKEYS */
|
2022-02-22 01:00:03 -08:00
|
|
|
(getAllKeySpecsFlags(cmd, 1) & CMD_KEY_NOT_KEY); /* has at least one key-spec not marked as NOT_KEY */
|
|
|
|
}
|
|
|
|
|
2024-04-09 01:24:03 -07:00
|
|
|
/* A simplified channel spec table that contains all of the commands
|
2022-02-22 01:00:03 -08:00
|
|
|
* and which channels they have and how they are accessed. */
|
|
|
|
typedef struct ChannelSpecs {
|
2024-04-03 23:33:33 +07:00
|
|
|
serverCommandProc *proc; /* Command procedure to match against */
|
2022-02-22 01:00:03 -08:00
|
|
|
uint64_t flags; /* CMD_CHANNEL_* flags for this command */
|
|
|
|
int start; /* The initial position of the first channel */
|
|
|
|
int count; /* The number of channels, or -1 if all remaining
|
|
|
|
* arguments are channels. */
|
|
|
|
} ChannelSpecs;
|
|
|
|
|
|
|
|
ChannelSpecs commands_with_channels[] = {
|
|
|
|
{subscribeCommand, CMD_CHANNEL_SUBSCRIBE, 1, -1},
|
|
|
|
{ssubscribeCommand, CMD_CHANNEL_SUBSCRIBE, 1, -1},
|
|
|
|
{unsubscribeCommand, CMD_CHANNEL_UNSUBSCRIBE, 1, -1},
|
|
|
|
{sunsubscribeCommand, CMD_CHANNEL_UNSUBSCRIBE, 1, -1},
|
|
|
|
{psubscribeCommand, CMD_CHANNEL_PATTERN | CMD_CHANNEL_SUBSCRIBE, 1, -1},
|
|
|
|
{punsubscribeCommand, CMD_CHANNEL_PATTERN | CMD_CHANNEL_UNSUBSCRIBE, 1, -1},
|
|
|
|
{publishCommand, CMD_CHANNEL_PUBLISH, 1, 1},
|
|
|
|
{spublishCommand, CMD_CHANNEL_PUBLISH, 1, 1},
|
|
|
|
{NULL, 0} /* Terminator. */
|
|
|
|
};
|
|
|
|
|
|
|
|
/* Returns 1 if the command may access any channels matched by the flags
|
|
|
|
* argument. */
|
2024-04-03 23:54:33 +07:00
|
|
|
int doesCommandHaveChannelsWithFlags(struct serverCommand *cmd, int flags) {
|
2022-02-22 01:00:03 -08:00
|
|
|
/* If a module declares get channels, we are just going to assume it
|
|
|
|
* has channels. This API is allowed to return false positives. */
|
|
|
|
if (cmd->flags & CMD_MODULE_GETCHANNELS) {
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
for (ChannelSpecs *spec = commands_with_channels; spec->proc != NULL; spec += 1) {
|
|
|
|
if (cmd->proc == spec->proc) {
|
|
|
|
return !!(spec->flags & flags);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
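A hedged usage sketch of the check above, e.g. asking whether a command may touch pub/sub channels at all, built only from the flag bits that appear in the table:
```
static int exampleCommandTouchesChannels(struct serverCommand *cmd) {
    return doesCommandHaveChannelsWithFlags(cmd, CMD_CHANNEL_PUBLISH |
                                                 CMD_CHANNEL_SUBSCRIBE |
                                                 CMD_CHANNEL_UNSUBSCRIBE |
                                                 CMD_CHANNEL_PATTERN);
}
```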
|
|
|
|
|
|
|
|
/* Return all the arguments that are channels in the command passed via argc / argv.
|
|
|
|
* This function behaves similar to getKeysFromCommandWithSpecs, but with channels
|
|
|
|
* instead of keys.
|
|
|
|
*
|
|
|
|
* The command returns the positions of all the channel arguments inside the array,
|
|
|
|
* so the actual return value is a heap allocated array of integers. The
|
|
|
|
* length of the array is returned by reference into *numkeys.
|
|
|
|
*
|
|
|
|
* Along with the position, this command also returns the flags that are
|
2024-04-09 01:24:03 -07:00
|
|
|
* associated with how the server will access the channel.
|
2022-02-22 01:00:03 -08:00
|
|
|
*
|
2024-04-03 23:54:33 +07:00
|
|
|
* 'cmd' must point to the corresponding entry in the serverCommand
|
2022-02-22 01:00:03 -08:00
|
|
|
* table, according to the command name in argv[0]. */
|
2024-04-03 23:54:33 +07:00
|
|
|
int getChannelsFromCommand(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2022-02-22 01:00:03 -08:00
|
|
|
keyReference *keys;
|
|
|
|
/* If a module declares get channels, use that. */
|
|
|
|
if (cmd->flags & CMD_MODULE_GETCHANNELS) {
|
|
|
|
return moduleGetCommandChannelsViaAPI(cmd, argv, argc, result);
|
|
|
|
}
|
|
|
|
/* Otherwise check the channel spec table */
|
|
|
|
for (ChannelSpecs *spec = commands_with_channels; spec->proc != NULL; spec += 1) {
|
|
|
|
if (cmd->proc == spec->proc) {
|
|
|
|
int start = spec->start;
|
|
|
|
int stop = (spec->count == -1) ? argc : start + spec->count;
|
|
|
|
if (stop > argc) stop = argc;
|
|
|
|
int count = 0;
|
|
|
|
keys = getKeysPrepareResult(result, stop - start);
|
|
|
|
for (int i = start; i < stop; i++) {
|
|
|
|
keys[count].pos = i;
|
|
|
|
keys[count++].flags = spec->flags;
|
|
|
|
}
|
|
|
|
result->numkeys = count;
|
|
|
|
return count;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return 0;
|
2022-01-20 13:05:27 -08:00
|
|
|
}
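Two concrete traces of how the spec table above resolves channel positions in this function:
```
/*
 * PUBLISH news hello   -> {publishCommand,   CMD_CHANNEL_PUBLISH,   start=1, count=1}
 *                         stop = 2, channels: argv[1] ("news")
 *
 * SUBSCRIBE a b c      -> {subscribeCommand, CMD_CHANNEL_SUBSCRIBE, start=1, count=-1}
 *                         stop = argc (4), channels: argv[1..3] ("a", "b", "c")
 */
```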
|
|
|
|
|
2014-03-10 15:24:38 +01:00
|
|
|
/* The base case is to use the keys position as given in the command table
|
A better approach for COMMAND INFO for movablekeys commands (#8324)
Fix #7297
The problem:
Today, there is no way for a client library or app to know the key name indexes for commands such as
ZUNIONSTORE/EVAL and others with "numkeys", since COMMAND INFO returns no useful info for them.
For cluster-aware redis clients, this requires to 'patch' the client library code specifically for each of these commands or to
resolve each execution of these commands with COMMAND GETKEYS.
The solution:
Introducing key specs other than the legacy "range" (first,last,step)
The 8th element of the command info array, if exists, holds an array of key specs. The array may be empty, which indicates
the command doesn't take any key arguments or may contain one or more key-specs, each one may lead to the discovery
of 0 or more key arguments.
A client library that doesn't support this key-spec feature will keep using the first,last,step and movablekeys flag which will
obviously remain unchanged.
A client that supports this key-specs feature needs only to look at the key-specs array. If it finds an unrecognized spec, it
must resort to using COMMAND GETKEYS if it wishes to get all key name arguments, but if all it needs is one key in order
to know which cluster node to use, then maybe another spec (if the command has several) can supply that, and there's no
need to use GETKEYS.
Each spec is an array of arguments, first one is the spec name, the second is an array of flags, and the third is an array
containing details about the spec (specific meaning for each spec type)
The initial flags we support are "read" and "write" indicating if the keys that this key-spec finds are used for read or for write.
clients should ignore any unfamiliar flags.
In order to easily find the positions of keys in a given array of args we introduce keys specs. There are two logical steps of
key specs:
1. `start_search`: Given an array of args, indicate where we should start searching for keys
2. `find_keys`: Given the output of start_search and an array of args, indicate all possible indices of keys.
### start_search step specs
- `index`: specify an argument index explicitly
- `index`: 0 based index (1 means the first command argument)
- `keyword`: specify a string to match in `argv`. We should start searching for keys just after the keyword appears.
- `keyword`: the string to search for
- `start_search`: an index from which to start the keyword search (can be negative, which means to search from the end)
Examples:
- `SET` has start_search of type `index` with value `1`
- `XREAD` has start_search of type `keyword` with value `[“STREAMS”,1]`
- `MIGRATE` has start_search of type `keyword` with value `[“KEYS”,-2]`
### find_keys step specs
- `range`: specify `[count, step, limit]`.
- `lastkey`: index of the last key. relative to the index returned from begin_search. -1 indicating till the last argument, -2 one before the last
- `step`: how many args should we skip after finding a key, in order to find the next one
- `limit`: if count is -1, we use limit to stop the search by a factor. 0 and 1 mean no limit. 2 means ½ of the remaining args, 3 means ⅓, and so on.
- “keynum”: specify `[keynum_index, first_key_index, step]`.
- `keynum_index`: is relative to the return of the `start_search` spec.
- `first_key_index`: is relative to `keynum_index`.
- `step`: how many args should we skip after finding a key, in order to find the next one
Examples:
- `SET` has `range` of `[0,1,0]`
- `MSET` has `range` of `[-1,2,0]`
- `XREAD` has `range` of `[-1,1,2]`
- `ZUNION` has `start_search` of type `index` with value `1` and `find_keys` of type `keynum` with value `[0,1,1]`
- `AI.DAGRUN` has `start_search` of type `keyword` with value `[“LOAD“,1]` and `find_keys` of type `keynum` with value
`[0,1,1]` (see https://oss.redislabs.com/redisai/master/commands/#aidagrun)
Note: this solution is not perfect as the module writers can come up with anything, but at least we will be able to find the key
args of the vast majority of commands.
If one of the above specs can’t describe the key positions, the module writer can always fall back to the `getkeys-api` option.
Some keys cannot be found easily (`KEYS` in `MIGRATE`: Imagine the argument for `AUTH` is the string “KEYS” - we will
start searching in the wrong index).
The guarantee is that the specs may be incomplete (`incomplete` will be specified in the spec to denote that) but we never
report false information (assuming the command syntax is correct).
For `MIGRATE` we start searching from the end - `startfrom=-1` - and if one of the keys is actually called "keys" we will
report only a subset of all keys - hence the `incomplete` flag.
Some `incomplete` specs can be completely empty (i.e. UNKNOWN begin_search) which should tell the client that
COMMAND GETKEYS (or any other way to get the keys) must be used (Example: For `SORT` there is no way to describe
the STORE keyword spec, as the word "store" can appear anywhere in the command).
We will expose these key specs in the `COMMAND` command so that clients can learn, on startup, where the keys are for
all commands instead of holding hardcoded tables or use `COMMAND GETKEYS` in runtime.
Comments:
1. Redis doesn't internally use the new specs, they are only used for COMMAND output.
2. In order to support the current COMMAND INFO format (reply array indices 4, 5, 6) we created a synthetic range, called
legacy_range, that, if possible, is built according to the new specs.
3. Redis currently uses only getkeys_proc or the legacy_range to get the keys indices (in COMMAND GETKEYS for
example).
"incomplete" specs:
the commands we have issues with are MIGRATE, STRALGO, and SORT
for MIGRATE, because the token KEYS, if it exists, must be the last token, we can search in reverse. If one of the keys is
actually the string "keys" we will return just a subset of the keys (hence, it's "incomplete")
for SORT and STRALGO we can't use this heuristic (the keys can be anywhere in the command) and therefore we added a
key spec that is both "incomplete" and of "unknown type"
if a client encounters an "incomplete" spec it means that it must find a different way (either COMMAND GETKEYS or have
its own parser) to retrieve the keys.
please note that all commands, apart from the three mentioned above, have "complete" key specs
2021-09-15 10:10:29 +02:00
|
|
|
* (firstkey, lastkey, step).
|
|
|
|
* This function works only on command with the legacy_range_key_spec,
|
2022-01-20 13:05:27 -08:00
|
|
|
* all other commands should be handled by getkeys_proc.
|
|
|
|
*
|
|
|
|
* If the commands keyspec is incomplete, no keys will be returned, and the provided
|
|
|
|
* keys function should be called instead.
|
|
|
|
*
|
|
|
|
* NOTE: This function does not guarantee populating the flags for
|
|
|
|
* the keys, in order to get flags you should use getKeysUsingKeySpecs. */
|
2024-04-03 23:54:33 +07:00
|
|
|
int getKeysUsingLegacyRangeSpec(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2022-01-20 13:05:27 -08:00
|
|
|
int j, i = 0, last, first, step;
|
|
|
|
keyReference *keys;
|
2015-07-27 09:41:48 +02:00
|
|
|
UNUSED(argv);
|
2011-03-23 18:09:17 +01:00
|
|
|
|
A better approach for COMMAND INFO for movablekeys commands (#8324)
Fix #7297
The problem:
Today, there is no way for a client library or app to know the key name indexes for commands such as
ZUNIONSTORE/EVAL and others with "numkeys", since COMMAND INFO returns no useful info for them.
For cluster-aware redis clients, this requires to 'patch' the client library code specifically for each of these commands or to
resolve each execution of these commands with COMMAND GETKEYS.
The solution:
Introducing key specs other than the legacy "range" (first,last,step)
The 8th element of the command info array, if exists, holds an array of key specs. The array may be empty, which indicates
the command doesn't take any key arguments or may contain one or more key-specs, each one may lead to the discovery
of 0 or more key arguments.
A client library that doesn't support this key-spec feature will keep using the first,last,step and movablekeys flag which will
obviously remain unchanged.
A client that supports this key-specs feature needs only to look at the key-specs array. If it finds an unrecognized spec, it
must resort to using COMMAND GETKEYS if it wishes to get all key name arguments, but if all it needs is one key in order
to know which cluster node to use, then maybe another spec (if the command has several) can supply that, and there's no
need to use GETKEYS.
Each spec is an array of arguments, first one is the spec name, the second is an array of flags, and the third is an array
containing details about the spec (specific meaning for each spec type)
The initial flags we support are "read" and "write" indicating if the keys that this key-spec finds are used for read or for write.
clients should ignore any unfamiliar flags.
In order to easily find the positions of keys in a given array of args we introduce keys specs. There are two logical steps of
key specs:
1. `start_search`: Given an array of args, indicate where we should start searching for keys
2. `find_keys`: Given the output of start_search and an array of args, indicate all possible indices of keys.
### start_search step specs
- `index`: specify an argument index explicitly
- `index`: 0 based index (1 means the first command argument)
- `keyword`: specify a string to match in `argv`. We should start searching for keys just after the keyword appears.
- `keyword`: the string to search for
- `start_search`: an index from which to start the keyword search (can be negative, which means to search from the end)
Examples:
- `SET` has start_search of type `index` with value `1`
- `XREAD` has start_search of type `keyword` with value `[“STREAMS”,1]`
- `MIGRATE` has start_search of type `keyword` with value `[“KEYS”,-2]`
### find_keys step specs
- `range`: specify `[count, step, limit]`.
- `lastkey`: index of the last key. relative to the index returned from begin_search. -1 indicating till the last argument, -2 one before the last
- `step`: how many args should we skip after finding a key, in order to find the next one
- `limit`: if count is -1, we use limit to stop the search by a factor. 0 and 1 mean no limit. 2 means ½ of the remaining args, 3 means ⅓, and so on.
- “keynum”: specify `[keynum_index, first_key_index, step]`.
- `keynum_index`: is relative to the return of the `start_search` spec.
- `first_key_index`: is relative to `keynum_index`.
- `step`: how many args should we skip after finding a key, in order to find the next one
Examples:
- `SET` has `range` of `[0,1,0]`
- `MSET` has `range` of `[-1,2,0]`
- `XREAD` has `range` of `[-1,1,2]`
- `ZUNION` has `start_search` of type `index` with value `1` and `find_keys` of type `keynum` with value `[0,1,1]`
- `AI.DAGRUN` has `start_search` of type `keyword` with value `[“LOAD“,1]` and `find_keys` of type `keynum` with value
`[0,1,1]` (see https://oss.redislabs.com/redisai/master/commands/#aidagrun)
Note: this solution is not perfect as the module writers can come up with anything, but at least we will be able to find the key
args of the vast majority of commands.
If one of the above specs can’t describe the key positions, the module writer can always fall back to the `getkeys-api` option.
Some keys cannot be found easily (`KEYS` in `MIGRATE`: Imagine the argument for `AUTH` is the string “KEYS” - we will
start searching in the wrong index).
The guarantee is that the specs may be incomplete (`incomplete` will be specified in the spec to denote that) but we never
report false information (assuming the command syntax is correct).
For `MIGRATE` we start searching from the end - `startfrom=-1` - and if one of the keys is actually called "keys" we will
report only a subset of all keys - hence the `incomplete` flag.
Some `incomplete` specs can be completely empty (i.e. UNKNOWN begin_search) which should tell the client that
COMMAND GETKEYS (or any other way to get the keys) must be used (Example: For `SORT` there is no way to describe
the STORE keyword spec, as the word "store" can appear anywhere in the command).
We will expose these key specs in the `COMMAND` command so that clients can learn, on startup, where the keys are for
all commands instead of holding hardcoded tables or use `COMMAND GETKEYS` in runtime.
Comments:
1. Redis doesn't internally use the new specs, they are only used for COMMAND output.
2. In order to support the current COMMAND INFO format (reply array indices 4, 5, 6) we created a synthetic range, called
legacy_range, that, if possible, is built according to the new specs.
3. Redis currently uses only getkeys_proc or the legacy_range to get the keys indices (in COMMAND GETKEYS for
example).
"incomplete" specs:
the commands we have issues with are MIGRATE, STRALGO, and SORT
for MIGRATE, because the token KEYS, if it exists, must be the last token, we can search in reverse. If one of the keys is
actually the string "keys" we will return just a subset of the keys (hence, it's "incomplete")
for SORT and STRALGO we can't use this heuristic (the keys can be anywhere in the command) and therefore we added a
key spec that is both "incomplete" and of "unknown type"
if a client encounters an "incomplete" spec it means that it must find a different way (either COMMAND GETKEYS or have
its own parser) to retrieve the keys.
please note that all commands, apart from the three mentioned above, have "complete" key specs
2021-09-15 10:10:29 +02:00
|
|
|
if (cmd->legacy_range_key_spec.begin_search_type == KSPEC_BS_INVALID) {
|
2020-10-05 17:03:17 +03:00
|
|
|
result->numkeys = 0;
|
|
|
|
return 0;
|
2011-03-23 18:09:17 +01:00
|
|
|
}
|
2017-04-19 16:17:08 +02:00
|
|
|
|
A better approach for COMMAND INFO for movablekeys commands (#8324)
Fix #7297
The problem:
Today, there is no way for a client library or app to know the key name indexes for commands such as
ZUNIONSTORE/EVAL and others with "numkeys", since COMMAND INFO returns no useful info for them.
For cluster-aware redis clients, this requires to 'patch' the client library code specifically for each of these commands or to
resolve each execution of these commands with COMMAND GETKEYS.
The solution:
Introducing key specs other than the legacy "range" (first,last,step)
The 8th element of the command info array, if exists, holds an array of key specs. The array may be empty, which indicates
the command doesn't take any key arguments or may contain one or more key-specs, each one may lead to the discovery
of 0 or more key arguments.
A client library that doesn't support this key-spec feature will keep using the first,last,step and movablekeys flag which will
obviously remain unchanged.
A client that supports this key-specs feature needs only to look at the key-specs array. If it finds an unrecognized spec, it
must resort to using COMMAND GETKEYS if it wishes to get all key name arguments, but if all it needs is one key in order
to know which cluster node to use, then maybe another spec (if the command has several) can supply that, and there's no
need to use GETKEYS.
Each spec is an array of arguments, first one is the spec name, the second is an array of flags, and the third is an array
containing details about the spec (specific meaning for each spec type)
The initial flags we support are "read" and "write" indicating if the keys that this key-spec finds are used for read or for write.
clients should ignore any unfamiliar flags.
In order to easily find the positions of keys in a given array of args we introduce keys specs. There are two logical steps of
key specs:
1. `start_search`: Given an array of args, indicate where we should start searching for keys
2. `find_keys`: Given the output of start_search and an array of args, indicate all possible indices of keys.
### start_search step specs
- `index`: specify an argument index explicitly
- `index`: 0 based index (1 means the first command argument)
- `keyword`: specify a string to match in `argv`. We should start searching for keys just after the keyword appears.
- `keyword`: the string to search for
- `start_search`: an index from which to start the keyword search (can be negative, which means to search from the end)
Examples:
- `SET` has start_search of type `index` with value `1`
- `XREAD` has start_search of type `keyword` with value `[“STREAMS”,1]`
- `MIGRATE` has start_search of type `keyword` with value `[“KEYS”,-2]`
### find_keys step specs
- `range`: specify `[count, step, limit]`.
- `lastkey`: index of the last key. relative to the index returned from begin_search. -1 indicating till the last argument, -2 one before the last
- `step`: how many args should we skip after finding a key, in order to find the next one
- `limit`: if count is -1, we use limit to stop the search by a factor. 0 and 1 mean no limit. 2 means ½ of the remaining args, 3 means ⅓, and so on.
- “keynum”: specify `[keynum_index, first_key_index, step]`.
- `keynum_index`: is relative to the return of the `start_search` spec.
- `first_key_index`: is relative to `keynum_index`.
- `step`: how many args should we skip after finding a key, in order to find the next one
Examples:
- `SET` has `range` of `[0,1,0]`
- `MSET` has `range` of `[-1,2,0]`
- `XREAD` has `range` of `[-1,1,2]`
- `ZUNION` has `start_search` of type `index` with value `1` and `find_keys` of type `keynum` with value `[0,1,1]`
- `AI.DAGRUN` has `start_search` of type `keyword` with value `[“LOAD“,1]` and `find_keys` of type `keynum` with value
`[0,1,1]` (see https://oss.redislabs.com/redisai/master/commands/#aidagrun)
Note: this solution is not perfect as the module writers can come up with anything, but at least we will be able to find the key
args of the vast majority of commands.
If one of the above specs can’t describe the key positions, the module writer can always fall back to the `getkeys-api` option.
Some keys cannot be found easily (`KEYS` in `MIGRATE`: Imagine the argument for `AUTH` is the string “KEYS” - we will
start searching in the wrong index).
The guarantee is that the specs may be incomplete (`incomplete` will be specified in the spec to denote that) but we never
report false information (assuming the command syntax is correct).
For `MIGRATE` we start searching from the end - `startfrom=-1` - and if one of the keys is actually called "keys" we will
report only a subset of all keys - hence the `incomplete` flag.
Some `incomplete` specs can be completely empty (i.e. UNKNOWN begin_search) which should tell the client that
COMMAND GETKEYS (or any other way to get the keys) must be used (Example: For `SORT` there is no way to describe
the STORE keyword spec, as the word "store" can appear anywhere in the command).
We will expose these key specs in the `COMMAND` command so that clients can learn, on startup, where the keys are for
all commands instead of holding hardcoded tables or use `COMMAND GETKEYS` in runtime.
Comments:
1. Redis doesn't internally use the new specs, they are only used for COMMAND output.
2. In order to support the current COMMAND INFO format (reply array indices 4, 5, 6) we created a synthetic range, called
legacy_range, that, if possible, is built according to the new specs.
3. Redis currently uses only getkeys_proc or the legacy_range to get the keys indices (in COMMAND GETKEYS for
example).
"incomplete" specs:
the commands we have issues with are MIGRATE, STRALGO, and SORT
for MIGRATE, because the token KEYS, if it exists, must be the last token, we can search in reverse. If one of the keys is
actually the string "keys" we will return just a subset of the keys (hence, it's "incomplete")
for SORT and STRALGO we can't use this heuristic (the keys can be anywhere in the command) and therefore we added a
key spec that is both "incomplete" and of "unknown type"
if a client encounters an "incomplete" spec it means that it must find a different way (either COMMAND GETKEYS or have
its own parser) to retrieve the keys.
please note that all commands, apart from the three mentioned above, have "complete" key specs
2021-09-15 10:10:29 +02:00
|
|
|
first = cmd->legacy_range_key_spec.bs.index.pos;
|
|
|
|
last = cmd->legacy_range_key_spec.fk.range.lastkey;
|
|
|
|
if (last >= 0) last += first;
|
|
|
|
step = cmd->legacy_range_key_spec.fk.range.keystep;
|
|
|
|
|
2011-03-23 18:09:17 +01:00
|
|
|
if (last < 0) last = argc + last;
|
2020-02-05 18:06:33 +02:00
|
|
|
|
A better approach for COMMAND INFO for movablekeys commands (#8324)
Fix #7297
The problem:
Today, there is no way for a client library or app to know the key name indexes for commands such as
ZUNIONSTORE/EVAL and others with "numkeys", since COMMAND INFO returns no useful info for them.
For cluster-aware redis clients, this requires to 'patch' the client library code specifically for each of these commands or to
resolve each execution of these commands with COMMAND GETKEYS.
The solution:
Introducing key specs other than the legacy "range" (first,last,step)
The 8th element of the command info array, if exists, holds an array of key specs. The array may be empty, which indicates
the command doesn't take any key arguments or may contain one or more key-specs, each one may lead to the discovery
of 0 or more key arguments.
A client library that doesn't support this key-spec feature will keep using the first,last,step and movablekeys flag which will
obviously remain unchanged.
A client that supports this key-specs feature needs only to look at the key-specs array. If it finds an unrecognized spec, it
must resort to using COMMAND GETKEYS if it wishes to get all key name arguments, but if all it needs is one key in order
to know which cluster node to use, then maybe another spec (if the command has several) can supply that, and there's no
need to use GETKEYS.
Each spec is an array of arguments, first one is the spec name, the second is an array of flags, and the third is an array
containing details about the spec (specific meaning for each spec type)
The initial flags we support are "read" and "write" indicating if the keys that this key-spec finds are used for read or for write.
Clients should ignore any unfamiliar flags.
In order to easily find the positions of keys in a given array of args we introduce key specs. There are two logical steps of
key specs:
1. `start_search`: Given an array of args, indicate where we should start searching for keys
2. `find_keys`: Given the output of start_search and an array of args, indicate all possible indices of keys.
### start_search step specs
- `index`: specify an argument index explicitly
- `index`: 0-based index (1 means the first command argument)
- `keyword`: specify a string to match in `argv`. We should start searching for keys just after the keyword appears.
- `keyword`: the string to search for
- `start_search`: an index from which to start the keyword search (can be negative, which means to search from the end)
Examples:
- `SET` has start_search of type `index` with value `1`
- `XREAD` has start_search of type `keyword` with value `[“STREAMS”,1]`
- `MIGRATE` has start_search of type `keyword` with value `[“KEYS”,-2]`
### find_keys step specs
- `range`: specify `[count, step, limit]`.
- `lastkey`: index of the last key, relative to the index returned from begin_search. -1 indicates up to the last argument, -2 one before the last
- `step`: how many args should we skip after finding a key, in order to find the next one
- `limit`: if count is -1, we use limit to stop the search by a factor. 0 and 1 mean no limit. 2 means ½ of the remaining args, 3 means ⅓, and so on.
- `keynum`: specify `[keynum_index, first_key_index, step]`.
- `keynum_index`: is relative to the return of the `start_search` spec.
- `first_key_index`: is relative to `keynum_index`.
- `step`: how many args should we skip after finding a key, in order to find the next one
Examples:
- `SET` has `range` of `[0,1,0]`
- `MSET` has `range` of `[-1,2,0]`
- `XREAD` has `range` of `[-1,1,2]`
- `ZUNION` has `start_search` of type `index` with value `1` and `find_keys` of type `keynum` with value `[0,1,1]`
- `AI.DAGRUN` has `start_search` of type `keyword` with value `[“LOAD“,1]` and `find_keys` of type `keynum` with value
`[0,1,1]` (see https://oss.redislabs.com/redisai/master/commands/#aidagrun)
Note: this solution is not perfect as the module writers can come up with anything, but at least we will be able to find the key
args of the vast majority of commands.
If one of the above specs can’t describe the key positions, the module writer can always fall back to the `getkeys-api` option.
Some keys cannot be found easily (`KEYS` in `MIGRATE`: Imagine the argument for `AUTH` is the string “KEYS” - we will
start searching in the wrong index).
The guarantee is that the specs may be incomplete (`incomplete` will be specified in the spec to denote that) but we never
report false information (assuming the command syntax is correct).
For `MIGRATE` we start searching from the end - `startfrom=-1` - and if one of the keys is actually called "keys" we will
report only a subset of all keys - hence the `incomplete` flag.
Some `incomplete` specs can be completely empty (i.e. UNKNOWN begin_search) which should tell the client that
COMMAND GETKEYS (or any other way to get the keys) must be used (Example: For `SORT` there is no way to describe
the STORE keyword spec, as the word "store" can appear anywhere in the command).
We will expose these key specs in the `COMMAND` command so that clients can learn, on startup, where the keys are for
all commands instead of holding hardcoded tables or use `COMMAND GETKEYS` in runtime.
Comments:
1. Redis doesn't internally use the new specs, they are only used for COMMAND output.
2. In order to support the current COMMAND INFO format (reply array indices 4, 5, 6) we created a synthetic range, called
legacy_range, that, if possible, is built according to the new specs.
3. Redis currently uses only getkeys_proc or the legacy_range to get the keys indices (in COMMAND GETKEYS for
example).
"incomplete" specs:
the commands we have issues with are MIGRATE, STRALGO, and SORT
for MIGRATE, because the token KEYS, if it exists, must be the last token, we can search in reverse. If one of the keys is
actually the string "keys" we will return just a subset of the keys (hence, it's "incomplete")
for SORT and STRALGO we can't use such a heuristic (the keys can be anywhere in the command) and therefore we added a
key spec that is both "incomplete" and of "unknown type"
if a client encounters an "incomplete" spec it means that it must find a different way (either COMMAND GETKEYS or have
its own parser) to retrieve the keys.
please note that all commands, apart from the three mentioned above, have "complete" key specs
2021-09-15 10:10:29 +02:00
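To make the spec resolution concrete, the following is a minimal, self-contained sketch (illustrative only, not part of the server source) of how a client could apply ZUNION's specs quoted above: a begin_search of type index with value 1, and a keynum find_keys spec of [0,1,1].

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* ZUNION 2 k1 k2 WITHSCORES */
    const char *argv[] = {"ZUNION", "2", "k1", "k2", "WITHSCORES"};
    int begin = 1;                                  /* index-type begin_search */
    int keynum_index = 0, first_key = 1, step = 1;  /* keynum find_keys spec */
    int numkeys = atoi(argv[begin + keynum_index]); /* argv[1] -> 2 */
    for (int i = 0; i < numkeys; i++) {
        int pos = begin + first_key + i * step;
        printf("key #%d: %s (argv[%d])\n", i, argv[pos], pos); /* k1, k2 */
    }
    return 0;
}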
|
|
|
int count = ((last - first) + 1);
|
2020-10-05 17:03:17 +03:00
|
|
|
keys = getKeysPrepareResult(result, count);
|
2020-02-05 18:06:33 +02:00
|
|
|
|
2021-09-15 10:10:29 +02:00
|
|
|
for (j = first; j <= last; j += step) {
|
2022-01-20 13:05:27 -08:00
|
|
|
if (j >= argc || j < first) {
|
2018-01-12 11:21:10 +01:00
|
|
|
/* Module commands, and standard commands without a fixed number
|
2018-07-01 13:24:50 +08:00
|
|
|
* of arguments (negative arity parameter) do not have dispatch
|
2018-01-12 11:21:10 +01:00
|
|
|
* time arity checks, so we need to handle the case where the user
|
|
|
|
* passed an invalid number of arguments here. In this case we
|
|
|
|
* return no keys and expect the command implementation to report
|
|
|
|
* an arity or syntax error. */
|
|
|
|
if (cmd->flags & CMD_MODULE || cmd->arity < 0) {
|
2020-10-05 17:03:17 +03:00
|
|
|
result->numkeys = 0;
|
|
|
|
return 0;
|
2017-04-19 16:17:08 +02:00
|
|
|
} else {
|
2024-05-08 15:17:32 -04:00
|
|
|
serverPanic("%s built-in command declared keys positions"
|
|
|
|
" not matching the arity requirements.",
|
|
|
|
server.extended_redis_compat ? "Redis" : "Valkey");
|
2017-04-19 16:17:08 +02:00
|
|
|
}
|
|
|
|
}
|
2022-01-20 13:05:27 -08:00
|
|
|
keys[i].pos = j;
|
|
|
|
/* Flags are omitted from legacy key specs */
|
|
|
|
keys[i++].flags = 0;
|
2011-03-23 18:09:17 +01:00
|
|
|
}
|
2020-10-05 17:03:17 +03:00
|
|
|
result->numkeys = i;
|
|
|
|
return i;
|
2011-03-23 18:09:17 +01:00
|
|
|
}
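As a concrete walk-through of the range arithmetic above, here is a standalone sketch (illustrative only; plain ints stand in for the key-spec structs) that resolves MSET's legacy range, described in the commit message above as a begin_search index of 1 with lastkey = -1 and step = 2:

#include <stdio.h>

int main(void) {
    int argc = 5;                       /* MSET k1 v1 k2 v2 */
    int first = 1, last = -1, step = 2; /* MSET's legacy range */
    if (last < 0) last = argc + last;   /* -1 -> 4, the last argument */
    for (int j = first; j <= last; j += step)
        printf("key at argv[%d]\n", j); /* prints 1 and 3: k1 and k2 */
    return 0;
}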
|
|
|
|
|
2014-03-10 15:31:01 +01:00
|
|
|
/* Return all the arguments that are keys in the command passed via argc / argv.
|
|
|
|
*
|
|
|
|
* The function returns the positions of all the key arguments inside the array,
|
Squash merging 125 typo/grammar/comment/doc PRs (#7773)
List of squashed commits or PRs
===============================
commit 66801ea
Author: hwware <wen.hui.ware@gmail.com>
Date: Mon Jan 13 00:54:31 2020 -0500
typo fix in acl.c
commit 46f55db
Author: Itamar Haber <itamar@redislabs.com>
Date: Sun Sep 6 18:24:11 2020 +0300
Updates a couple of comments
Specifically:
* RM_AutoMemory completed instead of pointing to docs
* Updated link to custom type doc
commit 61a2aa0
Author: xindoo <xindoo@qq.com>
Date: Tue Sep 1 19:24:59 2020 +0800
Correct errors in code comments
commit a5871d1
Author: yz1509 <pro-756@qq.com>
Date: Tue Sep 1 18:36:06 2020 +0800
fix typos in module.c
commit 41eede7
Author: bookug <bookug@qq.com>
Date: Sat Aug 15 01:11:33 2020 +0800
docs: fix typos in comments
commit c303c84
Author: lazy-snail <ws.niu@outlook.com>
Date: Fri Aug 7 11:15:44 2020 +0800
fix spelling in redis.conf
commit 1eb76bf
Author: zhujian <zhujianxyz@gmail.com>
Date: Thu Aug 6 15:22:10 2020 +0800
add a missing 'n' in comment
commit 1530ec2
Author: Daniel Dai <764122422@qq.com>
Date: Mon Jul 27 00:46:35 2020 -0400
fix spelling in tracking.c
commit e517b31
Author: Hunter-Chen <huntcool001@gmail.com>
Date: Fri Jul 17 22:33:32 2020 +0800
Update redis.conf
Co-authored-by: Itamar Haber <itamar@redislabs.com>
commit c300eff
Author: Hunter-Chen <huntcool001@gmail.com>
Date: Fri Jul 17 22:33:23 2020 +0800
Update redis.conf
Co-authored-by: Itamar Haber <itamar@redislabs.com>
commit 4c058a8
Author: 陈浩鹏 <chenhaopeng@heytea.com>
Date: Thu Jun 25 19:00:56 2020 +0800
Grammar fix and clarification
commit 5fcaa81
Author: bodong.ybd <bodong.ybd@alibaba-inc.com>
Date: Fri Jun 19 10:09:00 2020 +0800
Fix typos
commit 4caca9a
Author: Pruthvi P <pruthvi@ixigo.com>
Date: Fri May 22 00:33:22 2020 +0530
Fix typo eviciton => eviction
commit b2a25f6
Author: Brad Dunbar <dunbarb2@gmail.com>
Date: Sun May 17 12:39:59 2020 -0400
Fix a typo.
commit 12842ae
Author: hwware <wen.hui.ware@gmail.com>
Date: Sun May 3 17:16:59 2020 -0400
fix spelling in redis conf
commit ddba07c
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date: Sat May 2 23:25:34 2020 +0100
Correct a "conflicts" spelling error.
commit 8fc7bf2
Author: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
Date: Thu Apr 30 10:25:27 2020 +0900
docs: fix EXPIRE_FAST_CYCLE_DURATION to ACTIVE_EXPIRE_CYCLE_FAST_DURATION
commit 9b2b67a
Author: Brad Dunbar <dunbarb2@gmail.com>
Date: Fri Apr 24 11:46:22 2020 -0400
Fix a typo.
commit 0746f10
Author: devilinrust <63737265+devilinrust@users.noreply.github.com>
Date: Thu Apr 16 00:17:53 2020 +0200
Fix typos in server.c
commit 92b588d
Author: benjessop12 <56115861+benjessop12@users.noreply.github.com>
Date: Mon Apr 13 13:43:55 2020 +0100
Fix spelling mistake in lazyfree.c
commit 1da37aa
Merge: 2d4ba28 af347a8
Author: hwware <wen.hui.ware@gmail.com>
Date: Thu Mar 5 22:41:31 2020 -0500
Merge remote-tracking branch 'upstream/unstable' into expiretypofix
commit 2d4ba28
Author: hwware <wen.hui.ware@gmail.com>
Date: Mon Mar 2 00:09:40 2020 -0500
fix typo in expire.c
commit 1a746f7
Author: SennoYuki <minakami1yuki@gmail.com>
Date: Thu Feb 27 16:54:32 2020 +0800
fix typo
commit 8599b1a
Author: dongheejeong <donghee950403@gmail.com>
Date: Sun Feb 16 20:31:43 2020 +0000
Fix typo in server.c
commit f38d4e8
Author: hwware <wen.hui.ware@gmail.com>
Date: Sun Feb 2 22:58:38 2020 -0500
fix typo in evict.c
commit fe143fc
Author: Leo Murillo <leonardo.murillo@gmail.com>
Date: Sun Feb 2 01:57:22 2020 -0600
Fix a few typos in redis.conf
commit 1ab4d21
Author: viraja1 <anchan.viraj@gmail.com>
Date: Fri Dec 27 17:15:58 2019 +0530
Fix typo in Latency API docstring
commit ca1f70e
Author: gosth <danxuedexing@qq.com>
Date: Wed Dec 18 15:18:02 2019 +0800
fix typo in sort.c
commit a57c06b
Author: ZYunH <zyunhjob@163.com>
Date: Mon Dec 16 22:28:46 2019 +0800
fix-zset-typo
commit b8c92b5
Author: git-hulk <hulk.website@gmail.com>
Date: Mon Dec 16 15:51:42 2019 +0800
FIX: typo in cluster.c, onformation->information
commit 9dd981c
Author: wujm2007 <jim.wujm@gmail.com>
Date: Mon Dec 16 09:37:52 2019 +0800
Fix typo
commit e132d7a
Author: Sebastien Williams-Wynn <s.williamswynn.mail@gmail.com>
Date: Fri Nov 15 00:14:07 2019 +0000
Minor typo change
commit 47f44d5
Author: happynote3966 <01ssrmikururudevice01@gmail.com>
Date: Mon Nov 11 22:08:48 2019 +0900
fix comment typo in redis-cli.c
commit b8bdb0d
Author: fulei <fulei@kuaishou.com>
Date: Wed Oct 16 18:00:17 2019 +0800
Fix a spelling mistake of comments in defragDictBucketCallback
commit 0def46a
Author: fulei <fulei@kuaishou.com>
Date: Wed Oct 16 13:09:27 2019 +0800
fix some spelling mistakes of comments in defrag.c
commit f3596fd
Author: Phil Rajchgot <tophil@outlook.com>
Date: Sun Oct 13 02:02:32 2019 -0400
Typo and grammar fixes
Redis and its documentation are great -- just wanted to submit a few corrections in the spirit of Hacktoberfest. Thanks for all your work on this project. I use it all the time and it works beautifully.
commit 2b928cd
Author: KangZhiDong <worldkzd@gmail.com>
Date: Sun Sep 1 07:03:11 2019 +0800
fix typos
commit 33aea14
Author: Axlgrep <axlgrep@gmail.com>
Date: Tue Aug 27 11:02:18 2019 +0800
Fixed eviction spelling issues
commit e282a80
Author: Simen Flatby <simen@oms.no>
Date: Tue Aug 20 15:25:51 2019 +0200
Update comments to reflect prop name
In the comments the prop is referenced as replica-validity-factor,
but it is really named cluster-replica-validity-factor.
commit 74d1f9a
Author: Jim Green <jimgreen2013@qq.com>
Date: Tue Aug 20 20:00:31 2019 +0800
fix comment error, the code is ok
commit eea1407
Author: Liao Tonglang <liaotonglang@gmail.com>
Date: Fri May 31 10:16:18 2019 +0800
typo fix
fix cna't to can't
commit 0da553c
Author: KAWACHI Takashi <tkawachi@gmail.com>
Date: Wed Jul 17 00:38:16 2019 +0900
Fix typo
commit 7fc8fb6
Author: Michael Prokop <mika@grml.org>
Date: Tue May 28 17:58:42 2019 +0200
Typo fixes
s/familar/familiar/
s/compatiblity/compatibility/
s/ ot / to /
s/itsef/itself/
commit 5f46c9d
Author: zhumoing <34539422+zhumoing@users.noreply.github.com>
Date: Tue May 21 21:16:50 2019 +0800
typo-fixes
typo-fixes
commit 321dfe1
Author: wxisme <850885154@qq.com>
Date: Sat Mar 16 15:10:55 2019 +0800
typo fix
commit b4fb131
Merge: 267e0e6 3df1eb8
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date: Fri Feb 8 22:55:45 2019 +0200
Merge branch 'unstable' of antirez/redis into unstable
commit 267e0e6
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date: Wed Jan 30 21:26:04 2019 +0200
Minor typo fix
commit 30544e7
Author: inshal96 <39904558+inshal96@users.noreply.github.com>
Date: Fri Jan 4 16:54:50 2019 +0500
remove an extra 'a' in the comments
commit 337969d
Author: BrotherGao <yangdongheng11@gmail.com>
Date: Sat Dec 29 12:37:29 2018 +0800
fix typo in redis.conf
commit 9f4b121
Merge: 423a030 e504583
Author: BrotherGao <yangdongheng@xiaomi.com>
Date: Sat Dec 29 11:41:12 2018 +0800
Merge branch 'unstable' of antirez/redis into unstable
commit 423a030
Merge: 42b02b7 46a51cd
Author: 杨东衡 <yangdongheng@xiaomi.com>
Date: Tue Dec 4 23:56:11 2018 +0800
Merge branch 'unstable' of antirez/redis into unstable
commit 42b02b7
Merge: 68c0e6e b8febe6
Author: Dongheng Yang <yangdongheng11@gmail.com>
Date: Sun Oct 28 15:54:23 2018 +0800
Merge pull request #1 from antirez/unstable
update local data
commit 714b589
Author: Christian <crifei93@gmail.com>
Date: Fri Dec 28 01:17:26 2018 +0100
fix typo "resulution"
commit e23259d
Author: garenchan <1412950785@qq.com>
Date: Wed Dec 26 09:58:35 2018 +0800
fix typo: segfauls -> segfault
commit a9359f8
Author: xjp <jianping_xie@aliyun.com>
Date: Tue Dec 18 17:31:44 2018 +0800
Fixed REDISMODULE_H spell bug
commit a12c3e4
Author: jdiaz <jrd.palacios@gmail.com>
Date: Sat Dec 15 23:39:52 2018 -0600
Fixes hyperloglog hash function comment block description
commit 770eb11
Author: 林上耀 <1210tom@163.com>
Date: Sun Nov 25 17:16:10 2018 +0800
fix typo
commit fd97fbb
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date: Fri Nov 23 17:14:01 2018 +0100
Correct "unsupported" typo.
commit a85522d
Author: Jungnam Lee <jungnam.lee@oracle.com>
Date: Thu Nov 8 23:01:29 2018 +0900
fix typo in test comments
commit ade8007
Author: Arun Kumar <palerdot@users.noreply.github.com>
Date: Tue Oct 23 16:56:35 2018 +0530
Fixed grammatical typo
Fixed typo for word 'dictionary'
commit 869ee39
Author: Hamid Alaei <hamid.a85@gmail.com>
Date: Sun Aug 12 16:40:02 2018 +0430
fix documentations: (ThreadSafeContextStart/Stop -> ThreadSafeContextLock/Unlock), minor typo
commit f89d158
Author: Mayank Jain <mayankjain255@gmail.com>
Date: Tue Jul 31 23:01:21 2018 +0530
Updated README.md with some spelling corrections.
Made correction in spelling of some misspelled words.
commit 892198e
Author: dsomeshwar <someshwar.dhayalan@gmail.com>
Date: Sat Jul 21 23:23:04 2018 +0530
typo fix
commit 8a4d780
Author: Itamar Haber <itamar@redislabs.com>
Date: Mon Apr 30 02:06:52 2018 +0300
Fixes some typos
commit e3acef6
Author: Noah Rosamilia <ivoahivoah@gmail.com>
Date: Sat Mar 3 23:41:21 2018 -0500
Fix typo in /deps/README.md
commit 04442fb
Author: WuYunlong <xzsyeb@126.com>
Date: Sat Mar 3 10:32:42 2018 +0800
Fix typo in readSyncBulkPayload() comment.
commit 9f36880
Author: WuYunlong <xzsyeb@126.com>
Date: Sat Mar 3 10:20:37 2018 +0800
replication.c comment: run_id -> replid.
commit f866b4a
Author: Francesco 'makevoid' Canessa <makevoid@gmail.com>
Date: Thu Feb 22 22:01:56 2018 +0000
fix comment typo in server.c
commit 0ebc69b
Author: 줍 <jubee0124@gmail.com>
Date: Mon Feb 12 16:38:48 2018 +0900
Fix typo in redis.conf
Fix `five behaviors` to `eight behaviors` in [this sentence ](antirez/redis@unstable/redis.conf#L564)
commit b50a620
Author: martinbroadhurst <martinbroadhurst@users.noreply.github.com>
Date: Thu Dec 28 12:07:30 2017 +0000
Fix typo in valgrind.sup
commit 7d8f349
Author: Peter Boughton <peter@sorcerersisle.com>
Date: Mon Nov 27 19:52:19 2017 +0000
Update CONTRIBUTING; refer doc updates to redis-doc repo.
commit 02dec7e
Author: Klauswk <klauswk1@hotmail.com>
Date: Tue Oct 24 16:18:38 2017 -0200
Fix typo in comment
commit e1efbc8
Author: chenshi <baiwfg2@gmail.com>
Date: Tue Oct 3 18:26:30 2017 +0800
Correct two spelling errors of comments
commit 93327d8
Author: spacewander <spacewanderlzx@gmail.com>
Date: Wed Sep 13 16:47:24 2017 +0800
Update the comment for OBJ_ENCODING_EMBSTR_SIZE_LIMIT's value
The value of OBJ_ENCODING_EMBSTR_SIZE_LIMIT is 44 now instead of 39.
commit 63d361f
Author: spacewander <spacewanderlzx@gmail.com>
Date: Tue Sep 12 15:06:42 2017 +0800
Fix <prevlen> related doc in ziplist.c
According to the definition of ZIP_BIG_PREVLEN and other related code,
the guard of single byte <prevlen> should be 254 instead of 255.
commit ebe228d
Author: hanael80 <hanael80@gmail.com>
Date: Tue Aug 15 09:09:40 2017 +0900
Fix typo
commit 6b696e6
Author: Matt Robenolt <matt@ydekproductions.com>
Date: Mon Aug 14 14:50:47 2017 -0700
Fix typo in LATENCY DOCTOR output
commit a2ec6ae
Author: caosiyang <caosiyang@qiyi.com>
Date: Tue Aug 15 14:15:16 2017 +0800
Fix a typo: form => from
commit 3ab7699
Author: caosiyang <caosiyang@qiyi.com>
Date: Thu Aug 10 18:40:33 2017 +0800
Fix a typo: replicationFeedSlavesFromMaster() => replicationFeedSlavesFromMasterStream()
commit 72d43ef
Author: caosiyang <caosiyang@qiyi.com>
Date: Tue Aug 8 15:57:25 2017 +0800
fix a typo: servewr => server
commit 707c958
Author: Bo Cai <charpty@gmail.com>
Date: Wed Jul 26 21:49:42 2017 +0800
redis-cli.c typo: conut -> count.
Signed-off-by: Bo Cai <charpty@gmail.com>
commit b9385b2
Author: JackDrogon <jack.xsuperman@gmail.com>
Date: Fri Jun 30 14:22:31 2017 +0800
Fix some spell problems
commit 20d9230
Author: akosel <aaronjkosel@gmail.com>
Date: Sun Jun 4 19:35:13 2017 -0500
Fix typo
commit b167bfc
Author: Krzysiek Witkowicz <krzysiekwitkowicz@gmail.com>
Date: Mon May 22 21:32:27 2017 +0100
Fix #4008 small typo in comment
commit 2b78ac8
Author: Jake Clarkson <jacobwclarkson@gmail.com>
Date: Wed Apr 26 15:49:50 2017 +0100
Correct typo in tests/unit/hyperloglog.tcl
commit b0f1cdb
Author: Qi Luo <qiluo-msft@users.noreply.github.com>
Date: Wed Apr 19 14:25:18 2017 -0700
Fix typo
commit a90b0f9
Author: charsyam <charsyam@naver.com>
Date: Thu Mar 16 18:19:53 2017 +0900
fix typos
fix typos
fix typos
commit 8430a79
Author: Richard Hart <richardhart92@gmail.com>
Date: Mon Mar 13 22:17:41 2017 -0400
Fixed log message typo in listenToPort.
commit 481a1c2
Author: Vinod Kumar <kumar003vinod@gmail.com>
Date: Sun Jan 15 23:04:51 2017 +0530
src/db.c: Correct "save" -> "safe" typo
commit 586b4d3
Author: wangshaonan <wshn13@gmail.com>
Date: Wed Dec 21 20:28:27 2016 +0800
Fix typo they->the in helloworld.c
commit c1c4b5e
Author: Jenner <hypxm@qq.com>
Date: Mon Dec 19 16:39:46 2016 +0800
typo error
commit 1ee1a3f
Author: tielei <43289893@qq.com>
Date: Mon Jul 18 13:52:25 2016 +0800
fix some comments
commit 11a41fb
Author: Otto Kekäläinen <otto@seravo.fi>
Date: Sun Jul 3 10:23:55 2016 +0100
Fix spelling in documentation and comments
commit 5fb5d82
Author: francischan <f1ancis621@gmail.com>
Date: Tue Jun 28 00:19:33 2016 +0800
Fix outdated comments about redis.c file.
It should now refer to server.c file.
commit 6b254bc
Author: lmatt-bit <lmatt123n@gmail.com>
Date: Thu Apr 21 21:45:58 2016 +0800
Refine the comment of dictRehashMilliseconds func
SLAVECONF->REPLCONF in comment - by andyli029
commit ee9869f
Author: clark.kang <charsyam@naver.com>
Date: Tue Mar 22 11:09:51 2016 +0900
fix typos
commit f7b3b11
Author: Harisankar H <harisankarh@gmail.com>
Date: Wed Mar 9 11:49:42 2016 +0530
Typo correction: "faield" --> "failed"
Typo correction: "faield" --> "failed"
commit 3fd40fc
Author: Itamar Haber <itamar@redislabs.com>
Date: Thu Feb 25 10:31:51 2016 +0200
Fixes a typo in comments
commit 621c160
Author: Prayag Verma <prayag.verma@gmail.com>
Date: Mon Feb 1 12:36:20 2016 +0530
Fix typo in Readme.md
Spelling mistakes -
`eviciton` > `eviction`
`familar` > `familiar`
commit d7d07d6
Author: WonCheol Lee <toctoc21c@gmail.com>
Date: Wed Dec 30 15:11:34 2015 +0900
Typo fixed
commit a4dade7
Author: Felix Bünemann <buenemann@louis.info>
Date: Mon Dec 28 11:02:55 2015 +0100
[ci skip] Improve supervised upstart config docs
This mentions that "expect stop" is required for supervised upstart
to work correctly. See http://upstart.ubuntu.com/cookbook/#expect-stop
for an explanation.
commit d9caba9
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:30:03 2015 +1100
README: Remove trailing whitespace
commit 72d42e5
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:29:32 2015 +1100
README: Fix typo. th => the
commit dd6e957
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:29:20 2015 +1100
README: Fix typo. familar => familiar
commit 3a12b23
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:28:54 2015 +1100
README: Fix typo. eviciton => eviction
commit 2d1d03b
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:21:45 2015 +1100
README: Fix typo. sever => server
commit 3973b06
Author: Itamar Haber <itamar@garantiadata.com>
Date: Sat Dec 19 17:01:20 2015 +0200
Typo fix
commit 4f2e460
Author: Steve Gao <fu@2token.com>
Date: Fri Dec 4 10:22:05 2015 +0800
Update README - fix typos
commit b21667c
Author: binyan <binbin.yan@nokia.com>
Date: Wed Dec 2 22:48:37 2015 +0800
delete redundancy color judge in sdscatcolor
commit 88894c7
Author: binyan <binbin.yan@nokia.com>
Date: Wed Dec 2 22:14:42 2015 +0800
the example output shoule be HelloWorld
commit 2763470
Author: binyan <binbin.yan@nokia.com>
Date: Wed Dec 2 17:41:39 2015 +0800
modify error word keyevente
Signed-off-by: binyan <binbin.yan@nokia.com>
commit 0847b3d
Author: Bruno Martins <bscmartins@gmail.com>
Date: Wed Nov 4 11:37:01 2015 +0000
typo
commit bbb9e9e
Author: dawedawe <dawedawe@gmx.de>
Date: Fri Mar 27 00:46:41 2015 +0100
typo: zimap -> zipmap
commit 5ed297e
Author: Axel Advento <badwolf.bloodseeker.rev@gmail.com>
Date: Tue Mar 3 15:58:29 2015 +0800
Fix 'salve' typos to 'slave'
commit edec9d6
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date: Wed Jun 12 14:12:47 2019 +0200
Update README.md
Co-Authored-By: Qix <Qix-@users.noreply.github.com>
commit 692a7af
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date: Tue May 28 14:32:04 2019 +0200
grammar
commit d962b0a
Author: Nick Frost <nickfrostatx@gmail.com>
Date: Wed Jul 20 15:17:12 2016 -0700
Minor grammar fix
commit 24fff01aaccaf5956973ada8c50ceb1462e211c6 (typos)
Author: Chad Miller <chadm@squareup.com>
Date: Tue Sep 8 13:46:11 2020 -0400
Fix faulty comment about operation of unlink()
commit 3cd5c1f3326c52aa552ada7ec797c6bb16452355
Author: Kevin <kevin.xgr@gmail.com>
Date: Wed Nov 20 00:13:50 2019 +0800
Fix typo in server.c.
From a83af59 Mon Sep 17 00:00:00 2001
From: wuwo <wuwo@wacai.com>
Date: Fri, 17 Mar 2017 20:37:45 +0800
Subject: [PATCH] falure to failure
From c961896 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=B7=A6=E6=87=B6?= <veficos@gmail.com>
Date: Sat, 27 May 2017 15:33:04 +0800
Subject: [PATCH] fix typo
From e600ef2 Mon Sep 17 00:00:00 2001
From: "rui.zou" <rui.zou@yunify.com>
Date: Sat, 30 Sep 2017 12:38:15 +0800
Subject: [PATCH] fix a typo
From c7d07fa Mon Sep 17 00:00:00 2001
From: Alexandre Perrin <alex@kaworu.ch>
Date: Thu, 16 Aug 2018 10:35:31 +0200
Subject: [PATCH] deps README.md typo
From b25cb67 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 10:55:37 +0300
Subject: [PATCH 1/2] fix typos in header
From ad28ca6 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 11:02:36 +0300
Subject: [PATCH 2/2] fix typos
commit 34924cdedd8552466fc22c1168d49236cb7ee915
Author: Adrian Lynch <adi_ady_ade@hotmail.com>
Date: Sat Apr 4 21:59:15 2015 +0100
Typos fixed
commit fd2a1e7
Author: Jan <jsteemann@users.noreply.github.com>
Date: Sat Oct 27 19:13:01 2018 +0200
Fix typos
Fix typos
commit e14e47c1a234b53b0e103c5f6a1c61481cbcbb02
Author: Andy Lester <andy@petdance.com>
Date: Fri Aug 2 22:30:07 2019 -0500
Fix multiple misspellings of "following"
commit 79b948ce2dac6b453fe80995abbcaac04c213d5a
Author: Andy Lester <andy@petdance.com>
Date: Fri Aug 2 22:24:28 2019 -0500
Fix misspelling of create-cluster
commit 1fffde52666dc99ab35efbd31071a4c008cb5a71
Author: Andy Lester <andy@petdance.com>
Date: Wed Jul 31 17:57:56 2019 -0500
Fix typos
commit 204c9ba9651e9e05fd73936b452b9a30be456cfe
Author: Xiaobo Zhu <xiaobo.zhu@shopee.com>
Date: Tue Aug 13 22:19:25 2019 +0800
fix typos
Squashed commit of the following:
commit 1d9aaf8
Author: danmedani <danmedani@gmail.com>
Date: Sun Aug 2 11:40:26 2015 -0700
README typo fix.
Squashed commit of the following:
commit 32bfa7c
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date: Mon Jul 6 21:15:08 2015 +0200
Fixed grammer
Squashed commit of the following:
commit b24f69c
Author: Sisir Koppaka <sisir.koppaka@gmail.com>
Date: Mon Mar 2 22:38:45 2015 -0500
utils/hashtable/rehashing.c: Fix typos
Squashed commit of the following:
commit 4e04082
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date: Mon Mar 23 08:22:21 2015 +0000
Small config file documentation improvements
Squashed commit of the following:
commit acb8773
Author: ctd1500 <ctd1500@gmail.com>
Date: Fri May 8 01:52:48 2015 -0700
Typo and grammar fixes in readme
commit 2eb75b6
Author: ctd1500 <ctd1500@gmail.com>
Date: Fri May 8 01:36:18 2015 -0700
fixed redis.conf comment
Squashed commit of the following:
commit a8249a2
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Date: Fri Dec 11 11:39:52 2015 +0530
Revise correction of typos.
Squashed commit of the following:
commit 3c02028
Author: zhaojun11 <zhaojun11@jd.com>
Date: Wed Jan 17 19:05:28 2018 +0800
Fix typos include two code typos in cluster.c and latency.c
Squashed commit of the following:
commit 9dba47c
Author: q191201771 <191201771@qq.com>
Date: Sat Jan 4 11:31:04 2020 +0800
fix function listCreate comment in adlist.c
Update src/server.c
commit 2c7c2cb536e78dd211b1ac6f7bda00f0f54faaeb
Author: charpty <charpty@gmail.com>
Date: Tue May 1 23:16:59 2018 +0800
server.c typo: modules system dictionary type comment
Signed-off-by: charpty <charpty@gmail.com>
commit a8395323fb63cb59cb3591cb0f0c8edb7c29a680
Author: Itamar Haber <itamar@redislabs.com>
Date: Sun May 6 00:25:18 2018 +0300
Updates test_helper.tcl's help with undocumented options
Specifically:
* Host
* Port
* Client
commit bde6f9ced15755cd6407b4af7d601b030f36d60b
Author: wxisme <850885154@qq.com>
Date: Wed Aug 8 15:19:19 2018 +0800
fix comments in deps files
commit 3172474ba991532ab799ee1873439f3402412331
Author: wxisme <850885154@qq.com>
Date: Wed Aug 8 14:33:49 2018 +0800
fix some comments
commit 01b6f2b6858b5cf2ce4ad5092d2c746e755f53f0
Author: Thor Juhasz <thor@juhasz.pro>
Date: Sun Nov 18 14:37:41 2018 +0100
Minor fixes to comments
Found some parts a little unclear on a first read, which prompted me to have a better look at the file and fix some minor things I noticed.
Fixing minor typos and grammar. There are no changes to configuration options.
These changes are only meant to help the user better understand the explanations to the various configuration options
2020-09-10 13:43:38 +03:00
|
|
|
* so the actual return value is a heap allocated array of integers. The
|
2014-03-10 15:31:01 +01:00
|
|
|
* length of the array is returned by reference into *numkeys.
|
|
|
|
*
|
2024-04-03 23:54:33 +07:00
|
|
|
* 'cmd' must point to the corresponding entry in the serverCommand
|
2014-03-10 15:31:01 +01:00
|
|
|
* table, according to the command name in argv[0].
|
2014-03-10 15:24:38 +01:00
|
|
|
*
|
|
|
|
* This function uses the command table if a command-specific helper function
|
|
|
|
* is not required, otherwise it calls the command-specific function. */
|
2024-04-03 23:54:33 +07:00
|
|
|
int getKeysFromCommand(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2016-04-27 18:09:31 +02:00
|
|
|
if (cmd->flags & CMD_MODULE_GETKEYS) {
|
2020-10-05 17:03:17 +03:00
|
|
|
return moduleGetCommandKeysViaAPI(cmd, argv, argc, result);
|
2022-05-10 13:56:12 +02:00
|
|
|
} else if (cmd->getkeys_proc) {
|
2020-10-05 17:03:17 +03:00
|
|
|
return cmd->getkeys_proc(cmd, argv, argc, result);
|
2011-03-23 18:09:17 +01:00
|
|
|
} else {
|
2021-09-15 10:10:29 +02:00
|
|
|
return getKeysUsingLegacyRangeSpec(cmd, argv, argc, result);
|
2011-03-23 18:09:17 +01:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-03-10 15:24:38 +01:00
|
|
|
/* Free the result of getKeysFromCommand. */
|
2020-10-05 17:03:17 +03:00
|
|
|
void getKeysFreeResult(getKeysResult *result) {
|
|
|
|
if (result && result->keys != result->keysbuf) zfree(result->keys);
|
2011-03-23 18:09:17 +01:00
|
|
|
}
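A minimal usage sketch of this pair of functions (hedged: it assumes the server's getKeysResult and GETKEYS_RESULT_INIT definitions and an already parsed cmd/argv/argc from the surrounding code; serverLog is used only for illustration):

/* Not a standalone program: runs inside the server where cmd, argv and
 * argc are already available. */
getKeysResult result = GETKEYS_RESULT_INIT;
int numkeys = getKeysFromCommand(cmd, argv, argc, &result);
for (int j = 0; j < numkeys; j++) {
    serverLog(LL_DEBUG, "key #%d is at argv[%d]", j, result.keys[j].pos);
}
getKeysFreeResult(&result); /* Frees the heap array if one was allocated. */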
|
|
|
|
|
2014-03-10 15:24:38 +01:00
|
|
|
/* Helper function to extract keys from the following commands:
|
2020-09-17 15:54:37 +08:00
|
|
|
* COMMAND [destkey] <num-keys> <key> [...] <key> [...] ... <options>
|
|
|
|
*
|
|
|
|
* eg:
|
|
|
|
* ZUNION <num-keys> <key> <key> ... <key> <options>
|
2014-03-10 15:24:38 +01:00
|
|
|
* ZUNIONSTORE <destkey> <num-keys> <key> <key> ... <key> <options>
|
2020-09-17 15:54:37 +08:00
|
|
|
*
|
|
|
|
* 'storeKeyOfs': destkey index; 0 means there is no destkey.
|
|
|
|
* 'keyCountOfs': num-keys index.
|
|
|
|
* 'firstKeyOfs': firstkey index.
|
|
|
|
* 'keyStep': the interval of each key, usually this value is 1.
|
2022-01-20 13:05:27 -08:00
|
|
|
*
|
2023-05-25 07:03:17 +09:00
|
|
|
* The commands using this function have a fully defined keyspec, so returning flags isn't needed. */
|
2020-10-05 17:03:17 +03:00
|
|
|
int genericGetKeys(int storeKeyOfs,
|
|
|
|
int keyCountOfs,
|
|
|
|
int firstKeyOfs,
|
|
|
|
int keyStep,
|
|
|
|
robj **argv,
|
|
|
|
int argc,
|
|
|
|
getKeysResult *result) {
|
2022-01-20 13:05:27 -08:00
|
|
|
int i, num;
|
|
|
|
keyReference *keys;
|
2011-03-23 18:09:17 +01:00
|
|
|
|
2020-09-17 15:54:37 +08:00
|
|
|
num = atoi(argv[keyCountOfs]->ptr);
|
2011-03-23 18:09:17 +01:00
|
|
|
/* Sanity check. Don't return any key if the command is going to
|
2020-09-17 15:54:37 +08:00
|
|
|
* reply with a syntax error (no input keys). */
|
|
|
|
if (num < 1 || num > (argc - firstKeyOfs) / keyStep) {
|
2020-10-05 17:03:17 +03:00
|
|
|
result->numkeys = 0;
|
|
|
|
return 0;
|
2011-03-23 18:09:17 +01:00
|
|
|
}
|
2014-03-07 16:32:04 -05:00
|
|
|
|
2020-10-05 17:03:17 +03:00
|
|
|
int numkeys = storeKeyOfs ? num + 1 : num;
|
|
|
|
keys = getKeysPrepareResult(result, numkeys);
|
|
|
|
result->numkeys = numkeys;
|
2014-03-07 16:32:04 -05:00
|
|
|
|
2020-09-17 15:54:37 +08:00
|
|
|
/* Add all key positions for argv[firstKeyOfs...n] to keys[] */
|
2022-01-20 13:05:27 -08:00
|
|
|
for (i = 0; i < num; i++) {
|
|
|
|
keys[i].pos = firstKeyOfs + (i * keyStep);
|
|
|
|
keys[i].flags = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (storeKeyOfs) {
|
|
|
|
keys[num].pos = storeKeyOfs;
|
|
|
|
keys[num].flags = 0;
|
|
|
|
}
|
2020-10-05 17:03:17 +03:00
|
|
|
return result->numkeys;
|
2020-09-11 16:59:15 +08:00
|
|
|
}
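To see how these offsets line up for a real command, here is a minimal standalone sketch (illustrative only) that mirrors the arithmetic for ZUNIONSTORE, which the wrapper further below invokes as genericGetKeys(1, 2, 3, 1, ...):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* ZUNIONSTORE dest 2 k1 k2 */
    const char *argv[] = {"ZUNIONSTORE", "dest", "2", "k1", "k2"};
    int storeKeyOfs = 1, keyCountOfs = 2, firstKeyOfs = 3, keyStep = 1;
    int num = atoi(argv[keyCountOfs]); /* 2 input keys */
    for (int i = 0; i < num; i++)
        printf("input key: %s (argv[%d])\n",
               argv[firstKeyOfs + i * keyStep], firstKeyOfs + i * keyStep);
    if (storeKeyOfs)
        printf("store key: %s (argv[%d])\n", argv[storeKeyOfs], storeKeyOfs);
    return 0;
}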
|
|
|
|
|
2024-04-03 23:54:33 +07:00
|
|
|
int sintercardGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2021-09-16 19:07:08 +08:00
|
|
|
UNUSED(cmd);
|
|
|
|
return genericGetKeys(0, 1, 2, 1, argv, argc, result);
|
|
|
|
}
|
|
|
|
|
2024-04-03 23:54:33 +07:00
|
|
|
int zunionInterDiffStoreGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2020-09-11 16:59:15 +08:00
|
|
|
UNUSED(cmd);
|
2020-10-05 17:03:17 +03:00
|
|
|
return genericGetKeys(1, 2, 3, 1, argv, argc, result);
|
2020-09-17 15:54:37 +08:00
|
|
|
}
|
2020-09-11 16:59:15 +08:00
|
|
|
|
2024-04-03 23:54:33 +07:00
|
|
|
int zunionInterDiffGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2020-09-17 15:54:37 +08:00
|
|
|
UNUSED(cmd);
|
2020-10-05 17:03:17 +03:00
|
|
|
return genericGetKeys(0, 1, 2, 1, argv, argc, result);
|
2011-03-23 18:09:17 +01:00
|
|
|
}
|
2011-04-28 19:00:33 +02:00
|
|
|
|
2024-04-03 23:54:33 +07:00
|
|
|
int evalGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2015-07-27 09:41:48 +02:00
|
|
|
UNUSED(cmd);
|
2020-10-05 17:03:17 +03:00
|
|
|
return genericGetKeys(0, 2, 3, 1, argv, argc, result);
|
2014-03-10 15:26:10 +01:00
|
|
|
}
|
|
|
|
|
2024-04-03 23:54:33 +07:00
|
|
|
int functionGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2021-10-07 14:41:26 +03:00
|
|
|
UNUSED(cmd);
|
|
|
|
return genericGetKeys(0, 2, 3, 1, argv, argc, result);
|
|
|
|
}
|
|
|
|
|
2024-04-03 23:54:33 +07:00
|
|
|
int lmpopGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2021-09-09 17:02:33 +08:00
|
|
|
UNUSED(cmd);
|
|
|
|
return genericGetKeys(0, 1, 2, 1, argv, argc, result);
|
|
|
|
}
|
|
|
|
|
2024-04-03 23:54:33 +07:00
|
|
|
int blmpopGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2021-09-09 17:02:33 +08:00
|
|
|
UNUSED(cmd);
|
|
|
|
return genericGetKeys(0, 2, 3, 1, argv, argc, result);
|
|
|
|
}
|
|
|
|
|
2024-04-03 23:54:33 +07:00
|
|
|
int zmpopGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2021-09-23 13:34:40 +08:00
|
|
|
UNUSED(cmd);
|
|
|
|
return genericGetKeys(0, 1, 2, 1, argv, argc, result);
|
|
|
|
}
|
|
|
|
|
2024-04-03 23:54:33 +07:00
|
|
|
int bzmpopGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2021-09-23 13:34:40 +08:00
|
|
|
UNUSED(cmd);
|
|
|
|
return genericGetKeys(0, 2, 3, 1, argv, argc, result);
|
|
|
|
}
|
|
|
|
|
2022-01-20 13:05:27 -08:00
|
|
|
/* Helper function to extract keys from the SORT RO command.
|
|
|
|
*
|
|
|
|
* SORT <sort-key>
|
|
|
|
*
|
|
|
|
* The second argument of SORT is always a key, however an arbitrary number of
|
|
|
|
* keys may be accessed while doing the sort (the BY and GET args), so the
|
|
|
|
* key-spec declares incomplete keys which is why we have to provide a concrete
|
|
|
|
* implementation to fetch the keys.
|
|
|
|
*
|
|
|
|
* This command declares incomplete keys, so the flags are correctly set for this function */
|
2024-04-03 23:54:33 +07:00
|
|
|
int sortROGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2022-01-20 13:05:27 -08:00
|
|
|
keyReference *keys;
|
|
|
|
UNUSED(cmd);
|
|
|
|
UNUSED(argv);
|
|
|
|
UNUSED(argc);
|
|
|
|
|
|
|
|
keys = getKeysPrepareResult(result, 1);
|
|
|
|
keys[0].pos = 1; /* <sort-key> is always present. */
|
|
|
|
keys[0].flags = CMD_KEY_RO | CMD_KEY_ACCESS;
|
2023-08-31 03:00:02 +08:00
|
|
|
result->numkeys = 1;
|
|
|
|
return result->numkeys;
|
2022-01-20 13:05:27 -08:00
|
|
|
}
|
|
|
|
|
2014-03-10 16:26:08 +01:00
|
|
|
/* Helper function to extract keys from the SORT command.
|
|
|
|
*
|
|
|
|
* SORT <sort-key> ... STORE <store-key> ...
|
|
|
|
*
|
|
|
|
* The first argument of SORT is always a key, however a list of options
|
|
|
|
* follows in SQL-like style. Here we parse just the minimum in order to
|
2022-01-20 13:05:27 -08:00
|
|
|
* correctly identify keys in the "STORE" option.
|
|
|
|
*
|
|
|
|
* This command declares incomplete keys, so the flags are correctly set for this function */
|
2024-04-03 23:54:33 +07:00
|
|
|
int sortGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2022-01-20 13:05:27 -08:00
|
|
|
int i, j, num, found_store = 0;
|
|
|
|
keyReference *keys;
|
2015-07-27 09:41:48 +02:00
|
|
|
UNUSED(cmd);
|
2014-03-10 16:26:08 +01:00
|
|
|
|
|
|
|
num = 0;
|
2020-10-05 17:03:17 +03:00
|
|
|
keys = getKeysPrepareResult(result, 2); /* Alloc 2 places for the worst case. */
|
2022-01-20 13:05:27 -08:00
|
|
|
keys[num].pos = 1; /* <sort-key> is always present. */
|
|
|
|
keys[num++].flags = CMD_KEY_RO | CMD_KEY_ACCESS;
|
2014-03-10 16:26:08 +01:00
|
|
|
|
|
|
|
/* Search for the STORE option. By default we assume options don't
|
|
|
|
* have arguments, so if we find an unknown option name we scan the
|
|
|
|
* next one. However, some options take 1 or 2 arguments, so we
|
|
|
|
* provide a list here in order to skip the right number of args. */
|
|
|
|
struct {
|
|
|
|
char *name;
|
|
|
|
int skip;
|
|
|
|
} skiplist[] = {
|
|
|
|
{"limit", 2}, {"get", 1}, {"by", 1}, {NULL, 0} /* End of elements. */
|
|
|
|
};
|
|
|
|
|
|
|
|
for (i = 2; i < argc; i++) {
|
|
|
|
for (j = 0; skiplist[j].name != NULL; j++) {
|
|
|
|
if (!strcasecmp(argv[i]->ptr, skiplist[j].name)) {
|
|
|
|
i += skiplist[j].skip;
|
|
|
|
break;
|
|
|
|
} else if (!strcasecmp(argv[i]->ptr, "store") && i + 1 < argc) {
|
2014-03-10 16:39:07 +01:00
|
|
|
/* Note: we don't increment "num" here and continue the loop
|
|
|
|
* to be sure to process the *last* "STORE" option if multiple
|
|
|
|
* ones are provided. This is the same behavior as SORT. */
|
2014-07-21 17:31:21 -04:00
|
|
|
found_store = 1;
|
2022-01-20 13:05:27 -08:00
|
|
|
keys[num].pos = i + 1; /* <store-key> */
|
|
|
|
keys[num].flags = CMD_KEY_OW | CMD_KEY_UPDATE;
|
2014-03-10 16:26:08 +01:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2020-10-05 17:03:17 +03:00
|
|
|
result->numkeys = num + found_store;
|
|
|
|
return result->numkeys;
|
2014-03-10 16:26:08 +01:00
|
|
|
}
|
|
|
|
|
2022-01-20 13:05:27 -08:00
|
|
|
/* This command declares incomplete keys, so the flags are correctly set for this function */
|
2024-04-03 23:54:33 +07:00
|
|
|
int migrateGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2022-10-13 20:03:54 +08:00
|
|
|
int i, j, num, first;
|
2022-01-20 13:05:27 -08:00
|
|
|
keyReference *keys;
|
2015-12-11 18:09:01 +01:00
|
|
|
UNUSED(cmd);
|
|
|
|
|
|
|
|
/* Assume the obvious form. */
|
|
|
|
first = 3;
|
|
|
|
num = 1;
|
|
|
|
|
|
|
|
/* But check for the extended one with the KEYS option. */
|
2022-10-13 20:03:54 +08:00
|
|
|
struct {
|
|
|
|
char *name;
|
|
|
|
int skip;
|
|
|
|
} skip_keywords[] = {{"copy", 0}, {"replace", 0}, {"auth", 1}, {"auth2", 2}, {NULL, 0}};
|
2015-12-11 18:09:01 +01:00
|
|
|
if (argc > 6) {
|
|
|
|
for (i = 6; i < argc; i++) {
|
2022-10-13 20:03:54 +08:00
|
|
|
if (!strcasecmp(argv[i]->ptr, "keys")) {
|
|
|
|
if (sdslen(argv[3]->ptr) > 0) {
|
|
|
|
/* This is a syntax error. So ignore the keys and leave
|
|
|
|
* the syntax error to be handled by migrateCommand. */
|
|
|
|
num = 0;
|
|
|
|
} else {
|
|
|
|
first = i + 1;
|
|
|
|
num = argc - first;
|
|
|
|
}
|
2015-12-11 18:09:01 +01:00
|
|
|
break;
|
|
|
|
}
|
2022-10-13 20:03:54 +08:00
|
|
|
for (j = 0; skip_keywords[j].name != NULL; j++) {
|
|
|
|
if (!strcasecmp(argv[i]->ptr, skip_keywords[j].name)) {
|
|
|
|
i += skip_keywords[j].skip;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2015-12-11 18:09:01 +01:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-10-05 17:03:17 +03:00
|
|
|
keys = getKeysPrepareResult(result, num);
|
2022-01-20 13:05:27 -08:00
|
|
|
for (i = 0; i < num; i++) {
|
|
|
|
keys[i].pos = first + i;
|
|
|
|
keys[i].flags = CMD_KEY_RW | CMD_KEY_ACCESS | CMD_KEY_DELETE;
|
|
|
|
}
|
2020-10-05 17:03:17 +03:00
|
|
|
result->numkeys = num;
|
|
|
|
return num;
|
2015-12-11 18:09:01 +01:00
|
|
|
}
|
|
|
|
|
2017-04-07 22:31:11 +00:00
|
|
|
/* Helper function to extract keys from the following commands:
|
|
|
|
* GEORADIUS key x y radius unit [WITHDIST] [WITHHASH] [WITHCOORD] [ASC|DESC]
|
2023-05-09 07:24:37 -04:00
|
|
|
* [COUNT count] [STORE key|STOREDIST key]
|
2022-01-20 13:05:27 -08:00
|
|
|
* GEORADIUSBYMEMBER key member radius unit ... options ...
|
|
|
|
*
|
|
|
|
* This command has a fully defined keyspec, so returning flags isn't needed. */
|
2024-04-03 23:54:33 +07:00
|
|
|
int georadiusGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2022-01-20 13:05:27 -08:00
|
|
|
int i, num;
|
|
|
|
keyReference *keys;
|
2017-04-07 22:31:11 +00:00
|
|
|
UNUSED(cmd);
|
|
|
|
|
|
|
|
/* Check for the presence of the stored key in the command */
|
|
|
|
int stored_key = -1;
|
|
|
|
for (i = 5; i < argc; i++) {
|
|
|
|
char *arg = argv[i]->ptr;
|
|
|
|
/* For the case when the user specifies both "store" and "storedist" options, the
|
2018-02-11 21:02:07 +08:00
|
|
|
* second key specified would override the first key. This behavior is kept
|
2017-04-07 22:31:11 +00:00
|
|
|
* the same as in georadiusCommand method.
|
|
|
|
*/
|
|
|
|
if ((!strcasecmp(arg, "store") || !strcasecmp(arg, "storedist")) && ((i + 1) < argc)) {
|
|
|
|
stored_key = i + 1;
|
|
|
|
i++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
num = 1 + (stored_key == -1 ? 0 : 1);
|
|
|
|
|
|
|
|
/* Keys in the command come from two places:
|
|
|
|
* argv[1] = key,
|
|
|
|
* argv[5...n] = stored key if present
|
|
|
|
*/
|
2020-10-05 17:03:17 +03:00
|
|
|
keys = getKeysPrepareResult(result, num);
|
2017-04-07 22:31:11 +00:00
|
|
|
|
|
|
|
/* Add all key positions to keys[] */
|
2022-01-20 13:05:27 -08:00
|
|
|
keys[0].pos = 1;
|
|
|
|
keys[0].flags = 0;
|
2017-04-07 22:31:11 +00:00
|
|
|
if (num > 1) {
|
2022-01-20 13:05:27 -08:00
|
|
|
keys[1].pos = stored_key;
|
|
|
|
keys[1].flags = 0;
|
2017-04-07 22:31:11 +00:00
|
|
|
}
|
2020-10-05 17:03:17 +03:00
|
|
|
result->numkeys = num;
|
|
|
|
return num;
|
2017-04-07 22:31:11 +00:00
|
|
|
}
|
|
|
|
|
2017-09-08 11:40:16 +02:00
|
|
|
/* XREAD [BLOCK <milliseconds>] [COUNT <count>] [GROUP <groupname> <ttl>]
|
2022-01-20 13:05:27 -08:00
|
|
|
* STREAMS key_1 key_2 ... key_N ID_1 ID_2 ... ID_N
|
|
|
|
*
|
|
|
|
* This command has a fully defined keyspec, so returning flags isn't needed. */
|
2024-04-03 23:54:33 +07:00
|
|
|
int xreadGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2022-01-20 13:05:27 -08:00
|
|
|
int i, num = 0;
|
|
|
|
keyReference *keys;
|
2017-09-08 11:40:16 +02:00
|
|
|
UNUSED(cmd);
|
|
|
|
|
2018-06-18 14:06:06 +02:00
|
|
|
/* We need to parse the options of the command in order to seek the first
|
|
|
|
* "STREAMS" string which is actually the option. This is needed because
|
|
|
|
* "STREAMS" could also be the name of the consumer group and even the
|
|
|
|
* name of the stream key. */
|
2017-09-08 11:40:16 +02:00
|
|
|
int streams_pos = -1;
|
|
|
|
for (i = 1; i < argc; i++) {
|
|
|
|
char *arg = argv[i]->ptr;
|
2018-06-18 14:06:06 +02:00
|
|
|
if (!strcasecmp(arg, "block")) {
|
|
|
|
i++; /* Skip option argument. */
|
|
|
|
} else if (!strcasecmp(arg, "count")) {
|
|
|
|
i++; /* Skip option argument. */
|
|
|
|
} else if (!strcasecmp(arg, "group")) {
|
|
|
|
i += 2; /* Skip option argument. */
|
|
|
|
} else if (!strcasecmp(arg, "noack")) {
|
|
|
|
/* Nothing to do. */
|
|
|
|
} else if (!strcasecmp(arg, "streams")) {
|
|
|
|
streams_pos = i;
|
|
|
|
break;
|
|
|
|
} else {
|
|
|
|
break; /* Syntax error. */
|
|
|
|
}
|
2017-09-08 11:40:16 +02:00
|
|
|
}
|
2017-09-08 11:51:53 +02:00
|
|
|
if (streams_pos != -1) num = argc - streams_pos - 1;
|
2017-09-08 11:40:16 +02:00
|
|
|
|
|
|
|
/* Syntax error. */
|
2018-04-20 20:19:03 +08:00
|
|
|
if (streams_pos == -1 || num == 0 || num % 2 != 0) {
|
2020-10-05 17:03:17 +03:00
|
|
|
result->numkeys = 0;
|
|
|
|
return 0;
|
2017-09-08 11:40:16 +02:00
|
|
|
}
|
2017-09-08 11:51:53 +02:00
|
|
|
num /= 2; /* There are half as many keys as remaining arguments because
|
|
|
|
there are also the IDs, one per key. */
|
2017-09-08 11:40:16 +02:00
|
|
|
|
2020-10-05 17:03:17 +03:00
|
|
|
keys = getKeysPrepareResult(result, num);
|
2022-01-20 13:05:27 -08:00
|
|
|
for (i = streams_pos + 1; i < argc - num; i++) {
|
|
|
|
keys[i - streams_pos - 1].pos = i;
|
|
|
|
keys[i - streams_pos - 1].flags = 0;
|
|
|
|
}
|
2020-10-05 17:03:17 +03:00
|
|
|
result->numkeys = num;
|
|
|
|
return num;
|
2017-09-08 11:40:16 +02:00
|
|
|
}
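The index arithmetic above is easy to misread, so here is a standalone sketch (illustrative only) of how the same computation plays out for XREAD COUNT 10 STREAMS s1 s2 0-0 0-0:

#include <stdio.h>

int main(void) {
    int argc = 8;                     /* XREAD COUNT 10 STREAMS s1 s2 0-0 0-0 */
    int streams_pos = 3;              /* index of the STREAMS token */
    int num = argc - streams_pos - 1; /* 4 args follow STREAMS */
    num /= 2;                         /* half are keys, half are IDs -> 2 */
    for (int i = streams_pos + 1; i < argc - num; i++)
        printf("stream key at argv[%d]\n", i); /* prints 4 and 5: s1, s2 */
    return 0;
}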
|
2022-01-27 03:03:21 +08:00
|
|
|
|
|
|
|
/* Helper function to extract keys from the SET command, which may have
|
|
|
|
* a read flag if the GET argument is passed in. */
|
2024-04-03 23:54:33 +07:00
|
|
|
int setGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2022-01-27 03:03:21 +08:00
|
|
|
keyReference *keys;
|
|
|
|
UNUSED(cmd);
|
|
|
|
|
|
|
|
keys = getKeysPrepareResult(result, 1);
|
|
|
|
keys[0].pos = 1; /* We always know the position */
|
|
|
|
result->numkeys = 1;
|
|
|
|
|
|
|
|
for (int i = 3; i < argc; i++) {
|
|
|
|
char *arg = argv[i]->ptr;
|
|
|
|
if ((arg[0] == 'g' || arg[0] == 'G') && (arg[1] == 'e' || arg[1] == 'E') && (arg[2] == 't' || arg[2] == 'T') &&
|
|
|
|
arg[3] == '\0') {
|
|
|
|
keys[0].flags = CMD_KEY_RW | CMD_KEY_ACCESS | CMD_KEY_UPDATE;
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
keys[0].flags = CMD_KEY_OW | CMD_KEY_UPDATE;
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Helper function to extract keys from the BITFIELD command, which may be
|
|
|
|
* read-only if the BITFIELD GET subcommand is used. */
|
2024-04-03 23:54:33 +07:00
|
|
|
int bitfieldGetKeys(struct serverCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
2022-01-27 03:03:21 +08:00
|
|
|
keyReference *keys;
|
2022-08-07 14:21:19 +08:00
|
|
|
int readonly = 1;
|
2022-01-27 03:03:21 +08:00
|
|
|
UNUSED(cmd);
|
|
|
|
|
|
|
|
keys = getKeysPrepareResult(result, 1);
|
|
|
|
keys[0].pos = 1; /* We always know the position */
|
|
|
|
result->numkeys = 1;
|
|
|
|
|
|
|
|
for (int i = 2; i < argc; i++) {
|
|
|
|
int remargs = argc - i - 1; /* Remaining args other than current. */
|
|
|
|
char *arg = argv[i]->ptr;
|
|
|
|
if (!strcasecmp(arg, "get") && remargs >= 2) {
|
2022-08-07 14:21:19 +08:00
|
|
|
i += 2;
|
|
|
|
} else if ((!strcasecmp(arg, "set") || !strcasecmp(arg, "incrby")) && remargs >= 3) {
|
|
|
|
readonly = 0;
|
|
|
|
i += 3;
|
|
|
|
break;
|
|
|
|
} else if (!strcasecmp(arg, "overflow") && remargs >= 1) {
|
|
|
|
i += 1;
|
|
|
|
} else {
|
|
|
|
readonly = 0; /* Syntax error; safer to assume non-RO. */
|
|
|
|
break;
|
2022-01-27 03:03:21 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-08-07 14:21:19 +08:00
|
|
|
if (readonly) {
|
|
|
|
keys[0].flags = CMD_KEY_RO | CMD_KEY_ACCESS;
|
|
|
|
} else {
|
|
|
|
keys[0].flags = CMD_KEY_RW | CMD_KEY_ACCESS | CMD_KEY_UPDATE;
|
|
|
|
}
|
2022-01-27 03:03:21 +08:00
|
|
|
return 1;
|
|
|
|
}
|