/*
 * Copyright (c) 2009-2012, Salvatore Sanfilippo <antirez at gmail dot com>
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 *   * Redistributions of source code must retain the above copyright notice,
 *     this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of Redis nor the names of its contributors may be used
 *     to endorse or promote products derived from this software without
 *     specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

#include "server.h"
#include "lzf.h"    /* LZF compression library */
#include "zipmap.h"
#include "endianconv.h"
#include "fpconv_dtoa.h"
#include "stream.h"
#include "functions.h"
#include "intset.h"  /* Compact integer set structure */
#include "bio.h"

#include <math.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <arpa/inet.h>
#include <sys/stat.h>
#include <sys/param.h>

/* This macro is called when the internal RDB structure is corrupt */
#define rdbReportCorruptRDB(...) rdbReportError(1, __LINE__,__VA_ARGS__)
/* This macro is called when RDB read failed (possibly a short read) */
#define rdbReportReadError(...) rdbReportError(0, __LINE__,__VA_ARGS__)

/* This macro tells if we are in the context of a RESTORE command, and not loading an RDB or AOF. */
#define isRestoreContext() \
    ((server.current_client == NULL || server.current_client->id == CLIENT_ID_AOF) ? 0 : 1)

char *rdbFileBeingLoaded = NULL; /* used for rdb checking on read error */
extern int rdbCheckMode;
void rdbCheckError(const char *fmt, ...);
void rdbCheckSetError(const char *fmt, ...);

#ifdef __GNUC__
void rdbReportError(int corruption_error, int linenum, char *reason, ...) __attribute__ ((format (printf, 3, 4)));
#endif
void rdbReportError(int corruption_error, int linenum, char *reason, ...) {
    va_list ap;
    char msg[1024];
    int len;

    len = snprintf(msg,sizeof(msg),
        "Internal error in RDB reading offset %llu, function at rdb.c:%d -> ",
        (unsigned long long)server.loading_loaded_bytes, linenum);
    va_start(ap,reason);
    vsnprintf(msg+len,sizeof(msg)-len,reason,ap);
    va_end(ap);

    if (isRestoreContext()) {
        /* If we're in the context of a RESTORE command, just propagate the error. */
        /* log in VERBOSE, and return (don't exit). */
        serverLog(LL_VERBOSE, "%s", msg);
        return;
    } else if (rdbCheckMode) {
        /* If we're inside the rdb checker, let it handle the error. */
        rdbCheckError("%s",msg);
    } else if (rdbFileBeingLoaded) {
        /* If we're loading an rdb file from disk, run rdb check (and exit). */
        serverLog(LL_WARNING, "%s", msg);
        char *argv[2] = {"",rdbFileBeingLoaded};
        if (anetIsFifo(argv[1])) {
            /* Cannot check RDB FIFO because we cannot reopen the FIFO and check already streamed data. */
            rdbCheckError("Cannot check RDB that is a FIFO: %s", argv[1]);
            return;
        }
        redis_check_rdb_main(2,argv,NULL);
    } else if (corruption_error) {
        /* In diskless loading, in case of corrupt file, log and exit. */
        serverLog(LL_WARNING, "%s. Failure loading rdb format", msg);
    } else {
        /* In diskless loading, in case of a short read (not a corrupt
         * file), log and proceed (don't exit). */
        serverLog(LL_WARNING, "%s. Failure loading rdb format from socket, assuming connection error, resuming operation.", msg);
        return;
    }
    serverLog(LL_WARNING, "Terminating server after rdb file reading failure.");
    exit(1);
}

ssize_t rdbWriteRaw(rio *rdb, void *p, size_t len) {
    if (rdb && rioWrite(rdb,p,len) == 0)
        return -1;
    return len;
}

int rdbSaveType(rio *rdb, unsigned char type) {
    return rdbWriteRaw(rdb,&type,1);
}
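
/* Usage note (sketch of the calling convention as it appears from the code
 * above): passing a NULL 'rdb' to rdbWriteRaw(), and therefore to the save
 * helpers built on top of it, skips the actual I/O and only reports the
 * number of bytes the write would take, which lets the same code paths be
 * reused for computing serialized lengths. */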

/* Load a "type" in RDB format, that is a one byte unsigned integer.
 * This function is not only used to load object types, but also special
 * "types" like the end-of-file type, the EXPIRE type, and so forth. */
int rdbLoadType(rio *rdb) {
    unsigned char type;
    if (rioRead(rdb,&type,1) == 0) return -1;
    return type;
}

/* This is only used to load old databases stored with the RDB_OPCODE_EXPIRETIME
 * opcode. New versions of the server store using the RDB_OPCODE_EXPIRETIME_MS
 * opcode. On error -1 is returned, however this could be a valid time, so
 * to check for loading errors the caller should call rioGetReadError() after
 * calling this function. */
time_t rdbLoadTime(rio *rdb) {
    int32_t t32;
    if (rioRead(rdb,&t32,4) == 0) return -1;
    return (time_t)t32;
}

ssize_t rdbSaveMillisecondTime(rio *rdb, long long t) {
    int64_t t64 = (int64_t) t;
    memrev64ifbe(&t64); /* Store in little endian. */
    return rdbWriteRaw(rdb,&t64,8);
}

/* This function loads a time from the RDB file. It gets the version of the
 * RDB because, unfortunately, before Redis OSS 5 (RDB version 9), the function
 * failed to convert data to/from little endian, so RDB files with keys having
 * expires could not be shared between big endian and little endian systems
 * (because the expire time will be totally wrong). The fix for this is just
 * to call memrev64ifbe(), however if we fix this for all the RDB versions,
 * this call will introduce an incompatibility for big endian systems:
 * after upgrading to Redis OSS version 5 they will no longer be able to load
 * their own old RDB files. Because of that, we instead fix the function only
 * for new RDB versions, and load older RDB versions as we used to do in the
 * past, allowing big endian systems to load their own old RDB files.
 *
 * On I/O error the function returns LLONG_MAX, however if this is also a
 * valid stored value, the caller should use rioGetReadError() to check for
 * errors after calling this function. */
long long rdbLoadMillisecondTime(rio *rdb, int rdbver) {
    int64_t t64;
    if (rioRead(rdb,&t64,8) == 0) return LLONG_MAX;
    if (rdbver >= 9) /* Check the top comment of this function. */
        memrev64ifbe(&t64); /* Convert in big endian if the system is BE. */
    return (long long)t64;
}
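
/* Illustrative sketch (not compiled): how the two millisecond-time helpers
 * above are meant to pair up. 'mstime()' and 'RDB_VERSION' are assumed to be
 * available from server.h and rdb.h respectively. For any rdbver >= 9 the
 * little-endian conversion is applied symmetrically on save and load, so a
 * value survives the round trip on little and big endian hosts alike. */
#if 0
static void rdbMillisecondTimeRoundTripSketch(rio *w, rio *r) {
    long long when = mstime()+10000;               /* an expire 10 seconds away */
    rdbSaveMillisecondTime(w,when);                /* 8 bytes, little endian */
    long long loaded = rdbLoadMillisecondTime(r,RDB_VERSION);
    serverAssert(loaded == when || loaded == LLONG_MAX); /* LLONG_MAX on I/O error */
}
#endif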

/* Saves an encoded length. The first two bits in the first byte are used to
 * hold the encoding type. See the RDB_* definitions for more information
 * on the types of encoding. */
int rdbSaveLen(rio *rdb, uint64_t len) {
    unsigned char buf[2];
    size_t nwritten;

    if (len < (1<<6)) {
        /* Save a 6 bit len */
        buf[0] = (len&0xFF)|(RDB_6BITLEN<<6);
        if (rdbWriteRaw(rdb,buf,1) == -1) return -1;
        nwritten = 1;
    } else if (len < (1<<14)) {
        /* Save a 14 bit len */
        buf[0] = ((len>>8)&0xFF)|(RDB_14BITLEN<<6);
        buf[1] = len&0xFF;
        if (rdbWriteRaw(rdb,buf,2) == -1) return -1;
        nwritten = 2;
    } else if (len <= UINT32_MAX) {
        /* Save a 32 bit len */
        buf[0] = RDB_32BITLEN;
        if (rdbWriteRaw(rdb,buf,1) == -1) return -1;
        uint32_t len32 = htonl(len);
        if (rdbWriteRaw(rdb,&len32,4) == -1) return -1;
        nwritten = 1+4;
    } else {
        /* Save a 64 bit len */
        buf[0] = RDB_64BITLEN;
        if (rdbWriteRaw(rdb,buf,1) == -1) return -1;
        len = htonu64(len);
        if (rdbWriteRaw(rdb,&len,8) == -1) return -1;
        nwritten = 1+8;
    }
    return nwritten;
}
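
/* Worked example (sketch) of the length layouts produced above. The top two
 * bits of the first byte carry the tag that rdbLoadLenByRef() later extracts
 * with (buf[0]&0xC0)>>6:
 *
 *   len = 63          -> 1 byte:  (RDB_6BITLEN<<6)  | 63
 *   len = 300         -> 2 bytes: (RDB_14BITLEN<<6) | (300>>8), 300&0xFF
 *   len = 100000      -> 5 bytes: RDB_32BITLEN, then htonl(100000)
 *   len = 5000000000  -> 9 bytes: RDB_64BITLEN, then htonu64(5000000000)
 */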

/* Load an encoded length. If the loaded length is a normal length as stored
 * with rdbSaveLen(), the read length is set to '*lenptr'. If instead the
 * loaded length describes a special encoding that follows, then '*isencoded'
 * is set to 1 and the encoding format is stored at '*lenptr'.
 *
 * See the RDB_ENC_* definitions in rdb.h for more information on special
 * encodings.
 *
 * The function returns -1 on error, 0 on success. */
int rdbLoadLenByRef(rio *rdb, int *isencoded, uint64_t *lenptr) {
    unsigned char buf[2];
    int type;

    if (isencoded) *isencoded = 0;
    if (rioRead(rdb,buf,1) == 0) return -1;
    type = (buf[0]&0xC0)>>6;
    if (type == RDB_ENCVAL) {
        /* Read a 6 bit encoding type. */
        if (isencoded) *isencoded = 1;
        *lenptr = buf[0]&0x3F;
    } else if (type == RDB_6BITLEN) {
        /* Read a 6 bit len. */
        *lenptr = buf[0]&0x3F;
    } else if (type == RDB_14BITLEN) {
        /* Read a 14 bit len. */
        if (rioRead(rdb,buf+1,1) == 0) return -1;
        *lenptr = ((buf[0]&0x3F)<<8)|buf[1];
    } else if (buf[0] == RDB_32BITLEN) {
        /* Read a 32 bit len. */
        uint32_t len;
        if (rioRead(rdb,&len,4) == 0) return -1;
        *lenptr = ntohl(len);
    } else if (buf[0] == RDB_64BITLEN) {
        /* Read a 64 bit len. */
        uint64_t len;
        if (rioRead(rdb,&len,8) == 0) return -1;
        *lenptr = ntohu64(len);
    } else {
        rdbReportCorruptRDB(
            "Unknown length encoding %d in rdbLoadLen()",type);
        return -1; /* Never reached. */
    }
    return 0;
}

/* This is like rdbLoadLenByRef() but directly returns the value read
 * from the RDB stream, signaling an error by returning RDB_LENERR
 * (since it is a too large count to be applicable in any server data
 * structure). */
uint64_t rdbLoadLen(rio *rdb, int *isencoded) {
    uint64_t len;
    if (rdbLoadLenByRef(rdb,isencoded,&len) == -1) return RDB_LENERR;
    return len;
}
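
/* Illustrative round-trip sketch (not compiled): a length written with
 * rdbSaveLen() is read back with rdbLoadLen() through an in-memory rio.
 * Assumes rioInitWithBuffer() and the buffer-backed rio from rio.h. */
#if 0
static void rdbLenRoundTripSketch(void) {
    rio w;
    rioInitWithBuffer(&w,sdsempty());       /* write target: a growable sds */
    rdbSaveLen(&w,300);                     /* 2 bytes, 14 bit form */

    rio r;
    rioInitWithBuffer(&r,w.io.buffer.ptr);  /* re-read the same bytes (field layout assumed from rio.h) */
    int isencoded;
    uint64_t len = rdbLoadLen(&r,&isencoded);
    serverAssert(isencoded == 0 && len == 300);
}
#endif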

/* Encodes the "value" argument as an integer when it fits in the supported
 * ranges for encoded types. If the function successfully encodes the integer,
 * the representation is stored in the buffer pointed to by "enc" and the
 * string length is returned. Otherwise 0 is returned. */
int rdbEncodeInteger(long long value, unsigned char *enc) {
    if (value >= -(1<<7) && value <= (1<<7)-1) {
        enc[0] = (RDB_ENCVAL<<6)|RDB_ENC_INT8;
        enc[1] = value&0xFF;
        return 2;
    } else if (value >= -(1<<15) && value <= (1<<15)-1) {
        enc[0] = (RDB_ENCVAL<<6)|RDB_ENC_INT16;
        enc[1] = value&0xFF;
        enc[2] = (value>>8)&0xFF;
        return 3;
    } else if (value >= -((long long)1<<31) && value <= ((long long)1<<31)-1) {
        enc[0] = (RDB_ENCVAL<<6)|RDB_ENC_INT32;
        enc[1] = value&0xFF;
        enc[2] = (value>>8)&0xFF;
        enc[3] = (value>>16)&0xFF;
        enc[4] = (value>>24)&0xFF;
        return 5;
    } else {
        return 0;
    }
}
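
/* Worked example (sketch) of the integer encodings produced above, where
 * tag = (RDB_ENCVAL<<6) and the value bytes are stored little endian:
 *
 *   value = 5       -> 2 bytes: tag|RDB_ENC_INT8,  0x05
 *   value = 300     -> 3 bytes: tag|RDB_ENC_INT16, 0x2C, 0x01
 *   value = 100000  -> 5 bytes: tag|RDB_ENC_INT32, 0xA0, 0x86, 0x01, 0x00
 *   value = 2^40    -> 0 (does not fit, left to the string/LZF paths)
 */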

/* Loads an integer-encoded object with the specified encoding type "enctype".
 * The returned value changes according to the flags, see
 * rdbGenericLoadStringObject() for more info. */
void *rdbLoadIntegerObject(rio *rdb, int enctype, int flags, size_t *lenptr) {
    int plain = flags & RDB_LOAD_PLAIN;
    int sds = flags & RDB_LOAD_SDS;
    int encode = flags & RDB_LOAD_ENC;
    unsigned char enc[4];
    long long val;

    if (enctype == RDB_ENC_INT8) {
        if (rioRead(rdb,enc,1) == 0) return NULL;
        val = (signed char)enc[0];
    } else if (enctype == RDB_ENC_INT16) {
        uint16_t v;
        if (rioRead(rdb,enc,2) == 0) return NULL;
        v = ((uint32_t)enc[0])|
            ((uint32_t)enc[1]<<8);
        val = (int16_t)v;
    } else if (enctype == RDB_ENC_INT32) {
        uint32_t v;
        if (rioRead(rdb,enc,4) == 0) return NULL;
        v = ((uint32_t)enc[0])|
            ((uint32_t)enc[1]<<8)|
            ((uint32_t)enc[2]<<16)|
            ((uint32_t)enc[3]<<24);
        val = (int32_t)v;
    } else {
        rdbReportCorruptRDB("Unknown RDB integer encoding type %d",enctype);
        return NULL; /* Never reached. */
    }
    if (plain || sds) {
        char buf[LONG_STR_SIZE], *p;
        int len = ll2string(buf,sizeof(buf),val);
        if (lenptr) *lenptr = len;
        p = plain ? zmalloc(len) : sdsnewlen(SDS_NOINIT,len);
        memcpy(p,buf,len);
        return p;
    } else if (encode) {
        return createStringObjectFromLongLongForValue(val);
    } else {
        return createStringObjectFromLongLongWithSds(val);
    }
}

/* String objects in the form "2391" "-100" without any space and with a
 * range of values that can fit in an 8, 16 or 32 bit signed value can be
 * encoded as integers to save space */
int rdbTryIntegerEncoding(char *s, size_t len, unsigned char *enc) {
    long long value;
    if (string2ll(s, len, &value)) {
        return rdbEncodeInteger(value, enc);
    } else {
        return 0;
    }
}
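
/* Sketch of what the helper above accepts: only strings that are the exact
 * canonical form of an in-range integer take this path, e.g. "2391" and
 * "-100" encode, while strings like "007", "12a" or " 42" are assumed to be
 * rejected by string2ll() and fall through to the plain string/LZF paths. */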

ssize_t rdbSaveLzfBlob(rio *rdb, void *data, size_t compress_len,
                       size_t original_len) {
    unsigned char byte;
    ssize_t n, nwritten = 0;

    /* Data compressed! Let's save it on disk */
    byte = (RDB_ENCVAL<<6)|RDB_ENC_LZF;
    if ((n = rdbWriteRaw(rdb,&byte,1)) == -1) goto writeerr;
    nwritten += n;

    if ((n = rdbSaveLen(rdb,compress_len)) == -1) goto writeerr;
    nwritten += n;

    if ((n = rdbSaveLen(rdb,original_len)) == -1) goto writeerr;
    nwritten += n;

    if ((n = rdbWriteRaw(rdb,data,compress_len)) == -1) goto writeerr;
    nwritten += n;

    return nwritten;

writeerr:
    return -1;
}

ssize_t rdbSaveLzfStringObject(rio *rdb, unsigned char *s, size_t len) {
    size_t comprlen, outlen;
    void *out;

    /* We require at least four bytes compression for this to be worth it */
    if (len <= 4) return 0;
    outlen = len-4;
    if ((out = zmalloc(outlen+1)) == NULL) return 0;
    comprlen = lzf_compress(s, len, out, outlen);
    if (comprlen == 0) {
        zfree(out);
        return 0;
    }
    ssize_t nwritten = rdbSaveLzfBlob(rdb, out, comprlen, len);
    zfree(out);
    return nwritten;
}
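
/* Sketch of the compression decision above: strings of 4 bytes or less are
 * never compressed, and for longer strings the output buffer is capped at
 * len-4 bytes, so unless LZF saves at least 4 bytes the compression does not
 * fit and the caller falls back to storing the string verbatim. On success
 * the blob written to disk is:
 *   [RDB_ENC_LZF tag][compressed len][original len][compressed bytes] */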

/* Load an LZF compressed string in RDB format. The returned value
 * changes according to 'flags'. For more info check the
 * rdbGenericLoadStringObject() function. */
void *rdbLoadLzfStringObject(rio *rdb, int flags, size_t *lenptr) {
    int plain = flags & RDB_LOAD_PLAIN;
    int sds = flags & RDB_LOAD_SDS;
    uint64_t len, clen;
    unsigned char *c = NULL;
    char *val = NULL;

    if ((clen = rdbLoadLen(rdb,NULL)) == RDB_LENERR) return NULL;
    if ((len = rdbLoadLen(rdb,NULL)) == RDB_LENERR) return NULL;
    if ((c = ztrymalloc(clen)) == NULL) {
        serverLog(isRestoreContext()? LL_VERBOSE: LL_WARNING, "rdbLoadLzfStringObject failed allocating %llu bytes", (unsigned long long)clen);
        goto err;
    }

    /* Allocate our target according to the uncompressed size. */
    if (plain) {
        val = ztrymalloc(len);
    } else {
        val = sdstrynewlen(SDS_NOINIT,len);
    }
    if (!val) {
        serverLog(isRestoreContext()? LL_VERBOSE: LL_WARNING, "rdbLoadLzfStringObject failed allocating %llu bytes", (unsigned long long)len);
        goto err;
    }
    if (lenptr) *lenptr = len;

    /* Load the compressed representation and uncompress it to target. */
    if (rioRead(rdb,c,clen) == 0) goto err;
    if (lzf_decompress(c,clen,val,len) != len) {
        rdbReportCorruptRDB("Invalid LZF compressed string");
        goto err;
    }
    zfree(c);

    if (plain || sds) {
        return val;
    } else {
        return createObject(OBJ_STRING,val);
    }

err:
    zfree(c);
    if (plain)
        zfree(val);
    else
        sdsfree(val);
    return NULL;
}

/* Save a string object as [len][data] on disk. If the object is a string
 * representation of an integer value we try to save it in a special form */
ssize_t rdbSaveRawString(rio *rdb, unsigned char *s, size_t len) {
    int enclen;
    ssize_t n, nwritten = 0;

    /* Try integer encoding */
    if (len <= 11) {
        unsigned char buf[5];
        if ((enclen = rdbTryIntegerEncoding((char*)s,len,buf)) > 0) {
            if (rdbWriteRaw(rdb,buf,enclen) == -1) return -1;
            return enclen;
        }
    }

    /* Try LZF compression - under 20 bytes it's unable to compress even
     * aaaaaaaaaaaaaaaaaa so skip it */
    if (server.rdb_compression && len > 20) {
        n = rdbSaveLzfStringObject(rdb,s,len);
        if (n == -1) return -1;
        if (n > 0) return n;
        /* Return value of 0 means data can't be compressed, save the old way */
    }

    /* Store verbatim */
    if ((n = rdbSaveLen(rdb,len)) == -1) return -1;
    nwritten += n;
    if (len > 0) {
        if (rdbWriteRaw(rdb,s,len) == -1) return -1;
        nwritten += len;
    }
    return nwritten;
}
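
/* Sketch of the on-disk forms a raw string ends up in, depending on which
 * branch above wins:
 *
 *   short integer-looking string ("123")        -> integer encoding, 2-5 bytes
 *   21+ byte string, server.rdb_compression on  -> LZF blob (if it compresses)
 *   anything else                               -> [length][raw bytes] verbatim
 */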

/* Save a long long value as either an encoded string or a string. */
ssize_t rdbSaveLongLongAsStringObject(rio *rdb, long long value) {
    unsigned char buf[32];
    ssize_t n, nwritten = 0;
    int enclen = rdbEncodeInteger(value,buf);
    if (enclen > 0) {
        return rdbWriteRaw(rdb,buf,enclen);
    } else {
        /* Encode as string */
        enclen = ll2string((char*)buf,32,value);
        serverAssert(enclen < 32);
        if ((n = rdbSaveLen(rdb,enclen)) == -1) return -1;
        nwritten += n;
        if ((n = rdbWriteRaw(rdb,buf,enclen)) == -1) return -1;
        nwritten += n;
    }
    return nwritten;
}

/* Like rdbSaveRawString(), but gets an Object instead of a raw buffer. */
ssize_t rdbSaveStringObject(rio *rdb, robj *obj) {
    /* Avoid to decode the object, then encode it again, if the
     * object is already integer encoded. */
    if (obj->encoding == OBJ_ENCODING_INT) {
        return rdbSaveLongLongAsStringObject(rdb,(long)obj->ptr);
    } else {
        serverAssertWithInfo(NULL,obj,sdsEncodedObject(obj));
        return rdbSaveRawString(rdb,obj->ptr,sdslen(obj->ptr));
    }
}

/* Load a string object from an RDB file according to flags:
 *
 * RDB_LOAD_NONE (no flags): load an RDB object, unencoded.
 * RDB_LOAD_ENC: If the returned type is an Object, try to
 *               encode it in a special way to be more memory
 *               efficient. When this flag is passed the function
 *               no longer guarantees that obj->ptr is an SDS string.
 * RDB_LOAD_PLAIN: Return a plain string allocated with zmalloc()
 *                 instead of an Object with an sds in it.
 * RDB_LOAD_SDS: Return an SDS string instead of an Object.
 *
 * On I/O error NULL is returned.
 */
void *rdbGenericLoadStringObject(rio *rdb, int flags, size_t *lenptr) {
    int plain = flags & RDB_LOAD_PLAIN;
    int sds = flags & RDB_LOAD_SDS;
    int isencoded;
    unsigned long long len;

    len = rdbLoadLen(rdb,&isencoded);
    if (len == RDB_LENERR) return NULL;

    if (isencoded) {
        switch(len) {
        case RDB_ENC_INT8:
        case RDB_ENC_INT16:
        case RDB_ENC_INT32:
            return rdbLoadIntegerObject(rdb,len,flags,lenptr);
        case RDB_ENC_LZF:
            return rdbLoadLzfStringObject(rdb,flags,lenptr);
        default:
            rdbReportCorruptRDB("Unknown RDB string encoding type %llu",len);
            return NULL;
        }
    }

    if (plain || sds) {
        void *buf = plain ? ztrymalloc(len) : sdstrynewlen(SDS_NOINIT,len);
        if (!buf) {
            serverLog(isRestoreContext()? LL_VERBOSE: LL_WARNING, "rdbGenericLoadStringObject failed allocating %llu bytes", len);
            return NULL;
        }
        if (lenptr) *lenptr = len;
        if (len && rioRead(rdb,buf,len) == 0) {
            if (plain)
                zfree(buf);
            else
                sdsfree(buf);
            return NULL;
        }
        return buf;
    } else {
        robj *o = tryCreateStringObject(SDS_NOINIT,len);
        if (!o) {
            serverLog(isRestoreContext() ? LL_VERBOSE : LL_WARNING, "rdbGenericLoadStringObject failed allocating %llu bytes", len);
2021-08-05 22:56:14 +03:00
            return NULL;
        }
2014-12-23 19:26:34 +01:00
        if (len && rioRead(rdb,o->ptr,len) == 0) {
            decrRefCount(o);
            return NULL;
        }
        return o;
2011-05-13 23:24:19 +02:00
}
}
robj *rdbLoadStringObject(rio *rdb) {
2016-05-18 11:45:40 +02:00
    return rdbGenericLoadStringObject(rdb,RDB_LOAD_NONE,NULL);
2011-05-13 23:24:19 +02:00
}
robj *rdbLoadEncodedStringObject(rio *rdb) {
2016-05-18 11:45:40 +02:00
    return rdbGenericLoadStringObject(rdb,RDB_LOAD_ENC,NULL);
2011-05-13 23:24:19 +02:00
}
2010-06-22 00:07:48 +02:00
/* Save a double value. Doubles are saved as strings prefixed by an unsigned
2013-01-17 01:00:20 +08:00
 * 8 bit integer specifying the length of the representation.
2010-06-22 00:07:48 +02:00
 * This 8 bit integer has special values in order to specify the following
 * conditions:
 * 253: not a number
 * 254: +inf
 * 255: -inf
 */
2023-08-16 15:38:59 +08:00
ssize_t rdbSaveDoubleValue(rio *rdb, double val) {
2010-06-22 00:07:48 +02:00
    unsigned char buf[128];
    int len;
    if (isnan(val)) {
        buf[0] = 253;
        len = 1;
    } else if (!isfinite(val)) {
        len = 1;
        buf[0] = (val < 0) ? 255 : 254;
    } else {
Optimize integer zset scores in listpack (converting to string and back) (#10486)
When the score doesn't have fractional part, and can be stored as an integer,
we use the integer capabilities of listpack to store it, rather than convert it to string.
This already existed before this PR (lpInsert does that conversion implicitly).
But to do that, we would have first converted the score from double to string (calling `d2string`),
then pass the string to `lpAppend` which identified it as being an integer and convert it back to an int.
Now, instead of converting it to a string, we store it using `lpAppendInteger`.
Unrelated:
---
* Fix the double2ll range check (the negative and positive ranges, and also the comparison operands,
were slightly off; the range could also be made much larger, see comment).
* Unify the double to string conversion code in rdb.c with the one in util.c
* Small optimization in lpStringToInt64, don't attempt to convert strings that are obviously too long.
Benchmark:
---
Up to 20% improvement in certain tight loops doing zzlInsert with large integers.
(if listpack is pre-allocated to avoid realloc, and insertion is sorted from largest to smaller)
2022-04-17 17:16:46 +03:00
        long long lvalue;
        /* Integer printing function is much faster, check if we can safely use it. */
        if (double2ll(val, &lvalue))
            ll2string((char*)buf+1,sizeof(buf)-1,lvalue);
2022-10-15 10:17:41 +01:00
        else {
            const int dlen = fpconv_dtoa(val, (char*)buf+1);
            buf[dlen+1] = '\0';
        }
2010-06-22 00:07:48 +02:00
        buf[0] = strlen((char*)buf+1);
        len = buf[0]+1;
    }
2011-05-13 17:31:00 +02:00
    return rdbWriteRaw(rdb,buf,len);
2010-06-22 00:07:48 +02:00
}
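/* A minimal standalone sketch (added for illustration; double_fits_ll is a
 * hypothetical helper, not the real double2ll from util.c) of the "integer
 * fast path" idea used above: only take the integer branch when the double
 * converts to a long long exactly and within a safe range. */
static int double_fits_ll(double d, long long *out) {
    /* 2^53 is the largest range in which every integer is exactly
     * representable as a double, so the exactness check below cannot be
     * fooled by rounding. */
    if (d < -9007199254740992.0 || d > 9007199254740992.0) return 0;
    long long ll = (long long)d;
    if ((double)ll != d) return 0; /* has a fractional part */
    *out = ll;
    return 1;
}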
2011-05-13 23:24:19 +02:00
/* For information about double serialization check rdbSaveDoubleValue() */
int rdbLoadDoubleValue(rio *rdb, double *val) {
2014-05-12 11:35:10 +02:00
    char buf[256];
2011-05-13 23:24:19 +02:00
    unsigned char len;
    if (rioRead(rdb,&len,1) == 0) return -1;
    switch(len) {
    case 255: *val = R_NegInf; return 0;
    case 254: *val = R_PosInf; return 0;
    case 253: *val = R_Nan; return 0;
    default:
        if (rioRead(rdb,buf,len) == 0) return -1;
        buf[len] = '\0';
2020-08-14 16:05:34 +03:00
        if (sscanf(buf, "%lg", val) != 1) return -1;
2011-05-13 23:24:19 +02:00
        return 0;
    }
}
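/* Worked example of the encoding handled by the two functions above (added
 * for illustration): the value 3.5 is written as the length byte 0x03
 * followed by the characters "3.5", while nan, +inf and -inf are written as
 * the single bytes 253, 254 and 255 with no payload. */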
2016-06-01 11:55:47 +02:00
/* Saves a double for RDB 8 or greater, where IEEE 754 binary64 format is assumed.
 * We just make sure the 64 bit value is always stored in little endian, otherwise
2016-05-18 11:45:40 +02:00
 * the value is copied verbatim from memory to disk.
 *
 * Return -1 on error, the size of the serialized value on success. */
2016-06-01 11:55:47 +02:00
int rdbSaveBinaryDoubleValue(rio *rdb, double val) {
    memrev64ifbe(&val);
2016-10-03 00:08:35 +02:00
    return rdbWriteRaw(rdb,&val,sizeof(val));
2016-06-01 11:55:47 +02:00
}
/* Loads a double from RDB 8 or greater. See rdbSaveBinaryDoubleValue() for
2016-05-18 11:45:40 +02:00
 * more info. On error -1 is returned, otherwise 0. */
2016-06-01 11:55:47 +02:00
int rdbLoadBinaryDoubleValue(rio *rdb, double *val) {
2016-10-03 00:08:35 +02:00
    if (rioRead(rdb,val,sizeof(*val)) == 0) return -1;
2016-06-01 11:55:47 +02:00
    memrev64ifbe(val);
    return 0;
}
2016-10-03 00:08:35 +02:00
/* Like rdbSaveBinaryDoubleValue() but single precision. */
int rdbSaveBinaryFloatValue(rio *rdb, float val) {
    memrev32ifbe(&val);
    return rdbWriteRaw(rdb,&val,sizeof(val));
}
/* Like rdbLoadBinaryDoubleValue() but single precision. */
int rdbLoadBinaryFloatValue(rio *rdb, float *val) {
    if (rioRead(rdb,val,sizeof(*val)) == 0) return -1;
    memrev32ifbe(val);
    return 0;
}
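/* A standalone sketch (added for illustration, hypothetical name) of the
 * in-place byte swapping that memrev64ifbe()/memrev32ifbe() perform on big
 * endian hosts, so the on-disk representation is always little endian. */
static void rev_bytes(void *p, size_t len) {
    unsigned char *b = p;
    for (size_t i = 0; i < len/2; i++) {
        unsigned char t = b[i];
        b[i] = b[len-1-i];
        b[len-1-i] = t;
    }
}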
2011-05-13 23:24:19 +02:00
/* Save the object type of object "o". */
int rdbSaveObjectType(rio *rdb, robj *o) {
    switch (o->type) {
2015-07-26 15:28:00 +02:00
    case OBJ_STRING:
2015-07-27 09:41:48 +02:00
        return rdbSaveType(rdb,RDB_TYPE_STRING);
2015-07-26 15:28:00 +02:00
    case OBJ_LIST:
Add listpack encoding for list (#11303)
Improve memory efficiency of list keys
## Description of the feature
The new listpack encoding uses the old `list-max-listpack-size` config
to perform the conversion, which we can think of as a node inside a
quicklist, but without 80 bytes overhead (internal fragmentation included)
of quicklist and quicklistNode structs.
For example, a list key with 5 items of 10 chars each, now takes 128 bytes
instead of 208 it used to take.
## Conversion rules
* Convert listpack to quicklist
When the listpack length or size reaches the `list-max-listpack-size` limit,
it will be converted to a quicklist.
* Convert quicklist to listpack
When a quicklist has only one node, and its length or size is reduced to half
of the `list-max-listpack-size` limit, it will be converted to a listpack.
This is done to avoid frequent conversions when we add or remove at the bounding size or length.
## Interface changes
1. add list entry param to listTypeSetIteratorDirection
When list encoding is listpack, `listTypeIterator->lpi` points to the next entry of current entry,
so when changing the direction, we need to use the current node (listTypeEntry->p) to
update `listTypeIterator->lpi` to the next node in the reverse direction.
## Benchmark
### Listpack VS Quicklist with one node
* LPUSH - roughly 0.3% improvement
* LRANGE - roughly 13% improvement
### Both are quicklist
* LRANGE - roughly 3% improvement
* LRANGE without pipeline - roughly 3% improvement
As we can see from the benchmark results:
1. When list is quicklist encoding, LRANGE improves performance by <5%.
2. When list is listpack encoding, LRANGE improves performance by ~13%,
the main enhancement is brought by `addListListpackRangeReply()`.
## Memory usage
1M lists(key:0~key:1000000) with 5 items of 10 chars ("hellohello") each.
shows memory usage down by 35.49%, from 214MB to 138MB.
## Note
1. Add conversion callback to support doing some work before conversion
Since the quicklist iterator decompresses the current node when it is released, we can
no longer decompress the quicklist after we convert the list.
2022-11-17 02:29:46 +08:00
        if (o->encoding == OBJ_ENCODING_QUICKLIST || o->encoding == OBJ_ENCODING_LISTPACK)
2021-11-03 20:47:18 +02:00
            return rdbSaveType(rdb,RDB_TYPE_LIST_QUICKLIST_2);
2011-05-13 23:24:19 +02:00
        else
2015-07-27 09:41:48 +02:00
            serverPanic("Unknown list encoding");
2015-07-26 15:28:00 +02:00
    case OBJ_SET:
        if (o->encoding == OBJ_ENCODING_INTSET)
2015-07-27 09:41:48 +02:00
            return rdbSaveType(rdb,RDB_TYPE_SET_INTSET);
2015-07-26 15:28:00 +02:00
        else if (o->encoding == OBJ_ENCODING_HT)
2015-07-27 09:41:48 +02:00
            return rdbSaveType(rdb,RDB_TYPE_SET);
2022-11-09 18:50:07 +01:00
        else if (o->encoding == OBJ_ENCODING_LISTPACK)
            return rdbSaveType(rdb,RDB_TYPE_SET_LISTPACK);
2011-05-13 23:24:19 +02:00
        else
2015-07-27 09:41:48 +02:00
            serverPanic("Unknown set encoding");
2015-07-26 15:28:00 +02:00
    case OBJ_ZSET:
2021-09-09 23:18:53 +08:00
        if (o->encoding == OBJ_ENCODING_LISTPACK)
            return rdbSaveType(rdb,RDB_TYPE_ZSET_LISTPACK);
2015-07-26 15:28:00 +02:00
        else if (o->encoding == OBJ_ENCODING_SKIPLIST)
2016-06-01 11:55:47 +02:00
            return rdbSaveType(rdb,RDB_TYPE_ZSET_2);
2011-05-13 23:24:19 +02:00
        else
2015-07-27 09:41:48 +02:00
            serverPanic("Unknown sorted set encoding");
2015-07-26 15:28:00 +02:00
    case OBJ_HASH:
2021-08-10 14:18:49 +08:00
        if (o->encoding == OBJ_ENCODING_LISTPACK)
            return rdbSaveType(rdb,RDB_TYPE_HASH_LISTPACK);
2015-07-26 15:28:00 +02:00
        else if (o->encoding == OBJ_ENCODING_HT)
2015-07-27 09:41:48 +02:00
            return rdbSaveType(rdb,RDB_TYPE_HASH);
2011-05-13 23:24:19 +02:00
        else
2015-07-27 09:41:48 +02:00
            serverPanic("Unknown hash encoding");
2017-09-05 13:14:13 +02:00
    case OBJ_STREAM:
Stream consumers: Re-purpose seen-time, add active-time (#11099)
1. "Fixed" the current code so that seen-time/idle actually refers to interaction
attempts (as documented; breaking change)
2. Added active-time/inactive to refer to successful interaction (what
seen-time/idle used to be)
At first, I tried to avoid changing the behavior of seen-time/idle but then realized
that, in this case, the odds are that people read the docs and implemented their
code based on the docs (which didn't match the behavior).
For the most part, that would work fine, except that issue #9996 was found.
I was working under the assumption that people relied on the docs, and for
the most part, it could have worked well enough. so instead of fixing the docs,
as I would usually do, I fixed the code to match the docs in this particular case.
Note that, in case the consumer has never read any entries, the values
for both "active-time" (XINFO FULL) and "inactive" (XINFO CONSUMERS) will
be -1, meaning here that the consumer was never active.
Note that seen/active time is only affected by XREADGROUP / X[AUTO]CLAIM, not
by XPENDING, XINFO, and other "read-only" stream CG commands (always has been,
even before this PR)
Other changes:
* Another behavioral change (arguably a bugfix) is that XREADGROUP and X[AUTO]CLAIM
create the consumer regardless of whether it was able to perform some reading/claiming
* RDB format change to save the `active_time`, and set it to the same value of `seen_time` in old rdb files.
2022-11-30 17:51:31 +05:30
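        /* Note (added for clarity): RDB_TYPE_STREAM_LISTPACKS_3 is the stream
         * revision that also persists the per-consumer active_time described
         * in the message above; per that message, older stream types set
         * active_time to the same value as seen_time at load time. */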
        return rdbSaveType(rdb,RDB_TYPE_STREAM_LISTPACKS_3);
2016-05-18 11:45:40 +02:00
    case OBJ_MODULE:
RDB modules values serialization format version 2.
The original RDB serialization format was not parsable without the
module loaded, because the structure was managed only by the module
itself. Moreover RDB is a streaming protocol in the sense that it is
both produced in an append-only fashion, and is also sometimes directly
sent to the socket (in the case of diskless replication).
The fact that modules values cannot be parsed without the relevant
module loaded is a problem in many ways: RDB checking tools must have
loaded modules even for doing things not involving the value at all,
like splitting an RDB into N RDBs by key or alike, or just checking the
RDB for sanity.
In theory module values could be just a blob of data with a prefixed
length in order for us to be able to skip it. However prefixing the values
with a length would mean one of the following:
1. To be able to write some data at a previous offset. This breaks
streaming.
2. To bufferize values before outputting them. This breaks performances.
3. To have some chunked RDB output format. This breaks simplicity.
Moreover, the above solution, still makes module values a totally opaque
matter, with the following problems:
1. The RDB check tool can just skip the value without being able to at
least check the general structure. For datasets composed mostly of
modules values this means to just check the outer level of the RDB not
actually doing any check on most of the data itself.
2. It is not possible to do any recovering or processing of data for which a
module no longer exists in the future, or is unknown.
So this commit implements a different solution. The modules RDB
serialization API is composed of well defined calls to store integers,
floats, doubles or strings. After this commit, the parts generated by
the module API have a one-byte prefix for each of the above emitted
parts, and there is a final EOF byte as well. So even if we don't know
exactly how to interpret a module value, we can always parse it at a
high level, check the overall structure, understand the types used to
store the information, and easily skip the whole value.
The change is backward compatible: older RDB files can be still loaded
since the new encoding has a new RDB type: MODULE_2 (of value 7).
The commit also implements the ability to check RDB files for sanity
taking advantage of the new feature.
2017-06-27 13:09:33 +02:00
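        /* Framing illustration (added for clarity, based on the message
         * above): a MODULE_2 value is written as the module type id, then a
         * sequence of opcode-prefixed parts (integer, double, string, ...)
         * emitted through the module RDB API, terminated by an EOF opcode,
         * so the value can be skipped or structurally checked even when the
         * module is not loaded. */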
        return rdbSaveType(rdb,RDB_TYPE_MODULE_2);
2011-05-13 23:24:19 +02:00
    default:
2015-07-27 09:41:48 +02:00
        serverPanic("Unknown object type");
2011-05-13 23:24:19 +02:00
    }
    return -1; /* avoid warning */
}
2012-06-02 10:21:57 +02:00
/* Use rdbLoadType() to load a TYPE in RDB format, but returns -1 if the
 * type is not specifically a valid Object Type. */
2011-05-13 23:24:19 +02:00
int rdbLoadObjectType(rio *rdb) {
    int type;
    if ((type = rdbLoadType(rdb)) == -1) return -1;
    if (!rdbIsObjectType(type)) return -1;
    return type;
2010-06-22 00:07:48 +02:00
}
2018-01-31 12:05:04 +01:00
/* This helper function serializes a consumer group Pending Entries List (PEL)
 * into the RDB file. The 'nacks' argument tells the function whether to also persist
2021-06-10 20:39:33 +08:00
 * the information about the not acknowledged messages, or to persist
2018-01-31 12:05:04 +01:00
 * just the IDs: this is useful because for the global consumer group PEL
 * we serialize the NACKs as well, but when serializing the local consumer
 * PELs we just add the ID, which will be resolved inside the global PEL to
 * put a reference to the same structure. */
ssize_t rdbSaveStreamPEL(rio *rdb, rax *pel, int nacks) {
    ssize_t n, nwritten = 0;
    /* Number of entries in the PEL. */
    if ((n = rdbSaveLen(rdb,raxSize(pel))) == -1) return -1;
    nwritten += n;
    /* Save each entry. */
    raxIterator ri;
    raxStart(&ri,pel);
    raxSeek(&ri,"^",NULL,0);
    while(raxNext(&ri)) {
        /* We store IDs in raw form as 128 bit big endian numbers, like
         * they are inside the radix tree key. */
2020-07-21 01:13:05 -04:00
        if ((n = rdbWriteRaw(rdb,ri.key,sizeof(streamID))) == -1) {
            raxStop(&ri);
            return -1;
        }
2018-01-31 12:05:04 +01:00
        nwritten += n;
        if (nacks) {
            streamNACK *nack = ri.data;
2020-07-21 01:13:05 -04:00
            if ((n = rdbSaveMillisecondTime(rdb,nack->delivery_time)) == -1) {
                raxStop(&ri);
2018-01-31 12:05:04 +01:00
                return -1;
2020-07-21 01:13:05 -04:00
            }
2018-01-31 12:05:04 +01:00
            nwritten += n;
2020-07-21 01:13:05 -04:00
            if ((n = rdbSaveLen(rdb,nack->delivery_count)) == -1) {
                raxStop(&ri);
                return -1;
            }
2018-01-31 12:05:04 +01:00
            nwritten += n;
            /* We don't save the consumer name: we'll save the pending IDs
             * for each consumer in the consumer PEL, and resolve the consumer
             * at loading time. */
        }
    }
    raxStop(&ri);
    return nwritten;
}
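/* On-disk layout written by the loop above (added as an illustration):
 * per pending entry, a 16 byte big endian stream ID, and only when
 * nacks != 0 also the delivery time (millisecond time) and the delivery
 * count (length encoded). */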
2018-01-31 17:06:32 +01:00
/* Serialize the consumers of a stream consumer group into the RDB. Helper
 * function for the stream data type serialization. What we do here is to
 * persist the consumer metadata, and its PEL, for each consumer. */
size_t rdbSaveStreamConsumers(rio *rdb, streamCG *cg) {
    ssize_t n, nwritten = 0;
    /* Number of consumers in this consumer group. */
    if ((n = rdbSaveLen(rdb,raxSize(cg->consumers))) == -1) return -1;
    nwritten += n;
    /* Save each consumer. */
    raxIterator ri;
    raxStart(&ri,cg->consumers);
    raxSeek(&ri,"^",NULL,0);
    while(raxNext(&ri)) {
        streamConsumer *consumer = ri.data;
        /* Consumer name. */
2020-07-21 01:13:05 -04:00
        if ((n = rdbSaveRawString(rdb,ri.key,ri.key_len)) == -1) {
            raxStop(&ri);
            return -1;
        }
2018-01-31 17:06:32 +01:00
        nwritten += n;
2022-11-30 17:51:31 +05:30
        /* Seen time. */
2020-07-21 01:13:05 -04:00
        if ((n = rdbSaveMillisecondTime(rdb,consumer->seen_time)) == -1) {
            raxStop(&ri);
2018-01-31 17:06:32 +01:00
            return -1;
2020-07-21 01:13:05 -04:00
        }
2018-01-31 17:06:32 +01:00
        nwritten += n;
2022-11-30 17:51:31 +05:30
        /* Active time. */
        if ((n = rdbSaveMillisecondTime(rdb,consumer->active_time)) == -1) {
            raxStop(&ri);
            return -1;
        }
        nwritten += n;
2018-01-31 17:06:32 +01:00
        /* Consumer PEL, without the ACKs (see last parameter of the function
         * passed with value of 0), at loading time we'll lookup the ID
         * in the consumer group global PEL and will put a reference in the
         * consumer local PEL. */
2020-07-21 01:13:05 -04:00
        if ((n = rdbSaveStreamPEL(rdb,consumer->pel,0)) == -1) {
            raxStop(&ri);
2018-01-31 17:06:32 +01:00
            return -1;
2020-07-21 01:13:05 -04:00
        }
2018-01-31 17:06:32 +01:00
        nwritten += n;
    }
    raxStop(&ri);
    return nwritten;
}
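/* Per consumer, the loop above writes (added as an illustration): the
 * consumer name as a raw string, the seen time and the active time as
 * millisecond times, and finally the consumer local PEL with nacks set
 * to 0 (IDs only). */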
2024-04-09 01:24:03 -07:00
/* Save an Object.
2018-01-31 12:05:04 +01:00
* Returns - 1 on error , number of bytes written on success . */
2021-06-16 14:45:49 +08:00
ssize_t rdbSaveObject(rio *rdb, robj *o, robj *key, int dbid) {
2015-01-18 15:54:30 -05:00
    ssize_t n = 0, nwritten = 0;
2010-11-21 15:39:34 +01:00
2015-07-26 15:28:00 +02:00
if ( o - > type = = OBJ_STRING ) {
2010-06-22 00:07:48 +02:00
/* Save a string value */
2011-05-13 17:31:00 +02:00
if ( ( n = rdbSaveStringObject ( rdb , o ) ) = = - 1 ) return - 1 ;
2010-11-21 15:39:34 +01:00
nwritten + = n ;
2015-07-26 15:28:00 +02:00
} else if ( o - > type = = OBJ_LIST ) {
2010-06-22 00:07:48 +02:00
/* Save a list value */
2015-07-26 15:28:00 +02:00
if ( o - > encoding = = OBJ_ENCODING_QUICKLIST ) {
2014-12-10 13:53:12 -05:00
quicklist * ql = o - > ptr ;
quicklistNode * node = ql - > head ;
2010-06-22 00:07:48 +02:00
2014-12-10 13:53:12 -05:00
if ( ( n = rdbSaveLen ( rdb , ql - > len ) ) = = - 1 ) return - 1 ;
2010-11-21 15:39:34 +01:00
nwritten + = n ;
Fix saving of zero-length lists.
Normally in modern Redis you can't create zero-len lists, however it's
possible to load them from old RDB files generated, for instance, using
Redis 2.8 (see issue #4409). The "Right Thing" would be not loading such
lists at all, but this requires to hook in rdb.c random places in a not
great way, for a problem that is at this point, at best, minor.
Here in this commit instead I just fix the fact that zero length lists,
materialized as quicklists with the first node set to NULL, were
iterated in the wrong way while they are saved, leading to a crash.
The other parts of the list implementation are apparently able to deal
with empty lists correctly, even if they are no longer a thing.
2017-11-06 12:33:42 +01:00
while ( node ) {
2021-11-03 20:47:18 +02:00
if ( ( n = rdbSaveLen ( rdb , node - > container ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
2014-12-10 21:26:31 -05:00
if ( quicklistNodeIsCompressed ( node ) ) {
void * data ;
size_t compress_len = quicklistGetLzf ( node , & data ) ;
if ( ( n = rdbSaveLzfBlob ( rdb , data , compress_len , node - > sz ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
} else {
2021-11-03 20:47:18 +02:00
if ( ( n = rdbSaveRawString ( rdb , node - > entry , node - > sz ) ) = = - 1 ) return - 1 ;
2014-12-10 21:26:31 -05:00
nwritten + = n ;
}
2017-11-06 12:33:42 +01:00
node = node - > next ;
}
2022-11-17 02:29:46 +08:00
        } else if (o->encoding == OBJ_ENCODING_LISTPACK) {
            unsigned char *lp = o->ptr;
            /* Save list listpack as a fake quicklist that only has a single node. */
            if ((n = rdbSaveLen(rdb,1)) == -1) return -1;
            nwritten += n;
            if ((n = rdbSaveLen(rdb,QUICKLIST_NODE_CONTAINER_PACKED)) == -1) return -1;
            nwritten += n;
            if ((n = rdbSaveRawString(rdb,lp,lpBytes(lp))) == -1) return -1;
            nwritten += n;
2010-06-22 00:07:48 +02:00
        } else {
2015-07-27 09:41:48 +02:00
            serverPanic("Unknown list encoding");
2010-06-22 00:07:48 +02:00
        }
2015-07-26 15:28:00 +02:00
} else if ( o - > type = = OBJ_SET ) {
2010-06-22 00:07:48 +02:00
/* Save a set value */
2015-07-26 15:28:00 +02:00
if ( o - > encoding = = OBJ_ENCODING_HT ) {
2010-07-02 19:57:12 +02:00
dict * set = o - > ptr ;
dictIterator * di = dictGetIterator ( set ) ;
dictEntry * de ;
2010-06-22 00:07:48 +02:00
2018-05-09 12:06:37 +02:00
if ( ( n = rdbSaveLen ( rdb , dictSize ( set ) ) ) = = - 1 ) {
dictReleaseIterator ( di ) ;
return - 1 ;
}
2010-11-21 15:39:34 +01:00
nwritten + = n ;
2010-07-02 19:57:12 +02:00
while ( ( de = dictNext ( di ) ) ! = NULL ) {
2015-07-31 18:01:23 +02:00
sds ele = dictGetKey ( de ) ;
if ( ( n = rdbSaveRawString ( rdb , ( unsigned char * ) ele , sdslen ( ele ) ) )
2018-05-09 11:03:27 +02:00
= = - 1 )
{
dictReleaseIterator ( di ) ;
return - 1 ;
}
2010-11-21 15:39:34 +01:00
nwritten + = n ;
2010-07-02 19:57:12 +02:00
}
dictReleaseIterator ( di ) ;
2015-07-26 15:28:00 +02:00
} else if ( o - > encoding = = OBJ_ENCODING_INTSET ) {
2011-02-28 17:53:47 +01:00
size_t l = intsetBlobLen ( ( intset * ) o - > ptr ) ;
2010-07-02 19:57:12 +02:00
2011-05-13 17:31:00 +02:00
if ( ( n = rdbSaveRawString ( rdb , o - > ptr , l ) ) = = - 1 ) return - 1 ;
2010-11-21 15:39:34 +01:00
nwritten + = n ;
2022-11-09 18:50:07 +01:00
} else if ( o - > encoding = = OBJ_ENCODING_LISTPACK ) {
size_t l = lpBytes ( ( unsigned char * ) o - > ptr ) ;
if ( ( n = rdbSaveRawString ( rdb , o - > ptr , l ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
2010-07-02 19:57:12 +02:00
        } else {
2015-07-27 09:41:48 +02:00
            serverPanic("Unknown set encoding");
2010-06-22 00:07:48 +02:00
        }
2015-07-26 15:28:00 +02:00
} else if ( o - > type = = OBJ_ZSET ) {
2011-03-09 13:16:38 +01:00
/* Save a sorted set value */
2021-09-09 23:18:53 +08:00
if ( o - > encoding = = OBJ_ENCODING_LISTPACK ) {
size_t l = lpBytes ( ( unsigned char * ) o - > ptr ) ;
2010-06-22 00:07:48 +02:00
2011-05-13 17:31:00 +02:00
if ( ( n = rdbSaveRawString ( rdb , o - > ptr , l ) ) = = - 1 ) return - 1 ;
2010-11-21 15:39:34 +01:00
nwritten + = n ;
2015-07-26 15:28:00 +02:00
} else if ( o - > encoding = = OBJ_ENCODING_SKIPLIST ) {
2011-03-09 13:16:38 +01:00
zset * zs = o - > ptr ;
2017-03-31 21:45:00 +08:00
zskiplist * zsl = zs - > zsl ;
2011-03-09 13:16:38 +01:00
2017-03-31 21:45:00 +08:00
if ( ( n = rdbSaveLen ( rdb , zsl - > length ) ) = = - 1 ) return - 1 ;
2010-11-21 15:39:34 +01:00
nwritten + = n ;
2011-03-09 13:16:38 +01:00
2017-04-18 11:01:47 +02:00
/* We save the skiplist elements from the greatest to the smallest
* ( that ' s trivial since the elements are already ordered in the
* skiplist ) : this improves the load process , since the next loaded
* element will always be the smaller , so adding to the skiplist
* will always immediately stop at the head , making the insertion
* O ( 1 ) instead of O ( log ( N ) ) . */
2017-03-31 21:45:00 +08:00
zskiplistNode * zn = zsl - > tail ;
while ( zn ! = NULL ) {
2017-04-18 11:01:47 +02:00
if ( ( n = rdbSaveRawString ( rdb ,
( unsigned char * ) zn - > ele , sdslen ( zn - > ele ) ) ) = = - 1 )
{
return - 1 ;
}
2011-03-09 13:16:38 +01:00
nwritten + = n ;
2017-04-18 11:01:47 +02:00
if ( ( n = rdbSaveBinaryDoubleValue ( rdb , zn - > score ) ) = = - 1 )
return - 1 ;
2011-03-09 13:16:38 +01:00
nwritten + = n ;
2017-03-31 21:45:00 +08:00
zn = zn - > backward ;
2011-03-09 13:16:38 +01:00
}
        } else {
2015-07-27 09:41:48 +02:00
            serverPanic("Unknown sorted set encoding");
2010-06-22 00:07:48 +02:00
        }
2015-07-26 15:28:00 +02:00
} else if ( o - > type = = OBJ_HASH ) {
2010-06-22 00:07:48 +02:00
/* Save a hash value */
2021-08-10 14:18:49 +08:00
if ( o - > encoding = = OBJ_ENCODING_LISTPACK ) {
size_t l = lpBytes ( ( unsigned char * ) o - > ptr ) ;
2010-06-22 00:07:48 +02:00
2011-05-13 17:31:00 +02:00
if ( ( n = rdbSaveRawString ( rdb , o - > ptr , l ) ) = = - 1 ) return - 1 ;
2010-11-21 15:39:34 +01:00
nwritten + = n ;
2015-07-26 15:28:00 +02:00
} else if ( o - > encoding = = OBJ_ENCODING_HT ) {
2010-06-22 00:07:48 +02:00
dictIterator * di = dictGetIterator ( o - > ptr ) ;
dictEntry * de ;
2018-05-09 12:06:37 +02:00
if ( ( n = rdbSaveLen ( rdb , dictSize ( ( dict * ) o - > ptr ) ) ) = = - 1 ) {
dictReleaseIterator ( di ) ;
return - 1 ;
}
2010-11-21 15:39:34 +01:00
nwritten + = n ;
2010-06-22 00:07:48 +02:00
while ( ( de = dictNext ( di ) ) ! = NULL ) {
2015-09-23 10:34:53 +02:00
sds field = dictGetKey ( de ) ;
sds value = dictGetVal ( de ) ;
2010-06-22 00:07:48 +02:00
2015-09-23 10:34:53 +02:00
if ( ( n = rdbSaveRawString ( rdb , ( unsigned char * ) field ,
2018-05-09 11:03:27 +02:00
sdslen ( field ) ) ) = = - 1 )
{
dictReleaseIterator ( di ) ;
return - 1 ;
}
2010-11-21 15:39:34 +01:00
nwritten + = n ;
2015-09-23 10:34:53 +02:00
if ( ( n = rdbSaveRawString ( rdb , ( unsigned char * ) value ,
2018-05-09 11:03:27 +02:00
sdslen ( value ) ) ) = = - 1 )
{
dictReleaseIterator ( di ) ;
return - 1 ;
}
2010-11-21 15:39:34 +01:00
nwritten + = n ;
2010-06-22 00:07:48 +02:00
}
dictReleaseIterator ( di ) ;
2012-01-02 22:14:10 -08:00
        } else {
2015-07-27 09:41:48 +02:00
            serverPanic("Unknown hash encoding");
2010-06-22 00:07:48 +02:00
        }
2017-09-05 13:14:13 +02:00
} else if ( o - > type = = OBJ_STREAM ) {
/* Store how many listpacks we have inside the radix tree. */
stream * s = o - > ptr ;
rax * rax = s - > rax ;
if ( ( n = rdbSaveLen ( rdb , raxSize ( rax ) ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
2012-01-02 22:14:10 -08:00
2017-09-05 13:14:13 +02:00
/* Serialize all the listpacks inside the radix tree as they are,
* when loading back , we ' ll use the first entry of each listpack
* to insert it back into the radix tree . */
        raxIterator ri;
        raxStart(&ri,rax);
        raxSeek(&ri,"^",NULL,0);
while ( raxNext ( & ri ) ) {
unsigned char * lp = ri . data ;
size_t lp_bytes = lpBytes ( lp ) ;
2020-07-21 01:13:05 -04:00
if ( ( n = rdbSaveRawString ( rdb , ri . key , ri . key_len ) ) = = - 1 ) {
raxStop ( & ri ) ;
return - 1 ;
}
2017-09-28 16:55:46 +02:00
nwritten + = n ;
2020-07-21 01:13:05 -04:00
if ( ( n = rdbSaveRawString ( rdb , lp , lp_bytes ) ) = = - 1 ) {
raxStop ( & ri ) ;
return - 1 ;
}
2017-09-05 13:14:13 +02:00
nwritten + = n ;
}
raxStop ( & ri ) ;
2017-09-05 16:24:11 +02:00
2017-09-06 12:00:18 +02:00
/* Save the number of elements inside the stream. We cannot obtain
* this easily later , since our macro nodes should be checked for
* number of items : not a great CPU / space tradeoff . */
if ( ( n = rdbSaveLen ( rdb , s - > length ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
2017-09-05 16:24:11 +02:00
/* Save the last entry ID. */
if ( ( n = rdbSaveLen ( rdb , s - > last_id . ms ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
if ( ( n = rdbSaveLen ( rdb , s - > last_id . seq ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
Add stream consumer group lag tracking and reporting (#9127)
Adds the ability to track the lag of a consumer group (CG), that is, the number
of entries yet-to-be-delivered from the stream.
The proposed constant-time solution is in the spirit of "best-effort."
Partially addresses #8737.
## Description of approach
We add a new "entries_added" property to the stream. This starts at 0 for a new
stream and is incremented by 1 with every `XADD`. It is essentially an all-time
counter of the entries added to the stream.
Given the stream's length and this counter value, we can trivially find the logical
"entries_added" counter of the first ID if and only if the stream is contiguous.
A fragmented stream contains one or more tombstones generated by `XDEL`s.
The new "xdel_max_id" stream property tracks the latest tombstone.
The CG also tracks its last delivered ID as an "entries_read" counter and
increments it independently when delivering new messages, unless this
read counter is invalid (-1 means invalid offset). When the CG's counter is
available, the reported lag is the difference between added and read counters.
Lastly, this also adds a "first_id" field to the stream structure in order to make
looking it up cheaper in most cases.
## Limitations
There are two cases in which the mechanism isn't able to track the lag.
In these cases, `XINFO` replies with `null` in the "lag" field.
The first case is when a CG is created with an arbitrary last delivered ID,
that isn't "0-0", nor the first or the last entries of the stream. In this case,
it is impossible to obtain a valid read counter (short of an O(N) operation).
The second case is when there are one or more tombstones fragmenting
the stream's entries range.
In both cases, given enough time and assuming that the consumers are
active (reading and acking) and advancing, the CG should be able to
catch up with the tip of the stream and report zero lag.
Once that's achieved, lag tracking would resume as normal (until the
next tombstone is set).
## API changes
* `XGROUP CREATE` added with the optional named argument `[ENTRIESREAD entries-read]`
for explicitly specifying the new CG's counter.
* `XGROUP SETID` added with an optional positional argument `[ENTRIESREAD entries-read]`
for specifying the CG's counter.
* `XINFO` reports the maximal tombstone ID, the recorded first entry ID, and total
number of entries added to the stream.
* `XINFO` reports the current lag and logical read counter of CGs.
* `XSETID` is an internal command that's used in replication/aof. It has been added with
the optional positional arguments `[ENTRIESADDED entries-added] [MAXDELETEDID max-deleted-entry-id]`
for propagating the CG's offset and maximal tombstone ID of the stream.
## The generic unsolved problem
The current stream implementation doesn't provide an efficient way to obtain the
approximate/exact size of a range of entries. While it could've been nice to have
that ability (#5813) in general, let alone specifically in the context of CGs, the risk
and complexities involved in such implementation are in all likelihood prohibitive.
## A refactoring note
The `streamGetEdgeID` has been refactored to accommodate both the existing seek
of any entry as well as seeking non-deleted entries (the addition of the `skip_tombstones`
argument). Furthermore, this refactoring also migrated the seek logic to use the
`streamIterator` (rather than `raxIterator`) that was, in turn, extended with the
`skip_tombstones` Boolean struct field to control the emission of these.
Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2022-02-23 22:34:58 +02:00
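        /* Lag illustration (added for clarity, per the message above): when
         * both counters are valid, a consumer group's lag is simply
         *   lag = s->entries_added - cg->entries_read
         * and it is reported as null by XINFO when entries_read is unknown. */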
/* Save the first entry ID. */
if ( ( n = rdbSaveLen ( rdb , s - > first_id . ms ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
if ( ( n = rdbSaveLen ( rdb , s - > first_id . seq ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
/* Save the maximal tombstone ID. */
if ( ( n = rdbSaveLen ( rdb , s - > max_deleted_entry_id . ms ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
if ( ( n = rdbSaveLen ( rdb , s - > max_deleted_entry_id . seq ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
/* Save the offset. */
if ( ( n = rdbSaveLen ( rdb , s - > entries_added ) ) = = - 1 ) return - 1 ;
nwritten + = n ;
2018-01-31 12:05:04 +01:00
/* The consumer groups and their clients are part of the stream
* type , so serialize every consumer group . */
/* Save the number of groups. */
2018-02-18 23:13:41 +01:00
size_t num_cgroups = s - > cgroups ? raxSize ( s - > cgroups ) : 0 ;
if ( ( n = rdbSaveLen ( rdb , num_cgroups ) ) = = - 1 ) return - 1 ;
2018-01-31 12:05:04 +01:00
nwritten + = n ;
2018-02-18 23:13:41 +01:00
if ( num_cgroups ) {
/* Serialize each consumer group. */
            raxStart(&ri,s->cgroups);
            raxSeek(&ri,"^",NULL,0);
            while(raxNext(&ri)) {
streamCG * cg = ri . data ;
2018-01-31 12:05:04 +01:00
2018-02-18 23:13:41 +01:00
/* Save the group name. */
2020-07-21 01:13:05 -04:00
if ( ( n = rdbSaveRawString ( rdb , ri . key , ri . key_len ) ) = = - 1 ) {
raxStop ( & ri ) ;
2018-02-18 23:13:41 +01:00
return - 1 ;
2020-07-21 01:13:05 -04:00
}
2018-02-18 23:13:41 +01:00
nwritten + = n ;
2018-01-31 12:05:04 +01:00
2018-02-18 23:13:41 +01:00
/* Last ID. */
2020-07-21 01:13:05 -04:00
if ( ( n = rdbSaveLen ( rdb , cg - > last_id . ms ) ) = = - 1 ) {
raxStop ( & ri ) ;
return - 1 ;
}
2018-02-18 23:13:41 +01:00
nwritten + = n ;
2020-07-21 01:13:05 -04:00
if ( ( n = rdbSaveLen ( rdb , cg - > last_id . seq ) ) = = - 1 ) {
raxStop ( & ri ) ;
return - 1 ;
}
2018-02-18 23:13:41 +01:00
nwritten + = n ;
2022-02-23 22:34:58 +02:00
/* Save the group's logical reads counter. */
if ( ( n = rdbSaveLen ( rdb , cg - > entries_read ) ) = = - 1 ) {
raxStop ( & ri ) ;
return - 1 ;
}
nwritten + = n ;
2018-01-31 12:05:04 +01:00
2018-02-18 23:13:41 +01:00
/* Save the global PEL. */
2020-07-21 01:13:05 -04:00
if ( ( n = rdbSaveStreamPEL ( rdb , cg - > pel , 1 ) ) = = - 1 ) {
raxStop ( & ri ) ;
return - 1 ;
}
2018-02-18 23:13:41 +01:00
nwritten + = n ;
2018-01-31 12:05:04 +01:00
2018-02-18 23:13:41 +01:00
/* Save the consumers of this group. */
2020-07-21 01:13:05 -04:00
if ( ( n = rdbSaveStreamConsumers ( rdb , cg ) ) = = - 1 ) {
raxStop ( & ri ) ;
return - 1 ;
}
2018-02-18 23:13:41 +01:00
nwritten + = n ;
}
raxStop ( & ri ) ;
2018-01-31 12:05:04 +01:00
}
2016-05-18 11:45:40 +02:00
} else if ( o - > type = = OBJ_MODULE ) {
/* Save a module-specific value. */
2024-04-05 16:59:55 -07:00
ValkeyModuleIO io ;
2016-05-18 11:45:40 +02:00
moduleValue * mv = o - > ptr ;
moduleType * mt = mv - > type ;
/* Write the "module" identifier as prefix, so that we'll be able
* to call the right module during loading . */
int retval = rdbSaveLen ( rdb , mt - > id ) ;
if ( retval = = - 1 ) return - 1 ;
2021-06-16 14:45:49 +08:00
moduleInitIOContext ( io , mt , rdb , key , dbid ) ;
2016-05-18 11:45:40 +02:00
io . bytes + = retval ;
2017-06-27 13:09:33 +02:00
/* Then write the module-specific representation + EOF marker. */
2016-05-18 11:45:40 +02:00
mt - > rdb_save ( & io , mv - > value ) ;
2017-06-27 13:09:33 +02:00
retval = rdbSaveLen ( rdb , RDB_MODULE_OPCODE_EOF ) ;
2019-07-21 17:41:03 +03:00
if ( retval = = - 1 )
io . error = 1 ;
else
io . bytes + = retval ;
2017-06-27 13:09:33 +02:00
2016-10-06 17:05:38 +02:00
if ( io . ctx ) {
moduleFreeContext ( io . ctx ) ;
zfree ( io . ctx ) ;
}
2016-06-05 15:34:43 +02:00
return io . error ? - 1 : ( ssize_t ) io . bytes ;
2010-06-22 00:07:48 +02:00
    } else {
2015-07-27 09:41:48 +02:00
        serverPanic("Unknown object type");
2010-06-22 00:07:48 +02:00
    }
2010-11-21 15:39:34 +01:00
    return nwritten;
2010-06-22 00:07:48 +02:00
}
/* Return the length the object will have on disk if saved with
 * the rdbSaveObject() function. Currently we use a trick to get
 * this length with very little changes to the code. In the future
 * we could switch to a faster solution. */
2021-06-16 14:45:49 +08:00
size_t rdbSavedObjectLen(robj *o, robj *key, int dbid) {
    ssize_t len = rdbSaveObject(NULL,o,key,dbid);
2015-07-26 15:29:53 +02:00
    serverAssertWithInfo(NULL,o,len != -1);
2010-11-21 16:27:47 +01:00
    return len;
2010-06-22 00:07:48 +02:00
}
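/* The "trick" mentioned above (added for clarity): rdbSaveObject() is called
 * with a NULL rio, and the low level writers simply return the number of
 * bytes they would have written without performing any I/O, so the full
 * serialization cost is paid but nothing is stored. */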
2010-12-30 16:41:36 +01:00
/* Save a key-value pair, with expire time, type, key, value.
 * On error -1 is returned.
2021-03-25 21:09:12 +08:00
 * On success if the key was actually saved 1 is returned. */
2021-06-16 14:45:49 +08:00
int rdbSaveKeyValuePair(rio *rdb, robj *key, robj *val, long long expiretime, int dbid) {
2018-03-15 13:15:46 +01:00
int savelru = server . maxmemory_policy & MAXMEMORY_FLAG_LRU ;
int savelfu = server . maxmemory_policy & MAXMEMORY_FLAG_LFU ;
2010-12-30 16:41:36 +01:00
/* Save the expire time */
if ( expiretime ! = - 1 ) {
2015-07-27 09:41:48 +02:00
if ( rdbSaveType ( rdb , RDB_OPCODE_EXPIRETIME_MS ) = = - 1 ) return - 1 ;
2011-11-09 16:51:19 +01:00
if ( rdbSaveMillisecondTime ( rdb , expiretime ) = = - 1 ) return - 1 ;
2010-12-30 16:41:36 +01:00
}
2011-05-13 22:14:39 +02:00
2018-03-15 13:15:46 +01:00
/* Save the LRU info. */
if ( savelru ) {
2018-06-12 17:31:04 +02:00
uint64_t idletime = estimateObjectIdleTime ( val ) ;
2018-03-15 13:15:46 +01:00
idletime / = 1000 ; /* Using seconds is enough and requires less space.*/
if ( rdbSaveType ( rdb , RDB_OPCODE_IDLE ) = = - 1 ) return - 1 ;
if ( rdbSaveLen ( rdb , idletime ) = = - 1 ) return - 1 ;
}
/* Save the LFU info. */
if ( savelfu ) {
uint8_t buf [ 1 ] ;
buf [ 0 ] = LFUDecrAndReturn ( val ) ;
        /* We can encode this in exactly two bytes: the opcode and an 8
         * bit counter, since the frequency is logarithmic with a 0-255 range.
         * Note that we do not store the halving time because to reset it
         * a single time when loading does not affect the frequency much. */
        if (rdbSaveType(rdb,RDB_OPCODE_FREQ) == -1) return -1;
        if (rdbWriteRaw(rdb,buf,1) == -1) return -1;
    }
2010-12-30 16:41:36 +01:00
/* Save type, key, value */
2011-05-13 22:14:39 +02:00
if ( rdbSaveObjectType ( rdb , val ) = = - 1 ) return - 1 ;
2011-05-13 17:31:00 +02:00
if ( rdbSaveStringObject ( rdb , key ) = = - 1 ) return - 1 ;
2021-06-16 14:45:49 +08:00
if ( rdbSaveObject ( rdb , val , key , dbid ) = = - 1 ) return - 1 ;
2019-07-01 15:22:29 +03:00
/* Delay return if required (for testing) */
if ( server . rdb_key_save_delay )
2020-09-03 08:47:29 +03:00
debugDelay ( server . rdb_key_save_delay ) ;
2019-07-01 15:22:29 +03:00
2010-12-30 16:41:36 +01:00
return 1 ;
}
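For reference, here is a minimal sketch of the record layout produced above, written against a toy in-memory buffer rather than rio. The opcode values and the one-byte string length prefix are illustrative assumptions; only the field order (optional expire, optional LRU/LFU hint, then type, key, value) mirrors rdbSaveKeyValuePair.
```
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define OPC_EXPIRETIME_MS 252   /* Illustrative opcode values. */
#define OPC_IDLE          248
#define OPC_FREQ          249

typedef struct { uint8_t buf[512]; size_t len; } outbuf;  /* No bounds checks: sketch only. */

static void put_byte(outbuf *o, uint8_t b) { o->buf[o->len++] = b; }
static void put_u64(outbuf *o, uint64_t v) { memcpy(o->buf + o->len, &v, 8); o->len += 8; }
static void put_str(outbuf *o, const char *s) {
    uint8_t l = (uint8_t)strlen(s);      /* Toy one-byte length prefix. */
    put_byte(o, l);
    memcpy(o->buf + o->len, s, l);
    o->len += l;
}

static void put_key_value(outbuf *o, const char *key, const char *val,
                          long long expire_ms, int lru_idle_s, int lfu_counter) {
    if (expire_ms != -1) {               /* Optional TTL in milliseconds. */
        put_byte(o, OPC_EXPIRETIME_MS);
        put_u64(o, (uint64_t)expire_ms);
    }
    if (lru_idle_s >= 0) {               /* Optional LRU idle time, seconds. */
        put_byte(o, OPC_IDLE);
        put_u64(o, (uint64_t)lru_idle_s);
    }
    if (lfu_counter >= 0) {              /* Optional LFU counter, one byte. */
        put_byte(o, OPC_FREQ);
        put_byte(o, (uint8_t)lfu_counter);
    }
    put_byte(o, 0);                      /* Object type (0 = string, illustrative). */
    put_str(o, key);                     /* Key, then the value payload. */
    put_str(o, val);
}
```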
2015-01-08 08:56:35 +01:00
/* Save an AUX field. */
2017-12-21 11:10:48 +02:00
ssize_t rdbSaveAuxField(rio *rdb, void *key, size_t keylen, void *val, size_t vallen) {
    ssize_t ret, len = 0;
    if ((ret = rdbSaveType(rdb,RDB_OPCODE_AUX)) == -1) return -1;
    len += ret;
2018-02-27 21:55:20 +09:00
    if ((ret = rdbSaveRawString(rdb,key,keylen)) == -1) return -1;
2017-12-21 11:10:48 +02:00
    len += ret;
2018-02-27 21:55:20 +09:00
    if ((ret = rdbSaveRawString(rdb,val,vallen)) == -1) return -1;
2017-12-21 11:10:48 +02:00
    len += ret;
    return len;
2015-01-08 08:56:35 +01:00
}
/* Wrapper for rdbSaveAuxField() used when key/val length can be obtained
 * with strlen(). */
2017-12-21 11:10:48 +02:00
ssize_t rdbSaveAuxFieldStrStr(rio *rdb, char *key, char *val) {
2015-01-08 08:56:35 +01:00
    return rdbSaveAuxField(rdb,key,strlen(key),val,strlen(val));
}
/* Wrapper for strlen(key) + integer type (up to long long range). */
2017-12-21 11:10:48 +02:00
ssize_t rdbSaveAuxFieldStrInt(rio *rdb, char *key, long long val) {
2015-07-27 09:41:48 +02:00
    char buf[LONG_STR_SIZE];
2015-01-08 08:56:35 +01:00
    int vlen = ll2string(buf,sizeof(buf),val);
    return rdbSaveAuxField(rdb,key,strlen(key),buf,vlen);
}
/* Save a few default AUX fields with information about the RDB generated. */
2019-10-29 17:59:09 +02:00
int rdbSaveInfoAuxFields(rio *rdb, int rdbflags, rdbSaveInfo *rsi) {
2015-01-08 09:08:55 +01:00
    int redis_bits = (sizeof(void*) == 8) ? 64 : 32;
2022-02-12 00:47:03 +08:00
    int aof_base = (rdbflags & RDBFLAGS_AOF_PREAMBLE) != 0;
2015-01-08 09:08:55 +01:00
2015-01-08 12:06:17 +01:00
    /* Add a few fields about the state when the RDB was created. */
2024-04-05 21:15:57 -07:00
    if (rdbSaveAuxFieldStrStr(rdb,"valkey-ver",VALKEY_VERSION) == -1) return -1;
2015-01-08 09:08:55 +01:00
    if (rdbSaveAuxFieldStrInt(rdb,"redis-bits",redis_bits) == -1) return -1;
2015-01-08 08:56:35 +01:00
    if (rdbSaveAuxFieldStrInt(rdb,"ctime",time(NULL)) == -1) return -1;
2015-01-08 09:08:55 +01:00
    if (rdbSaveAuxFieldStrInt(rdb,"used-mem",zmalloc_used_memory()) == -1) return -1;
PSYNC2: different improvements to Redis replication.
The gist of the changes is that now, partial resynchronizations between
slaves and masters (without the need of a full resync with RDB transfer
and so forth), work in a number of cases when it was impossible
in the past. For instance:
1. When a slave is promoted to master, the slaves of the old master can
partially resynchronize with the new master.
2. Chained slaves (slaves of slaves) can be moved to replicate to other
slaves or the master itself, without requiring a full resync.
3. The master itself, after being turned into a slave, is able to
partially resynchronize with the new master, when it joins replication
again.
In order to obtain this, the following main changes were made:
* Slaves also take a replication backlog, not just masters.
* Same stream replication for all the slaves and sub-slaves. The
replication stream is identical from the top level master to its slaves
and is also the same from the slaves to their sub-slaves and so forth.
This means that if a slave is later promoted to master, it has the
same replication backlog, and can partially resynchronize with its
slaves (that were previously slaves of the old master).
* A given replication history is no longer identified by the `runid` of
a Redis node. There is instead a `replication ID` which changes every
time the instance has a new history no longer coherent with the past
one. So, for example, slaves publish the same replication history of
their master, however when they are turned into masters, they publish
a new replication ID, but still remember the old ID, so that they are
able to partially resynchronize with slaves of the old master (up to a
given offset).
* The replication protocol was slightly modified so that a new extended
+CONTINUE reply from the master is able to inform the slave of a
replication ID change.
* REPLCONF CAPA is used in order to notify masters that a slave is able
to understand the new +CONTINUE reply.
* The RDB file was extended with an auxiliary field that is able to
select a given DB after loading in the slave, so that the slave can
continue receiving the replication stream from the point it was
disconnected without requiring the master to insert "SELECT" statements.
This is useful in order to guarantee the "same stream" property, because
the slave must be able to accumulate an identical backlog.
* Slave pings to sub-slaves are now sent in a special form, when the
top-level master is disconnected, so as not to interfere with the
replication stream. We just use out of band "\n" bytes as in other parts
of the Redis protocol.
An old design document is available here:
https://gist.github.com/antirez/ae068f95c0d084891305
However the implementation is not identical to the description because
during the work to implement it, different changes were needed in order
to make things work well.
2016-11-09 11:31:06 +01:00
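The replication-ID scheme described above reduces, on the master side, to a simple acceptance test: a replica may continue from its offset if it cites either the master's current replication ID or the previous one (up to the offset where the history switched), and the requested offset is still covered by the backlog. A minimal sketch follows; the struct layout and names are hypothetical stand-ins, not the real replication.c logic.
```
#include <string.h>

typedef struct {
    char replid[41];                /* Current replication ID (40 hex chars + NUL). */
    char replid2[41];               /* Previous ID, kept after a history switch. */
    long long second_replid_offset; /* Offsets <= this are valid for replid2; -1 if none. */
    long long backlog_start_offset; /* First offset still held in the backlog. */
    long long master_repl_offset;   /* Current end of the replication stream. */
} repl_state;

/* Return 1 if a replica asking to continue from (req_id, req_offset) can get
 * a +CONTINUE reply (possibly announcing a replication ID change), 0 if it
 * needs a full resync. */
static int can_partial_resync(const repl_state *m, const char *req_id, long long req_offset) {
    int same_history =
        strcmp(req_id, m->replid) == 0 ||
        (strcmp(req_id, m->replid2) == 0 && req_offset <= m->second_replid_offset);
    int in_backlog =
        req_offset >= m->backlog_start_offset &&
        req_offset <= m->master_repl_offset;
    return same_history && in_backlog;
}
```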
    /* Handle saving options that generate aux fields. */
    if (rsi) {
2017-09-19 23:03:39 +02:00
        if (rdbSaveAuxFieldStrInt(rdb,"repl-stream-db",rsi->repl_stream_db)
            == -1) return -1;
        if (rdbSaveAuxFieldStrStr(rdb,"repl-id",server.replid)
            == -1) return -1;
        if (rdbSaveAuxFieldStrInt(rdb,"repl-offset",server.master_repl_offset)
            == -1) return -1;
2016-11-09 11:31:06 +01:00
    }
2022-02-12 00:47:03 +08:00
    if (rdbSaveAuxFieldStrInt(rdb,"aof-base",aof_base) == -1) return -1;
2015-01-08 08:56:35 +01:00
    return 1;
}
2019-07-21 17:41:03 +03:00
ssize_t rdbSaveSingleModuleAux(rio *rdb, int when, moduleType *mt) {
    /* Save a module-specific aux value. */
2024-04-05 16:59:55 -07:00
    ValkeyModuleIO io;
Avoid saving module aux on RDB if no aux data was saved by the module. (#11374)
### Background
The issue is that when saving an RDB with module AUX data, the module AUX metadata
(moduleid, when, ...) is saved to the RDB even if the module did not save any actual data.
This prevents loading the RDB in the absence of the module (although there is no actual data
in the RDB that requires the module to be loaded).
### Solution
The solution suggested in this PR is that module AUX will be saved to the RDB only if the module
actually saved something during the `aux_save` function.
To support backward compatibility, we introduce an `aux_save2` callback that acts the same as
`aux_save`, with the small difference that the aux field is not saved if no data was actually
written by the module. Modules can use the new API to make sure that if they have no data to
save, it will still be possible to load the resulting RDB even without the module.
### Concerns
A module may register for the aux load and save hooks just in order to be notified when
saving or loading starts or completes (there are better ways to do that, but it is still possible
that someone used it).
However, if a module didn't save a single field in the save callback, it means it's not allowed
to read in the read callback, since it has no way to distinguish between empty and non-empty
payloads. Furthermore, it means that if the module did that, it must never change it, since it'll
break compatibility with its old RDB files, so this is really not a valid use case.
Since some modules (ones that currently save one field indicating an empty payload) need
to know whether saving an empty payload is valid, and whether Redis is going to ignore an
empty payload or store it, we opted to add a new API (rather than change the behavior of an
existing API and expect modules to check the Redis version).
### Technical Details
To avoid saving AUX data to the RDB, we change the code to first save the AUX metadata
(moduleid, when, ...) into a temporary buffer. The buffer is then flushed to the rio the first
time the module makes a write operation inside the `aux_save` function. If the module saves
nothing (and `aux_save2` was used), the entire temporary buffer is simply dropped and no
data about this AUX field is saved to the RDB. This makes it possible to load the RDB even in
the absence of the module.
A test was added to verify the fix.
2022-10-18 19:45:46 +03:00
    int retval = 0;
2021-06-16 14:45:49 +08:00
    moduleInitIOContext(io,mt,rdb,NULL,-1);
2022-10-18 19:45:46 +03:00
    /* We save the AUX field header in a temporary buffer so we can support the aux_save2 API.
     * If aux_save2 is used, the buffer will be flushed the first time the module performs
     * a write operation to the RDB, and will be ignored in case there were no writes. */
    rio aux_save_headers_rio;
    rioInitWithBuffer(&aux_save_headers_rio, sdsempty());
    if (rdbSaveType(&aux_save_headers_rio, RDB_OPCODE_MODULE_AUX) == -1) goto error;
2019-07-21 17:41:03 +03:00
    /* Write the "module" identifier as prefix, so that we'll be able
     * to call the right module during loading. */
2022-10-18 19:45:46 +03:00
    if (rdbSaveLen(&aux_save_headers_rio, mt->id) == -1) goto error;
2019-07-21 17:41:03 +03:00
2019-09-05 14:11:37 +03:00
    /* Write the 'when' so that we can provide it on loading. Add a UINT opcode
     * for backwards compatibility: everything after the MT needs to be prefixed
     * by an opcode. */
2022-10-18 19:45:46 +03:00
    if (rdbSaveLen(&aux_save_headers_rio, RDB_MODULE_OPCODE_UINT) == -1) goto error;
    if (rdbSaveLen(&aux_save_headers_rio, when) == -1) goto error;
2019-07-21 17:41:03 +03:00
    /* Then write the module-specific representation + EOF marker. */
2022-10-18 19:45:46 +03:00
    if (mt->aux_save2) {
        io.pre_flush_buffer = aux_save_headers_rio.io.buffer.ptr;
        mt->aux_save2(&io,when);
        if (io.pre_flush_buffer) {
            /* aux_save2 did not save any data to the RDB.
             * We will avoid saving any data related to this aux type
             * to allow loading this RDB if the module is not present. */
            sdsfree(io.pre_flush_buffer);
            io.pre_flush_buffer = NULL;
            return 0;
        }
    } else {
        /* Write headers now, aux_save does not do lazy saving of the headers. */
        retval = rdbWriteRaw(rdb, aux_save_headers_rio.io.buffer.ptr, sdslen(aux_save_headers_rio.io.buffer.ptr));
        if (retval == -1) goto error;
        io.bytes += retval;
        sdsfree(aux_save_headers_rio.io.buffer.ptr);
        mt->aux_save(&io,when);
    }
2019-07-21 17:41:03 +03:00
    retval = rdbSaveLen(rdb,RDB_MODULE_OPCODE_EOF);
2022-10-18 19:45:46 +03:00
    serverAssert(!io.pre_flush_buffer);
2019-07-21 17:41:03 +03:00
    if (retval == -1)
        io.error = 1;
    else
        io.bytes += retval;
    if (io.ctx) {
        moduleFreeContext(io.ctx);
        zfree(io.ctx);
    }
    if (io.error)
        return -1;
    return io.bytes;
2022-10-18 19:45:46 +03:00
error:
    sdsfree(aux_save_headers_rio.io.buffer.ptr);
    return -1;
2019-07-21 17:41:03 +03:00
}
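The aux_save2 path above stages the AUX header in a temporary in-memory rio and only flushes it when the module actually writes something; otherwise the header is dropped so the RDB stays loadable without the module. Here is a minimal sketch of that lazy header-flush pattern using a plain FILE* and fixed buffers instead of rio/sds; all names are illustrative, not the real module API.
```
#include <stdio.h>
#include <string.h>

typedef struct {
    FILE *out;
    char header[64];
    size_t header_len;       /* > 0 means the header has not been flushed yet. */
    size_t bytes;            /* Total bytes that reached the output. */
} lazy_writer;

static int lw_write(lazy_writer *w, const void *buf, size_t len) {
    if (w->header_len) {     /* First real write: flush the staged header. */
        if (fwrite(w->header, 1, w->header_len, w->out) != w->header_len) return -1;
        w->bytes += w->header_len;
        w->header_len = 0;
    }
    if (fwrite(buf, 1, len, w->out) != len) return -1;
    w->bytes += len;
    return 0;
}

/* Returns bytes written, or 0 if the callback produced no payload, in which
 * case nothing at all (not even the header) reaches the output. */
static size_t save_aux(lazy_writer *w, void (*aux_cb)(lazy_writer *)) {
    aux_cb(w);
    if (w->header_len) { w->header_len = 0; return 0; } /* Nothing written: drop header. */
    return w->bytes;
}
```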
2022-01-02 09:39:01 +02:00
ssize_t rdbSaveFunctions(rio *rdb) {
Redis Function Libraries (#10004)
# Redis Function Libraries
This PR implements Redis Function Libraries as described in: https://github.com/redis/redis/issues/9906.
The purpose of libraries is to provide better code sharing between functions by allowing the creation of
multiple functions in a single command. Functions that were created together can safely share code with
each other without worrying about compatibility issues and versioning.
Creating a new library is done using the 'FUNCTION LOAD' command (the full API is described below).
This PR introduces a new struct called libraryInfo; libraryInfo holds information about a library:
* name - name of the library
* engine - engine used to create the library
* code - library code
* description - library description
* functions - the functions exposed by the library
When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo.
Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo.
As a result, the engine will create one or more functions by calling 'libraryCreateFunction'.
The new function will be added to the newly created libraryInfo. So far everything is happening
locally on the libraryInfo, so it is easy to abort the operation (in case of an error) by simply
freeing the libraryInfo. After the library info is fully constructed we start the joining phase, in
which we join the new library to the other libraries that currently exist on Redis.
The joining phase makes sure there is no function collision and adds the library to the
librariesCtx (renamed from functionCtx). LibrariesCtx is used all around the code in the exact
same way as functionCtx was used (with respect to RDB loading, replication, ...).
The only difference is that apart from the function dictionary (which maps function name to functionInfo
object), the librariesCtx also contains a libraries dictionary that maps library name to libraryInfo object.
## New API
### FUNCTION LOAD
`FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
Create a new library with the given parameters:
* ENGINE - Engine name to use to create the library.
* LIBRARY NAME - The new library name.
* REPLACE - If the library already exists, replace it.
* DESCRIPTION - Library description.
* CODE - Library code.
Return "OK" on success, or an error in the following cases:
* Library name already taken and REPLACE was not used
* Name collision with another existing library (even if REPLACE was used)
* Library registration failed by the engine (usually a compilation error)
## Changed API
### FUNCTION LIST
`FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
The command was modified to also allow getting the libraries' code (so the `FUNCTION INFO` command is no longer
needed and was removed). In addition, the command gets an optional argument, `LIBRARYNAME`, which allows you to
only get libraries that match the given `LIBRARYNAME` pattern. By default, it returns all libraries.
### INFO MEMORY
Added the number of libraries to `INFO MEMORY`
### Commands flags
The `DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands
as commands that add new data to the dataset (functions are data) and so we want to disallow
running those commands on OOM.
## Removed API
* FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
* FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899
## Lua engine changes
When the Lua engine gets the code given on the `FUNCTION LOAD` command, it immediately runs it; we call
this run the loading run. The loading run is not a usual script run: it is not possible to invoke any
Redis command from within the load run.
Instead, there is a new API provided by the `library` object. The new APIs:
* `redis.log` - behaves the same as `redis.log`
* `redis.register_function` - registers a new function to the library
The loading run's purpose is to register functions using the new `redis.register_function` API.
Any attempt to use any other API will result in an error. In addition, the load run has a time
limit of 500ms; an error is raised on timeout and the entire operation is aborted.
### `redis.register_function`
`redis.register_function(<function_name>, <callback>, [<description>])`
This new API allows users to register a new function that will be linked to the newly created library.
This API can only be called during the load run (see the definition above). Any attempt to use it outside
of the load run will result in an error.
The parameters passed to the API are:
* function_name - Function name (must be a Lua string)
* callback - Lua function object that will be called when the function is invoked using fcall/fcall_ro
* description - Function description, optional (must be a Lua string).
### Example
The following example creates a library called `lib` with 2 functions, `f1` and `f2`, returning 1 and 2 respectively:
```
local function f1(keys, args)
return 1
end
local function f2(keys, args)
return 2
end
redis.register_function('f1', f1)
redis.register_function('f2', f2)
```
Notice: Unlike `eval`, functions inside a library get the KEYS and ARGV as arguments to the
functions and not as globals.
### Technical Details
On the load run we only want the user to be able to call a whitelist of APIs. This way, in
the future, if new APIs are added, they will not be available to the load run
unless specifically added to this whitelist. We put the whitelist on the `library` object and
make sure the `library` object is only available to the load run by using the [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API. This API allows us to set
the `globals` of a function (and all the functions it creates). Before starting the load run we
create a fresh new Lua table (call it `g`) that only contains the `library` API (we make sure
to set global protection on this table just like the general global protection that already exists
today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv)
to set `g` as the global table of the load run. After the load run finishes we update `g`'s
metatable and set the `__index` and `__newindex` functions to be `_G` (Lua default globals);
we also pop out the `library` object as we do not need it anymore.
This way, any function that was created on the load run (and will be invoked using `fcall`) will
see the default globals as it expects to see them and will not have the `library` API anymore.
An important outcome of this new approach is that now we can achieve a distinct global table
for each library (it is not yet like that but it is very easy to achieve it now). In the future we can
decide to remove global protection because globals on different libraries will not collide, or we
can choose to give different APIs to different libraries based on some configuration or input.
Notice that this technique was meant to prevent errors and was not meant to prevent a malicious
user from exploiting it. For example, the load run can still save the `library` object in some local
variable and then use it in the `fcall` context. To prevent such malicious use, the C code also makes
sure it is running in the right context and raises an error if not.
2022-01-06 13:39:38 +02:00
    dict *functions = functionsLibGet();
2021-12-26 09:03:37 +02:00
    dictIterator *iter = dictGetIterator(functions);
    dictEntry *entry = NULL;
2022-01-02 09:39:01 +02:00
    ssize_t written = 0;
    ssize_t ret;
2021-12-26 09:03:37 +02:00
    while ((entry = dictNext(iter))) {
2022-04-05 10:27:24 +03:00
        if ((ret = rdbSaveType(rdb, RDB_OPCODE_FUNCTION2)) < 0) goto werr;
2022-01-02 09:39:01 +02:00
        written += ret;
2022-01-06 13:39:38 +02:00
        functionLibInfo *li = dictGetVal(entry);
        if ((ret = rdbSaveRawString(rdb, (unsigned char *) li->code, sdslen(li->code))) < 0) goto werr;
2022-01-02 09:39:01 +02:00
        written += ret;
2021-12-26 09:03:37 +02:00
    }
    dictReleaseIterator(iter);
2022-01-02 09:39:01 +02:00
    return written;
werr:
    dictReleaseIterator(iter);
    return -1;
}
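For reference, the functions section written above is simply a run of entries, each an opcode marker followed by one raw string holding the full library source (which the loader re-evaluates to re-register the library). The toy reader below walks such a run over an in-memory slice; the one-byte opcode and length prefix are illustrative simplifications of the real RDB length encoding.
```
#include <stdint.h>
#include <stddef.h>

#define OPC_FUNCTION2 0xf5     /* Illustrative, not necessarily the real opcode value. */

typedef struct { const uint8_t *p, *end; } slice;   /* Toy reader state. */

/* Yield the next library's source, or return 0 when the run of FUNCTION2
 * entries ends or the payload is truncated. */
static int next_library(slice *s, const char **code, size_t *codelen) {
    if (s->p >= s->end || *s->p != OPC_FUNCTION2) return 0;
    s->p++;                                 /* Consume the opcode. */
    size_t len = *s->p++;                   /* Toy one-byte length prefix. */
    if (s->p + len > s->end) return 0;      /* Truncated payload. */
    *code = (const char *)s->p;
    *codelen = len;
    s->p += len;
    return 1;
}
```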
ssize_t rdbSaveDb(rio *rdb, int dbid, int rdbflags, long *key_counter) {
    dictEntry *de;
    ssize_t written = 0;
    ssize_t res;
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness; the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning up some ugly code, among others: loops that run twice
over the main dict and the expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. The keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, when
they were allocated in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change, the reply of DEBUG
HTSTATS changed in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
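A minimal sketch of the array-of-dictionaries idea behind `kvstore` follows; the type and field names are illustrative, not the real kvstore.c API. The real implementation additionally tracks rehashing state and keeps a binary index tree for random and indexed slot lookups.
```
typedef struct toy_dict toy_dict;      /* Stand-in for the real hash table type. */

typedef struct {
    toy_dict **dicts;                  /* One dict per slot (16384 in cluster mode, 1 otherwise). */
    int num_dicts;
    unsigned long long key_count;      /* Maintained on every add/delete. */
} toy_kvstore;

/* O(1) total size: no need to walk all the per-slot dictionaries. */
static unsigned long long toy_kvstore_size(const toy_kvstore *kvs) {
    return kvs->key_count;
}
```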
    kvstoreIterator *kvs_it = NULL;
2022-01-02 09:39:01 +02:00
    static long long info_updated_time = 0;
    char *pname = (rdbflags & RDBFLAGS_AOF_PREAMBLE) ? "AOF rewrite" : "RDB";
2024-04-03 10:02:43 +07:00
    serverDb *db = server.db + dbid;
2024-02-05 22:21:35 +07:00
    unsigned long long int db_size = kvstoreSize(db->keys);
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time; in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from the random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find a slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server. This has an interesting side effect: now you'll be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we are relying on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places; in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). This is also used for O(1) expires computation as well.
## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* The Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new opcode for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
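The scan-cursor change described above amounts to packing the slot id into the low bits of the cursor and keeping the per-dictionary cursor in the remaining high bits. A minimal sketch follows; the 14-bit width matches the 16384 cluster slots, but the exact bit layout here is illustrative, not the precise production encoding.
```
#include <stdint.h>

#define SLOT_BITS 14u                        /* 2^14 = 16384 slots. */
#define SLOT_MASK ((1u << SLOT_BITS) - 1)

/* Combine the within-dictionary cursor and the slot id into one value that
 * can round-trip through the client. */
static uint64_t cursor_pack(uint64_t dict_cursor, unsigned slot) {
    return (dict_cursor << SLOT_BITS) | (slot & SLOT_MASK);
}

static unsigned cursor_slot(uint64_t cursor)    { return cursor & SLOT_MASK; }
static uint64_t cursor_in_dict(uint64_t cursor) { return cursor >> SLOT_BITS; }
```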
    if (db_size == 0) return 0;
2022-01-02 09:39:01 +02:00
    /* Write the SELECT DB opcode */
    if ((res = rdbSaveType(rdb, RDB_OPCODE_SELECTDB)) < 0) goto werr;
    written += res;
    if ((res = rdbSaveLen(rdb, dbid)) < 0) goto werr;
    written += res;
    /* Write the RESIZE DB opcode. */
2024-02-05 22:21:35 +07:00
    unsigned long long expires_size = kvstoreSize(db->expires);
2022-01-02 09:39:01 +02:00
    if ((res = rdbSaveType(rdb, RDB_OPCODE_RESIZEDB)) < 0) goto werr;
    written += res;
    if ((res = rdbSaveLen(rdb, db_size)) < 0) goto werr;
    written += res;
    if ((res = rdbSaveLen(rdb, expires_size)) < 0) goto werr;
    written += res;
2024-02-05 22:21:35 +07:00
kvs_it = kvstoreIteratorInit ( db - > keys ) ;
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time; in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also to select a random dictionary. Fairness is a major concern here, as it's possible that keys are unevenly distributed across the slots. In order to address this, we introduced a binary index tree (Fenwick tree). With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time (see the sketch after this commit message).
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find the slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSBs of the cursor so it can be passed around between the client and the server. This has an interesting side effect: you'll now be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross-slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading the RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). The same is kept for an O(1) expires count as well.
## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot, while non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2023-10-14 23:58:26 -07:00
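As a self-contained illustration of the binary index tree (Fenwick tree) approach mentioned in the commit message above, the sketch below keeps cumulative per-dict key counts so that a uniformly chosen global key index can be mapped to the dict that holds it. The `kci*` names are hypothetical; this is not the kvstore implementation.
```c
#include <stdlib.h>

typedef struct {
    long long *bit;   /* 1-based Fenwick tree over per-dict key counts */
    int n;            /* number of dicts, e.g. 16384 in cluster mode */
} keycount_index;

static keycount_index *kciCreate(int n) {
    keycount_index *k = malloc(sizeof(*k));
    k->n = n;
    k->bit = calloc(n + 1, sizeof(long long));
    return k;
}

/* Add 'delta' keys to dict 'i' (0-based). O(log n). */
static void kciUpdate(keycount_index *k, int i, long long delta) {
    for (i += 1; i <= k->n; i += i & -i) k->bit[i] += delta;
}

/* Cumulative number of keys in dicts [0..i]. O(log n). */
static long long kciPrefix(keycount_index *k, int i) {
    long long s = 0;
    for (i += 1; i > 0; i -= i & -i) s += k->bit[i];
    return s;
}

/* Return the dict index holding the key with global 0-based index 'target',
 * found by binary searching over prefix sums; -1 if out of range. O(log^2 n). */
static int kciFindDict(keycount_index *k, long long target) {
    int lo = 0, hi = k->n - 1, ans = -1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (kciPrefix(k, mid) > target) { ans = mid; hi = mid - 1; }
        else lo = mid + 1;
    }
    return ans;
}
```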
int last_slot = - 1 ;
2022-01-02 09:39:01 +02:00
/* Iterate this DB writing every entry */
2024-02-05 22:21:35 +07:00
while ( ( de = kvstoreIteratorNext ( kvs_it ) ) ! = NULL ) {
int curr_slot = kvstoreIteratorGetCurrentDictIndex ( kvs_it ) ;
2023-10-14 23:58:26 -07:00
/* Save slot info. */
if ( server . cluster_enabled & & curr_slot ! = last_slot ) {
if ( ( res = rdbSaveType ( rdb , RDB_OPCODE_SLOT_INFO ) ) < 0 ) goto werr ;
written + = res ;
if ( ( res = rdbSaveLen ( rdb , curr_slot ) ) < 0 ) goto werr ;
written + = res ;
2024-02-05 22:21:35 +07:00
if ( ( res = rdbSaveLen ( rdb , kvstoreDictSize ( db - > keys , curr_slot ) ) ) < 0 ) goto werr ;
2023-10-14 23:58:26 -07:00
written + = res ;
2024-02-05 22:21:35 +07:00
if ( ( res = rdbSaveLen ( rdb , kvstoreDictSize ( db - > expires , curr_slot ) ) ) < 0 ) goto werr ;
2023-10-14 23:58:26 -07:00
written + = res ;
last_slot = curr_slot ;
}
2022-01-02 09:39:01 +02:00
sds keystr = dictGetKey ( de ) ;
robj key , * o = dictGetVal ( de ) ;
long long expire ;
size_t rdb_bytes_before_key = rdb - > processed_bytes ;
initStaticStringObject ( key , keystr ) ;
expire = getExpire ( db , & key ) ;
if ( ( res = rdbSaveKeyValuePair ( rdb , & key , o , expire , dbid ) ) < 0 ) goto werr ;
written + = res ;
/* In fork child process, we can try to release memory back to the
* OS and possibly avoid or decrease COW . We give the dismiss
* mechanism a hint about an estimated size of the object we stored . */
size_t dump_size = rdb - > processed_bytes - rdb_bytes_before_key ;
if ( server . in_fork_child ) dismissObject ( o , dump_size ) ;
/* Update child info every 1 second (approximately).
* in order to avoid calling mstime ( ) on each iteration , we will
* check the diff every 1024 keys */
if ( ( ( * key_counter ) + + & 1023 ) = = 0 ) {
long long now = mstime ( ) ;
if ( now - info_updated_time > = 1000 ) {
sendChildInfo ( CHILD_INFO_TYPE_CURRENT_INFO , * key_counter , pname ) ;
info_updated_time = now ;
}
}
}
2024-02-05 22:21:35 +07:00
kvstoreIteratorRelease ( kvs_it ) ;
2022-01-02 09:39:01 +02:00
return written ;
werr :
2024-02-05 22:21:35 +07:00
if ( kvs_it ) kvstoreIteratorRelease ( kvs_it ) ;
2022-01-02 09:39:01 +02:00
return - 1 ;
2021-12-26 09:03:37 +02:00
}
2014-10-07 12:56:23 +02:00
/* Produces a dump of the database in RDB format sending it to the specified
2024-04-09 01:24:03 -07:00
* I / O channel . On success C_OK is returned , otherwise C_ERR
2014-10-07 12:56:23 +02:00
* is returned and part of the output , or all the output , can be
* missing because of I / O errors .
*
2015-07-26 23:17:55 +02:00
* When the function returns C_ERR and if ' error ' is not NULL , the
2014-10-07 12:56:23 +02:00
* integer pointed by ' error ' is set to the value of errno just after the I / O
* error . */
2022-01-02 09:39:01 +02:00
int rdbSaveRio ( int req , rio * rdb , int * error , int rdbflags , rdbSaveInfo * rsi ) {
2012-03-31 17:08:40 +02:00
char magic [ 10 ] ;
2012-04-09 22:40:41 +02:00
uint64_t cksum ;
2022-01-02 09:39:01 +02:00
long key_counter = 0 ;
2020-12-20 20:23:20 +02:00
int j ;
2010-06-22 00:07:48 +02:00
2012-04-10 15:47:10 +02:00
if ( server . rdb_checksum )
2014-10-07 12:56:23 +02:00
rdb - > update_cksum = rioGenericUpdateChecksum ;
2015-07-27 09:41:48 +02:00
snprintf ( magic , sizeof ( magic ) , " REDIS%04d " , RDB_VERSION ) ;
2014-10-07 12:56:23 +02:00
if ( rdbWriteRaw ( rdb , magic , 9 ) = = - 1 ) goto werr ;
2019-10-29 17:59:09 +02:00
if ( rdbSaveInfoAuxFields ( rdb , rdbflags , rsi ) = = - 1 ) goto werr ;
2024-04-05 16:59:55 -07:00
if ( ! ( req & SLAVE_REQ_RDB_EXCLUDE_DATA ) & & rdbSaveModulesAux ( rdb , VALKEYMODULE_AUX_BEFORE_RDB ) = = - 1 ) goto werr ;
2022-01-02 09:39:01 +02:00
/* save functions */
2022-01-04 17:09:22 +02:00
if ( ! ( req & SLAVE_REQ_RDB_EXCLUDE_FUNCTIONS ) & & rdbSaveFunctions ( rdb ) = = - 1 ) goto werr ;
2022-01-02 09:39:01 +02:00
/* save all databases, skip this if we're in functions-only mode */
2022-01-04 17:09:22 +02:00
if ( ! ( req & SLAVE_REQ_RDB_EXCLUDE_DATA ) ) {
2022-01-02 09:39:01 +02:00
for ( j = 0 ; j < server . dbnum ; j + + ) {
if ( rdbSaveDb ( rdb , j , rdbflags , & key_counter ) = = - 1 ) goto werr ;
2010-06-22 00:07:48 +02:00
}
}
2012-04-09 22:40:41 +02:00
2024-04-05 16:59:55 -07:00
if ( ! ( req & SLAVE_REQ_RDB_EXCLUDE_DATA ) & & rdbSaveModulesAux ( rdb , VALKEYMODULE_AUX_AFTER_RDB ) = = - 1 ) goto werr ;
2019-07-21 17:41:03 +03:00
2010-06-22 00:07:48 +02:00
/* EOF opcode */
2015-07-27 09:41:48 +02:00
if ( rdbSaveType ( rdb , RDB_OPCODE_EOF ) = = - 1 ) goto werr ;
2010-06-22 00:07:48 +02:00
2012-04-10 15:47:10 +02:00
/* CRC64 checksum. It will be zero if checksum computation is disabled, the
* loading code skips the check in this case . */
2014-10-07 12:56:23 +02:00
cksum = rdb - > cksum ;
2012-04-09 22:40:41 +02:00
memrev64ifbe ( & cksum ) ;
2014-10-07 12:56:23 +02:00
if ( rioWrite ( rdb , & cksum , 8 ) = = 0 ) goto werr ;
2015-07-26 23:17:55 +02:00
return C_OK ;
2014-10-07 12:56:23 +02:00
werr :
if ( error ) * error = errno ;
2015-07-26 23:17:55 +02:00
return C_ERR ;
2014-10-07 12:56:23 +02:00
}
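The function above terminates the stream with the EOF opcode followed by an 8-byte CRC64 stored in little-endian byte order (zero when checksum computation is disabled). As an illustration only, a loader-side footer check could look roughly like the sketch below; `rdb_footer_checksum_ok` and the `crc64_fn` callback are invented names, and the real loading code differs.
```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical callback so this sketch does not depend on Redis' crc64(). */
typedef uint64_t (*crc64_fn)(const unsigned char *buf, size_t len);

/* Verify the 8-byte little-endian CRC64 footer over everything before it.
 * A stored value of zero means checksumming was disabled, so skip the check. */
static int rdb_footer_checksum_ok(const unsigned char *buf, size_t len,
                                  crc64_fn compute_crc64) {
    if (len < 8) return 0;                     /* too short to hold a footer */
    uint64_t stored = 0;
    for (int i = 7; i >= 0; i--)               /* decode little-endian uint64 */
        stored = (stored << 8) | buf[len - 8 + i];
    if (stored == 0) return 1;                 /* checksum disabled: skip check */
    return compute_crc64(buf, len - 8) == stored;
}
```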
Refine the purpose of rdb saving with accurate flags (#12925)
In Redis, an RDB is produced mainly in three scenarios:
- backup, such as `bgsave` and `save` command
- full sync in replication
- aof rewrite if `aof-use-rdb-preamble` is yes
We also have some RDB flags to identify the purpose of rdb saving.
```C
/* flags on the purpose of rdb save or load */
#define RDBFLAGS_NONE 0 /* No special RDB loading. */
#define RDBFLAGS_AOF_PREAMBLE (1<<0) /* Load/save the RDB as AOF preamble. */
#define RDBFLAGS_REPLICATION (1<<1) /* Load/save for SYNC. */
```
But currently, it seems that these flags and the purposes of rdb saving
don't exactly match. I found this in `rdbSaveRioWithEOFMark`, which calls
`startSaving` with `RDBFLAGS_REPLICATION` but `rdbSaveRio` with
`RDBFLAGS_NONE`.
```C
int rdbSaveRioWithEOFMark(int req, rio *rdb, int *error, rdbSaveInfo *rsi) {
char eofmark[RDB_EOF_MARK_SIZE];
startSaving(RDBFLAGS_REPLICATION);
getRandomHexChars(eofmark,RDB_EOF_MARK_SIZE);
if (error) *error = 0;
if (rioWrite(rdb,"$EOF:",5) == 0) goto werr;
if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
if (rioWrite(rdb,"\r\n",2) == 0) goto werr;
if (rdbSaveRio(req,rdb,error,RDBFLAGS_NONE,rsi) == C_ERR) goto werr;
if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
stopSaving(1);
return C_OK;
werr: /* Write error. */
/* Set 'error' only if not already set by rdbSaveRio() call. */
if (error && *error == 0) *error = errno;
stopSaving(0);
return C_ERR;
}
```
In this PR, I refine the purpose of rdb saving with accurate flags.
2024-02-01 19:41:02 +08:00
/* This helper function is only used for diskless replication.
* This is just a wrapper to rdbSaveRio ( ) that additionally adds a prefix
2014-10-14 10:11:26 +02:00
* and a suffix to the generated RDB dump . The prefix is :
*
* $ EOF : < 40 bytes unguessable hex string > \ r \ n
*
* While the suffix is the 40 bytes hex string we announced in the prefix .
* This way processes receiving the payload can understand when it ends
* without doing any processing of the content . */
2022-01-02 09:39:01 +02:00
int rdbSaveRioWithEOFMark ( int req , rio * rdb , int * error , rdbSaveInfo * rsi ) {
2015-07-27 09:41:48 +02:00
char eofmark [ RDB_EOF_MARK_SIZE ] ;
2014-10-14 10:11:26 +02:00
2019-10-29 17:59:09 +02:00
startSaving ( RDBFLAGS_REPLICATION ) ;
2015-07-27 09:41:48 +02:00
getRandomHexChars ( eofmark , RDB_EOF_MARK_SIZE ) ;
2014-10-14 10:11:26 +02:00
if ( error ) * error = 0 ;
if ( rioWrite ( rdb , " $EOF: " , 5 ) = = 0 ) goto werr ;
2015-07-27 09:41:48 +02:00
if ( rioWrite ( rdb , eofmark , RDB_EOF_MARK_SIZE ) = = 0 ) goto werr ;
2014-10-14 10:11:26 +02:00
if ( rioWrite ( rdb , " \r \n " , 2 ) = = 0 ) goto werr ;
2024-02-01 19:41:02 +08:00
if ( rdbSaveRio ( req , rdb , error , RDBFLAGS_REPLICATION , rsi ) = = C_ERR ) goto werr ;
2015-07-27 09:41:48 +02:00
if ( rioWrite ( rdb , eofmark , RDB_EOF_MARK_SIZE ) = = 0 ) goto werr ;
2019-10-29 17:59:09 +02:00
stopSaving ( 1 ) ;
2015-07-26 23:17:55 +02:00
return C_OK ;
2014-10-14 10:11:26 +02:00
werr : /* Write error. */
/* Set 'error' only if not already set by rdbSaveRio() call. */
if ( error & & * error = = 0 ) * error = errno ;
2019-10-29 17:59:09 +02:00
stopSaving ( 0 ) ;
2015-07-26 23:17:55 +02:00
return C_ERR ;
2014-10-14 10:11:26 +02:00
}
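To illustrate the framing described above: the receiver remembers the 40-byte mark from the `$EOF:<mark>\r\n` prefix and knows the payload is complete once the buffered data ends with that same mark. The sketch below is a hypothetical helper, not the replica's actual parser.
```c
#include <string.h>
#include <stddef.h>

#define EOF_MARK_SIZE 40   /* same length as RDB_EOF_MARK_SIZE */

/* 'buf'/'len' hold everything received after the "\r\n" that ends the
 * "$EOF:<mark>" prefix. Returns the RDB payload length once the stream is
 * complete (the buffered data ends with the mark), or -1 if more data is
 * still expected. */
static long payload_len_if_complete(const char *buf, size_t len,
                                    const char eofmark[EOF_MARK_SIZE]) {
    if (len < EOF_MARK_SIZE) return -1;
    if (memcmp(buf + len - EOF_MARK_SIZE, eofmark, EOF_MARK_SIZE) != 0) return -1;
    return (long)(len - EOF_MARK_SIZE);
}
```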
Add RM_RdbLoad and RM_RdbSave module API functions (#11852)
Add `RM_RdbLoad()` and `RM_RdbSave()` to load/save RDB files from the module API.
In our use case, we have our clustering implementation as a module. As part of this
implementation, the module needs to trigger RDB save operation at specific points.
Also, this module delivers RDB files to other nodes (not using Redis' replication).
When a node receives an RDB file, it should be able to load the RDB. Currently,
there is no module API to save/load RDB files.
This PR adds four new APIs:
```c
RedisModuleRdbStream *RM_RdbStreamCreateFromFile(const char *filename);
void RM_RdbStreamFree(RedisModuleRdbStream *stream);
int RM_RdbLoad(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags);
int RM_RdbSave(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags);
```
The first step is to create a `RedisModuleRdbStream` object. This PR provides a function to
create a RedisModuleRdbStream from a filename (you can load/save the RDB by filename).
In the future, this API can be extended if needed:
e.g., `RM_RdbStreamCreateFromFd()`, `RM_RdbStreamCreateFromSocket()` to save/load
RDB from an `fd` or a `socket`.
Usage:
```c
/* Save RDB */
RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile("example.rdb");
RedisModule_RdbSave(ctx, stream, 0);
RedisModule_RdbStreamFree(stream);
/* Load RDB */
RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile("example.rdb");
RedisModule_RdbLoad(ctx, stream, 0);
RedisModule_RdbStreamFree(stream);
```
2023-04-09 12:07:32 +03:00
static int rdbSaveInternal ( int req , const char * filename , rdbSaveInfo * rsi , int rdbflags ) {
2016-02-15 16:14:56 +01:00
char cwd [ MAXPATHLEN ] ; /* Current working dir path for error messages. */
2014-10-07 12:56:23 +02:00
rio rdb ;
2014-11-13 23:35:10 -05:00
int error = 0 ;
2023-04-09 12:07:32 +03:00
int saved_errno ;
2022-06-21 00:17:23 +08:00
char * err_op ; /* For a detailed log */
2014-10-07 12:56:23 +02:00
2023-04-09 12:07:32 +03:00
FILE * fp = fopen ( filename , " w " ) ;
2014-10-07 12:56:23 +02:00
if ( ! fp ) {
2023-04-09 12:07:32 +03:00
saved_errno = errno ;
2021-11-24 17:01:39 +03:00
char * str_err = strerror ( errno ) ;
2016-02-15 16:14:56 +01:00
char * cwdp = getcwd ( cwd , MAXPATHLEN ) ;
serverLog ( LL_WARNING ,
2021-11-24 17:01:39 +03:00
" Failed opening the temp RDB file %s (in server root dir %s) "
2016-02-15 16:14:56 +01:00
" for saving: %s " ,
2023-04-09 12:07:32 +03:00
filename ,
2016-02-15 16:14:56 +01:00
cwdp ? cwdp : " unknown " ,
2021-11-24 17:01:39 +03:00
str_err ) ;
2023-04-09 12:07:32 +03:00
errno = saved_errno ;
2015-07-26 23:17:55 +02:00
return C_ERR ;
2014-10-07 12:56:23 +02:00
}
rioInitWithFile ( & rdb , fp ) ;
2018-03-16 00:44:50 +08:00
Reclaim page cache of RDB file (#11248)
# Background
The RDB file is usually generated and used once and seldom used again, but its content would reside in the page cache until the OS evicts it. A potential problem is that once free memory is exhausted, the OS has to reclaim some memory from the page cache or swap anonymous pages out, which may cause jitter in the Redis service.
Consider a concrete scenario: a high-capacity machine hosts many Redis instances, and we're upgrading all of them together. The page cache on the host machine grows as RDBs are generated. Once free memory drops below the low watermark (which is more likely to happen on older Linux kernels like 3.10, before [watermark_scale_factor](https://lore.kernel.org/lkml/1455813719-2395-1-git-send-email-hannes@cmpxchg.org/) was introduced, since the `low watermark` is linear to the `min watermark` and there isn't much buffer space for `kswapd` to be woken up to reclaim memory), a `direct reclaim` happens, which means the process stalls waiting for memory allocation.
# What the PR does
The PR introduces a capability to reclaim the page cache when the RDB is operated on. Generally there are two cases: reading and writing the RDB. For reads it's a little messy to address incremental reclaim, so the reclaim is done in one go in the background after the load is finished, to avoid blocking the work thread. For writes, incremental reclaim amortizes the work of reclaiming, so there is no need to put it in the background, and the peak page-cache watermark is reduced this way.
Two cases are addressed specially, replication and restart, for both of which the cache is leveraged to speed up processing, so the reclaim is postponed to the right time. To do this, a flag is added to `rdbSave` and `rdbLoad` to control whether the cache needs to be kept, with the default value false.
# Something worth noting
1. Though `posix_fadvise` is a POSIX standard, only a few platforms support it, e.g. Linux and FreeBSD 10.0.
2. In Linux, `posix_fadvise` only takes effect on written-back pages, so a `sync` (or `fsync`, `fdatasync`) is needed to flush the dirty pages before `posix_fadvise` if we reclaim the write cache.
# About tests
A unit test is added to verify the effect of `posix_fadvise`.
In the integration tests, the overall cache increase is checked, and the cache backed by the RDB is checked by a specific TCL test executed in an isolated GitHub Actions job.
2023-02-12 15:23:29 +08:00
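As a minimal illustration of the reclaim technique described in the commit message above (flush dirty pages first, then advise the kernel to drop the file's cached pages), here is a sketch assuming a platform that defines POSIX_FADV_DONTNEED; it is not the reclaimFilePageCache() helper used further below.
```c
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical helper, not reclaimFilePageCache(): flush dirty pages, then
 * advise the kernel that this file's cached pages won't be needed again. */
static int drop_written_page_cache(int fd) {
#if defined(POSIX_FADV_DONTNEED)
    if (fdatasync(fd) == -1) return -1;   /* fadvise only affects clean pages */
    /* offset 0 and len 0 mean "the whole file". */
    return posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
#else
    (void)fd;
    return 0;   /* platform without posix_fadvise: do nothing */
#endif
}
```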
if ( server . rdb_save_incremental_fsync ) {
2018-03-16 00:44:50 +08:00
rioSetAutoSync ( & rdb , REDIS_AUTOSYNC_BYTES ) ;
2023-02-12 15:23:29 +08:00
if ( ! ( rdbflags & RDBFLAGS_KEEP_CACHE ) ) rioSetReclaimCache ( & rdb , 1 ) ;
}
2018-03-16 00:44:50 +08:00
2023-02-12 15:23:29 +08:00
if ( rdbSaveRio ( req , & rdb , & error , rdbflags , rsi ) = = C_ERR ) {
2014-10-07 12:56:23 +02:00
errno = error ;
2022-06-21 00:17:23 +08:00
err_op = " rdbSaveRio " ;
2014-10-07 12:56:23 +02:00
goto werr ;
}
2012-04-09 22:40:41 +02:00
2010-06-22 00:07:48 +02:00
/* Make sure data will not remain on the OS's output buffers */
2022-06-21 00:17:23 +08:00
if ( fflush ( fp ) ) { err_op = " fflush " ; goto werr ; }
if ( fsync ( fileno ( fp ) ) ) { err_op = " fsync " ; goto werr ; }
Reclaim page cache of RDB file (#11248)
2023-02-12 15:23:29 +08:00
if ( ! ( rdbflags & RDBFLAGS_KEEP_CACHE ) & & reclaimFilePageCache ( fileno ( fp ) , 0 , 0 ) = = - 1 ) {
serverLog ( LL_NOTICE , " Unable to reclaim cache after saving RDB: %s " , strerror ( errno ) ) ;
}
2022-06-21 00:17:23 +08:00
if ( fclose ( fp ) ) { fp = NULL ; err_op = " fclose " ; goto werr ; }
Add RM_RdbLoad and RM_RdbSave module API functions (#11852)
Add `RM_RdbLoad()` and `RM_RdbSave()` to load/save RDB files from the module API.
In our use case, we have our clustering implementation as a module. As part of this
implementation, the module needs to trigger RDB save operation at specific points.
Also, this module delivers RDB files to other nodes (not using Redis' replication).
When a node receives an RDB file, it should be able to load the RDB. Currently,
there is no module API to save/load RDB files.
This PR adds four new APIs:
```c
RedisModuleRdbStream *RM_RdbStreamCreateFromFile(const char *filename);
void RM_RdbStreamFree(RedisModuleRdbStream *stream);
int RM_RdbLoad(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags);
int RM_RdbSave(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags);
```
The first step is to create a `RedisModuleRdbStream` object. This PR provides a function to
create a `RedisModuleRdbStream` from a filename, so an RDB can be loaded from or saved to that file.
In the future, this API can be extended if needed:
e.g., `RM_RdbStreamCreateFromFd()`, `RM_RdbStreamCreateFromSocket()` to save/load
RDB from an `fd` or a `socket`.
Usage:
```c
/* Save RDB */
RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile("example.rdb");
RedisModule_RdbSave(ctx, stream, 0);
RedisModule_RdbStreamFree(stream);
/* Load RDB */
RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile("example.rdb");
RedisModule_RdbLoad(ctx, stream, 0);
RedisModule_RdbStreamFree(stream);
```
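As a hedged follow-up to the usage snippet above, the same calls wrapped in a module command handler; the command name, file name, and error handling are illustrative and not part of the PR:
```c
#include "redismodule.h"

int SaveSnapshot_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    if (argc != 1) return RedisModule_WrongArity(ctx);

    RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile("example.rdb");
    int ret = RedisModule_RdbSave(ctx, stream, 0);
    RedisModule_RdbStreamFree(stream);

    if (ret != REDISMODULE_OK)
        return RedisModule_ReplyWithError(ctx, "ERR saving RDB failed");
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}
```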
2023-04-09 12:07:32 +03:00
return C_OK ;
werr :
saved_errno = errno ;
serverLog ( LL_WARNING , " Write error while saving DB to the disk(%s): %s " , err_op , strerror ( errno ) ) ;
if ( fp ) fclose ( fp ) ;
unlink ( filename ) ;
errno = saved_errno ;
return C_ERR ;
}
/* Save DB to the file. Similar to rdbSave() but this function won't use a
* temporary file and won ' t update the metrics . */
int rdbSaveToFile ( const char * filename ) {
startSaving ( RDBFLAGS_NONE ) ;
if ( rdbSaveInternal ( SLAVE_REQ_NONE , filename , NULL , RDBFLAGS_NONE ) ! = C_OK ) {
int saved_errno = errno ;
stopSaving ( 0 ) ;
errno = saved_errno ;
return C_ERR ;
}
stopSaving ( 1 ) ;
return C_OK ;
}
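A hedged usage sketch for the helper above; the call site and path are hypothetical:
```c
/* Write a one-off snapshot to an explicit path, without touching the
 * configured dbfilename, the temp-file dance, or the save metrics. */
if (rdbSaveToFile("/tmp/adhoc-snapshot.rdb") != C_OK) {
    serverLog(LL_WARNING, "Ad-hoc snapshot failed: %s", strerror(errno));
}
```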
/* Save the DB on disk. Return C_ERR on error, C_OK on success. */
int rdbSave ( int req , char * filename , rdbSaveInfo * rsi , int rdbflags ) {
char tmpfile [ 256 ] ;
char cwd [ MAXPATHLEN ] ; /* Current working dir path for error messages. */
Refine the purpose of rdb saving with accurate flags (#12925)
In Redis, an RDB is mainly produced in three scenarios:
- backups, via the `bgsave` and `save` commands
- full sync in replication
- AOF rewrite, when `aof-use-rdb-preamble` is yes
We also have some RDB flags to identify the purpose of rdb saving.
```C
/* flags on the purpose of rdb save or load */
#define RDBFLAGS_NONE 0 /* No special RDB loading. */
#define RDBFLAGS_AOF_PREAMBLE (1<<0) /* Load/save the RDB as AOF preamble. */
#define RDBFLAGS_REPLICATION (1<<1) /* Load/save for SYNC. */
```
Currently, however, these flags do not always match the actual purpose of the
save. One example is `rdbSaveRioWithEOFMark`, which calls `startSaving` with
`RDBFLAGS_REPLICATION` but `rdbSaveRio` with `RDBFLAGS_NONE`:
```C
int rdbSaveRioWithEOFMark(int req, rio *rdb, int *error, rdbSaveInfo *rsi) {
char eofmark[RDB_EOF_MARK_SIZE];
startSaving(RDBFLAGS_REPLICATION);
getRandomHexChars(eofmark,RDB_EOF_MARK_SIZE);
if (error) *error = 0;
if (rioWrite(rdb,"$EOF:",5) == 0) goto werr;
if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
if (rioWrite(rdb,"\r\n",2) == 0) goto werr;
if (rdbSaveRio(req,rdb,error,RDBFLAGS_NONE,rsi) == C_ERR) goto werr;
if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;
stopSaving(1);
return C_OK;
werr: /* Write error. */
/* Set 'error' only if not already set by rdbSaveRio() call. */
if (error && *error == 0) *error = errno;
stopSaving(0);
return C_ERR;
}
```
In this PR, I make the flags accurately reflect the purpose of each RDB save.
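As a hedged sketch of what that refinement means for the call shown above (the exact shape in the PR may differ, e.g. the flags might instead be passed in as a parameter), the flag handed to `rdbSaveRio` would now agree with the one given to `startSaving`:
```c
/* Inside rdbSaveRioWithEOFMark(): this save is for replication, so say so. */
if (rdbSaveRio(req, rdb, error, RDBFLAGS_REPLICATION, rsi) == C_ERR) goto werr;
```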
2024-02-01 19:41:02 +08:00
startSaving ( rdbflags ) ;
Add RM_RdbLoad and RM_RdbSave module API functions (#11852)
2023-04-09 12:07:32 +03:00
snprintf ( tmpfile , 256 , " temp-%d.rdb " , ( int ) getpid ( ) ) ;
if ( rdbSaveInternal ( req , tmpfile , rsi , rdbflags ) ! = C_OK ) {
stopSaving ( 0 ) ;
return C_ERR ;
}
2020-09-24 11:17:53 -04:00
2010-06-22 00:07:48 +02:00
/* Use RENAME to make sure the DB file is changed atomically only
* if the generate DB file is ok . */
if ( rename ( tmpfile , filename ) = = - 1 ) {
2021-11-24 17:01:39 +03:00
char * str_err = strerror ( errno ) ;
2016-02-15 16:14:56 +01:00
char * cwdp = getcwd ( cwd , MAXPATHLEN ) ;
serverLog ( LL_WARNING ,
" Error moving temp DB file %s on the final "
" destination %s (in server root dir %s): %s " ,
tmpfile ,
filename ,
cwdp ? cwdp : " unknown " ,
2021-11-24 17:01:39 +03:00
str_err ) ;
2010-06-22 00:07:48 +02:00
unlink ( tmpfile ) ;
2019-10-29 17:59:09 +02:00
stopSaving ( 0 ) ;
2015-07-26 23:17:55 +02:00
return C_ERR ;
2010-06-22 00:07:48 +02:00
}
Add RM_RdbLoad and RM_RdbSave module API functions (#11852)
2023-04-09 12:07:32 +03:00
if ( fsyncFileDir ( filename ) ! = 0 ) {
serverLog ( LL_WARNING ,
" Failed to fsync directory while saving DB: %s " , strerror ( errno ) ) ;
stopSaving ( 0 ) ;
return C_ERR ;
}
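For context, fsyncing the containing directory after the rename usually looks like the following; this is a hedged sketch of what a helper like `fsyncFileDir` presumably does, not its actual implementation:
```c
#include <fcntl.h>
#include <libgen.h>
#include <limits.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical helper: open the parent directory of 'filename' and fsync it,
 * so the rename() of the temp RDB file is durable across a crash. */
static int fsync_parent_dir(const char *filename) {
    char tmp[PATH_MAX];
    strncpy(tmp, filename, sizeof(tmp) - 1);
    tmp[sizeof(tmp) - 1] = '\0';
    int fd = open(dirname(tmp), O_RDONLY);
    if (fd == -1) return -1;
    int ret = fsync(fd);
    close(fd);
    return ret;
}
```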
2016-02-15 16:14:56 +01:00
2015-07-27 09:41:48 +02:00
serverLog ( LL_NOTICE , " DB saved on disk " ) ;
2010-06-22 00:07:48 +02:00
server . dirty = 0 ;
server . lastsave = time ( NULL ) ;
2015-07-26 23:17:55 +02:00
server . lastbgsave_status = C_OK ;
2019-10-29 17:59:09 +02:00
stopSaving ( 1 ) ;
2015-07-26 23:17:55 +02:00
return C_OK ;
2010-06-22 00:07:48 +02:00
}
Reclaim page cache of RDB file (#11248)
2023-02-12 15:23:29 +08:00
int rdbSaveBackground ( int req , char * filename , rdbSaveInfo * rsi , int rdbflags ) {
2010-06-22 00:07:48 +02:00
pid_t childpid ;
2019-09-27 12:03:09 +02:00
if ( hasActiveChildProcess ( ) ) return C_ERR ;
2022-02-17 14:32:48 +02:00
server . stat_rdb_saves + + ;
2011-01-05 18:38:31 +01:00
2010-08-30 10:32:32 +02:00
server . dirty_before_bgsave = server . dirty ;
2013-04-02 14:05:50 +02:00
server . lastbgsave_try = time ( NULL ) ;
2011-01-05 18:38:31 +01:00
2024-04-04 01:26:33 +07:00
if ( ( childpid = serverFork ( CHILD_TYPE_RDB ) ) = = 0 ) {
2011-01-05 18:38:31 +01:00
int retval ;
2010-06-22 00:07:48 +02:00
/* Child */
2024-04-04 01:26:33 +07:00
serverSetProcTitle ( " redis-rdb-bgsave " ) ;
serverSetCpuAffinity ( server . bgsave_cpulist ) ;
Reclaim page cache of RDB file (#11248)
2023-02-12 15:23:29 +08:00
retval = rdbSave ( req , filename , rsi , rdbflags ) ;
2015-07-26 23:17:55 +02:00
if ( retval = = C_OK ) {
2021-02-16 16:06:51 +02:00
sendChildCowInfo ( CHILD_INFO_TYPE_RDB_COW_SIZE , " RDB " ) ;
2012-11-19 12:02:08 +01:00
}
2015-07-26 23:17:55 +02:00
exitFromChild ( ( retval = = C_OK ) ? 0 : 1 ) ;
2010-06-22 00:07:48 +02:00
} else {
/* Parent */
if ( childpid = = - 1 ) {
2015-07-26 23:17:55 +02:00
server . lastbgsave_status = C_ERR ;
2015-07-27 09:41:48 +02:00
serverLog ( LL_WARNING , " Can't save in background: fork: %s " ,
2010-06-22 00:07:48 +02:00
strerror ( errno ) ) ;
2015-07-26 23:17:55 +02:00
return C_ERR ;
2010-06-22 00:07:48 +02:00
}
2020-12-13 17:09:54 +02:00
serverLog ( LL_NOTICE , " Background saving started by pid %ld " , ( long ) childpid ) ;
2012-05-25 12:11:30 +02:00
server . rdb_save_time_start = time ( NULL ) ;
2015-07-27 09:41:48 +02:00
server . rdb_child_type = RDB_CHILD_TYPE_DISK ;
2015-07-26 23:17:55 +02:00
return C_OK ;
2010-06-22 00:07:48 +02:00
}
2015-07-26 23:17:55 +02:00
return C_OK ; /* unreached */
2010-06-22 00:07:48 +02:00
}
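A hedged sketch of a typical call site for the function above, roughly how the BGSAVE command path drives it; `c` is assumed to be the client issuing the command and the reply handling is illustrative:
```c
rdbSaveInfo rsi, *rsiptr;
rsiptr = rdbPopulateSaveInfo(&rsi);

if (rdbSaveBackground(SLAVE_REQ_NONE, server.rdb_filename, rsiptr, RDBFLAGS_NONE) == C_OK) {
    addReplyStatus(c, "Background saving started");
} else {
    addReplyErrorObject(c, shared.err);
}
```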
2020-09-17 23:20:10 +08:00
/* Note that we may call this function from the signal handler 'sigShutdownHandler',
* so we need to guarantee that all functions we call are async - signal - safe .
2021-06-10 20:39:33 +08:00
* If we call this function from the signal handler , we won ' t call bg_unlink , which
2020-09-17 23:20:10 +08:00
* is not async - signal - safe . */
void rdbRemoveTempFile ( pid_t childpid , int from_signal ) {
2010-06-22 00:07:48 +02:00
char tmpfile [ 256 ] ;
2020-09-17 23:20:10 +08:00
char pid [ 32 ] ;
2021-06-10 20:39:33 +08:00
/* Generate temp rdb file name using async-signal safe functions. */
2022-07-18 10:56:26 +03:00
ll2string ( pid , sizeof ( pid ) , childpid ) ;
2024-04-10 16:50:52 -04:00
valkey_strlcpy ( tmpfile , " temp- " , sizeof ( tmpfile ) ) ;
2024-05-06 16:09:01 +09:00
valkey_strlcat ( tmpfile , pid , sizeof ( tmpfile ) ) ;
valkey_strlcat ( tmpfile , " .rdb " , sizeof ( tmpfile ) ) ;
2020-09-17 23:20:10 +08:00
if ( from_signal ) {
/* bg_unlink is not async-signal-safe, but in this case we don't really
* need to close the fd , it ' ll be released when the process exists . */
int fd = open ( tmpfile , O_RDONLY | O_NONBLOCK ) ;
UNUSED ( fd ) ;
unlink ( tmpfile ) ;
} else {
bg_unlink ( tmpfile ) ;
}
2010-06-22 00:07:48 +02:00
}
RDB modules values serialization format version 2.
The original RDB serialization format was not parsable without the
module loaded, because the structure was managed only by the module
itself. Moreover RDB is a streaming protocol in the sense that it is
both produced in an append-only fashion, and is also sometimes directly
sent to the socket (in the case of diskless replication).
The fact that module values cannot be parsed without the relevant
module loaded is a problem in many ways: RDB checking tools must have
the modules loaded even for doing things not involving the value at all,
like splitting an RDB into N RDBs by key or alike, or just checking the
RDB for sanity.
In theory module values could be just a blob of data with a prefixed
length in order for us to be able to skip it. However prefixing the values
with a length would mean one of the following:
1. Being able to write some data at a previous offset. This breaks
streaming.
2. Buffering values before outputting them. This hurts performance.
3. Having some chunked RDB output format. This breaks simplicity.
Moreover, the above solutions still make module values a totally opaque
matter, with the following problems:
1. The RDB check tool can just skip the value without being able to at
least check the general structure. For datasets composed mostly of
module values this means checking only the outer level of the RDB,
without actually doing any check on most of the data itself.
2. It is not possible to do any recovery or processing of data for which a
module no longer exists in the future, or is unknown.
So this commit implements a different solution. The modules RDB
serialization API is composed of well defined calls to store integers,
floats, doubles or strings. After this commit, the parts generated by
the module API have a one-byte prefix for each of the above emitted
parts, and there is a final EOF byte as well. So even if we don't know
exactly how to interpret a module value, we can always parse it at a
high level, check the overall structure, understand the types used to
store the information, and easily skip the whole value.
The change is backward compatible: older RDB files can still be loaded
since the new encoding has a new RDB type: MODULE_2 (of value 7).
The commit also implements the ability to check RDB files for sanity,
taking advantage of the new feature.
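To make the format concrete, a hedged sketch of a module type's rdb_save callback; every Save call below becomes one opcode-prefixed part in the MODULE_2 payload (the struct and field names are hypothetical), and the serializer appends the EOF opcode after the callback returns:
```c
void MyTypeRdbSave(RedisModuleIO *io, void *value) {
    struct mytype *t = value;                              /* hypothetical module type     */
    RedisModule_SaveUnsigned(io, t->count);                /* -> RDB_MODULE_OPCODE_UINT    */
    RedisModule_SaveStringBuffer(io, t->name, t->namelen); /* -> RDB_MODULE_OPCODE_STRING  */
    RedisModule_SaveDouble(io, t->score);                  /* -> RDB_MODULE_OPCODE_DOUBLE  */
}
```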
2017-06-27 13:09:33 +02:00
/* This function is called by rdbLoadObject() when the code is in RDB-check
* mode and we find a module value of type 2 that can be parsed without
* the need of the actual module . The value is parsed for errors , finally
2024-04-09 01:24:03 -07:00
* a dummy Object is returned just to conform to the API . */
RDB modules values serialization format version 2.
2017-06-27 13:09:33 +02:00
robj * rdbLoadCheckModuleValue ( rio * rdb , char * modulename ) {
uint64_t opcode ;
while ( ( opcode = rdbLoadLen ( rdb , NULL ) ) ! = RDB_MODULE_OPCODE_EOF ) {
if ( opcode = = RDB_MODULE_OPCODE_SINT | |
opcode = = RDB_MODULE_OPCODE_UINT )
{
uint64_t len ;
if ( rdbLoadLenByRef ( rdb , NULL , & len ) = = - 1 ) {
2020-11-02 09:35:37 +02:00
rdbReportCorruptRDB (
RDB modules values serialization format version 2.
2017-06-27 13:09:33 +02:00
" Error reading integer from module %s value " , modulename ) ;
}
} else if ( opcode = = RDB_MODULE_OPCODE_STRING ) {
robj * o = rdbGenericLoadStringObject ( rdb , RDB_LOAD_NONE , NULL ) ;
if ( o = = NULL ) {
2020-11-02 09:35:37 +02:00
rdbReportCorruptRDB (
RDB modules values serialization format version 2.
2017-06-27 13:09:33 +02:00
" Error reading string from module %s value " , modulename ) ;
}
decrRefCount ( o ) ;
} else if ( opcode = = RDB_MODULE_OPCODE_FLOAT ) {
float val ;
if ( rdbLoadBinaryFloatValue ( rdb , & val ) = = - 1 ) {
2020-11-02 09:35:37 +02:00
rdbReportCorruptRDB (
RDB modules values serialization format version 2.
2017-06-27 13:09:33 +02:00
" Error reading float from module %s value " , modulename ) ;
}
} else if ( opcode = = RDB_MODULE_OPCODE_DOUBLE ) {
double val ;
if ( rdbLoadBinaryDoubleValue ( rdb , & val ) = = - 1 ) {
2020-11-02 09:35:37 +02:00
rdbReportCorruptRDB (
RDB modules values serialization format version 2.
2017-06-27 13:09:33 +02:00
" Error reading double from module %s value " , modulename ) ;
}
}
}
return createStringObject ( " module-dummy-value " , 18 ) ;
}
2021-09-09 23:18:53 +08:00
/* callback for hashZiplistConvertAndValidateIntegrity.
* Check that the ziplist doesn ' t have duplicate hash field names .
* The ziplist element pointed by ' p ' will be converted and stored into listpack . */
static int _ziplistPairsEntryConvertAndValidate ( unsigned char * p , unsigned int head_count , void * userdata ) {
unsigned char * str ;
unsigned int slen ;
long long vll ;
struct {
long count ;
dict * fields ;
unsigned char * * lp ;
} * data = userdata ;
if ( data - > fields = = NULL ) {
data - > fields = dictCreate ( & hashDictType ) ;
dictExpand ( data - > fields , head_count / 2 ) ;
}
if ( ! ziplistGet ( p , & str , & slen , & vll ) )
return 0 ;
/* Even records are field names, add to dict and check that's not a dup */
if ( ( ( data - > count ) & 1 ) = = 0 ) {
sds field = str ? sdsnewlen ( str , slen ) : sdsfromlonglong ( vll ) ;
if ( dictAdd ( data - > fields , field , NULL ) ! = DICT_OK ) {
/* Duplicate, return an error */
sdsfree ( field ) ;
return 0 ;
}
}
if ( str ) {
* ( data - > lp ) = lpAppend ( * ( data - > lp ) , ( unsigned char * ) str , slen ) ;
} else {
* ( data - > lp ) = lpAppendInteger ( * ( data - > lp ) , vll ) ;
}
( data - > count ) + + ;
return 1 ;
}
/* Validate the integrity of the data structure while converting it to
* listpack and storing it at ' lp ' .
* The function is safe to call on non - validated ziplists , it returns 0
* when encounter an integrity validation issue . */
int ziplistPairsConvertAndValidateIntegrity ( unsigned char * zl , size_t size , unsigned char * * lp ) {
/* Keep track of the field names to locate duplicate ones */
struct {
long count ;
dict * fields ; /* Initialisation at the first callback. */
unsigned char * * lp ;
} data = { 0 , NULL , lp } ;
int ret = ziplistValidateIntegrity ( zl , size , 1 , _ziplistPairsEntryConvertAndValidate , & data ) ;
/* make sure we have an even number of records. */
if ( data . count & 1 )
ret = 0 ;
if ( data . fields ) dictRelease ( data . fields ) ;
return ret ;
}
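A hedged usage sketch for the validator above; `encoded` and `encoded_len` are assumed to come from the surrounding loader, which does essentially this when it meets an old hash-ziplist encoding:
```c
unsigned char *lp = lpNew(0);
if (!ziplistPairsConvertAndValidateIntegrity(encoded, encoded_len, &lp)) {
    rdbReportCorruptRDB("Hash ziplist integrity check failed");
    lpFree(lp);
    return NULL;
}
/* 'lp' now holds the converted, duplicate-free listpack. */
```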
2021-11-24 19:34:13 +08:00
/* callback for ziplistValidateIntegrity.
* The ziplist element pointed by ' p ' will be converted and stored into listpack . */
static int _ziplistEntryConvertAndValidate ( unsigned char * p , unsigned int head_count , void * userdata ) {
UNUSED ( head_count ) ;
unsigned char * str ;
unsigned int slen ;
long long vll ;
unsigned char * * lp = ( unsigned char * * ) userdata ;
if ( ! ziplistGet ( p , & str , & slen , & vll ) ) return 0 ;
if ( str )
* lp = lpAppend ( * lp , ( unsigned char * ) str , slen ) ;
else
* lp = lpAppendInteger ( * lp , vll ) ;
return 1 ;
}
/* callback for ziplistValidateIntegrity.
* The ziplist element pointed by ' p ' will be converted and stored into quicklist . */
static int _listZiplistEntryConvertAndValidate ( unsigned char * p , unsigned int head_count , void * userdata ) {
UNUSED ( head_count ) ;
unsigned char * str ;
unsigned int slen ;
long long vll ;
char longstr [ 32 ] = { 0 } ;
quicklist * ql = ( quicklist * ) userdata ;
if ( ! ziplistGet ( p , & str , & slen , & vll ) ) return 0 ;
if ( ! str ) {
/* Write the longval as a string so we can re-add it */
slen = ll2string ( longstr , sizeof ( longstr ) , vll ) ;
str = ( unsigned char * ) longstr ;
}
quicklistPushTail ( ql , str , slen ) ;
return 1 ;
}
2021-09-09 23:18:53 +08:00
/* callback for to check the listpack doesn't have duplicate records */
2022-11-09 18:50:07 +01:00
static int _lpEntryValidation ( unsigned char * p , unsigned int head_count , void * userdata ) {
2021-09-09 23:18:53 +08:00
struct {
2022-11-09 18:50:07 +01:00
int pairs ;
2021-09-09 23:18:53 +08:00
long count ;
dict * fields ;
} * data = userdata ;
if ( data - > fields = = NULL ) {
data - > fields = dictCreate ( & hashDictType ) ;
2022-11-09 18:50:07 +01:00
dictExpand ( data - > fields , data - > pairs ? head_count / 2 : head_count ) ;
2021-09-09 23:18:53 +08:00
}
2022-11-09 18:50:07 +01:00
/* If we're checking pairs, then even records are field names. Otherwise
* we ' re checking all elements . Add to dict and check that ' s not a dup */
if ( ! data - > pairs | | ( ( data - > count ) & 1 ) = = 0 ) {
2021-09-09 23:18:53 +08:00
unsigned char * str ;
int64_t slen ;
unsigned char buf [ LP_INTBUF_SIZE ] ;
str = lpGet ( p , & slen , buf ) ;
sds field = sdsnewlen ( str , slen ) ;
if ( dictAdd ( data - > fields , field , NULL ) ! = DICT_OK ) {
/* Duplicate, return an error */
sdsfree ( field ) ;
return 0 ;
}
}
( data - > count ) + + ;
return 1 ;
}
/* Validate the integrity of the listpack structure.
* when ` deep ` is 0 , only the integrity of the header is validated .
2022-11-09 18:50:07 +01:00
* when ` deep ` is 1 , we scan all the entries one by one .
* when ` pairs ` is 0 , all elements need to be unique ( it ' s a set )
* when ` pairs ` is 1 , odd elements need to be unique ( it ' s a key - value map ) */
int lpValidateIntegrityAndDups ( unsigned char * lp , size_t size , int deep , int pairs ) {
2021-09-09 23:18:53 +08:00
if ( ! deep )
return lpValidateIntegrity ( lp , size , 0 , NULL , NULL ) ;
/* Keep track of the field names to locate duplicate ones */
struct {
2022-11-09 18:50:07 +01:00
int pairs ;
2021-09-09 23:18:53 +08:00
long count ;
dict * fields ; /* Initialisation at the first callback. */
2022-11-09 18:50:07 +01:00
} data = { pairs , 0 , NULL } ;
2021-09-09 23:18:53 +08:00
2022-11-09 18:50:07 +01:00
int ret = lpValidateIntegrity ( lp , size , 1 , _lpEntryValidation , & data ) ;
2021-09-09 23:18:53 +08:00
/* make sure we have an even number of records. */
2022-11-09 18:50:07 +01:00
if ( pairs & & data . count & 1 )
2021-09-09 23:18:53 +08:00
ret = 0 ;
if ( data . fields ) dictRelease ( data . fields ) ;
return ret ;
}
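A hedged usage sketch; `lp` and `encoded_len` are assumed inputs, and `deep_integrity_validation` is the flag computed further down in `rdbLoadObject` from the sanitize-dump-payload setting:
```c
/* Validate a listpack that encodes field/value pairs (pairs=1): a duplicate
 * among the field positions, or a broken entry, makes the check fail. */
if (!lpValidateIntegrityAndDups(lp, encoded_len, deep_integrity_validation, 1)) {
    rdbReportCorruptRDB("Hash listpack integrity check failed");
    return NULL;
}
```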
2024-04-09 01:24:03 -07:00
/* Load an Object of the specified type from the specified file.
2021-08-06 03:42:20 +08:00
* On success a newly allocated object is returned , otherwise NULL .
* When the function returns NULL and if ' error ' is not NULL , the
* integer pointed by ' error ' is set to the type of error that occurred */
robj * rdbLoadObject ( int rdbtype , rio * rdb , sds key , int dbid , int * error ) {
2014-05-12 11:44:37 -04:00
robj * o = NULL , * ele , * dec ;
2016-09-01 11:08:44 +02:00
uint64_t len ;
2010-07-02 19:57:12 +02:00
unsigned int i ;
2010-06-22 00:07:48 +02:00
2021-08-06 03:42:20 +08:00
/* Set default error of load object, it will be set to 0 on success. */
if ( error ) * error = RDB_LOAD_ERR_OTHER ;
Sanitize dump payload: ziplist, listpack, zipmap, intset, stream
When loading an encoded payload we will at least do a shallow validation to
check that the size encoded in the payload matches the size of the
allocation.
This lets us later use the encoded size to make sure the various offsets
inside the encoded payload don't reach outside the allocation; if they do, we'll
assert/panic, but at least we won't segfault or smear memory.
We can also do 'deep' validation, which runs over all the records of the encoded
payload and validates that they don't contain invalid offsets. This lets us
detect corruption early and reject a RESTORE command rather than accepting
it and asserting (crashing) later when accessing that payload via some command.
configuration:
- adding ACL flag skip-sanitize-payload
- adding config sanitize-dump-payload [yes/no/clients]
For now, we don't have a good way to ensure MIGRATE in cluster resharding isn't
being slowed down by this sanitization, so I'm setting the default value to `no`,
but later on it should be set to `clients` by default.
changes:
- changing rdbReportError not to `exit` in the RESTORE command
- adding a new stat to be able to later check whether cluster MIGRATE is being
slowed down by sanitization.
2020-08-13 16:41:05 +03:00
int deep_integrity_validation = server . sanitize_dump_payload = = SANITIZE_DUMP_YES ;
if ( server . sanitize_dump_payload = = SANITIZE_DUMP_CLIENTS ) {
/* Skip sanitization when loading (an RDB), or getting a RESTORE command
* from either the master or a client using an ACL user with the skip - sanitize - payload flag . */
int skip = server . loading | |
( server . current_client & & ( server . current_client - > flags & CLIENT_MASTER ) ) ;
if ( ! skip & & server . current_client & & server . current_client - > user )
skip = ! ! ( server . current_client - > user - > flags & USER_FLAG_SANITIZE_PAYLOAD_SKIP ) ;
deep_integrity_validation = ! skip ;
}
2015-07-27 09:41:48 +02:00
if ( rdbtype = = RDB_TYPE_STRING ) {
2010-06-22 00:07:48 +02:00
/* Read string value */
2011-05-13 17:31:00 +02:00
if ( ( o = rdbLoadEncodedStringObject ( rdb ) ) = = NULL ) return NULL ;
2023-05-30 10:43:25 +03:00
o = tryObjectEncodingEx ( o , 0 ) ;
2015-07-27 09:41:48 +02:00
} else if ( rdbtype = = RDB_TYPE_LIST ) {
2010-06-22 00:07:48 +02:00
/* Read list value */
2015-07-27 09:41:48 +02:00
if ( ( len = rdbLoadLen ( rdb , NULL ) ) = = RDB_LENERR ) return NULL ;
2021-08-06 03:42:20 +08:00
if ( len = = 0 ) goto emptykey ;
2010-06-22 00:07:48 +02:00
2024-02-08 20:36:11 +08:00
o = createQuicklistObject ( server . list_max_listpack_size , server . list_compress_depth ) ;
2010-06-22 00:07:48 +02:00
/* Load every single element of the list */
while ( len - - ) {
2020-05-03 09:31:50 +03:00
if ( ( ele = rdbLoadEncodedStringObject ( rdb ) ) = = NULL ) {
decrRefCount ( o ) ;
return NULL ;
}
2014-11-13 14:11:47 -05:00
dec = getDecodedObject ( ele ) ;
size_t len = sdslen ( dec - > ptr ) ;
2014-12-16 00:49:14 -05:00
quicklistPushTail ( o - > ptr , dec - > ptr , len ) ;
2014-11-13 14:11:47 -05:00
decrRefCount ( dec ) ;
decrRefCount ( ele ) ;
2010-06-22 00:07:48 +02:00
}
Add listpack encoding for list (#11303)
Improve memory efficiency of list keys
## Description of the feature
The new listpack encoding uses the old `list-max-listpack-size` config
to drive the conversion. We can think of it as a single quicklist node,
but without the 80 bytes of overhead (internal fragmentation included)
of the quicklist and quicklistNode structs.
For example, a list key with 5 items of 10 chars each now takes 128 bytes
instead of the 208 it used to take.
## Conversion rules
* Convert listpack to quicklist
When the listpack length or size reaches the `list-max-listpack-size` limit,
it is converted to a quicklist.
* Convert quicklist to listpack
When a quicklist has only one node and its length or size drops to half
of the `list-max-listpack-size` limit, it is converted to a listpack.
This avoids frequent conversions when we add or remove elements right at the size or length boundary.
## Interface changes
1. add a list entry param to listTypeSetIteratorDirection
When the list encoding is listpack, `listTypeIterator->lpi` points to the entry after the current one,
so when changing the direction we need to use the current node (listTypeEntry->p) to
update `listTypeIterator->lpi` to the next node in the reverse direction.
## Benchmark
### Listpack VS Quicklist with one node
* LPUSH - roughly 0.3% improvement
* LRANGE - roughly 13% improvement
### Both are quicklist
* LRANGE - roughly 3% improvement
* LRANGE without pipeline - roughly 3% improvement
From the benchmark results we can see:
1. When the list is quicklist-encoded, LRANGE improves performance by <5%.
2. When the list is listpack-encoded, LRANGE improves performance by ~13%;
the main enhancement comes from `addListListpackRangeReply()`.
## Memory usage
1M lists (key:0~key:1000000) with 5 items of 10 chars ("hellohello") each
show memory usage down by 35.49%, from 214MB to 138MB.
## Note
1. Add a conversion callback to support doing some work before conversion.
Since the quicklist iterator decompresses the current node when it is released, we can
no longer decompress the quicklist after we convert the list.
2022-11-17 02:29:46 +08:00
listTypeTryConversion ( o , LIST_CONV_AUTO , NULL , NULL ) ;
2015-07-27 09:41:48 +02:00
} else if ( rdbtype = = RDB_TYPE_SET ) {
2015-07-31 18:01:23 +02:00
/* Read Set value */
2015-07-27 09:41:48 +02:00
if ( ( len = rdbLoadLen ( rdb , NULL ) ) = = RDB_LENERR ) return NULL ;
2021-08-06 03:42:20 +08:00
if ( len = = 0 ) goto emptykey ;
2010-07-02 19:57:12 +02:00
/* Use a regular set when there are too many entries. */
2021-10-04 12:09:25 +03:00
size_t max_entries = server . set_max_intset_entries ;
if ( max_entries > = 1 < < 30 ) max_entries = 1 < < 30 ;
if ( len > max_entries ) {
2010-07-02 19:57:12 +02:00
o = createSetObject ( ) ;
/* It's faster to expand the dict to the right size asap in order
* to avoid rehashing */
2020-11-22 21:22:49 +02:00
if ( len > DICT_HT_INITIAL_SIZE & & dictTryExpand ( o - > ptr , len ) ! = DICT_OK ) {
rdbReportCorruptRDB ( " OOM in dictTryExpand %llu " , ( unsigned long long ) len ) ;
decrRefCount ( o ) ;
return NULL ;
}
2010-07-02 19:57:12 +02:00
} else {
o = createIntsetObject ( ) ;
}
2015-07-31 18:01:23 +02:00
/* Load every single element of the set */
2022-11-09 18:50:07 +01:00
size_t maxelelen = 0 , sumelelen = 0 ;
2010-07-02 19:57:12 +02:00
for ( i = 0 ; i < len ; i + + ) {
long long llval ;
2015-07-31 18:01:23 +02:00
sds sdsele ;
2020-05-03 09:31:50 +03:00
if ( ( sdsele = rdbGenericLoadStringObject ( rdb , RDB_LOAD_SDS , NULL ) ) = = NULL ) {
decrRefCount ( o ) ;
return NULL ;
}
2022-11-09 18:50:07 +01:00
size_t elelen = sdslen ( sdsele ) ;
sumelelen + = elelen ;
if ( elelen > maxelelen ) maxelelen = elelen ;
2010-07-02 19:57:12 +02:00
2015-07-26 15:28:00 +02:00
if ( o - > encoding = = OBJ_ENCODING_INTSET ) {
2015-08-04 09:20:55 +02:00
/* Fetch integer value from element. */
2015-07-31 18:01:23 +02:00
if ( isSdsRepresentableAsLongLong ( sdsele , & llval ) = = C_OK ) {
2020-11-02 09:35:37 +02:00
uint8_t success ;
o - > ptr = intsetAdd ( o - > ptr , llval , & success ) ;
if ( ! success ) {
rdbReportCorruptRDB ( " Duplicate set members detected " ) ;
decrRefCount ( o ) ;
sdsfree ( sdsele ) ;
return NULL ;
}
2022-11-09 18:50:07 +01:00
} else if ( setTypeSize ( o ) < server . set_max_listpack_entries & &
maxelelen < = server . set_max_listpack_value & &
lpSafeToAdd ( NULL , sumelelen ) )
{
/* We checked if it's safe to add one large element instead
* of many small ones . It ' s OK since lpSafeToAdd doesn ' t
* care about individual elements , only the total size . */
setTypeConvert ( o , OBJ_ENCODING_LISTPACK ) ;
2022-12-06 10:25:51 +01:00
} else if ( setTypeConvertAndExpand ( o , OBJ_ENCODING_HT , len , 0 ) ! = C_OK ) {
rdbReportCorruptRDB ( " OOM in dictTryExpand %llu " , ( unsigned long long ) len ) ;
sdsfree ( sdsele ) ;
decrRefCount ( o ) ;
return NULL ;
2010-07-02 19:57:12 +02:00
}
}
2022-11-09 18:50:07 +01:00
/* This will also be called when the set was just converted
* to a listpack encoded set . */
if ( o - > encoding = = OBJ_ENCODING_LISTPACK ) {
if ( setTypeSize ( o ) < server . set_max_listpack_entries & &
elelen < = server . set_max_listpack_value & &
lpSafeToAdd ( o - > ptr , elelen ) )
{
unsigned char * p = lpFirst ( o - > ptr ) ;
if ( p & & lpFind ( o - > ptr , p , ( unsigned char * ) sdsele , elelen , 0 ) ) {
rdbReportCorruptRDB ( " Duplicate set members detected " ) ;
decrRefCount ( o ) ;
sdsfree ( sdsele ) ;
return NULL ;
}
o - > ptr = lpAppend ( o - > ptr , ( unsigned char * ) sdsele , elelen ) ;
2022-12-06 10:25:51 +01:00
} else if ( setTypeConvertAndExpand ( o , OBJ_ENCODING_HT , len , 0 ) ! = C_OK ) {
rdbReportCorruptRDB ( " OOM in dictTryExpand %llu " ,
( unsigned long long ) len ) ;
sdsfree ( sdsele ) ;
decrRefCount ( o ) ;
return NULL ;
2022-11-09 18:50:07 +01:00
}
}
2010-07-02 19:57:12 +02:00
/* This will also be called when the set was just converted
2015-08-04 09:20:55 +02:00
* to a regular hash table encoded set . */
2015-07-26 15:28:00 +02:00
if ( o - > encoding = = OBJ_ENCODING_HT ) {
2020-08-14 16:05:34 +03:00
if ( dictAdd ( ( dict * ) o - > ptr , sdsele , NULL ) ! = DICT_OK ) {
2020-11-02 09:35:37 +02:00
rdbReportCorruptRDB ( " Duplicate set members detected " ) ;
2020-08-14 16:05:34 +03:00
decrRefCount ( o ) ;
sdsfree ( sdsele ) ;
return NULL ;
}
2010-08-26 13:18:24 +02:00
} else {
2015-07-31 18:01:23 +02:00
sdsfree ( sdsele ) ;
2010-07-02 19:57:12 +02:00
}
2010-06-22 00:07:48 +02:00
}
2016-06-01 11:55:47 +02:00
} else if ( rdbtype = = RDB_TYPE_ZSET_2 | | rdbtype = = RDB_TYPE_ZSET ) {
2021-06-10 20:39:33 +08:00
/* Read sorted set value. */
2016-09-01 11:08:44 +02:00
uint64_t zsetlen ;
2021-10-04 12:11:02 +03:00
size_t maxelelen = 0 , totelelen = 0 ;
2010-06-22 00:07:48 +02:00
zset * zs ;
2015-07-27 09:41:48 +02:00
if ( ( zsetlen = rdbLoadLen ( rdb , NULL ) ) = = RDB_LENERR ) return NULL ;
2021-08-06 03:42:20 +08:00
if ( zsetlen = = 0 ) goto emptykey ;
2010-06-22 00:07:48 +02:00
o = createZsetObject ( ) ;
zs = o - > ptr ;
2011-03-10 17:50:13 +01:00
2020-11-22 21:22:49 +02:00
if ( zsetlen > DICT_HT_INITIAL_SIZE & & dictTryExpand ( zs - > dict , zsetlen ) ! = DICT_OK ) {
rdbReportCorruptRDB ( " OOM in dictTryExpand %llu " , ( unsigned long long ) zsetlen ) ;
decrRefCount ( o ) ;
return NULL ;
}
2018-04-22 22:30:44 +08:00
2015-08-04 09:20:55 +02:00
/* Load every single element of the sorted set. */
2010-06-22 00:07:48 +02:00
while ( zsetlen - - ) {
2015-08-04 09:20:55 +02:00
sds sdsele ;
2010-09-22 18:07:52 +02:00
double score ;
zskiplistNode * znode ;
2010-06-22 00:07:48 +02:00
2020-05-03 09:31:50 +03:00
if ( ( sdsele = rdbGenericLoadStringObject ( rdb , RDB_LOAD_SDS , NULL ) ) = = NULL ) {
decrRefCount ( o ) ;
return NULL ;
}
2016-06-01 11:55:47 +02:00
if ( rdbtype = = RDB_TYPE_ZSET_2 ) {
2020-05-03 09:31:50 +03:00
if ( rdbLoadBinaryDoubleValue ( rdb , & score ) = = - 1 ) {
decrRefCount ( o ) ;
sdsfree ( sdsele ) ;
return NULL ;
}
2016-06-01 11:55:47 +02:00
} else {
2020-05-03 09:31:50 +03:00
if ( rdbLoadDoubleValue ( rdb , & score ) = = - 1 ) {
decrRefCount ( o ) ;
sdsfree ( sdsele ) ;
return NULL ;
}
2016-06-01 11:55:47 +02:00
}
2011-03-10 17:50:13 +01:00
2021-12-26 17:40:11 +08:00
if ( isnan ( score ) ) {
rdbReportCorruptRDB ( " Zset with NAN score detected " ) ;
decrRefCount ( o ) ;
sdsfree ( sdsele ) ;
return NULL ;
}
2011-03-10 17:50:13 +01:00
/* Don't care about integer-encoded strings. */
2015-08-04 09:20:55 +02:00
if ( sdslen ( sdsele ) > maxelelen ) maxelelen = sdslen ( sdsele ) ;
2021-10-04 12:11:02 +03:00
totelelen + = sdslen ( sdsele ) ;
2011-03-10 17:50:13 +01:00
2015-08-04 09:20:55 +02:00
znode = zslInsert ( zs - > zsl , score , sdsele ) ;
2020-11-02 09:35:37 +02:00
if ( dictAdd ( zs - > dict , sdsele , & znode - > score ) ! = DICT_OK ) {
rdbReportCorruptRDB ( " Duplicate zset fields detected " ) ;
decrRefCount ( o ) ;
2020-12-14 17:10:31 +02:00
/* no need to free 'sdsele', will be released by zslFree together with 'o' */
2020-11-02 09:35:37 +02:00
return NULL ;
}
2010-06-22 00:07:48 +02:00
}
2011-03-10 17:50:13 +01:00
/* Convert *after* loading, since sorted sets are not stored ordered. */
2021-09-09 23:18:53 +08:00
if ( zsetLength ( o ) < = server . zset_max_listpack_entries & &
2021-10-04 12:11:02 +03:00
maxelelen < = server . zset_max_listpack_value & &
lpSafeToAdd ( NULL , totelelen ) )
{
zsetConvert ( o , OBJ_ENCODING_LISTPACK ) ;
}
2015-07-27 09:41:48 +02:00
} else if ( rdbtype = = RDB_TYPE_HASH ) {
2016-09-01 11:08:44 +02:00
uint64_t len ;
2012-01-02 22:14:10 -08:00
int ret ;
2015-09-23 10:34:53 +02:00
sds field , value ;
2020-11-02 09:35:37 +02:00
dict * dupSearchDict = NULL ;
2012-01-02 22:14:10 -08:00
len = rdbLoadLen ( rdb , NULL ) ;
2015-07-27 09:41:48 +02:00
if ( len = = RDB_LENERR ) return NULL ;
2021-08-06 03:42:20 +08:00
if ( len = = 0 ) goto emptykey ;
2010-06-22 00:07:48 +02:00
o = createHashObject ( ) ;
2012-01-02 22:14:10 -08:00
2020-11-02 09:35:37 +02:00
/* Too many entries? Use a hash table right from the start. */
2021-08-10 14:18:49 +08:00
if ( len > server . hash_max_listpack_entries )
2015-07-26 15:28:00 +02:00
hashTypeConvert ( o , OBJ_ENCODING_HT ) ;
2020-11-02 09:35:37 +02:00
else if ( deep_integrity_validation ) {
/* In this mode, we need to guarantee that the server won't crash
* later when the ziplist is converted to a dict .
* Create a set ( dict with no values ) for a dup search .
* We can dismiss it as soon as we convert the ziplist to a hash . */
2021-08-05 08:25:58 +03:00
dupSearchDict = dictCreate ( & hashDictType ) ;
2020-11-02 09:35:37 +02:00
}
2012-01-02 22:14:10 -08:00
/* Load every field and value into the ziplist */
2021-08-10 14:18:49 +08:00
while ( o - > encoding = = OBJ_ENCODING_LISTPACK & & len > 0 ) {
2012-03-13 09:49:11 +01:00
len - - ;
2012-01-02 22:14:10 -08:00
/* Load raw strings */
2020-05-03 09:31:50 +03:00
if ( ( field = rdbGenericLoadStringObject ( rdb , RDB_LOAD_SDS , NULL ) ) = = NULL ) {
decrRefCount ( o ) ;
2020-11-02 09:35:37 +02:00
if ( dupSearchDict ) dictRelease ( dupSearchDict ) ;
2020-05-03 09:31:50 +03:00
return NULL ;
}
if ( ( value = rdbGenericLoadStringObject ( rdb , RDB_LOAD_SDS , NULL ) ) = = NULL ) {
sdsfree ( field ) ;
decrRefCount ( o ) ;
2020-11-02 09:35:37 +02:00
if ( dupSearchDict ) dictRelease ( dupSearchDict ) ;
2020-05-03 09:31:50 +03:00
return NULL ;
}
2012-01-02 22:14:10 -08:00
2020-11-02 09:35:37 +02:00
if ( dupSearchDict ) {
sds field_dup = sdsdup ( field ) ;
if ( dictAdd ( dupSearchDict , field_dup , NULL ) ! = DICT_OK ) {
rdbReportCorruptRDB ( " Hash with dup elements " ) ;
dictRelease ( dupSearchDict ) ;
decrRefCount ( o ) ;
sdsfree ( field_dup ) ;
sdsfree ( field ) ;
sdsfree ( value ) ;
return NULL ;
}
}
2012-01-02 22:14:10 -08:00
/* Convert to hash table if size threshold is exceeded */
2021-08-10 14:18:49 +08:00
if ( sdslen ( field ) > server . hash_max_listpack_value | |
2021-10-04 12:11:02 +03:00
sdslen ( value ) > server . hash_max_listpack_value | |
! lpSafeToAdd ( o - > ptr , sdslen ( field ) + sdslen ( value ) ) )
2010-06-22 00:07:48 +02:00
{
2015-07-26 15:28:00 +02:00
hashTypeConvert ( o , OBJ_ENCODING_HT ) ;
2021-10-04 12:11:02 +03:00
ret = dictAdd ( ( dict * ) o - > ptr , field , value ) ;
if ( ret = = DICT_ERR ) {
rdbReportCorruptRDB ( " Duplicate hash fields detected " ) ;
if ( dupSearchDict ) dictRelease ( dupSearchDict ) ;
sdsfree ( value ) ;
sdsfree ( field ) ;
decrRefCount ( o ) ;
return NULL ;
}
2012-01-02 22:14:10 -08:00
break ;
2010-06-22 00:07:48 +02:00
}
2021-10-04 12:11:02 +03:00
/* Add pair to listpack */
o - > ptr = lpAppend ( o - > ptr , ( unsigned char * ) field , sdslen ( field ) ) ;
o - > ptr = lpAppend ( o - > ptr , ( unsigned char * ) value , sdslen ( value ) ) ;
2015-09-23 10:34:53 +02:00
sdsfree ( field ) ;
sdsfree ( value ) ;
2010-06-22 00:07:48 +02:00
}
2012-01-02 22:14:10 -08:00
2020-11-02 09:35:37 +02:00
if ( dupSearchDict ) {
/* We no longer need this, from now on the entries are added
* to a dict so the check is performed implicitly . */
dictRelease ( dupSearchDict ) ;
dupSearchDict = NULL ;
}
2020-11-22 21:22:49 +02:00
if ( o - > encoding = = OBJ_ENCODING_HT & & len > DICT_HT_INITIAL_SIZE ) {
if ( dictTryExpand ( o - > ptr , len ) ! = DICT_OK ) {
rdbReportCorruptRDB ( " OOM in dictTryExpand %llu " , ( unsigned long long ) len ) ;
decrRefCount ( o ) ;
return NULL ;
}
}
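        /* Note: when the hash ends up (or starts out) hashtable encoded,
         * dictTryExpand() above pre-sizes the table to the number of remaining
         * pairs, presumably to avoid repeated incremental rehashing while the
         * rest of the fields are loaded, and to fail cleanly on OOM instead of
         * aborting mid-load. */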

        /* Load remaining fields and values into the hash table */
        while (o->encoding == OBJ_ENCODING_HT && len > 0) {
            len--;
            /* Load encoded strings */
            if ((field = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL)) == NULL) {
                decrRefCount(o);
                return NULL;
            }
            if ((value = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL)) == NULL) {
                sdsfree(field);
                decrRefCount(o);
                return NULL;
            }

            /* Add pair to hash table */
            ret = dictAdd((dict*)o->ptr, field, value);
            if (ret == DICT_ERR) {
                rdbReportCorruptRDB("Duplicate hash fields detected");
                sdsfree(value);
                sdsfree(field);
                decrRefCount(o);
                return NULL;
            }
        }

        /* All pairs should be read by now */
        serverAssert(len == 0);
    } else if (rdbtype == RDB_TYPE_LIST_QUICKLIST || rdbtype == RDB_TYPE_LIST_QUICKLIST_2) {
        if ((len = rdbLoadLen(rdb,NULL)) == RDB_LENERR) return NULL;
        if (len == 0) goto emptykey;

        o = createQuicklistObject(server.list_max_listpack_size, server.list_compress_depth);
        uint64_t container = QUICKLIST_NODE_CONTAINER_PACKED;
        while (len--) {
            unsigned char *lp;
            size_t encoded_len;

            if (rdbtype == RDB_TYPE_LIST_QUICKLIST_2) {
                if ((container = rdbLoadLen(rdb,NULL)) == RDB_LENERR) {
                    decrRefCount(o);
                    return NULL;
                }

                if (container != QUICKLIST_NODE_CONTAINER_PACKED &&
                    container != QUICKLIST_NODE_CONTAINER_PLAIN)
                {
                    rdbReportCorruptRDB("Quicklist integrity check failed.");
                    decrRefCount(o);
                    return NULL;
                }
            }
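            /* Note: for the _2 format every node is prefixed with its
             * container type. PACKED nodes carry a serialized listpack, while
             * PLAIN nodes (presumably used for elements too large to pack)
             * carry a single raw element that is appended verbatim below. */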

            unsigned char *data =
                rdbGenericLoadStringObject(rdb,RDB_LOAD_PLAIN,&encoded_len);
            if (data == NULL || (encoded_len == 0)) {
                zfree(data);
                decrRefCount(o);
                return NULL;
            }

            if (container == QUICKLIST_NODE_CONTAINER_PLAIN) {
                quicklistAppendPlainNode(o->ptr, data, encoded_len);
                continue;
            }

            if (rdbtype == RDB_TYPE_LIST_QUICKLIST_2) {
                lp = data;
                if (deep_integrity_validation) server.stat_dump_payload_sanitizations++;
                if (!lpValidateIntegrity(lp, encoded_len, deep_integrity_validation, NULL, NULL)) {
                    rdbReportCorruptRDB("Listpack integrity check failed.");
                    decrRefCount(o);
                    zfree(lp);
                    return NULL;
                }
            } else {
                lp = lpNew(encoded_len);
                if (!ziplistValidateIntegrity(data, encoded_len, 1,
                        _ziplistEntryConvertAndValidate, &lp))
                {
                    rdbReportCorruptRDB("Ziplist integrity check failed.");
                    decrRefCount(o);
                    zfree(data);
                    zfree(lp);
                    return NULL;
                }
                zfree(data);
                lp = lpShrinkToFit(lp);
            }
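            /* Note: the branch above covers the two on-disk node layouts:
             * RDB_TYPE_LIST_QUICKLIST_2 nodes are already listpacks and are
             * only validated, while the older RDB_TYPE_LIST_QUICKLIST nodes
             * are ziplists that get converted entry by entry into a fresh
             * listpack (lpNew/lpShrinkToFit) through
             * _ziplistEntryConvertAndValidate. */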

            /* Silently skip empty ziplists, if we'll end up with empty quicklist we'll fail later. */
            if (lpLength(lp) == 0) {
                zfree(lp);
                continue;
            } else {
                quicklistAppendListpack(o->ptr, lp);
            }
        }

        if (quicklistCount(o->ptr) == 0) {
            decrRefCount(o);
            goto emptykey;
        }

        listTypeTryConversion(o, LIST_CONV_AUTO, NULL, NULL);
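        /* Note: listTypeTryConversion(o, LIST_CONV_AUTO, ...) may collapse the
         * freshly loaded quicklist down to a plain listpack-encoded list when
         * it is small enough (roughly, a single node within the
         * list-max-listpack-size limit), mirroring the runtime conversion
         * rules; this is a best-effort post-load step, not part of the RDB
         * format itself. */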
    } else if (rdbtype == RDB_TYPE_HASH_ZIPMAP  ||
               rdbtype == RDB_TYPE_LIST_ZIPLIST ||
               rdbtype == RDB_TYPE_SET_INTSET   ||
               rdbtype == RDB_TYPE_SET_LISTPACK ||
               rdbtype == RDB_TYPE_ZSET_ZIPLIST ||
               rdbtype == RDB_TYPE_ZSET_LISTPACK ||
               rdbtype == RDB_TYPE_HASH_ZIPLIST ||
               rdbtype == RDB_TYPE_HASH_LISTPACK)
    {
        size_t encoded_len;
        unsigned char *encoded =
            rdbGenericLoadStringObject(rdb,RDB_LOAD_PLAIN,&encoded_len);
        if (encoded == NULL) return NULL;

        o = createObject(OBJ_STRING, encoded); /* Obj type fixed below. */

        /* Fix the object encoding, and make sure to convert the encoded
         * data type into the base type if, according to the current
         * configuration, there are too many elements in the encoded data
         * type. Note that we only check the length and not max element
         * size as this is an O(N) scan. Eventually everything will get
         * converted. */
        switch(rdbtype) {
            case RDB_TYPE_HASH_ZIPMAP:
                /* Since we don't keep zipmaps anymore, the rdb loading for these
                 * is O(n) anyway, use `deep` validation. */
                if (!zipmapValidateIntegrity(encoded, encoded_len, 1)) {
                    rdbReportCorruptRDB("Zipmap integrity check failed.");
                    zfree(encoded);
                    o->ptr = NULL;
                    decrRefCount(o);
                    return NULL;
                }
                /* Convert to listpack-encoded hash. This must be deprecated
                 * when loading dumps created by Redis OSS 2.4 gets deprecated. */
                {
                    unsigned char *lp = lpNew(0);
                    unsigned char *zi = zipmapRewind(o->ptr);
                    unsigned char *fstr, *vstr;
                    unsigned int flen, vlen;
                    unsigned int maxlen = 0;
                    dict *dupSearchDict = dictCreate(&hashDictType);

                    while ((zi = zipmapNext(zi, &fstr, &flen, &vstr, &vlen)) != NULL) {
                        if (flen > maxlen) maxlen = flen;
                        if (vlen > maxlen) maxlen = vlen;

                        /* search for duplicate records */
                        sds field = sdstrynewlen(fstr, flen);
                        if (!field || dictAdd(dupSearchDict, field, NULL) != DICT_OK ||
                            !lpSafeToAdd(lp, (size_t)flen + vlen))
                        {
                            rdbReportCorruptRDB("Hash zipmap with dup elements, or big length (%u)", flen);
                            dictRelease(dupSearchDict);
                            sdsfree(field);
                            zfree(encoded);
                            o->ptr = NULL;
                            decrRefCount(o);
                            return NULL;
                        }

                        lp = lpAppend(lp, fstr, flen);
                        lp = lpAppend(lp, vstr, vlen);
                    }

                    dictRelease(dupSearchDict);
                    zfree(o->ptr);
                    o->ptr = lp;
                    o->type = OBJ_HASH;
                    o->encoding = OBJ_ENCODING_LISTPACK;

                    if (hashTypeLength(o) > server.hash_max_listpack_entries ||
                        maxlen > server.hash_max_listpack_value)
                    {
                        hashTypeConvert(o, OBJ_ENCODING_HT);
                    }
                }
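                /* Note: zipmap is the legacy small-hash encoding that only
                 * appears in old dumps, so the loop above re-encodes it field
                 * by field into a listpack, rejecting duplicated fields (via
                 * dupSearchDict) and oversized additions (lpSafeToAdd) along
                 * the way. */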
                break;
            case RDB_TYPE_LIST_ZIPLIST:
                {
                    quicklist *ql = quicklistNew(server.list_max_listpack_size,
                                                 server.list_compress_depth);

                    if (!ziplistValidateIntegrity(encoded, encoded_len, 1,
                            _listZiplistEntryConvertAndValidate, ql))
                    {
                        rdbReportCorruptRDB("List ziplist integrity check failed.");
                        zfree(encoded);
                        o->ptr = NULL;
                        decrRefCount(o);
                        quicklistRelease(ql);
                        return NULL;
                    }

                    if (ql->len == 0) {
                        zfree(encoded);
                        o->ptr = NULL;
                        decrRefCount(o);
                        quicklistRelease(ql);
                        goto emptykey;
                    }

                    zfree(encoded);
                    o->type = OBJ_LIST;
                    o->ptr = ql;
                    o->encoding = OBJ_ENCODING_QUICKLIST;
                    break;
                }
            case RDB_TYPE_SET_INTSET:
                if (deep_integrity_validation) server.stat_dump_payload_sanitizations++;
                if (!intsetValidateIntegrity(encoded, encoded_len, deep_integrity_validation)) {
                    rdbReportCorruptRDB("Intset integrity check failed.");
                    zfree(encoded);
                    o->ptr = NULL;
                    decrRefCount(o);
                    return NULL;
                }
                o->type = OBJ_SET;
                o->encoding = OBJ_ENCODING_INTSET;
                if (intsetLen(o->ptr) > server.set_max_intset_entries)
                    setTypeConvert(o, OBJ_ENCODING_HT);
                break;
            case RDB_TYPE_SET_LISTPACK:
                if (deep_integrity_validation) server.stat_dump_payload_sanitizations++;
                if (!lpValidateIntegrityAndDups(encoded, encoded_len, deep_integrity_validation, 0)) {
                    rdbReportCorruptRDB("Set listpack integrity check failed.");
                    zfree(encoded);
                    o->ptr = NULL;
                    decrRefCount(o);
                    return NULL;
                }
                o->type = OBJ_SET;
                o->encoding = OBJ_ENCODING_LISTPACK;
                if (setTypeSize(o) == 0) {
                    zfree(encoded);
                    o->ptr = NULL;
                    decrRefCount(o);
                    goto emptykey;
                }
                if (setTypeSize(o) > server.set_max_listpack_entries)
                    setTypeConvert(o, OBJ_ENCODING_HT);
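                /* Note: lpValidateIntegrityAndDups() is called here with its
                 * last argument set to 0, unlike the zset/hash listpack cases
                 * below which pass 1; that flag appears to tell the validator
                 * whether entries come in field/value (or member/score) pairs,
                 * so duplicate detection only considers every other entry in
                 * the paired cases. */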
                break;
            case RDB_TYPE_ZSET_ZIPLIST:
                {
                    unsigned char *lp = lpNew(encoded_len);
                    if (!ziplistPairsConvertAndValidateIntegrity(encoded, encoded_len, &lp)) {
                        rdbReportCorruptRDB("Zset ziplist integrity check failed.");
                        zfree(lp);
                        zfree(encoded);
                        o->ptr = NULL;
                        decrRefCount(o);
                        return NULL;
                    }

                    zfree(o->ptr);
                    o->type = OBJ_ZSET;
                    o->ptr = lp;
                    o->encoding = OBJ_ENCODING_LISTPACK;
                    if (zsetLength(o) == 0) {
                        decrRefCount(o);
                        goto emptykey;
                    }

                    if (zsetLength(o) > server.zset_max_listpack_entries)
                        zsetConvert(o, OBJ_ENCODING_SKIPLIST);
                    else
                        o->ptr = lpShrinkToFit(o->ptr);
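                    /* Note: lpNew(encoded_len) above appears to over-allocate
                     * so the converted entries always fit; when the zset stays
                     * listpack-encoded, lpShrinkToFit() gives that unused
                     * slack back, since the listpack re-encoding of a ziplist
                     * is typically no larger than the original payload. */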
2021-09-09 23:18:53 +08:00
break ;
}
case RDB_TYPE_ZSET_LISTPACK :
2020-11-02 09:35:37 +02:00
if ( deep_integrity_validation ) server . stat_dump_payload_sanitizations + + ;
2022-11-09 18:50:07 +01:00
if ( ! lpValidateIntegrityAndDups ( encoded , encoded_len , deep_integrity_validation , 1 ) ) {
2021-09-09 23:18:53 +08:00
rdbReportCorruptRDB ( " Zset listpack integrity check failed. " ) ;
2020-11-02 09:35:37 +02:00
zfree ( encoded ) ;
o - > ptr = NULL ;
decrRefCount ( o ) ;
return NULL ;
}
2015-07-26 15:28:00 +02:00
o - > type = OBJ_ZSET ;
2021-09-09 23:18:53 +08:00
o - > encoding = OBJ_ENCODING_LISTPACK ;
2021-08-06 03:42:20 +08:00
if ( zsetLength ( o ) = = 0 ) {
decrRefCount ( o ) ;
goto emptykey ;
}
2021-09-09 23:18:53 +08:00
if ( zsetLength ( o ) > server . zset_max_listpack_entries )
2015-07-26 15:28:00 +02:00
zsetConvert ( o , OBJ_ENCODING_SKIPLIST ) ;
2011-03-09 13:16:38 +01:00
break ;
2015-07-27 09:41:48 +02:00
case RDB_TYPE_HASH_ZIPLIST :
2021-08-10 14:18:49 +08:00
{
unsigned char * lp = lpNew ( encoded_len ) ;
2021-09-09 23:18:53 +08:00
if ( ! ziplistPairsConvertAndValidateIntegrity ( encoded , encoded_len , & lp ) ) {
2021-08-10 14:18:49 +08:00
rdbReportCorruptRDB ( " Hash ziplist integrity check failed. " ) ;
zfree ( lp ) ;
zfree ( encoded ) ;
o - > ptr = NULL ;
decrRefCount ( o ) ;
return NULL ;
}
zfree ( o - > ptr ) ;
o - > ptr = lp ;
o - > type = OBJ_HASH ;
o - > encoding = OBJ_ENCODING_LISTPACK ;
if ( hashTypeLength ( o ) = = 0 ) {
decrRefCount ( o ) ;
goto emptykey ;
}
2021-11-24 19:34:13 +08:00
if ( hashTypeLength ( o ) > server . hash_max_listpack_entries )
2021-08-10 14:18:49 +08:00
hashTypeConvert ( o , OBJ_ENCODING_HT ) ;
2021-11-24 19:34:13 +08:00
else
o - > ptr = lpShrinkToFit ( o - > ptr ) ;
2021-08-10 14:18:49 +08:00
break ;
}
case RDB_TYPE_HASH_LISTPACK :
2020-11-02 09:35:37 +02:00
if ( deep_integrity_validation ) server . stat_dump_payload_sanitizations + + ;
2022-11-09 18:50:07 +01:00
if ( ! lpValidateIntegrityAndDups ( encoded , encoded_len , deep_integrity_validation , 1 ) ) {
2021-08-10 14:18:49 +08:00
rdbReportCorruptRDB ( " Hash listpack integrity check failed. " ) ;
2020-11-02 09:35:37 +02:00
zfree ( encoded ) ;
o - > ptr = NULL ;
decrRefCount ( o ) ;
return NULL ;
}
2015-07-26 15:28:00 +02:00
o - > type = OBJ_HASH ;
2021-08-10 14:18:49 +08:00
o - > encoding = OBJ_ENCODING_LISTPACK ;
2021-08-06 03:42:20 +08:00
if ( hashTypeLength ( o ) = = 0 ) {
decrRefCount ( o ) ;
goto emptykey ;
}
2021-08-10 14:18:49 +08:00
if ( hashTypeLength ( o ) > server . hash_max_listpack_entries )
2015-07-26 15:28:00 +02:00
hashTypeConvert ( o , OBJ_ENCODING_HT ) ;
2012-01-02 22:14:10 -08:00
break ;
2011-02-28 17:53:47 +01:00
default :
2019-07-16 11:00:34 +03:00
/* totally unreachable */
2020-11-02 09:35:37 +02:00
rdbReportCorruptRDB ( " Unknown RDB encoding type %d " , rdbtype ) ;
2011-02-28 17:53:47 +01:00
break ;
2011-02-28 16:55:34 +01:00
}
    } else if (rdbtype == RDB_TYPE_STREAM_LISTPACKS ||
               rdbtype == RDB_TYPE_STREAM_LISTPACKS_2 ||
               rdbtype == RDB_TYPE_STREAM_LISTPACKS_3)
    {
        o = createStreamObject();
        stream *s = o->ptr;
        uint64_t listpacks = rdbLoadLen(rdb,NULL);
        if (listpacks == RDB_LENERR) {
            rdbReportReadError("Stream listpacks len loading failed.");
            decrRefCount(o);
            return NULL;
        }

        while (listpacks--) {
            /* Get the master ID, the one we'll use as key of the radix tree
             * node: the entries inside the listpack itself are delta-encoded
             * relative to this ID. */
            sds nodekey = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL);
            if (nodekey == NULL) {
                rdbReportReadError("Stream master ID loading failed: invalid encoding or I/O error.");
                decrRefCount(o);
                return NULL;
            }
            if (sdslen(nodekey) != sizeof(streamID)) {
                rdbReportCorruptRDB("Stream node key entry is not the "
                                    "size of a stream ID");
                sdsfree(nodekey);
                decrRefCount(o);
                return NULL;
            }
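            /* Note: the node key is the raw, fixed-size streamID (two 64-bit
             * parts, ms and seq), so anything that is not exactly
             * sizeof(streamID) bytes is treated as corruption rather than
             * silently truncated or padded. */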

            /* Load the listpack. */
            size_t lp_size;
            unsigned char *lp =
                rdbGenericLoadStringObject(rdb,RDB_LOAD_PLAIN,&lp_size);
            if (lp == NULL) {
                rdbReportReadError("Stream listpacks loading failed.");
                sdsfree(nodekey);
                decrRefCount(o);
                return NULL;
            }
            if (deep_integrity_validation) server.stat_dump_payload_sanitizations++;
            if (!streamValidateListpackIntegrity(lp, lp_size, deep_integrity_validation)) {
                rdbReportCorruptRDB("Stream listpack integrity check failed.");
                sdsfree(nodekey);
                decrRefCount(o);
                zfree(lp);
                return NULL;
            }

            unsigned char *first = lpFirst(lp);
            if (first == NULL) {
                /* Serialized listpacks should never be empty, since on
                 * deletion we should remove the radix tree key if the
                 * resulting listpack is empty. */
                rdbReportCorruptRDB("Empty listpack inside stream");
                sdsfree(nodekey);
                decrRefCount(o);
                zfree(lp);
                return NULL;
            }

            /* Insert the key in the radix tree. */
            int retval = raxTryInsert(s->rax,
                (unsigned char*)nodekey, sizeof(streamID), lp, NULL);
            sdsfree(nodekey);
            if (!retval) {
                rdbReportCorruptRDB("Listpack re-added with existing key");
                decrRefCount(o);
                zfree(lp);
                return NULL;
            }
        }

        /* Load total number of items inside the stream. */
        s->length = rdbLoadLen(rdb,NULL);

        /* Load the last entry ID. */
        s->last_id.ms = rdbLoadLen(rdb,NULL);
        s->last_id.seq = rdbLoadLen(rdb,NULL);

        if (rdbtype >= RDB_TYPE_STREAM_LISTPACKS_2) {
            /* Load the first entry ID. */
            s->first_id.ms = rdbLoadLen(rdb,NULL);
            s->first_id.seq = rdbLoadLen(rdb,NULL);

            /* Load the maximal deleted entry ID. */
            s->max_deleted_entry_id.ms = rdbLoadLen(rdb,NULL);
            s->max_deleted_entry_id.seq = rdbLoadLen(rdb,NULL);

            /* Load the offset. */
            s->entries_added = rdbLoadLen(rdb,NULL);
        } else {
            /* During migration the offset can be initialized to the stream's
             * length. At this point, we also don't care about tombstones
             * because CG offsets will be later initialized as well. */
            s->max_deleted_entry_id.ms = 0;
            s->max_deleted_entry_id.seq = 0;
            s->entries_added = s->length;

            /* Since the rax is already loaded, we can find the first entry's
             * ID. */
            streamGetEdgeID(s, 1, 1, &s->first_id);
        }
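        /* Note: for pre-RDB_TYPE_STREAM_LISTPACKS_2 payloads the fields above
         * are reconstructed rather than read: entries_added falls back to the
         * current length, the max deleted ID is zeroed, and
         * streamGetEdgeID(s,1,1,&s->first_id) presumably seeks the first
         * still-existing entry in the already-loaded rax to recover first_id. */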

        if (rioGetReadError(rdb)) {
            rdbReportReadError("Stream object metadata loading failed.");
            decrRefCount(o);
            return NULL;
        }

        if (s->length && !raxSize(s->rax)) {
            rdbReportCorruptRDB("Stream length inconsistent with rax entries");
            decrRefCount(o);
            return NULL;
        }

        /* Consumer groups loading */
        uint64_t cgroups_count = rdbLoadLen(rdb,NULL);
        if (cgroups_count == RDB_LENERR) {
            rdbReportReadError("Stream cgroup count loading failed.");
            decrRefCount(o);
            return NULL;
        }
        while (cgroups_count--) {
            /* Get the consumer group name and ID. We can then create the
             * consumer group ASAP and populate its structure as
             * we read more data. */
            streamID cg_id;
            sds cgname = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL);
            if (cgname == NULL) {
                rdbReportReadError(
                    "Error reading the consumer group name from Stream");
                decrRefCount(o);
                return NULL;
            }

            cg_id.ms = rdbLoadLen(rdb,NULL);
            cg_id.seq = rdbLoadLen(rdb,NULL);
            if (rioGetReadError(rdb)) {
                rdbReportReadError("Stream cgroup ID loading failed.");
                sdsfree(cgname);
                decrRefCount(o);
                return NULL;
            }

            /* Load group offset. */
            uint64_t cg_offset;
            if (rdbtype >= RDB_TYPE_STREAM_LISTPACKS_2) {
                cg_offset = rdbLoadLen(rdb,NULL);
                if (rioGetReadError(rdb)) {
                    rdbReportReadError("Stream cgroup offset loading failed.");
                    sdsfree(cgname);
                    decrRefCount(o);
                    return NULL;
                }
            } else {
                cg_offset = streamEstimateDistanceFromFirstEverEntry(s, &cg_id);
            }
2019-07-17 17:30:02 +02:00
Add stream consumer group lag tracking and reporting (#9127)
Adds the ability to track the lag of a consumer group (CG), that is, the number
of entries yet-to-be-delivered from the stream.
The proposed constant-time solution is in the spirit of "best-effort."
Partially addresses #8737.
## Description of approach
We add a new "entries_added" property to the stream. This starts at 0 for a new
stream and is incremented by 1 with every `XADD`. It is essentially an all-time
counter of the entries added to the stream.
Given the stream's length and this counter value, we can trivially find the logical
"entries_added" counter of the first ID if and only if the stream is contiguous.
A fragmented stream contains one or more tombstones generated by `XDEL`s.
The new "xdel_max_id" stream property tracks the latest tombstone.
The CG also tracks its last delivered ID's as an "entries_read" counter and
increments it independently when delivering new messages, unless the this
read counter is invalid (-1 means invalid offset). When the CG's counter is
available, the reported lag is the difference between added and read counters.
Lastly, this also adds a "first_id" field to the stream structure in order to make
looking it up cheaper in most cases.
## Limitations
There are two cases in which the mechanism isn't able to track the lag.
In these cases, `XINFO` replies with `null` in the "lag" field.
The first case is when a CG is created with an arbitrary last delivered ID,
that isn't "0-0", nor the first or the last entries of the stream. In this case,
it is impossible to obtain a valid read counter (short of an O(N) operation).
The second case is when there are one or more tombstones fragmenting
the stream's entries range.
In both cases, given enough time and assuming that the consumers are
active (reading and lacking) and advancing, the CG should be able to
catch up with the tip of the stream and report zero lag.
Once that's achieved, lag tracking would resume as normal (until the
next tombstone is set).
## API changes
* `XGROUP CREATE` added with the optional named argument `[ENTRIESREAD entries-read]`
for explicitly specifying the new CG's counter.
* `XGROUP SETID` added with an optional positional argument `[ENTRIESREAD entries-read]`
for specifying the CG's counter.
* `XINFO` reports the maximal tombstone ID, the recorded first entry ID, and total
number of entries added to the stream.
* `XINFO` reports the current lag and logical read counter of CGs.
* `XSETID` is an internal command that's used in replication/aof. It has been added with
the optional positional arguments `[ENTRIESADDED entries-added] [MAXDELETEDID max-deleted-entry-id]`
for propagating the CG's offset and maximal tombstone ID of the stream.
## The generic unsolved problem
The current stream implementation doesn't provide an efficient way to obtain the
approximate/exact size of a range of entries. While it could've been nice to have
that ability (#5813) in general, let alone specifically in the context of CGs, the risk
and complexity involved in such an implementation are in all likelihood prohibitive.
## A refactoring note
The `streamGetEdgeID` has been refactored to accommodate both the existing seek
of any entry as well as seeking non-deleted entries (the addition of the `skip_tombstones`
argument). Furthermore, this refactoring also migrated the seek logic to use the
`streamIterator` (rather than `raxIterator`) that was, in turn, extended with the
`skip_tombstones` Boolean struct field to control the emission of these.
Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2022-02-23 22:34:58 +02:00
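A minimal, self-contained sketch of the lag rule described in the note above. All names here (sketch_stream, sketch_cg, sketch_group_lag) are hypothetical and do not reflect the server's actual data layout; -1 stands for an invalid read counter:

/* Illustrative only: lag = entries_added - entries_read when the read counter
 * is valid; otherwise the lag must be reported as null. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t entries_added; } sketch_stream;
typedef struct { int64_t entries_read; } sketch_cg; /* -1 == invalid/unknown */

/* Returns 0 and stores the lag when it can be computed, -1 when XINFO would
 * have to report the lag as null (arbitrary last-delivered ID, tombstones). */
static int sketch_group_lag(const sketch_stream *s, const sketch_cg *cg, uint64_t *lag) {
    if (cg->entries_read < 0) return -1;
    *lag = s->entries_added - (uint64_t)cg->entries_read;
    return 0;
}

int main(void) {
    sketch_stream s = { .entries_added = 100 };
    sketch_cg cg = { .entries_read = 97 };
    uint64_t lag;
    if (sketch_group_lag(&s, &cg, &lag) == 0)
        printf("lag=%llu\n", (unsigned long long)lag); /* prints lag=3 */
    return 0;
}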
streamCG *cgroup = streamCreateCG(s, cgname, sdslen(cgname), &cg_id, cg_offset);
2020-08-14 16:05:34 +03:00
if (cgroup == NULL) {
2020-11-02 09:35:37 +02:00
rdbReportCorruptRDB("Duplicated consumer group name %s",
2018-02-14 16:37:24 +01:00
cgname);
2020-08-14 16:05:34 +03:00
decrRefCount(o);
sdsfree(cgname);
return NULL;
}
2018-02-14 16:37:24 +01:00
sdsfree(cgname);
/* Load the global PEL for this consumer group, however we'll
* not yet populate the NACK structures with the message
* owner, since consumers for this group and their messages will
* be read as a next step. So for now leave them not resolved
* and later populate it. */
2019-07-16 11:00:34 +03:00
uint64_t pel_size = rdbLoadLen(rdb, NULL);
if (pel_size == RDB_LENERR) {
rdbReportReadError("Stream PEL size loading failed.");
decrRefCount(o);
return NULL;
}
2018-02-14 16:37:24 +01:00
while (pel_size--) {
unsigned char rawid[sizeof(streamID)];
2019-07-16 11:00:34 +03:00
if (rioRead(rdb, rawid, sizeof(rawid)) == 0) {
rdbReportReadError("Stream PEL ID loading failed.");
decrRefCount(o);
return NULL;
}
2018-02-14 16:37:24 +01:00
streamNACK *nack = streamCreateNACK(NULL);
2019-07-17 17:30:02 +02:00
nack->delivery_time = rdbLoadMillisecondTime(rdb, RDB_VERSION);
nack->delivery_count = rdbLoadLen(rdb, NULL);
if (rioGetReadError(rdb)) {
rdbReportReadError("Stream PEL NACK loading failed.");
2019-07-16 11:00:34 +03:00
decrRefCount(o);
streamFreeNACK(nack);
return NULL;
}
2021-08-20 15:37:45 +08:00
if (!raxTryInsert(cgroup->pel, rawid, sizeof(rawid), nack, NULL)) {
2020-11-02 09:35:37 +02:00
rdbReportCorruptRDB("Duplicated global PEL entry "
2018-02-14 16:37:24 +01:00
"loading stream consumer group");
2020-08-14 16:05:34 +03:00
decrRefCount(o);
streamFreeNACK(nack);
return NULL;
}
2018-02-14 16:37:24 +01:00
}
/* Now that we loaded our global PEL, we need to load the
* consumers and their local PELs. */
2019-07-16 11:00:34 +03:00
uint64_t consumers_num = rdbLoadLen(rdb, NULL);
if (consumers_num == RDB_LENERR) {
rdbReportReadError("Stream consumers num loading failed.");
decrRefCount(o);
return NULL;
}
2018-02-14 16:37:24 +01:00
while (consumers_num--) {
sds cname = rdbGenericLoadStringObject(rdb, RDB_LOAD_SDS, NULL);
2018-02-21 11:17:46 +01:00
if (cname == NULL) {
2019-07-16 11:00:34 +03:00
rdbReportReadError(
"Error reading the consumer name from Stream group.");
decrRefCount(o);
return NULL;
2018-02-21 11:17:46 +01:00
}
2021-08-02 13:31:33 +08:00
streamConsumer *consumer = streamCreateConsumer(cgroup, cname, NULL, 0,
SCC_NO_NOTIFY | SCC_NO_DIRTIFY);
2018-02-14 16:37:24 +01:00
sdsfree(cname);
2021-12-09 00:11:57 +08:00
if (!consumer) {
rdbReportCorruptRDB("Duplicate stream consumer detected.");
decrRefCount(o);
return NULL;
}
Stream consumers: Re-purpose seen-time, add active-time (#11099)
1. "Fixed" the current code so that seen-time/idle actually refers to interaction
attempts (as documented; breaking change)
2. Added active-time/inactive to refer to successful interaction (what
seen-time/idle used to be)
At first, I tried to avoid changing the behavior of seen-time/idle but then realized
that, in this case, the odds are that people read the docs and implemented their
code based on the docs (which didn't match the behavior).
For the most part, that would work fine, except that issue #9996 was found.
I was working under the assumption that people relied on the docs, and for
the most part, it could have worked well enough. So instead of fixing the docs,
as I would usually do, I fixed the code to match the docs in this particular case.
Note that, in case the consumer has never read any entries, the values
for both "active-time" (XINFO FULL) and "inactive" (XINFO CONSUMERS) will
be -1, meaning here that the consumer was never active.
Note that seen/active time is only affected by XREADGROUP / X[AUTO]CLAIM, not
by XPENDING, XINFO, and other "read-only" stream CG commands (always has been,
even before this PR)
Other changes:
* Another behavioral change (arguably a bugfix) is that XREADGROUP and X[AUTO]CLAIM
create the consumer regardless of whether it was able to perform some reading/claiming
* RDB format change to save the `active_time`, and set it to the same value as `seen_time` in old rdb files (a small sketch of how the two timestamps map to the reported values follows this note).
2022-11-30 17:51:31 +05:30
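A rough, illustrative sketch of how the two timestamps could map to the values XINFO reports, under the assumption (not taken from this code) that a never-active consumer stores active_time == -1; all names here are hypothetical:

#include <stdio.h>

typedef long long sketch_mstime;

/* "inactive" (XINFO CONSUMERS): time since the last successful read/claim. */
static sketch_mstime sketch_inactive_ms(sketch_mstime now, sketch_mstime active_time) {
    if (active_time == -1) return -1; /* consumer never read anything */
    return now - active_time;
}

/* "idle": time since the last interaction attempt (XREADGROUP / X[AUTO]CLAIM). */
static sketch_mstime sketch_idle_ms(sketch_mstime now, sketch_mstime seen_time) {
    return now - seen_time;
}

int main(void) {
    sketch_mstime now = 10000, seen = 9000, active = -1;
    printf("idle=%lld inactive=%lld\n", sketch_idle_ms(now, seen),
           sketch_inactive_ms(now, active)); /* idle=1000 inactive=-1 */
    return 0;
}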
2019-07-17 17:30:02 +02:00
consumer->seen_time = rdbLoadMillisecondTime(rdb, RDB_VERSION);
if (rioGetReadError(rdb)) {
2019-07-16 11:00:34 +03:00
rdbReportReadError("Stream short read reading seen time.");
decrRefCount(o);
return NULL;
}
2018-02-14 16:37:24 +01:00
2022-11-30 17:51:31 +05:30
if (rdbtype >= RDB_TYPE_STREAM_LISTPACKS_3) {
consumer->active_time = rdbLoadMillisecondTime(rdb, RDB_VERSION);
if (rioGetReadError(rdb)) {
rdbReportReadError("Stream short read reading active time.");
decrRefCount(o);
return NULL;
}
} else {
/* That's the best estimate we got */
consumer->active_time = consumer->seen_time;
}
2018-02-14 16:37:24 +01:00
/* Load the PEL about entries owned by this specific
* consumer. */
pel_size = rdbLoadLen(rdb, NULL);
2019-07-16 11:00:34 +03:00
if (pel_size == RDB_LENERR) {
2019-07-17 17:30:02 +02:00
rdbReportReadError(
"Stream consumer PEL num loading failed.");
2019-07-16 11:00:34 +03:00
decrRefCount(o);
return NULL;
}
2018-02-14 16:37:24 +01:00
while (pel_size--) {
unsigned char rawid[sizeof(streamID)];
2019-07-16 11:00:34 +03:00
if (rioRead(rdb, rawid, sizeof(rawid)) == 0) {
2019-07-17 17:30:02 +02:00
rdbReportReadError(
"Stream short read reading PEL streamID.");
2019-07-16 11:00:34 +03:00
decrRefCount(o);
return NULL;
}
2023-12-14 17:50:18 -05:00
void *result;
if (!raxFind(cgroup->pel, rawid, sizeof(rawid), &result)) {
2020-11-02 09:35:37 +02:00
rdbReportCorruptRDB("Consumer entry not found in "
2018-02-14 16:37:24 +01:00
"group global PEL");
2020-08-14 16:05:34 +03:00
decrRefCount(o);
return NULL;
}
2023-12-14 17:50:18 -05:00
streamNACK *nack = result;
2018-02-14 16:37:24 +01:00
/* Set the NACK consumer, that was left to NULL when
* loading the global PEL. Then set the same shared
* NACK structure also in the consumer-specific PEL. */
nack->consumer = consumer;
2021-08-20 15:37:45 +08:00
if (!raxTryInsert(consumer->pel, rawid, sizeof(rawid), nack, NULL)) {
2020-11-02 09:35:37 +02:00
rdbReportCorruptRDB("Duplicated consumer PEL entry "
2018-02-14 16:37:24 +01:00
"loading a stream consumer "
"group");
2020-08-14 16:05:34 +03:00
decrRefCount(o);
2021-08-20 15:37:45 +08:00
streamFreeNACK(nack);
2020-08-14 16:05:34 +03:00
return NULL;
}
2018-02-14 16:37:24 +01:00
}
}
2021-08-05 22:56:14 +03:00
/* Verify that each PEL eventually got a consumer assigned to it. */
if (deep_integrity_validation) {
raxIterator ri_cg_pel;
raxStart(&ri_cg_pel, cgroup->pel);
raxSeek(&ri_cg_pel, "^", NULL, 0);
while (raxNext(&ri_cg_pel)) {
streamNACK *nack = ri_cg_pel.data;
if (!nack->consumer) {
raxStop(&ri_cg_pel);
rdbReportCorruptRDB("Stream CG PEL entry without consumer");
decrRefCount(o);
return NULL;
}
}
raxStop(&ri_cg_pel);
}
2018-02-14 16:37:24 +01:00
}
2022-08-15 21:41:44 +03:00
} else if (rdbtype == RDB_TYPE_MODULE_PRE_GA) {
rdbReportCorruptRDB("Pre-release module format not supported");
return NULL;
} else if (rdbtype == RDB_TYPE_MODULE_2) {
2016-05-18 11:45:40 +02:00
uint64_t moduleid = rdbLoadLen(rdb, NULL);
2020-02-05 19:47:09 +02:00
if (rioGetReadError(rdb)) {
rdbReportReadError("Short read module id");
return NULL;
}
2016-05-18 11:45:40 +02:00
moduleType *mt = moduleTypeLookupModuleByID(moduleid);
2022-08-15 21:41:44 +03:00
if (rdbCheckMode) {
2021-03-02 09:39:37 +02:00
char name[10];
2018-03-16 13:47:10 +01:00
moduleTypeNameByID(name, moduleid);
RDB modules values serialization format version 2.
The original RDB serialization format was not parsable without the
module loaded, because the structure was managed only by the module
itself. Moreover RDB is a streaming protocol in the sense that it is
both produced in an append-only fashion, and is also sometimes directly
sent to the socket (in the case of diskless replication).
The fact that modules values cannot be parsed without the relevant
module loaded is a problem in many ways: RDB checking tools must have
loaded modules even for doing things not involving the value at all,
like splitting an RDB into N RDBs by key or alike, or just checking the
RDB for sanity.
In theory module values could be just a blob of data with a prefixed
length in order for us to be able to skip it. However prefixing the values
with a length would mean one of the following:
1. To be able to write some data at a previous offset. This breaks
streaming.
2. To buffer values before outputting them. This breaks performance.
3. To have some chunked RDB output format. This breaks simplicity.
Moreover, the above solution still makes module values a totally opaque
matter, with the following problems:
1. The RDB check tool can just skip the value without being able to at
least check the general structure. For datasets composed mostly of
module values this means just checking the outer level of the RDB, not
actually doing any check on most of the data itself.
2. It is not possible to do any recovering or processing of data for which a
module no longer exists in the future, or is unknown.
So this commit implements a different solution. The modules RDB
serialization API is composed of well-defined calls to store integers,
floats, doubles or strings. After this commit, the parts generated by
the module API have a one-byte prefix for each of the above emitted
parts, and there is a final EOF byte as well. So even if we don't know
exactly how to interpret a module value, we can always parse it at a
high level, check the overall structure, understand the types used to
store the information, and easily skip the whole value (a minimal sketch of this skip loop follows this note).
The change is backward compatible: older RDB files can still be loaded
since the new encoding has a new RDB type: MODULE_2 (of value 7).
The commit also implements the ability to check RDB files for sanity
taking advantage of the new feature.
2017-06-27 13:09:33 +02:00
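A minimal sketch of the skip loop this note describes. The opcode names and reader callbacks here are stand-ins, not the real ones; the actual parsing lives in rdbLoadCheckModuleValue():

#include <stdint.h>

enum { SKETCH_OP_EOF = 0, SKETCH_OP_SINT, SKETCH_OP_UINT,
       SKETCH_OP_FLOAT, SKETCH_OP_DOUBLE, SKETCH_OP_STRING };

typedef struct {
    int (*read_opcode)(void *src, uint64_t *op); /* the one-byte part prefix */
    int (*skip_int)(void *src);
    int (*skip_float)(void *src);
    int (*skip_double)(void *src);
    int (*skip_string)(void *src);
    void *src;
} sketch_reader;

/* Returns 0 when a well-formed value (terminated by EOF) was skipped,
 * -1 on a malformed or unknown part. */
static int sketch_skip_module_value(sketch_reader *r) {
    uint64_t op;
    for (;;) {
        if (r->read_opcode(r->src, &op) != 0) return -1;
        switch (op) {
        case SKETCH_OP_EOF:    return 0;
        case SKETCH_OP_SINT:
        case SKETCH_OP_UINT:   if (r->skip_int(r->src)) return -1; break;
        case SKETCH_OP_FLOAT:  if (r->skip_float(r->src)) return -1; break;
        case SKETCH_OP_DOUBLE: if (r->skip_double(r->src)) return -1; break;
        case SKETCH_OP_STRING: if (r->skip_string(r->src)) return -1; break;
        default:               return -1; /* unknown part type */
        }
    }
}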
return rdbLoadCheckModuleValue(rdb, name);
2018-03-16 13:47:10 +01:00
}
2017-06-27 13:09:33 +02:00
2016-05-18 11:45:40 +02:00
if (mt == NULL) {
2021-03-02 09:39:37 +02:00
char name[10];
2016-05-18 11:45:40 +02:00
moduleTypeNameByID(name, moduleid);
2021-03-02 09:39:37 +02:00
rdbReportCorruptRDB("The RDB file contains module data I can't load: no matching module type '%s'", name);
2020-08-14 16:05:34 +03:00
return NULL;
2016-05-18 11:45:40 +02:00
}
2024-04-05 16:59:55 -07:00
ValkeyModuleIO io;
2020-04-09 10:24:10 +02:00
robj keyobj;
initStaticStringObject(keyobj, key);
2021-06-16 14:45:49 +08:00
moduleInitIOContext(io, mt, rdb, &keyobj, dbid);
2016-05-18 11:45:40 +02:00
/* Call the rdb_load method of the module providing the 10 bit
* encoding version in the lower 10 bits of the module ID. */
void *ptr = mt->rdb_load(&io, moduleid & 1023);
2017-07-06 11:20:49 +02:00
if (io.ctx) {
moduleFreeContext(io.ctx);
zfree(io.ctx);
}
2017-06-27 13:09:33 +02:00
/* Module v2 serialization has an EOF mark at the end. */
2022-08-15 21:41:44 +03:00
uint64_t eof = rdbLoadLen(rdb, NULL);
if (eof == RDB_LENERR) {
if (ptr) {
o = createModuleObject(mt, ptr); /* creating just in order to easily destroy */
decrRefCount(o);
2019-07-16 11:00:34 +03:00
}
2022-08-15 21:41:44 +03:00
return NULL;
}
if (eof != RDB_MODULE_OPCODE_EOF) {
rdbReportCorruptRDB("The RDB file contains module data for the module '%s' that is not terminated by "
"the proper module value EOF marker", moduleTypeModuleName(mt));
if (ptr) {
o = createModuleObject(mt, ptr); /* creating just in order to easily destroy */
decrRefCount(o);
2017-06-27 13:09:33 +02:00
}
2022-08-15 21:41:44 +03:00
return NULL;
2017-06-27 13:09:33 +02:00
}
2016-05-18 11:45:40 +02:00
if (ptr == NULL) {
2021-03-02 09:39:37 +02:00
rdbReportCorruptRDB("The RDB file contains module data for the module type '%s', that the responsible "
"module is not able to load. Check for modules log above for additional clues.",
moduleTypeModuleName(mt));
2020-08-14 16:05:34 +03:00
return NULL;
2016-05-18 11:45:40 +02:00
}
o = createModuleObject(mt, ptr);
2010-06-22 00:07:48 +02:00
} else {
2019-07-16 11:00:34 +03:00
rdbReportReadError("Unknown RDB encoding type %d", rdbtype);
return NULL;
2010-06-22 00:07:48 +02:00
}
2021-08-06 03:42:20 +08:00
if (error) *error = 0;
2010-06-22 00:07:48 +02:00
return o;
2021-08-06 03:42:20 +08:00
emptykey:
if (error) *error = RDB_LOAD_ERR_EMPTY_KEY;
return NULL;
2010-06-22 00:07:48 +02:00
}
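A self-contained sketch of the caller-side pattern for the optional 'error' out-parameter used above; apart from the RDB_LOAD_ERR_EMPTY_KEY idea, every name here is a stand-in:

#include <stdio.h>
#include <stddef.h>

#define SKETCH_LOAD_ERR_EMPTY_KEY 1 /* stands in for RDB_LOAD_ERR_EMPTY_KEY */

typedef struct { int dummy; } sketch_obj;

/* A stub loader: here it always reports "empty key" instead of a real error. */
static sketch_obj *sketch_load_object(int *error) {
    if (error) *error = SKETCH_LOAD_ERR_EMPTY_KEY;
    return NULL;
}

int main(void) {
    int error = 0;
    sketch_obj *o = sketch_load_object(&error);
    if (o == NULL) {
        if (error == SKETCH_LOAD_ERR_EMPTY_KEY)
            puts("empty key: skip it and keep loading");
        else
            puts("read error / corruption: abort the load");
    }
    return 0;
}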
2010-11-08 11:52:03 +01:00
/* Mark that we are loading in the global state and setup the fields
* needed to provide loading stats. */
Replica keep serving data during repl-diskless-load=swapdb for better availability (#9323)
For diskless replication in swapdb mode, considering we already spend replica memory
having a backup of current db to restore in case of failure, we can have the following benefits
by instead swapping database only in case we succeeded in transferring db from master:
- Avoid `LOADING` response during failed and successful synchronization for cases where the
replica is already up and running with data.
- Faster total time of diskless replication, because now we're moving from Transfer + Flush + Load
time to Transfer + Load only. Flushing the tempDb is done asynchronously after swapping.
- This could be implemented also for disk replication with similar benefits if consumers are willing
to spend the extra memory usage.
General notes:
- The concept of `backupDb` becomes `tempDb` for clarity.
- Async loading mode will only kick in if the replica is syncing from a master that has the same
repl-id as the one it had before, i.e. the data it's getting belongs to a different time of the same timeline.
- New property in INFO: `async_loading` to differentiate from the blocking loading
- Slot to Key mapping is now a field of `redisDb` as it's more natural to access it from both server.db
and the tempDb that is passed around.
- Because this is affecting replicas only, we assume that if they are not read-only and receive write commands
during replication, those writes are lost after SYNC the same way as before, but we're still denying CONFIG SET
here anyways to avoid complications.
Considerations for review:
- We have many cases where the server.loading flag is used and, even though I tried my best, there may
be cases where async_loading should be checked as well and cases where it shouldn't (this would require
a very good understanding of the whole code)
- Several places that had different behavior depending on the loading flag were actually meant to just
handle commands coming from the AOF client differently than ones coming from real clients, changed
to check CLIENT_ID_AOF instead.
**Additional for Release Notes**
- Bugfix - server.dirty was not incremented for any kind of diskless replication; as an effect it wouldn't
contribute to triggering the next database SAVE
- New flag for RM_GetContextFlags module API: REDISMODULE_CTX_FLAGS_ASYNC_LOADING
- Deprecated RedisModuleEvent_ReplBackup. Starting from Redis 7.0, we don't fire this event.
Instead, we have the new RedisModuleEvent_ReplAsyncLoad holding 3 sub-events: STARTED,
ABORTED and COMPLETED.
- New module flag REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD for RedisModule_SetModuleOptions
to allow modules to declare they support diskless replication with async loading (when absent, we fall
back to disk-based loading). A minimal module-side sketch follows this note.
Co-authored-by: Eduardo Semprebon <edus@saxobank.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
2021-11-04 09:46:50 +01:00
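A minimal module-side sketch of opting into async loading, as mentioned in the note above. The option, event, and API names come from that note; the sub-event constant names and the exact wiring are assumptions, not verified against this tree:

#include "redismodule.h"

static void sketch_async_load_cb(RedisModuleCtx *ctx, RedisModuleEvent e,
                                 uint64_t sub, void *data) {
    REDISMODULE_NOT_USED(ctx);
    REDISMODULE_NOT_USED(e);
    REDISMODULE_NOT_USED(data);
    /* Assumed sub-event constant names (STARTED/ABORTED/COMPLETED). */
    if (sub == REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_STARTED) {
        /* Keep serving the old dataset; a new one is being loaded aside. */
    } else if (sub == REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_ABORTED) {
        /* Loading failed: discard anything staged for the new dataset. */
    } else if (sub == REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_COMPLETED) {
        /* The temp db was swapped in: switch over to the new dataset. */
    }
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "asyncload-sketch", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Declare support, otherwise the server falls back to disk-based loading. */
    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD);
    RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_ReplAsyncLoad, sketch_async_load_cb);
    return REDISMODULE_OK;
}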
void startLoading(size_t size, int rdbflags, int async) {
2010-11-08 11:52:03 +01:00
/* Load the DB */
server.loading = 1;
2021-11-04 09:46:50 +01:00
if (async == 1) server.async_loading = 1;
2010-11-08 11:52:03 +01:00
server.loading_start_time = time(NULL);
2014-12-23 14:52:57 +01:00
server.loading_loaded_bytes = 0;
2019-07-01 15:22:29 +03:00
server.loading_total_bytes = size;
2020-11-05 11:46:16 +02:00
server.loading_rdb_used_mem = 0;
2021-09-13 15:39:11 +08:00
server.rdb_last_load_keys_expired = 0;
server.rdb_last_load_keys_loaded = 0;
2020-09-03 08:47:29 +03:00
blockingOperationStarts();
2019-10-29 17:59:09 +02:00
/* Fire the loading modules start event. */
int subevent;
if (rdbflags & RDBFLAGS_AOF_PREAMBLE)
2024-04-05 16:59:55 -07:00
subevent = VALKEYMODULE_SUBEVENT_LOADING_AOF_START;
2019-10-29 17:59:09 +02:00
else if (rdbflags & RDBFLAGS_REPLICATION)
2024-04-05 16:59:55 -07:00
subevent = VALKEYMODULE_SUBEVENT_LOADING_REPL_START;
2019-10-29 17:59:09 +02:00
else
2024-04-05 16:59:55 -07:00
subevent = VALKEYMODULE_SUBEVENT_LOADING_RDB_START;
moduleFireServerEvent(VALKEYMODULE_EVENT_LOADING, subevent, NULL);
2019-07-01 15:22:29 +03:00
}
/* Mark that we are loading in the global state and setup the fields
* needed to provide loading stats.
* 'filename' is optional and used for rdb-check on error */
Implement Multi Part AOF mechanism to avoid AOFRW overheads. (#9788)
Implement Multi-Part AOF mechanism to avoid overheads during AOFRW.
Introducing a folder with multiple AOF files tracked by a manifest file.
The main issues with the original AOFRW mechanism are:
* buffering of commands that are processed during rewrite (consuming a lot of RAM)
* freezes of the main process when the AOFRW completes to drain the remaining part of the buffer and fsync it.
* double disk IO for the data that arrives during AOFRW (had to be written to both the old and new AOF files)
The main modifications of this PR:
1. Remove the AOF rewrite buffer and related code.
2. Divide the AOF into multiple files. They are classified into two types: one is the `BASE` type,
which represents the full amount of data (maybe AOF or RDB format) after each AOFRW; there is only
one `BASE` file at most. The second is the `INCR` type, of which there may be more than one. They represent the
incremental commands since the last AOFRW.
3. Use an AOF manifest file to record and manage the AOF files mentioned above.
4. The original configuration of `appendfilename` will be the base part of the new file name, for example:
`appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`
5. Add manifest-related TCL tests, and modify some existing tests that depend on the `appendfilename`.
6. Remove the `aof_rewrite_buffer_length` field in info.
7. Add `aof-disable-auto-gc` configuration. By default we're automatically deleting HISTORY type AOFs.
It also gives users the opportunity to preserve the history AOFs, just for testing use for now.
8. Add an AOFRW limiting measure. When the number of AOFRW failures reaches the threshold (3 times now),
we will delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it will be
delayed by 2 minutes. The next is 4, 8, 16, the maximum delay is 60 minutes (1 hour). During the limit
period, we can still use the 'bgrewriteaof' command to execute AOFRW immediately.
9. Support upgrading (loading) data from older Redis versions.
10. Add `appenddirname` configuration, as the directory name of the append only files. All AOF files and
manifest file will be placed in this directory.
11. Only the last AOF file (BASE or INCR) can be truncated. Otherwise redis will exit even if
`aof-load-truncated` is enabled.
Co-authored-by: Oran Agra <oran@redislabs.com>
2022-01-04 01:14:13 +08:00
void startLoadingFile(size_t size, char *filename, int rdbflags) {
2019-07-01 15:22:29 +03:00
rdbFileBeingLoaded = filename;
2022-01-04 01:14:13 +08:00
startLoading(size, rdbflags, 0);
2010-11-08 11:52:03 +01:00
}
2022-01-04 01:14:13 +08:00
/* Refresh the absolute loading progress info */
void loadingAbsProgress(off_t pos) {
2010-11-08 11:52:03 +01:00
server.loading_loaded_bytes = pos;
2012-10-24 12:21:34 +02:00
if (server.stat_peak_memory < zmalloc_used_memory())
server.stat_peak_memory = zmalloc_used_memory();
2010-11-08 11:52:03 +01:00
}
2022-01-04 01:14:13 +08:00
/* Refresh the incremental loading progress info */
void loadingIncrProgress(off_t size) {
server.loading_loaded_bytes += size;
if (server.stat_peak_memory < zmalloc_used_memory())
server.stat_peak_memory = zmalloc_used_memory();
}
/* Update the file name currently being loaded */
void updateLoadingFileName(char *filename) {
rdbFileBeingLoaded = filename;
}
2010-11-08 11:52:03 +01:00
/* Loading finished */
2019-10-29 17:59:09 +02:00
void stopLoading(int success) {
2010-11-08 11:52:03 +01:00
server.loading = 0;
2021-11-04 09:46:50 +01:00
server.async_loading = 0;
2020-09-03 08:47:29 +03:00
blockingOperationEnds();
2019-07-01 15:22:29 +03:00
rdbFileBeingLoaded = NULL;
2019-10-29 17:59:09 +02:00
/* Fire the loading modules end event. */
2024-04-05 16:59:55 -07:00
moduleFireServerEvent(VALKEYMODULE_EVENT_LOADING,
2019-10-29 17:59:09 +02:00
success ?
2024-04-05 16:59:55 -07:00
VALKEYMODULE_SUBEVENT_LOADING_ENDED :
VALKEYMODULE_SUBEVENT_LOADING_FAILED,
Always create base AOF file when redis start from empty. (#10102)
Force create a BASE file (use a foreground `rewriteAppendOnlyFile`) when redis starts from an
empty data set and `appendonly` is yes.
The reasoning is that normally, after redis is running for some time, and the AOF has gone through
a few rewrites, there's always a base rdb file. And the scenario where the base file is missing is
kinda rare (happens only at empty startup), so this change normalizes it.
But more importantly, there are or could be some complex modules that are started with some
configuration, when they create persistence they write that configuration to RDB AUX fields, so
that they can always know with which configuration the persistence file they're loading was
created (could be critical). There is (was) one scenario in which they could load their persisted data,
and that configuration was missing, and this change fixes it.
Add a new module event: REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START, similar to
REDISMODULE_SUBEVENT_PERSISTENCE_AOF_START which is async.
Co-authored-by: Oran Agra <oran@redislabs.com>
2022-01-13 14:49:26 +08:00
NULL);
2019-10-29 17:59:09 +02:00
}
void startSaving(int rdbflags) {
2022-06-07 22:38:31 +08:00
/* Fire the persistence modules start event. */
2019-10-29 17:59:09 +02:00
int subevent;
2022-01-13 14:49:26 +08:00
if (rdbflags & RDBFLAGS_AOF_PREAMBLE && getpid() != server.pid)
2024-04-05 16:59:55 -07:00
subevent = VALKEYMODULE_SUBEVENT_PERSISTENCE_AOF_START;
2022-01-13 14:49:26 +08:00
else if (rdbflags & RDBFLAGS_AOF_PREAMBLE)
2024-04-05 16:59:55 -07:00
subevent = VALKEYMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START;
2019-10-29 17:59:09 +02:00
else if (getpid() != server.pid)
2024-04-05 16:59:55 -07:00
subevent = VALKEYMODULE_SUBEVENT_PERSISTENCE_RDB_START;
2019-10-29 17:59:09 +02:00
else
2024-04-05 16:59:55 -07:00
subevent = VALKEYMODULE_SUBEVENT_PERSISTENCE_SYNC_RDB_START;
moduleFireServerEvent(VALKEYMODULE_EVENT_PERSISTENCE, subevent, NULL);
2019-10-29 17:59:09 +02:00
}
void stopSaving(int success) {
/* Fire the persistence modules end event. */
2024-04-05 16:59:55 -07:00
moduleFireServerEvent(VALKEYMODULE_EVENT_PERSISTENCE,
2019-10-29 17:59:09 +02:00
success ?
2024-04-05 16:59:55 -07:00
VALKEYMODULE_SUBEVENT_PERSISTENCE_ENDED :
VALKEYMODULE_SUBEVENT_PERSISTENCE_FAILED,
2019-10-29 17:59:09 +02:00
NULL);
2010-11-08 11:52:03 +01:00
}
2012-12-12 15:59:22 +02:00
/* Track loading progress in order to serve clients from time to time
and if needed calculate rdb checksum */
void rdbLoadProgressCallback(rio *r, const void *buf, size_t len) {
if (server.rdb_checksum)
rioGenericUpdateChecksum(r, buf, len);
if (server.loading_process_events_interval_bytes &&
2013-12-09 13:32:44 +01:00
(r->processed_bytes + len) / server.loading_process_events_interval_bytes > r->processed_bytes / server.loading_process_events_interval_bytes)
{
2015-07-27 09:41:48 +02:00
if (server.masterhost && server.repl_state == REPL_STATE_TRANSFER)
2013-12-10 18:38:26 +01:00
replicationSendNewlineToMaster();
2022-01-04 01:14:13 +08:00
loadingAbsProgress(r->processed_bytes);
2014-04-24 17:36:47 +02:00
processEventsWhileBlocked();
2019-10-29 17:59:09 +02:00
processModuleLoadingProgressEvent(0);
2012-12-12 15:59:22 +02:00
}
2022-05-31 13:07:33 +08:00
if (server.repl_state == REPL_STATE_TRANSFER && rioCheckType(r) == RIO_TYPE_CONN) {
atomicIncr(server.stat_net_repl_input_bytes, len);
}
2012-12-12 15:59:22 +02:00
}
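The interval-crossing test in rdbLoadProgressCallback() in isolation, with made-up numbers: with a 2MB interval, going from 3.9MB to 4.1MB of processed bytes crosses the 4MB boundary (4.1/2 = 2 > 3.9/2 = 1), so the progress/event work runs roughly once per interval. A tiny self-contained check:

#include <assert.h>
#include <stddef.h>

/* Same arithmetic as the condition above, factored out for illustration. */
static int sketch_crosses_interval(size_t processed, size_t len, size_t interval) {
    return interval && (processed + len) / interval > processed / interval;
}

int main(void) {
    size_t MB = 1024 * 1024;
    assert(sketch_crosses_interval((size_t)(3.9 * MB), (size_t)(0.2 * MB), 2 * MB) == 1);
    assert(sketch_crosses_interval((size_t)(3.0 * MB), (size_t)(0.2 * MB), 2 * MB) == 0);
    return 0;
}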
2021-12-26 09:03:37 +02:00
/* Load a functions library payload from the rdb into the given lib_ctx.
* The err output parameter is optional and will be set with a relevant error
* message on failure; it is the caller's responsibility to free the error
2022-01-20 11:10:33 +02:00
* message on failure.
*
* The lib_ctx argument is also optional. If NULL is given, only verify the rdb
* structure without performing the actual functions loading (a caller-side sketch follows this function). */
2022-08-15 21:41:44 +03:00
int rdbFunctionLoad(rio *rdb, int ver, functionsLibCtx *lib_ctx, int rdbflags, sds *err) {
2021-10-07 14:41:26 +03:00
UNUSED(ver);
2021-12-26 09:03:37 +02:00
sds error = NULL;
2022-04-05 10:27:24 +03:00
sds final_payload = NULL;
2021-10-07 14:41:26 +03:00
int res = C_ERR;
2022-08-15 21:41:44 +03:00
if (!(final_payload = rdbGenericLoadStringObject(rdb, RDB_LOAD_SDS, NULL))) {
error = sdsnew("Failed loading library payload");
goto done;
2021-10-07 14:41:26 +03:00
}
2022-01-20 11:10:33 +02:00
if (lib_ctx) {
2022-04-05 10:27:24 +03:00
sds library_name = NULL;
2023-08-02 11:43:31 +03:00
if (!(library_name = functionsCreateWithLibraryCtx(final_payload, rdbflags & RDBFLAGS_ALLOW_DUP, &error, lib_ctx, 0))) {
2022-01-20 11:10:33 +02:00
if (!error) {
error = sdsnew("Failed creating the library");
}
2022-04-05 10:27:24 +03:00
goto done;
2021-12-26 09:03:37 +02:00
}
2022-04-05 10:27:24 +03:00
sdsfree(library_name);
2021-10-07 14:41:26 +03:00
}
res = C_OK;
2022-04-05 10:27:24 +03:00
done:
if (final_payload) sdsfree(final_payload);
2021-12-26 09:03:37 +02:00
if (error) {
if (err) {
*err = error;
} else {
serverLog(LL_WARNING, "Failed creating function, %s", error);
sdsfree(error);
}
}
2021-10-07 14:41:26 +03:00
return res;
}
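A hedged sketch of the two ways rdbFunctionLoad() can be used per its comment: with a functionsLibCtx to actually load the library, or with NULL to merely verify the payload structure. The wrapper name here is hypothetical, and passing 0 as the flags is assumed to mean "no special rdb flags":

#include "server.h"
#include "functions.h"

static int sketch_verify_function_payload(rio *rdb, int ver) {
    sds err = NULL;
    /* lib_ctx == NULL: decode and validate the payload without registering it. */
    int res = rdbFunctionLoad(rdb, ver, NULL, 0, &err);
    if (res != C_OK && err) {
        serverLog(LL_WARNING, "Function payload rejected: %s", err);
        sdsfree(err);
    }
    return res;
}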
2016-08-11 15:27:23 +02:00
/* Load an RDB file from the rio stream 'rdb'. On success C_OK is returned,
* otherwise C_ERR is returned and 'errno' is set accordingly. */
2021-10-07 14:41:26 +03:00
int rdbLoadRio(rio *rdb, int rdbflags, rdbSaveInfo *rsi) {
Redis Function Libraries (#10004)
# Redis Function Libraries
This PR implements Redis Function Libraries as described in: https://github.com/redis/redis/issues/9906.
The purpose of libraries is to provide better code sharing between functions by allowing the creation of multiple
functions in a single command. Functions that were created together can safely share code between
each other without worrying about compatibility issues and versioning.
Creating a new library is done using 'FUNCTION LOAD' command (full API is described below)
This PR introduces a new struct called libraryInfo, libraryInfo holds information about a library:
* name - name of the library
* engine - engine used to create the library
* code - library code
* description - library description
* functions - the functions exposed by the library
When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo.
Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo.
As a result, the engine will create one or more functions by calling 'libraryCreateFunction'.
The new funcion will be added to the newly created libraryInfo. So far Everything is happening
locally on the libraryInfo so it is easy to abort the operation (in case of an error) by simply
freeing the libraryInfo. After the library info is fully constructed we start the joining phase by
which we will join the new library to the other libraries currently exist on Redis.
The joining phase make sure there is no function collision and add the library to the
librariesCtx (renamed from functionCtx). LibrariesCtx is used all around the code in the exact
same way as functionCtx was used (with respect to RDB loading, replicatio, ...).
The only difference is that apart from function dictionary (maps function name to functionInfo
object), the librariesCtx contains also a libraries dictionary that maps library name to libraryInfo object.
## New API
### FUNCTION LOAD
`FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
Create a new library with the given parameters:
* ENGINE - REPLACE Engine name to use to create the library.
* LIBRARY NAME - The new library name.
* REPLACE - If the library already exists, replace it.
* DESCRIPTION - Library description.
* CODE - Library code.
Return "OK" on success, or error on the following cases:
* Library name already taken and REPLACE was not used
* Name collision with another existing library (even if replace was uses)
* Library registration failed by the engine (usually compilation error)
## Changed API
### FUNCTION LIST
`FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
Command was modified to also allow getting libraries code (so `FUNCTION INFO` command is no longer
needed and removed). In addition the command gets an option argument, `LIBRARYNAME` allows you to
only get libraries that match the given `LIBRARYNAME` pattern. By default, it returns all libraries.
### INFO MEMORY
Added number of libraries to `INFO MEMORY`
### Commands flags
`DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands
as commands that add new data to the dateset (functions are data) and so we want to disallows
to run those commands on OOM.
## Removed API
* FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
* FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899
## Lua engine changes
When the Lua engine gets the code given on `FUNCTION LOAD` command, it immediately runs it, we call
this run the loading run. Loading run is not a usual script run, it is not possible to invoke any
Redis command from within the load run.
Instead there is a new API provided by `library` object. The new API's:
* `redis.log` - behave the same as `redis.log`
* `redis.register_function` - register a new function to the library
The loading run purpose is to register functions using the new `redis.register_function` API.
Any attempt to use any other API will result in an error. In addition, the load run is has a time
limit of 500ms, error is raise on timeout and the entire operation is aborted.
### `redis.register_function`
`redis.register_function(<function_name>, <callback>, [<description>])`
This new API allows users to register a new function that will be linked to the newly created library.
This API can only be called during the load run (see definition above). Any attempt to use it outside
of the load run will result in an error.
The parameters pass to the API are:
* function_name - Function name (must be a Lua string)
* callback - Lua function object that will be called when the function is invokes using fcall/fcall_ro
* description - Function description, optional (must be a Lua string).
### Example
The following example creates a library called `lib` with 2 functions, `f1` and `f1`, returns 1 and 2 respectively:
```
local function f1(keys, args)
return 1
end
local function f2(keys, args)
return 2
end
redis.register_function('f1', f1)
redis.register_function('f2', f2)
```
Notice: Unlike `eval`, functions inside a library get KEYS and ARGV as arguments to the
functions and not as globals.
### Technical Details
On the load run we only want the user to be able to call a white list of APIs. This way, in
the future, if new APIs are added, they will not be available to the load run
unless specifically added to this white list. We put the white list on the `library` object and
make sure the `library` object is only available to the load run by using the [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API (see the sketch below). This API allows us to set
the `globals` of a function (and all the functions it creates). Before starting the load run we
create a new fresh Lua table (call it `g`) that only contains the `library` API (we make sure
to set global protection on this table just like the general global protection that already exists
today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv)
to set `g` as the global table of the load run. After the load run finishes we update `g`'s
metatable and set its `__index` and `__newindex` functions to be `_G` (Lua default globals);
we also pop out the `library` object as we do not need it anymore.
This way, any function that was created on the load run (and will be invoked using `fcall`) will
see the default globals as it expects to see them and will not have the `library` API anymore.
An important outcome of this new approach is that now we can achieve a distinct global table
for each library (it is not yet like that, but it is very easy to achieve now). In the future we can
decide to remove global protection because globals on different libraries will not collide, or we
can choose to give different APIs to different libraries based on some configuration or input.
Notice that this technique was meant to prevent errors and was not meant to prevent a malicious
user from exploiting it. For example, the load run can still save the `library` object in some local
variable and then use it in the `fcall` context. To prevent such malicious use, the C code also makes
sure it is running in the right context and raises an error if not.
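A minimal C sketch of the lua_setfenv trick described in this section, assuming a plain Lua 5.1 state; the function name and the placeholder population of `g` are illustrative and not taken from the Redis sources.
```
#include <lua.h>
#include <lauxlib.h>

/* Illustrative only: run a code chunk with a fresh environment table so it
 * can only see what we explicitly expose (the whitelisted 'library' API). */
static void run_load_step_sandboxed(lua_State *L, const char *code) {
    if (luaL_loadstring(L, code) != 0) {  /* compile; chunk pushed on success */
        lua_pop(L, 1);                    /* pop the compile error message */
        return;
    }
    lua_newtable(L);                      /* g: the fresh globals table */
    /* ... populate g with the whitelisted API (e.g. register_function) ... */
    lua_setfenv(L, -2);                   /* pop g, make it the chunk's env */
    if (lua_pcall(L, 0, 0, 0) != 0)       /* the loading run only sees g */
        lua_pop(L, 1);                    /* pop the runtime error message */
}
```
After the loading run, as described above, `g`'s metatable gets `__index`/`__newindex` pointing at `_G`, so functions registered during the run later see the default globals.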
2022-01-06 13:39:38 +02:00
functionsLibCtx *functions_lib_ctx = functionsLibCtxGetCurrent();
rdbLoadingCtx loading_ctx = {.dbarray = server.db, .functions_lib_ctx = functions_lib_ctx};
2021-10-07 14:41:26 +03:00
int retval = rdbLoadRioWithLoadingCtx(rdb, rdbflags, rsi, &loading_ctx);
return retval;
}
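As a usage sketch of the loader entry point above, assuming the caller has already opened a FILE* for the RDB, that the includes at the top of this file are in scope, and leaving out error handling and the loading-state bookkeeping (startLoading/stopLoading) that the real server performs:
```
/* Hedged sketch: wrap an on-disk RDB file in a file-backed rio stream and
 * feed it to rdbLoadRio. The helper name is illustrative only. */
static int load_rdb_from_fp(FILE *fp, rdbSaveInfo *rsi) {
    rio rdb;
    rioInitWithFile(&rdb, fp);           /* file-backed rio stream */
    return rdbLoadRio(&rdb, RDBFLAGS_NONE, rsi);
}
```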
/* Load an RDB file from the rio stream 'rdb'. On success C_OK is returned,
2022-08-04 15:47:37 +08:00
* otherwise C_ERR is returned.
2021-10-07 14:41:26 +03:00
* The rdb_loading_ctx argument holds the objects to which the rdb will be loaded,
Redis Function Libraries (#10004)
2022-01-06 13:39:38 +02:00
* currently it only allows setting the db object and functionLibCtx to which the data
2021-10-07 14:41:26 +03:00
* will be loaded (in the future it might contain more such objects). */
int rdbLoadRioWithLoadingCtx(rio *rdb, int rdbflags, rdbSaveInfo *rsi, rdbLoadingCtx *rdb_loading_ctx) {
2021-09-13 15:39:11 +08:00
uint64_t dbid = 0;
2011-06-14 15:34:27 +02:00
int type, rdbver;
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries should be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time; in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from the random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find the slot for a specific key index. For example, if there are 10 keys in slot 0, we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between the client and the server (see the sketch after this note). This has an interesting side effect: now you'll be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places; in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). This is also kept for O(1) expires computation as well.
## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from us not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
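To make the cursor layout described in the scan API bullet above concrete, here is a small, self-contained illustration; the 14-bit slot width (16384 slots) and the helper names are assumptions of this sketch, not the actual Redis implementation:
```
#include <stdint.h>
#include <assert.h>

#define SLOT_BITS 14                              /* 16384 cluster slots fit in 14 bits */
#define SLOT_MASK ((1ULL << SLOT_BITS) - 1)

/* Pack a per-slot dictionary cursor and a slot id into one SCAN cursor:
 * the slot occupies the least significant bits. */
static uint64_t cursor_pack(uint64_t dict_cursor, uint16_t slot) {
    return (dict_cursor << SLOT_BITS) | (slot & SLOT_MASK);
}

/* Split a SCAN cursor back into its dictionary cursor and slot id. */
static void cursor_unpack(uint64_t cursor, uint64_t *dict_cursor, uint16_t *slot) {
    *slot = (uint16_t)(cursor & SLOT_MASK);
    *dict_cursor = cursor >> SLOT_BITS;
}

int main(void) {
    uint64_t c = cursor_pack(42, 100);            /* resume dict cursor 42 in slot 100 */
    uint64_t dc; uint16_t s;
    cursor_unpack(c, &dc, &s);
    assert(dc == 42 && s == 100);
    return 0;
}
```
Packing the slot into the least significant bits is what makes it possible to hand a bare slot id to SCAN as a starting cursor, as noted above.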
2023-10-14 23:58:26 -07:00
uint64_t db_size = 0, expires_size = 0;
2023-12-06 16:59:56 +08:00
int should_expand_db = 0;
2024-04-03 10:02:43 +07:00
serverDb *db = rdb_loading_ctx->dbarray + 0;
2010-06-22 00:07:48 +02:00
char buf[1024];
2021-08-06 03:42:20 +08:00
int error;
2021-09-13 15:39:11 +08:00
long long empty_keys_skipped = 0;
2010-06-22 00:07:48 +02:00
2016-08-11 15:27:23 +02:00
rdb->update_cksum = rdbLoadProgressCallback;
rdb->max_processing_chunk = server.loading_process_events_interval_bytes;
if (rioRead(rdb, buf, 9) == 0) goto eoferr;
2010-06-22 00:07:48 +02:00
buf[9] = '\0';
if (memcmp(buf, "REDIS", 5) != 0) {
2015-07-27 09:41:48 +02:00
serverLog(LL_WARNING, "Wrong signature trying to load DB from file");
2015-07-26 23:17:55 +02:00
return C_ERR;
2010-06-22 00:07:48 +02:00
}
rdbver = atoi(buf + 5);
2015-07-27 09:41:48 +02:00
if (rdbver < 1 || rdbver > RDB_VERSION) {
serverLog(LL_WARNING, "Can't handle RDB format version %d", rdbver);
2015-07-26 23:17:55 +02:00
return C_ERR;
2010-06-22 00:07:48 +02:00
}
2010-11-08 11:52:03 +01:00
2018-03-15 16:24:53 +01:00
/* Key-specific attributes, set by opcodes before the key type. */
2018-06-20 14:40:18 +07:00
long long lru_idle = -1, lfu_freq = -1, expiretime = -1, now = mstime();
2018-03-15 16:24:53 +01:00
long long lru_clock = LRU_CLOCK();
2019-03-02 21:17:40 +01:00
2010-06-22 00:07:48 +02:00
while (1) {
2020-04-09 10:24:10 +02:00
sds key;
robj *val;
2010-11-08 11:52:03 +01:00
2010-06-22 00:07:48 +02:00
/* Read type. */
2016-08-11 15:27:23 +02:00
if ((type = rdbLoadType(rdb)) == -1) goto eoferr;
2015-01-07 15:25:58 +01:00
/* Handle special types. */
2015-07-27 09:41:48 +02:00
if (type == RDB_OPCODE_EXPIRETIME) {
2015-01-07 15:25:58 +01:00
/* EXPIRETIME: load an expire associated with the next key
 * to load. Note that after loading an expire we need to
 * load the actual type, and continue. */
2019-07-17 17:30:02 +02:00
expiretime = rdbLoadTime(rdb);
2011-11-09 16:51:19 +01:00
expiretime *= 1000;
2019-07-17 17:30:02 +02:00
if (rioGetReadError(rdb)) goto eoferr;
2018-03-15 16:24:53 +01:00
continue; /* Read next opcode. */
2015-07-27 09:41:48 +02:00
} else if (type == RDB_OPCODE_EXPIRETIME_MS) {
2015-01-07 15:25:58 +01:00
/* EXPIRETIME_MS: milliseconds precision expire times introduced
 * with RDB v3. Like EXPIRETIME but with more precision. */
2019-07-17 17:30:02 +02:00
expiretime = rdbLoadMillisecondTime(rdb, rdbver);
if (rioGetReadError(rdb)) goto eoferr;
2018-03-15 16:24:53 +01:00
continue; /* Read next opcode. */
} else if (type == RDB_OPCODE_FREQ) {
/* FREQ: LFU frequency. */
uint8_t byte;
if (rioRead(rdb, &byte, 1) == 0) goto eoferr;
lfu_freq = byte;
2018-03-15 16:33:18 +01:00
continue; /* Read next opcode. */
2018-03-15 16:24:53 +01:00
} else if (type == RDB_OPCODE_IDLE) {
/* IDLE: LRU idle time. */
2018-06-20 14:40:18 +07:00
uint64_t qword;
if ((qword = rdbLoadLen(rdb, NULL)) == RDB_LENERR) goto eoferr;
lru_idle = qword;
2018-03-15 16:33:18 +01:00
continue; /* Read next opcode. */
2015-07-27 09:41:48 +02:00
} else if (type == RDB_OPCODE_EOF) {
2015-01-07 15:25:58 +01:00
/* EOF: End of file, exit the main loop. */
2011-05-13 22:14:39 +02:00
break;
2015-07-27 09:41:48 +02:00
} else if (type == RDB_OPCODE_SELECTDB) {
2015-01-07 15:25:58 +01:00
/* SELECTDB: Select the specified database. */
2018-03-15 16:24:53 +01:00
if ((dbid = rdbLoadLen(rdb, NULL)) == RDB_LENERR) goto eoferr;
2010-06-22 00:07:48 +02:00
if (dbid >= (unsigned)server.dbnum) {
2015-07-27 09:41:48 +02:00
serverLog(LL_WARNING,
2015-01-07 15:25:58 +01:00
"FATAL: Data file was created with a Redis "
"server configured to handle more than %d "
"databases. Exiting\n", server.dbnum);
2010-06-22 00:07:48 +02:00
exit(1);
}
2021-10-07 14:41:26 +03:00
db = rdb_loading_ctx->dbarray + dbid;
2018-03-15 16:24:53 +01:00
continue; /* Read next opcode. */
2015-07-27 09:41:48 +02:00
} else if (type == RDB_OPCODE_RESIZEDB) {
2015-01-07 15:25:58 +01:00
/* RESIZEDB: Hint about the size of the keys in the currently
 * selected database, in order to avoid useless rehashing. */
2016-08-11 15:27:23 +02:00
if ((db_size = rdbLoadLen(rdb, NULL)) == RDB_LENERR)
2015-01-07 11:08:41 +01:00
goto eoferr;
2016-08-11 15:27:23 +02:00
if ((expires_size = rdbLoadLen(rdb, NULL)) == RDB_LENERR)
2015-01-07 11:08:41 +01:00
goto eoferr;
2023-12-06 16:59:56 +08:00
should_expand_db = 1;
Replace cluster metadata with slot specific dictionaries (#11695)
2023-10-14 23:58:26 -07:00
continue; /* Read next opcode. */
} else if (type == RDB_OPCODE_SLOT_INFO) {
uint64_t slot_id, slot_size, expires_slot_size;
if ((slot_id = rdbLoadLen(rdb, NULL)) == RDB_LENERR)
goto eoferr;
if ((slot_size = rdbLoadLen(rdb, NULL)) == RDB_LENERR)
goto eoferr;
if ((expires_slot_size = rdbLoadLen(rdb, NULL)) == RDB_LENERR)
goto eoferr;
if (!server.cluster_enabled) {
continue; /* Ignore gracefully. */
}
/* In cluster mode we resize individual slot specific dictionaries based on the number of keys that slot holds. */
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it to a new data structure, `kvstore`. i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
kvstoreDictExpand(db->keys, slot_id, slot_size);
2024-02-12 21:46:06 +02:00
kvstoreDictExpand(db->expires, slot_id, expires_slot_size);
Replace cluster metadata with slot specific dictionaries (#11695)
2023-10-14 23:58:26 -07:00
should_expand_db = 0;
2018-03-15 16:24:53 +01:00
continue; /* Read next opcode. */
2015-07-27 09:41:48 +02:00
} else if (type == RDB_OPCODE_AUX) {
2015-01-08 08:56:35 +01:00
/* AUX: generic string-string fields. Used to add state to RDB
 * which is backward compatible. Implementations of RDB loading
Squash merging 125 typo/grammar/comment/doc PRs (#7773)
List of squashed commits or PRs
===============================
commit 66801ea
Author: hwware <wen.hui.ware@gmail.com>
Date: Mon Jan 13 00:54:31 2020 -0500
typo fix in acl.c
commit 46f55db
Author: Itamar Haber <itamar@redislabs.com>
Date: Sun Sep 6 18:24:11 2020 +0300
Updates a couple of comments
Specifically:
* RM_AutoMemory completed instead of pointing to docs
* Updated link to custom type doc
commit 61a2aa0
Author: xindoo <xindoo@qq.com>
Date: Tue Sep 1 19:24:59 2020 +0800
Correct errors in code comments
commit a5871d1
Author: yz1509 <pro-756@qq.com>
Date: Tue Sep 1 18:36:06 2020 +0800
fix typos in module.c
commit 41eede7
Author: bookug <bookug@qq.com>
Date: Sat Aug 15 01:11:33 2020 +0800
docs: fix typos in comments
commit c303c84
Author: lazy-snail <ws.niu@outlook.com>
Date: Fri Aug 7 11:15:44 2020 +0800
fix spelling in redis.conf
commit 1eb76bf
Author: zhujian <zhujianxyz@gmail.com>
Date: Thu Aug 6 15:22:10 2020 +0800
add a missing 'n' in comment
commit 1530ec2
Author: Daniel Dai <764122422@qq.com>
Date: Mon Jul 27 00:46:35 2020 -0400
fix spelling in tracking.c
commit e517b31
Author: Hunter-Chen <huntcool001@gmail.com>
Date: Fri Jul 17 22:33:32 2020 +0800
Update redis.conf
Co-authored-by: Itamar Haber <itamar@redislabs.com>
commit c300eff
Author: Hunter-Chen <huntcool001@gmail.com>
Date: Fri Jul 17 22:33:23 2020 +0800
Update redis.conf
Co-authored-by: Itamar Haber <itamar@redislabs.com>
commit 4c058a8
Author: 陈浩鹏 <chenhaopeng@heytea.com>
Date: Thu Jun 25 19:00:56 2020 +0800
Grammar fix and clarification
commit 5fcaa81
Author: bodong.ybd <bodong.ybd@alibaba-inc.com>
Date: Fri Jun 19 10:09:00 2020 +0800
Fix typos
commit 4caca9a
Author: Pruthvi P <pruthvi@ixigo.com>
Date: Fri May 22 00:33:22 2020 +0530
Fix typo eviciton => eviction
commit b2a25f6
Author: Brad Dunbar <dunbarb2@gmail.com>
Date: Sun May 17 12:39:59 2020 -0400
Fix a typo.
commit 12842ae
Author: hwware <wen.hui.ware@gmail.com>
Date: Sun May 3 17:16:59 2020 -0400
fix spelling in redis conf
commit ddba07c
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date: Sat May 2 23:25:34 2020 +0100
Correct a "conflicts" spelling error.
commit 8fc7bf2
Author: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
Date: Thu Apr 30 10:25:27 2020 +0900
docs: fix EXPIRE_FAST_CYCLE_DURATION to ACTIVE_EXPIRE_CYCLE_FAST_DURATION
commit 9b2b67a
Author: Brad Dunbar <dunbarb2@gmail.com>
Date: Fri Apr 24 11:46:22 2020 -0400
Fix a typo.
commit 0746f10
Author: devilinrust <63737265+devilinrust@users.noreply.github.com>
Date: Thu Apr 16 00:17:53 2020 +0200
Fix typos in server.c
commit 92b588d
Author: benjessop12 <56115861+benjessop12@users.noreply.github.com>
Date: Mon Apr 13 13:43:55 2020 +0100
Fix spelling mistake in lazyfree.c
commit 1da37aa
Merge: 2d4ba28 af347a8
Author: hwware <wen.hui.ware@gmail.com>
Date: Thu Mar 5 22:41:31 2020 -0500
Merge remote-tracking branch 'upstream/unstable' into expiretypofix
commit 2d4ba28
Author: hwware <wen.hui.ware@gmail.com>
Date: Mon Mar 2 00:09:40 2020 -0500
fix typo in expire.c
commit 1a746f7
Author: SennoYuki <minakami1yuki@gmail.com>
Date: Thu Feb 27 16:54:32 2020 +0800
fix typo
commit 8599b1a
Author: dongheejeong <donghee950403@gmail.com>
Date: Sun Feb 16 20:31:43 2020 +0000
Fix typo in server.c
commit f38d4e8
Author: hwware <wen.hui.ware@gmail.com>
Date: Sun Feb 2 22:58:38 2020 -0500
fix typo in evict.c
commit fe143fc
Author: Leo Murillo <leonardo.murillo@gmail.com>
Date: Sun Feb 2 01:57:22 2020 -0600
Fix a few typos in redis.conf
commit 1ab4d21
Author: viraja1 <anchan.viraj@gmail.com>
Date: Fri Dec 27 17:15:58 2019 +0530
Fix typo in Latency API docstring
commit ca1f70e
Author: gosth <danxuedexing@qq.com>
Date: Wed Dec 18 15:18:02 2019 +0800
fix typo in sort.c
commit a57c06b
Author: ZYunH <zyunhjob@163.com>
Date: Mon Dec 16 22:28:46 2019 +0800
fix-zset-typo
commit b8c92b5
Author: git-hulk <hulk.website@gmail.com>
Date: Mon Dec 16 15:51:42 2019 +0800
FIX: typo in cluster.c, onformation->information
commit 9dd981c
Author: wujm2007 <jim.wujm@gmail.com>
Date: Mon Dec 16 09:37:52 2019 +0800
Fix typo
commit e132d7a
Author: Sebastien Williams-Wynn <s.williamswynn.mail@gmail.com>
Date: Fri Nov 15 00:14:07 2019 +0000
Minor typo change
commit 47f44d5
Author: happynote3966 <01ssrmikururudevice01@gmail.com>
Date: Mon Nov 11 22:08:48 2019 +0900
fix comment typo in redis-cli.c
commit b8bdb0d
Author: fulei <fulei@kuaishou.com>
Date: Wed Oct 16 18:00:17 2019 +0800
Fix a spelling mistake of comments in defragDictBucketCallback
commit 0def46a
Author: fulei <fulei@kuaishou.com>
Date: Wed Oct 16 13:09:27 2019 +0800
fix some spelling mistakes of comments in defrag.c
commit f3596fd
Author: Phil Rajchgot <tophil@outlook.com>
Date: Sun Oct 13 02:02:32 2019 -0400
Typo and grammar fixes
Redis and its documentation are great -- just wanted to submit a few corrections in the spirit of Hacktoberfest. Thanks for all your work on this project. I use it all the time and it works beautifully.
commit 2b928cd
Author: KangZhiDong <worldkzd@gmail.com>
Date: Sun Sep 1 07:03:11 2019 +0800
fix typos
commit 33aea14
Author: Axlgrep <axlgrep@gmail.com>
Date: Tue Aug 27 11:02:18 2019 +0800
Fixed eviction spelling issues
commit e282a80
Author: Simen Flatby <simen@oms.no>
Date: Tue Aug 20 15:25:51 2019 +0200
Update comments to reflect prop name
In the comments the prop is referenced as replica-validity-factor,
but it is really named cluster-replica-validity-factor.
commit 74d1f9a
Author: Jim Green <jimgreen2013@qq.com>
Date: Tue Aug 20 20:00:31 2019 +0800
fix comment error, the code is ok
commit eea1407
Author: Liao Tonglang <liaotonglang@gmail.com>
Date: Fri May 31 10:16:18 2019 +0800
typo fix
fix cna't to can't
commit 0da553c
Author: KAWACHI Takashi <tkawachi@gmail.com>
Date: Wed Jul 17 00:38:16 2019 +0900
Fix typo
commit 7fc8fb6
Author: Michael Prokop <mika@grml.org>
Date: Tue May 28 17:58:42 2019 +0200
Typo fixes
s/familar/familiar/
s/compatiblity/compatibility/
s/ ot / to /
s/itsef/itself/
commit 5f46c9d
Author: zhumoing <34539422+zhumoing@users.noreply.github.com>
Date: Tue May 21 21:16:50 2019 +0800
typo-fixes
typo-fixes
commit 321dfe1
Author: wxisme <850885154@qq.com>
Date: Sat Mar 16 15:10:55 2019 +0800
typo fix
commit b4fb131
Merge: 267e0e6 3df1eb8
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date: Fri Feb 8 22:55:45 2019 +0200
Merge branch 'unstable' of antirez/redis into unstable
commit 267e0e6
Author: Nikitas Bastas <nikitasbst@gmail.com>
Date: Wed Jan 30 21:26:04 2019 +0200
Minor typo fix
commit 30544e7
Author: inshal96 <39904558+inshal96@users.noreply.github.com>
Date: Fri Jan 4 16:54:50 2019 +0500
remove an extra 'a' in the comments
commit 337969d
Author: BrotherGao <yangdongheng11@gmail.com>
Date: Sat Dec 29 12:37:29 2018 +0800
fix typo in redis.conf
commit 9f4b121
Merge: 423a030 e504583
Author: BrotherGao <yangdongheng@xiaomi.com>
Date: Sat Dec 29 11:41:12 2018 +0800
Merge branch 'unstable' of antirez/redis into unstable
commit 423a030
Merge: 42b02b7 46a51cd
Author: 杨东衡 <yangdongheng@xiaomi.com>
Date: Tue Dec 4 23:56:11 2018 +0800
Merge branch 'unstable' of antirez/redis into unstable
commit 42b02b7
Merge: 68c0e6e b8febe6
Author: Dongheng Yang <yangdongheng11@gmail.com>
Date: Sun Oct 28 15:54:23 2018 +0800
Merge pull request #1 from antirez/unstable
update local data
commit 714b589
Author: Christian <crifei93@gmail.com>
Date: Fri Dec 28 01:17:26 2018 +0100
fix typo "resulution"
commit e23259d
Author: garenchan <1412950785@qq.com>
Date: Wed Dec 26 09:58:35 2018 +0800
fix typo: segfauls -> segfault
commit a9359f8
Author: xjp <jianping_xie@aliyun.com>
Date: Tue Dec 18 17:31:44 2018 +0800
Fixed REDISMODULE_H spell bug
commit a12c3e4
Author: jdiaz <jrd.palacios@gmail.com>
Date: Sat Dec 15 23:39:52 2018 -0600
Fixes hyperloglog hash function comment block description
commit 770eb11
Author: 林上耀 <1210tom@163.com>
Date: Sun Nov 25 17:16:10 2018 +0800
fix typo
commit fd97fbb
Author: Chris Lamb <chris@chris-lamb.co.uk>
Date: Fri Nov 23 17:14:01 2018 +0100
Correct "unsupported" typo.
commit a85522d
Author: Jungnam Lee <jungnam.lee@oracle.com>
Date: Thu Nov 8 23:01:29 2018 +0900
fix typo in test comments
commit ade8007
Author: Arun Kumar <palerdot@users.noreply.github.com>
Date: Tue Oct 23 16:56:35 2018 +0530
Fixed grammatical typo
Fixed typo for word 'dictionary'
commit 869ee39
Author: Hamid Alaei <hamid.a85@gmail.com>
Date: Sun Aug 12 16:40:02 2018 +0430
fix documentations: (ThreadSafeContextStart/Stop -> ThreadSafeContextLock/Unlock), minor typo
commit f89d158
Author: Mayank Jain <mayankjain255@gmail.com>
Date: Tue Jul 31 23:01:21 2018 +0530
Updated README.md with some spelling corrections.
Made correction in spelling of some misspelled words.
commit 892198e
Author: dsomeshwar <someshwar.dhayalan@gmail.com>
Date: Sat Jul 21 23:23:04 2018 +0530
typo fix
commit 8a4d780
Author: Itamar Haber <itamar@redislabs.com>
Date: Mon Apr 30 02:06:52 2018 +0300
Fixes some typos
commit e3acef6
Author: Noah Rosamilia <ivoahivoah@gmail.com>
Date: Sat Mar 3 23:41:21 2018 -0500
Fix typo in /deps/README.md
commit 04442fb
Author: WuYunlong <xzsyeb@126.com>
Date: Sat Mar 3 10:32:42 2018 +0800
Fix typo in readSyncBulkPayload() comment.
commit 9f36880
Author: WuYunlong <xzsyeb@126.com>
Date: Sat Mar 3 10:20:37 2018 +0800
replication.c comment: run_id -> replid.
commit f866b4a
Author: Francesco 'makevoid' Canessa <makevoid@gmail.com>
Date: Thu Feb 22 22:01:56 2018 +0000
fix comment typo in server.c
commit 0ebc69b
Author: 줍 <jubee0124@gmail.com>
Date: Mon Feb 12 16:38:48 2018 +0900
Fix typo in redis.conf
Fix `five behaviors` to `eight behaviors` in [this sentence ](antirez/redis@unstable/redis.conf#L564)
commit b50a620
Author: martinbroadhurst <martinbroadhurst@users.noreply.github.com>
Date: Thu Dec 28 12:07:30 2017 +0000
Fix typo in valgrind.sup
commit 7d8f349
Author: Peter Boughton <peter@sorcerersisle.com>
Date: Mon Nov 27 19:52:19 2017 +0000
Update CONTRIBUTING; refer doc updates to redis-doc repo.
commit 02dec7e
Author: Klauswk <klauswk1@hotmail.com>
Date: Tue Oct 24 16:18:38 2017 -0200
Fix typo in comment
commit e1efbc8
Author: chenshi <baiwfg2@gmail.com>
Date: Tue Oct 3 18:26:30 2017 +0800
Correct two spelling errors of comments
commit 93327d8
Author: spacewander <spacewanderlzx@gmail.com>
Date: Wed Sep 13 16:47:24 2017 +0800
Update the comment for OBJ_ENCODING_EMBSTR_SIZE_LIMIT's value
The value of OBJ_ENCODING_EMBSTR_SIZE_LIMIT is 44 now instead of 39.
commit 63d361f
Author: spacewander <spacewanderlzx@gmail.com>
Date: Tue Sep 12 15:06:42 2017 +0800
Fix <prevlen> related doc in ziplist.c
According to the definition of ZIP_BIG_PREVLEN and other related code,
the guard of single byte <prevlen> should be 254 instead of 255.
commit ebe228d
Author: hanael80 <hanael80@gmail.com>
Date: Tue Aug 15 09:09:40 2017 +0900
Fix typo
commit 6b696e6
Author: Matt Robenolt <matt@ydekproductions.com>
Date: Mon Aug 14 14:50:47 2017 -0700
Fix typo in LATENCY DOCTOR output
commit a2ec6ae
Author: caosiyang <caosiyang@qiyi.com>
Date: Tue Aug 15 14:15:16 2017 +0800
Fix a typo: form => from
commit 3ab7699
Author: caosiyang <caosiyang@qiyi.com>
Date: Thu Aug 10 18:40:33 2017 +0800
Fix a typo: replicationFeedSlavesFromMaster() => replicationFeedSlavesFromMasterStream()
commit 72d43ef
Author: caosiyang <caosiyang@qiyi.com>
Date: Tue Aug 8 15:57:25 2017 +0800
fix a typo: servewr => server
commit 707c958
Author: Bo Cai <charpty@gmail.com>
Date: Wed Jul 26 21:49:42 2017 +0800
redis-cli.c typo: conut -> count.
Signed-off-by: Bo Cai <charpty@gmail.com>
commit b9385b2
Author: JackDrogon <jack.xsuperman@gmail.com>
Date: Fri Jun 30 14:22:31 2017 +0800
Fix some spell problems
commit 20d9230
Author: akosel <aaronjkosel@gmail.com>
Date: Sun Jun 4 19:35:13 2017 -0500
Fix typo
commit b167bfc
Author: Krzysiek Witkowicz <krzysiekwitkowicz@gmail.com>
Date: Mon May 22 21:32:27 2017 +0100
Fix #4008 small typo in comment
commit 2b78ac8
Author: Jake Clarkson <jacobwclarkson@gmail.com>
Date: Wed Apr 26 15:49:50 2017 +0100
Correct typo in tests/unit/hyperloglog.tcl
commit b0f1cdb
Author: Qi Luo <qiluo-msft@users.noreply.github.com>
Date: Wed Apr 19 14:25:18 2017 -0700
Fix typo
commit a90b0f9
Author: charsyam <charsyam@naver.com>
Date: Thu Mar 16 18:19:53 2017 +0900
fix typos
fix typos
fix typos
commit 8430a79
Author: Richard Hart <richardhart92@gmail.com>
Date: Mon Mar 13 22:17:41 2017 -0400
Fixed log message typo in listenToPort.
commit 481a1c2
Author: Vinod Kumar <kumar003vinod@gmail.com>
Date: Sun Jan 15 23:04:51 2017 +0530
src/db.c: Correct "save" -> "safe" typo
commit 586b4d3
Author: wangshaonan <wshn13@gmail.com>
Date: Wed Dec 21 20:28:27 2016 +0800
Fix typo they->the in helloworld.c
commit c1c4b5e
Author: Jenner <hypxm@qq.com>
Date: Mon Dec 19 16:39:46 2016 +0800
typo error
commit 1ee1a3f
Author: tielei <43289893@qq.com>
Date: Mon Jul 18 13:52:25 2016 +0800
fix some comments
commit 11a41fb
Author: Otto Kekäläinen <otto@seravo.fi>
Date: Sun Jul 3 10:23:55 2016 +0100
Fix spelling in documentation and comments
commit 5fb5d82
Author: francischan <f1ancis621@gmail.com>
Date: Tue Jun 28 00:19:33 2016 +0800
Fix outdated comments about redis.c file.
It should now refer to server.c file.
commit 6b254bc
Author: lmatt-bit <lmatt123n@gmail.com>
Date: Thu Apr 21 21:45:58 2016 +0800
Refine the comment of dictRehashMilliseconds func
SLAVECONF->REPLCONF in comment - by andyli029
commit ee9869f
Author: clark.kang <charsyam@naver.com>
Date: Tue Mar 22 11:09:51 2016 +0900
fix typos
commit f7b3b11
Author: Harisankar H <harisankarh@gmail.com>
Date: Wed Mar 9 11:49:42 2016 +0530
Typo correction: "faield" --> "failed"
Typo correction: "faield" --> "failed"
commit 3fd40fc
Author: Itamar Haber <itamar@redislabs.com>
Date: Thu Feb 25 10:31:51 2016 +0200
Fixes a typo in comments
commit 621c160
Author: Prayag Verma <prayag.verma@gmail.com>
Date: Mon Feb 1 12:36:20 2016 +0530
Fix typo in Readme.md
Spelling mistakes -
`eviciton` > `eviction`
`familar` > `familiar`
commit d7d07d6
Author: WonCheol Lee <toctoc21c@gmail.com>
Date: Wed Dec 30 15:11:34 2015 +0900
Typo fixed
commit a4dade7
Author: Felix Bünemann <buenemann@louis.info>
Date: Mon Dec 28 11:02:55 2015 +0100
[ci skip] Improve supervised upstart config docs
This mentions that "expect stop" is required for supervised upstart
to work correctly. See http://upstart.ubuntu.com/cookbook/#expect-stop
for an explanation.
commit d9caba9
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:30:03 2015 +1100
README: Remove trailing whitespace
commit 72d42e5
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:29:32 2015 +1100
README: Fix typo. th => the
commit dd6e957
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:29:20 2015 +1100
README: Fix typo. familar => familiar
commit 3a12b23
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:28:54 2015 +1100
README: Fix typo. eviciton => eviction
commit 2d1d03b
Author: daurnimator <quae@daurnimator.com>
Date: Mon Dec 21 18:21:45 2015 +1100
README: Fix typo. sever => server
commit 3973b06
Author: Itamar Haber <itamar@garantiadata.com>
Date: Sat Dec 19 17:01:20 2015 +0200
Typo fix
commit 4f2e460
Author: Steve Gao <fu@2token.com>
Date: Fri Dec 4 10:22:05 2015 +0800
Update README - fix typos
commit b21667c
Author: binyan <binbin.yan@nokia.com>
Date: Wed Dec 2 22:48:37 2015 +0800
delete redundancy color judge in sdscatcolor
commit 88894c7
Author: binyan <binbin.yan@nokia.com>
Date: Wed Dec 2 22:14:42 2015 +0800
the example output shoule be HelloWorld
commit 2763470
Author: binyan <binbin.yan@nokia.com>
Date: Wed Dec 2 17:41:39 2015 +0800
modify error word keyevente
Signed-off-by: binyan <binbin.yan@nokia.com>
commit 0847b3d
Author: Bruno Martins <bscmartins@gmail.com>
Date: Wed Nov 4 11:37:01 2015 +0000
typo
commit bbb9e9e
Author: dawedawe <dawedawe@gmx.de>
Date: Fri Mar 27 00:46:41 2015 +0100
typo: zimap -> zipmap
commit 5ed297e
Author: Axel Advento <badwolf.bloodseeker.rev@gmail.com>
Date: Tue Mar 3 15:58:29 2015 +0800
Fix 'salve' typos to 'slave'
commit edec9d6
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date: Wed Jun 12 14:12:47 2019 +0200
Update README.md
Co-Authored-By: Qix <Qix-@users.noreply.github.com>
commit 692a7af
Author: LudwikJaniuk <ludvig.janiuk@gmail.com>
Date: Tue May 28 14:32:04 2019 +0200
grammar
commit d962b0a
Author: Nick Frost <nickfrostatx@gmail.com>
Date: Wed Jul 20 15:17:12 2016 -0700
Minor grammar fix
commit 24fff01aaccaf5956973ada8c50ceb1462e211c6 (typos)
Author: Chad Miller <chadm@squareup.com>
Date: Tue Sep 8 13:46:11 2020 -0400
Fix faulty comment about operation of unlink()
commit 3cd5c1f3326c52aa552ada7ec797c6bb16452355
Author: Kevin <kevin.xgr@gmail.com>
Date: Wed Nov 20 00:13:50 2019 +0800
Fix typo in server.c.
From a83af59 Mon Sep 17 00:00:00 2001
From: wuwo <wuwo@wacai.com>
Date: Fri, 17 Mar 2017 20:37:45 +0800
Subject: [PATCH] falure to failure
From c961896 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=B7=A6=E6=87=B6?= <veficos@gmail.com>
Date: Sat, 27 May 2017 15:33:04 +0800
Subject: [PATCH] fix typo
From e600ef2 Mon Sep 17 00:00:00 2001
From: "rui.zou" <rui.zou@yunify.com>
Date: Sat, 30 Sep 2017 12:38:15 +0800
Subject: [PATCH] fix a typo
From c7d07fa Mon Sep 17 00:00:00 2001
From: Alexandre Perrin <alex@kaworu.ch>
Date: Thu, 16 Aug 2018 10:35:31 +0200
Subject: [PATCH] deps README.md typo
From b25cb67 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 10:55:37 +0300
Subject: [PATCH 1/2] fix typos in header
From ad28ca6 Mon Sep 17 00:00:00 2001
From: Guy Korland <gkorland@gmail.com>
Date: Wed, 26 Sep 2018 11:02:36 +0300
Subject: [PATCH 2/2] fix typos
commit 34924cdedd8552466fc22c1168d49236cb7ee915
Author: Adrian Lynch <adi_ady_ade@hotmail.com>
Date: Sat Apr 4 21:59:15 2015 +0100
Typos fixed
commit fd2a1e7
Author: Jan <jsteemann@users.noreply.github.com>
Date: Sat Oct 27 19:13:01 2018 +0200
Fix typos
Fix typos
commit e14e47c1a234b53b0e103c5f6a1c61481cbcbb02
Author: Andy Lester <andy@petdance.com>
Date: Fri Aug 2 22:30:07 2019 -0500
Fix multiple misspellings of "following"
commit 79b948ce2dac6b453fe80995abbcaac04c213d5a
Author: Andy Lester <andy@petdance.com>
Date: Fri Aug 2 22:24:28 2019 -0500
Fix misspelling of create-cluster
commit 1fffde52666dc99ab35efbd31071a4c008cb5a71
Author: Andy Lester <andy@petdance.com>
Date: Wed Jul 31 17:57:56 2019 -0500
Fix typos
commit 204c9ba9651e9e05fd73936b452b9a30be456cfe
Author: Xiaobo Zhu <xiaobo.zhu@shopee.com>
Date: Tue Aug 13 22:19:25 2019 +0800
fix typos
Squashed commit of the following:
commit 1d9aaf8
Author: danmedani <danmedani@gmail.com>
Date: Sun Aug 2 11:40:26 2015 -0700
README typo fix.
Squashed commit of the following:
commit 32bfa7c
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date: Mon Jul 6 21:15:08 2015 +0200
Fixed grammer
Squashed commit of the following:
commit b24f69c
Author: Sisir Koppaka <sisir.koppaka@gmail.com>
Date: Mon Mar 2 22:38:45 2015 -0500
utils/hashtable/rehashing.c: Fix typos
Squashed commit of the following:
commit 4e04082
Author: Erik Dubbelboer <erik@dubbelboer.com>
Date: Mon Mar 23 08:22:21 2015 +0000
Small config file documentation improvements
Squashed commit of the following:
commit acb8773
Author: ctd1500 <ctd1500@gmail.com>
Date: Fri May 8 01:52:48 2015 -0700
Typo and grammar fixes in readme
commit 2eb75b6
Author: ctd1500 <ctd1500@gmail.com>
Date: Fri May 8 01:36:18 2015 -0700
fixed redis.conf comment
Squashed commit of the following:
commit a8249a2
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Date: Fri Dec 11 11:39:52 2015 +0530
Revise correction of typos.
Squashed commit of the following:
commit 3c02028
Author: zhaojun11 <zhaojun11@jd.com>
Date: Wed Jan 17 19:05:28 2018 +0800
Fix typos include two code typos in cluster.c and latency.c
Squashed commit of the following:
commit 9dba47c
Author: q191201771 <191201771@qq.com>
Date: Sat Jan 4 11:31:04 2020 +0800
fix function listCreate comment in adlist.c
Update src/server.c
commit 2c7c2cb536e78dd211b1ac6f7bda00f0f54faaeb
Author: charpty <charpty@gmail.com>
Date: Tue May 1 23:16:59 2018 +0800
server.c typo: modules system dictionary type comment
Signed-off-by: charpty <charpty@gmail.com>
commit a8395323fb63cb59cb3591cb0f0c8edb7c29a680
Author: Itamar Haber <itamar@redislabs.com>
Date: Sun May 6 00:25:18 2018 +0300
Updates test_helper.tcl's help with undocumented options
Specifically:
* Host
* Port
* Client
commit bde6f9ced15755cd6407b4af7d601b030f36d60b
Author: wxisme <850885154@qq.com>
Date: Wed Aug 8 15:19:19 2018 +0800
fix comments in deps files
commit 3172474ba991532ab799ee1873439f3402412331
Author: wxisme <850885154@qq.com>
Date: Wed Aug 8 14:33:49 2018 +0800
fix some comments
commit 01b6f2b6858b5cf2ce4ad5092d2c746e755f53f0
Author: Thor Juhasz <thor@juhasz.pro>
Date: Sun Nov 18 14:37:41 2018 +0100
Minor fixes to comments
Found some parts a little unclear on a first read, which prompted me to have a better look at the file and fix some minor things I noticed.
Fixing minor typos and grammar. There are no changes to configuration options.
These changes are only meant to help the user better understand the explanations of the various configuration options.
2020-09-10 13:43:38 +03:00
* are required to skip AUX fields they don't understand.
2015-01-08 08:56:35 +01:00
*
* An AUX field is composed of two strings: key and value. */
robj *auxkey, *auxval;
2016-08-11 15:27:23 +02:00
if ((auxkey = rdbLoadStringObject(rdb)) == NULL) goto eoferr;
2021-11-29 12:09:08 +02:00
if ((auxval = rdbLoadStringObject(rdb)) == NULL) {
    decrRefCount(auxkey);
    goto eoferr;
}
2015-01-08 08:56:35 +01:00
if (((char*)auxkey->ptr)[0] == '%') {
    /* All the fields with a name starting with '%' are considered
     * information fields and are logged at startup with a log
     * level of NOTICE. */
2015-07-27 09:41:48 +02:00
    serverLog(LL_NOTICE, "RDB '%s': %s",
2015-01-21 14:51:42 +01:00
        (char*)auxkey->ptr,
        (char*)auxval->ptr);
PSYNC2: different improvements to Redis replication.
The gist of the changes is that now, partial resynchronizations between
slaves and masters (without the need of a full resync with RDB transfer
and so forth), work in a number of cases when it was impossible
in the past. For instance:
1. When a slave is promoted to master, the slaves of the old master can
partially resynchronize with the new master.
2. Chained slaves (slaves of slaves) can be moved to replicate to other
slaves or the master itself, without requiring a full resync.
3. The master itself, after being turned into a slave, is able to
partially resynchronize with the new master, when it joins replication
again.
In order to obtain this, the following main changes were operated:
* Slaves also take a replication backlog, not just masters.
* Same stream replication for all the slaves and sub slaves. The
replication stream is identical from the top level master to its slaves
and is also the same from the slaves to their sub-slaves and so forth.
This means that if a slave is later promoted to master, it has the
same replication backlog, and can partially resynchronize with its
slaves (that were previously slaves of the old master).
* A given replication history is no longer identified by the `runid` of
a Redis node. There is instead a `replication ID` which changes every
time the instance has a new history no longer coherent with the past
one. So, for example, slaves publish the same replication history of
their master, however when they are turned into masters, they publish
a new replication ID, but still remember the old ID, so that they are
able to partially resynchronize with slaves of the old master (up to a
given offset).
* The replication protocol was slightly modified so that a new extended
+CONTINUE reply from the master is able to inform the slave of a
replication ID change.
* REPLCONF CAPA is used in order to notify masters that a slave is able
to understand the new +CONTINUE reply.
* The RDB file was extended with an auxiliary field that is able to
select a given DB after loading in the slave, so that the slave can
continue receiving the replication stream from the point it was
disconnected without requiring the master to insert "SELECT" statements.
This is useful in order to guarantee the "same stream" property, because
the slave must be able to accumulate an identical backlog.
* Slave pings to sub-slaves are now sent in a special form, when the
top-level master is disconnected, in order not to interfere with the
replication stream. We just use out of band "\n" bytes as in other parts
of the Redis protocol.
An old design document is available here:
https://gist.github.com/antirez/ae068f95c0d084891305
However the implementation is not identical to the description because
during the work to implement it, different changes were needed in order
to make things work well.
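As a small illustration of the extended +CONTINUE reply mentioned above, here is a minimal, self-contained sketch. It assumes a 40-character replication ID (the code below uses CONFIG_RUN_ID_SIZE for the same purpose); parseContinueReply and REPL_ID_LEN are hypothetical names used only for this example, not the server's actual reply parsing.
```c
#include <stdio.h>
#include <string.h>

#define REPL_ID_LEN 40  /* assumption: same length as CONFIG_RUN_ID_SIZE */

/* Hypothetical helper: inspect the master's reply to PSYNC. Returns 1 if the
 * extended "+CONTINUE <new-replid>" form announced a replication ID change
 * (copying the ID into 'replid', which must hold REPL_ID_LEN+1 bytes),
 * 0 for a plain "+CONTINUE", and -1 if the reply is not a +CONTINUE at all. */
int parseContinueReply(const char *reply, char *replid) {
    if (strncmp(reply, "+CONTINUE", 9) != 0) return -1;
    const char *p = reply + 9;
    if (*p == '\r' || *p == '\n' || *p == '\0') return 0; /* old short form */
    if (*p == ' ') p++;
    if (strlen(p) < REPL_ID_LEN) return -1;               /* malformed reply */
    memcpy(replid, p, REPL_ID_LEN);
    replid[REPL_ID_LEN] = '\0';
    return 1;
}

int main(void) {
    char replid[REPL_ID_LEN+1];
    const char *reply = "+CONTINUE 0123456789012345678901234567890123456789\r\n";
    if (parseContinueReply(reply, replid) == 1)
        printf("master announced new replication ID %s\n", replid);
    return 0;
}
```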
2016-11-09 11:31:06 +01:00
} else if (!strcasecmp(auxkey->ptr,"repl-stream-db")) {
    if (rsi) rsi->repl_stream_db = atoi(auxval->ptr);
2016-11-10 12:35:29 +01:00
} else if (!strcasecmp(auxkey->ptr,"repl-id")) {
    if (rsi && sdslen(auxval->ptr) == CONFIG_RUN_ID_SIZE) {
        memcpy(rsi->repl_id,auxval->ptr,CONFIG_RUN_ID_SIZE+1);
        rsi->repl_id_is_set = 1;
    }
} else if (!strcasecmp(auxkey->ptr,"repl-offset")) {
    if (rsi) rsi->repl_offset = strtoll(auxval->ptr,NULL,10);
2017-11-29 15:09:07 +01:00
} else if (!strcasecmp(auxkey->ptr,"lua")) {
2021-12-21 14:32:42 +08:00
    /* Won't load the script back in memory anymore. */
2019-03-02 21:17:40 +01:00
} else if (!strcasecmp(auxkey->ptr,"redis-ver")) {
2024-04-03 14:52:36 -07:00
    serverLog(LL_NOTICE,"Loading RDB produced by Redis version %s",
        (char*)auxval->ptr);
2024-04-05 21:15:57 -07:00
} else if (!strcasecmp(auxkey->ptr,"valkey-ver")) {
    serverLog(LL_NOTICE,"Loading RDB produced by valkey version %s",
2019-03-04 19:43:00 +08:00
        (char*)auxval->ptr);
2019-03-02 21:17:40 +01:00
} else if (!strcasecmp(auxkey->ptr,"ctime")) {
    time_t age = time(NULL)-strtol(auxval->ptr,NULL,10);
    if (age < 0) age = 0;
    serverLog(LL_NOTICE,"RDB age %ld seconds",
        (unsigned long) age);
} else if (!strcasecmp(auxkey->ptr,"used-mem")) {
    long long usedmem = strtoll(auxval->ptr,NULL,10);
    serverLog(LL_NOTICE,"RDB memory usage when created %.2f Mb",
        (double) usedmem / (1024*1024));
2020-11-05 11:46:16 +02:00
    server.loading_rdb_used_mem = usedmem;
2019-03-02 21:17:40 +01:00
} else if (!strcasecmp(auxkey->ptr,"aof-preamble")) {
    long long haspreamble = strtoll(auxval->ptr,NULL,10);
    if (haspreamble) serverLog(LL_NOTICE,"RDB has an AOF tail");
2022-02-12 00:47:03 +08:00
} else if (!strcasecmp(auxkey->ptr,"aof-base")) {
    long long isbase = strtoll(auxval->ptr,NULL,10);
    if (isbase) serverLog(LL_NOTICE,"RDB is base AOF");
2019-03-02 21:17:40 +01:00
} else if (!strcasecmp(auxkey->ptr,"redis-bits")) {
    /* Just ignored. */
2015-01-08 08:56:35 +01:00
} else {
    /* We ignore fields we don't understand, as by AUX field
     * contract. */
2015-07-27 09:41:48 +02:00
    serverLog(LL_DEBUG,"Unrecognized RDB AUX field: '%s'",
2015-01-21 14:51:42 +01:00
        (char*)auxkey->ptr);
2015-01-08 08:56:35 +01:00
}
2015-01-08 16:23:48 -05:00
decrRefCount(auxkey);
decrRefCount(auxval);
2015-01-08 08:56:35 +01:00
continue; /* Read type again. */
2018-03-16 13:47:10 +01:00
} else if (type == RDB_OPCODE_MODULE_AUX) {
2024-04-09 01:24:03 -07:00
    /* Load module data that is not related to the server key space.
2019-07-21 17:41:03 +03:00
     * Such data can potentially be stored both before and after the
     * RDB keys-values section. */
2019-07-17 17:30:02 +02:00
    uint64_t moduleid = rdbLoadLen(rdb,NULL);
2019-09-05 14:11:37 +03:00
    int when_opcode = rdbLoadLen(rdb,NULL);
2019-07-21 17:41:03 +03:00
    int when = rdbLoadLen(rdb,NULL);
2019-07-17 17:30:02 +02:00
    if (rioGetReadError(rdb)) goto eoferr;
2021-07-06 08:21:17 +03:00
    if (when_opcode != RDB_MODULE_OPCODE_UINT) {
2019-09-05 14:11:37 +03:00
        rdbReportReadError("bad when_opcode");
2021-07-06 08:21:17 +03:00
        goto eoferr;
    }
2018-03-16 13:47:10 +01:00
    moduleType *mt = moduleTypeLookupModuleByID(moduleid);
    char name[10];
    moduleTypeNameByID(name,moduleid);
    if (!rdbCheckMode && mt == NULL) {
        /* Unknown module. */
        serverLog(LL_WARNING,"The RDB file contains AUX module data I can't load: no matching module '%s'", name);
        exit(1);
    } else if (!rdbCheckMode && mt != NULL) {
2019-07-21 17:41:03 +03:00
        if (!mt->aux_load) {
            /* Module doesn't support AUX. */
            serverLog(LL_WARNING,"The RDB file contains module AUX data, but the module '%s' doesn't seem to support it.", name);
            exit(1);
        }
2024-04-05 16:59:55 -07:00
        ValkeyModuleIO io;
2021-06-16 14:45:49 +08:00
        moduleInitIOContext(io,mt,rdb,NULL,-1);
2019-07-21 17:41:03 +03:00
        /* Call the rdb_load method of the module providing the 10 bit
         * encoding version in the lower 10 bits of the module ID. */
2021-10-31 15:59:48 +02:00
        int rc = mt->aux_load(&io,moduleid&1023, when);
2019-07-21 17:41:03 +03:00
        if (io.ctx) {
            moduleFreeContext(io.ctx);
            zfree(io.ctx);
        }
2024-04-05 16:59:55 -07:00
        if (rc != VALKEYMODULE_OK || io.error) {
2021-10-31 15:59:48 +02:00
            moduleTypeNameByID(name,moduleid);
            serverLog(LL_WARNING,"The RDB file contains module AUX data for the module type '%s', that the responsible module is not able to load. Check for modules log above for additional clues.", name);
            goto eoferr;
        }
2019-07-21 17:41:03 +03:00
        uint64_t eof = rdbLoadLen(rdb,NULL);
        if (eof != RDB_MODULE_OPCODE_EOF) {
            serverLog(LL_WARNING,"The RDB file contains module AUX data for the module '%s' that is not terminated by the proper module value EOF marker", name);
2021-07-06 08:21:17 +03:00
            goto eoferr;
2019-07-21 17:41:03 +03:00
        }
        continue;
2018-03-16 13:47:10 +01:00
    } else {
        /* RDB check mode. */
        robj *aux = rdbLoadCheckModuleValue(rdb,name);
        decrRefCount(aux);
2019-07-19 11:12:39 +02:00
        continue; /* Read next opcode. */
2018-03-16 13:47:10 +01:00
    }
2022-08-15 21:41:44 +03:00
} else if (type == RDB_OPCODE_FUNCTION_PRE_GA) {
    rdbReportCorruptRDB("Pre-release function format not supported.");
    exit(1);
} else if (type == RDB_OPCODE_FUNCTION2) {
2021-12-26 09:03:37 +02:00
    sds err = NULL;
2022-08-15 21:41:44 +03:00
    if (rdbFunctionLoad(rdb, rdbver, rdb_loading_ctx->functions_lib_ctx, rdbflags, &err) != C_OK) {
Redis Function Libraries (#10004)
# Redis Function Libraries
This PR implements Redis Function Libraries as described in: https://github.com/redis/redis/issues/9906.
The purpose of libraries is to provide better code sharing between functions by allowing the creation of multiple
functions in a single command. Functions that were created together can safely share code between
each other without worrying about compatibility issues and versioning.
Creating a new library is done using the 'FUNCTION LOAD' command (the full API is described below).
This PR introduces a new struct called libraryInfo, which holds information about a library:
* name - name of the library
* engine - engine used to create the library
* code - library code
* description - library description
* functions - the functions exposed by the library
When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo.
Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo.
As a result, the engine will create one or more functions by calling 'libraryCreateFunction'.
The new function will be added to the newly created libraryInfo. So far everything is happening
locally on the libraryInfo so it is easy to abort the operation (in case of an error) by simply
freeing the libraryInfo. After the library info is fully constructed we start the joining phase by
which we will join the new library to the other libraries that currently exist on Redis.
The joining phase makes sure there is no function collision and adds the library to the
librariesCtx (renamed from functionCtx). LibrariesCtx is used all around the code in the exact
same way as functionCtx was used (with respect to RDB loading, replication, ...).
The only difference is that apart from function dictionary (maps function name to functionInfo
object), the librariesCtx contains also a libraries dictionary that maps library name to libraryInfo object.
## New API
### FUNCTION LOAD
`FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
Create a new library with the given parameters:
* ENGINE - Engine name to use to create the library.
* LIBRARY NAME - The new library name.
* REPLACE - If the library already exists, replace it.
* DESCRIPTION - Library description.
* CODE - Library code.
Return "OK" on success, or error on the following cases:
* Library name already taken and REPLACE was not used
* Name collision with another existing library (even if REPLACE was used)
* Library registration failed by the engine (usually compilation error)
## Changed API
### FUNCTION LIST
`FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
Command was modified to also allow getting libraries code (so `FUNCTION INFO` command is no longer
needed and removed). In addition the command gets an optional argument, `LIBRARYNAME`, which allows you to
only get libraries that match the given `LIBRARYNAME` pattern. By default, it returns all libraries.
### INFO MEMORY
Added number of libraries to `INFO MEMORY`
### Commands flags
`DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands
as commands that add new data to the dataset (functions are data) and so we want to disallow
running those commands on OOM.
## Removed API
* FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
* FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899
## Lua engine changes
When the Lua engine gets the code given to the `FUNCTION LOAD` command, it immediately runs it; we call
this run the loading run. The loading run is not a usual script run: it is not possible to invoke any
Redis command from within the load run.
Instead there is a new API provided by the `library` object. The new APIs are:
* `redis.log` - behave the same as `redis.log`
* `redis.register_function` - register a new function to the library
The loading run purpose is to register functions using the new `redis.register_function` API.
Any attempt to use any other API will result in an error. In addition, the load run has a time
limit of 500ms; an error is raised on timeout and the entire operation is aborted.
### `redis.register_function`
`redis.register_function(<function_name>, <callback>, [<description>])`
This new API allows users to register a new function that will be linked to the newly created library.
This API can only be called during the load run (see definition above). Any attempt to use it outside
of the load run will result in an error.
The parameters pass to the API are:
* function_name - Function name (must be a Lua string)
* callback - Lua function object that will be called when the function is invoked using fcall/fcall_ro
* description - Function description, optional (must be a Lua string).
### Example
The following example creates a library called `lib` with 2 functions, `f1` and `f2`, returning 1 and 2 respectively:
```
local function f1(keys, args)
return 1
end
local function f2(keys, args)
return 2
end
redis.register_function('f1', f1)
redis.register_function('f2', f2)
```
Notice: Unlike `eval`, functions inside a library get the KEYS and ARGV as arguments to the
functions and not as globals.
### Technical Details
On the load run we only want the user to be able to call a whitelist of APIs. This way, in
the future, if new APIs are added, they will not be available to the load run
unless specifically added to this whitelist. We put the whitelist on the `library` object and
make sure the `library` object is only available to the load run by using [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API. This API allows us to set
the `globals` of a function (and all the function it creates). Before starting the load run we
create a new fresh Lua table (call it `g`) that only contains the `library` API (we make sure
to set global protection on this table just like the general global protection already exists
today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv)
to set `g` as the global table of the load run. After the load run finishes we update `g`
metatable and set `__index` and `__newindex` functions to be `_G` (Lua default globals),
we also pop out the `library` object as we do not need it anymore.
This way, any function that was created on the load run (and will be invoked using `fcall`) will
see the default globals as it expects to see them and will not have the `library` API anymore.
An important outcome of this new approach is that now we can achieve a distinct global table
for each library (it is not yet like that but it is very easy to achieve it now). In the future we can
decide to remove global protection because globals on different libraries will not collide, or we
can choose to give different APIs to different libraries based on some configuration or input.
Notice that this technique was meant to prevent errors and was not meant to prevent a malicious
user from exploiting it. For example, the load run can still save the `library` object in some local
variable and then use it in the `fcall` context. To prevent such a malicious use, the C code also makes
sure it is running in the right context and raises an error if not.
2022-01-06 13:39:38 +02:00
serverLog ( LL_WARNING , " Failed loading library, %s " , err ) ;
2021-12-26 09:03:37 +02:00
sdsfree ( err ) ;
2021-10-07 14:41:26 +03:00
goto eoferr ;
}
continue ;
2010-06-22 00:07:48 +02:00
}
2015-01-07 15:25:58 +01:00
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time, in order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also instead of rehashing a single dictionary, cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using same binary index that is used for random key selection, this index allows us to find a slot for a specific key index. For example if there are 10 keys in the slot 0, then we can quickly find a slot that contains 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to not only save the position within the dictionary but also the slot id. In this change we append the slot id into the LSB of the cursor so it can be passed around between client and server. This has an interesting side effect: now you'll be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - During command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we are relying on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly, while loading RDB, it's not enough to know total number of keys (of course we could approximate number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into RDB that contains number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places; in order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). The same is kept for O(1) expires computation as well.
## Performance
This change improves SET performance in cluster mode by ~5%, most of the gains come from us not having to maintain linked lists for keys in slot, non-cluster mode has same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead for finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information.
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
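To make the "slot id in the LSB of the cursor" idea above concrete, here is an illustrative sketch only; packCursor/unpackCursor and the 14-bit width are assumptions for the example (16384 slots fit in 14 bits), not the actual SCAN implementation.
```c
#include <stdint.h>
#include <stdio.h>

#define SLOT_BITS 14                        /* assumption: 16384 slots -> 14 bits */
#define SLOT_MASK ((1ULL << SLOT_BITS) - 1)

/* Pack a per-dictionary cursor and a slot id into one 64-bit SCAN cursor,
 * with the slot id in the least significant bits, as described above. */
static uint64_t packCursor(uint64_t dict_cursor, unsigned slot) {
    return (dict_cursor << SLOT_BITS) | (slot & SLOT_MASK);
}

static void unpackCursor(uint64_t cursor, uint64_t *dict_cursor, unsigned *slot) {
    *slot = (unsigned)(cursor & SLOT_MASK);
    *dict_cursor = cursor >> SLOT_BITS;
}

int main(void) {
    uint64_t dc; unsigned slot;
    uint64_t c = packCursor(12345, 100);    /* resume at local cursor 12345, slot 100 */
    unpackCursor(c, &dc, &slot);
    printf("cursor=%llu dict_cursor=%llu slot=%u\n",
           (unsigned long long)c, (unsigned long long)dc, slot);
    return 0;
}
```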
2023-10-14 23:58:26 -07:00
/* If there is no slot info, it means that it's either not cluster mode or we are trying to load a legacy RDB file.
 * In this case we want to estimate the number of keys per slot and resize accordingly. */
if (should_expand_db) {
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it to a new data structure, `kvstore`. i.e.
it's a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change the reply of DEBUG
HTSTATS changed, in case no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
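To visualize the "array of dictionaries" shape described above, here is a deliberately tiny sketch. The struct and the miniKvstore* names are hypothetical, and the placeholder dict type stands in for the server's real dict; the real kvstore additionally tracks rehashing, per-dict metadata, and lazy release.
```c
#include <stdlib.h>

typedef struct dict { int dummy; } dict;     /* placeholder for the real dict type */

typedef struct miniKvstore {
    dict **dicts;                  /* one dict per slot, allocated lazily      */
    int num_dicts;                 /* 16384 in cluster mode, 1 otherwise       */
    unsigned long long key_count;  /* kept up to date so DBSIZE stays O(1)     */
} miniKvstore;

miniKvstore *miniKvstoreCreate(int num_dicts) {
    miniKvstore *kvs = malloc(sizeof(*kvs));
    kvs->dicts = calloc(num_dicts, sizeof(dict*));
    kvs->num_dicts = num_dicts;
    kvs->key_count = 0;
    return kvs;
}

/* Return the dict for a given slot index, allocating it on first use,
 * mirroring the lazy-allocation behavior mentioned in note 3 above. */
dict *miniKvstoreGetDict(miniKvstore *kvs, int didx) {
    if (!kvs->dicts[didx]) kvs->dicts[didx] = calloc(1, sizeof(dict));
    return kvs->dicts[didx];
}
```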
2024-02-05 22:21:35 +07:00
dbExpand(db, db_size, 0);
2024-02-12 21:55:37 +02:00
dbExpandExpires(db, expires_size, 0);
should_expand_db = 0;
}
2010-06-22 00:07:48 +02:00
/* Read key */
2020-04-09 10:24:10 +02:00
if ((key = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL)) == NULL)
    goto eoferr;
2010-06-22 00:07:48 +02:00
/* Read value */
2021-08-06 03:42:20 +08:00
val = rdbLoadObject(type,rdb,key,db->id,&error);
2019-10-24 09:45:25 +03:00
2012-01-13 17:49:16 -08:00
/* Check if the key already expired. This function is used when loading
 * an RDB file from disk, either at startup, or when an RDB was
 * received from the master. In the latter case, the master is
 * responsible for key expiry. If we would expire keys here, the
2020-04-09 11:09:40 +02:00
 * snapshot taken by the master may not be reflected on the slave.
2022-02-12 00:47:03 +08:00
 * Similarly, if the base AOF is RDB format, we want to load all
 * the keys as they are, since the log of operations in the incr AOF
 * is assumed to work in the exact keyspace state. */
2021-08-06 03:42:20 +08:00
if (val == NULL) {
    /* Since we used to have a bug that could lead to empty keys
     * (See #8453), we rather not fail when an empty key is encountered
     * in an RDB file, instead we will silently discard it and
     * continue loading. */
    if (error == RDB_LOAD_ERR_EMPTY_KEY) {
        if (empty_keys_skipped++ < 10)
2023-02-19 22:33:19 +08:00
            serverLog(LL_NOTICE, "rdbLoadObject skipping empty key: %s", key);
2021-08-06 03:42:20 +08:00
        sdsfree(key);
    } else {
        sdsfree(key);
        goto eoferr;
    }
} else if (iAmMaster() &&
2020-04-09 11:09:40 +02:00
           !(rdbflags&RDBFLAGS_AOF_PREAMBLE) &&
           expiretime != -1 && expiretime < now)
{
2021-09-13 15:39:11 +08:00
    if (rdbflags & RDBFLAGS_FEED_REPL) {
        /* Caller should have created replication backlog,
         * and now this path only works when rebooting,
         * so we don't have replicas yet. */
        serverAssert(server.repl_backlog != NULL && listLength(server.slaves) == 0);
        robj keyobj;
        initStaticStringObject(keyobj,key);
        robj *argv[2];
        argv[0] = server.lazyfree_lazy_expire ? shared.unlink : shared.del;
        argv[1] = &keyobj;
2024-05-06 21:40:28 -07:00
        replicationFeedSlaves(dbid,argv,2);
2021-09-13 15:39:11 +08:00
    }
2020-04-09 10:24:10 +02:00
    sdsfree(key);
2010-06-22 00:07:48 +02:00
    decrRefCount(val);
2021-09-13 15:39:11 +08:00
    server.rdb_last_load_keys_expired++;
2018-03-15 16:24:53 +01:00
} else {
2020-04-09 12:02:27 +02:00
    robj keyobj;
2020-07-23 12:38:51 +03:00
    initStaticStringObject(keyobj,key);
2020-04-09 12:02:27 +02:00
2018-03-15 16:24:53 +01:00
    /* Add the new object in the hash table */
2020-04-09 16:21:48 +02:00
    int added = dbAddRDBLoad(db,key,val);
2021-09-13 15:39:11 +08:00
    server.rdb_last_load_keys_loaded++;
2020-04-09 16:21:48 +02:00
    if (!added) {
2020-04-09 12:02:27 +02:00
        if (rdbflags & RDBFLAGS_ALLOW_DUP) {
            /* This flag is useful for DEBUG RELOAD special modes.
             * When it's set we allow new keys to replace the current
             * keys with the same name. */
            dbSyncDelete(db,&keyobj);
2020-04-09 16:21:48 +02:00
            dbAddRDBLoad(db,key,val);
2020-04-09 12:02:27 +02:00
        } else {
            serverLog(LL_WARNING,
                "RDB has duplicated key '%s' in DB %d", key, db->id);
            serverPanic("Duplicated key found in RDB file");
        }
2020-04-09 10:24:10 +02:00
    }
2011-06-14 15:34:27 +02:00
2018-03-15 16:24:53 +01:00
    /* Set the expire time if needed */
2020-04-09 10:24:10 +02:00
    if (expiretime != -1) {
        setExpire(NULL,db,&keyobj,expiretime);
    }
2019-03-02 21:17:40 +01:00
2018-06-20 14:40:18 +07:00
    /* Set usage information (for eviction). */
2019-11-10 09:04:39 +02:00
    objectSetLRUOrLFU(val,lfu_freq,lru_idle,lru_clock,1000);
2020-07-23 12:38:51 +03:00
    /* call key space notification on key loaded for modules only */
    moduleNotifyKeyspaceEvent(NOTIFY_LOADED, "loaded", &keyobj, db->id);
2018-03-15 16:24:53 +01:00
}
2020-04-09 10:24:10 +02:00
/* Loading the database more slowly is useful in order to test
 * certain edge cases. */
2020-09-03 08:47:29 +03:00
if (server.key_load_delay)
    debugDelay(server.key_load_delay);
2010-06-22 00:07:48 +02:00
2018-03-15 16:24:53 +01:00
/* Reset the state that is key-specific and is populated by
 * opcodes before the key, so that we start from scratch again. */
expiretime = -1;
lfu_freq = -1;
lru_idle = -1;
2010-06-22 00:07:48 +02:00
}
2012-04-09 22:40:41 +02:00
/* Verify the checksum if RDB version is >= 5 */
2018-05-08 19:22:13 +08:00
if (rdbver >= 5) {
2016-08-11 15:27:23 +02:00
    uint64_t cksum, expected = rdb->cksum;
2012-04-09 22:40:41 +02:00
2016-08-11 15:27:23 +02:00
    if (rioRead(rdb,&cksum,8) == 0) goto eoferr;
2020-08-14 16:05:34 +03:00
    if (server.rdb_checksum && !server.skip_checksum_validation) {
2018-05-08 19:22:13 +08:00
        memrev64ifbe(&cksum);
        if (cksum == 0) {
2023-02-19 22:33:19 +08:00
            serverLog(LL_NOTICE,"RDB file was saved with checksum disabled: no check performed.");
2018-05-08 19:22:13 +08:00
        } else if (cksum != expected) {
2020-04-24 16:59:24 -07:00
            serverLog(LL_WARNING,"Wrong RDB checksum expected: (%llx) but "
2020-05-02 00:02:18 +02:00
                "got (%llx). Aborting now.",
                (unsigned long long) expected,
                (unsigned long long) cksum);
2020-11-02 09:35:37 +02:00
            rdbReportCorruptRDB("RDB CRC error");
Sanitize dump payload: ziplist, listpack, zipmap, intset, stream
When loading an encoded payload we will at least do a shallow validation to
check that the size that's encoded in the payload matches the size of the
allocation.
This lets us later use this encoded size to make sure the various offsets
inside encoded payload don't reach outside the allocation, if they do, we'll
assert/panic, but at least we won't segfault or smear memory.
We can also do 'deep' validation which runs on all the records of the encoded
payload and validates that they don't contain invalid offsets. This lets us
detect corruptions early and reject a RESTORE command rather than accepting
it and asserting (crashing) later when accessing that payload via some command.
configuration:
- adding ACL flag skip-sanitize-payload
- adding config sanitize-dump-payload [yes/no/clients]
For now, we don't have a good way to ensure MIGRATE in cluster resharding isn't
being slowed down by this sanitization, so I'm setting the default value to `no`,
but later on it should be set to `clients` by default.
changes:
- changing rdbReportError not to `exit` in RESTORE command
- adding a new stat to be able to later check if cluster MIGRATE isn't being
slowed down by sanitization.
2020-08-13 16:41:05 +03:00
return C_ERR;
2018-05-08 19:22:13 +08:00
}
2012-04-09 22:40:41 +02:00
}
}
2021-08-06 03:42:20 +08:00
if (empty_keys_skipped) {
2023-02-19 22:33:19 +08:00
    serverLog(LL_NOTICE,
2021-08-06 03:42:20 +08:00
        "Done loading RDB, keys loaded: %lld, keys expired: %lld, empty keys skipped: %lld.",
2021-09-13 15:39:11 +08:00
        server.rdb_last_load_keys_loaded, server.rdb_last_load_keys_expired, empty_keys_skipped);
2021-08-06 03:42:20 +08:00
} else {
2021-09-13 15:39:11 +08:00
    serverLog(LL_NOTICE,
2021-08-06 03:42:20 +08:00
        "Done loading RDB, keys loaded: %lld, keys expired: %lld.",
2021-09-13 15:39:11 +08:00
        server.rdb_last_load_keys_loaded, server.rdb_last_load_keys_expired);
2021-08-06 03:42:20 +08:00
}
2015-07-26 23:17:55 +02:00
return C_OK;
2010-06-22 00:07:48 +02:00
2019-07-18 12:37:55 +02:00
/* Unexpected end of file is handled here calling rdbReportReadError():
2024-04-09 01:24:03 -07:00
 * this will in turn either abort the server in most cases, or if we are loading
2019-07-18 12:37:55 +02:00
 * the RDB file from a socket during initial SYNC (diskless replica mode),
 * we'll report the error to the caller, so that we can retry. */
eoferr:
    serverLog(LL_WARNING,
        "Short read or OOM loading DB. Unrecoverable error, aborting now.");
2019-07-16 11:00:34 +03:00
    rdbReportReadError("Unexpected EOF reading RDB file");
    return C_ERR;
2010-06-22 00:07:48 +02:00
}
2016-08-11 15:27:23 +02:00
/* Like rdbLoadRio() but takes a filename instead of a rio stream. The
 * filename is opened for reading and a rio stream object is created in order
 * to do the actual loading. Moreover the ETA displayed in the INFO
 * output is initialized and finalized.
 *
2022-07-07 11:31:59 +08:00
 * If you pass an 'rsi' structure initialized with RDB_SAVE_INFO_INIT, the
2021-04-21 18:43:06 +08:00
 * loading code will fill the information fields in the structure. */
2019-10-29 17:59:09 +02:00
int rdbLoad(char *filename, rdbSaveInfo *rsi, int rdbflags) {
2016-08-11 15:27:23 +02:00
    FILE *fp;
    rio rdb;
    int retval;
Implement Multi Part AOF mechanism to avoid AOFRW overheads. (#9788)
Implement Multi-Part AOF mechanism to avoid overheads during AOFRW.
Introducing a folder with multiple AOF files tracked by a manifest file.
The main issues with the original AOFRW mechanism are:
* buffering of commands that are processed during rewrite (consuming a lot of RAM)
* freezes of the main process when the AOFRW completes to drain the remaining part of the buffer and fsync it.
* double disk IO for the data that arrives during AOFRW (had to be written to both the old and new AOF files)
The main modifications of this PR:
1. Remove the AOF rewrite buffer and related code.
2. Divide the AOF into multiple files. They are classified into two types: one is the `BASE` type,
which represents the full amount of data (maybe AOF or RDB format) after each AOFRW; there is only
one `BASE` file at most. The second is the `INCR` type, of which there may be more than one. They represent the
incremental commands since the last AOFRW.
3. Use a AOF manifest file to record and manage these AOF files mentioned above.
4. The original configuration of `appendfilename` will be the base part of the new file name, for example:
`appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`
5. Add manifest-related TCL tests, and modified some existing tests that depend on the `appendfilename`
6. Remove the `aof_rewrite_buffer_length` field in info.
7. Add `aof-disable-auto-gc` configuration. By default we're automatically deleting HISTORY type AOFs.
It also gives users the opportunity to preserve the history AOFs. just for testing use now.
8. Add an AOFRW limiting measure. When the number of AOFRW failures reaches the threshold (3 times now),
we will delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it will be
delayed by 2 minutes. The next is 4, 8, 16, the maximum delay is 60 minutes (1 hour). During the limit
period, we can still use the 'bgrewriteaof' command to execute AOFRW immediately.
9. Support upgrading (loading) data from older Redis versions.
10. Add `appenddirname` configuration, as the directory name of the append only files. All AOF files and
manifest file will be placed in this directory.
11. Only the last AOF file (BASE or INCR) can be truncated. Otherwise redis will exit even if
`aof-load-truncated` is enabled.
Co-authored-by: Oran Agra <oran@redislabs.com>
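As a quick illustration of the naming scheme in point 4 above, the sketch below rebuilds the example file names from the configured appendfilename, a sequence number, and a type tag. buildAofFileName is a hypothetical helper used only for this example; the real naming and manifest handling live in aof.c.
```c
#include <stdio.h>

/* Illustrative only: compose "<appendfilename>.<seq>.<type>.<ext>" names such
 * as "appendonly.aof.1.base.rdb" and "appendonly.aof.2.incr.aof". */
static void buildAofFileName(char *buf, size_t len, const char *basename,
                             long long seq, const char *type, const char *ext) {
    snprintf(buf, len, "%s.%lld.%s.%s", basename, seq, type, ext);
}

int main(void) {
    char name[256];
    buildAofFileName(name, sizeof(name), "appendonly.aof", 1, "base", "rdb");
    printf("%s\n", name);   /* appendonly.aof.1.base.rdb */
    buildAofFileName(name, sizeof(name), "appendonly.aof", 2, "incr", "aof");
    printf("%s\n", name);   /* appendonly.aof.2.incr.aof */
    return 0;
}
```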
2022-01-04 01:14:13 +08:00
struct stat sb;
Reclaim page cache of RDB file (#11248)
# Background
The RDB file is usually generated and used once and seldom used again, but the content would reside in the page cache until the OS evicts it. A potential problem is that once free memory is exhausted, the OS has to reclaim some memory from the page cache or swap anonymous pages out, which may result in jitter for the Redis service.
Consider a concrete scenario: a high-capacity machine hosts many Redis instances, and we're upgrading them together. The page cache in the host machine increases as RDBs are generated. Once free memory drops below the low watermark (which is more likely to happen in older Linux kernels like 3.10, before [watermark_scale_factor](https://lore.kernel.org/lkml/1455813719-2395-1-git-send-email-hannes@cmpxchg.org/) was introduced, the `low watermark` is linear to the `min watermark`, and there is not much buffer space for `kswapd` to wake up and reclaim memory), a `direct reclaim` happens, which means the process stalls waiting for memory allocation.
# What the PR does
The PR introduces a capability to reclaim the cache when the RDB is operated on. Generally there are two cases: reading and writing the RDB. For reads it's a little messy to address incremental reclaim, so the reclaim is done in one go in the background after the load is finished, to avoid blocking the work thread. For writes, incremental reclaim amortizes the work of reclaiming, so there is no need to put it into the background, and the peak watermark of the cache can be reduced in this way.
Two cases are addressed specially, replication and restart, for both of which the cache is leveraged to speed up processing, so the reclaim is postponed to the right time. To do this, a flag is added to `rdbSave` and `rdbLoad` to control whether the cache needs to be kept, with the default value false.
# Something deserve noting
1. Though `posix_fadvise` is a POSIX standard, only a few platforms support it, e.g. Linux, FreeBSD 10.0.
2. In Linux `posix_fadvise` only takes effect on written-back pages, so a `sync` (or `fsync`, `fdatasync`) is needed to flush the dirty pages before `posix_fadvise` if we reclaim the write cache.
# About test
A unit test is added to verify the effect of `posix_fadvise`.
In the integration tests the overall cache increase is checked, as well as the cache backed by the RDB, as a specific TCL test is executed in an isolated GitHub Actions job.
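A minimal sketch of the reclaim idea, assuming a POSIX system where posix_fadvise() is available; reclaimFileCache is a hypothetical name for this example, not the helper the PR actually adds. It flushes dirty pages first (on Linux only clean, written-back pages are dropped) and then advises the kernel that the file's cached pages are no longer needed.
```c
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <unistd.h>

/* Drop the page cache backing 'fd'. Returns 0 on success, non-zero on error. */
static int reclaimFileCache(int fd) {
    if (fdatasync(fd) == -1) return -1;                    /* flush dirty pages first */
    return posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);   /* len 0 = whole file */
}
```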
2023-02-12 15:23:29 +08:00
int rdb_fd;
2016-08-11 15:27:23 +02:00
2022-07-25 10:09:58 +03:00
fp = fopen(filename, "r");
if (fp == NULL) {
2022-08-04 15:47:37 +08:00
    if (errno == ENOENT) return RDB_NOT_EXIST;
2022-07-25 10:09:58 +03:00
    serverLog(LL_WARNING, "Fatal error: can't open the RDB file %s for reading: %s", filename, strerror(errno));
2022-08-04 15:47:37 +08:00
    return RDB_FAILED;
2022-07-25 10:09:58 +03:00
}
if (fstat(fileno(fp), &sb) == -1)
    sb.st_size = 0;
startLoadingFile(sb.st_size, filename, rdbflags);
2016-08-11 15:27:23 +02:00
rioInitWithFile(&rdb, fp);
2021-10-07 14:41:26 +03:00
retval = rdbLoadRio(&rdb, rdbflags, rsi);
2016-08-11 15:27:23 +02:00
fclose(fp);
2019-10-29 17:59:09 +02:00
stopLoading(retval == C_OK);
/* Reclaim the cache backed by rdb */
if (retval == C_OK && !(rdbflags & RDBFLAGS_KEEP_CACHE)) {
/* TODO: maybe we could combine the fopen and open into one in the future */
Add RM_RdbLoad and RM_RdbSave module API functions (#11852)
Add `RM_RdbLoad()` and `RM_RdbSave()` to load/save RDB files from the module API.
In our use case, we have our clustering implementation as a module. As part of this
implementation, the module needs to trigger RDB save operation at specific points.
Also, this module delivers RDB files to other nodes (not using Redis' replication).
When a node receives an RDB file, it should be able to load the RDB. Currently,
there is no module API to save/load RDB files.
This PR adds four new APIs:
```c
RedisModuleRdbStream *RM_RdbStreamCreateFromFile(const char *filename);
void RM_RdbStreamFree(RedisModuleRdbStream *stream);
int RM_RdbLoad(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags);
int RM_RdbSave(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags);
```
The first step is to create a `RedisModuleRdbStream` object. This PR provides a function to
create RedisModuleRdbStream from the filename. (You can load/save RDB with the filename).
In the future, this API can be extended if needed:
e.g., `RM_RdbStreamCreateFromFd()`, `RM_RdbStreamCreateFromSocket()` to save/load
RDB from an `fd` or a `socket`.
Usage:
```c
/* Save RDB */
RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile("example.rdb");
RedisModule_RdbSave(ctx, stream, 0);
RedisModule_RdbStreamFree(stream);
/* Load RDB */
RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile("example.rdb");
RedisModule_RdbLoad(ctx, stream, 0);
RedisModule_RdbStreamFree(stream);
```
2023-04-09 12:07:32 +03:00
rdb_fd = open(filename, O_RDONLY);
2023-11-23 03:14:17 -05:00
if (rdb_fd >= 0) bioCreateCloseJob(rdb_fd, 0, 1);
}
2022-07-26 20:13:13 +08:00
return (retval == C_OK) ? RDB_OK : RDB_FAILED;
2016-08-11 15:27:23 +02:00
}
2014-10-14 10:11:26 +02:00
/* A background saving child (BGSAVE) terminated its work. Handle this.
* This function covers the case of actual BGSAVEs. */
2023-12-07 17:03:51 -08:00
static void backgroundSaveDoneHandlerDisk(int exitcode, int bysignal, time_t save_end) {
2010-06-22 00:07:48 +02:00
    if (!bysignal && exitcode == 0) {
2015-07-27 09:41:48 +02:00
        serverLog(LL_NOTICE,
2010-06-22 00:07:48 +02:00
            "Background saving terminated with success");
2010-08-30 10:32:32 +02:00
        server.dirty = server.dirty - server.dirty_before_bgsave;
2023-12-07 17:03:51 -08:00
        server.lastsave = save_end;
2015-07-26 23:17:55 +02:00
        server.lastbgsave_status = C_OK;
2010-06-22 00:07:48 +02:00
    } else if (!bysignal && exitcode != 0) {
2015-07-27 09:41:48 +02:00
        serverLog(LL_WARNING, "Background saving error");
2015-07-26 23:17:55 +02:00
        server.lastbgsave_status = C_ERR;
2010-06-22 00:07:48 +02:00
    } else {
2014-07-01 17:19:08 +02:00
        mstime_t latency;
2015-07-27 09:41:48 +02:00
        serverLog(LL_WARNING,
2011-01-07 18:15:14 +01:00
            "Background saving terminated by signal %d", bysignal);
2014-07-01 17:19:08 +02:00
        latencyStartMonitor(latency);
Refactory fork child related infra, Unify child pid
This is a refactoring commit; it isn't supposed to have any actual impact.
it does the following:
- keep just one server struct fork child pid variable instead of 3
- have one server struct variable indicating the purpose of the current fork
child.
- redisFork is now responsible of updating the server struct with the pid,
which means it can be the one that calls updateDictResizePolicy
- move child info pipe handling into redisFork instead of having them
repeated outside
- there are two classes of fork purposes, mutually exclusive group (AOF, RDB,
Module), and one that can create several forks to coexist in parallel (LDB,
but maybe Modules some day too, Module API allows for that).
- minor fix to killRDBChild:
unlike killAppendOnlyChild and TerminateModuleForkChild, the killRDBChild
doesn't clear the pid variable or call wait4, so checkChildrenDone does
the cleanup for it.
This commit removes the explicit calls to rdbRemoveTempFile, closeChildInfoPipe,
updateDictResizePolicy, which didn't do any harm, but where unnecessary.
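As a rough sketch of the resulting shape (illustrative only; the struct and enum names here are assumptions made for the example, though CHILD_TYPE_RDB and friends do appear in the code further down), the separate per-purpose pid fields collapse into a single pid plus a purpose tag:
#include <sys/types.h>

/* Illustrative only: one pid plus a purpose tag, instead of separate
 * rdb_child_pid / aof_child_pid / module_child_pid fields. */
typedef enum exampleChildType {
    CHILD_TYPE_NONE = 0,
    CHILD_TYPE_RDB,
    CHILD_TYPE_AOF,
    CHILD_TYPE_MODULE,
    CHILD_TYPE_LDB
} exampleChildType;

struct exampleForkState {
    pid_t child_pid;             /* -1 when no exclusive fork child is active */
    exampleChildType child_type; /* purpose of the currently active child */
};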
2020-12-16 15:14:04 +02:00
rdbRemoveTempFile(server.child_pid, 0);
2014-07-01 17:19:08 +02:00
latencyEndMonitor(latency);
latencyAddSampleIfNeeded("rdb-unlink-temp-file", latency);
2013-01-14 10:29:14 +01:00
/* SIGUSR1 is whitelisted, so we have a way to kill a child without
Squash merging 125 typo/grammar/comment/doc PRs (#7773)
2020-09-10 13:43:38 +03:00
 * triggering an error condition. */
2013-01-14 10:29:14 +01:00
if (bysignal != SIGUSR1)
2015-07-26 23:17:55 +02:00
server.lastbgsave_status = C_ERR;
2010-06-22 00:07:48 +02:00
}
2014-10-14 10:11:26 +02:00
}
/* A background saving child (BGSAVE) terminated its work. Handle this.
2018-03-16 09:59:17 +01:00
 * This function covers the case of RDB -> Slaves socket transfers for
2014-10-15 11:35:00 +02:00
 * diskless replication. */
2020-10-23 20:26:30 +08:00
static void backgroundSaveDoneHandlerSocket(int exitcode, int bysignal) {
2014-10-14 10:11:26 +02:00
if (!bysignal && exitcode == 0) {
2015-07-27 09:41:48 +02:00
serverLog(LL_NOTICE,
2014-10-14 10:11:26 +02:00
"Background RDB transfer terminated with success");
} else if (!bysignal && exitcode != 0) {
2015-07-27 09:41:48 +02:00
serverLog(LL_WARNING, "Background transfer error");
2014-10-14 10:11:26 +02:00
} else {
2015-07-27 09:41:48 +02:00
serverLog(LL_WARNING,
2014-10-14 10:11:26 +02:00
"Background transfer terminated by signal %d", bysignal);
}
if diskless repl child is killed, make sure to reap the pid (#7742)
Starting with Redis 6.0 and the changes we made to make the diskless master
suitable for TLS, I made the master avoid reaping (wait3) the pid of the
child until we know all replicas are done reading their rdb.
I did that in order to avoid a state where rdb_child_pid is -1 but
we don't yet want to start another fork (still busy serving that data to
replicas).
It turns out that the solution used so far was problematic when the
fork child was killed (e.g. by the kernel OOM killer): in that
case there's a chance that we have currently disabled the read event on the
rdb pipe, since we're waiting for a replica to become writable again,
and in that scenario the master would never realize the child
exited, and the replica would remain hung too.
Note that there's no mechanism to detect a hung replica while it's in
rdb transfer state.
The solution here is to add another pipe which is used by the parent to
tell the child it is safe to exit. This means that when the child exits,
for whatever reason, it is safe to reap it. (A minimal sketch of this
handshake follows below.)
Besides that, I'm re-introducing an adjustment to REPLCONF ACK which was
part of #6271 (Accelerate diskless master connections) but was dropped
when that PR was rebased after the TLS fork/pipe changes (5a47794).
Now that RdbPipeCleanup no longer calls checkChildrenDone, and the ACK
has a chance to detect that the child exited, it should be the one to call
it, so that we don't have to wait for cron (server.hz) to do that.
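A minimal, standalone sketch of that handshake (simplified; the real code wires this into the RDB fork paths as safe_to_exit_pipe / rdb_child_exit_pipe further down): the child blocks on a read of a pipe whose write end only the parent holds, so whenever the parent closes it, deliberately or because it died, the read returns and the child may exit, and the parent can always reap it.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int exit_pipe[2];
    if (pipe(exit_pipe) == -1) return 1;

    pid_t pid = fork();
    if (pid < 0) return 1;
    if (pid == 0) {                      /* child */
        close(exit_pipe[1]);             /* keep only the read end */
        char c;
        (void)read(exit_pipe[0], &c, 1); /* blocks until the parent closes its write end */
        _exit(0);                        /* now it is safe to exit */
    }
    /* parent */
    close(exit_pipe[0]);                 /* keep only the write end */
    /* ... stream the RDB payload to replicas here ... */
    close(exit_pipe[1]);                 /* tell the child it may exit */
    waitpid(pid, NULL, 0);               /* and reap it */
    return 0;
}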
2020-09-06 16:43:57 +03:00
if (server.rdb_child_exit_pipe != -1)
close(server.rdb_child_exit_pipe);
2021-05-26 14:51:53 +03:00
aeDeleteFileEvent(server.el, server.rdb_pipe_read, AE_READABLE);
if diskless repl child is killed, make sure to reap the pid (#7742)
2020-09-06 16:43:57 +03:00
close(server.rdb_pipe_read);
server.rdb_child_exit_pipe = -1;
server.rdb_pipe_read = -1;
zfree(server.rdb_pipe_conns);
server.rdb_pipe_conns = NULL;
server.rdb_pipe_numconns = 0;
server.rdb_pipe_numconns_writing = 0;
zfree(server.rdb_pipe_buff);
server.rdb_pipe_buff = NULL;
server.rdb_pipe_bufflen = 0;
2014-10-14 10:11:26 +02:00
}
/* When a background RDB saving/transfer terminates, call the right handler. */
void backgroundSaveDoneHandler(int exitcode, int bysignal) {
2020-10-23 20:26:30 +08:00
int type = server.rdb_child_type;
2023-12-07 17:03:51 -08:00
time_t save_end = time(NULL);
2014-10-14 10:11:26 +02:00
switch(server.rdb_child_type) {
2015-07-27 09:41:48 +02:00
case RDB_CHILD_TYPE_DISK:
2023-12-07 17:03:51 -08:00
backgroundSaveDoneHandlerDisk(exitcode, bysignal, save_end);
2014-10-14 10:11:26 +02:00
break;
2015-07-27 09:41:48 +02:00
case RDB_CHILD_TYPE_SOCKET:
2014-10-14 10:11:26 +02:00
backgroundSaveDoneHandlerSocket(exitcode, bysignal);
break;
default:
2015-07-27 09:41:48 +02:00
serverPanic("Unknown RDB child type.");
2014-10-14 10:11:26 +02:00
break;
}
2020-10-23 20:26:30 +08:00
server.rdb_child_type = RDB_CHILD_TYPE_NONE;
2023-12-07 17:03:51 -08:00
server.rdb_save_time_last = save_end - server.rdb_save_time_start;
2020-10-23 20:26:30 +08:00
server.rdb_save_time_start = -1;
/* Possibly there are slaves waiting for a BGSAVE in order to be served
 * (the first stage of SYNC is a bulk transfer of dump.rdb) */
updateSlavesWaitingBgsave((!bysignal && exitcode == 0) ? C_OK : C_ERR, type);
2014-10-14 10:11:26 +02:00
}
2019-01-21 11:28:44 +01:00
/* Kill the RDB saving child using SIGUSR1 (so that the parent will know
 * the child did not exit because of an error, but because we wanted it to),
 * and perform the needed cleanup. */
void killRDBChild(void) {
Refactory fork child related infra, Unify child pid
2020-12-16 15:14:04 +02:00
kill(server.child_pid, SIGUSR1);
2021-03-24 08:41:05 -07:00
/* Because we are not using waitpid here (as we do in killAppendOnlyChild
Refactory fork child related infra, Unify child pid
2020-12-16 15:14:04 +02:00
 * and TerminateModuleForkChild), all the cleanup operations are done by
 * checkChildrenDone, which will later find that the process was killed.
 * This includes:
 * - resetChildState
 * - rdbRemoveTempFile */
2019-01-21 11:28:44 +01:00
}
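A sketch of the counterpart on the reaping side (not the actual checkChildrenDone(); the function name here is made up for the example): the exit status is decomposed into the (exitcode, bysignal) pair that the done handlers above expect, and bysignal == SIGUSR1 is then whitelisted rather than treated as a save error.
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>

static void reapRdbChildExample(pid_t pid) {
    int statloc = 0;
    if (waitpid(pid, &statloc, 0) == pid) {
        int exitcode = WIFEXITED(statloc) ? WEXITSTATUS(statloc) : -1;
        int bysignal = WIFSIGNALED(statloc) ? WTERMSIG(statloc) : 0;
        /* backgroundSaveDoneHandler(exitcode, bysignal); as in the code
         * above; there, bysignal == SIGUSR1 does not flip lastbgsave_status
         * to C_ERR. */
        (void)exitcode;
        (void)bysignal;
    }
}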
2014-10-14 10:11:26 +02:00
/* Spawn an RDB child that writes the RDB to the sockets of the slaves
2015-07-27 09:41:48 +02:00
 * that are currently in SLAVE_STATE_WAIT_BGSAVE_START state. */
2022-01-02 09:39:01 +02:00
int rdbSaveToSlavesSockets(int req, rdbSaveInfo *rsi) {
2014-10-14 10:11:26 +02:00
listNode *ln;
listIter li;
pid_t childpid;
if diskless repl child is killed, make sure to reap the pid (#7742)
2020-09-06 16:43:57 +03:00
int pipefds[2], rdb_pipe_write, safe_to_exit_pipe;
2014-10-14 10:11:26 +02:00
2019-09-27 12:03:09 +02:00
if (hasActiveChildProcess()) return C_ERR;
2014-10-14 10:11:26 +02:00
2019-08-11 16:07:53 +03:00
/* Even if the previous fork child exited, don't start a new one until we
 * drained the pipe. */
if (server.rdb_pipe_conns) return C_ERR;
2014-10-14 10:11:26 +02:00
2019-08-11 16:07:53 +03:00
/* Before forking, create a pipe that is used to transfer the rdb bytes to
2019-10-15 17:21:33 +03:00
 * the parent; we can't let the child write directly to the sockets, since in case
 * of TLS we must let the parent handle a continuous TLS state when the
2019-08-11 16:07:53 +03:00
 * child terminates and the parent takes over. */
2021-10-06 21:08:13 +08:00
if (anetPipe(pipefds, O_NONBLOCK, 0) == -1) return C_ERR;
if diskless repl child is killed, make sure to reap the pid (#7742)
2020-09-06 16:43:57 +03:00
server.rdb_pipe_read = pipefds[0]; /* read end */
rdb_pipe_write = pipefds[1]; /* write end */
2014-10-14 15:29:07 +02:00
if diskless repl child is killed, make sure to reap the pid (#7742)
2020-09-06 16:43:57 +03:00
/* create another pipe that is used by the parent to signal to the child
 * that it can exit. */
2021-10-06 21:08:13 +08:00
if (anetPipe(pipefds, 0, 0) == -1) {
if diskless repl child is killed, make sure to reap the pid (#7742)
2020-09-06 16:43:57 +03:00
close(rdb_pipe_write);
close(server.rdb_pipe_read);
return C_ERR;
}
safe_to_exit_pipe = pipefds[0]; /* read end */
server.rdb_child_exit_pipe = pipefds[1]; /* write end */
2019-08-11 16:07:53 +03:00
/* Collect the connections of the replicas we want to transfer
2014-10-14 15:29:07 +02:00
 * the RDB to, which are in WAIT_BGSAVE_START state. */
2019-08-11 16:07:53 +03:00
server.rdb_pipe_conns = zmalloc(sizeof(connection *) * listLength(server.slaves));
server.rdb_pipe_numconns = 0;
server.rdb_pipe_numconns_writing = 0;
2014-10-14 10:11:26 +02:00
listRewind(server.slaves, &li);
while((ln = listNext(&li))) {
2015-07-26 15:20:46 +02:00
client *slave = ln->value;
2015-07-27 09:41:48 +02:00
if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_START) {
2022-01-02 09:39:01 +02:00
/* Check slave has the exact requirements */
if (slave->slave_req != req)
continue;
2019-08-11 16:07:53 +03:00
server.rdb_pipe_conns[server.rdb_pipe_numconns++] = slave->conn;
2015-08-05 13:34:46 +02:00
replicationSetupSlaveForFullResync(slave, getPsyncInitialOffset());
2014-10-14 10:11:26 +02:00
}
}
2014-10-14 15:29:07 +02:00
/* Create the child process. */
2024-04-04 01:26:33 +07:00
if ((childpid = serverFork(CHILD_TYPE_RDB)) == 0) {
2014-10-14 10:11:26 +02:00
/* Child */
if diskless repl child is killed, make sure to reap the pid (#7742)
2020-09-06 16:43:57 +03:00
int retval, dummy;
2019-08-11 16:07:53 +03:00
rio rdb;
2014-10-14 10:11:26 +02:00
if diskless repl child is killed, make sure to reap the pid (#7742)
2020-09-06 16:43:57 +03:00
rioInitWithFd(&rdb, rdb_pipe_write);
2014-10-14 10:11:26 +02:00
diskless master, avoid bgsave child hung when fork parent crashes (#11463)
During a diskless sync, if the master main process crashes, the child would
have hung in `write`. This fix closes the read fd on the child side, so that if the
parent crashes, the child will get a write error and exit (see the small sketch below).
This change also fixes disk-based replication, BGSAVE and AOFRW.
In those cases the child wouldn't have hung; it would just have kept
running until done, which may be pointless.
There is a certain degree of risk here: in case there's a BGSAVE child that could
maybe succeed and the parent dies for some reason, the old code would have let
the child keep running and maybe succeed and avoid data loss.
On the other hand, if the parent is restarted, it would have loaded an old rdb file
(or none), and then the child could reach the end and rename the rdb file (data
conflicting with what the parent has), or also race with another BGSAVE
child that the new parent started.
Note that I removed a comment saying a write error will be ignored in the child
and handled by the parent (this comment was very old and I don't think it's relevant).
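The small sketch below shows why closing the child's copy of the read end matters (a standalone illustration, not the PR's code): once every read end of a pipe is closed, a write() fails with EPIPE, assuming SIGPIPE is ignored, instead of blocking forever on a full pipe.
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) return 1;
    signal(SIGPIPE, SIG_IGN);  /* otherwise the write would kill the process */
    close(fds[0]);             /* no read end is left open anywhere */
    if (write(fds[1], "x", 1) == -1 && errno == EPIPE)
        printf("writer gets EPIPE instead of hanging\n");
    close(fds[1]);
    return 0;
}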
2022-11-09 10:02:18 +02:00
/* Close the reading part, so that if the parent crashes, the child will
 * get a write error and exit. */
close(server.rdb_pipe_read);
2024-04-04 01:26:33 +07:00
serverSetProcTitle("redis-rdb-to-slaves");
serverSetCpuAffinity(server.bgsave_cpulist);
2014-10-14 10:11:26 +02:00
2022-01-02 09:39:01 +02:00
retval = rdbSaveRioWithEOFMark(req, &rdb, NULL, rsi);
2019-08-11 16:07:53 +03:00
if (retval == C_OK && rioFlush(&rdb) == 0)
2015-07-26 23:17:55 +02:00
retval = C_ERR;
2014-10-17 11:36:12 +02:00
2015-07-26 23:17:55 +02:00
if (retval == C_OK) {
2021-02-16 16:06:51 +02:00
sendChildCowInfo(CHILD_INFO_TYPE_RDB_COW_SIZE, "RDB");
2014-10-14 10:11:26 +02:00
}
2019-10-16 17:08:07 +03:00
2019-08-11 16:07:53 +03:00
rioFreeFd(&rdb);
if diskless repl child is killed, make sure to reap the pid (#7742)
2020-09-06 16:43:57 +03:00
/* wake up the reader, tell it we're done. */
close(rdb_pipe_write);
close(server.rdb_child_exit_pipe); /* close write end so that we can detect the close on the parent. */
/* hold exit until the parent tells us it's safe. we're not expecting
 * to read anything, just get the error when the pipe is closed. */
dummy = read(safe_to_exit_pipe, pipefds, 1);
UNUSED(dummy);
2015-07-26 23:17:55 +02:00
exitFromChild((retval == C_OK) ? 0 : 1);
2014-10-14 10:11:26 +02:00
} else {
/* Parent */
if (childpid == -1) {
2015-07-27 09:41:48 +02:00
serverLog(LL_WARNING, "Can't save in background: fork: %s",
2014-10-14 10:11:26 +02:00
strerror(errno));
2015-09-07 16:09:23 +02:00
/* Undo the state change. The caller will perform cleanup on
 * all the slaves in BGSAVE_START state, but an early call to
 * replicationSetupSlaveForFullResync() turned it into BGSAVE_END */
listRewind(server.slaves, &li);
while((ln = listNext(&li))) {
client *slave = ln->value;
2019-08-11 16:07:53 +03:00
if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END) {
slave->replstate = SLAVE_STATE_WAIT_BGSAVE_START;
2015-09-07 16:09:23 +02:00
}
}
if diskless repl child is killed, make sure to reap the pid (#7742)
2020-09-06 16:43:57 +03:00
close(rdb_pipe_write);
2019-08-11 16:07:53 +03:00
close(server.rdb_pipe_read);
2024-01-08 23:36:34 +08:00
close(server.rdb_child_exit_pipe);
2019-08-11 16:07:53 +03:00
zfree(server.rdb_pipe_conns);
server.rdb_pipe_conns = NULL;
server.rdb_pipe_numconns = 0;
server.rdb_pipe_numconns_writing = 0;
2015-09-07 16:09:23 +02:00
} else {
2020-12-13 17:09:54 +02:00
serverLog ( LL_NOTICE , " Background RDB transfer started by pid %ld " ,
( long ) childpid ) ;
2015-09-07 16:09:23 +02:00
server . rdb_save_time_start = time ( NULL ) ;
server . rdb_child_type = RDB_CHILD_TYPE_SOCKET ;
if diskless repl child is killed, make sure to reap the pid (#7742)
2020-09-06 16:43:57 +03:00
close(rdb_pipe_write); /* close write in parent so that it can detect the close on the child. */
2019-08-11 16:07:53 +03:00
if (aeCreateFileEvent(server.el, server.rdb_pipe_read, AE_READABLE, rdbPipeReadHandler, NULL) == AE_ERR) {
serverPanic("Unrecoverable error creating server.rdb_pipe_read file event.");
}
2014-10-14 10:11:26 +02:00
}
2021-10-24 16:52:44 +03:00
close(safe_to_exit_pipe);
2015-09-07 16:09:23 +02:00
return (childpid == -1) ? C_ERR : C_OK;
2014-10-14 10:11:26 +02:00
}
2015-09-07 16:09:23 +02:00
return C_OK; /* Unreached. */
2010-06-22 00:07:48 +02:00
}
2011-01-07 18:15:14 +01:00
2015-07-26 15:20:46 +02:00
void saveCommand(client *c) {
Refactory fork child related infra, Unify child pid
2020-12-16 15:14:04 +02:00
if (server.child_type == CHILD_TYPE_RDB) {
2011-01-07 18:15:14 +01:00
addReplyError(c, "Background save already in progress");
return;
}
2022-06-07 22:38:31 +08:00
server.stat_rdb_saves++;
2017-09-20 13:47:42 +08:00
rdbSaveInfo rsi, *rsiptr;
rsiptr = rdbPopulateSaveInfo(&rsi);
Reclaim page cache of RDB file (#11248)
2023-02-12 15:23:29 +08:00
if (rdbSave(SLAVE_REQ_NONE, server.rdb_filename, rsiptr, RDBFLAGS_NONE) == C_OK) {
2011-01-07 18:15:14 +01:00
addReply(c, shared.ok);
} else {
2020-12-23 19:06:25 -08:00
addReplyErrorObject(c, shared.err);
2011-01-07 18:15:14 +01:00
}
}
2016-07-21 18:34:53 +02:00
/* BGSAVE [SCHEDULE] */
2015-07-26 15:20:46 +02:00
void bgsaveCommand ( client * c ) {
2016-07-21 18:34:53 +02:00
    int schedule = 0;
    /* The SCHEDULE option changes the behavior of BGSAVE when an AOF rewrite
     * is in progress. Instead of returning an error a BGSAVE gets scheduled. */
    if (c->argc > 1) {
        if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,"schedule")) {
            schedule = 1;
        } else {
2020-12-23 19:06:25 -08:00
            addReplyErrorObject(c,shared.syntaxerr);
2016-07-21 18:34:53 +02:00
            return;
        }
    }
2017-11-01 17:52:43 +08:00
    rdbSaveInfo rsi, *rsiptr;
    rsiptr = rdbPopulateSaveInfo(&rsi);
Refactor fork child related infra, unify child pid
This is a refactoring commit and isn't supposed to have any actual impact.
It does the following:
- Keep just one fork child pid variable in the server struct instead of 3.
- Have one server struct variable indicating the purpose of the current fork
child.
- redisFork is now responsible for updating the server struct with the pid,
which means it can be the one that calls updateDictResizePolicy.
- Move child info pipe handling into redisFork instead of having it
repeated outside.
- There are two classes of fork purposes: a mutually exclusive group (AOF, RDB,
Module), and one that can create several forks to coexist in parallel (LDB,
but maybe Modules some day too; the Module API allows for that).
- Minor fix to killRDBChild:
unlike killAppendOnlyChild and TerminateModuleForkChild, killRDBChild
doesn't clear the pid variable or call wait4, so checkChildrenDone does
the cleanup for it.
This commit removes the explicit calls to rdbRemoveTempFile, closeChildInfoPipe,
and updateDictResizePolicy, which didn't do any harm but were unnecessary.
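As a hedged illustration of the post-refactor shape of a fork call site (not the actual rdbSaveBackground code; exitFromChild and the reply handling here are assumptions), the caller just asks redisFork() for a child of the right type and no longer touches the child-info pipe or the dict resize policy itself:
/* Sketch only: assumes redisFork(CHILD_TYPE_RDB) behaves like fork(), i.e.
 * returns 0 in the child, the child pid in the parent, -1 on error, and that
 * it records the single child pid / server.child_type and sets up the
 * child-info pipe internally. */
pid_t childpid = redisFork(CHILD_TYPE_RDB);
if (childpid == 0) {
    /* Child: write the RDB file and exit with the outcome. */
    int retval = rdbSave(SLAVE_REQ_NONE, server.rdb_filename, rsiptr, RDBFLAGS_NONE);
    exitFromChild(retval == C_OK ? 0 : 1);   /* exitFromChild(): assumed helper */
} else if (childpid == -1) {
    addReplyErrorObject(c, shared.err);      /* fork failed */
} else {
    addReplyStatus(c, "Background saving started");  /* parent keeps serving */
}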
2020-12-16 15:14:04 +02:00
    if (server.child_type == CHILD_TYPE_RDB) {
2011-01-07 18:15:14 +01:00
        addReplyError(c,"Background save already in progress");
2022-01-04 12:37:47 +01:00
    } else if (hasActiveChildProcess() || server.in_exec) {
        if (schedule || server.in_exec) {
2016-07-21 18:34:53 +02:00
            server.rdb_bgsave_scheduled = 1;
            addReplyStatus(c,"Background saving scheduled");
        } else {
            addReplyError(c,
2019-09-27 11:59:37 +02:00
" Another child process is active (AOF?): can't BGSAVE right now. "
" Use BGSAVE SCHEDULE in order to schedule a BGSAVE whenever "
" possible. " ) ;
2016-07-21 18:34:53 +02:00
        }
Reclaim page cache of RDB file (#11248)
2023-02-12 15:23:29 +08:00
    } else if (rdbSaveBackground(SLAVE_REQ_NONE,server.rdb_filename,rsiptr,RDBFLAGS_NONE) == C_OK) {
2011-01-07 18:15:14 +01:00
        addReplyStatus(c,"Background saving started");
    } else {
2020-12-23 19:06:25 -08:00
        addReplyErrorObject(c,shared.err);
2011-01-07 18:15:14 +01:00
    }
}
2017-09-19 23:03:39 +02:00
/* Populate the rdbSaveInfo structure used to persist the replication
2017-09-20 11:28:13 +02:00
 * information inside the RDB file. Currently the structure explicitly
2017-09-19 23:03:39 +02:00
 * contains just the currently selected DB from the master stream. However,
 * if the rdbSave*() family functions receive a NULL rsi structure,
2021-06-10 20:39:33 +08:00
 * the Replication ID/offset is not saved either. The function populates 'rsi',
2017-09-19 23:03:39 +02:00
 * which is normally stack-allocated in the caller, and returns the populated
 * pointer if the instance has a valid master client; otherwise NULL
2017-09-20 13:47:42 +08:00
 * is returned, and the RDB saving will not persist any replication related
2017-09-19 23:03:39 +02:00
 * information. */
rdbSaveInfo *rdbPopulateSaveInfo(rdbSaveInfo *rsi) {
    rdbSaveInfo rsi_init = RDB_SAVE_INFO_INIT;
    *rsi = rsi_init;
2017-09-20 11:28:13 +02:00
    /* If the instance is a master, we can populate the replication info
2017-11-02 10:45:33 +08:00
     * only when repl_backlog is not NULL. If the repl_backlog is NULL,
     * it means that the instance isn't in any replication chain. In this
     * scenario the replication info is useless, because when a slave
2017-11-24 11:08:22 +01:00
     * connects to us, the NULL repl_backlog will trigger a full
     * synchronization, and at the same time we will use a new replid and clear
     * replid2. */
2017-11-02 10:45:33 +08:00
    if (!server.masterhost && server.repl_backlog) {
2017-11-22 12:05:30 +08:00
        /* Note that when server.slaveseldb is -1, it means that this master
         * didn't apply any write commands after a full synchronization.
         * So we can let repl_stream_db be 0, which allows a restarted slave
         * to reload the replication ID/offset; it's safe because the next write
         * command must generate a SELECT statement. */
2017-11-24 11:08:22 +01:00
        rsi->repl_stream_db = server.slaveseldb == -1 ? 0 : server.slaveseldb;
2017-09-20 13:47:42 +08:00
        return rsi;
    }
2017-09-20 11:28:13 +02:00
2017-11-04 23:05:00 +08:00
    /* If the instance is a slave we need a connected master
     * in order to fetch the currently selected DB. */
2017-09-19 23:03:39 +02:00
    if (server.master) {
        rsi->repl_stream_db = server.master->db->id;
        return rsi;
    }
2017-11-24 11:08:22 +01:00
    /* If we have a cached master we can use it in order to populate the
     * replication selected DB info inside the RDB file: the slave can
     * increment the master_repl_offset only from data arriving from the
     * master, so if we are disconnected the offset in the cached master
     * is valid. */
2017-11-04 23:05:00 +08:00
    if (server.cached_master) {
        rsi->repl_stream_db = server.cached_master->db->id;
        return rsi;
    }
2017-09-20 11:28:13 +02:00
    return NULL;
2017-09-19 23:03:39 +02:00
}
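A brief caller-side sketch restating the contract above (it mirrors what saveCommand already does rather than adding anything new): the struct is stack-allocated by the caller and whatever pointer comes back is passed straight to the rdbSave*() functions.
/* Sketch: a NULL return from rdbPopulateSaveInfo() is not an error; it only
 * means the resulting RDB will carry no replication ID/offset. */
rdbSaveInfo rsi, *rsiptr = rdbPopulateSaveInfo(&rsi);
if (rdbSave(SLAVE_REQ_NONE, server.rdb_filename, rsiptr, RDBFLAGS_NONE) != C_OK) {
    /* Handle the failure, e.g. reply with shared.err as saveCommand() does. */
}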