commit c068f2cd3d
Merge tag 'tags/6.0.10' into redismerge_2021-01-20

Former-commit-id: dadce055f897cee83946c2d3e5cbb76341b94230
.github/workflows/daily.yml (3 changes)
@@ -100,6 +100,7 @@ jobs:
 make valgrind
 - name: test
 run: |
+sudo apt-get update
 sudo apt-get install tcl8.5 valgrind -y
 ./runtest --valgrind --verbose --clients 1
 - name: module api test
@@ -169,7 +170,7 @@ jobs:
 run: make
 - name: test
 run: |
-./runtest --accurate --verbose
+./runtest --accurate --verbose --no-latency
 - name: module api test
 run: ./runtest-moduleapi --verbose
 - name: sentinel tests
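For reference, the two hunks above boil down to the commands below; this is a minimal sketch for reproducing the CI run locally on a Debian/Ubuntu box (package names and flags are taken from the workflow, availability of the tcl8.5 package on your distro is assumed):

    # hedged sketch, not part of the diff: mirror the daily valgrind job by hand
    sudo apt-get update
    sudo apt-get install tcl8.5 valgrind -y
    ./runtest --valgrind --verbose --clients 1
    # the nightly "accurate" job now also passes --no-latency:
    ./runtest --accurate --verbose --no-latency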
00-RELEASENOTES (416 changes)
@@ -11,6 +11,422 @@ CRITICAL: There is a critical bug affecting MOST USERS. Upgrade ASAP.
 SECURITY: There are security fixes in the release.
 --------------------------------------------------------------------------------

+================================================================================
+Redis 6.0.10 Released Tue Jan 12 16:20:20 IST 2021
+================================================================================
+
+Upgrade urgency MODERATE: several bugs with moderate impact are fixed,
+Here is a comprehensive list of changes in this release compared to 6.0.9.
+
+Command behavior changes:
+* SWAPDB invalidates WATCHed keys (#8239)
+* SORT command behaves differently when used on a writable replica (#8283)
+* EXISTS should not alter LRU (#8016)
+  In Redis 5.0 and 6.0 it would have touched the LRU/LFU of the key.
+* OBJECT should not reveal logically expired keys (#8016)
+  Will now behave the same TYPE or any other non-DEBUG command.
+* GEORADIUS[BYMEMBER] can fail with -OOM if Redis is over the memory limit (#8107)
+
+Other behavior changes:
+* Sentinel: Fix missing updates to the config file after SENTINEL SET command (#8229)
+* CONFIG REWRITE is atomic and safer, but requires write access to the config file's folder (#7824, #8051)
+  This change was already present in 6.0.9, but was missing from the release notes.
+
+Bug fixes with compatibility implications (bugs introduced in Redis 6.0):
+* Fix RDB CRC64 checksum on big-endian systems (#8270)
+  If you're using big-endian please consider the compatibility implications with
+  RESTORE, replication and persistence.
+* Fix wrong order of key/value in Lua's map response (#8266)
+  If your scripts use redis.setresp() or return a map (new in Redis 6.0), please
+  consider the implications.
+
+Bug fixes:
+* Fix an issue where a forked process deletes the parent's pidfile (#8231)
+* Fix crashes when enabling io-threads-do-reads (#8230)
+* Fix a crash in redis-cli after executing cluster backup (#8267)
+* Handle output buffer limits for module blocked clients (#8141)
+  Could result in a module sending reply to a blocked client to go beyond the limit.
+* Fix setproctitle related crashes. (#8150, #8088)
+  Caused various crashes on startup, mainly on Apple M1 chips or under instrumentation.
+* Backup/restore cluster mode keys to slots map for repl-diskless-load=swapdb (#8108)
+  In cluster mode with repl-diskless-load, when loading failed, slot map wouldn't
+  have been restored.
+* Fix oom-score-adj-values range, and bug when used in config file (#8046)
+  Enabling setting this in the config file in a line after enabling it, would
+  have been buggy.
+* Reset average ttl when empty databases (#8106)
+  Just causing misleading metric in INFO
+* Disable rehash when Redis has child process (#8007)
+  This could have caused excessive CoW during BGSAVE, replication or AOFRW.
+* Further improved ACL algorithm for picking categories (#7966)
+  Output of ACL GETUSER is now more similar to the one provided by ACL SETUSER.
+* Fix bug with module GIL being released prematurely (#8061)
+  Could in theory (and rarely) cause multi-threaded modules to corrupt memory.
+* Reduce effect of client tracking causing feedback loop in key eviction (#8100)
+* Fix cluster access to unaligned memory (SIGBUS on old ARM) (#7958)
+* Fix saving of strings larger than 2GB into RDB files (#8306)
+
+Additional improvements:
+* Avoid wasteful transient memory allocation in certain cases (#8286, #5954)
+
+Platform / toolchain support related improvements:
+* Fix crash log registers output on ARM. (#8020)
+* Add a check for an ARM64 Linux kernel bug (#8224)
+  Due to the potential severity of this issue, Redis will print log warning on startup.
+* Raspberry build fix. (#8095)
+
+New configuration options:
+* oom-score-adj-values config can now take absolute values (besides relative ones) (#8046)
+
+Module related fixes:
+* Moved RMAPI_FUNC_SUPPORTED so that it's usable (#8037)
+* Improve timer accuracy (#7987)
+* Allow '\0' inside of result of RM_CreateStringPrintf (#6260)
+
+================================================================================
+Redis 6.0.9 Released Mon Oct 26 10:37:47 IST 2020
+================================================================================
+
+Upgrade urgency: SECURITY if you use an affected platform (see below).
+Otherwise the upgrade urgency is MODERATE.
+
+This release fixes a potential heap overflow when using a heap allocator other
+than jemalloc or glibc's malloc. See:
+https://github.com/redis/redis/pull/7963
+
+Other fixes in this release:
+
+New:
+* Memory reporting of clients argv (#7874)
+* Add redis-cli control on raw format line delimiter (#7841)
+* Add redis-cli support for rediss:// -u prefix (#7900)
+* Get rss size support for NetBSD and DragonFlyBSD
+
+Behavior changes:
+* WATCH no longer ignores keys which have expired for MULTI/EXEC (#7920)
+* Correct OBJECT ENCODING response for stream type (#7797)
+* Allow blocked XREAD on a cluster replica (#7881)
+* TLS: Do not require CA config if not used (#7862)
+
+Bug fixes:
+* INFO report real peak memory (before eviction) (#7894)
+* Allow requirepass config to clear the password (#7899)
+* Fix config rewrite file handling to make it really atomic (#7824)
+* Fix excessive categories being displayed from ACLs (#7889)
+* Add fsync in replica when full RDB payload was received (#7839)
+* Don't write replies to socket when output buffer limit reached (#7202)
+* Fix redis-check-rdb support for modules aux data (#7826)
+* Other smaller bug fixes
+
+Modules API:
+* Add APIs for version and compatibility checks (#7865)
+* Add RM_GetClientCertificate (#7866)
+* Add RM_GetDetachedThreadSafeContext (#7886)
+* Add RM_GetCommandKeys (#7884)
+* Add Swapdb Module Event (#7804)
+* RM_GetContextFlags provides indication of being in a fork child (#7783)
+* RM_GetContextFlags document missing flags: MULTI_DIRTY, IS_CHILD (#7821)
+* Expose real client on connection events (#7867)
+* Minor improvements to module blocked on keys (#7903)
+
+Full list of commits:
+
+Yossi Gottlieb in commit ce0d74d8f: Fix wrong zmalloc_size() assumption. (#7963) [1 file changed, 3 deletions(-)]
+Oran Agra in commit d3ef26822: Attempt to fix sporadic test failures due to wait_for_log_messages (#7955) [1 file changed, 2 insertions(+)]
+David CARLIER in commit 76993a0d4: cpu affinity: DragonFlyBSD support (#7956) [2 files changed, 9 insertions(+), 2 deletions(-)]
+Zach Fewtrell in commit b23cdc14a: fix invalid 'failover' identifier in cluster slave selection test (#7942) [1 file changed, 1 insertion(+), 1 deletion(-)]
+WuYunlong in commit 99a4cb401: Update rdb_last_bgsave_time_sec in INFO on diskless replication (#7917) [1 file changed, 11 insertions(+), 14 deletions(-)]
+Wen Hui in commit 258287c35: do not add save parameter during config rewrite in sentinel mode (#7945) [1 file changed, 6 insertions(+)]
+Qu Chen in commit 6134279e2: WATCH no longer ignores keys which have expired for MULTI/EXEC. (#7920) [2 files changed, 3 insertions(+), 3 deletions(-)]
+Oran Agra in commit d15ec67c6: improve verbose logging on failed test. print log file lines (#7938) [1 file changed, 4 insertions(+)]
+Yossi Gottlieb in commit 8a2e6d24f: Add a --no-latency tests flag. (#7939) [5 files changed, 23 insertions(+), 9 deletions(-)]
+filipe oliveira in commit 0a1737dc5: Fixed bug concerning redis-benchmark non clustered benchmark forcing always the same hash tag {tag} (#7931) [1 file changed, 31 insertions(+), 24 deletions(-)]
+Oran Agra in commit 6d9b3df71: fix 32bit build warnings (#7926) [2 files changed, 3 insertions(+), 3 deletions(-)]
+Wen Hui in commit ed6f7a55e: fix double fclose in aofrewrite (#7919) [1 file changed, 6 insertions(+), 5 deletions(-)]
+Oran Agra in commit 331d73c92: INFO report peak memory before eviction (#7894) [1 file changed, 11 insertions(+), 1 deletion(-)]
+Yossi Gottlieb in commit e88e13528: Fix tests failure on busybox systems. (#7916) [2 files changed, 2 insertions(+), 2 deletions(-)]
+Oran Agra in commit b7f53738e: Allow requirepass config to clear the password (#7899) [1 file changed, 18 insertions(+), 8 deletions(-)]
+Wang Yuan in commit 2ecb28b68: Remove temporary aof and rdb files in a background thread (#7905) [2 files changed, 3 insertions(+), 3 deletions(-)]
+guybe7 in commit 7bc605e6b: Minor improvements to module blocked on keys (#7903) [3 files changed, 15 insertions(+), 9 deletions(-)]
+Andreas Lind in commit 1b484608d: Support redis-cli -u rediss://... (#7900) [1 file changed, 9 insertions(+), 1 deletion(-)]
+Yossi Gottlieb in commit 95095d680: Modules: fix RM_GetCommandKeys API. (#7901) [3 files changed, 4 insertions(+), 7 deletions(-)]
+Meir Shpilraien (Spielrein) in commit cd3ae2f2c: Add Module API for version and compatibility checks (#7865) [9 files changed, 180 insertions(+), 3 deletions(-)]
+Yossi Gottlieb in commit 1d723f734: Module API: Add RM_GetClientCertificate(). (#7866) [6 files changed, 88 insertions(+)]
+Yossi Gottlieb in commit d72172752: Modules: Add RM_GetDetachedThreadSafeContext(). (#7886) [4 files changed, 52 insertions(+), 2 deletions(-)]
+Yossi Gottlieb in commit e4f9aff19: Modules: add RM_GetCommandKeys(). [6 files changed, 238 insertions(+), 1 deletion(-)]
+Yossi Gottlieb in commit 6682b913e: Introduce getKeysResult for getKeysFromCommand. [7 files changed, 170 insertions(+), 121 deletions(-)]
+Madelyn Olson in commit 9db65919c: Fixed excessive categories being displayed from acls (#7889) [2 files changed, 29 insertions(+), 2 deletions(-)]
+Oran Agra in commit f34c50cf6: Add some additional signal info to the crash log (#7891) [1 file changed, 4 insertions(+), 1 deletion(-)]
+Oran Agra in commit 300bb4701: Allow blocked XREAD on a cluster replica (#7881) [3 files changed, 43 insertions(+)]
+Oran Agra in commit bc5cf0f1a: memory reporting of clients argv (#7874) [5 files changed, 55 insertions(+), 5 deletions(-)]
+DvirDukhan in commit 13d2e6a57: redis-cli add control on raw format line delimiter (#7841) [1 file changed, 8 insertions(+), 6 deletions(-)]
+Oran Agra in commit d54e25620: Include internal sds fragmentation in MEMORY reporting (#7864) [2 files changed, 7 insertions(+), 7 deletions(-)]
+Oran Agra in commit ac2c2b74e: Fix crash in script timeout during AOF loading (#7870) [2 files changed, 47 insertions(+), 4 deletions(-)]
+Rafi Einstein in commit 00d2082e7: Makefile: enable program suffixes via PROG_SUFFIX (#7868) [2 files changed, 10 insertions(+), 6 deletions(-)]
+nitaicaro in commit d2c2c26e7: Fixed Tracking test “The other connection is able to get invalidations” (#7871) [1 file changed, 3 insertions(+), 2 deletions(-)]
+Yossi Gottlieb in commit 2c172556f: Modules: expose real client on conn events. [1 file changed, 11 insertions(+), 2 deletions(-)]
+Yossi Gottlieb in commit 2972d0c1f: Module API: Fail ineffective auth calls. [1 file changed, 5 insertions(+)]
+Yossi Gottlieb in commit aeb2a3b6a: TLS: Do not require CA config if not used. (#7862) [1 file changed, 5 insertions(+), 3 deletions(-)]
+Oran Agra in commit d8e64aeb8: warning: comparison between signed and unsigned integer in 32bit build (#7838) [1 file changed, 2 insertions(+), 2 deletions(-)]
+David CARLIER in commit 151209982: Add support for Haiku OS (#7435) [3 files changed, 16 insertions(+)]
+Gavrie Philipson in commit b1d3e169f: Fix typo in module API docs (#7861) [1 file changed, 2 insertions(+), 2 deletions(-)]
+David CARLIER in commit 08e3b8d13: getting rss size implementation for netbsd (#7293) [1 file changed, 20 insertions(+)]
+Oran Agra in commit 0377a889b: Fix new obuf-limits tests to work with TLS (#7848) [2 files changed, 29 insertions(+), 13 deletions(-)]
+caozb in commit a057ad9b1: ignore slaveof no one in redis.conf (#7842) [1 file changed, 10 insertions(+), 1 deletion(-)]
+Wang Yuan in commit 87ecee645: Don't support Gopher if enable io threads to read queries (#7851) [2 files changed, 8 insertions(+), 5 deletions(-)]
+Wang Yuan in commit b92902236: Set 'loading' and 'shutdown_asap' to volatile sig_atomic_t type (#7845) [1 file changed, 2 insertions(+), 2 deletions(-)]
+Uri Shachar in commit ee0875a02: Fix config rewrite file handling to make it really atomic (#7824) [1 file changed, 49 insertions(+), 47 deletions(-)]
+WuYunlong in commit d577519e1: Add fsync to readSyncBulkPayload(). (#7839) [1 file changed, 11 insertions(+)]
+Wen Hui in commit 104e0ea3e: rdb.c: handle fclose error case differently to avoid double fclose (#7307) [1 file changed, 7 insertions(+), 6 deletions(-)]
+Wang Yuan in commit 0eb015ac6: Don't write replies if close the client ASAP (#7202) [7 files changed, 144 insertions(+), 2 deletions(-)]
+Guy Korland in commit 08a03e32c: Fix RedisModule_HashGet examples (#6697) [1 file changed, 4 insertions(+), 4 deletions(-)]
+Oran Agra in commit 09551645d: fix recently broken TLS build error, and add coverage for CI (#7833) [2 files changed, 4 insertions(+), 3 deletions(-)]
+David CARLIER in commit c545ba5d0: Further NetBSD update and build fixes. (#7831) [3 files changed, 72 insertions(+), 3 deletions(-)]
+WuYunlong in commit ec9050053: Fix redundancy use of semicolon in do-while macros in ziplist.c. (#7832) [1 file changed, 3 insertions(+), 3 deletions(-)]
+yixiang in commit 27a4d1314: Fix connGetSocketError usage (#7811) [2 files changed, 6 insertions(+), 4 deletions(-)]
+Oran Agra in commit 30795dcae: RM_GetContextFlags - document missing flags (#7821) [1 file changed, 6 insertions(+)]
+Yossi Gottlieb in commit 14a12849f: Fix occasional hangs on replication reconnection. (#7830) [2 files changed, 14 insertions(+), 3 deletions(-)]
+Ariel Shtul in commit d5a1b06dc: Fix redis-check-rdb support for modules aux data (#7826) [3 files changed, 21 insertions(+), 1 deletion(-)]
+Wen Hui in commit 39f793693: refactor rewriteStreamObject code for adding missing streamIteratorStop call (#7829) [1 file changed, 36 insertions(+), 18 deletions(-)]
+WuYunlong in commit faad29bfb: Make IO threads killable so that they can be canceled at any time. [1 file changed, 1 insertion(+)]
+WuYunlong in commit b3f1b5830: Make main thread killable so that it can be canceled at any time. Refine comment of makeThreadKillable(). [3 files changed, 11 insertions(+), 4 deletions(-)]
+Oran Agra in commit 0f43d1f55: RM_GetContextFlags provides indication that we're in a fork child (#7783) [8 files changed, 28 insertions(+), 18 deletions(-)]
+Wen Hui in commit a55ea9cdf: Add Swapdb Module Event (#7804) [5 files changed, 52 insertions(+)]
+Daniel Dai in commit 1d8f72bef: fix make warnings in debug.c MacOS (#7805) [2 files changed, 3 insertions(+), 2 deletions(-)]
+David CARLIER in commit 556953d93: debug.c: NetBSD build warning fix. (#7810) [1 file changed, 4 insertions(+), 3 deletions(-)]
+Wang Yuan in commit d02435b66: Remove tmp rdb file in background thread (#7762) [6 files changed, 82 insertions(+), 8 deletions(-)]
+Oran Agra in commit 1bd7bfdc0: Add printf attribute and fix warnings and a minor bug (#7803) [2 files changed, 12 insertions(+), 4 deletions(-)]
+WuYunlong in commit d25147b4c: bio: doFastMemoryTest should try to kill io threads as well. [3 files changed, 19 insertions(+)]
+WuYunlong in commit 4489ba081: bio: fix doFastMemoryTest. [4 files changed, 25 insertions(+), 3 deletions(-)]
+Wen Hui in commit cf85def67: correct OBJECT ENCODING response for stream type (#7797) [1 file changed, 1 insertion(+)]
+WuYunlong in commit cf5bcf892: Clarify help text of tcl scripts. (#7798) [1 file changed, 1 insertion(+)]
+Mykhailo Pylyp in commit f72665c65: Recalculate hardcoded variables from $::instances_count in sentinel tests (#7561) [3 files changed, 15 insertions(+), 13 deletions(-)]
+Oran Agra in commit c67b19e7a: Fix failing valgrind installation in github actions (#7792) [1 file changed, 1 insertion(+)]
+Oran Agra in commit 92763fd2a: fix broken PEXPIREAT test (#7791) [1 file changed, 10 insertions(+), 6 deletions(-)]
+Wang Yuan in commit f5b4c0ccb: Remove dead global variable 'lru_clock' (#7782) [1 file changed, 1 deletion(-)]
+Oran Agra in commit 82d431fd6: Squash merging 125 typo/grammar/comment/doc PRs (#7773) [80 files changed, 436 insertions(+), 416 deletions(-)]

 ================================================================================
 Redis 6.0.8 Released Wed Sep 09 23:34:17 IDT 2020
 ================================================================================
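As an aside, the rediss:// URL support called out in the 6.0.9 notes above (#7900) can be exercised with something like the following; the host, port and certificate path are made up for illustration, --cacert is only needed when the CA is not in the system trust store, and a redis-cli built with TLS support is assumed:

    # hypothetical endpoint, shown only to illustrate the new -u rediss:// prefix
    redis-cli -u rediss://redis.example.com:6380 --cacert ./ca.crt ping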
README.md (17 changes)
@@ -106,6 +106,9 @@ as libsystemd-dev on Debian/Ubuntu or systemd-devel on CentOS) and run:
 % make USE_SYSTEMD=yes

+To append a suffix to KeyDB program names, use:
+
+% make PROG_SUFFIX="-alt"

 ***Note that the following dependencies may be needed:
 % sudo apt-get install autoconf autotools-dev libnuma-dev libtool
@@ -120,7 +123,7 @@ installed):
 Fixing build problems with dependencies or cached build options
 ---------

-KeyDB has some dependencies which are included into the `deps` directory.
+KeyDB has some dependencies which are included in the `deps` directory.
 `make` does not automatically rebuild dependencies even if something in
 the source code of dependencies changes.

@@ -147,7 +150,7 @@ with a 64 bit target, or the other way around, you need to perform a
 In case of build errors when trying to build a 32 bit binary of KeyDB, try
 the following steps:

-* Install the packages libc6-dev-i386 (also try g++-multilib).
+* Install the package libc6-dev-i386 (also try g++-multilib).
 * Try using the following command line instead of `make 32bit`:
   `make CFLAGS="-m32 -march=native" LDFLAGS="-m32"`

@@ -172,14 +175,14 @@ Verbose build
 -------------

 KeyDB will build with a user friendly colorized output by default.
-If you want to see a more verbose output use the following:
+If you want to see a more verbose output, use the following:

 % make V=1

 Running KeyDB
 -------------

-To run KeyDB with the default configuration just type:
+To run KeyDB with the default configuration, just type:

 % cd src
 % ./keydb-server
@@ -232,7 +235,7 @@ You can find the list of all the available commands at https://docs.keydb.dev/do
 Installing KeyDB
 -----------------

-In order to install KeyDB binaries into /usr/local/bin just use:
+In order to install KeyDB binaries into /usr/local/bin, just use:

 % make install

@@ -241,8 +244,8 @@ different destination.

 Make install will just install binaries in your system, but will not configure
 init scripts and configuration files in the appropriate place. This is not
-needed if you want just to play a bit with KeyDB, but if you are installing
-it the proper way for a production system, we have a script doing this
+needed if you just want to play a bit with KeyDB, but if you are installing
+it the proper way for a production system, we have a script that does this
 for Ubuntu and Debian systems:

 % cd utils
deps/README.md (6 changes)
@@ -21,7 +21,7 @@ just following tose steps:

 1. Remove the jemalloc directory.
 2. Substitute it with the new jemalloc source tree.
-3. Edit the Makefile localted in the same directory as the README you are
+3. Edit the Makefile located in the same directory as the README you are
    reading, and change the --with-version in the Jemalloc configure script
    options with the version you are using. This is required because otherwise
    Jemalloc configuration script is broken and will not work nested in another
@@ -33,7 +33,7 @@ If you want to upgrade Jemalloc while also providing support for
 active defragmentation, in addition to the above steps you need to perform
 the following additional steps:

-5. In Jemalloc three, file `include/jemalloc/jemalloc_macros.h.in`, make sure
+5. In Jemalloc tree, file `include/jemalloc/jemalloc_macros.h.in`, make sure
    to add `#define JEMALLOC_FRAG_HINT`.
 6. Implement the function `je_get_defrag_hint()` inside `src/jemalloc.c`. You
    can see how it is implemented in the current Jemalloc source tree shipped
@@ -49,7 +49,7 @@ Hiredis uses the SDS string library, that must be the same version used inside R
 1. Check with diff if hiredis API changed and what impact it could have in Redis.
 2. Make sure that the SDS library inside Hiredis and inside Redis are compatible.
 3. After the upgrade, run the Redis Sentinel test.
-4. Check manually that redis-cli and redis-benchmark behave as expecteed, since we have no tests for CLI utilities currently.
+4. Check manually that redis-cli and redis-benchmark behave as expected, since we have no tests for CLI utilities currently.

 Linenoise
 ---
deps/linenoise/linenoise.c (4 changes)
@@ -625,7 +625,7 @@ static void refreshMultiLine(struct linenoiseState *l) {
     rpos2 = (plen+l->pos+l->cols)/l->cols; /* current cursor relative row. */
     lndebug("rpos2 %d", rpos2);

-    /* Go up till we reach the expected positon. */
+    /* Go up till we reach the expected position. */
     if (rows-rpos2 > 0) {
         lndebug("go-up %d", rows-rpos2);
         snprintf(seq,64,"\x1b[%dA", rows-rpos2);
@@ -767,7 +767,7 @@ void linenoiseEditBackspace(struct linenoiseState *l) {
     }
 }

-/* Delete the previosu word, maintaining the cursor at the start of the
+/* Delete the previous word, maintaining the cursor at the start of the
  * current word. */
 void linenoiseEditDeletePrevWord(struct linenoiseState *l) {
     size_t old_pos = l->pos;
keydb.conf (158 changes)
@@ -24,7 +24,7 @@
 # to customize a few per-server settings. Include files can include
 # other files, so use this wisely.
 #
-# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
+# Note that option "include" won't be rewritten by command "CONFIG REWRITE"
 # from admin or KeyDB Sentinel. Since KeyDB always uses the last processed
 # line as value of a configuration directive, you'd better put includes
 # at the beginning of this file to avoid overwriting config change at runtime.
@@ -46,7 +46,7 @@
 ################################## NETWORK #####################################

 # By default, if no "bind" configuration directive is specified, KeyDB listens
-# for connections from all the network interfaces available on the server.
+# for connections from all available network interfaces on the host machine.
 # It is possible to listen to just one or multiple selected interfaces using
 # the "bind" configuration directive, followed by one or more IP addresses.
 #
@@ -58,13 +58,12 @@
 # ~~~ WARNING ~~~ If the computer running KeyDB is directly exposed to the
 # internet, binding to all the interfaces is dangerous and will expose the
 # instance to everybody on the internet. So by default we uncomment the
-# following bind directive, that will force KeyDB to listen only into
-# the IPv4 loopback interface address (this means KeyDB will be able to
-# accept connections only from clients running into the same computer it
-# is running).
+# following bind directive, that will force KeyDB to listen only on the
+# IPv4 loopback interface address (this means KeyDB will only be able to
+# accept client connections from the same host that it is running on).
 #
 # IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
-# JUST COMMENT THE FOLLOWING LINE.
+# JUST COMMENT OUT THE FOLLOWING LINE.
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 bind 127.0.0.1

@@ -93,8 +92,8 @@ port 6379

 # TCP listen() backlog.
 #
-# In high requests-per-second environments you need an high backlog in order
-# to avoid slow clients connections issues. Note that the Linux kernel
+# In high requests-per-second environments you need a high backlog in order
+# to avoid slow clients connection issues. Note that the Linux kernel
 # will silently truncate it to the value of /proc/sys/net/core/somaxconn so
 # make sure to raise both the value of somaxconn and tcp_max_syn_backlog
 # in order to get the desired effect.
@@ -118,8 +117,8 @@ timeout 0
 # of communication. This is useful for two reasons:
 #
 # 1) Detect dead peers.
-# 2) Take the connection alive from the point of view of network
-# equipment in the middle.
+# 2) Force network equipment in the middle to consider the connection to be
+# alive.
 #
 # On Linux, the specified value (in seconds) is the period used to send ACKs.
 # Note that to close the connection the double of the time is needed.
@@ -228,11 +227,12 @@ daemonize no
 # supervision tree. Options:
 # supervised no - no supervision interaction
 # supervised upstart - signal upstart by putting KeyDB into SIGSTOP mode
+# requires "expect stop" in your upstart job config
 # supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
 # supervised auto - detect upstart or systemd method based on
 # UPSTART_JOB or NOTIFY_SOCKET environment variables
 # Note: these supervision methods only signal "process is ready."
-# They do not enable continuous liveness pings back to your supervisor.
+# They do not enable continuous pings back to your supervisor.
 supervised no

 # If a pid file is specified, KeyDB writes it where specified at startup
@@ -294,7 +294,7 @@ always-show-logo yes
 # Will save the DB if both the given number of seconds and the given
 # number of write operations against the DB occurred.
 #
-# In the example below the behaviour will be to save:
+# In the example below the behavior will be to save:
 # after 900 sec (15 min) if at least 1 key changed
 # after 300 sec (5 min) if at least 10 keys changed
 # after 60 sec if at least 10000 keys changed
@@ -327,7 +327,7 @@ save 60 10000
 stop-writes-on-bgsave-error yes

 # Compress string objects using LZF when dump .rdb databases?
-# For default that's set to 'yes' as it's almost always a win.
+# By default compression is enabled as it's almost always a win.
 # If you want to save some CPU in the saving child set it to 'no' but
 # the dataset will likely be bigger if you have compressible values or keys.
 rdbcompression yes
@@ -415,11 +415,11 @@ dir ./
 # still reply to client requests, possibly with out of date data, or the
 # data set may just be empty if this is the first synchronization.
 #
-# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
-# an error "SYNC with master in progress" to all the kind of commands
-# but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
-# SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
-# COMMAND, POST, HOST: and LATENCY.
+# 2) If replica-serve-stale-data is set to 'no' the replica will reply with
+# an error "SYNC with master in progress" to all commands except:
+# INFO, REPLICAOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,
+# UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,
+# HOST and LATENCY.
 #
 replica-serve-stale-data yes

@@ -504,7 +504,7 @@ repl-diskless-sync-delay 5
 #
 # Replica can load the RDB it reads from the replication link directly from the
 # socket, or store the RDB to a file and read that file after it was completely
-# recived from the master.
+# received from the master.
 #
 # In many cases the disk is slower than the network, and storing and loading
 # the RDB file may increase replication time (and even increase the master's
@@ -534,7 +534,8 @@ repl-diskless-load disabled
 #
 # It is important to make sure that this value is greater than the value
 # specified for repl-ping-replica-period otherwise a timeout will be detected
-# every time there is low traffic between the master and the replica.
+# every time there is low traffic between the master and the replica. The default
+# value is 60 seconds.
 #
 # repl-timeout 60

@@ -559,21 +560,21 @@ repl-disable-tcp-nodelay no
 # partial resync is enough, just passing the portion of data the replica
 # missed while disconnected.
 #
-# The bigger the replication backlog, the longer the time the replica can be
-# disconnected and later be able to perform a partial resynchronization.
+# The bigger the replication backlog, the longer the replica can endure the
+# disconnect and later be able to perform a partial resynchronization.
 #
-# The backlog is only allocated once there is at least a replica connected.
+# The backlog is only allocated if there is at least one replica connected.
 #
 # repl-backlog-size 1mb

-# After a master has no longer connected replicas for some time, the backlog
-# will be freed. The following option configures the amount of seconds that
-# need to elapse, starting from the time the last replica disconnected, for
-# the backlog buffer to be freed.
+# After a master has no connected replicas for some time, the backlog will be
+# freed. The following option configures the amount of seconds that need to
+# elapse, starting from the time the last replica disconnected, for the backlog
+# buffer to be freed.
 #
 # Note that replicas never free the backlog for timeout, since they may be
 # promoted to masters later, and should be able to correctly "partially
-# resynchronize" with the replicas: hence they should always accumulate backlog.
+# resynchronize" with other replicas: hence they should always accumulate backlog.
 #
 # A value of 0 means to never release the backlog.
 #
@@ -623,8 +624,8 @@ replica-priority 100
 # Another place where this info is available is in the output of the
 # "ROLE" command of a master.
 #
-# The listed IP and address normally reported by a replica is obtained
-# in the following way:
+# The listed IP address and port normally reported by a replica is
+# obtained in the following way:
 #
 # IP: The address is auto detected by checking the peer address
 # of the socket used by the replica to connect with the master.
@@ -634,7 +635,7 @@ replica-priority 100
 # listen for connections.
 #
 # However when port forwarding or Network Address Translation (NAT) is
-# used, the replica may be actually reachable via different IP and port
+# used, the replica may actually be reachable via different IP and port
 # pairs. The following two options can be used by a replica in order to
 # report to its master a specific set of IP and port, so that both INFO
 # and ROLE will report those values.
@@ -651,7 +652,7 @@ replica-priority 100
 # This is implemented using an invalidation table that remembers, using
 # 16 millions of slots, what clients may have certain subsets of keys. In turn
 # this is used in order to send invalidation messages to clients. Please
-# to understand more about the feature check this page:
+# check this page to understand more about the feature:
 #
 # https://redis.io/topics/client-side-caching
 #
@@ -683,7 +684,7 @@ replica-priority 100

 ################################## SECURITY ###################################

-# Warning: since KeyDB is pretty fast an outside user can try up to
+# Warning: since KeyDB is pretty fast, an outside user can try up to
 # 1 million passwords per second against a modern box. This means that you
 # should use very strong passwords, otherwise they will be very easy to break.
 # Note that because the password is really a shared secret between the client
@@ -707,7 +708,7 @@ replica-priority 100
 # AUTH (or the HELLO command AUTH option) in order to be authenticated and
 # start to work.
 #
-# The ACL rules that describe what an user can do are the following:
+# The ACL rules that describe what a user can do are the following:
 #
 # on Enable the user: it is possible to authenticate as this user.
 # off Disable the user: it's no longer possible to authenticate
@@ -735,7 +736,7 @@ replica-priority 100
 # It is possible to specify multiple patterns.
 # allkeys Alias for ~*
 # resetkeys Flush the list of allowed keys patterns.
-# ><password> Add this passowrd to the list of valid password for the user.
+# ><password> Add this password to the list of valid password for the user.
 # For example >mypass will add "mypass" to the list.
 # This directive clears the "nopass" flag (see later).
 # <<password> Remove this password from the list of valid passwords.
@@ -789,7 +790,7 @@ acllog-max-len 128
 #
 # Instead of configuring users here in this file, it is possible to use
 # a stand-alone file just listing users. The two methods cannot be mixed:
-# if you configure users here and at the same time you activate the exteranl
+# if you configure users here and at the same time you activate the external
 # ACL file, the server will refuse to start.
 #
 # The format of the external ACL user file is exactly the same as the
@@ -797,7 +798,7 @@ acllog-max-len 128
 #
 # aclfile /etc/keydb/users.acl

-# IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatiblity
+# IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatibility
 # layer on top of the new ACL system. The option effect will be just setting
 # the password for the default user. Clients will still authenticate using
 # AUTH <password> as usually, or more explicitly with AUTH default <password>
@@ -908,8 +909,8 @@ acllog-max-len 128

 # LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
 # algorithms (in order to save memory), so you can tune it for speed or
-# accuracy. For default KeyDB will check five keys and pick the one that was
-# used less recently, you can change the sample size using the following
+# accuracy. By default KeyDB will check five keys and pick the one that was
+# used least recently, you can change the sample size using the following
 # configuration directive.
 #
 # The default of 5 produces good enough results. 10 Approximates very closely
@@ -949,8 +950,8 @@ acllog-max-len 128
 # it is possible to increase the expire "effort" that is normally set to
 # "1", to a greater value, up to the value "10". At its maximum value the
 # system will use more CPU, longer cycles (and technically may introduce
-# more latency), and will tollerate less already expired keys still present
-# in the system. It's a tradeoff betweeen memory, CPU and latecy.
+# more latency), and will tolerate less already expired keys still present
+# in the system. It's a tradeoff between memory, CPU and latency.
 #
 # active-expire-effort 1

@@ -1036,7 +1037,7 @@ lazyfree-lazy-user-del no
 #
 # io-threads 4
 #
-# Setting io-threads to 1 will just use the main thread as usually.
+# Setting io-threads to 1 will just use the main thread as usual.
 # When I/O threads are enabled, we only use threads for writes, that is
 # to thread the write(2) syscall and transfer the client buffers to the
 # socket. However it is also possible to enable threading of reads and
@@ -1053,7 +1054,7 @@ lazyfree-lazy-user-del no
 #
 # NOTE 2: If you want to test the Redis speedup using redis-benchmark, make
 # sure you also run the benchmark itself in threaded mode, using the
-# --threads option to match the number of Redis theads, otherwise you'll not
+# --threads option to match the number of Redis threads, otherwise you'll not
 # be able to notice the improvements.

 ############################ KERNEL OOM CONTROL ##############################
@@ -1065,21 +1066,26 @@ lazyfree-lazy-user-del no
 # for all its processes, depending on their role. The default scores will
 # attempt to have background child processes killed before all others, and
 # replicas killed before masters.
+#
+# Redis supports three options:
+#
+# no: Don't make changes to oom-score-adj (default).
+# yes: Alias to "relative" see below.
+# absolute: Values in oom-score-adj-values are written as is to the kernel.
+# relative: Values are used relative to the initial value of oom_score_adj when
+# the server starts and are then clamped to a range of -1000 to 1000.
+# Because typically the initial value is 0, they will often match the
+# absolute values.
 oom-score-adj no

 # When oom-score-adj is used, this directive controls the specific values used
-# for master, replica and background child processes. Values range -1000 to
-# 1000 (higher means more likely to be killed).
+# for master, replica and background child processes. Values range -2000 to
+# 2000 (higher means more likely to be killed).
 #
 # Unprivileged processes (not root, and without CAP_SYS_RESOURCE capabilities)
 # can freely increase their value, but not decrease it below its initial
-# settings.
-#
-# Values are used relative to the initial value of oom_score_adj when the server
-# starts. Because typically the initial value is 0, they will often match the
-# absolute values.
+# settings. This means that setting oom-score-adj to "relative" and setting the
+# oom-score-adj-values to positive values will always succeed.

 oom-score-adj-values 0 200 800

 ############################## APPEND ONLY MODE ###############################
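A quick worked example of the oom-score-adj modes described in the hunk above, assuming the shipped defaults (oom-score-adj relative, oom-score-adj-values 0 200 800), an initial oom_score_adj of 0, and the keydb-server binary name used elsewhere in this merge:

    # in "relative" mode the three values are added to the starting oom_score_adj (0 here)
    # and clamped to -1000..1000, so the kernel ends up seeing 0 for the master process,
    # 200 for replicas and 800 for background child processes; with "absolute" the same
    # three numbers are written to /proc/<pid>/oom_score_adj as-is.
    cat /proc/$(pidof keydb-server)/oom_score_adj    # expected to print 0 for the master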
@ -1206,8 +1212,8 @@ aof-load-truncated yes
|
|||||||
#
|
#
|
||||||
# [RDB file][AOF tail]
|
# [RDB file][AOF tail]
|
||||||
#
|
#
|
||||||
# When loading KeyDB recognizes that the AOF file starts with the "REDIS"
|
# When loading, KeyDB recognizes that the AOF file starts with the "REDIS"
|
||||||
# string and loads the prefixed RDB file, and continues loading the AOF
|
# string and loads the prefixed RDB file, then continues loading the AOF
|
||||||
# tail.
|
# tail.
|
||||||
aof-use-rdb-preamble yes
|
aof-use-rdb-preamble yes
|
||||||
|
|
||||||
@@ -1221,7 +1227,7 @@ aof-use-rdb-preamble yes
 #
 # When a long running script exceeds the maximum execution time only the
 # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
-# used to stop a script that did not yet called write commands. The second
+# used to stop a script that did not yet call any write commands. The second
 # is the only way to shut down the server in the case a write command was
 # already issued by the script but the user doesn't want to wait for the natural
 # termination of the script.
@@ -1247,7 +1253,7 @@ lua-time-limit 5000

 # Cluster node timeout is the amount of milliseconds a node must be unreachable
 # for it to be considered in failure state.
-# Most other internal time limits are multiple of the node timeout.
+# Most other internal time limits are a multiple of the node timeout.
 #
 # cluster-node-timeout 15000

@@ -1274,18 +1280,18 @@ lua-time-limit 5000
 # the failover if, since the last interaction with the master, the time
 # elapsed is greater than:
 #
-# (node-timeout * replica-validity-factor) + repl-ping-replica-period
+# (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period
 #
-# So for example if node-timeout is 30 seconds, and the replica-validity-factor
+# So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor
 # is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
 # replica will not try to failover if it was not able to talk with the master
 # for longer than 310 seconds.
 #
-# A large replica-validity-factor may allow replicas with too old data to failover
+# A large cluster-replica-validity-factor may allow replicas with too old data to failover
 # a master, while a too small value may prevent the cluster from being able to
 # elect a replica at all.
 #
-# For maximum availability, it is possible to set the replica-validity-factor
+# For maximum availability, it is possible to set the cluster-replica-validity-factor
 # to a value of 0, which means, that replicas will always try to failover the
 # master regardless of the last time they interacted with the master.
 # (However they'll always try to apply a delay proportional to their
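Plugging the example above into that formula: 30 s x 10 + 10 s = 310 s, which is where the 310 seconds comes from. A tiny illustration of the same computation (hypothetical helper; the real options are expressed in milliseconds):

    /* Maximum age of a replica's data before it refuses to start a failover.
     * A factor of 0 removes the limit entirely. */
    static long long max_data_age_ms(long long node_timeout_ms,
                                     int validity_factor,
                                     long long ping_period_ms) {
        if (validity_factor == 0) return -1; /* always eligible to failover */
        return node_timeout_ms * validity_factor + ping_period_ms;
    }

    /* max_data_age_ms(30000, 10, 10000) == 310000, i.e. 310 seconds. */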
@@ -1316,7 +1322,7 @@ lua-time-limit 5000
 # cluster-migration-barrier 1

 # By default KeyDB Cluster nodes stop accepting queries if they detect there
-# is at least an hash slot uncovered (no available node is serving it).
+# is at least a hash slot uncovered (no available node is serving it).
 # This way if the cluster is partially down (for example a range of hash slots
 # are no longer covered) all the cluster becomes, eventually, unavailable.
 # It automatically returns available as soon as all the slots are covered again.
@@ -1371,7 +1377,7 @@ lua-time-limit 5000
 # * cluster-announce-port
 # * cluster-announce-bus-port
 #
-# Each instruct the node about its address, client port, and cluster message
+# Each instructs the node about its address, client port, and cluster message
 # bus port. The information is then published in the header of the bus packets
 # so that other nodes will be able to correctly map the address of the node
 # publishing the information.
@@ -1382,7 +1388,7 @@ lua-time-limit 5000
 # Note that when remapped, the bus port may not be at the fixed offset of
 # clients port + 10000, so you can specify any port and bus-port depending
 # on how they get remapped. If the bus-port is not set, a fixed offset of
-# 10000 will be used as usually.
+# 10000 will be used as usual.
 #
 # Example:
 #
@@ -1511,7 +1517,7 @@ notify-keyspace-events ""
 # two kind of inline requests that were anyway illegal: an empty request
 # or any request that starts with "/" (there are no KeyDB commands starting
 # with such a slash). Normal RESP2/RESP3 requests are completely out of the
-# path of the Gopher protocol implementation and are served as usually as well.
+# path of the Gopher protocol implementation and are served as usual as well.
 #
 # If you open a connection to KeyDB when Gopher is enabled and send it
 # a string like "/foo", if there is a key named "/foo" it is served via the
@@ -1535,8 +1541,11 @@ notify-keyspace-events ""
 #
 # So use the 'requirepass' option to protect your instance.
 #
-# To enable Gopher support uncomment the following line and set
-# the option from no (the default) to yes.
+# Note that Gopher is not currently supported when 'io-threads-do-reads'
+# is enabled.
+#
+# To enable Gopher support, uncomment the following line and set the option
+# from no (the default) to yes.
 #
 # gopher-enabled no

@@ -1683,7 +1692,7 @@ client-output-buffer-limit pubsub 32mb 8mb 60
 # client-query-buffer-limit 1gb

 # In the KeyDB protocol, bulk requests, that are, elements representing single
-# strings, are normally limited ot 512 mb. However you can change this limit
+# strings, are normally limited to 512 mb. However you can change this limit
 # here, but must be 1mb or greater
 #
 # proto-max-bulk-len 512mb
@@ -1712,7 +1721,7 @@ hz 10
 #
 # Since the default HZ value by default is conservatively set to 10, KeyDB
 # offers, and enables by default, the ability to use an adaptive HZ value
-# which will temporary raise when there are many connected clients.
+# which will temporarily raise when there are many connected clients.
 #
 # When dynamic HZ is enabled, the actual configured HZ will be used
 # as a baseline, but multiples of the configured HZ value will be actually
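The adaptive behaviour described here is essentially "scale the baseline hz up with the number of connected clients, within a hard cap", so that each cron tick handles a bounded amount of work. The snippet below only illustrates that idea; the constants and the exact scaling rule are assumptions, not the server's actual code:

    /* Illustrative dynamic HZ: double the configured baseline until each
     * cron tick serves a bounded number of clients, up to a maximum. */
    static int effective_hz(int config_hz, long connected_clients) {
        const int max_hz = 500;            /* assumed upper bound */
        const long clients_per_tick = 200; /* assumed work budget per tick */
        int hz = config_hz;
        while (hz < max_hz && connected_clients / hz > clients_per_tick)
            hz *= 2;
        return hz > max_hz ? max_hz : hz;
    }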
@@ -1779,7 +1788,7 @@ rdb-save-incremental-fsync yes
 # for the key counter to be divided by two (or decremented if it has a value
 # less <= 10).
 #
-# The default value for the lfu-decay-time is 1. A Special value of 0 means to
+# The default value for the lfu-decay-time is 1. A special value of 0 means to
 # decay the counter every time it happens to be scanned.
 #
 # lfu-log-factor 10
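As a rough model of the decay rule described above: for every whole lfu-decay-time period that elapsed since the key was touched, a large counter is halved, while a counter that has already fallen to 10 or below is merely decremented. A hedged sketch of that behaviour, not the server's exact function:

    /* Decay an LFU counter given how many whole decay periods have elapsed
     * since the key was last accessed. */
    static unsigned int lfu_decay(unsigned int counter, unsigned long periods) {
        while (periods-- && counter) {
            if (counter > 10) counter /= 2;  /* halve while the counter is large */
            else counter--;                  /* decrement once it is <= 10 */
        }
        return counter;
    }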
@@ -1799,7 +1808,7 @@ rdb-save-incremental-fsync yes
 # restart is needed in order to lower the fragmentation, or at least to flush
 # away all the data and create it again. However thanks to this feature
 # implemented by Oran Agra for Redis 4.0 this process can happen at runtime
-# in an "hot" way, while the server is running.
+# in a "hot" way, while the server is running.
 #
 # Basically when the fragmentation is over a certain level (see the
 # configuration options below) KeyDB will start to create new copies of the
@@ -1877,6 +1886,13 @@ jemalloc-bg-thread yes
 # Set bgsave child process to cpu affinity 1,10,11
 # bgsave_cpulist 1,10-11

+# In some cases KeyDB will emit warnings and even refuse to start if it detects
+# that the system is in bad state, it is possible to suppress these warnings
+# by setting the following config which takes a space delimited list of warnings
+# to suppress
+#
+# ignore-warnings ARM64-COW-BUG
+
 # The minimum number of clients on a thread before KeyDB assigns new connections to a different thread
 # Tuning this parameter is a tradeoff between locking overhead and distributing the workload over multiple cores
 # min-clients-per-thread 50
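One way to picture the tradeoff behind min-clients-per-thread: a new connection stays on the accepting thread until that thread already has the configured minimum number of clients, and only then is it handed to the least-loaded thread. The sketch below is purely illustrative of that policy; the function and its parameters are hypothetical and not KeyDB's actual assignment code:

    /* Illustrative thread selection for a new connection. */
    static int pick_thread(const int *clients_per_thread, int nthreads,
                           int current, int min_clients) {
        if (clients_per_thread[current] < min_clients)
            return current;                 /* cheap: no cross-thread handoff */
        int best = 0;
        for (int i = 1; i < nthreads; i++)  /* otherwise spread the load */
            if (clients_per_thread[i] < clients_per_thread[best]) best = i;
        return best;
    }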
@@ -28,4 +28,5 @@ $TCLSH tests/test_helper.tcl \
 --single unit/moduleapi/keyspace_events \
 --single unit/moduleapi/blockedclient \
 --single unit/moduleapi/moduleloadsave \
+--single unit/moduleapi/getkeys \
 "${@}"

@@ -259,6 +259,6 @@ sentinel deny-scripts-reconfig yes
 # SENTINEL SET can also be used in order to perform this configuration at runtime.
 #
 # In order to set a command back to its original name (undo the renaming), it
-# is possible to just rename a command to itsef:
+# is possible to just rename a command to itself:
 #
 # SENTINEL rename-command mymaster CONFIG CONFIG

31 src/Makefile
@@ -152,12 +152,21 @@ ifeq ($(uname_S),OpenBSD)
 endif

 else
+ifeq ($(uname_S),NetBSD)
+# NetBSD
+FINAL_LIBS+= -lpthread
+ifeq ($(USE_BACKTRACE),yes)
+FINAL_CFLAGS+= -DUSE_BACKTRACE -I/usr/pkg/include
+FINAL_LDFLAGS+= -L/usr/pkg/lib
+FINAL_LIBS+= -lexecinfo
+endif
+else
 ifeq ($(uname_S),FreeBSD)
 # FreeBSD
 FINAL_LIBS+= -lpthread -lexecinfo
 else
 ifeq ($(uname_S),DragonFly)
-# FreeBSD
+# DragonFly
 FINAL_LIBS+= -lpthread -lexecinfo
 else
 ifeq ($(uname_S),OpenBSD)
@@ -167,6 +176,12 @@ else
 ifeq ($(uname_S),NetBSD)
 # NetBSD
 FINAL_LIBS+= -lpthread -lexecinfo
+else
+ifeq ($(uname_S),Haiku)
+# Haiku
+FINAL_CFLAGS+= -DBSD_SOURCE
+FINAL_LDFLAGS+= -lbsd -lnetwork
+FINAL_LIBS+= -lpthread
 else
 # All the other OSes (notably Linux)
 FINAL_LDFLAGS+= -rdynamic
@@ -184,6 +199,8 @@ endif
 endif
 endif
 endif
+endif
+endif
 # Include paths to dependencies
 FINAL_CFLAGS+= -I../deps/hiredis -I../deps/linenoise -I../deps/lua/src
 FINAL_CXXFLAGS+= -I../deps/hiredis -I../deps/linenoise -I../deps/lua/src
@@ -277,15 +294,15 @@ QUIET_LINK = @printf ' %b %b\n' $(LINKCOLOR)LINK$(ENDCOLOR) $(BINCOLOR)$@$(EN
 QUIET_INSTALL = @printf ' %b %b\n' $(LINKCOLOR)INSTALL$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR);
 endif

-REDIS_SERVER_NAME=keydb-server
-REDIS_SENTINEL_NAME=keydb-sentinel
+REDIS_SERVER_NAME=keydb-server$(PROG_SUFFIX)
+REDIS_SENTINEL_NAME=keydb-sentinel$(PROG_SUFFIX)
 REDIS_SERVER_OBJ=adlist.o quicklist.o ae.o anet.o dict.o server.o sds.o zmalloc.o lzf_c.o lzf_d.o pqsort.o zipmap.o sha1.o ziplist.o release.o networking.o util.o object.o db.o replication.o rdb.o t_string.o t_list.o t_set.o t_zset.o t_hash.o t_nhash.o config.o aof.o pubsub.o multi.o debug.o sort.o intset.o syncio.o cluster.o crc16.o endianconv.o slowlog.o scripting.o bio.o rio.o rand.o memtest.o crcspeed.o crc64.o bitops.o sentinel.o notify.o setproctitle.o blocked.o hyperloglog.o latency.o sparkline.o redis-check-rdb.o redis-check-aof.o geo.o lazyfree.o module.o evict.o expire.o geohash.o geohash_helper.o childinfo.o defrag.o siphash.o rax.o t_stream.o listpack.o localtime.o acl.o storage.o rdb-s3.o fastlock.o new.o tracking.o cron.o connection.o tls.o sha256.o motd.o timeout.o setcpuaffinity.o $(ASM_OBJ)
-REDIS_CLI_NAME=keydb-cli
+REDIS_CLI_NAME=keydb-cli$(PROG_SUFFIX)
 REDIS_CLI_OBJ=anet.o adlist.o dict.o redis-cli.o redis-cli-cpphelper.o zmalloc.o release.o anet.o ae.o crcspeed.o crc64.o siphash.o crc16.o storage-lite.o fastlock.o new.o motd.o $(ASM_OBJ)
-REDIS_BENCHMARK_NAME=keydb-benchmark
+REDIS_BENCHMARK_NAME=keydb-benchmark$(PROG_SUFFIX)
 REDIS_BENCHMARK_OBJ=ae.o anet.o redis-benchmark.o adlist.o dict.o zmalloc.o siphash.o redis-benchmark.o storage-lite.o fastlock.o new.o $(ASM_OBJ)
-REDIS_CHECK_RDB_NAME=keydb-check-rdb
-REDIS_CHECK_AOF_NAME=keydb-check-aof
+REDIS_CHECK_RDB_NAME=keydb-check-rdb$(PROG_SUFFIX)
+REDIS_CHECK_AOF_NAME=keydb-check-aof$(PROG_SUFFIX)

 all: $(REDIS_SERVER_NAME) $(REDIS_SENTINEL_NAME) $(REDIS_CLI_NAME) $(REDIS_BENCHMARK_NAME) $(REDIS_CHECK_RDB_NAME) $(REDIS_CHECK_AOF_NAME)
 @echo ""
80 src/acl.cpp
@@ -300,9 +300,15 @@ void ACLFreeUserAndKillClients(user *u) {
 * it in non authenticated mode. */
 c->puser = DefaultUser;
 c->authenticated = 0;
+/* We will write replies to this client later, so we can't
+ * close it directly even if async. */
+if (c == serverTL->current_client) {
+c->flags |= CLIENT_CLOSE_AFTER_COMMAND;
+} else {
 freeClientAsync(c);
 }
 }
+}
 ACLFreeUser(u);
 }

@@ -472,21 +478,68 @@ sds ACLDescribeUserCommandRules(user *u) {
 ACLSetUser(fakeuser,"-@all",-1);
 }

-/* Try to add or subtract each category one after the other. Often a
- * single category will not perfectly match the set of commands into
- * it, so at the end we do a final pass adding/removing the single commands
- * needed to make the bitmap exactly match. */
+/* Attempt to find a good approximation for categories and commands
+ * based on the current bits used, by looping over the category list
+ * and applying the best fit each time. Often a set of categories will not
+ * perfectly match the set of commands into it, so at the end we do a
+ * final pass adding/removing the single commands needed to make the bitmap
+ * exactly match. A temp user is maintained to keep track of categories
+ * already applied. */
+user tu = {0};
+user *tempuser = &tu;
+
+/* Keep track of the categories that have been applied, to prevent
+ * applying them twice. */
+char applied[sizeof(ACLCommandCategories)/sizeof(ACLCommandCategories[0])];
+memset(applied, 0, sizeof(applied));
+
+memcpy(tempuser->allowed_commands,
+u->allowed_commands,
+sizeof(u->allowed_commands));
+while (1) {
+int best = -1;
+unsigned long mindiff = INT_MAX, maxsame = 0;
 for (int j = 0; ACLCommandCategories[j].flag != 0; j++) {
-unsigned long on, off;
-ACLCountCategoryBitsForUser(u,&on,&off,ACLCommandCategories[j].name);
-if ((additive && on > off) || (!additive && off > on)) {
+if (applied[j]) continue;
+
+unsigned long on, off, diff, same;
+ACLCountCategoryBitsForUser(tempuser,&on,&off,ACLCommandCategories[j].name);
+/* Check if the current category is the best this loop:
+ * * It has more commands in common with the user than commands
+ * that are different.
+ * AND EITHER
+ * * It has the fewest number of differences
+ * than the best match we have found so far.
+ * * OR it matches the fewest number of differences
+ * that we've seen but it has more in common. */
+diff = additive ? off : on;
+same = additive ? on : off;
+if (same > diff &&
+((diff < mindiff) || (diff == mindiff && same > maxsame)))
+{
+best = j;
+mindiff = diff;
+maxsame = same;
+}
+}
+
+/* We didn't find a match */
+if (best == -1) break;
+
 sds op = sdsnewlen(additive ? "+@" : "-@", 2);
-op = sdscat(op,ACLCommandCategories[j].name);
+op = sdscat(op,ACLCommandCategories[best].name);
 ACLSetUser(fakeuser,op,-1);
+
+sds invop = sdsnewlen(additive ? "-@" : "+@", 2);
+invop = sdscat(invop,ACLCommandCategories[best].name);
+ACLSetUser(tempuser,invop,-1);
+
 rules = sdscatsds(rules,op);
 rules = sdscatlen(rules," ",1);
 sdsfree(op);
-}
+sdsfree(invop);
+
+applied[best] = 1;
 }

 /* Fix the final ACLs with single commands differences. */
@@ -1099,8 +1152,9 @@ int ACLCheckCommandPerm(client *c, int *keyidxptr) {
 if (!(c->puser->flags & USER_FLAG_ALLKEYS) &&
 (c->cmd->getkeys_proc || c->cmd->firstkey))
 {
-int numkeys;
-int *keyidx = getKeysFromCommand(c->cmd,c->argv,c->argc,&numkeys);
+getKeysResult result = GETKEYS_RESULT_INIT;
+int numkeys = getKeysFromCommand(c->cmd,c->argv,c->argc,&result);
+int *keyidx = result.keys;
 for (int j = 0; j < numkeys; j++) {
 listIter li;
 listNode *ln;
@@ -1121,11 +1175,11 @@ int ACLCheckCommandPerm(client *c, int *keyidxptr) {
 }
 if (!match) {
 if (keyidxptr) *keyidxptr = keyidx[j];
-getKeysFreeResult(keyidx);
+getKeysFreeResult(&result);
 return ACL_DENIED_KEY;
 }
 }
-getKeysFreeResult(keyidx);
+getKeysFreeResult(&result);
 }

 /* If we survived all the above checks, the user can execute the
@@ -34,8 +34,9 @@
 #include "zmalloc.h"

 /* Create a new list. The created list can be freed with
-* AlFreeList(), but private value of every node need to be freed
-* by the user before to call AlFreeList().
+* listRelease(), but private value of every node need to be freed
+* by the user before to call listRelease(), or by setting a free method using
+* listSetFreeMethod.
 *
 * On error, NULL is returned. Otherwise the pointer to the new list. */
 list *listCreate(void)
@@ -217,8 +218,8 @@ void listRewindTail(list *list, listIter *li) {
 * listDelNode(), but not to remove other elements.
 *
 * The function returns a pointer to the next element of the list,
-* or NULL if there are no more elements, so the classical usage patter
-* is:
+* or NULL if there are no more elements, so the classical usage
+* pattern is:
 *
 * iter = listGetIterator(list,<direction>);
 * while ((node = listNext(iter)) != NULL) {

@@ -232,7 +232,7 @@ static void aeApiDelEvent(aeEventLoop *eventLoop, int fd, int mask) {
 /*
 * ENOMEM is a potentially transient condition, but the kernel won't
 * generally return it unless things are really bad. EAGAIN indicates
-* we've reached an resource limit, for which it doesn't make sense to
+* we've reached a resource limit, for which it doesn't make sense to
 * retry (counter-intuitively). All other errors indicate a bug. In any
 * of these cases, the best we can do is to abort.
 */
84 src/aof.cpp
@@ -566,7 +566,7 @@ sds catAppendOnlyGenericCommand(sds dst, int argc, robj **argv) {
 return dst;
 }

-/* Create the sds representation of an PEXPIREAT command, using
+/* Create the sds representation of a PEXPIREAT command, using
 * 'seconds' as time to live and 'cmd' to understand what command
 * we are translating into a PEXPIREAT.
 *
@@ -752,6 +752,7 @@ struct client *createAOFClient(void) {
 c->querybuf_peak = 0;
 c->argc = 0;
 c->argv = NULL;
+c->argv_len_sum = 0;
 c->bufpos = 0;
 c->flags = 0;
 c->fPendingAsyncWrite = FALSE;
@@ -781,6 +782,7 @@ void freeFakeClientArgv(struct client *c) {
 for (j = 0; j < c->argc; j++)
 decrRefCount(c->argv[j]);
 zfree(c->argv);
+c->argv_len_sum = 0;
 }

 void freeFakeClient(struct client *c) {
@@ -1159,7 +1161,7 @@ int rewriteSortedSetObject(rio *r, robj *key, robj *o) {
 }
 } else if (o->encoding == OBJ_ENCODING_SKIPLIST) {
 zset *zs = (zset*)ptrFromObj(o);
-dictIterator *di = dictGetIterator(zs->pdict);
+dictIterator *di = dictGetIterator(zs->dict);
 dictEntry *de;

 while((de = dictNext(di)) != NULL) {
@@ -1292,16 +1294,24 @@ int rewriteStreamObject(rio *r, robj *key, robj *o) {
 * the ID, the second is an array of field-value pairs. */

 /* Emit the XADD <key> <id> ...fields... command. */
-if (rioWriteBulkCount(r,'*',3+numfields*2) == 0) return 0;
-if (rioWriteBulkString(r,"XADD",4) == 0) return 0;
-if (rioWriteBulkObject(r,key) == 0) return 0;
-if (rioWriteBulkStreamID(r,&id) == 0) return 0;
+if (!rioWriteBulkCount(r,'*',3+numfields*2) ||
+!rioWriteBulkString(r,"XADD",4) ||
+!rioWriteBulkObject(r,key) ||
+!rioWriteBulkStreamID(r,&id))
+{
+streamIteratorStop(&si);
+return 0;
+}
 while(numfields--) {
 unsigned char *field, *value;
 int64_t field_len, value_len;
 streamIteratorGetField(&si,&field,&value,&field_len,&value_len);
-if (rioWriteBulkString(r,(char*)field,field_len) == 0) return 0;
-if (rioWriteBulkString(r,(char*)value,value_len) == 0) return 0;
+if (!rioWriteBulkString(r,(char*)field,field_len) ||
+!rioWriteBulkString(r,(char*)value,value_len))
+{
+streamIteratorStop(&si);
+return 0;
+}
 }
 }
 } else {
@@ -1309,22 +1319,30 @@ int rewriteStreamObject(rio *r, robj *key, robj *o) {
 * the key we are serializing is an empty string, which is possible
 * for the Stream type. */
 id.ms = 0; id.seq = 1;
-if (rioWriteBulkCount(r,'*',7) == 0) return 0;
-if (rioWriteBulkString(r,"XADD",4) == 0) return 0;
-if (rioWriteBulkObject(r,key) == 0) return 0;
-if (rioWriteBulkString(r,"MAXLEN",6) == 0) return 0;
-if (rioWriteBulkString(r,"0",1) == 0) return 0;
-if (rioWriteBulkStreamID(r,&id) == 0) return 0;
-if (rioWriteBulkString(r,"x",1) == 0) return 0;
-if (rioWriteBulkString(r,"y",1) == 0) return 0;
+if (!rioWriteBulkCount(r,'*',7) ||
+!rioWriteBulkString(r,"XADD",4) ||
+!rioWriteBulkObject(r,key) ||
+!rioWriteBulkString(r,"MAXLEN",6) ||
+!rioWriteBulkString(r,"0",1) ||
+!rioWriteBulkStreamID(r,&id) ||
+!rioWriteBulkString(r,"x",1) ||
+!rioWriteBulkString(r,"y",1))
+{
+streamIteratorStop(&si);
+return 0;
+}
 }

 /* Append XSETID after XADD, make sure lastid is correct,
 * in case of XDEL lastid. */
-if (rioWriteBulkCount(r,'*',3) == 0) return 0;
-if (rioWriteBulkString(r,"XSETID",6) == 0) return 0;
-if (rioWriteBulkObject(r,key) == 0) return 0;
-if (rioWriteBulkStreamID(r,&s->last_id) == 0) return 0;
+if (!rioWriteBulkCount(r,'*',3) ||
+!rioWriteBulkString(r,"XSETID",6) ||
+!rioWriteBulkObject(r,key) ||
+!rioWriteBulkStreamID(r,&s->last_id))
+{
+streamIteratorStop(&si);
+return 0;
+}

 /* Create all the stream consumer groups. */
@@ -1343,6 +1361,7 @@ int rewriteStreamObject(rio *r, robj *key, robj *o) {
 !rioWriteBulkStreamID(r,&group->last_id))
 {
 raxStop(&ri);
+streamIteratorStop(&si);
 return 0;
 }

@@ -1368,6 +1387,7 @@ int rewriteStreamObject(rio *r, robj *key, robj *o) {
 raxStop(&ri_pel);
 raxStop(&ri_cons);
 raxStop(&ri);
+streamIteratorStop(&si);
 return 0;
 }
 }
@@ -1422,7 +1442,7 @@ int rewriteAppendOnlyFileRio(rio *aof) {
 for (j = 0; j < cserver.dbnum; j++) {
 char selectcmd[] = "*2\r\n$6\r\nSELECT\r\n";
 redisDb *db = g_pserver->db+j;
-dict *d = db->pdict;
+dict *d = db->dict;
 if (dictSize(d) == 0) continue;
 di = dictGetSafeIterator(d);

@@ -1509,7 +1529,7 @@ werr:
 * are inserted using a single command. */
 int rewriteAppendOnlyFile(char *filename) {
 rio aof;
-FILE *fp;
+FILE *fp = NULL;
 char tmpfile[256];
 char byte;
 int nodata = 0;
@@ -1587,9 +1607,10 @@ int rewriteAppendOnlyFile(char *filename) {
 goto werr;

 /* Make sure data will not remain on the OS's output buffers */
-if (fflush(fp) == EOF) goto werr;
-if (fsync(fileno(fp)) == -1) goto werr;
-if (fclose(fp) == EOF) goto werr;
+if (fflush(fp)) goto werr;
+if (fsync(fileno(fp))) goto werr;
+if (fclose(fp)) { fp = NULL; goto werr; }
+fp = NULL;

 /* Use RENAME to make sure the DB file is changed atomically only
 * if the generate DB file is ok. */
@@ -1605,7 +1626,7 @@ int rewriteAppendOnlyFile(char *filename) {

 werr:
 serverLog(LL_WARNING,"Write error writing append only file on disk: %s", strerror(errno));
-fclose(fp);
+if (fp) fclose(fp);
 unlink(tmpfile);
 stopSaving(0);
 return C_ERR;
@@ -1719,7 +1740,7 @@ int rewriteAppendOnlyFileBackground(void) {
 if (hasActiveChildProcess()) return C_ERR;
 if (aofCreatePipes() != C_OK) return C_ERR;
 openChildInfoPipe();
-if ((childpid = redisFork()) == 0) {
+if ((childpid = redisFork(CHILD_TYPE_AOF)) == 0) {
 char tmpfile[256];

 /* Child */
@@ -1727,7 +1748,7 @@ int rewriteAppendOnlyFileBackground(void) {
 redisSetCpuAffinity(g_pserver->aof_rewrite_cpulist);
 snprintf(tmpfile,256,"temp-rewriteaof-bg-%d.aof", (int) getpid());
 if (rewriteAppendOnlyFile(tmpfile) == C_OK) {
-sendChildCOWInfo(CHILD_INFO_TYPE_AOF, "AOF rewrite");
+sendChildCOWInfo(CHILD_TYPE_AOF, "AOF rewrite");
 exitFromChild(0);
 } else {
 exitFromChild(1);
@@ -1747,6 +1768,7 @@ int rewriteAppendOnlyFileBackground(void) {
 g_pserver->aof_rewrite_scheduled = 0;
 g_pserver->aof_rewrite_time_start = time(NULL);
 g_pserver->aof_child_pid = childpid;
+updateDictResizePolicy();
 /* We set appendseldb to -1 in order to force the next call to the
 * feedAppendOnlyFile() to issue a SELECT command, so the differences
 * accumulated by the parent into g_pserver->aof_rewrite_buf will start
@@ -1776,10 +1798,10 @@ void aofRemoveTempFile(pid_t childpid) {
 char tmpfile[256];

 snprintf(tmpfile,256,"temp-rewriteaof-bg-%d.aof", (int) childpid);
-unlink(tmpfile);
+bg_unlink(tmpfile);

 snprintf(tmpfile,256,"temp-rewriteaof-%d.aof", (int) childpid);
-unlink(tmpfile);
+bg_unlink(tmpfile);
 }

 /* Update the g_pserver->aof_current_size field explicitly using stat(2)
@@ -1934,7 +1956,7 @@ void backgroundRewriteDoneHandler(int exitcode, int bysignal) {
 "Background AOF rewrite terminated with error");
 } else {
 /* SIGUSR1 is whitelisted, so we have a way to kill a child without
-* tirggering an error condition. */
+* triggering an error condition. */
 if (bysignal != SIGUSR1)
 g_pserver->aof_lastbgrewrite_status = C_ERR;

@@ -21,7 +21,7 @@
 *
 * Never use return value from the macros, instead use the AtomicGetIncr()
 * if you need to get the current value and increment it atomically, like
-* in the followign example:
+* in the following example:
 *
 * long oldvalue;
 * atomicGetIncr(myvar,oldvalue,1);
10 src/bio.cpp
@@ -168,10 +168,7 @@ void *bioProcessBackgroundJobs(void *arg) {

 redisSetCpuAffinity(g_pserver->bio_cpulist);

-/* Make the thread killable at any time, so that bioKillThreads()
- * can work reliably. */
-pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
-pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, NULL);
+makeThreadKillable();

 pthread_mutex_lock(&bio_mutex[type]);
 /* Block SIGALRM so we are sure that only the main thread will
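The makeThreadKillable() call that replaces the inline pthread calls above most likely just wraps the same two settings in a reusable helper so every background thread can opt in the same way; a minimal sketch of such a helper:

    #include <pthread.h>

    /* Allow the calling thread to be cancelled at any point, so that
     * bioKillThreads() can reliably stop it with pthread_cancel(). */
    void makeThreadKillable(void) {
        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
        pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, NULL);
    }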
@@ -206,7 +203,7 @@ void *bioProcessBackgroundJobs(void *arg) {
 /* What we free changes depending on what arguments are set:
 * arg1 -> free the object at pointer.
 * arg2 & arg3 -> free two dictionaries (a Redis DB).
-* only arg3 -> free the skiplist. */
+* only arg3 -> free the radix tree. */
 if (job->arg1)
 lazyfreeFreeObjectFromBioThread((robj*)job->arg1);
 else if (job->arg2 && job->arg3)
@@ -268,10 +265,11 @@ void bioKillThreads(void) {
 int err, j;

 for (j = 0; j < BIO_NUM_OPS; j++) {
+if (bio_threads[j] == pthread_self()) continue;
 if (bio_threads[j] && pthread_cancel(bio_threads[j]) == 0) {
 if ((err = pthread_join(bio_threads[j],NULL)) != 0) {
 serverLog(LL_WARNING,
-"Bio thread for job type #%d can be joined: %s",
+"Bio thread for job type #%d can not be joined: %s",
 j, strerror(err));
 } else {
 serverLog(LL_WARNING,

@@ -36,7 +36,7 @@

 /* Count number of bits set in the binary array pointed by 's' and long
 * 'count' bytes. The implementation of this function is required to
-* work with a input string length up to 512 MB. */
+* work with an input string length up to 512 MB. */
 size_t redisPopcount(const void *s, long count) {
 size_t bits = 0;
 unsigned char *p = (unsigned char*)s;
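The contract in the comment above (arbitrary byte buffer, up to 512 MB) is commonly met by counting bits a machine word at a time and finishing the unaligned tail byte by byte. The sketch below shows that general technique, not KeyDB's exact implementation (which also has a bit-parallel fallback for compilers without the builtin):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Count set bits in a buffer: whole 64-bit words first, then the tail.
     * Requires GCC/Clang for __builtin_popcountll. */
    size_t popcount_buffer(const void *s, size_t count) {
        const unsigned char *p = s;
        size_t bits = 0;
        while (count >= 8) {
            uint64_t w;
            memcpy(&w, p, 8);               /* avoid unaligned loads */
            bits += (size_t)__builtin_popcountll(w);
            p += 8;
            count -= 8;
        }
        while (count--) bits += (size_t)__builtin_popcountll(*p++);
        return bits;
    }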
@@ -107,7 +107,7 @@ long redisBitpos(const void *s, unsigned long count, int bit) {
 int found;

 /* Process whole words first, seeking for first word that is not
-* all ones or all zeros respectively if we are lookig for zeros
+* all ones or all zeros respectively if we are looking for zeros
 * or ones. This is much faster with large strings having contiguous
 * blocks of 1 or 0 bits compared to the vanilla bit per bit processing.
 *
@@ -498,7 +498,7 @@ robj *lookupStringForBitCommand(client *c, size_t maxbit) {
 * in 'len'. The user is required to pass (likely stack allocated) buffer
 * 'llbuf' of at least LONG_STR_SIZE bytes. Such a buffer is used in the case
 * the object is integer encoded in order to provide the representation
-* without usign heap allocation.
+* without using heap allocation.
 *
 * The function returns the pointer to the object array of bytes representing
 * the string it contains, that may be a pointer to 'llbuf' or to the

@@ -53,7 +53,7 @@
 * to 0, no timeout is processed).
 * It usually just needs to send a reply to the client.
 *
-* When implementing a new type of blocking opeation, the implementation
+* When implementing a new type of blocking operation, the implementation
 * should modify unblockClient() and replyToBlockedClientTimedOut() in order
 * to handle the btype-specific behavior of this two functions.
 * If the blocking operation waits for certain keys to change state, the
@@ -128,7 +128,7 @@ void processUnblockedClients(int iel) {

 /* This function will schedule the client for reprocessing at a safe time.
 *
-* This is useful when a client was blocked for some reason (blocking opeation,
+* This is useful when a client was blocked for some reason (blocking operation,
 * CLIENT PAUSE, or whatever), because it may end with some accumulated query
 * buffer that needs to be processed ASAP:
 *
@@ -522,7 +522,7 @@ void handleClientsBlockedOnKeys(void) {
 serverTL->fixed_time_expire++;
 updateCachedTime(0);

-/* Serve clients blocked on list key. */
+/* Serve clients blocked on the key. */
 robj *o = lookupKeyWrite(rl->db,rl->key);

 if (o != NULL) {
@ -76,11 +76,11 @@ void receiveChildInfo(void) {
|
|||||||
if (read(g_pserver->child_info_pipe[0],&g_pserver->child_info_data,wlen) == wlen &&
|
if (read(g_pserver->child_info_pipe[0],&g_pserver->child_info_data,wlen) == wlen &&
|
||||||
g_pserver->child_info_data.magic == CHILD_INFO_MAGIC)
|
g_pserver->child_info_data.magic == CHILD_INFO_MAGIC)
|
||||||
{
|
{
|
||||||
if (g_pserver->child_info_data.process_type == CHILD_INFO_TYPE_RDB) {
|
if (g_pserver->child_info_data.process_type == CHILD_TYPE_RDB) {
|
||||||
g_pserver->stat_rdb_cow_bytes = g_pserver->child_info_data.cow_size;
|
g_pserver->stat_rdb_cow_bytes = g_pserver->child_info_data.cow_size;
|
||||||
} else if (g_pserver->child_info_data.process_type == CHILD_INFO_TYPE_AOF) {
|
} else if (g_pserver->child_info_data.process_type == CHILD_TYPE_AOF) {
|
||||||
g_pserver->stat_aof_cow_bytes = g_pserver->child_info_data.cow_size;
|
g_pserver->stat_aof_cow_bytes = g_pserver->child_info_data.cow_size;
|
||||||
} else if (g_pserver->child_info_data.process_type == CHILD_INFO_TYPE_MODULE) {
|
} else if (g_pserver->child_info_data.process_type == CHILD_TYPE_MODULE) {
|
||||||
g_pserver->stat_module_cow_bytes = g_pserver->child_info_data.cow_size;
|
g_pserver->stat_module_cow_bytes = g_pserver->child_info_data.cow_size;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
123
src/cluster.cpp
123
src/cluster.cpp
@ -77,6 +77,9 @@ uint64_t clusterGetMaxEpoch(void);
|
|||||||
int clusterBumpConfigEpochWithoutConsensus(void);
|
int clusterBumpConfigEpochWithoutConsensus(void);
|
||||||
void moduleCallClusterReceivers(const char *sender_id, uint64_t module_id, uint8_t type, const unsigned char *payload, uint32_t len);
|
void moduleCallClusterReceivers(const char *sender_id, uint64_t module_id, uint8_t type, const unsigned char *payload, uint32_t len);
|
||||||
|
|
||||||
|
#define RCVBUF_INIT_LEN 1024
|
||||||
|
#define RCVBUF_MAX_PREALLOC (1<<20) /* 1MB */
|
||||||
|
|
||||||
struct redisMaster *getFirstMaster()
|
struct redisMaster *getFirstMaster()
|
||||||
{
|
{
|
||||||
serverAssert(listLength(g_pserver->masters) <= 1);
|
serverAssert(listLength(g_pserver->masters) <= 1);
|
||||||
@ -394,7 +397,7 @@ void clusterSaveConfigOrDie(int do_fsync) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Lock the cluster config using flock(), and leaks the file descritor used to
|
/* Lock the cluster config using flock(), and leaks the file descriptor used to
|
||||||
* acquire the lock so that the file will be locked forever.
|
* acquire the lock so that the file will be locked forever.
|
||||||
*
|
*
|
||||||
* This works because we always update nodes.conf with a new version
|
* This works because we always update nodes.conf with a new version
|
||||||
@ -566,13 +569,13 @@ void clusterInit(void) {
|
|||||||
|
|
||||||
/* Reset a node performing a soft or hard reset:
|
/* Reset a node performing a soft or hard reset:
|
||||||
*
|
*
|
||||||
* 1) All other nodes are forget.
|
* 1) All other nodes are forgotten.
|
||||||
* 2) All the assigned / open slots are released.
|
* 2) All the assigned / open slots are released.
|
||||||
* 3) If the node is a slave, it turns into a master.
|
* 3) If the node is a slave, it turns into a master.
|
||||||
* 5) Only for hard reset: a new Node ID is generated.
|
* 4) Only for hard reset: a new Node ID is generated.
|
||||||
* 6) Only for hard reset: currentEpoch and configEpoch are set to 0.
|
* 5) Only for hard reset: currentEpoch and configEpoch are set to 0.
|
||||||
* 7) The new configuration is saved and the cluster state updated.
|
* 6) The new configuration is saved and the cluster state updated.
|
||||||
* 8) If the node was a slave, the whole data set is flushed away. */
|
* 7) If the node was a slave, the whole data set is flushed away. */
|
||||||
void clusterReset(int hard) {
|
void clusterReset(int hard) {
|
||||||
dictIterator *di;
|
dictIterator *di;
|
||||||
dictEntry *de;
|
dictEntry *de;
|
||||||
@ -639,7 +642,8 @@ clusterLink *createClusterLink(clusterNode *node) {
|
|||||||
clusterLink *link = (clusterLink*)zmalloc(sizeof(*link), MALLOC_LOCAL);
|
clusterLink *link = (clusterLink*)zmalloc(sizeof(*link), MALLOC_LOCAL);
|
||||||
link->ctime = mstime();
|
link->ctime = mstime();
|
||||||
link->sndbuf = sdsempty();
|
link->sndbuf = sdsempty();
|
||||||
link->rcvbuf = sdsempty();
|
link->rcvbuf = (char*)zmalloc(link->rcvbuf_alloc = RCVBUF_INIT_LEN);
|
||||||
|
link->rcvbuf_len = 0;
|
||||||
link->node = node;
|
link->node = node;
|
||||||
link->conn = NULL;
|
link->conn = NULL;
|
||||||
return link;
|
return link;
|
||||||
@ -666,7 +670,7 @@ void freeClusterLink(clusterLink *link) {
|
|||||||
link->conn = NULL;
|
link->conn = NULL;
|
||||||
}
|
}
|
||||||
sdsfree(link->sndbuf);
|
sdsfree(link->sndbuf);
|
||||||
sdsfree(link->rcvbuf);
|
zfree(link->rcvbuf);
|
||||||
if (link->node)
|
if (link->node)
|
||||||
link->node->link = NULL;
|
link->node->link = NULL;
|
||||||
zfree(link);
|
zfree(link);
|
||||||
@ -684,7 +688,7 @@ static void clusterConnAcceptHandler(connection *conn) {
|
|||||||
|
|
||||||
/* Create a link object we use to handle the connection.
|
/* Create a link object we use to handle the connection.
|
||||||
* It gets passed to the readable handler when data is available.
|
* It gets passed to the readable handler when data is available.
|
||||||
* Initiallly the link->node pointer is set to NULL as we don't know
|
* Initially the link->node pointer is set to NULL as we don't know
|
||||||
* which node is, but the right node is references once we know the
|
* which node is, but the right node is references once we know the
|
||||||
* node identity. */
|
* node identity. */
|
||||||
link = createClusterLink(NULL);
|
link = createClusterLink(NULL);
|
||||||
@ -1098,7 +1102,7 @@ uint64_t clusterGetMaxEpoch(void) {
|
|||||||
* 3) Persist the configuration on disk before sending packets with the
|
* 3) Persist the configuration on disk before sending packets with the
|
||||||
* new configuration.
|
* new configuration.
|
||||||
*
|
*
|
||||||
* If the new config epoch is generated and assigend, C_OK is returned,
|
* If the new config epoch is generated and assigned, C_OK is returned,
|
||||||
* otherwise C_ERR is returned (since the node has already the greatest
|
* otherwise C_ERR is returned (since the node has already the greatest
|
||||||
* configuration around) and no operation is performed.
|
* configuration around) and no operation is performed.
|
||||||
*
|
*
|
||||||
@ -1171,7 +1175,7 @@ int clusterBumpConfigEpochWithoutConsensus(void) {
|
|||||||
*
|
*
|
||||||
* In general we want a system that eventually always ends with different
|
* In general we want a system that eventually always ends with different
|
||||||
* masters having different configuration epochs whatever happened, since
|
* masters having different configuration epochs whatever happened, since
|
||||||
* nothign is worse than a split-brain condition in a distributed system.
|
* nothing is worse than a split-brain condition in a distributed system.
|
||||||
*
|
*
|
||||||
* BEHAVIOR
|
* BEHAVIOR
|
||||||
*
|
*
|
||||||
@ -1230,7 +1234,7 @@ void clusterHandleConfigEpochCollision(clusterNode *sender) {
|
|||||||
* entries from the black list. This is an O(N) operation but it is not a
|
* entries from the black list. This is an O(N) operation but it is not a
|
||||||
* problem since add / exists operations are called very infrequently and
|
* problem since add / exists operations are called very infrequently and
|
||||||
* the hash table is supposed to contain very little elements at max.
|
* the hash table is supposed to contain very little elements at max.
|
||||||
* However without the cleanup during long uptimes and with some automated
|
* However without the cleanup during long uptime and with some automated
|
||||||
* node add/removal procedures, entries could accumulate. */
|
* node add/removal procedures, entries could accumulate. */
|
||||||
void clusterBlacklistCleanup(void) {
|
void clusterBlacklistCleanup(void) {
|
||||||
dictIterator *di;
|
dictIterator *di;
|
||||||
@ -1384,12 +1388,12 @@ int clusterHandshakeInProgress(char *ip, int port, int cport) {
|
|||||||
return de != NULL;
|
return de != NULL;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Start an handshake with the specified address if there is not one
|
/* Start a handshake with the specified address if there is not one
|
||||||
* already in progress. Returns non-zero if the handshake was actually
|
* already in progress. Returns non-zero if the handshake was actually
|
||||||
* started. On error zero is returned and errno is set to one of the
|
* started. On error zero is returned and errno is set to one of the
|
||||||
* following values:
|
* following values:
|
||||||
*
|
*
|
||||||
* EAGAIN - There is already an handshake in progress for this address.
|
* EAGAIN - There is already a handshake in progress for this address.
|
||||||
* EINVAL - IP or port are not valid. */
|
* EINVAL - IP or port are not valid. */
|
||||||
int clusterStartHandshake(char *ip, int port, int cport) {
|
int clusterStartHandshake(char *ip, int port, int cport) {
|
||||||
clusterNode *n;
|
clusterNode *n;
|
||||||
@ -1770,7 +1774,7 @@ int clusterProcessPacket(clusterLink *link) {
|
|||||||
|
|
||||||
/* Perform sanity checks */
|
/* Perform sanity checks */
|
||||||
if (totlen < 16) return 1; /* At least signature, version, totlen, count. */
|
if (totlen < 16) return 1; /* At least signature, version, totlen, count. */
|
||||||
if (totlen > sdslen(link->rcvbuf)) return 1;
|
if (totlen > link->rcvbuf_len) return 1;
|
||||||
|
|
||||||
if (ntohs(hdr->ver) != CLUSTER_PROTO_VER) {
|
if (ntohs(hdr->ver) != CLUSTER_PROTO_VER) {
|
||||||
/* Can't handle messages of different versions. */
|
/* Can't handle messages of different versions. */
|
||||||
@ -1835,7 +1839,7 @@ int clusterProcessPacket(clusterLink *link) {
|
|||||||
if (sender) sender->data_received = now;
|
if (sender) sender->data_received = now;
|
||||||
|
|
||||||
if (sender && !nodeInHandshake(sender)) {
|
if (sender && !nodeInHandshake(sender)) {
|
||||||
/* Update our curretEpoch if we see a newer epoch in the cluster. */
|
/* Update our currentEpoch if we see a newer epoch in the cluster. */
|
||||||
senderCurrentEpoch = ntohu64(hdr->currentEpoch);
|
senderCurrentEpoch = ntohu64(hdr->currentEpoch);
|
||||||
senderConfigEpoch = ntohu64(hdr->configEpoch);
|
senderConfigEpoch = ntohu64(hdr->configEpoch);
|
||||||
if (senderCurrentEpoch > g_pserver->cluster->currentEpoch)
|
if (senderCurrentEpoch > g_pserver->cluster->currentEpoch)
|
||||||
@@ -2327,7 +2331,7 @@ void clusterReadHandler(connection *conn) {
     unsigned int readlen, rcvbuflen;
 
     while(1) { /* Read as long as there is data to read. */
-        rcvbuflen = sdslen(link->rcvbuf);
+        rcvbuflen = link->rcvbuf_len;
         if (rcvbuflen < 8) {
             /* First, obtain the first 8 bytes to get the full message
              * length. */
@@ -2363,7 +2367,15 @@ void clusterReadHandler(connection *conn) {
             return;
         } else {
             /* Read data and recast the pointer to the new buffer. */
-            link->rcvbuf = sdscatlen(link->rcvbuf,buf,nread);
+            size_t unused = link->rcvbuf_alloc - link->rcvbuf_len;
+            if ((size_t)nread > unused) {
+                size_t required = link->rcvbuf_len + nread;
+                /* If less than 1mb, grow to twice the needed size, if larger grow by 1mb. */
+                link->rcvbuf_alloc = required < RCVBUF_MAX_PREALLOC ? required * 2: required + RCVBUF_MAX_PREALLOC;
+                link->rcvbuf = (char*)zrealloc(link->rcvbuf, link->rcvbuf_alloc);
+            }
+            memcpy(link->rcvbuf + link->rcvbuf_len, buf, nread);
+            link->rcvbuf_len += nread;
             hdr = (clusterMsg*) link->rcvbuf;
             rcvbuflen += nread;
         }
@@ -2371,8 +2383,11 @@ void clusterReadHandler(connection *conn) {
         /* Total length obtained? Process this packet. */
         if (rcvbuflen >= 8 && rcvbuflen == ntohl(hdr->totlen)) {
             if (clusterProcessPacket(link)) {
-                sdsfree(link->rcvbuf);
-                link->rcvbuf = sdsempty();
+                if (link->rcvbuf_alloc > RCVBUF_INIT_LEN) {
+                    zfree(link->rcvbuf);
+                    link->rcvbuf = (char*)zmalloc(link->rcvbuf_alloc = RCVBUF_INIT_LEN);
+                }
+                link->rcvbuf_len = 0;
             } else {
                 return; /* Link no longer valid. */
             }
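The two hunks above replace the sds-based cluster bus reception buffer with a plain char buffer tracked by rcvbuf_len/rcvbuf_alloc. A minimal standalone sketch of the same growth policy follows; the RCVBUF_MAX_PREALLOC value of 1MB is an assumption suggested by the comment in the hunk, and the helper names here are hypothetical:

    #include <stdlib.h>
    #include <string.h>

    #define RCVBUF_MAX_PREALLOC (1024*1024) /* assumed 1MB threshold */

    typedef struct {
        char *buf;
        size_t len;   /* bytes in use */
        size_t alloc; /* bytes allocated */
    } recvbuf;

    /* Append nread bytes using the policy from the patch: below the threshold
     * the allocation doubles past the required size, above it the allocation
     * grows linearly by 1MB per step so a large peak does not keep doubling. */
    static void recvbuf_append(recvbuf *r, const char *data, size_t nread) {
        size_t unused = r->alloc - r->len;
        if (nread > unused) {
            size_t required = r->len + nread;
            r->alloc = required < RCVBUF_MAX_PREALLOC ? required * 2
                                                      : required + RCVBUF_MAX_PREALLOC;
            char *newbuf = realloc(r->buf, r->alloc);
            if (newbuf == NULL) abort(); /* the server uses zrealloc(), which aborts on OOM */
            r->buf = newbuf;
        }
        memcpy(r->buf + r->len, data, nread);
        r->len += nread;
    }

After a whole packet is processed, the second hunk shrinks the allocation back to RCVBUF_INIT_LEN, so a single oversized message does not pin memory for the lifetime of the link.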
@@ -2530,7 +2545,7 @@ void clusterSetGossipEntry(clusterMsg *hdr, int i, clusterNode *n) {
 }
 
 /* Send a PING or PONG packet to the specified node, making sure to add enough
- * gossip informations. */
+ * gossip information. */
 void clusterSendPing(clusterLink *link, int type) {
     unsigned char *buf;
     clusterMsg *hdr;
@@ -2550,7 +2565,7 @@ void clusterSendPing(clusterLink *link, int type) {
      * node_timeout we exchange with each other node at least 4 packets
      * (we ping in the worst case in node_timeout/2 time, and we also
      * receive two pings from the host), we have a total of 8 packets
-     * in the node_timeout*2 falure reports validity time. So we have
+     * in the node_timeout*2 failure reports validity time. So we have
      * that, for a single PFAIL node, we can expect to receive the following
      * number of failure reports (in the specified window of time):
      *
@@ -2577,7 +2592,7 @@ void clusterSendPing(clusterLink *link, int type) {
      * faster to propagate to go from PFAIL to FAIL state. */
     int pfail_wanted = g_pserver->cluster->stats_pfail_nodes;
 
-    /* Compute the maxium totlen to allocate our buffer. We'll fix the totlen
+    /* Compute the maximum totlen to allocate our buffer. We'll fix the totlen
      * later according to the number of gossip sections we really were able
      * to put inside the packet. */
     totlen = sizeof(clusterMsg)-sizeof(union clusterMsgData);
@@ -2614,7 +2629,7 @@ void clusterSendPing(clusterLink *link, int type) {
         if (thisNode->flags & (CLUSTER_NODE_HANDSHAKE|CLUSTER_NODE_NOADDR) ||
             (thisNode->link == NULL && thisNode->numslots == 0))
         {
-            freshnodes--; /* Tecnically not correct, but saves CPU. */
+            freshnodes--; /* Technically not correct, but saves CPU. */
             continue;
         }
 
@@ -3199,7 +3214,7 @@ void clusterHandleSlaveFailover(void) {
         }
     }
 
-    /* If the previous failover attempt timedout and the retry time has
+    /* If the previous failover attempt timeout and the retry time has
      * elapsed, we can setup a new one. */
     if (auth_age > auth_retry_time) {
         g_pserver->cluster->failover_auth_time = mstime() +
@@ -3305,7 +3320,7 @@ void clusterHandleSlaveFailover(void) {
  *
  * Slave migration is the process that allows a slave of a master that is
  * already covered by at least another slave, to "migrate" to a master that
- * is orpaned, that is, left with no working slaves.
+ * is orphaned, that is, left with no working slaves.
  * ------------------------------------------------------------------------- */
 
 /* This function is responsible to decide if this replica should be migrated
@@ -3322,7 +3337,7 @@ void clusterHandleSlaveFailover(void) {
  * the nodes anyway, so we spend time into clusterHandleSlaveMigration()
  * if definitely needed.
 *
- * The fuction is called with a pre-computed max_slaves, that is the max
+ * The function is called with a pre-computed max_slaves, that is the max
 * number of working (not in FAIL state) slaves for a single master.
 *
 * Additional conditions for migration are examined inside the function.
@@ -3441,7 +3456,7 @@ void clusterHandleSlaveMigration(int max_slaves) {
 * data loss due to the asynchronous master-slave replication.
 * -------------------------------------------------------------------------- */
 
-/* Reset the manual failover state. This works for both masters and slavesa
+/* Reset the manual failover state. This works for both masters and slaves
 * as all the state about manual failover is cleared.
 *
 * The function can be used both to initialize the manual failover state at
@@ -3733,7 +3748,7 @@ void clusterCron(void) {
                 replicationAddMaster(myself->slaveof->ip, myself->slaveof->port);
             }
 
-            /* Abourt a manual failover if the timeout is reached. */
+            /* Abort a manual failover if the timeout is reached. */
             manualFailoverCheckTimeout();
 
             if (nodeIsSlave(myself)) {
@@ -3838,12 +3853,12 @@ int clusterNodeSetSlotBit(clusterNode *n, int slot) {
      * target for replicas migration, if and only if at least one of
      * the other masters has slaves right now.
      *
-     * Normally masters are valid targerts of replica migration if:
+     * Normally masters are valid targets of replica migration if:
      * 1. The used to have slaves (but no longer have).
      * 2. They are slaves failing over a master that used to have slaves.
      *
      * However new masters with slots assigned are considered valid
-     * migration tagets if the rest of the cluster is not a slave-less.
+     * migration targets if the rest of the cluster is not a slave-less.
      *
      * See https://github.com/antirez/redis/issues/3043 for more info. */
     if (n->numslots == 1 && clusterMastersHaveSlaves())
@@ -4027,7 +4042,7 @@ void clusterUpdateState(void) {
  *    A) If no other node is in charge according to the current cluster
  *       configuration, we add these slots to our node.
  *    B) If according to our config other nodes are already in charge for
- *       this lots, we set the slots as IMPORTING from our point of view
+ *       this slots, we set the slots as IMPORTING from our point of view
  *       in order to justify we have those slots, and in order to make
  *       keydb-trib aware of the issue, so that it can try to fix it.
  * 2) If we find data in a DB different than DB0 we return C_ERR to
@@ -4056,7 +4071,7 @@ int verifyClusterConfigWithData(void) {
 
     /* Make sure we only have keys in DB0. */
     for (j = 1; j < cserver.dbnum; j++) {
-        if (dictSize(g_pserver->db[j].pdict)) return C_ERR;
+        if (dictSize(g_pserver->db[j].dict)) return C_ERR;
     }
 
     /* Check that all the slots we see populated memory have a corresponding
@@ -4437,7 +4452,7 @@ NULL
         clusterReplyMultiBulkSlots(c);
     } else if (!strcasecmp(szFromObj(c->argv[1]),"flushslots") && c->argc == 2) {
         /* CLUSTER FLUSHSLOTS */
-        if (dictSize(g_pserver->db[0].pdict) != 0) {
+        if (dictSize(g_pserver->db[0].dict) != 0) {
             addReplyError(c,"DB must be empty to perform CLUSTER FLUSHSLOTS.");
             return;
         }
@@ -4557,7 +4572,7 @@ NULL
             }
             /* If this slot is in migrating status but we have no keys
              * for it assigning the slot to another node will clear
-             * the migratig status. */
+             * the migrating status. */
             if (countKeysInSlot(slot) == 0 &&
                 g_pserver->cluster->migrating_slots_to[slot])
                 g_pserver->cluster->migrating_slots_to[slot] = NULL;
@@ -4770,7 +4785,7 @@ NULL
          * slots nor keys to accept to replicate some other node.
          * Slaves can switch to another master without issues. */
         if (nodeIsMaster(myself) &&
-            (myself->numslots != 0 || dictSize(g_pserver->db[0].pdict) != 0)) {
+            (myself->numslots != 0 || dictSize(g_pserver->db[0].dict) != 0)) {
             addReplyError(c,
                 "To set a master the node must be empty and "
                 "without assigned slots.");
@@ -4902,7 +4917,7 @@ NULL
         g_pserver->cluster->currentEpoch = epoch;
         /* No need to fsync the config here since in the unlucky event
          * of a failure to persist the config, the conflict resolution code
-         * will assign an unique config to this node. */
+         * will assign a unique config to this node. */
         clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|
                              CLUSTER_TODO_SAVE_CONFIG);
         addReply(c,shared.ok);
@@ -4927,7 +4942,7 @@ NULL
 
         /* Slaves can be reset while containing data, but not master nodes
          * that must be empty. */
-        if (nodeIsMaster(myself) && dictSize(c->db->pdict) != 0) {
+        if (nodeIsMaster(myself) && dictSize(c->db->dict) != 0) {
             addReplyError(c,"CLUSTER RESET can't be called with "
                             "master nodes containing keys");
             return;
@@ -4950,7 +4965,7 @@ void createDumpPayload(rio *payload, robj_roptr o, robj *key) {
     unsigned char buf[2];
     uint64_t crc;
 
-    /* Serialize the object in a RDB-like format. It consist of an object type
+    /* Serialize the object in an RDB-like format. It consist of an object type
      * byte followed by the serialized object. This is understood by RESTORE. */
     rioInitWithBuffer(payload,sdsempty());
     serverAssert(rdbSaveObjectType(payload,o));
@@ -5665,7 +5680,7 @@ void readwriteCommand(client *c) {
 * resharding in progress).
 *
 * On success the function returns the node that is able to serve the request.
- * If the node is not 'myself' a redirection must be perfomed. The kind of
+ * If the node is not 'myself' a redirection must be performed. The kind of
 * redirection is specified setting the integer passed by reference
 * 'error_code', which will be set to CLUSTER_REDIR_ASK or
 * CLUSTER_REDIR_MOVED.
@@ -5743,7 +5758,10 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
             margc = ms->commands[i].argc;
             margv = ms->commands[i].argv;
 
-            keyindex = getKeysFromCommand(mcmd,margv,margc,&numkeys);
+            getKeysResult result = GETKEYS_RESULT_INIT;
+            numkeys = getKeysFromCommand(mcmd,margv,margc,&result);
+            keyindex = result.keys;
 
             for (j = 0; j < numkeys; j++) {
                 robj *thiskey = margv[keyindex[j]];
                 int thisslot = keyHashSlot((char*)ptrFromObj(thiskey),
@@ -5761,7 +5779,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
                      * not trapped earlier in processCommand(). Report the same
                      * error to the client. */
                     if (n == NULL) {
-                        getKeysFreeResult(keyindex);
+                        getKeysFreeResult(&result);
                         if (error_code)
                             *error_code = CLUSTER_REDIR_DOWN_UNBOUND;
                         return NULL;
@@ -5785,7 +5803,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
                     if (!equalStringObjects(firstkey,thiskey)) {
                         if (slot != thisslot) {
                             /* Error: multiple keys from different slots. */
-                            getKeysFreeResult(keyindex);
+                            getKeysFreeResult(&result);
                             if (error_code)
                                 *error_code = CLUSTER_REDIR_CROSS_SLOT;
                             return NULL;
@@ -5797,14 +5815,14 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
                     }
                 }
 
-                /* Migarting / Improrting slot? Count keys we don't have. */
+                /* Migrating / Importing slot? Count keys we don't have. */
                 if ((migrating_slot || importing_slot) &&
                     lookupKeyRead(&g_pserver->db[0],thiskey) == nullptr)
                 {
                     missing_keys++;
                 }
             }
-            getKeysFreeResult(keyindex);
+            getKeysFreeResult(&result);
         }
 
     /* No key at all in command? then we can serve the request
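The three hunks above migrate getNodeByQuery() to the newer getKeysFromCommand() interface, which returns the key count and carries the key indexes inside a getKeysResult value rather than an allocated array plus an out-parameter. A sketch of the call pattern as it appears in these hunks; this fragment relies on the server's internal declarations, and only the .keys field of getKeysResult is actually shown in the diff, anything else about the struct is an assumption:

    /* mcmd/margv/margc come from the queued MULTI command, as in getNodeByQuery(). */
    getKeysResult result = GETKEYS_RESULT_INIT;   /* stack-allocated result holder */
    int numkeys = getKeysFromCommand(mcmd, margv, margc, &result);
    int *keyindex = result.keys;                  /* indexes into margv */

    for (int j = 0; j < numkeys; j++) {
        robj *thiskey = margv[keyindex[j]];
        /* ... hash-slot bookkeeping exactly as in the hunk above ... */
    }

    /* Every exit path must release the result object, not the raw pointer. */
    getKeysFreeResult(&result);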
@@ -5866,7 +5884,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
     }
 
     /* Handle the read-only client case reading from a slave: if this
-     * node is a slave and the request is about an hash slot our master
+     * node is a slave and the request is about a hash slot our master
      * is serving, we can reply without redirection. */
     int is_readonly_command = (c->cmd->flags & CMD_READONLY) ||
         (c->cmd->proc == execCommand && !(c->mstate.cmd_inv_flags & CMD_READONLY));
@@ -5880,7 +5898,7 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
     }
 
     /* Base case: just return the right node. However if this node is not
-     * myself, set error_code to MOVED since we need to issue a rediretion. */
+     * myself, set error_code to MOVED since we need to issue a redirection. */
     if (n != myself && error_code) *error_code = CLUSTER_REDIR_MOVED;
     return n;
 }
@@ -5926,7 +5944,7 @@ void clusterRedirectClient(client *c, clusterNode *n, int hashslot, int error_co
 * 3) The client may remain blocked forever (or up to the max timeout time)
 *    waiting for a key change that will never happen.
 *
- * If the client is found to be blocked into an hash slot this node no
+ * If the client is found to be blocked into a hash slot this node no
 * longer handles, the client is sent a redirection error, and the function
 * returns 1. Otherwise 0 is returned and no operation is performed. */
 int clusterRedirectBlockedClientIfNeeded(client *c) {
@@ -5955,6 +5973,15 @@ int clusterRedirectBlockedClientIfNeeded(client *c) {
             int slot = keyHashSlot((char*)ptrFromObj(key), sdslen(szFromObj(key)));
             clusterNode *node = g_pserver->cluster->slots[slot];
 
+            /* if the client is read-only and attempting to access key that our
+             * replica can handle, allow it. */
+            if ((c->flags & CLIENT_READONLY) &&
+                (c->lastcmd->flags & CMD_READONLY) &&
+                nodeIsSlave(myself) && myself->slaveof == node)
+            {
+                node = myself;
+            }
+
             /* We send an error and unblock the client if:
              * 1) The slot is unassigned, emitting a cluster down error.
              * 2) The slot is not handled by this node, nor being imported. */
@@ -42,7 +42,9 @@ typedef struct clusterLink {
     mstime_t ctime;             /* Link creation time */
     connection *conn;           /* Connection to remote node */
     sds sndbuf;                 /* Packet send buffer */
-    sds rcvbuf;                 /* Packet reception buffer */
+    char *rcvbuf;               /* Packet reception buffer */
+    size_t rcvbuf_len;          /* Used size of rcvbuf */
+    size_t rcvbuf_alloc;        /* Allocated size of rcvbuf */
     struct clusterNode *node;   /* Node related to this link if any, or NULL */
 } clusterLink;
 
@@ -55,8 +57,8 @@ typedef struct clusterLink {
 #define CLUSTER_NODE_HANDSHAKE 32 /* We have still to exchange the first ping */
 #define CLUSTER_NODE_NOADDR 64    /* We don't know the address of this node */
 #define CLUSTER_NODE_MEET 128     /* Send a MEET message to this node */
-#define CLUSTER_NODE_MIGRATE_TO 256 /* Master elegible for replica migration. */
-#define CLUSTER_NODE_NOFAILOVER 512 /* Slave will not try to failver. */
+#define CLUSTER_NODE_MIGRATE_TO 256 /* Master eligible for replica migration. */
+#define CLUSTER_NODE_NOFAILOVER 512 /* Slave will not try to failover. */
 #define CLUSTER_NODE_NULL_NAME "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"
 
 #define nodeIsMaster(n) ((n)->flags & CLUSTER_NODE_MASTER)
@@ -168,10 +170,10 @@ typedef struct clusterState {
     clusterNode *mf_slave;      /* Slave performing the manual failover. */
     /* Manual failover state of slave. */
     long long mf_master_offset; /* Master offset the slave needs to start MF
-                                   or zero if stil not received. */
+                                   or zero if still not received. */
     int mf_can_start;           /* If non-zero signal that the manual failover
                                    can start requesting masters vote. */
-    /* The followign fields are used by masters to take state on elections. */
+    /* The following fields are used by masters to take state on elections. */
     uint64_t lastVoteEpoch;     /* Epoch of the last vote granted. */
     int todo_before_sleep; /* Things to do in clusterBeforeSleep(). */
     /* Messages received and sent by type. */
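With the clusterLink fields above, the reception buffer is managed by hand instead of through sds. A minimal sketch of the matching setup and teardown, assuming the link starts with an RCVBUF_INIT_LEN-sized allocation; these helper names are hypothetical, and the real create/free functions are not part of this excerpt:

    /* Hypothetical helpers mirroring how the new fields are used in the
     * clusterReadHandler() hunks above. */
    static void clusterLinkInitRcvbuf(clusterLink *link) {
        link->rcvbuf_alloc = RCVBUF_INIT_LEN;
        link->rcvbuf_len = 0;
        link->rcvbuf = (char*)zmalloc(link->rcvbuf_alloc);
    }

    static void clusterLinkFreeRcvbuf(clusterLink *link) {
        zfree(link->rcvbuf);
        link->rcvbuf = NULL;
        link->rcvbuf_len = link->rcvbuf_alloc = 0;
    }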
src/config.cpp (143 changes)

@@ -106,6 +106,15 @@ configEnum tls_auth_clients_enum[] = {
     {"optional", TLS_CLIENT_AUTH_OPTIONAL},
     {NULL, 0}
 };
 
+configEnum oom_score_adj_enum[] = {
+    {"no", OOM_SCORE_ADJ_NO},
+    {"yes", OOM_SCORE_RELATIVE},
+    {"relative", OOM_SCORE_RELATIVE},
+    {"absolute", OOM_SCORE_ADJ_ABSOLUTE},
+    {NULL, 0}
+};
+
 /* Output buffer limits presets. */
 clientBufferLimitsConfig clientBufferLimitsDefaults[CLIENT_TYPE_OBUF_COUNT] = {
     {0, 0, 0}, /* normal */
@@ -302,7 +311,7 @@ void queueLoadModule(sds path, sds *argv, int argc) {
 * g_pserver->oom_score_adj_values if valid.
 */
 
-static int updateOOMScoreAdjValues(sds *args, const char **err) {
+static int updateOOMScoreAdjValues(sds *args, const char **err, int apply) {
     int i;
     int values[CONFIG_OOM_COUNT];
 
@@ -310,8 +319,8 @@ static int updateOOMScoreAdjValues(sds *args, const char **err) {
         char *eptr;
         long long val = strtoll(args[i], &eptr, 10);
 
-        if (*eptr != '\0' || val < -1000 || val > 1000) {
-            if (err) *err = "Invalid oom-score-adj-values, elements must be between -1000 and 1000.";
+        if (*eptr != '\0' || val < -2000 || val > 2000) {
+            if (err) *err = "Invalid oom-score-adj-values, elements must be between -2000 and 2000.";
             return C_ERR;
         }
 
@@ -336,6 +345,10 @@ static int updateOOMScoreAdjValues(sds *args, const char **err) {
         g_pserver->oom_score_adj_values[i] = values[i];
     }
 
+    /* When parsing the config file, we want to apply only when all is done. */
+    if (!apply)
+        return C_OK;
+
     /* Update */
     if (setOOMScoreAdj(-1) == C_ERR) {
         /* Roll back */
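The 'apply' flag added above lets config-file loading validate oom-score-adj-values without touching the process OOM score until parsing has finished, while CONFIG SET still validates and applies in one step. A self-contained sketch of the same parse-and-range-check step, assuming CONFIG_OOM_COUNT is 3 (one value per process class); the wider -2000..2000 bound is taken directly from the hunk:

    #include <stdlib.h>

    #define CONFIG_OOM_COUNT 3   /* assumed: master, replica, background child */

    /* Returns 0 on success and fills 'values'; returns -1 on any malformed or
     * out-of-range element, mirroring updateOOMScoreAdjValues(). */
    static int parseOomScoreAdjValues(char **args, int *values) {
        for (int i = 0; i < CONFIG_OOM_COUNT; i++) {
            char *eptr;
            long long val = strtoll(args[i], &eptr, 10);
            if (*eptr != '\0' || val < -2000 || val > 2000) return -1;
            values[i] = (int)val;
        }
        return 0;
    }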
@@ -473,7 +486,16 @@ void loadServerConfigFromString(char *config) {
         } else if ((!strcasecmp(argv[0],"slaveof") ||
                     !strcasecmp(argv[0],"replicaof")) && argc == 3) {
             slaveof_linenum = linenum;
-            replicationAddMaster(argv[1], atoi(argv[2]));
+            if (!strcasecmp(argv[1], "no") && !strcasecmp(argv[2], "one")) {
+                listRelease(g_pserver->masters);
+                continue;
+            }
+            char *ptr;
+            int port = strtol(argv[2], &ptr, 10);
+            if (port < 0 || port > 65535 || *ptr != '\0') {
+                err = "Invalid master port"; goto loaderr;
+            }
+            replicationAddMaster(argv[1], port);
         } else if (!strcasecmp(argv[0],"requirepass") && argc == 2) {
             if (strlen(argv[1]) > CONFIG_AUTHPASS_MAX_LEN) {
                 err = "Password is longer than CONFIG_AUTHPASS_MAX_LEN";
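The replicaof hunk above now accepts the literal "no one" form in the config file and rejects non-numeric or out-of-range ports instead of passing atoi()'s result through unchecked. A small standalone sketch of the same validation, under the assumption that the caller only needs a parsed port and a success/failure signal:

    #include <stdlib.h>
    #include <strings.h>

    /* Returns 1 and stores the port if "host port" names a usable master,
     * 0 for the special "no one" form, and -1 for an invalid port string. */
    static int parseReplicaOf(const char *host, const char *port_str, int *port_out) {
        if (strcasecmp(host, "no") == 0 && strcasecmp(port_str, "one") == 0)
            return 0; /* "replicaof no one": detach from any master */

        char *end;
        long port = strtol(port_str, &end, 10);
        if (*end != '\0' || port < 0 || port > 65535)
            return -1; /* matches the "Invalid master port" load error */

        *port_out = (int)port;
        return 1;
    }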
@@ -484,11 +506,16 @@ void loadServerConfigFromString(char *config) {
              * additionally is to remember the cleartext password in this
              * case, for backward compatibility with Redis <= 5. */
             ACLSetUser(DefaultUser,"resetpass",-1);
+            sdsfree(g_pserver->requirepass);
+            g_pserver->requirepass = NULL;
+            if (sdslen(argv[1])) {
                 sds aclop = sdscatprintf(sdsempty(),">%s",argv[1]);
                 ACLSetUser(DefaultUser,aclop,sdslen(aclop));
                 sdsfree(aclop);
-            sdsfree(g_pserver->requirepass);
                 g_pserver->requirepass = sdsnew(argv[1]);
+            } else {
+                ACLSetUser(DefaultUser,"nopass",-1);
+            }
         } else if (!strcasecmp(argv[0],"list-max-ziplist-entries") && argc == 2){
             /* DEAD OPTION */
         } else if (!strcasecmp(argv[0],"list-max-ziplist-value") && argc == 2) {
@@ -543,7 +570,7 @@ void loadServerConfigFromString(char *config) {
             cserver.client_obuf_limits[type].soft_limit_bytes = soft;
             cserver.client_obuf_limits[type].soft_limit_seconds = soft_seconds;
         } else if (!strcasecmp(argv[0],"oom-score-adj-values") && argc == 1 + CONFIG_OOM_COUNT) {
-            if (updateOOMScoreAdjValues(&argv[1], &err) == C_ERR) goto loaderr;
+            if (updateOOMScoreAdjValues(&argv[1], &err, 0) == C_ERR) goto loaderr;
         } else if (!strcasecmp(argv[0],"notify-keyspace-events") && argc == 2) {
             int flags = keyspaceEventsStringToFlags(argv[1]);
 
@@ -751,11 +778,16 @@ void configSetCommand(client *c) {
          * additionally is to remember the cleartext password in this
          * case, for backward compatibility with Redis <= 5. */
         ACLSetUser(DefaultUser,"resetpass",-1);
+        sdsfree(g_pserver->requirepass);
+        g_pserver->requirepass = NULL;
+        if (sdslen(szFromObj(o))) {
             sds aclop = sdscatprintf(sdsempty(),">%s",(char*)ptrFromObj(o));
             ACLSetUser(DefaultUser,aclop,sdslen(aclop));
             sdsfree(aclop);
-        sdsfree(g_pserver->requirepass);
             g_pserver->requirepass = sdsnew(szFromObj(o));
+        } else {
+            ACLSetUser(DefaultUser,"nopass",-1);
+        }
     } config_set_special_field("save") {
         int vlen, j;
         sds *v = sdssplitlen(szFromObj(o),sdslen(szFromObj(o))," ",1,&vlen);
@@ -845,7 +877,7 @@ void configSetCommand(client *c) {
         int success = 1;
 
         sds *v = sdssplitlen(szFromObj(o), sdslen(szFromObj(o)), " ", 1, &vlen);
-        if (vlen != CONFIG_OOM_COUNT || updateOOMScoreAdjValues(v, &errstr) == C_ERR)
+        if (vlen != CONFIG_OOM_COUNT || updateOOMScoreAdjValues(v, &errstr, 1) == C_ERR)
             success = 0;
 
         sdsfreesplitres(v, vlen);
@@ -1356,7 +1388,7 @@ void rewriteConfigNumericalOption(struct rewriteConfigState *state, const char *
     rewriteConfigRewriteLine(state,option,line,force);
 }
 
-/* Rewrite a octal option. */
+/* Rewrite an octal option. */
 void rewriteConfigOctalOption(struct rewriteConfigState *state, const char *option, int value, int defvalue) {
     int force = value != defvalue;
     sds line = sdscatprintf(sdsempty(),"%s %o",option,value);
@@ -1381,6 +1413,12 @@ void rewriteConfigSaveOption(struct rewriteConfigState *state) {
     int j;
     sds line;
 
+    /* In Sentinel mode we don't need to rewrite the save parameters */
+    if (g_pserver->sentinel_mode) {
+        rewriteConfigMarkAsProcessed(state,"save");
+        return;
+    }
+
     /* Note that if there are no save parameters at all, all the current
      * config line with "save" will be detected as orphaned and deleted,
      * resulting into no RDB persistence as expected. */
@@ -1628,60 +1666,62 @@ void rewriteConfigRemoveOrphaned(struct rewriteConfigState *state) {
     dictReleaseIterator(di);
 }
 
-/* This function overwrites the old configuration file with the new content.
- *
- * 1) The old file length is obtained.
- * 2) If the new content is smaller, padding is added.
- * 3) A single write(2) call is used to replace the content of the file.
- * 4) Later the file is truncated to the length of the new content.
- *
- * This way we are sure the file is left in a consistent state even if the
- * process is stopped between any of the four operations.
+/* This function replaces the old configuration file with the new content
+ * in an atomic manner.
 *
 * The function returns 0 on success, otherwise -1 is returned and errno
- * set accordingly. */
+ * is set accordingly. */
 int rewriteConfigOverwriteFile(char *configfile, sds content) {
-    int retval = 0;
-    int fd = open(configfile,O_RDWR|O_CREAT,0644);
-    int content_size = sdslen(content), padding = 0;
-    struct stat sb;
-    sds content_padded;
+    int fd = -1;
+    int retval = -1;
+    char tmp_conffile[PATH_MAX];
+    const char *tmp_suffix = ".XXXXXX";
+    size_t offset = 0;
+    ssize_t written_bytes = 0;
 
-    /* 1) Open the old file (or create a new one if it does not
-     *    exist), get the size. */
-    if (fd == -1) return -1; /* errno set by open(). */
-    if (fstat(fd,&sb) == -1) {
-        close(fd);
-        return -1; /* errno set by fstat(). */
+    int tmp_path_len = snprintf(tmp_conffile, sizeof(tmp_conffile), "%s%s", configfile, tmp_suffix);
+    if (tmp_path_len <= 0 || (unsigned int)tmp_path_len >= sizeof(tmp_conffile)) {
+        serverLog(LL_WARNING, "Config file full path is too long");
+        errno = ENAMETOOLONG;
+        return retval;
     }
 
-    /* 2) Pad the content at least match the old file size. */
-    content_padded = sdsdup(content);
-    if (content_size < sb.st_size) {
-        /* If the old file was bigger, pad the content with
-         * a newline plus as many "#" chars as required. */
-        padding = sb.st_size - content_size;
-        content_padded = sdsgrowzero(content_padded,sb.st_size);
-        content_padded[content_size] = '\n';
-        memset(content_padded+content_size+1,'#',padding-1);
+#ifdef _GNU_SOURCE
+    fd = mkostemp(tmp_conffile, O_CLOEXEC);
+#else
+    /* There's a theoretical chance here to leak the FD if a module thread forks & execv in the middle */
+    fd = mkstemp(tmp_conffile);
+#endif
+
+    if (fd == -1) {
+        serverLog(LL_WARNING, "Could not create tmp config file (%s)", strerror(errno));
+        return retval;
     }
 
-    /* 3) Write the new content using a single write(2). */
-    if (write(fd,content_padded,strlen(content_padded)) == -1) {
-        retval = -1;
+    while (offset < sdslen(content)) {
+        written_bytes = write(fd, content + offset, sdslen(content) - offset);
+        if (written_bytes <= 0) {
+            if (errno == EINTR) continue; /* FD is blocking, no other retryable errors */
+            serverLog(LL_WARNING, "Failed after writing (%zd) bytes to tmp config file (%s)", offset, strerror(errno));
             goto cleanup;
         }
+        offset+=written_bytes;
-    /* 4) Truncate the file to the right length if we used padding. */
-    if (padding) {
-        if (ftruncate(fd,content_size) == -1) {
-            /* Non critical error... */
-        }
     }
 
+    if (fsync(fd))
+        serverLog(LL_WARNING, "Could not sync tmp config file to disk (%s)", strerror(errno));
+    else if (fchmod(fd, 0644) == -1)
+        serverLog(LL_WARNING, "Could not chmod config file (%s)", strerror(errno));
+    else if (rename(tmp_conffile, configfile) == -1)
+        serverLog(LL_WARNING, "Could not rename tmp config file (%s)", strerror(errno));
+    else {
+        retval = 0;
+        serverLog(LL_DEBUG, "Rewritten config file (%s) successfully", configfile);
     }
 
 cleanup:
-    sdsfree(content_padded);
     close(fd);
+    if (retval) unlink(tmp_conffile);
    return retval;
 }
 
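The rewritten rewriteConfigOverwriteFile() above switches from padding and overwriting the file in place to the classic write-temp-file-then-rename pattern. A minimal standalone sketch of that pattern, with error handling reduced to return codes; the 0644 mode and the ".XXXXXX" suffix mirror the hunk, everything else is an assumption:

    #include <errno.h>
    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Write 'content' to 'path' atomically: readers either see the old file or
     * the complete new one, never a half-written mix. Returns 0 on success. */
    static int atomic_overwrite(const char *path, const char *content, size_t len) {
        char tmp[PATH_MAX];
        if ((size_t)snprintf(tmp, sizeof(tmp), "%s.XXXXXX", path) >= sizeof(tmp)) return -1;

        int fd = mkstemp(tmp);          /* unique temp file in the same directory */
        if (fd == -1) return -1;

        size_t off = 0;
        while (off < len) {
            ssize_t n = write(fd, content + off, len - off);
            if (n <= 0) { if (errno == EINTR) continue; goto fail; }
            off += (size_t)n;
        }
        if (fsync(fd) == -1) goto fail;          /* data on disk before the rename */
        if (fchmod(fd, 0644) == -1) goto fail;   /* mkstemp creates the file 0600 */
        if (rename(tmp, path) == -1) goto fail;  /* atomic replacement */
        close(fd);
        return 0;

    fail:
        close(fd);
        unlink(tmp);
        return -1;
    }

One design consequence: because rename() creates a new directory entry, the directory containing the config file must be writable by the server process, not just the file itself.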
@@ -2195,7 +2235,7 @@ static int isValidAOFfilename(char *val, const char **err) {
 static int updateHZ(long long val, long long prev, const char **err) {
     UNUSED(prev);
     UNUSED(err);
-    /* Hz is more an hint from the user, so we accept values out of range
+    /* Hz is more a hint from the user, so we accept values out of range
      * but cap them to reasonable values. */
     g_pserver->config_hz = val;
     if (g_pserver->config_hz < CONFIG_MIN_HZ) g_pserver->config_hz = CONFIG_MIN_HZ;
@@ -2213,7 +2253,7 @@ static int updateJemallocBgThread(int val, int prev, const char **err) {
 
 static int updateReplBacklogSize(long long val, long long prev, const char **err) {
     /* resizeReplicationBacklog sets g_pserver->repl_backlog_size, and relies on
-     * being able to tell when the size changes, so restore prev becore calling it. */
+     * being able to tell when the size changes, so restore prev before calling it. */
     UNUSED(err);
     g_pserver->repl_backlog_size = prev;
     resizeReplicationBacklog(val);
@@ -2403,7 +2443,6 @@ standardConfig configs[] = {
     createBoolConfig("multi-master-no-forward", NULL, MODIFIABLE_CONFIG, cserver.multimaster_no_forward, 0, validateMultiMasterNoForward, NULL),
     createBoolConfig("allow-write-during-load", NULL, MODIFIABLE_CONFIG, g_pserver->fWriteDuringActiveLoad, 0, NULL, NULL),
     createBoolConfig("io-threads-do-reads", NULL, IMMUTABLE_CONFIG, fDummy, 0, NULL, NULL),
-    createBoolConfig("oom-score-adj", NULL, MODIFIABLE_CONFIG, g_pserver->oom_score_adj, 0, NULL, updateOOMScoreAdj),
 
     /* String Configs */
     createStringConfig("aclfile", NULL, IMMUTABLE_CONFIG, ALLOW_EMPTY_STRING, g_pserver->acl_filename, "", NULL, NULL),
@@ -2420,6 +2459,7 @@ standardConfig configs[] = {
     createStringConfig("bio_cpulist", NULL, IMMUTABLE_CONFIG, EMPTY_STRING_IS_NULL, g_pserver->bio_cpulist, NULL, NULL, NULL),
     createStringConfig("aof_rewrite_cpulist", NULL, IMMUTABLE_CONFIG, EMPTY_STRING_IS_NULL, g_pserver->aof_rewrite_cpulist, NULL, NULL, NULL),
     createStringConfig("bgsave_cpulist", NULL, IMMUTABLE_CONFIG, EMPTY_STRING_IS_NULL, g_pserver->bgsave_cpulist, NULL, NULL, NULL),
+    createStringConfig("ignore-warnings", NULL, MODIFIABLE_CONFIG, ALLOW_EMPTY_STRING, g_pserver->ignore_warnings, "ARM64-COW-BUG", NULL, NULL),
 
     /* Enum Configs */
     createEnumConfig("supervised", NULL, IMMUTABLE_CONFIG, supervised_mode_enum, cserver.supervised_mode, SUPERVISED_NONE, NULL, NULL),
@@ -2428,6 +2468,7 @@ standardConfig configs[] = {
     createEnumConfig("loglevel", NULL, MODIFIABLE_CONFIG, loglevel_enum, cserver.verbosity, LL_NOTICE, NULL, NULL),
     createEnumConfig("maxmemory-policy", NULL, MODIFIABLE_CONFIG, maxmemory_policy_enum, g_pserver->maxmemory_policy, MAXMEMORY_NO_EVICTION, NULL, NULL),
     createEnumConfig("appendfsync", NULL, MODIFIABLE_CONFIG, aof_fsync_enum, g_pserver->aof_fsync, AOF_FSYNC_EVERYSEC, NULL, NULL),
+    createEnumConfig("oom-score-adj", NULL, MODIFIABLE_CONFIG, oom_score_adj_enum, g_pserver->oom_score_adj, OOM_SCORE_ADJ_NO, NULL, updateOOMScoreAdj),
 
     /* Integer configs */
     createIntConfig("databases", NULL, IMMUTABLE_CONFIG, 1, INT_MAX, cserver.dbnum, 16, INTEGER_CONFIG, NULL, NULL),
src/config.h (12 changes)

@@ -64,7 +64,7 @@
 
 /* Test for backtrace() */
 #if defined(__APPLE__) || (defined(__linux__) && defined(__GLIBC__)) || \
-    defined(__FreeBSD__) || (defined(__OpenBSD__) && defined(USE_BACKTRACE))\
+    defined(__FreeBSD__) || ((defined(__OpenBSD__) || defined(__NetBSD__)) && defined(USE_BACKTRACE))\
 || defined(__DragonFly__)
 #define HAVE_BACKTRACE 1
 #endif
@@ -124,6 +124,10 @@
 #define USE_SETPROCTITLE
 #endif
 
+#if defined(__HAIKU__)
+#define ESOCKTNOSUPPORT 0
+#endif
+
 #if ((defined __linux && defined(__GLIBC__)) || defined __APPLE__)
 #define USE_SETPROCTITLE
 #define INIT_SETPROCTITLE_REPLACEMENT
@@ -172,7 +176,7 @@ void setproctitle(const char *fmt, ...);
 #endif /* BYTE_ORDER */
 
 /* Sometimes after including an OS-specific header that defines the
- * endianess we end with __BYTE_ORDER but not with BYTE_ORDER that is what
+ * endianness we end with __BYTE_ORDER but not with BYTE_ORDER that is what
 * the Redis code uses. In this case let's define everything without the
 * underscores. */
 #ifndef BYTE_ORDER
@@ -242,7 +246,7 @@ void setproctitle(const char *fmt, ...);
 #define redis_set_thread_title(name) pthread_set_name_np(pthread_self(), name)
 #elif defined __NetBSD__
 #include <pthread.h>
-#define redis_set_thread_title(name) pthread_setname_np(pthread_self(), name, NULL)
+#define redis_set_thread_title(name) pthread_setname_np(pthread_self(), "%s", name)
 #else
 #if (defined __APPLE__ && defined(MAC_OS_X_VERSION_10_7))
 #ifdef __cplusplus
@@ -258,7 +262,7 @@ int pthread_setname_np(const char *name);
 #endif
 
 /* Check if we can use setcpuaffinity(). */
-#if (defined __linux || defined __NetBSD__ || defined __FreeBSD__)
+#if (defined __linux || defined __NetBSD__ || defined __FreeBSD__ || defined __DragonFly__)
 #define USE_SETCPUAFFINITY
 #ifdef __cplusplus
 extern "C"
@@ -168,6 +168,11 @@ static int connSocketWrite(connection *conn, const void *data, size_t data_len)
     int ret = write(conn->fd, data, data_len);
     if (ret < 0 && errno != EAGAIN) {
         conn->last_errno = errno;
+
+        /* Don't overwrite the state of a connection that is not already
+         * connected, not to mess with handler callbacks.
+         */
+        if (conn->state == CONN_STATE_CONNECTED)
             conn->state.store(CONN_STATE_ERROR, std::memory_order_relaxed);
     }
 
@@ -180,6 +185,11 @@ static int connSocketRead(connection *conn, void *buf, size_t buf_len) {
         conn->state.store(CONN_STATE_CLOSED, std::memory_order_release);
     } else if (ret < 0 && errno != EAGAIN) {
         conn->last_errno = errno;
+
+        /* Don't overwrite the state of a connection that is not already
+         * connected, not to mess with handler callbacks.
+         */
+        if (conn->state == CONN_STATE_CONNECTED)
             conn->state.store(CONN_STATE_ERROR, std::memory_order_release);
     }
 
@@ -260,8 +270,9 @@ static void connSocketEventHandler(struct aeEventLoop *el, int fd, void *clientD
     if (conn->state.load(std::memory_order_relaxed) == CONN_STATE_CONNECTING &&
             (mask & AE_WRITABLE) && conn->conn_handler) {
 
-        if (connGetSocketError(conn)) {
-            conn->last_errno = errno;
+        int conn_error = connGetSocketError(conn);
+        if (conn_error) {
+            conn->last_errno = conn_error;
             conn->state.store(CONN_STATE_ERROR, std::memory_order_release);
         } else {
             conn->state.store(CONN_STATE_CONNECTED, std::memory_order_release);
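The last hunk above stops recording the stale thread-local errno when a non-blocking connect fails and stores the value reported by connGetSocketError() instead. A standalone sketch of the getsockopt(SO_ERROR) idiom such a helper is typically built on; the helper below is an assumption for illustration, only the KeyDB call shown in the diff is taken from the source:

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Return the pending error on a socket after a non-blocking connect(2)
     * completes, or 0 if the connection succeeded. The error is not the errno
     * of the polling thread, which is why the handler stores the return value. */
    static int socket_pending_error(int fd) {
        int err = 0;
        socklen_t len = sizeof(err);
        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) == -1)
            return -1; /* getsockopt itself failed; caller may fall back to errno */
        return err;
    }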
@@ -111,7 +111,7 @@ static inline int connAccept(connection *conn, ConnectionCallbackFunc accept_han
 }
 
 /* Establish a connection. The connect_handler will be called when the connection
- * is established, or if an error has occured.
+ * is established, or if an error has occurred.
 *
 * The connection handler will be responsible to set up any read/write handlers
 * as needed.
@@ -173,7 +173,7 @@ static inline int connSetReadHandler(connection *conn, ConnectionCallbackFunc fu
 
 /* Set a write handler, and possibly enable a write barrier, this flag is
 * cleared when write handler is changed or removed.
- * With barroer enabled, we never fire the event if the read handler already
+ * With barrier enabled, we never fire the event if the read handler already
 * fired in the same event loop iteration. Useful when you want to persist
 * things to disk before sending replies, and want to do that in a group fashion. */
 static inline int connSetWriteHandlerWithBarrier(connection *conn, ConnectionCallbackFunc func, int barrier, bool fThreadSafe = false) {
@@ -241,6 +241,7 @@ int connSockName(connection *conn, char *ip, size_t ip_len, int *port);
 const char *connGetInfo(connection *conn, char *buf, size_t buf_len);
 
 /* Helpers for tls special considerations */
+sds connTLSGetPeerCert(connection *conn);
 int tlsHasPendingData();
 int tlsProcessPendingData();
 
@@ -35,7 +35,8 @@ void crcspeed64little_init(crcfn64 crcfn, uint64_t table[8][256]) {
 
     /* generate CRCs for all single byte sequences */
     for (int n = 0; n < 256; n++) {
-        table[0][n] = crcfn(0, &n, 1);
+        unsigned char v = n;
+        table[0][n] = crcfn(0, &v, 1);
     }
 
     /* generate nested CRC table for future slice-by-8 lookup */
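The one-line fix above matters on big-endian machines. Passing &n, where n is a 4-byte int, makes the CRC callback read the first byte of the int's in-memory representation: on little-endian that happens to be the value 0-255, on big-endian it is always zero, which corrupts the lookup table and therefore every CRC64 checksum built from it. Copying the value into an unsigned char first makes the byte read unambiguous. A small illustration of the difference:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        int n = 0xAB;              /* the loop counter in crcspeed64little_init() */
        unsigned char first;

        memcpy(&first, &n, 1);     /* what crcfn(0, &n, 1) actually consumed */
        printf("first byte of the int: 0x%02X\n", first);
        /* little-endian prints 0xAB, big-endian prints 0x00 */

        unsigned char v = (unsigned char)n;                  /* the patched code */
        printf("value passed after the fix: 0x%02X\n", v);   /* 0xAB everywhere */
        return 0;
    }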
src/db.cpp (473 changes)

@@ -35,6 +35,13 @@
 #include <signal.h>
 #include <ctype.h>
 
+/* Database backup. */
+struct dbBackup {
+    redisDb *dbarray;
+    rax *slots_to_keys;
+    uint64_t slots_keys_count[CLUSTER_SLOTS];
+};
+
 /*-----------------------------------------------------------------------------
  * C-level DB API
  *----------------------------------------------------------------------------*/
@@ -86,7 +93,7 @@ void updateDbValAccess(dictEntry *de, int flags)
 * implementations that should instead rely on lookupKeyRead(),
 * lookupKeyWrite() and lookupKeyReadWithFlags(). */
 static robj *lookupKey(redisDb *db, robj *key, int flags) {
-    dictEntry *de = dictFind(db->pdict,ptrFromObj(key));
+    dictEntry *de = dictFind(db->dict,ptrFromObj(key));
     if (de) {
         robj *val = (robj*)dictGetVal(de);
 
@@ -131,11 +138,8 @@ robj_roptr lookupKeyReadWithFlags(redisDb *db, robj *key, int flags) {
         /* Key expired. If we are in the context of a master, expireIfNeeded()
          * returns 0 only when the key does not exist at all, so it's safe
          * to return NULL ASAP. */
-        if (listLength(g_pserver->masters) == 0) {
-            g_pserver->stat_keyspace_misses++;
-            notifyKeyspaceEvent(NOTIFY_KEY_MISS, "keymiss", key, db->id);
-            return NULL;
-        }
+        if (listLength(g_pserver->masters) == 0)
+            goto keymiss;
 
         /* However if we are in the context of a replica, expireIfNeeded() will
          * not really try to expire the key, it only returns information
@@ -145,7 +149,7 @@ robj_roptr lookupKeyReadWithFlags(redisDb *db, robj *key, int flags) {
          * However, if the command caller is not the master, and as additional
          * safety measure, the command invoked is a read-only command, we can
          * safely return NULL here, and provide a more consistent behavior
-         * to clients accessign expired values in a read-only fashion, that
+         * to clients accessing expired values in a read-only fashion, that
          * will say the key as non existing.
          *
          * Notably this covers GETs when slaves are used to scale reads. */
@@ -154,19 +158,21 @@ robj_roptr lookupKeyReadWithFlags(redisDb *db, robj *key, int flags) {
             serverTL->current_client->cmd &&
             serverTL->current_client->cmd->flags & CMD_READONLY)
         {
-            g_pserver->stat_keyspace_misses++;
-            notifyKeyspaceEvent(NOTIFY_KEY_MISS, "keymiss", key, db->id);
-            return NULL;
+            goto keymiss;
         }
     }
     val = lookupKey(db,key,flags);
-    if (val == NULL) {
+    if (val == NULL)
+        goto keymiss;
+    g_pserver->stat_keyspace_hits++;
+    return val;
+
+keymiss:
+    if (!(flags & LOOKUP_NONOTIFY)) {
         g_pserver->stat_keyspace_misses++;
         notifyKeyspaceEvent(NOTIFY_KEY_MISS, "keymiss", key, db->id);
     }
-    else
-        g_pserver->stat_keyspace_hits++;
-    return val;
+    return NULL;
 }
 
 /* Like lookupKeyReadWithFlags(), but does not use any flag, which is the
@@ -205,7 +211,7 @@ robj *lookupKeyWriteOrReply(client *c, robj *key, robj *reply) {
 int dbAddCore(redisDb *db, robj *key, robj *val, bool fUpdateMvcc) {
     serverAssert(!val->FExpires());
     sds copy = sdsdup(szFromObj(key));
-    int retval = dictAdd(db->pdict, copy, val);
+    int retval = dictAdd(db->dict, copy, val);
     uint64_t mvcc = getMvccTstamp();
     if (fUpdateMvcc) {
         setMvccTstamp(key, mvcc);
@@ -263,14 +269,14 @@ void dbOverwriteCore(redisDb *db, dictEntry *de, robj *key, robj *val, bool fUpd
         setMvccTstamp(val, getMvccTstamp());
     }
 
-    dictSetVal(db->pdict, de, val);
+    dictSetVal(db->dict, de, val);
 
     if (g_pserver->lazyfree_lazy_server_del) {
         freeObjAsync(old);
-        dictSetVal(db->pdict, &auxentry, NULL);
+        dictSetVal(db->dict, &auxentry, NULL);
     }
 
-    dictFreeVal(db->pdict, &auxentry);
+    dictFreeVal(db->dict, &auxentry);
 }
 
 /* Overwrite an existing key with a new value. Incrementing the reference
@@ -279,7 +285,7 @@ void dbOverwriteCore(redisDb *db, dictEntry *de, robj *key, robj *val, bool fUpd
 *
 * The program is aborted if the key was not already present. */
 void dbOverwrite(redisDb *db, robj *key, robj *val) {
-    dictEntry *de = dictFind(db->pdict,ptrFromObj(key));
+    dictEntry *de = dictFind(db->dict,ptrFromObj(key));
 
     serverAssertWithInfo(NULL,key,de != NULL);
     dbOverwriteCore(db, de, key, val, !!g_pserver->fActiveReplica, false);
@@ -290,7 +296,7 @@ int dbMerge(redisDb *db, robj *key, robj *val, int fReplace)
 {
     if (fReplace)
     {
-        dictEntry *de = dictFind(db->pdict, ptrFromObj(key));
+        dictEntry *de = dictFind(db->dict, ptrFromObj(key));
         if (de == nullptr)
             return (dbAddCore(db, key, val, false /* fUpdateMvcc */) == DICT_OK);
 
@@ -321,7 +327,7 @@ int dbMerge(redisDb *db, robj *key, robj *val, int fReplace)
 * The client 'c' argument may be set to NULL if the operation is performed
 * in a context where there is no clear client performing the operation. */
 void genericSetKey(client *c, redisDb *db, robj *key, robj *val, int keepttl, int signal) {
-    dictEntry *de = dictFind(db->pdict, ptrFromObj(key));
+    dictEntry *de = dictFind(db->dict, ptrFromObj(key));
     if (de == NULL) {
         dbAdd(db,key,val);
|
dbAdd(db,key,val);
|
||||||
} else {
|
} else {
|
||||||
@ -340,7 +346,7 @@ void setKey(client *c, redisDb *db, robj *key, robj *val) {
|
|||||||
/* Return true if the specified key exists in the specified database.
|
/* Return true if the specified key exists in the specified database.
|
||||||
* LRU/LFU info is not updated in any way. */
|
* LRU/LFU info is not updated in any way. */
|
||||||
int dbExists(redisDb *db, robj *key) {
|
int dbExists(redisDb *db, robj *key) {
|
||||||
return dictFind(db->pdict,ptrFromObj(key)) != NULL;
|
return dictFind(db->dict,ptrFromObj(key)) != NULL;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Return a random key, in form of a Redis object.
|
/* Return a random key, in form of a Redis object.
|
||||||
@ -350,13 +356,13 @@ int dbExists(redisDb *db, robj *key) {
|
|||||||
robj *dbRandomKey(redisDb *db) {
|
robj *dbRandomKey(redisDb *db) {
|
||||||
dictEntry *de;
|
dictEntry *de;
|
||||||
int maxtries = 100;
|
int maxtries = 100;
|
||||||
int allvolatile = dictSize(db->pdict) == db->setexpire->size();
|
int allvolatile = dictSize(db->dict) == db->setexpire->size();
|
||||||
|
|
||||||
while(1) {
|
while(1) {
|
||||||
sds key;
|
sds key;
|
||||||
robj *keyobj;
|
robj *keyobj;
|
||||||
|
|
||||||
de = dictGetRandomKey(db->pdict);
|
de = dictGetRandomKey(db->dict);
|
||||||
if (de == NULL) return NULL;
|
if (de == NULL) return NULL;
|
||||||
|
|
||||||
key = (sds)dictGetKey(de);
|
key = (sds)dictGetKey(de);
|
||||||
@ -394,10 +400,10 @@ int dbSyncDelete(redisDb *db, robj *key) {
|
|||||||
/* Deleting an entry from the expires dict will not free the sds of
|
/* Deleting an entry from the expires dict will not free the sds of
|
||||||
* the key, because it is shared with the main dictionary. */
|
* the key, because it is shared with the main dictionary. */
|
||||||
|
|
||||||
dictEntry *de = dictFind(db->pdict, szFromObj(key));
|
dictEntry *de = dictFind(db->dict, szFromObj(key));
|
||||||
if (de != nullptr && ((robj*)dictGetVal(de))->FExpires())
|
if (de != nullptr && ((robj*)dictGetVal(de))->FExpires())
|
||||||
removeExpireCore(db, key, de);
|
removeExpireCore(db, key, de);
|
||||||
if (dictDelete(db->pdict,ptrFromObj(key)) == DICT_OK) {
|
if (dictDelete(db->dict,ptrFromObj(key)) == DICT_OK) {
|
||||||
if (g_pserver->cluster_enabled) slotToKeyDel(szFromObj(key));
|
if (g_pserver->cluster_enabled) slotToKeyDel(szFromObj(key));
|
||||||
return 1;
|
return 1;
|
||||||
} else {
|
} else {
|
||||||
@ -450,7 +456,42 @@ robj *dbUnshareStringValue(redisDb *db, robj *key, robj *o) {
|
|||||||
return o;
|
return o;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Remove all keys from all the databases in a Redis g_pserver->
|
/* Remove all keys from the database(s) structure. The dbarray argument
|
||||||
|
* may not be the server main DBs (could be a backup).
|
||||||
|
*
|
||||||
|
* The dbnum can be -1 if all the DBs should be emptied, or the specified
|
||||||
|
* DB index if we want to empty only a single database.
|
||||||
|
* The function returns the number of keys removed from the database(s). */
|
||||||
|
long long emptyDbStructure(redisDb *dbarray, int dbnum, int async,
|
||||||
|
void(callback)(void*))
|
||||||
|
{
|
||||||
|
long long removed = 0;
|
||||||
|
int startdb, enddb;
|
||||||
|
|
||||||
|
if (dbnum == -1) {
|
||||||
|
startdb = 0;
|
||||||
|
enddb = cserver.dbnum-1;
|
||||||
|
} else {
|
||||||
|
startdb = enddb = dbnum;
|
||||||
|
}
|
||||||
|
|
||||||
|
for (int j = startdb; j <= enddb; j++) {
|
||||||
|
removed += dictSize(dbarray[j].dict);
|
||||||
|
if (async) {
|
||||||
|
emptyDbAsync(&dbarray[j]);
|
||||||
|
} else {
|
||||||
|
dictEmpty(dbarray[j].dict,callback);
|
||||||
|
dbarray[j].setexpire->clear();
|
||||||
|
}
|
||||||
|
/* Because all keys of database are removed, reset average ttl. */
|
||||||
|
dbarray[j].avg_ttl = 0;
|
||||||
|
dbarray[j].last_expire_set = 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
return removed;
|
||||||
|
}
|
||||||
|
|
||||||
|
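emptyDbStructure above factors the per-database clearing loop out of emptyDb so the same code can wipe either the live DB array or a backup copy, with dbnum == -1 meaning all databases and the return value counting removed keys. A rough standalone sketch of that contract, using placeholder types instead of redisDb:

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    using Db = std::map<std::string, std::string>;

    // Empty one database, or all of them when dbnum == -1; return how many keys were removed.
    long long emptyDbs(std::vector<Db> &dbs, int dbnum) {
        int startdb = 0, enddb = (int)dbs.size() - 1;
        if (dbnum != -1) startdb = enddb = dbnum;   // a single DB index was requested

        long long removed = 0;
        for (int j = startdb; j <= enddb; j++) {
            removed += (long long)dbs[j].size();
            dbs[j].clear();
        }
        return removed;
    }

    int main() {
        std::vector<Db> dbs(4);
        dbs[0]["a"] = "1"; dbs[2]["b"] = "2"; dbs[2]["c"] = "3";
        std::printf("removed from db 2: %lld\n", emptyDbs(dbs, 2));   // 2
        std::printf("removed from all:  %lld\n", emptyDbs(dbs, -1));  // 1
    }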
/* Remove all keys from all the databases in a Redis server.
|
||||||
* If callback is given the function is called from time to time to
|
* If callback is given the function is called from time to time to
|
||||||
* signal that work is in progress.
|
* signal that work is in progress.
|
||||||
*
|
*
|
||||||
@ -458,18 +499,14 @@ robj *dbUnshareStringValue(redisDb *db, robj *key, robj *o) {
|
|||||||
* DB number if we want to flush only a single Redis database number.
|
* DB number if we want to flush only a single Redis database number.
|
||||||
*
|
*
|
||||||
* Flags are be EMPTYDB_NO_FLAGS if no special flags are specified or
|
* Flags are be EMPTYDB_NO_FLAGS if no special flags are specified or
|
||||||
* 1. EMPTYDB_ASYNC if we want the memory to be freed in a different thread.
|
* EMPTYDB_ASYNC if we want the memory to be freed in a different thread
|
||||||
* 2. EMPTYDB_BACKUP if we want to empty the backup dictionaries created by
|
|
||||||
* disklessLoadMakeBackups. In that case we only free memory and avoid
|
|
||||||
* firing module events.
|
|
||||||
* and the function to return ASAP.
|
* and the function to return ASAP.
|
||||||
*
|
*
|
||||||
* On success the fuction returns the number of keys removed from the
|
* On success the function returns the number of keys removed from the
|
||||||
* database(s). Otherwise -1 is returned in the specific case the
|
* database(s). Otherwise -1 is returned in the specific case the
|
||||||
* DB number is out of range, and errno is set to EINVAL. */
|
* DB number is out of range, and errno is set to EINVAL. */
|
||||||
long long emptyDbGeneric(redisDb *dbarray, int dbnum, int flags, void(callback)(void*)) {
|
long long emptyDb(int dbnum, int flags, void(callback)(void*)) {
|
||||||
int async = (flags & EMPTYDB_ASYNC);
|
int async = (flags & EMPTYDB_ASYNC);
|
||||||
int backup = (flags & EMPTYDB_BACKUP); /* Just free the memory, nothing else */
|
|
||||||
RedisModuleFlushInfoV1 fi = {REDISMODULE_FLUSHINFO_VERSION,!async,dbnum};
|
RedisModuleFlushInfoV1 fi = {REDISMODULE_FLUSHINFO_VERSION,!async,dbnum};
|
||||||
long long removed = 0;
|
long long removed = 0;
|
||||||
|
|
||||||
@ -478,8 +515,6 @@ long long emptyDbGeneric(redisDb *dbarray, int dbnum, int flags, void(callback)(
|
|||||||
return -1;
|
return -1;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Pre-flush actions */
|
|
||||||
if (!backup) {
|
|
||||||
/* Fire the flushdb modules event. */
|
/* Fire the flushdb modules event. */
|
||||||
moduleFireServerEvent(REDISMODULE_EVENT_FLUSHDB,
|
moduleFireServerEvent(REDISMODULE_EVENT_FLUSHDB,
|
||||||
REDISMODULE_SUBEVENT_FLUSHDB_START,
|
REDISMODULE_SUBEVENT_FLUSHDB_START,
|
||||||
@ -489,35 +524,15 @@ long long emptyDbGeneric(redisDb *dbarray, int dbnum, int flags, void(callback)(
|
|||||||
* Note that we need to call the function while the keys are still
|
* Note that we need to call the function while the keys are still
|
||||||
* there. */
|
* there. */
|
||||||
signalFlushedDb(dbnum);
|
signalFlushedDb(dbnum);
|
||||||
}
|
|
||||||
|
|
||||||
int startdb, enddb;
|
/* Empty redis database structure. */
|
||||||
if (dbnum == -1) {
|
removed = emptyDbStructure(g_pserver->db, dbnum, async, callback);
|
||||||
startdb = 0;
|
|
||||||
enddb = cserver.dbnum-1;
|
|
||||||
} else {
|
|
||||||
startdb = enddb = dbnum;
|
|
||||||
}
|
|
||||||
|
|
||||||
for (int j = startdb; j <= enddb; j++) {
|
/* Flush slots to keys map if enable cluster, we can flush entire
|
||||||
removed += dictSize(dbarray[j].pdict);
|
* slots to keys map whatever dbnum because only support one DB
|
||||||
if (async) {
|
* in cluster mode. */
|
||||||
emptyDbAsync(&dbarray[j]);
|
if (g_pserver->cluster_enabled) slotToKeyFlush(async);
|
||||||
} else {
|
|
||||||
dictEmpty(dbarray[j].pdict,callback);
|
|
||||||
dbarray[j].setexpire->clear();
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
/* Post-flush actions */
|
|
||||||
if (!backup) {
|
|
||||||
if (g_pserver->cluster_enabled) {
|
|
||||||
if (async) {
|
|
||||||
slotToKeyFlushAsync();
|
|
||||||
} else {
|
|
||||||
slotToKeyFlush();
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if (dbnum == -1) flushSlaveKeysWithExpireList();
|
if (dbnum == -1) flushSlaveKeysWithExpireList();
|
||||||
|
|
||||||
/* Also fire the end event. Note that this event will fire almost
|
/* Also fire the end event. Note that this event will fire almost
|
||||||
@ -525,13 +540,83 @@ long long emptyDbGeneric(redisDb *dbarray, int dbnum, int flags, void(callback)(
|
|||||||
moduleFireServerEvent(REDISMODULE_EVENT_FLUSHDB,
|
moduleFireServerEvent(REDISMODULE_EVENT_FLUSHDB,
|
||||||
REDISMODULE_SUBEVENT_FLUSHDB_END,
|
REDISMODULE_SUBEVENT_FLUSHDB_END,
|
||||||
&fi);
|
&fi);
|
||||||
}
|
|
||||||
|
|
||||||
return removed;
|
return removed;
|
||||||
}
|
}
|
||||||
|
|
||||||
long long emptyDb(int dbnum, int flags, void(callback)(void*)) {
|
/* Store a backup of the database for later use, and put an empty one
|
||||||
return emptyDbGeneric(g_pserver->db, dbnum, flags, callback);
|
* instead of it. */
|
||||||
|
dbBackup *backupDb(void) {
|
||||||
|
dbBackup *backup = (dbBackup*)zmalloc(sizeof(dbBackup));
|
||||||
|
|
||||||
|
/* Backup main DBs. */
|
||||||
|
backup->dbarray = (redisDb*)zmalloc(sizeof(redisDb)*cserver.dbnum);
|
||||||
|
for (int i=0; i<cserver.dbnum; i++) {
|
||||||
|
backup->dbarray[i] = g_pserver->db[i];
|
||||||
|
g_pserver->db[i].dict = dictCreate(&dbDictType,NULL);
|
||||||
|
g_pserver->db[i].setexpire = new(MALLOC_LOCAL) expireset;
|
||||||
|
g_pserver->db[i].expireitr = g_pserver->db[i].setexpire->end();
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Backup cluster slots to keys map if enable cluster. */
|
||||||
|
if (g_pserver->cluster_enabled) {
|
||||||
|
backup->slots_to_keys = g_pserver->cluster->slots_to_keys;
|
||||||
|
memcpy(backup->slots_keys_count, g_pserver->cluster->slots_keys_count,
|
||||||
|
sizeof(g_pserver->cluster->slots_keys_count));
|
||||||
|
g_pserver->cluster->slots_to_keys = raxNew();
|
||||||
|
memset(g_pserver->cluster->slots_keys_count, 0,
|
||||||
|
sizeof(g_pserver->cluster->slots_keys_count));
|
||||||
|
}
|
||||||
|
|
||||||
|
return backup;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Discard a previously created backup, this can be slow (similar to FLUSHALL)
|
||||||
|
* Arguments are similar to the ones of emptyDb, see EMPTYDB_ flags. */
|
||||||
|
void discardDbBackup(dbBackup *buckup, int flags, void(callback)(void*)) {
|
||||||
|
int async = (flags & EMPTYDB_ASYNC);
|
||||||
|
|
||||||
|
/* Release main DBs backup . */
|
||||||
|
emptyDbStructure(buckup->dbarray, -1, async, callback);
|
||||||
|
for (int i=0; i<cserver.dbnum; i++) {
|
||||||
|
dictRelease(buckup->dbarray[i].dict);
|
||||||
|
delete buckup->dbarray[i].setexpire;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Release slots to keys map backup if enable cluster. */
|
||||||
|
if (g_pserver->cluster_enabled) freeSlotsToKeysMap(buckup->slots_to_keys, async);
|
||||||
|
|
||||||
|
/* Release buckup. */
|
||||||
|
zfree(buckup->dbarray);
|
||||||
|
zfree(buckup);
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Restore the previously created backup (discarding what currently resides
|
||||||
|
* in the db).
|
||||||
|
* This function should be called after the current contents of the database
|
||||||
|
* was emptied with a previous call to emptyDb (possibly using the async mode). */
|
||||||
|
void restoreDbBackup(dbBackup *buckup) {
|
||||||
|
/* Restore main DBs. */
|
||||||
|
for (int i=0; i<cserver.dbnum; i++) {
|
||||||
|
serverAssert(dictSize(g_pserver->db[i].dict) == 0);
|
||||||
|
serverAssert(g_pserver->db[i].setexpire->empty());
|
||||||
|
dictRelease(g_pserver->db[i].dict);
|
||||||
|
delete g_pserver->db[i].setexpire;
|
||||||
|
g_pserver->db[i] = buckup->dbarray[i];
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Restore slots to keys map backup if enable cluster. */
|
||||||
|
if (g_pserver->cluster_enabled) {
|
||||||
|
serverAssert(g_pserver->cluster->slots_to_keys->numele == 0);
|
||||||
|
raxFree(g_pserver->cluster->slots_to_keys);
|
||||||
|
g_pserver->cluster->slots_to_keys = buckup->slots_to_keys;
|
||||||
|
memcpy(g_pserver->cluster->slots_keys_count, buckup->slots_keys_count,
|
||||||
|
sizeof(g_pserver->cluster->slots_keys_count));
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Release buckup. */
|
||||||
|
zfree(buckup->dbarray);
|
||||||
|
zfree(buckup);
|
||||||
}
|
}
|
||||||
|
|
||||||
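backupDb, discardDbBackup and restoreDbBackup above replace the old EMPTYDB_BACKUP flag: the live per-DB dictionaries are moved into a dbBackup object and fresh empty ones are installed, so a failed diskless load can later put the originals back. A simplified sketch of that steal-and-restore idea, using standard containers rather than the server structures:

    #include <cassert>
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    using Db = std::map<std::string, std::string>;

    struct DbBackup {
        std::vector<std::unique_ptr<Db>> dbs;   // the stolen live dictionaries
    };

    // Move the current databases into a backup and leave empty ones in their place.
    DbBackup backupDbs(std::vector<std::unique_ptr<Db>> &live) {
        DbBackup backup;
        for (auto &db : live) {
            backup.dbs.push_back(std::move(db));   // steal the pointer
            db = std::make_unique<Db>();           // install a fresh empty DB
        }
        return backup;
    }

    // Put the backed-up databases back; the current ones must already be empty.
    void restoreDbs(std::vector<std::unique_ptr<Db>> &live, DbBackup &backup) {
        for (size_t i = 0; i < live.size(); i++) {
            assert(live[i]->empty());
            live[i] = std::move(backup.dbs[i]);
        }
    }

    int main() {
        std::vector<std::unique_ptr<Db>> live;
        live.push_back(std::make_unique<Db>());
        (*live[0])["key"] = "value";

        DbBackup backup = backupDbs(live);   // live[0] is now empty
        assert(live[0]->empty());
        restoreDbs(live, backup);            // original contents are back
        assert(live[0]->count("key") == 1);
    }

As in the diff, the backup owns the old structures outright, so discarding it is just a (possibly async) free rather than a key-by-key copy.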
int selectDb(client *c, int id) {
|
int selectDb(client *c, int id) {
|
||||||
@ -545,7 +630,7 @@ long long dbTotalServerKeyCount() {
|
|||||||
long long total = 0;
|
long long total = 0;
|
||||||
int j;
|
int j;
|
||||||
for (j = 0; j < cserver.dbnum; j++) {
|
for (j = 0; j < cserver.dbnum; j++) {
|
||||||
total += dictSize(g_pserver->db[j].pdict);
|
total += dictSize(g_pserver->db[j].dict);
|
||||||
}
|
}
|
||||||
return total;
|
return total;
|
||||||
}
|
}
|
||||||
@ -567,7 +652,18 @@ void signalModifiedKey(client *c, redisDb *db, robj *key) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
void signalFlushedDb(int dbid) {
|
void signalFlushedDb(int dbid) {
|
||||||
touchWatchedKeysOnFlush(dbid);
|
int startdb, enddb;
|
||||||
|
if (dbid == -1) {
|
||||||
|
startdb = 0;
|
||||||
|
enddb = cserver.dbnum-1;
|
||||||
|
} else {
|
||||||
|
startdb = enddb = dbid;
|
||||||
|
}
|
||||||
|
|
||||||
|
for (int j = startdb; j <= enddb; j++) {
|
||||||
|
touchAllWatchedKeysInDb(&g_pserver->db[j], NULL);
|
||||||
|
}
|
||||||
|
|
||||||
trackingInvalidateKeysOnFlush(dbid);
|
trackingInvalidateKeysOnFlush(dbid);
|
||||||
}
|
}
|
||||||
|
|
||||||
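signalFlushedDb now walks the affected database range and touches every WATCHed key there via touchAllWatchedKeysInDb, so a FLUSHDB or FLUSHALL aborts any open MULTI that watched keys in the flushed DBs, in line with the SWAPDB change later in this diff. A toy sketch of touching all watched keys of one DB over a simple watch table; all names here are made up for illustration:

    #include <cstdio>
    #include <map>
    #include <set>
    #include <string>

    struct WatchingClient {
        std::string name;
        bool dirty_cas = false;   // once set, the client's EXEC will fail
    };

    // key -> clients watching that key, for one database.
    using WatchTable = std::map<std::string, std::set<WatchingClient *>>;

    // Mark every client watching any key of this DB, as a flush (or swap) would.
    void touchAllWatchedKeys(WatchTable &watched) {
        for (auto &entry : watched)
            for (WatchingClient *c : entry.second)
                c->dirty_cas = true;
    }

    int main() {
        WatchingClient a, b;
        a.name = "a"; b.name = "b";
        WatchTable watched;
        watched["foo"].insert(&a);
        watched["bar"].insert(&b);

        touchAllWatchedKeys(watched);   // FLUSHDB on this database
        std::printf("a dirty=%d b dirty=%d\n", a.dirty_cas, b.dirty_cas);  // both 1
    }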
@ -682,7 +778,7 @@ void existsCommand(client *c) {
|
|||||||
int j;
|
int j;
|
||||||
|
|
||||||
for (j = 1; j < c->argc; j++) {
|
for (j = 1; j < c->argc; j++) {
|
||||||
if (lookupKeyRead(c->db,c->argv[j])) count++;
|
if (lookupKeyReadWithFlags(c->db,c->argv[j],LOOKUP_NOTOUCH)) count++;
|
||||||
}
|
}
|
||||||
addReplyLongLong(c,count);
|
addReplyLongLong(c,count);
|
||||||
}
|
}
|
||||||
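The one-line change above is the 6.0.10 item "EXISTS should not alter LRU": the lookup is done with LOOKUP_NOTOUCH, so checking for a key no longer counts as an access. A toy illustration of a lookup flag that skips the recency update; the cache here is invented for the example:

    #include <cstdio>
    #include <string>
    #include <unordered_map>

    constexpr int LOOKUP_NOTOUCH_FLAG = 1 << 0;   // illustrative, not the real constant

    struct Entry {
        std::string value;
        long last_access = 0;   // stand-in for the LRU/LFU clock
    };

    struct MiniCache {
        std::unordered_map<std::string, Entry> map;
        long clock = 0;

        // Return the entry if present; bump its access time unless NOTOUCH is given.
        Entry *lookup(const std::string &key, int flags) {
            auto it = map.find(key);
            if (it == map.end()) return nullptr;
            if (!(flags & LOOKUP_NOTOUCH_FLAG)) it->second.last_access = ++clock;
            return &it->second;
        }
    };

    int main() {
        MiniCache c;
        c.map["foo"].value = "bar";

        c.lookup("foo", LOOKUP_NOTOUCH_FLAG);   // an EXISTS-style check: recency untouched
        std::printf("after exists: %ld\n", c.map["foo"].last_access);  // 0
        c.lookup("foo", 0);                      // a GET-style read: recency updated
        std::printf("after get:    %ld\n", c.map["foo"].last_access);  // 1
    }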
@ -732,7 +828,7 @@ void keysCommand(client *c) {
|
|||||||
unsigned long numkeys = 0;
|
unsigned long numkeys = 0;
|
||||||
void *replylen = addReplyDeferredLen(c);
|
void *replylen = addReplyDeferredLen(c);
|
||||||
|
|
||||||
di = dictGetSafeIterator(c->db->pdict);
|
di = dictGetSafeIterator(c->db->dict);
|
||||||
allkeys = (pattern[0] == '*' && plen == 1);
|
allkeys = (pattern[0] == '*' && plen == 1);
|
||||||
while((de = dictNext(di)) != NULL) {
|
while((de = dictNext(di)) != NULL) {
|
||||||
sds key = (sds)dictGetKey(de);
|
sds key = (sds)dictGetKey(de);
|
||||||
@ -876,7 +972,7 @@ void scanGenericCommand(client *c, robj_roptr o, unsigned long cursor) {
|
|||||||
/* Handle the case of a hash table. */
|
/* Handle the case of a hash table. */
|
||||||
ht = NULL;
|
ht = NULL;
|
||||||
if (o == nullptr) {
|
if (o == nullptr) {
|
||||||
ht = c->db->pdict;
|
ht = c->db->dict;
|
||||||
} else if (o->type == OBJ_SET && o->encoding == OBJ_ENCODING_HT) {
|
} else if (o->type == OBJ_SET && o->encoding == OBJ_ENCODING_HT) {
|
||||||
ht = (dict*)ptrFromObj(o);
|
ht = (dict*)ptrFromObj(o);
|
||||||
} else if (o->type == OBJ_HASH && o->encoding == OBJ_ENCODING_HT) {
|
} else if (o->type == OBJ_HASH && o->encoding == OBJ_ENCODING_HT) {
|
||||||
@ -884,7 +980,7 @@ void scanGenericCommand(client *c, robj_roptr o, unsigned long cursor) {
|
|||||||
count *= 2; /* We return key / value for this type. */
|
count *= 2; /* We return key / value for this type. */
|
||||||
} else if (o->type == OBJ_ZSET && o->encoding == OBJ_ENCODING_SKIPLIST) {
|
} else if (o->type == OBJ_ZSET && o->encoding == OBJ_ENCODING_SKIPLIST) {
|
||||||
zset *zs = (zset*)ptrFromObj(o);
|
zset *zs = (zset*)ptrFromObj(o);
|
||||||
ht = zs->pdict;
|
ht = zs->dict;
|
||||||
count *= 2; /* We return key / value for this type. */
|
count *= 2; /* We return key / value for this type. */
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -963,7 +1059,7 @@ void scanGenericCommand(client *c, robj_roptr o, unsigned long cursor) {
|
|||||||
/* Filter element if it is an expired key. */
|
/* Filter element if it is an expired key. */
|
||||||
if (!filter && o == nullptr && expireIfNeeded(c->db, kobj)) filter = 1;
|
if (!filter && o == nullptr && expireIfNeeded(c->db, kobj)) filter = 1;
|
||||||
|
|
||||||
/* Remove the element and its associted value if needed. */
|
/* Remove the element and its associated value if needed. */
|
||||||
if (filter) {
|
if (filter) {
|
||||||
decrRefCount(kobj);
|
decrRefCount(kobj);
|
||||||
listDelNode(keys, node);
|
listDelNode(keys, node);
|
||||||
@ -1009,7 +1105,7 @@ void scanCommand(client *c) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
void dbsizeCommand(client *c) {
|
void dbsizeCommand(client *c) {
|
||||||
addReplyLongLong(c,dictSize(c->db->pdict));
|
addReplyLongLong(c,dictSize(c->db->dict));
|
||||||
}
|
}
|
||||||
|
|
||||||
void lastsaveCommand(client *c) {
|
void lastsaveCommand(client *c) {
|
||||||
@ -1222,23 +1318,16 @@ int dbSwapDatabases(long id1, long id2) {
|
|||||||
if (id1 < 0 || id1 >= cserver.dbnum ||
|
if (id1 < 0 || id1 >= cserver.dbnum ||
|
||||||
id2 < 0 || id2 >= cserver.dbnum) return C_ERR;
|
id2 < 0 || id2 >= cserver.dbnum) return C_ERR;
|
||||||
if (id1 == id2) return C_OK;
|
if (id1 == id2) return C_OK;
|
||||||
redisDb aux(g_pserver->db[id1]);
|
|
||||||
redisDb *db1 = &g_pserver->db[id1], *db2 = &g_pserver->db[id2];
|
redisDb *db1 = &g_pserver->db[id1], *db2 = &g_pserver->db[id2];
|
||||||
|
|
||||||
/* Swap hash tables. Note that we don't swap blocking_keys,
|
/* Swap hash tables. Note that we don't swap blocking_keys,
|
||||||
* ready_keys and watched_keys, since we want clients to
|
* ready_keys and watched_keys, since we want clients to
|
||||||
* remain in the same DB they were. */
|
* remain in the same DB they were. */
|
||||||
db1->pdict = db2->pdict;
|
std::swap(db1->dict, db2->dict);
|
||||||
db1->setexpire = db2->setexpire;
|
std::swap(db1->setexpire, db2->setexpire);
|
||||||
db1->expireitr = db2->expireitr;
|
std::swap(db1->expireitr, db2->expireitr);
|
||||||
db1->avg_ttl = db2->avg_ttl;
|
std::swap(db1->avg_ttl, db2->avg_ttl);
|
||||||
db1->last_expire_set = db2->last_expire_set;
|
std::swap(db1->last_expire_set, db2->last_expire_set);
|
||||||
|
|
||||||
db2->pdict = aux.pdict;
|
|
||||||
db2->setexpire = aux.setexpire;
|
|
||||||
db2->expireitr = aux.expireitr;
|
|
||||||
db2->avg_ttl = aux.avg_ttl;
|
|
||||||
db2->last_expire_set = aux.last_expire_set;
|
|
||||||
|
|
||||||
/* Now we need to handle clients blocked on lists: as an effect
|
/* Now we need to handle clients blocked on lists: as an effect
|
||||||
* of swapping the two DBs, a client that was waiting for list
|
* of swapping the two DBs, a client that was waiting for list
|
||||||
@ -1248,9 +1337,14 @@ int dbSwapDatabases(long id1, long id2) {
|
|||||||
* However normally we only do this check for efficiency reasons
|
* However normally we only do this check for efficiency reasons
|
||||||
* in dbAdd() when a list is created. So here we need to rescan
|
* in dbAdd() when a list is created. So here we need to rescan
|
||||||
* the list of clients blocked on lists and signal lists as ready
|
* the list of clients blocked on lists and signal lists as ready
|
||||||
* if needed. */
|
* if needed.
|
||||||
|
*
|
||||||
|
* Also the swapdb should make transaction fail if there is any
|
||||||
|
* client watching keys */
|
||||||
scanDatabaseForReadyLists(db1);
|
scanDatabaseForReadyLists(db1);
|
||||||
|
touchAllWatchedKeysInDb(db1, db2);
|
||||||
scanDatabaseForReadyLists(db2);
|
scanDatabaseForReadyLists(db2);
|
||||||
|
touchAllWatchedKeysInDb(db2, db1);
|
||||||
return C_OK;
|
return C_OK;
|
||||||
}
|
}
|
||||||
|
|
||||||
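The SWAPDB hunk replaces the manual aux copy of the five per-database fields with std::swap and additionally touches watched keys in both databases, which is the "SWAPDB invalidates WATCHed keys" fix from the release notes. A small sketch of why std::swap is the tidier form for a field-by-field exchange; the struct and fields are illustrative, not the real redisDb layout:

    #include <cstdio>
    #include <map>
    #include <string>
    #include <utility>   // std::swap

    struct MiniDb {
        std::map<std::string, std::string> dict;
        long long avg_ttl = 0;
    };

    // Exchange the payload of two databases without an explicit aux copy.
    void swapDbs(MiniDb &a, MiniDb &b) {
        std::swap(a.dict, b.dict);
        std::swap(a.avg_ttl, b.avg_ttl);
    }

    int main() {
        MiniDb db0, db1;
        db0.dict["only-in-0"] = "x";
        db1.avg_ttl = 42;

        swapDbs(db0, db1);
        std::printf("db1 has key: %zu, db0 avg_ttl: %lld\n",
                    db1.dict.count("only-in-0"), db0.avg_ttl);  // 1 and 42
    }

Each field is stated exactly once, so adding or removing a per-DB field cannot leave a half-swapped pair behind, which is easy to do with the aux-variable style being removed here.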
@ -1278,6 +1372,8 @@ void swapdbCommand(client *c) {
|
|||||||
addReplyError(c,"DB index is out of range");
|
addReplyError(c,"DB index is out of range");
|
||||||
return;
|
return;
|
||||||
} else {
|
} else {
|
||||||
|
RedisModuleSwapDbInfo si = {REDISMODULE_SWAPDBINFO_VERSION,(int32_t)id1,(int32_t)id2};
|
||||||
|
moduleFireServerEvent(REDISMODULE_EVENT_SWAPDB,0,&si);
|
||||||
g_pserver->dirty++;
|
g_pserver->dirty++;
|
||||||
addReply(c,shared.ok);
|
addReply(c,shared.ok);
|
||||||
}
|
}
|
||||||
@ -1287,7 +1383,7 @@ void swapdbCommand(client *c) {
|
|||||||
* Expires API
|
* Expires API
|
||||||
*----------------------------------------------------------------------------*/
|
*----------------------------------------------------------------------------*/
|
||||||
int removeExpire(redisDb *db, robj *key) {
|
int removeExpire(redisDb *db, robj *key) {
|
||||||
dictEntry *de = dictFind(db->pdict,ptrFromObj(key));
|
dictEntry *de = dictFind(db->dict,ptrFromObj(key));
|
||||||
return removeExpireCore(db, key, de);
|
return removeExpireCore(db, key, de);
|
||||||
}
|
}
|
||||||
int removeExpireCore(redisDb *db, robj *key, dictEntry *de) {
|
int removeExpireCore(redisDb *db, robj *key, dictEntry *de) {
|
||||||
@ -1308,7 +1404,7 @@ int removeExpireCore(redisDb *db, robj *key, dictEntry *de) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
int removeSubkeyExpire(redisDb *db, robj *key, robj *subkey) {
|
int removeSubkeyExpire(redisDb *db, robj *key, robj *subkey) {
|
||||||
dictEntry *de = dictFind(db->pdict,ptrFromObj(key));
|
dictEntry *de = dictFind(db->dict,ptrFromObj(key));
|
||||||
serverAssertWithInfo(NULL,key,de != NULL);
|
serverAssertWithInfo(NULL,key,de != NULL);
|
||||||
|
|
||||||
robj *val = (robj*)dictGetVal(de);
|
robj *val = (robj*)dictGetVal(de);
|
||||||
@ -1350,13 +1446,13 @@ void setExpire(client *c, redisDb *db, robj *key, robj *subkey, long long when)
|
|||||||
serverAssert(GlobalLocksAcquired());
|
serverAssert(GlobalLocksAcquired());
|
||||||
|
|
||||||
/* Reuse the sds from the main dict in the expire dict */
|
/* Reuse the sds from the main dict in the expire dict */
|
||||||
kde = dictFind(db->pdict,ptrFromObj(key));
|
kde = dictFind(db->dict,ptrFromObj(key));
|
||||||
serverAssertWithInfo(NULL,key,kde != NULL);
|
serverAssertWithInfo(NULL,key,kde != NULL);
|
||||||
|
|
||||||
if (((robj*)dictGetVal(kde))->getrefcount(std::memory_order_relaxed) == OBJ_SHARED_REFCOUNT)
|
if (((robj*)dictGetVal(kde))->getrefcount(std::memory_order_relaxed) == OBJ_SHARED_REFCOUNT)
|
||||||
{
|
{
|
||||||
// shared objects cannot have the expire bit set, create a real object
|
// shared objects cannot have the expire bit set, create a real object
|
||||||
dictSetVal(db->pdict, kde, dupStringObject((robj*)dictGetVal(kde)));
|
dictSetVal(db->dict, kde, dupStringObject((robj*)dictGetVal(kde)));
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Update TTL stats (exponential moving average) */
|
/* Update TTL stats (exponential moving average) */
|
||||||
@ -1409,13 +1505,13 @@ void setExpire(client *c, redisDb *db, robj *key, expireEntry &&e)
|
|||||||
serverAssert(GlobalLocksAcquired());
|
serverAssert(GlobalLocksAcquired());
|
||||||
|
|
||||||
/* Reuse the sds from the main dict in the expire dict */
|
/* Reuse the sds from the main dict in the expire dict */
|
||||||
kde = dictFind(db->pdict,ptrFromObj(key));
|
kde = dictFind(db->dict,ptrFromObj(key));
|
||||||
serverAssertWithInfo(NULL,key,kde != NULL);
|
serverAssertWithInfo(NULL,key,kde != NULL);
|
||||||
|
|
||||||
if (((robj*)dictGetVal(kde))->getrefcount(std::memory_order_relaxed) == OBJ_SHARED_REFCOUNT)
|
if (((robj*)dictGetVal(kde))->getrefcount(std::memory_order_relaxed) == OBJ_SHARED_REFCOUNT)
|
||||||
{
|
{
|
||||||
// shared objects cannot have the expire bit set, create a real object
|
// shared objects cannot have the expire bit set, create a real object
|
||||||
dictSetVal(db->pdict, kde, dupStringObject((robj*)dictGetVal(kde)));
|
dictSetVal(db->dict, kde, dupStringObject((robj*)dictGetVal(kde)));
|
||||||
}
|
}
|
||||||
|
|
||||||
if (((robj*)dictGetVal(kde))->FExpires())
|
if (((robj*)dictGetVal(kde))->FExpires())
|
||||||
@ -1440,7 +1536,7 @@ expireEntry *getExpire(redisDb *db, robj_roptr key) {
|
|||||||
if (db->setexpire->size() == 0)
|
if (db->setexpire->size() == 0)
|
||||||
return nullptr;
|
return nullptr;
|
||||||
|
|
||||||
de = dictFind(db->pdict, ptrFromObj(key));
|
de = dictFind(db->dict, ptrFromObj(key));
|
||||||
if (de == NULL)
|
if (de == NULL)
|
||||||
return nullptr;
|
return nullptr;
|
||||||
robj *obj = (robj*)dictGetVal(de);
|
robj *obj = (robj*)dictGetVal(de);
|
||||||
@ -1617,27 +1713,54 @@ int expireIfNeeded(redisDb *db, robj *key) {
|
|||||||
/* -----------------------------------------------------------------------------
|
/* -----------------------------------------------------------------------------
|
||||||
* API to get key arguments from commands
|
* API to get key arguments from commands
|
||||||
* ---------------------------------------------------------------------------*/
|
* ---------------------------------------------------------------------------*/
|
||||||
#define MAX_KEYS_BUFFER 256
|
|
||||||
thread_local static int getKeysTempBuffer[MAX_KEYS_BUFFER];
|
/* Prepare the getKeysResult struct to hold numkeys, either by using the
|
||||||
|
* pre-allocated keysbuf or by allocating a new array on the heap.
|
||||||
|
*
|
||||||
|
* This function must be called at least once before starting to populate
|
||||||
|
* the result, and can be called repeatedly to enlarge the result array.
|
||||||
|
*/
|
||||||
|
int *getKeysPrepareResult(getKeysResult *result, int numkeys) {
|
||||||
|
/* GETKEYS_RESULT_INIT initializes keys to NULL, point it to the pre-allocated stack
|
||||||
|
* buffer here. */
|
||||||
|
if (!result->keys) {
|
||||||
|
serverAssert(!result->numkeys);
|
||||||
|
result->keys = result->keysbuf;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Resize if necessary */
|
||||||
|
if (numkeys > result->size) {
|
||||||
|
if (result->keys != result->keysbuf) {
|
||||||
|
/* We're not using a static buffer, just (re)alloc */
|
||||||
|
result->keys = (int*)zrealloc(result->keys, numkeys * sizeof(int));
|
||||||
|
} else {
|
||||||
|
/* We are using a static buffer, copy its contents */
|
||||||
|
result->keys = (int*)zmalloc(numkeys * sizeof(int));
|
||||||
|
if (result->numkeys)
|
||||||
|
memcpy(result->keys, result->keysbuf, result->numkeys * sizeof(int));
|
||||||
|
}
|
||||||
|
result->size = numkeys;
|
||||||
|
}
|
||||||
|
|
||||||
|
return result->keys;
|
||||||
|
}
|
||||||
|
|
||||||
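getKeysPrepareResult above is the core of the new getKeysResult API: key positions start out in a small fixed buffer embedded in the struct and are moved to (or regrown on) the heap only when a command turns out to have more keys than fit, replacing the old thread-local getKeysTempBuffer plus ad-hoc zmalloc calls in every helper below. A standalone sketch of that small-buffer pattern; sizes and names are invented for the example:

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    constexpr int KEYS_BUF_SIZE = 8;   // stand-in for a MAX_KEYS_BUFFER-style capacity

    struct KeysResult {
        int numkeys = 0;
        int size = KEYS_BUF_SIZE;
        int *keys = nullptr;              // points at keysbuf or at a heap block
        int keysbuf[KEYS_BUF_SIZE];
    };

    // Ensure 'result' can hold at least 'numkeys' entries, growing onto the heap if needed.
    int *prepareResult(KeysResult *result, int numkeys) {
        if (!result->keys) result->keys = result->keysbuf;   // first use: embedded buffer

        if (numkeys > result->size) {
            if (result->keys != result->keysbuf) {
                result->keys = (int *)std::realloc(result->keys, numkeys * sizeof(int));
            } else {
                // Leaving the embedded buffer: allocate and copy what is already there.
                int *heap = (int *)std::malloc(numkeys * sizeof(int));
                std::memcpy(heap, result->keysbuf, result->numkeys * sizeof(int));
                result->keys = heap;
            }
            result->size = numkeys;
        }
        return result->keys;
    }

    void freeResult(KeysResult *result) {
        if (result->keys && result->keys != result->keysbuf) std::free(result->keys);
    }

    int main() {
        KeysResult r;
        int *keys = prepareResult(&r, 3);          // fits in the embedded buffer
        for (int i = 0; i < 3; i++) keys[i] = i + 1;
        r.numkeys = 3;

        keys = prepareResult(&r, 20);              // grows onto the heap, keeps old entries
        std::printf("still there: %d %d %d\n", keys[0], keys[1], keys[2]);
        freeResult(&r);
    }

The common case (few keys) never allocates, and the free function only releases memory when the result actually left the embedded buffer, which is exactly the guard getKeysFreeResult gains further down.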
/* The base case is to use the keys position as given in the command table
|
/* The base case is to use the keys position as given in the command table
|
||||||
* (firstkey, lastkey, step). */
|
* (firstkey, lastkey, step). */
|
||||||
int *getKeysUsingCommandTable(struct redisCommand *cmd,robj **argv, int argc, int *numkeys) {
|
int getKeysUsingCommandTable(struct redisCommand *cmd,robj **argv, int argc, getKeysResult *result) {
|
||||||
int j, i = 0, last, *keys;
|
int j, i = 0, last, *keys;
|
||||||
UNUSED(argv);
|
UNUSED(argv);
|
||||||
|
|
||||||
if (cmd->firstkey == 0) {
|
if (cmd->firstkey == 0) {
|
||||||
*numkeys = 0;
|
result->numkeys = 0;
|
||||||
return NULL;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
last = cmd->lastkey;
|
last = cmd->lastkey;
|
||||||
if (last < 0) last = argc+last;
|
if (last < 0) last = argc+last;
|
||||||
|
|
||||||
int count = ((last - cmd->firstkey)+1);
|
int count = ((last - cmd->firstkey)+1);
|
||||||
keys = getKeysTempBuffer;
|
keys = getKeysPrepareResult(result, count);
|
||||||
if (count > MAX_KEYS_BUFFER)
|
|
||||||
keys = (int*)zmalloc(sizeof(int)*count);
|
|
||||||
|
|
||||||
for (j = cmd->firstkey; j <= last; j += cmd->keystep) {
|
for (j = cmd->firstkey; j <= last; j += cmd->keystep) {
|
||||||
if (j >= argc) {
|
if (j >= argc) {
|
||||||
@ -1648,23 +1771,23 @@ int *getKeysUsingCommandTable(struct redisCommand *cmd,robj **argv, int argc, in
|
|||||||
* return no keys and expect the command implementation to report
|
* return no keys and expect the command implementation to report
|
||||||
* an arity or syntax error. */
|
* an arity or syntax error. */
|
||||||
if (cmd->flags & CMD_MODULE || cmd->arity < 0) {
|
if (cmd->flags & CMD_MODULE || cmd->arity < 0) {
|
||||||
getKeysFreeResult(keys);
|
getKeysFreeResult(result);
|
||||||
*numkeys = 0;
|
result->numkeys = 0;
|
||||||
return NULL;
|
return 0;
|
||||||
} else {
|
} else {
|
||||||
serverPanic("Redis built-in command declared keys positions not matching the arity requirements.");
|
serverPanic("Redis built-in command declared keys positions not matching the arity requirements.");
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
keys[i++] = j;
|
keys[i++] = j;
|
||||||
}
|
}
|
||||||
*numkeys = i;
|
result->numkeys = i;
|
||||||
return keys;
|
return i;
|
||||||
}
|
}
|
||||||
|
|
||||||
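getKeysUsingCommandTable keeps the same firstkey/lastkey/keystep walk as before; only how the positions are stored and returned changes. For reference, a standalone sketch of that walk over an argv, with invented command metadata (an MSET-style key value layout: firstkey=1, lastkey=-1, step=2):

    #include <cstdio>
    #include <string>
    #include <vector>

    // Collect argv positions of keys given command-table style metadata.
    // lastkey may be negative, meaning "count from the end of argv".
    std::vector<int> keyPositions(int argc, int firstkey, int lastkey, int step) {
        std::vector<int> keys;
        if (firstkey == 0) return keys;              // command takes no keys
        int last = lastkey < 0 ? argc + lastkey : lastkey;
        for (int j = firstkey; j <= last && j < argc; j += step)
            keys.push_back(j);
        return keys;
    }

    int main() {
        // Hypothetical "MSET k1 v1 k2 v2": firstkey=1, lastkey=-1, keystep=2.
        std::vector<std::string> argv = {"MSET", "k1", "v1", "k2", "v2"};
        for (int pos : keyPositions((int)argv.size(), 1, -1, 2))
            std::printf("key at argv[%d] = %s\n", pos, argv[pos].c_str());  // k1, k2
    }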
/* Return all the arguments that are keys in the command passed via argc / argv.
|
/* Return all the arguments that are keys in the command passed via argc / argv.
|
||||||
*
|
*
|
||||||
* The command returns the positions of all the key arguments inside the array,
|
* The command returns the positions of all the key arguments inside the array,
|
||||||
* so the actual return value is an heap allocated array of integers. The
|
* so the actual return value is a heap allocated array of integers. The
|
||||||
* length of the array is returned by reference into *numkeys.
|
* length of the array is returned by reference into *numkeys.
|
||||||
*
|
*
|
||||||
* 'cmd' must be point to the corresponding entry into the redisCommand
|
* 'cmd' must be point to the corresponding entry into the redisCommand
|
||||||
@ -1672,26 +1795,26 @@ int *getKeysUsingCommandTable(struct redisCommand *cmd,robj **argv, int argc, in
|
|||||||
*
|
*
|
||||||
* This function uses the command table if a command-specific helper function
|
* This function uses the command table if a command-specific helper function
|
||||||
* is not required, otherwise it calls the command-specific function. */
|
* is not required, otherwise it calls the command-specific function. */
|
||||||
int *getKeysFromCommand(struct redisCommand *cmd, robj **argv, int argc, int *numkeys) {
|
int getKeysFromCommand(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
||||||
if (cmd->flags & CMD_MODULE_GETKEYS) {
|
if (cmd->flags & CMD_MODULE_GETKEYS) {
|
||||||
return moduleGetCommandKeysViaAPI(cmd,argv,argc,numkeys);
|
return moduleGetCommandKeysViaAPI(cmd,argv,argc,result);
|
||||||
} else if (!(cmd->flags & CMD_MODULE) && cmd->getkeys_proc) {
|
} else if (!(cmd->flags & CMD_MODULE) && cmd->getkeys_proc) {
|
||||||
return cmd->getkeys_proc(cmd,argv,argc,numkeys);
|
return cmd->getkeys_proc(cmd,argv,argc,result);
|
||||||
} else {
|
} else {
|
||||||
return getKeysUsingCommandTable(cmd,argv,argc,numkeys);
|
return getKeysUsingCommandTable(cmd,argv,argc,result);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Free the result of getKeysFromCommand. */
|
/* Free the result of getKeysFromCommand. */
|
||||||
void getKeysFreeResult(int *result) {
|
void getKeysFreeResult(getKeysResult *result) {
|
||||||
if (result != getKeysTempBuffer)
|
if (result && result->keys != result->keysbuf)
|
||||||
zfree(result);
|
zfree(result->keys);
|
||||||
}
|
}
|
||||||
|
|
||||||
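getKeysFromCommand and getKeysFreeResult above complete the API change: callers now pass a getKeysResult in and out instead of juggling a returned pointer plus a numkeys out-parameter, and the free step is safe to call whether or not the result ever grew past the embedded buffer. The helpers that follow (ZUNIONSTORE/ZINTERSTORE, EVAL, SORT, MIGRATE, GEORADIUS, LCS, MEMORY, XREAD) are all mechanical conversions to this calling convention, so the sketch after getKeysPrepareResult earlier covers their allocation behavior as well.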
/* Helper function to extract keys from following commands:
|
/* Helper function to extract keys from following commands:
|
||||||
* ZUNIONSTORE <destkey> <num-keys> <key> <key> ... <key> <options>
|
* ZUNIONSTORE <destkey> <num-keys> <key> <key> ... <key> <options>
|
||||||
* ZINTERSTORE <destkey> <num-keys> <key> <key> ... <key> <options> */
|
* ZINTERSTORE <destkey> <num-keys> <key> <key> ... <key> <options> */
|
||||||
int *zunionInterGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys) {
|
int zunionInterGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
||||||
int i, num, *keys;
|
int i, num, *keys;
|
||||||
UNUSED(cmd);
|
UNUSED(cmd);
|
||||||
|
|
||||||
@ -1699,30 +1822,30 @@ int *zunionInterGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *nu
|
|||||||
/* Sanity check. Don't return any key if the command is going to
|
/* Sanity check. Don't return any key if the command is going to
|
||||||
* reply with syntax error. */
|
* reply with syntax error. */
|
||||||
if (num < 1 || num > (argc-3)) {
|
if (num < 1 || num > (argc-3)) {
|
||||||
*numkeys = 0;
|
result->numkeys = 0;
|
||||||
return NULL;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Keys in z{union,inter}store come from two places:
|
/* Keys in z{union,inter}store come from two places:
|
||||||
* argv[1] = storage key,
|
* argv[1] = storage key,
|
||||||
* argv[3...n] = keys to intersect */
|
* argv[3...n] = keys to intersect */
|
||||||
keys = getKeysTempBuffer;
|
/* Total keys = {union,inter} keys + storage key */
|
||||||
if (num+1>MAX_KEYS_BUFFER)
|
keys = getKeysPrepareResult(result, num+1);
|
||||||
keys = (int*)zmalloc(sizeof(int)*(num+1));
|
result->numkeys = num+1;
|
||||||
|
|
||||||
/* Add all key positions for argv[3...n] to keys[] */
|
/* Add all key positions for argv[3...n] to keys[] */
|
||||||
for (i = 0; i < num; i++) keys[i] = 3+i;
|
for (i = 0; i < num; i++) keys[i] = 3+i;
|
||||||
|
|
||||||
/* Finally add the argv[1] key position (the storage key target). */
|
/* Finally add the argv[1] key position (the storage key target). */
|
||||||
keys[num] = 1;
|
keys[num] = 1;
|
||||||
*numkeys = num+1; /* Total keys = {union,inter} keys + storage key */
|
|
||||||
return keys;
|
return result->numkeys;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Helper function to extract keys from the following commands:
|
/* Helper function to extract keys from the following commands:
|
||||||
* EVAL <script> <num-keys> <key> <key> ... <key> [more stuff]
|
* EVAL <script> <num-keys> <key> <key> ... <key> [more stuff]
|
||||||
* EVALSHA <script> <num-keys> <key> <key> ... <key> [more stuff] */
|
* EVALSHA <script> <num-keys> <key> <key> ... <key> [more stuff] */
|
||||||
int *evalGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys) {
|
int evalGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
||||||
int i, num, *keys;
|
int i, num, *keys;
|
||||||
UNUSED(cmd);
|
UNUSED(cmd);
|
||||||
|
|
||||||
@ -1730,20 +1853,17 @@ int *evalGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys)
|
|||||||
/* Sanity check. Don't return any key if the command is going to
|
/* Sanity check. Don't return any key if the command is going to
|
||||||
* reply with syntax error. */
|
* reply with syntax error. */
|
||||||
if (num <= 0 || num > (argc-3)) {
|
if (num <= 0 || num > (argc-3)) {
|
||||||
*numkeys = 0;
|
result->numkeys = 0;
|
||||||
return NULL;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
keys = getKeysTempBuffer;
|
keys = getKeysPrepareResult(result, num);
|
||||||
if (num>MAX_KEYS_BUFFER)
|
result->numkeys = num;
|
||||||
keys = (int*)zmalloc(sizeof(int)*num);
|
|
||||||
|
|
||||||
*numkeys = num;
|
|
||||||
|
|
||||||
/* Add all key positions for argv[3...n] to keys[] */
|
/* Add all key positions for argv[3...n] to keys[] */
|
||||||
for (i = 0; i < num; i++) keys[i] = 3+i;
|
for (i = 0; i < num; i++) keys[i] = 3+i;
|
||||||
|
|
||||||
return keys;
|
return result->numkeys;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Helper function to extract keys from the SORT command.
|
/* Helper function to extract keys from the SORT command.
|
||||||
@ -1753,13 +1873,12 @@ int *evalGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys)
|
|||||||
* The first argument of SORT is always a key, however a list of options
|
* The first argument of SORT is always a key, however a list of options
|
||||||
* follow in SQL-alike style. Here we parse just the minimum in order to
|
* follow in SQL-alike style. Here we parse just the minimum in order to
|
||||||
* correctly identify keys in the "STORE" option. */
|
* correctly identify keys in the "STORE" option. */
|
||||||
int *sortGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys) {
|
int sortGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
||||||
int i, j, num, *keys, found_store = 0;
|
int i, j, num, *keys, found_store = 0;
|
||||||
UNUSED(cmd);
|
UNUSED(cmd);
|
||||||
|
|
||||||
num = 0;
|
num = 0;
|
||||||
keys = getKeysTempBuffer; /* Alloc 2 places for the worst case. */
|
keys = getKeysPrepareResult(result, 2); /* Alloc 2 places for the worst case. */
|
||||||
|
|
||||||
keys[num++] = 1; /* <sort-key> is always present. */
|
keys[num++] = 1; /* <sort-key> is always present. */
|
||||||
|
|
||||||
/* Search for STORE option. By default we consider options to don't
|
/* Search for STORE option. By default we consider options to don't
|
||||||
@ -1791,11 +1910,11 @@ int *sortGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys)
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
*numkeys = num + found_store;
|
result->numkeys = num + found_store;
|
||||||
return keys;
|
return result->numkeys;
|
||||||
}
|
}
|
||||||
|
|
||||||
int *migrateGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys) {
|
int migrateGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
||||||
int i, num, first, *keys;
|
int i, num, first, *keys;
|
||||||
UNUSED(cmd);
|
UNUSED(cmd);
|
||||||
|
|
||||||
@ -1816,20 +1935,17 @@ int *migrateGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkey
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
keys = getKeysTempBuffer;
|
keys = getKeysPrepareResult(result, num);
|
||||||
if (num>MAX_KEYS_BUFFER)
|
|
||||||
keys = (int*)zmalloc(sizeof(int)*num);
|
|
||||||
|
|
||||||
for (i = 0; i < num; i++) keys[i] = first+i;
|
for (i = 0; i < num; i++) keys[i] = first+i;
|
||||||
*numkeys = num;
|
result->numkeys = num;
|
||||||
return keys;
|
return num;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Helper function to extract keys from following commands:
|
/* Helper function to extract keys from following commands:
|
||||||
* GEORADIUS key x y radius unit [WITHDIST] [WITHHASH] [WITHCOORD] [ASC|DESC]
|
* GEORADIUS key x y radius unit [WITHDIST] [WITHHASH] [WITHCOORD] [ASC|DESC]
|
||||||
* [COUNT count] [STORE key] [STOREDIST key]
|
* [COUNT count] [STORE key] [STOREDIST key]
|
||||||
* GEORADIUSBYMEMBER key member radius unit ... options ... */
|
* GEORADIUSBYMEMBER key member radius unit ... options ... */
|
||||||
int *georadiusGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys) {
|
int georadiusGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
||||||
int i, num, *keys;
|
int i, num, *keys;
|
||||||
UNUSED(cmd);
|
UNUSED(cmd);
|
||||||
|
|
||||||
@ -1852,24 +1968,21 @@ int *georadiusGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numk
|
|||||||
* argv[1] = key,
|
* argv[1] = key,
|
||||||
* argv[5...n] = stored key if present
|
* argv[5...n] = stored key if present
|
||||||
*/
|
*/
|
||||||
keys = getKeysTempBuffer;
|
keys = getKeysPrepareResult(result, num);
|
||||||
if (num>MAX_KEYS_BUFFER)
|
|
||||||
keys = (int*)zmalloc(sizeof(int) * num);
|
|
||||||
|
|
||||||
/* Add all key positions to keys[] */
|
/* Add all key positions to keys[] */
|
||||||
keys[0] = 1;
|
keys[0] = 1;
|
||||||
if(num > 1) {
|
if(num > 1) {
|
||||||
keys[1] = stored_key;
|
keys[1] = stored_key;
|
||||||
}
|
}
|
||||||
*numkeys = num;
|
result->numkeys = num;
|
||||||
return keys;
|
return num;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* LCS ... [KEYS <key1> <key2>] ... */
|
/* LCS ... [KEYS <key1> <key2>] ... */
|
||||||
int *lcsGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys)
|
int lcsGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
||||||
{
|
|
||||||
int i;
|
int i;
|
||||||
int *keys = getKeysTempBuffer;
|
int *keys = getKeysPrepareResult(result, 2);
|
||||||
UNUSED(cmd);
|
UNUSED(cmd);
|
||||||
|
|
||||||
/* We need to parse the options of the command in order to check for the
|
/* We need to parse the options of the command in order to check for the
|
||||||
@ -1883,33 +1996,32 @@ int *lcsGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys)
|
|||||||
} else if (!strcasecmp(arg, "keys") && moreargs >= 2) {
|
} else if (!strcasecmp(arg, "keys") && moreargs >= 2) {
|
||||||
keys[0] = i+1;
|
keys[0] = i+1;
|
||||||
keys[1] = i+2;
|
keys[1] = i+2;
|
||||||
*numkeys = 2;
|
result->numkeys = 2;
|
||||||
return keys;
|
return result->numkeys;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
*numkeys = 0;
|
result->numkeys = 0;
|
||||||
return keys;
|
return result->numkeys;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Helper function to extract keys from memory command.
|
/* Helper function to extract keys from memory command.
|
||||||
* MEMORY USAGE <key> */
|
* MEMORY USAGE <key> */
|
||||||
int *memoryGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys) {
|
int memoryGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
||||||
int *keys;
|
|
||||||
UNUSED(cmd);
|
UNUSED(cmd);
|
||||||
|
|
||||||
|
getKeysPrepareResult(result, 1);
|
||||||
if (argc >= 3 && !strcasecmp(szFromObj(argv[1]),"usage")) {
|
if (argc >= 3 && !strcasecmp(szFromObj(argv[1]),"usage")) {
|
||||||
keys = getKeysTempBuffer;
|
result->keys[0] = 2;
|
||||||
keys[0] = 2;
|
result->numkeys = 1;
|
||||||
*numkeys = 1;
|
return result->numkeys;
|
||||||
return keys;
|
|
||||||
}
|
}
|
||||||
*numkeys = 0;
|
result->numkeys = 0;
|
||||||
return NULL;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* XREAD [BLOCK <milliseconds>] [COUNT <count>] [GROUP <groupname> <ttl>]
|
/* XREAD [BLOCK <milliseconds>] [COUNT <count>] [GROUP <groupname> <ttl>]
|
||||||
* STREAMS key_1 key_2 ... key_N ID_1 ID_2 ... ID_N */
|
* STREAMS key_1 key_2 ... key_N ID_1 ID_2 ... ID_N */
|
||||||
int *xreadGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys) {
|
int xreadGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {
|
||||||
int i, num = 0, *keys;
|
int i, num = 0, *keys;
|
||||||
UNUSED(cmd);
|
UNUSED(cmd);
|
||||||
|
|
||||||
@ -1939,19 +2051,16 @@ int *xreadGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys)
|
|||||||
|
|
||||||
/* Syntax error. */
|
/* Syntax error. */
|
||||||
if (streams_pos == -1 || num == 0 || num % 2 != 0) {
|
if (streams_pos == -1 || num == 0 || num % 2 != 0) {
|
||||||
*numkeys = 0;
|
result->numkeys = 0;
|
||||||
return NULL;
|
return 0;
|
||||||
}
|
}
|
||||||
num /= 2; /* We have half the keys as there are arguments because
|
num /= 2; /* We have half the keys as there are arguments because
|
||||||
there are also the IDs, one per key. */
|
there are also the IDs, one per key. */
|
||||||
|
|
||||||
keys = getKeysTempBuffer;
|
keys = getKeysPrepareResult(result, num);
|
||||||
if (num>MAX_KEYS_BUFFER)
|
|
||||||
keys = (int*)zmalloc(sizeof(int) * num);
|
|
||||||
|
|
||||||
for (i = streams_pos+1; i < argc-num; i++) keys[i-streams_pos-1] = i;
|
for (i = streams_pos+1; i < argc-num; i++) keys[i-streams_pos-1] = i;
|
||||||
*numkeys = num;
|
result->numkeys = num;
|
||||||
return keys;
|
return num;
|
||||||
}
|
}
|
||||||
|
|
||||||
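xreadGetKeys keeps its parsing rule, that everything after STREAMS splits evenly into N keys followed by N IDs, and only switches to the getKeysResult container. A standalone sketch of that position arithmetic; the argv below is illustrative:

    #include <cstdio>
    #include <string>
    #include <vector>

    // Given the index of the STREAMS keyword, return the argv positions of the keys.
    // The arguments after STREAMS must split evenly into N keys followed by N IDs.
    std::vector<int> xreadKeyPositions(const std::vector<std::string> &argv, int streams_pos) {
        std::vector<int> keys;
        int after = (int)argv.size() - (streams_pos + 1);
        if (streams_pos < 0 || after <= 0 || after % 2 != 0) return keys;  // syntax error

        int num = after / 2;   // half the remaining arguments are keys, the rest are IDs
        for (int i = streams_pos + 1; i < (int)argv.size() - num; i++) keys.push_back(i);
        return keys;
    }

    int main() {
        std::vector<std::string> argv = {"XREAD", "COUNT", "10", "STREAMS",
                                         "s1", "s2", "0-0", "$"};
        for (int pos : xreadKeyPositions(argv, 3))
            std::printf("stream key at argv[%d] = %s\n", pos, argv[pos].c_str());  // s1, s2
    }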
/* Slot to Key API. This is used by Redis Cluster in order to obtain in
|
/* Slot to Key API. This is used by Redis Cluster in order to obtain in
|
||||||
@ -1989,13 +2098,25 @@ void slotToKeyDel(sds key) {
|
|||||||
slotToKeyUpdateKey(key,0);
|
slotToKeyUpdateKey(key,0);
|
||||||
}
|
}
|
||||||
|
|
||||||
void slotToKeyFlush(void) {
|
/* Release the radix tree mapping Redis Cluster keys to slots. If 'async'
|
||||||
serverAssert(GlobalLocksAcquired());
|
* is true, we release it asynchronously. */
|
||||||
|
void freeSlotsToKeysMap(rax *rt, int async) {
|
||||||
|
if (async) {
|
||||||
|
freeSlotsToKeysMapAsync(rt);
|
||||||
|
} else {
|
||||||
|
raxFree(rt);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Empty the slots-keys map of Redis CLuster by creating a new empty one and
|
||||||
|
* freeing the old one. */
|
||||||
|
void slotToKeyFlush(int async) {
|
||||||
|
rax *old = g_pserver->cluster->slots_to_keys;
|
||||||
|
|
||||||
raxFree(g_pserver->cluster->slots_to_keys);
|
|
||||||
g_pserver->cluster->slots_to_keys = raxNew();
|
g_pserver->cluster->slots_to_keys = raxNew();
|
||||||
memset(g_pserver->cluster->slots_keys_count,0,
|
memset(g_pserver->cluster->slots_keys_count,0,
|
||||||
sizeof(g_pserver->cluster->slots_keys_count));
|
sizeof(g_pserver->cluster->slots_keys_count));
|
||||||
|
freeSlotsToKeysMap(old, async);
|
||||||
}
|
}
|
||||||
|
|
||||||
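slotToKeyFlush now detaches the old slots-to-keys radix tree first, installs a fresh empty one, and only then releases the old tree through the new freeSlotsToKeysMap, which can defer the actual free when async is requested. A simplified sketch of that swap-then-free shape; std::thread stands in for the background lazy-free path and the map type is a placeholder for the rax:

    #include <cstdio>
    #include <map>
    #include <memory>
    #include <string>
    #include <thread>

    using SlotsMap = std::map<std::string, int>;   // placeholder for the slots->keys index

    std::unique_ptr<SlotsMap> g_slots = std::make_unique<SlotsMap>();

    // Release an old map, either inline or on a background thread when 'async' is set.
    void freeSlotsMap(std::unique_ptr<SlotsMap> old, bool async) {
        if (async) {
            std::thread([m = std::move(old)]() mutable { m.reset(); }).detach();
        } else {
            old.reset();
        }
    }

    // Flush: detach the current map, install an empty one, then free the old one.
    void flushSlotsMap(bool async) {
        auto old = std::move(g_slots);
        g_slots = std::make_unique<SlotsMap>();   // readers immediately see an empty index
        freeSlotsMap(std::move(old), async);
    }

    int main() {
        (*g_slots)["{user}:1"] = 1;
        flushSlotsMap(true);
        std::printf("entries after flush: %zu\n", g_slots->size());  // 0
    }

Swapping before freeing is what lets the same helper serve both the synchronous FLUSHALL path and the async one used by discardDbBackup above.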
/* Pupulate the specified array of objects with keys in the specified slot.
|
/* Pupulate the specified array of objects with keys in the specified slot.
|
||||||
|
143
src/debug.cpp
@ -189,7 +189,7 @@ void xorObjectDigest(redisDb *db, robj_roptr keyobj, unsigned char *digest, robj
|
|||||||
}
|
}
|
||||||
} else if (o->encoding == OBJ_ENCODING_SKIPLIST) {
|
} else if (o->encoding == OBJ_ENCODING_SKIPLIST) {
|
||||||
zset *zs = (zset*)ptrFromObj(o);
|
zset *zs = (zset*)ptrFromObj(o);
|
||||||
dictIterator *di = dictGetIterator(zs->pdict);
|
dictIterator *di = dictGetIterator(zs->dict);
|
||||||
dictEntry *de;
|
dictEntry *de;
|
||||||
|
|
||||||
while((de = dictNext(di)) != NULL) {
|
while((de = dictNext(di)) != NULL) {
|
||||||
@ -281,8 +281,8 @@ void computeDatasetDigest(unsigned char *final) {
|
|||||||
for (j = 0; j < cserver.dbnum; j++) {
|
for (j = 0; j < cserver.dbnum; j++) {
|
||||||
redisDb *db = g_pserver->db+j;
|
redisDb *db = g_pserver->db+j;
|
||||||
|
|
||||||
if (dictSize(db->pdict) == 0) continue;
|
if (dictSize(db->dict) == 0) continue;
|
||||||
di = dictGetSafeIterator(db->pdict);
|
di = dictGetSafeIterator(db->dict);
|
||||||
|
|
||||||
/* hash the DB id, so the same dataset moved in a different
|
/* hash the DB id, so the same dataset moved in a different
|
||||||
* DB will lead to a different digest */
|
* DB will lead to a different digest */
|
||||||
@ -401,7 +401,7 @@ void debugCommand(client *c) {
|
|||||||
"OOM -- Crash the server simulating an out-of-memory error.",
|
"OOM -- Crash the server simulating an out-of-memory error.",
|
||||||
"PANIC -- Crash the server simulating a panic.",
|
"PANIC -- Crash the server simulating a panic.",
|
||||||
"POPULATE <count> [prefix] [size] -- Create <count> string keys named key:<num>. If a prefix is specified is used instead of the 'key' prefix.",
|
"POPULATE <count> [prefix] [size] -- Create <count> string keys named key:<num>. If a prefix is specified is used instead of the 'key' prefix.",
|
||||||
"RELOAD [MERGE] [NOFLUSH] [NOSAVE] -- Save the RDB on disk and reload it back in memory. By default it will save the RDB file and load it back. With the NOFLUSH option the current database is not removed before loading the new one, but conficts in keys will kill the server with an exception. When MERGE is used, conflicting keys will be loaded (the key in the loaded RDB file will win). When NOSAVE is used, the server will not save the current dataset in the RDB file before loading. Use DEBUG RELOAD NOSAVE when you want just to load the RDB file you placed in the Redis working directory in order to replace the current dataset in memory. Use DEBUG RELOAD NOSAVE NOFLUSH MERGE when you want to add what is in the current RDB file placed in the Redis current directory, with the current memory content. Use DEBUG RELOAD when you want to verify Redis is able to persist the current dataset in the RDB file, flush the memory content, and load it back.",
|
"RELOAD [MERGE] [NOFLUSH] [NOSAVE] -- Save the RDB on disk and reload it back in memory. By default it will save the RDB file and load it back. With the NOFLUSH option the current database is not removed before loading the new one, but conflicts in keys will kill the server with an exception. When MERGE is used, conflicting keys will be loaded (the key in the loaded RDB file will win). When NOSAVE is used, the server will not save the current dataset in the RDB file before loading. Use DEBUG RELOAD NOSAVE when you want just to load the RDB file you placed in the Redis working directory in order to replace the current dataset in memory. Use DEBUG RELOAD NOSAVE NOFLUSH MERGE when you want to add what is in the current RDB file placed in the Redis current directory, with the current memory content. Use DEBUG RELOAD when you want to verify Redis is able to persist the current dataset in the RDB file, flush the memory content, and load it back.",
|
||||||
"RESTART -- Graceful restart: save config, db, restart.",
|
"RESTART -- Graceful restart: save config, db, restart.",
|
||||||
"SDSLEN <key> -- Show low level SDS string info representing key and value.",
|
"SDSLEN <key> -- Show low level SDS string info representing key and value.",
|
||||||
"SEGFAULT -- Crash the server with sigsegv.",
|
"SEGFAULT -- Crash the server with sigsegv.",
|
||||||
@ -470,7 +470,7 @@ NULL
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/* The default beahvior is to save the RDB file before loading
|
/* The default behavior is to save the RDB file before loading
|
||||||
* it back. */
|
* it back. */
|
||||||
if (save) {
|
if (save) {
|
||||||
rdbSaveInfo rsi, *rsiptr;
|
rdbSaveInfo rsi, *rsiptr;
|
||||||
@ -513,7 +513,7 @@ NULL
|
|||||||
robj *val;
|
robj *val;
|
||||||
const char *strenc;
|
const char *strenc;
|
||||||
|
|
||||||
if ((de = dictFind(c->db->pdict,ptrFromObj(c->argv[2]))) == NULL) {
|
if ((de = dictFind(c->db->dict,ptrFromObj(c->argv[2]))) == NULL) {
|
||||||
addReply(c,shared.nokeyerr);
|
addReply(c,shared.nokeyerr);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
@ -565,7 +565,7 @@ NULL
|
|||||||
robj *val;
|
robj *val;
|
||||||
sds key;
|
sds key;
|
||||||
|
|
||||||
if ((de = dictFind(c->db->pdict,ptrFromObj(c->argv[2]))) == NULL) {
|
if ((de = dictFind(c->db->dict,ptrFromObj(c->argv[2]))) == NULL) {
|
||||||
addReply(c,shared.nokeyerr);
|
addReply(c,shared.nokeyerr);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
@ -586,10 +586,10 @@ NULL
|
|||||||
(long long) getStringObjectSdsUsedMemory(val));
|
(long long) getStringObjectSdsUsedMemory(val));
|
||||||
}
|
}
|
||||||
} else if (!strcasecmp(szFromObj(c->argv[1]),"ziplist") && c->argc == 3) {
|
} else if (!strcasecmp(szFromObj(c->argv[1]),"ziplist") && c->argc == 3) {
|
||||||
robj *o;
|
robj_roptr o;
|
||||||
|
|
||||||
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.nokeyerr))
|
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.nokeyerr))
|
||||||
== NULL) return;
|
== nullptr) return;
|
||||||
|
|
||||||
if (o->encoding != OBJ_ENCODING_ZIPLIST) {
|
if (o->encoding != OBJ_ENCODING_ZIPLIST) {
|
||||||
addReplyError(c,"Not an sds encoded string.");
|
addReplyError(c,"Not an sds encoded string.");
|
||||||
@ -605,7 +605,7 @@ NULL
|
|||||||
|
|
||||||
if (getLongFromObjectOrReply(c, c->argv[2], &keys, NULL) != C_OK)
|
if (getLongFromObjectOrReply(c, c->argv[2], &keys, NULL) != C_OK)
|
||||||
return;
|
return;
|
||||||
dictExpand(c->db->pdict,keys);
|
dictExpand(c->db->dict,keys);
|
||||||
long valsize = 0;
|
long valsize = 0;
|
||||||
if ( c->argc == 5 && getLongFromObjectOrReply(c, c->argv[4], &valsize, NULL) != C_OK )
|
if ( c->argc == 5 && getLongFromObjectOrReply(c, c->argv[4], &valsize, NULL) != C_OK )
|
||||||
return;
|
return;
|
||||||
@ -645,7 +645,11 @@ NULL
|
|||||||
for (int j = 2; j < c->argc; j++) {
|
for (int j = 2; j < c->argc; j++) {
|
||||||
unsigned char digest[20];
|
unsigned char digest[20];
|
||||||
memset(digest,0,20); /* Start with a clean result */
|
memset(digest,0,20); /* Start with a clean result */
|
||||||
robj_roptr o = lookupKeyReadWithFlags(c->db,c->argv[j],LOOKUP_NOTOUCH);
|
|
||||||
|
/* We don't use lookupKey because a debug command should
|
||||||
|
* work on logically expired keys */
|
||||||
|
dictEntry *de;
|
||||||
|
robj* o = (robj*)((de = dictFind(c->db->dict,ptrFromObj(c->argv[j]))) == NULL ? NULL : dictGetVal(de));
|
||||||
if (o) xorObjectDigest(c->db,c->argv[j],digest,o);
|
if (o) xorObjectDigest(c->db,c->argv[j],digest,o);
|
||||||
|
|
||||||
sds d = sdsempty();
|
sds d = sdsempty();
|
||||||
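The DEBUG DIGEST-VALUE hunk stops going through lookupKeyReadWithFlags and reads the entry straight out of the main dict because, as the new comment says, a debug command should still see logically expired keys that the normal read path now hides (the 6.0.10 "OBJECT should not reveal logically expired keys" behavior change). A toy contrast between the two access paths; the expiry bookkeeping here is invented for the example:

    #include <cstdio>
    #include <string>
    #include <unordered_map>

    struct Val {
        std::string data;
        long expire_at = -1;   // -1 = no TTL
    };

    using Db = std::unordered_map<std::string, Val>;

    // Normal read path: logically expired keys are treated as missing.
    const Val *lookupRead(const Db &db, const std::string &key, long now) {
        auto it = db.find(key);
        if (it == db.end()) return nullptr;
        if (it->second.expire_at != -1 && it->second.expire_at <= now) return nullptr;
        return &it->second;
    }

    // Debug path: raw dictionary access, expired or not.
    const Val *debugPeek(const Db &db, const std::string &key) {
        auto it = db.find(key);
        return it == db.end() ? nullptr : &it->second;
    }

    int main() {
        Db db;
        db["stale"].data = "payload";
        db["stale"].expire_at = 100;
        long now = 200;   // the key is logically expired but not yet reclaimed

        std::printf("normal read sees it: %s\n",
                    lookupRead(db, "stale", now) ? "yes" : "no");   // no
        std::printf("debug peek sees it:  %s\n",
                    debugPeek(db, "stale") ? "yes" : "no");         // yes
    }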
@ -763,7 +767,7 @@ NULL
|
|||||||
}
|
}
|
||||||
|
|
||||||
stats = sdscatprintf(stats,"[Dictionary HT]\n");
|
stats = sdscatprintf(stats,"[Dictionary HT]\n");
|
||||||
dictGetStats(buf,sizeof(buf),g_pserver->db[dbid].pdict);
|
dictGetStats(buf,sizeof(buf),g_pserver->db[dbid].dict);
|
||||||
stats = sdscat(stats,buf);
|
stats = sdscat(stats,buf);
|
||||||
|
|
||||||
stats = sdscatprintf(stats,"[Expires set]\n");
|
stats = sdscatprintf(stats,"[Expires set]\n");
|
||||||
@ -773,18 +777,18 @@ NULL
|
|||||||
addReplyVerbatim(c,stats,sdslen(stats),"txt");
|
addReplyVerbatim(c,stats,sdslen(stats),"txt");
|
||||||
sdsfree(stats);
|
sdsfree(stats);
|
||||||
} else if (!strcasecmp(szFromObj(c->argv[1]),"htstats-key") && c->argc == 3) {
|
} else if (!strcasecmp(szFromObj(c->argv[1]),"htstats-key") && c->argc == 3) {
|
||||||
robj *o;
|
robj_roptr o;
|
||||||
dict *ht = NULL;
|
dict *ht = NULL;
|
||||||
|
|
||||||
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.nokeyerr))
|
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.nokeyerr))
|
||||||
== NULL) return;
|
== nullptr) return;
|
||||||
|
|
||||||
/* Get the hash table reference from the object, if possible. */
|
/* Get the hash table reference from the object, if possible. */
|
||||||
switch (o->encoding) {
|
switch (o->encoding) {
|
||||||
case OBJ_ENCODING_SKIPLIST:
|
case OBJ_ENCODING_SKIPLIST:
|
||||||
{
|
{
|
||||||
zset *zs = (zset*)ptrFromObj(o);
|
zset *zs = (zset*)ptrFromObj(o);
|
||||||
ht = zs->pdict;
|
ht = zs->dict;
|
||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
case OBJ_ENCODING_HT:
|
case OBJ_ENCODING_HT:
|
||||||
@ -983,7 +987,7 @@ static void *getMcontextEip(ucontext_t *uc) {
|
|||||||
#endif
|
#endif
|
||||||
#elif defined(__linux__)
|
#elif defined(__linux__)
|
||||||
/* Linux */
|
/* Linux */
|
||||||
#if defined(__i386__) || defined(__ILP32__)
|
#if defined(__i386__) || ((defined(__X86_64__) || defined(__x86_64__)) && defined(__ILP32__))
|
||||||
return (void*) uc->uc_mcontext.gregs[14]; /* Linux 32 */
|
return (void*) uc->uc_mcontext.gregs[14]; /* Linux 32 */
|
||||||
#elif defined(__X86_64__) || defined(__x86_64__)
|
#elif defined(__X86_64__) || defined(__x86_64__)
|
||||||
return (void*) uc->uc_mcontext.gregs[16]; /* Linux 64 */
|
return (void*) uc->uc_mcontext.gregs[16]; /* Linux 64 */
|
||||||
@ -1008,6 +1012,12 @@ static void *getMcontextEip(ucontext_t *uc) {
|
|||||||
#elif defined(__x86_64__)
|
#elif defined(__x86_64__)
|
||||||
return (void*) uc->sc_rip;
|
return (void*) uc->sc_rip;
|
||||||
#endif
|
#endif
|
||||||
|
#elif defined(__NetBSD__)
|
||||||
|
#if defined(__i386__)
|
||||||
|
return (void*) uc->uc_mcontext.__gregs[_REG_EIP];
|
||||||
|
#elif defined(__x86_64__)
|
||||||
|
return (void*) uc->uc_mcontext.__gregs[_REG_RIP];
|
||||||
|
#endif
|
||||||
#elif defined(__DragonFly__)
|
#elif defined(__DragonFly__)
|
||||||
return (void*) uc->uc_mcontext.mc_rip;
|
return (void*) uc->uc_mcontext.mc_rip;
|
||||||
#else
|
#else
|
||||||
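The hunk above teaches getMcontextEip() to recover the saved program counter from a NetBSD ucontext (_REG_EIP / _REG_RIP), alongside the existing Linux, BSD and DragonFly cases; on Linux/x86-64 the same value sits in uc_mcontext.gregs[16]. A minimal standalone sketch of the idea for Linux/x86-64 with glibc (editor's illustration, not KeyDB code; REG_RIP is the glibc name for that register slot):

    // Illustrative only: catch SIGSEGV and report the faulting program counter,
    // the same information getMcontextEip() extracts per platform.
    #include <csignal>
    #include <cstdio>
    #include <unistd.h>
    #include <ucontext.h>   // REG_RIP needs _GNU_SOURCE; g++ defines it by default

    static void handler(int sig, siginfo_t *info, void *secret) {
        ucontext_t *uc = (ucontext_t *)secret;
    #if defined(__linux__) && defined(__x86_64__)
        void *pc = (void *)uc->uc_mcontext.gregs[REG_RIP];
    #else
        void *pc = nullptr;  // other platforms need their own register name
    #endif
        // Not async-signal-safe; acceptable only because we exit right away.
        fprintf(stderr, "signal %d at pc=%p, fault addr=%p\n", sig, pc, info->si_addr);
        _exit(1);
    }

    int main() {
        struct sigaction act = {};
        act.sa_sigaction = handler;
        act.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &act, nullptr);
        *(volatile int *)0 = 42;  // deliberately fault to exercise the handler
    }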
@ -1144,7 +1154,7 @@ void logRegisters(ucontext_t *uc) {
|
|||||||
/* Linux */
|
/* Linux */
|
||||||
#elif defined(__linux__)
|
#elif defined(__linux__)
|
||||||
/* Linux x86 */
|
/* Linux x86 */
|
||||||
#if defined(__i386__) || defined(__ILP32__)
|
#if defined(__i386__) || ((defined(__X86_64__) || defined(__x86_64__)) && defined(__ILP32__))
|
||||||
serverLog(LL_WARNING,
|
serverLog(LL_WARNING,
|
||||||
"\n"
|
"\n"
|
||||||
"EAX:%08lx EBX:%08lx ECX:%08lx EDX:%08lx\n"
|
"EAX:%08lx EBX:%08lx ECX:%08lx EDX:%08lx\n"
|
||||||
@ -1232,7 +1242,7 @@ void logRegisters(ucontext_t *uc) {
|
|||||||
"R10:%016lx R9 :%016lx\nR8 :%016lx R7 :%016lx\n"
|
"R10:%016lx R9 :%016lx\nR8 :%016lx R7 :%016lx\n"
|
||||||
"R6 :%016lx R5 :%016lx\nR4 :%016lx R3 :%016lx\n"
|
"R6 :%016lx R5 :%016lx\nR4 :%016lx R3 :%016lx\n"
|
||||||
"R2 :%016lx R1 :%016lx\nR0 :%016lx EC :%016lx\n"
|
"R2 :%016lx R1 :%016lx\nR0 :%016lx EC :%016lx\n"
|
||||||
"fp: %016lx ip:%016lx\n",
|
"fp: %016lx ip:%016lx\n"
|
||||||
"pc:%016lx sp:%016lx\ncpsr:%016lx fault_address:%016lx\n",
|
"pc:%016lx sp:%016lx\ncpsr:%016lx fault_address:%016lx\n",
|
||||||
(unsigned long) uc->uc_mcontext.arm_r10,
|
(unsigned long) uc->uc_mcontext.arm_r10,
|
||||||
(unsigned long) uc->uc_mcontext.arm_r9,
|
(unsigned long) uc->uc_mcontext.arm_r9,
|
||||||
@ -1365,6 +1375,59 @@ void logRegisters(ucontext_t *uc) {
|
|||||||
);
|
);
|
||||||
logStackContent((void**)uc->sc_esp);
|
logStackContent((void**)uc->sc_esp);
|
||||||
#endif
|
#endif
|
||||||
|
#elif defined(__NetBSD__)
|
||||||
|
#if defined(__x86_64__)
|
||||||
|
serverLog(LL_WARNING,
|
||||||
|
"\n"
|
||||||
|
"RAX:%016lx RBX:%016lx\nRCX:%016lx RDX:%016lx\n"
|
||||||
|
"RDI:%016lx RSI:%016lx\nRBP:%016lx RSP:%016lx\n"
|
||||||
|
"R8 :%016lx R9 :%016lx\nR10:%016lx R11:%016lx\n"
|
||||||
|
"R12:%016lx R13:%016lx\nR14:%016lx R15:%016lx\n"
|
||||||
|
"RIP:%016lx EFL:%016lx\nCSGSFS:%016lx",
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_RAX],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_RBX],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_RCX],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_RDX],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_RDI],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_RSI],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_RBP],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_RSP],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_R8],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_R9],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_R10],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_R11],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_R12],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_R13],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_R14],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_R15],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_RIP],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_RFLAGS],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_CS]
|
||||||
|
);
|
||||||
|
logStackContent((void**)uc->uc_mcontext.__gregs[_REG_RSP]);
|
||||||
|
#elif defined(__i386__)
|
||||||
|
serverLog(LL_WARNING,
|
||||||
|
"\n"
|
||||||
|
"EAX:%08lx EBX:%08lx ECX:%08lx EDX:%08lx\n"
|
||||||
|
"EDI:%08lx ESI:%08lx EBP:%08lx ESP:%08lx\n"
|
||||||
|
"SS :%08lx EFL:%08lx EIP:%08lx CS:%08lx\n"
|
||||||
|
"DS :%08lx ES :%08lx FS :%08lx GS:%08lx",
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_EAX],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_EBX],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_EDX],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_EDI],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_ESI],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_EBP],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_ESP],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_SS],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_EFLAGS],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_EIP],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_CS],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_ES],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_FS],
|
||||||
|
(unsigned long) uc->uc_mcontext.__gregs[_REG_GS]
|
||||||
|
);
|
||||||
|
#endif
|
||||||
#elif defined(__DragonFly__)
|
#elif defined(__DragonFly__)
|
||||||
serverLog(LL_WARNING,
|
serverLog(LL_WARNING,
|
||||||
"\n"
|
"\n"
|
||||||
@ -1526,12 +1589,12 @@ void logCurrentClient(void) {
|
|||||||
}
|
}
|
||||||
/* Check if the first argument, usually a key, is found inside the
|
/* Check if the first argument, usually a key, is found inside the
|
||||||
* selected DB, and if so print info about the associated object. */
|
* selected DB, and if so print info about the associated object. */
|
||||||
if (cc->argc >= 1) {
|
if (cc->argc > 1) {
|
||||||
robj *val, *key;
|
robj *val, *key;
|
||||||
dictEntry *de;
|
dictEntry *de;
|
||||||
|
|
||||||
key = getDecodedObject(cc->argv[1]);
|
key = getDecodedObject(cc->argv[1]);
|
||||||
de = dictFind(cc->db->pdict, ptrFromObj(key));
|
de = dictFind(cc->db->dict, ptrFromObj(key));
|
||||||
if (de) {
|
if (de) {
|
||||||
val = (robj*)dictGetVal(de);
|
val = (robj*)dictGetVal(de);
|
||||||
serverLog(LL_WARNING,"key '%s' found in DB containing the following object:", (char*)ptrFromObj(key));
|
serverLog(LL_WARNING,"key '%s' found in DB containing the following object:", (char*)ptrFromObj(key));
|
||||||
@ -1545,7 +1608,7 @@ void logCurrentClient(void) {
|
|||||||
|
|
||||||
#define MEMTEST_MAX_REGIONS 128
|
#define MEMTEST_MAX_REGIONS 128
|
||||||
|
|
||||||
/* A non destructive memory test executed during segfauls. */
|
/* A non destructive memory test executed during segfault. */
|
||||||
int memtest_test_linux_anonymous_maps(void) {
|
int memtest_test_linux_anonymous_maps(void) {
|
||||||
FILE *fp;
|
FILE *fp;
|
||||||
char line[1024];
|
char line[1024];
|
||||||
@ -1606,7 +1669,27 @@ int memtest_test_linux_anonymous_maps(void) {
|
|||||||
closeDirectLogFiledes(fd);
|
closeDirectLogFiledes(fd);
|
||||||
return errors;
|
return errors;
|
||||||
}
|
}
|
||||||
#endif
|
#endif /* HAVE_PROC_MAPS */
|
||||||
|
|
||||||
|
static void killMainThread(void) {
|
||||||
|
int err;
|
||||||
|
if (pthread_self() != cserver.main_thread_id && pthread_cancel(cserver.main_thread_id) == 0) {
|
||||||
|
if ((err = pthread_join(cserver.main_thread_id,NULL)) != 0) {
|
||||||
|
serverLog(LL_WARNING, "main thread can not be joined: %s", strerror(err));
|
||||||
|
} else {
|
||||||
|
serverLog(LL_WARNING, "main thread terminated");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Kill the running threads (other than current) in an unclean way. This function
|
||||||
|
* should be used only when it's critical to stop the threads for some reason.
|
||||||
|
* Currently Redis does this only on crash (for instance on SIGSEGV) in order
|
||||||
|
* to perform a fast memory check without other threads messing with memory. */
|
||||||
|
void killThreads(void) {
|
||||||
|
killMainThread();
|
||||||
|
bioKillThreads();
|
||||||
|
}
|
||||||
|
|
||||||
/* Scans the (assumed) x86 code starting at addr, for a max of `len`
|
/* Scans the (assumed) x86 code starting at addr, for a max of `len`
|
||||||
* bytes, searching for E8 (callq) opcodes, and dumping the symbols
|
* bytes, searching for E8 (callq) opcodes, and dumping the symbols
|
||||||
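killThreads() above is called from the crash handler: killMainThread() cancels the main thread if the handler is running on some other thread, then joins it so the fast memory test can run without anyone else touching memory, and bioKillThreads() does the same for the background-IO workers. A self-contained sketch of the cancel-then-join pattern (editor's illustration; the thread being stopped here is just a sleeping worker):

    // Illustrative sketch of the cancel-then-join pattern added above.
    #include <pthread.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    static void *worker(void *) {
        for (;;) sleep(1);   // sleep() is a cancellation point
    }

    static void killWorker(pthread_t tid) {
        int err;
        // Only cancel a *different* thread, then join it so it is really gone.
        if (!pthread_equal(pthread_self(), tid) && pthread_cancel(tid) == 0) {
            if ((err = pthread_join(tid, nullptr)) != 0)
                fprintf(stderr, "worker can not be joined: %s\n", strerror(err));
            else
                fprintf(stderr, "worker terminated\n");
        }
    }

    int main() {
        pthread_t tid;
        pthread_create(&tid, nullptr, worker, nullptr);
        sleep(1);          // let the worker reach its cancellation point
        killWorker(tid);   // cancel + join, as killMainThread() does above
    }

(Build with -pthread.)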
@ -1644,7 +1727,7 @@ void sigsegvHandler(int sig, siginfo_t *info, void *secret) {
|
|||||||
|
|
||||||
bugReportStart();
|
bugReportStart();
|
||||||
serverLog(LL_WARNING,
|
serverLog(LL_WARNING,
|
||||||
"KeyDB %s crashed by signal: %d", KEYDB_REAL_VERSION, sig);
|
"KeyDB %s crashed by signal: %d, si_code: %d", KEYDB_REAL_VERSION, sig, info->si_code);
|
||||||
if (eip != NULL) {
|
if (eip != NULL) {
|
||||||
serverLog(LL_WARNING,
|
serverLog(LL_WARNING,
|
||||||
"Crashed running the instruction at: %p", eip);
|
"Crashed running the instruction at: %p", eip);
|
||||||
@ -1653,6 +1736,9 @@ void sigsegvHandler(int sig, siginfo_t *info, void *secret) {
|
|||||||
serverLog(LL_WARNING,
|
serverLog(LL_WARNING,
|
||||||
"Accessing address: %p", (void*)info->si_addr);
|
"Accessing address: %p", (void*)info->si_addr);
|
||||||
}
|
}
|
||||||
|
if (info->si_pid != -1) {
|
||||||
|
serverLog(LL_WARNING, "Killed by PID: %d, UID: %d", info->si_pid, info->si_uid);
|
||||||
|
}
|
||||||
serverLog(LL_WARNING,
|
serverLog(LL_WARNING,
|
||||||
"Failed assertion: %s (%s:%d)", g_pserver->assert_failed,
|
"Failed assertion: %s (%s:%d)", g_pserver->assert_failed,
|
||||||
g_pserver->assert_file, g_pserver->assert_line);
|
g_pserver->assert_file, g_pserver->assert_line);
|
||||||
@ -1686,7 +1772,7 @@ void sigsegvHandler(int sig, siginfo_t *info, void *secret) {
|
|||||||
#if defined(HAVE_PROC_MAPS)
|
#if defined(HAVE_PROC_MAPS)
|
||||||
/* Test memory */
|
/* Test memory */
|
||||||
serverLogRaw(LL_WARNING|LL_RAW, "\n------ FAST MEMORY TEST ------\n");
|
serverLogRaw(LL_WARNING|LL_RAW, "\n------ FAST MEMORY TEST ------\n");
|
||||||
bioKillThreads();
|
killThreads();
|
||||||
if (memtest_test_linux_anonymous_maps()) {
|
if (memtest_test_linux_anonymous_maps()) {
|
||||||
serverLogRaw(LL_WARNING|LL_RAW,
|
serverLogRaw(LL_WARNING|LL_RAW,
|
||||||
"!!! MEMORY ERROR DETECTED! Check your memory ASAP !!!\n");
|
"!!! MEMORY ERROR DETECTED! Check your memory ASAP !!!\n");
|
||||||
@ -1714,13 +1800,14 @@ void sigsegvHandler(int sig, siginfo_t *info, void *secret) {
|
|||||||
/* Find the address of the next page, which is our "safety"
|
/* Find the address of the next page, which is our "safety"
|
||||||
* limit when dumping. Then try to dump just 128 bytes more
|
* limit when dumping. Then try to dump just 128 bytes more
|
||||||
* than EIP if there is room, or stop sooner. */
|
* than EIP if there is room, or stop sooner. */
|
||||||
|
void *base = (void *)info.dli_saddr;
|
||||||
unsigned long next = ((unsigned long)eip + sz) & ~(sz-1);
|
unsigned long next = ((unsigned long)eip + sz) & ~(sz-1);
|
||||||
unsigned long end = (unsigned long)eip + 128;
|
unsigned long end = (unsigned long)eip + 128;
|
||||||
if (end > next) end = next;
|
if (end > next) end = next;
|
||||||
len = end - (unsigned long)info.dli_saddr;
|
len = end - (unsigned long)base;
|
||||||
serverLogHexDump(LL_WARNING, "dump of function",
|
serverLogHexDump(LL_WARNING, "dump of function",
|
||||||
info.dli_saddr ,len);
|
base ,len);
|
||||||
dumpX86Calls(info.dli_saddr,len);
|
dumpX86Calls(base,len);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
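The change above keeps the function's start address in a base pointer and clamps the hex dump so it never reads past the next page: next rounds eip up to a page boundary with the classic (x + sz) & ~(sz - 1) trick (valid when sz is a power of two), end tries for 128 bytes past eip, and the dump length is measured from base. The same arithmetic in isolation, with made-up addresses (editor's illustration):

    #include <cstdint>
    #include <cstdio>

    int main() {
        const uintptr_t sz = 4096;              // page size, a power of two
        uintptr_t eip  = 0x55555555a9c7;        // made-up crash address
        uintptr_t base = 0x55555555a980;        // made-up start of the function

        // Round eip up to the next page boundary; works because sz is 2^n.
        uintptr_t next = (eip + sz) & ~(sz - 1);
        uintptr_t end  = eip + 128;             // try to dump 128 bytes past eip
        if (end > next) end = next;             // ...but never cross the page

        size_t len = end - base;                // bytes to dump starting at base
        printf("dump %zu bytes starting at %#zx (stop at %#zx)\n",
               len, (size_t)base, (size_t)end);
    }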
@ -1733,7 +1820,7 @@ void sigsegvHandler(int sig, siginfo_t *info, void *secret) {
|
|||||||
);
|
);
|
||||||
|
|
||||||
/* free(messages); Don't call free() with possibly corrupted memory. */
|
/* free(messages); Don't call free() with possibly corrupted memory. */
|
||||||
if (cserver.daemonize && cserver.supervised == 0) unlink(cserver.pidfile);
|
if (cserver.daemonize && cserver.supervised == 0 && cserver.pidfile) unlink(cserver.pidfile);
|
||||||
|
|
||||||
/* Make sure we exit with the right signal at the end. So for instance
|
/* Make sure we exit with the right signal at the end. So for instance
|
||||||
* the core will be dumped if enabled. */
|
* the core will be dumped if enabled. */
|
||||||
|
@ -47,12 +47,12 @@ extern "C" int je_get_defrag_hint(void* ptr);
|
|||||||
|
|
||||||
/* forward declarations*/
|
/* forward declarations*/
|
||||||
void defragDictBucketCallback(void *privdata, dictEntry **bucketref);
|
void defragDictBucketCallback(void *privdata, dictEntry **bucketref);
|
||||||
dictEntry* replaceSateliteDictKeyPtrAndOrDefragDictEntry(dict *d, sds oldkey, sds newkey, uint64_t hash, long *defragged);
|
dictEntry* replaceSatelliteDictKeyPtrAndOrDefragDictEntry(dict *d, sds oldkey, sds newkey, uint64_t hash, long *defragged);
|
||||||
bool replaceSateliteOSetKeyPtr(expireset &set, sds oldkey, sds newkey);
|
bool replaceSatelliteOSetKeyPtr(expireset &set, sds oldkey, sds newkey);
|
||||||
|
|
||||||
/* Defrag helper for generic allocations.
|
/* Defrag helper for generic allocations.
|
||||||
*
|
*
|
||||||
* returns NULL in case the allocatoin wasn't moved.
|
* returns NULL in case the allocation wasn't moved.
|
||||||
* when it returns a non-null value, the old pointer was already released
|
* when it returns a non-null value, the old pointer was already released
|
||||||
* and should NOT be accessed. */
|
* and should NOT be accessed. */
|
||||||
template<typename TPTR>
|
template<typename TPTR>
|
||||||
@ -83,7 +83,7 @@ robj* activeDefragAlloc(robj *o) {
|
|||||||
|
|
||||||
/*Defrag helper for sds strings
|
/*Defrag helper for sds strings
|
||||||
*
|
*
|
||||||
* returns NULL in case the allocatoin wasn't moved.
|
* returns NULL in case the allocation wasn't moved.
|
||||||
* when it returns a non-null value, the old pointer was already released
|
* when it returns a non-null value, the old pointer was already released
|
||||||
* and should NOT be accessed. */
|
* and should NOT be accessed. */
|
||||||
sds activeDefragSds(sds sdsptr) {
|
sds activeDefragSds(sds sdsptr) {
|
||||||
@ -99,7 +99,7 @@ sds activeDefragSds(sds sdsptr) {
|
|||||||
|
|
||||||
/* Defrag helper for robj and/or string objects
|
/* Defrag helper for robj and/or string objects
|
||||||
*
|
*
|
||||||
* returns NULL in case the allocatoin wasn't moved.
|
* returns NULL in case the allocation wasn't moved.
|
||||||
* when it returns a non-null value, the old pointer was already released
|
* when it returns a non-null value, the old pointer was already released
|
||||||
* and should NOT be accessed. */
|
* and should NOT be accessed. */
|
||||||
robj *activeDefragStringOb(robj* ob, long *defragged) {
|
robj *activeDefragStringOb(robj* ob, long *defragged) {
|
||||||
@ -137,11 +137,11 @@ robj *activeDefragStringOb(robj* ob, long *defragged) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* Defrag helper for dictEntries to be used during dict iteration (called on
|
/* Defrag helper for dictEntries to be used during dict iteration (called on
|
||||||
* each step). Teturns a stat of how many pointers were moved. */
|
* each step). Returns a stat of how many pointers were moved. */
|
||||||
long dictIterDefragEntry(dictIterator *iter) {
|
long dictIterDefragEntry(dictIterator *iter) {
|
||||||
/* This function is a little bit dirty since it messes with the internals
|
/* This function is a little bit dirty since it messes with the internals
|
||||||
* of the dict and it's iterator, but the benefit is that it is very easy
|
* of the dict and it's iterator, but the benefit is that it is very easy
|
||||||
* to use, and require no other chagnes in the dict. */
|
* to use, and require no other changes in the dict. */
|
||||||
long defragged = 0;
|
long defragged = 0;
|
||||||
dictht *ht;
|
dictht *ht;
|
||||||
/* Handle the next entry (if there is one), and update the pointer in the
|
/* Handle the next entry (if there is one), and update the pointer in the
|
||||||
@ -245,7 +245,7 @@ double *zslDefrag(zskiplist *zsl, double score, sds oldele, sds newele) {
|
|||||||
return NULL;
|
return NULL;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Defrag helpler for sorted set.
|
/* Defrag helper for sorted set.
|
||||||
* Defrag a single dict entry key name, and corresponding skiplist struct */
|
* Defrag a single dict entry key name, and corresponding skiplist struct */
|
||||||
long activeDefragZsetEntry(zset *zs, dictEntry *de) {
|
long activeDefragZsetEntry(zset *zs, dictEntry *de) {
|
||||||
sds newsds;
|
sds newsds;
|
||||||
@ -256,7 +256,7 @@ long activeDefragZsetEntry(zset *zs, dictEntry *de) {
|
|||||||
defragged++, de->key = newsds;
|
defragged++, de->key = newsds;
|
||||||
newscore = zslDefrag(zs->zsl, *(double*)dictGetVal(de), sdsele, newsds);
|
newscore = zslDefrag(zs->zsl, *(double*)dictGetVal(de), sdsele, newsds);
|
||||||
if (newscore) {
|
if (newscore) {
|
||||||
dictSetVal(zs->pdict, de, newscore);
|
dictSetVal(zs->dict, de, newscore);
|
||||||
defragged++;
|
defragged++;
|
||||||
}
|
}
|
||||||
return defragged;
|
return defragged;
|
||||||
@ -356,7 +356,7 @@ long activeDefragSdsListAndDict(list *l, dict *d, int dict_val_type) {
|
|||||||
if ((newsds = activeDefragSds(sdsele))) {
|
if ((newsds = activeDefragSds(sdsele))) {
|
||||||
/* When defragging an sds value, we need to update the dict key */
|
/* When defragging an sds value, we need to update the dict key */
|
||||||
uint64_t hash = dictGetHash(d, newsds);
|
uint64_t hash = dictGetHash(d, newsds);
|
||||||
replaceSateliteDictKeyPtrAndOrDefragDictEntry(d, sdsele, newsds, hash, &defragged);
|
replaceSatelliteDictKeyPtrAndOrDefragDictEntry(d, sdsele, newsds, hash, &defragged);
|
||||||
ln->value = newsds;
|
ln->value = newsds;
|
||||||
defragged++;
|
defragged++;
|
||||||
}
|
}
|
||||||
@ -392,7 +392,7 @@ long activeDefragSdsListAndDict(list *l, dict *d, int dict_val_type) {
|
|||||||
* moved. Return value is the the dictEntry if found, or NULL if not found.
|
* moved. Return value is the the dictEntry if found, or NULL if not found.
|
||||||
* NOTE: this is very ugly code, but it let's us avoid the complication of
|
* NOTE: this is very ugly code, but it let's us avoid the complication of
|
||||||
* doing a scan on another dict. */
|
* doing a scan on another dict. */
|
||||||
dictEntry* replaceSateliteDictKeyPtrAndOrDefragDictEntry(dict *d, sds oldkey, sds newkey, uint64_t hash, long *defragged) {
|
dictEntry* replaceSatelliteDictKeyPtrAndOrDefragDictEntry(dict *d, sds oldkey, sds newkey, uint64_t hash, long *defragged) {
|
||||||
dictEntry **deref = dictFindEntryRefByPtrAndHash(d, oldkey, hash);
|
dictEntry **deref = dictFindEntryRefByPtrAndHash(d, oldkey, hash);
|
||||||
if (deref) {
|
if (deref) {
|
||||||
dictEntry *de = *deref;
|
dictEntry *de = *deref;
|
||||||
@ -408,7 +408,7 @@ dictEntry* replaceSateliteDictKeyPtrAndOrDefragDictEntry(dict *d, sds oldkey, sd
|
|||||||
return NULL;
|
return NULL;
|
||||||
}
|
}
|
||||||
|
|
||||||
bool replaceSateliteOSetKeyPtr(expireset &set, sds oldkey, sds newkey) {
|
bool replaceSatelliteOSetKeyPtr(expireset &set, sds oldkey, sds newkey) {
|
||||||
auto itr = set.find(oldkey);
|
auto itr = set.find(oldkey);
|
||||||
if (itr != set.end())
|
if (itr != set.end())
|
||||||
{
|
{
|
||||||
@ -454,7 +454,7 @@ long activeDefragQuickListNodes(quicklist *ql) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* when the value has lots of elements, we want to handle it later and not as
|
/* when the value has lots of elements, we want to handle it later and not as
|
||||||
* oart of the main dictionary scan. this is needed in order to prevent latency
|
* part of the main dictionary scan. this is needed in order to prevent latency
|
||||||
* spikes when handling large items */
|
* spikes when handling large items */
|
||||||
void defragLater(redisDb *db, dictEntry *kde) {
|
void defragLater(redisDb *db, dictEntry *kde) {
|
||||||
sds key = sdsdup((sds)dictGetKey(kde));
|
sds key = sdsdup((sds)dictGetKey(kde));
|
||||||
@ -521,7 +521,7 @@ long scanLaterZset(robj *ob, unsigned long *cursor) {
|
|||||||
if (ob->type != OBJ_ZSET || ob->encoding != OBJ_ENCODING_SKIPLIST)
|
if (ob->type != OBJ_ZSET || ob->encoding != OBJ_ENCODING_SKIPLIST)
|
||||||
return 0;
|
return 0;
|
||||||
zset *zs = (zset*)ptrFromObj(ob);
|
zset *zs = (zset*)ptrFromObj(ob);
|
||||||
dict *d = zs->pdict;
|
dict *d = zs->dict;
|
||||||
scanLaterZsetData data = {zs, 0};
|
scanLaterZsetData data = {zs, 0};
|
||||||
*cursor = dictScan(d, *cursor, scanLaterZsetCallback, defragDictBucketCallback, &data);
|
*cursor = dictScan(d, *cursor, scanLaterZsetCallback, defragDictBucketCallback, &data);
|
||||||
return data.defragged;
|
return data.defragged;
|
||||||
@ -596,20 +596,20 @@ long defragZsetSkiplist(redisDb *db, dictEntry *kde) {
|
|||||||
defragged++, zs->zsl = newzsl;
|
defragged++, zs->zsl = newzsl;
|
||||||
if ((newheader = (zskiplistNode*)activeDefragAlloc(zs->zsl->header)))
|
if ((newheader = (zskiplistNode*)activeDefragAlloc(zs->zsl->header)))
|
||||||
defragged++, zs->zsl->header = newheader;
|
defragged++, zs->zsl->header = newheader;
|
||||||
if (dictSize(zs->pdict) > cserver.active_defrag_max_scan_fields)
|
if (dictSize(zs->dict) > cserver.active_defrag_max_scan_fields)
|
||||||
defragLater(db, kde);
|
defragLater(db, kde);
|
||||||
else {
|
else {
|
||||||
dictIterator *di = dictGetIterator(zs->pdict);
|
dictIterator *di = dictGetIterator(zs->dict);
|
||||||
while((de = dictNext(di)) != NULL) {
|
while((de = dictNext(di)) != NULL) {
|
||||||
defragged += activeDefragZsetEntry(zs, de);
|
defragged += activeDefragZsetEntry(zs, de);
|
||||||
}
|
}
|
||||||
dictReleaseIterator(di);
|
dictReleaseIterator(di);
|
||||||
}
|
}
|
||||||
/* handle the dict struct */
|
/* handle the dict struct */
|
||||||
if ((newdict = (dict*)activeDefragAlloc(zs->pdict)))
|
if ((newdict = (dict*)activeDefragAlloc(zs->dict)))
|
||||||
defragged++, zs->pdict = newdict;
|
defragged++, zs->dict = newdict;
|
||||||
/* defrag the dict tables */
|
/* defrag the dict tables */
|
||||||
defragged += dictDefragTables(zs->pdict);
|
defragged += dictDefragTables(zs->dict);
|
||||||
return defragged;
|
return defragged;
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -834,7 +834,7 @@ long defragKey(redisDb *db, dictEntry *de) {
|
|||||||
{
|
{
|
||||||
defragged++, de->key = newsds;
|
defragged++, de->key = newsds;
|
||||||
if (!db->setexpire->empty()) {
|
if (!db->setexpire->empty()) {
|
||||||
bool fReplaced = replaceSateliteOSetKeyPtr(*db->setexpire, keysds, newsds);
|
bool fReplaced = replaceSatelliteOSetKeyPtr(*db->setexpire, keysds, newsds);
|
||||||
serverAssert(fReplaced == ob->FExpires());
|
serverAssert(fReplaced == ob->FExpires());
|
||||||
} else {
|
} else {
|
||||||
serverAssert(!ob->FExpires());
|
serverAssert(!ob->FExpires());
|
||||||
@ -909,7 +909,7 @@ void defragScanCallback(void *privdata, const dictEntry *de) {
|
|||||||
g_pserver->stat_active_defrag_scanned++;
|
g_pserver->stat_active_defrag_scanned++;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Defrag scan callback for each hash table bicket,
|
/* Defrag scan callback for each hash table bucket,
|
||||||
* used in order to defrag the dictEntry allocations. */
|
* used in order to defrag the dictEntry allocations. */
|
||||||
void defragDictBucketCallback(void *privdata, dictEntry **bucketref) {
|
void defragDictBucketCallback(void *privdata, dictEntry **bucketref) {
|
||||||
UNUSED(privdata); /* NOTE: this function is also used by both activeDefragCycle and scanLaterHash, etc. don't use privdata */
|
UNUSED(privdata); /* NOTE: this function is also used by both activeDefragCycle and scanLaterHash, etc. don't use privdata */
|
||||||
@ -943,7 +943,7 @@ float getAllocatorFragmentation(size_t *out_frag_bytes) {
|
|||||||
return frag_pct;
|
return frag_pct;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* We may need to defrag other globals, one small allcation can hold a full allocator run.
|
/* We may need to defrag other globals, one small allocation can hold a full allocator run.
|
||||||
* so although small, it is still important to defrag these */
|
* so although small, it is still important to defrag these */
|
||||||
long defragOtherGlobals() {
|
long defragOtherGlobals() {
|
||||||
long defragged = 0;
|
long defragged = 0;
|
||||||
@ -1015,7 +1015,7 @@ int defragLaterStep(redisDb *db, long long endtime) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* each time we enter this function we need to fetch the key from the dict again (if it still exists) */
|
/* each time we enter this function we need to fetch the key from the dict again (if it still exists) */
|
||||||
dictEntry *de = dictFind(db->pdict, defrag_later_current_key);
|
dictEntry *de = dictFind(db->dict, defrag_later_current_key);
|
||||||
key_defragged = g_pserver->stat_active_defrag_hits;
|
key_defragged = g_pserver->stat_active_defrag_hits;
|
||||||
do {
|
do {
|
||||||
int quit = 0;
|
int quit = 0;
|
||||||
@ -1114,7 +1114,7 @@ void activeDefragCycle(void) {
|
|||||||
if (hasActiveChildProcess())
|
if (hasActiveChildProcess())
|
||||||
return; /* Defragging memory while there's a fork will just do damage. */
|
return; /* Defragging memory while there's a fork will just do damage. */
|
||||||
|
|
||||||
/* Once a second, check if we the fragmentation justfies starting a scan
|
/* Once a second, check if the fragmentation justfies starting a scan
|
||||||
* or making it more aggressive. */
|
* or making it more aggressive. */
|
||||||
run_with_period(1000) {
|
run_with_period(1000) {
|
||||||
computeDefragCycles();
|
computeDefragCycles();
|
||||||
@ -1178,13 +1178,13 @@ void activeDefragCycle(void) {
|
|||||||
break; /* this will exit the function and we'll continue on the next cycle */
|
break; /* this will exit the function and we'll continue on the next cycle */
|
||||||
}
|
}
|
||||||
|
|
||||||
cursor = dictScan(db->pdict, cursor, defragScanCallback, defragDictBucketCallback, db);
|
cursor = dictScan(db->dict, cursor, defragScanCallback, defragDictBucketCallback, db);
|
||||||
|
|
||||||
/* Once in 16 scan iterations, 512 pointer reallocations. or 64 keys
|
/* Once in 16 scan iterations, 512 pointer reallocations. or 64 keys
|
||||||
* (if we have a lot of pointers in one hash bucket or rehasing),
|
* (if we have a lot of pointers in one hash bucket or rehasing),
|
||||||
* check if we reached the time limit.
|
* check if we reached the time limit.
|
||||||
* But regardless, don't start a new db in this loop, this is because after
|
* But regardless, don't start a new db in this loop, this is because after
|
||||||
* the last db we call defragOtherGlobals, which must be done in once cycle */
|
* the last db we call defragOtherGlobals, which must be done in one cycle */
|
||||||
if (!cursor || (++iterations > 16 ||
|
if (!cursor || (++iterations > 16 ||
|
||||||
g_pserver->stat_active_defrag_hits - prev_defragged > 512 ||
|
g_pserver->stat_active_defrag_hits - prev_defragged > 512 ||
|
||||||
g_pserver->stat_active_defrag_scanned - prev_scanned > 64)) {
|
g_pserver->stat_active_defrag_scanned - prev_scanned > 64)) {
|
||||||
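The loop above deliberately checks its time budget only once every 16 dictScan() calls (or every 512 moved pointers / 64 scanned keys), because reading the clock per bucket would itself add latency, and it avoids starting a new db in the same pass so defragOtherGlobals() runs in one cycle. A stripped-down version of that "amortize the clock read" pattern (editor's sketch with a dummy unit of work):

    #include <chrono>
    #include <cstdio>

    int main() {
        using clock = std::chrono::steady_clock;
        const auto endtime = clock::now() + std::chrono::microseconds(500);

        long iterations = 0, work_done = 0;
        bool timed_out = false;
        while (!timed_out) {
            ++work_done;                       // stand-in for one dictScan() step
            if (++iterations >= 16) {          // amortize the clock read
                iterations = 0;
                if (clock::now() > endtime) timed_out = true;
            }
        }
        printf("did %ld units of work before hitting the time limit\n", work_done);
    }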
|
@ -237,7 +237,9 @@ long long timeInMilliseconds(void) {
|
|||||||
return (((long long)tv.tv_sec)*1000)+(tv.tv_usec/1000);
|
return (((long long)tv.tv_sec)*1000)+(tv.tv_usec/1000);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Rehash for an amount of time between ms milliseconds and ms+1 milliseconds */
|
/* Rehash in ms+"delta" milliseconds. The value of "delta" is larger
|
||||||
|
* than 0, and is smaller than 1 in most cases. The exact upper bound
|
||||||
|
* depends on the running time of dictRehash(d,100).*/
|
||||||
int dictRehashMilliseconds(dict *d, int ms) {
|
int dictRehashMilliseconds(dict *d, int ms) {
|
||||||
long long start = timeInMilliseconds();
|
long long start = timeInMilliseconds();
|
||||||
int rehashes = 0;
|
int rehashes = 0;
|
||||||
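The rewritten comment above nails down what dictRehashMilliseconds() promises: keep doing fixed-size rehash steps until at least ms milliseconds have passed, so the call may overshoot by the duration of one dictRehash(d,100) step. A self-contained sketch of that time-boxed loop (editor's illustration; the chunk of work is a stand-in, and the real loop also stops early once rehashing is finished):

    #include <chrono>
    #include <cstdio>

    static long long timeInMs() {
        using namespace std::chrono;
        return duration_cast<milliseconds>(
            steady_clock::now().time_since_epoch()).count();
    }

    // Do fixed-size chunks of work until `ms` milliseconds have elapsed.
    // May overshoot by up to one chunk; a real version would also stop
    // as soon as the work itself is done.
    static int rehashMilliseconds(int ms) {
        long long start = timeInMs();
        int chunks = 0;
        for (;;) {
            ++chunks;                      // stand-in for dictRehash(d,100)
            if (timeInMs() - start > ms) break;
        }
        return chunks;
    }

    int main() {
        printf("ran %d chunks in ~1ms\n", rehashMilliseconds(1));
    }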
@ -749,7 +751,7 @@ unsigned int dictGetSomeKeys(dict *d, dictEntry **des, unsigned int count) {
|
|||||||
* this function instead what we do is to consider a "linear" range of the table
|
* this function instead what we do is to consider a "linear" range of the table
|
||||||
* that may be constituted of N buckets with chains of different lengths
|
* that may be constituted of N buckets with chains of different lengths
|
||||||
* appearing one after the other. Then we report a random element in the range.
|
* appearing one after the other. Then we report a random element in the range.
|
||||||
* In this way we smooth away the problem of different chain lenghts. */
|
* In this way we smooth away the problem of different chain lengths. */
|
||||||
#define GETFAIR_NUM_ENTRIES 15
|
#define GETFAIR_NUM_ENTRIES 15
|
||||||
dictEntry *dictGetFairRandomKey(dict *d) {
|
dictEntry *dictGetFairRandomKey(dict *d) {
|
||||||
dictEntry *entries[GETFAIR_NUM_ENTRIES];
|
dictEntry *entries[GETFAIR_NUM_ENTRIES];
|
||||||
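The corrected comment above describes dictGetFairRandomKey(): picking a random bucket and returning its first element favours short chains, so instead the function gathers the elements found in a linear range of consecutive buckets and answers uniformly from that pool, smoothing out the different chain lengths. A toy version over a vector-of-chains table (editor's illustration; kSample plays the same capping role as GETFAIR_NUM_ENTRIES):

    #include <cstdio>
    #include <random>
    #include <string>
    #include <vector>

    int main() {
        // Toy table: buckets with chains of different lengths.
        std::vector<std::vector<std::string>> table = {
            {"a"}, {}, {"b", "c", "d"}, {"e"}, {}, {"f", "g"},
        };

        std::mt19937 rng(std::random_device{}());
        const size_t kSample = 15;

        // Walk a linear range of buckets starting at a random one, collect
        // up to kSample entries, then answer uniformly from that pool.
        std::vector<std::string> pool;
        size_t start = rng() % table.size();
        for (size_t i = 0; i < table.size() && pool.size() < kSample; i++) {
            for (const auto &e : table[(start + i) % table.size()]) {
                pool.push_back(e);
                if (pool.size() == kSample) break;
            }
        }
        if (!pool.empty())
            printf("fair pick: %s\n", pool[rng() % pool.size()].c_str());
    }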
@ -1119,7 +1121,7 @@ size_t _dictGetStatsHt(char *buf, size_t bufsize, dictht *ht, int tableid) {
|
|||||||
i, clvector[i], ((float)clvector[i]/ht->size)*100);
|
i, clvector[i], ((float)clvector[i]/ht->size)*100);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Unlike snprintf(), teturn the number of characters actually written. */
|
/* Unlike snprintf(), return the number of characters actually written. */
|
||||||
if (bufsize) buf[bufsize-1] = '\0';
|
if (bufsize) buf[bufsize-1] = '\0';
|
||||||
return strlen(buf);
|
return strlen(buf);
|
||||||
}
|
}
|
||||||
|
@ -8,7 +8,7 @@
|
|||||||
* to be backward compatible are still in big endian) because most of the
|
* to be backward compatible are still in big endian) because most of the
|
||||||
* production environments are little endian, and we have a lot of conversions
|
* production environments are little endian, and we have a lot of conversions
|
||||||
* in a few places because ziplists, intsets, zipmaps, need to be endian-neutral
|
* in a few places because ziplists, intsets, zipmaps, need to be endian-neutral
|
||||||
* even in memory, since they are serialied on RDB files directly with a single
|
* even in memory, since they are serialized on RDB files directly with a single
|
||||||
* write(2) without other additional steps.
|
* write(2) without other additional steps.
|
||||||
*
|
*
|
||||||
* ----------------------------------------------------------------------------
|
* ----------------------------------------------------------------------------
|
||||||
|
@ -42,7 +42,7 @@
|
|||||||
/* To improve the quality of the LRU approximation we take a set of keys
|
/* To improve the quality of the LRU approximation we take a set of keys
|
||||||
* that are good candidate for eviction across freeMemoryIfNeeded() calls.
|
* that are good candidate for eviction across freeMemoryIfNeeded() calls.
|
||||||
*
|
*
|
||||||
* Entries inside the eviciton pool are taken ordered by idle time, putting
|
* Entries inside the eviction pool are taken ordered by idle time, putting
|
||||||
* greater idle times to the right (ascending order).
|
* greater idle times to the right (ascending order).
|
||||||
*
|
*
|
||||||
* When an LFU policy is used instead, a reverse frequency indication is used
|
* When an LFU policy is used instead, a reverse frequency indication is used
|
||||||
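The comment fixed above is the contract of the eviction pool: a small set of candidate keys kept ordered by idle time, greater idle to the right, so freeMemoryIfNeeded() always takes its victim from the right end; with an LFU policy a reverse frequency stands in for idleness. A compact stand-in that keeps the same ordering rule in a capped sorted vector (editor's sketch, not the Redis pool layout):

    #include <algorithm>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Candidate { std::string key; unsigned long long idle; };

    // Keep ascending idle time: greater idle (better victims) on the right.
    static void poolInsert(std::vector<Candidate> &pool, Candidate c, size_t cap) {
        auto pos = std::lower_bound(pool.begin(), pool.end(), c,
            [](const Candidate &a, const Candidate &b) { return a.idle < b.idle; });
        pool.insert(pos, std::move(c));
        if (pool.size() > cap) pool.erase(pool.begin()); // drop the least idle one
    }

    int main() {
        std::vector<Candidate> pool;
        poolInsert(pool, {"k1", 120}, 4);
        poolInsert(pool, {"k2", 5000}, 4);
        poolInsert(pool, {"k3", 40}, 4);
        poolInsert(pool, {"k4", 900}, 4);
        poolInsert(pool, {"k5", 10}, 4);   // pool is full, least idle entry is dropped
        printf("evict first: %s (idle=%llu)\n",
               pool.back().key.c_str(), pool.back().idle);
    }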
@ -88,7 +88,7 @@ unsigned int LRU_CLOCK(void) {
|
|||||||
|
|
||||||
/* Given an object returns the min number of milliseconds the object was never
|
/* Given an object returns the min number of milliseconds the object was never
|
||||||
* requested, using an approximated LRU algorithm. */
|
* requested, using an approximated LRU algorithm. */
|
||||||
unsigned long long estimateObjectIdleTime(robj *o) {
|
unsigned long long estimateObjectIdleTime(robj_roptr o) {
|
||||||
unsigned long long lruclock = LRU_CLOCK();
|
unsigned long long lruclock = LRU_CLOCK();
|
||||||
if (lruclock >= o->lru) {
|
if (lruclock >= o->lru) {
|
||||||
return (lruclock - o->lru) * LRU_CLOCK_RESOLUTION;
|
return (lruclock - o->lru) * LRU_CLOCK_RESOLUTION;
|
||||||
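estimateObjectIdleTime() above turns the gap between the global LRU clock and the object's stored lru stamp into milliseconds, LRU_CLOCK_RESOLUTION per tick; the branch shown covers lruclock >= o->lru, and the branch outside this hunk has to cope with the clock being a small counter that wraps. A hedged sketch of both branches, assuming a 24-bit clock and 1000 ms ticks (editor's illustration; these constants are assumptions, not read from this diff):

    #include <cstdio>

    // Assumed, illustrative constants: a 24-bit LRU clock, 1000 ms per tick.
    const unsigned long long kLruClockMax = (1 << 24) - 1;
    const unsigned long long kLruClockResolution = 1000;

    // Idle time in ms given the current clock and the object's stored stamp,
    // handling the wrap-around of the small counter.
    unsigned long long estimateIdleMs(unsigned long long lruclock,
                                      unsigned long long obj_lru) {
        if (lruclock >= obj_lru)
            return (lruclock - obj_lru) * kLruClockResolution;
        return (lruclock + (kLruClockMax - obj_lru)) * kLruClockResolution;
    }

    int main() {
        printf("%llu ms idle\n", estimateIdleMs(100, 40));              // no wrap
        printf("%llu ms idle\n", estimateIdleMs(5, kLruClockMax - 2));  // wrapped
    }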
@ -216,7 +216,7 @@ void processEvictionCandidate(int dbid, sds key, robj *o, const expireEntry *e,
|
|||||||
/* Try to reuse the cached SDS string allocated in the pool entry,
|
/* Try to reuse the cached SDS string allocated in the pool entry,
|
||||||
* because allocating and deallocating this object is costly
|
* because allocating and deallocating this object is costly
|
||||||
* (according to the profiler, not my fantasy. Remember:
|
* (according to the profiler, not my fantasy. Remember:
|
||||||
* premature optimizbla bla bla bla. */
|
* premature optimization bla bla bla bla. */
|
||||||
int klen = sdslen(key);
|
int klen = sdslen(key);
|
||||||
if (klen > EVPOOL_CACHED_SDS_SIZE) {
|
if (klen > EVPOOL_CACHED_SDS_SIZE) {
|
||||||
pool[k].key = sdsdup(key);
|
pool[k].key = sdsdup(key);
|
||||||
@ -357,7 +357,7 @@ unsigned long LFUDecrAndReturn(robj *o) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* ----------------------------------------------------------------------------
|
/* ----------------------------------------------------------------------------
|
||||||
* The external API for eviction: freeMemroyIfNeeded() is called by the
|
* The external API for eviction: freeMemoryIfNeeded() is called by the
|
||||||
* server when there is data to add in order to make space if needed.
|
* server when there is data to add in order to make space if needed.
|
||||||
* --------------------------------------------------------------------------*/
|
* --------------------------------------------------------------------------*/
|
||||||
|
|
||||||
@ -458,7 +458,7 @@ int getMaxmemoryState(size_t *total, size_t *logical, size_t *tofree, float *lev
|
|||||||
*
|
*
|
||||||
* The function returns C_OK if we are under the memory limit or if we
|
* The function returns C_OK if we are under the memory limit or if we
|
||||||
* were over the limit, but the attempt to free memory was successful.
|
* were over the limit, but the attempt to free memory was successful.
|
||||||
* Otehrwise if we are over the memory limit, but not enough memory
|
* Otherwise if we are over the memory limit, but not enough memory
|
||||||
* was freed to return back under the limit, the function returns C_ERR. */
|
* was freed to return back under the limit, the function returns C_ERR. */
|
||||||
int freeMemoryIfNeeded(void) {
|
int freeMemoryIfNeeded(void) {
|
||||||
serverAssert(GlobalLocksAcquired());
|
serverAssert(GlobalLocksAcquired());
|
||||||
@ -508,8 +508,8 @@ int freeMemoryIfNeeded(void) {
|
|||||||
db = g_pserver->db+i;
|
db = g_pserver->db+i;
|
||||||
if (g_pserver->maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS)
|
if (g_pserver->maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS)
|
||||||
{
|
{
|
||||||
if ((keys = dictSize(db->pdict)) != 0) {
|
if ((keys = dictSize(db->dict)) != 0) {
|
||||||
evictionPoolPopulate(i, db->pdict, nullptr, pool);
|
evictionPoolPopulate(i, db->dict, nullptr, pool);
|
||||||
total_keys += keys;
|
total_keys += keys;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -517,7 +517,7 @@ int freeMemoryIfNeeded(void) {
|
|||||||
{
|
{
|
||||||
keys = db->setexpire->size();
|
keys = db->setexpire->size();
|
||||||
if (keys != 0)
|
if (keys != 0)
|
||||||
evictionPoolPopulate(i, db->pdict, db->setexpire, pool);
|
evictionPoolPopulate(i, db->dict, db->setexpire, pool);
|
||||||
total_keys += keys;
|
total_keys += keys;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -529,7 +529,7 @@ int freeMemoryIfNeeded(void) {
|
|||||||
bestdbid = pool[k].dbid;
|
bestdbid = pool[k].dbid;
|
||||||
sds key = nullptr;
|
sds key = nullptr;
|
||||||
|
|
||||||
dictEntry *de = dictFind(g_pserver->db[pool[k].dbid].pdict,pool[k].key);
|
dictEntry *de = dictFind(g_pserver->db[pool[k].dbid].dict,pool[k].key);
|
||||||
if (de != nullptr && (g_pserver->maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS || ((robj*)dictGetVal(de))->FExpires()))
|
if (de != nullptr && (g_pserver->maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS || ((robj*)dictGetVal(de))->FExpires()))
|
||||||
key = (sds)dictGetKey(de);
|
key = (sds)dictGetKey(de);
|
||||||
|
|
||||||
@ -563,8 +563,8 @@ int freeMemoryIfNeeded(void) {
|
|||||||
db = g_pserver->db+j;
|
db = g_pserver->db+j;
|
||||||
if (g_pserver->maxmemory_policy == MAXMEMORY_ALLKEYS_RANDOM)
|
if (g_pserver->maxmemory_policy == MAXMEMORY_ALLKEYS_RANDOM)
|
||||||
{
|
{
|
||||||
if (dictSize(db->pdict) != 0) {
|
if (dictSize(db->dict) != 0) {
|
||||||
dictEntry *de = dictGetRandomKey(db->pdict);
|
dictEntry *de = dictGetRandomKey(db->dict);
|
||||||
bestkey = (sds)dictGetKey(de);
|
bestkey = (sds)dictGetKey(de);
|
||||||
bestdbid = j;
|
bestdbid = j;
|
||||||
break;
|
break;
|
||||||
@ -593,6 +593,8 @@ int freeMemoryIfNeeded(void) {
|
|||||||
* we are freeing removing the key, but we can't account for
|
* we are freeing removing the key, but we can't account for
|
||||||
* that otherwise we would never exit the loop.
|
* that otherwise we would never exit the loop.
|
||||||
*
|
*
|
||||||
|
* Same for CSC invalidation messages generated by signalModifiedKey.
|
||||||
|
*
|
||||||
* AOF and Output buffer memory will be freed eventually so
|
* AOF and Output buffer memory will be freed eventually so
|
||||||
* we only care about memory used by the key space. */
|
* we only care about memory used by the key space. */
|
||||||
delta = (long long) zmalloc_used_memory();
|
delta = (long long) zmalloc_used_memory();
|
||||||
@ -601,12 +603,12 @@ int freeMemoryIfNeeded(void) {
|
|||||||
dbAsyncDelete(db,keyobj);
|
dbAsyncDelete(db,keyobj);
|
||||||
else
|
else
|
||||||
dbSyncDelete(db,keyobj);
|
dbSyncDelete(db,keyobj);
|
||||||
signalModifiedKey(NULL,db,keyobj);
|
|
||||||
latencyEndMonitor(eviction_latency);
|
latencyEndMonitor(eviction_latency);
|
||||||
latencyAddSampleIfNeeded("eviction-del",eviction_latency);
|
latencyAddSampleIfNeeded("eviction-del",eviction_latency);
|
||||||
delta -= (long long) zmalloc_used_memory();
|
delta -= (long long) zmalloc_used_memory();
|
||||||
mem_freed += delta;
|
mem_freed += delta;
|
||||||
g_pserver->stat_evictedkeys++;
|
g_pserver->stat_evictedkeys++;
|
||||||
|
signalModifiedKey(NULL,db,keyobj);
|
||||||
notifyKeyspaceEvent(NOTIFY_EVICTED, "evicted",
|
notifyKeyspaceEvent(NOTIFY_EVICTED, "evicted",
|
||||||
keyobj, db->id);
|
keyobj, db->id);
|
||||||
decrRefCount(keyobj);
|
decrRefCount(keyobj);
|
||||||
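The hunk above is the accounting step of the eviction loop: snapshot allocator usage, delete the victim key (lazily or synchronously), and credit the difference to mem_freed; signalModifiedKey() is moved after that measurement so that, per the comment added earlier in this diff, the CSC invalidation messages it generates are not mistaken for freed memory. The measurement pattern on its own, with a toy map standing in for both keyspace and allocator (editor's illustration):

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    // Stand-in for zmalloc_used_memory(): bytes held by the toy keyspace.
    static long long usedMemory(const std::map<std::string, std::string> &db) {
        long long total = 0;
        for (const auto &kv : db)
            total += (long long)(kv.first.size() + kv.second.size());
        return total;
    }

    int main() {
        std::map<std::string, std::string> db = {
            {"big", std::string(1024, 'x')}, {"small", "v"},
        };
        long long to_free = 512, mem_freed = 0;
        std::vector<std::string> victims = {"big", "small"};

        for (const auto &key : victims) {
            if (mem_freed >= to_free) break;
            long long delta = usedMemory(db);   // snapshot before the delete
            db.erase(key);                      // the eviction itself
            delta -= usedMemory(db);            // what the delete really freed
            mem_freed += delta;
            printf("evicted %s, freed %lld (total %lld)\n",
                   key.c_str(), delta, mem_freed);
        }
    }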
|
@ -54,7 +54,7 @@ void activeExpireCycleExpireFullKey(redisDb *db, const char *key) {
|
|||||||
dbSyncDelete(db,keyobj);
|
dbSyncDelete(db,keyobj);
|
||||||
notifyKeyspaceEvent(NOTIFY_EXPIRED,
|
notifyKeyspaceEvent(NOTIFY_EXPIRED,
|
||||||
"expired",keyobj,db->id);
|
"expired",keyobj,db->id);
|
||||||
trackingInvalidateKey(NULL, keyobj);
|
signalModifiedKey(NULL, db, keyobj);
|
||||||
decrRefCount(keyobj);
|
decrRefCount(keyobj);
|
||||||
g_pserver->stat_expiredkeys++;
|
g_pserver->stat_expiredkeys++;
|
||||||
}
|
}
|
||||||
@ -76,7 +76,7 @@ void activeExpireCycleExpire(redisDb *db, expireEntry &e, long long now) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
expireEntryFat *pfat = e.pfatentry();
|
expireEntryFat *pfat = e.pfatentry();
|
||||||
dictEntry *de = dictFind(db->pdict, e.key());
|
dictEntry *de = dictFind(db->dict, e.key());
|
||||||
robj *val = (robj*)dictGetVal(de);
|
robj *val = (robj*)dictGetVal(de);
|
||||||
int deleted = 0;
|
int deleted = 0;
|
||||||
|
|
||||||
@ -297,7 +297,7 @@ void pexpireMemberAtCommand(client *c)
|
|||||||
* Expire cycle type:
|
* Expire cycle type:
|
||||||
*
|
*
|
||||||
* If type is ACTIVE_EXPIRE_CYCLE_FAST the function will try to run a
|
* If type is ACTIVE_EXPIRE_CYCLE_FAST the function will try to run a
|
||||||
* "fast" expire cycle that takes no longer than EXPIRE_FAST_CYCLE_DURATION
|
* "fast" expire cycle that takes no longer than ACTIVE_EXPIRE_CYCLE_FAST_DURATION
|
||||||
* microseconds, and is not repeated again before the same amount of time.
|
* microseconds, and is not repeated again before the same amount of time.
|
||||||
*
|
*
|
||||||
* If type is ACTIVE_EXPIRE_CYCLE_SLOW, that normal expire cycle is
|
* If type is ACTIVE_EXPIRE_CYCLE_SLOW, that normal expire cycle is
|
||||||
@ -484,7 +484,7 @@ void expireSlaveKeys(void) {
|
|||||||
redisDb *db = g_pserver->db+dbid;
|
redisDb *db = g_pserver->db+dbid;
|
||||||
|
|
||||||
// the expire is hashed based on the key pointer, so we need the point in the main db
|
// the expire is hashed based on the key pointer, so we need the point in the main db
|
||||||
dictEntry *deMain = dictFind(db->pdict, keyname);
|
dictEntry *deMain = dictFind(db->dict, keyname);
|
||||||
auto itr = db->setexpire->end();
|
auto itr = db->setexpire->end();
|
||||||
if (deMain != nullptr)
|
if (deMain != nullptr)
|
||||||
itr = db->setexpire->find((sds)dictGetKey(deMain));
|
itr = db->setexpire->find((sds)dictGetKey(deMain));
|
||||||
@ -519,7 +519,7 @@ void expireSlaveKeys(void) {
|
|||||||
else
|
else
|
||||||
dictDelete(slaveKeysWithExpire,keyname);
|
dictDelete(slaveKeysWithExpire,keyname);
|
||||||
|
|
||||||
/* Stop conditions: found 3 keys we cna't expire in a row or
|
/* Stop conditions: found 3 keys we can't expire in a row or
|
||||||
* time limit was reached. */
|
* time limit was reached. */
|
||||||
cycles++;
|
cycles++;
|
||||||
if (noexpire > 3) break;
|
if (noexpire > 3) break;
|
||||||
@ -571,7 +571,7 @@ size_t getSlaveKeyWithExpireCount(void) {
|
|||||||
*
|
*
|
||||||
* Note: technically we should handle the case of a single DB being flushed
|
* Note: technically we should handle the case of a single DB being flushed
|
||||||
* but it is not worth it since anyway race conditions using the same set
|
* but it is not worth it since anyway race conditions using the same set
|
||||||
* of key names in a wriatable replica and in its master will lead to
|
* of key names in a writable replica and in its master will lead to
|
||||||
* inconsistencies. This is just a best-effort thing we do. */
|
* inconsistencies. This is just a best-effort thing we do. */
|
||||||
void flushSlaveKeysWithExpireList(void) {
|
void flushSlaveKeysWithExpireList(void) {
|
||||||
if (slaveKeysWithExpire) {
|
if (slaveKeysWithExpire) {
|
||||||
@ -595,7 +595,7 @@ int checkAlreadyExpired(long long when) {
|
|||||||
*----------------------------------------------------------------------------*/
|
*----------------------------------------------------------------------------*/
|
||||||
|
|
||||||
/* This is the generic command implementation for EXPIRE, PEXPIRE, EXPIREAT
|
/* This is the generic command implementation for EXPIRE, PEXPIRE, EXPIREAT
|
||||||
* and PEXPIREAT. Because the commad second argument may be relative or absolute
|
* and PEXPIREAT. Because the command second argument may be relative or absolute
|
||||||
* the "basetime" argument is used to signal what the base time is (either 0
|
* the "basetime" argument is used to signal what the base time is (either 0
|
||||||
* for *AT variants of the command, or the current time for relative expires).
|
* for *AT variants of the command, or the current time for relative expires).
|
||||||
*
|
*
|
||||||
|
@ -143,8 +143,8 @@ double extractUnitOrReply(client *c, robj *unit) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* Input Argument Helper.
|
/* Input Argument Helper.
|
||||||
* Extract the dinstance from the specified two arguments starting at 'argv'
|
* Extract the distance from the specified two arguments starting at 'argv'
|
||||||
* that shouldbe in the form: <number> <unit> and return the dinstance in the
|
* that should be in the form: <number> <unit>, and return the distance in the
|
||||||
* specified unit on success. *conversions is populated with the coefficient
|
* specified unit on success. *conversions is populated with the coefficient
|
||||||
* to use in order to convert meters to the unit.
|
* to use in order to convert meters to the unit.
|
||||||
*
|
*
|
||||||
@ -651,7 +651,7 @@ void georadiusGeneric(client *c, int flags) {
|
|||||||
|
|
||||||
if (maxelelen < elelen) maxelelen = elelen;
|
if (maxelelen < elelen) maxelelen = elelen;
|
||||||
znode = zslInsert(zs->zsl,score,gp->member);
|
znode = zslInsert(zs->zsl,score,gp->member);
|
||||||
serverAssert(dictAdd(zs->pdict,gp->member,&znode->score) == DICT_OK);
|
serverAssert(dictAdd(zs->dict,gp->member,&znode->score) == DICT_OK);
|
||||||
gp->member = NULL;
|
gp->member = NULL;
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -788,7 +788,7 @@ void geoposCommand(client *c) {
|
|||||||
|
|
||||||
/* GEODIST key ele1 ele2 [unit]
|
/* GEODIST key ele1 ele2 [unit]
|
||||||
*
|
*
|
||||||
* Return the distance, in meters by default, otherwise accordig to "unit",
|
* Return the distance, in meters by default, otherwise according to "unit",
|
||||||
* between points ele1 and ele2. If one or more elements are missing NULL
|
* between points ele1 and ele2. If one or more elements are missing NULL
|
||||||
* is returned. */
|
* is returned. */
|
||||||
void geodistCommand(client *c) {
|
void geodistCommand(client *c) {
|
||||||
|
@ -65,7 +65,7 @@ uint8_t geohashEstimateStepsByRadius(double range_meters, double lat) {
|
|||||||
}
|
}
|
||||||
step -= 2; /* Make sure range is included in most of the base cases. */
|
step -= 2; /* Make sure range is included in most of the base cases. */
|
||||||
|
|
||||||
/* Wider range torwards the poles... Note: it is possible to do better
|
/* Wider range towards the poles... Note: it is possible to do better
|
||||||
* than this approximation by computing the distance between meridians
|
* than this approximation by computing the distance between meridians
|
||||||
* at this latitude, but this does the trick for now. */
|
* at this latitude, but this does the trick for now. */
|
||||||
if (lat > 66 || lat < -66) {
|
if (lat > 66 || lat < -66) {
|
||||||
@ -81,7 +81,7 @@ uint8_t geohashEstimateStepsByRadius(double range_meters, double lat) {
|
|||||||
|
|
||||||
/* Return the bounding box of the search area centered at latitude,longitude
|
/* Return the bounding box of the search area centered at latitude,longitude
|
||||||
* having a radius of radius_meter. bounds[0] - bounds[2] is the minimum
|
* having a radius of radius_meter. bounds[0] - bounds[2] is the minimum
|
||||||
* and maxium longitude, while bounds[1] - bounds[3] is the minimum and
|
* and maximum longitude, while bounds[1] - bounds[3] is the minimum and
|
||||||
* maximum latitude.
|
* maximum latitude.
|
||||||
*
|
*
|
||||||
* This function does not behave correctly with very large radius values, for
|
* This function does not behave correctly with very large radius values, for
|
||||||
|
@ -95,7 +95,7 @@ struct commandHelp {
|
|||||||
1,
|
1,
|
||||||
"2.0.0" },
|
"2.0.0" },
|
||||||
{ "AUTH",
|
{ "AUTH",
|
||||||
"password",
|
"[username] password",
|
||||||
"Authenticate to the server",
|
"Authenticate to the server",
|
||||||
8,
|
8,
|
||||||
"1.0.0" },
|
"1.0.0" },
|
||||||
@ -736,7 +736,7 @@ struct commandHelp {
|
|||||||
1,
|
1,
|
||||||
"1.0.0" },
|
"1.0.0" },
|
||||||
{ "MIGRATE",
|
{ "MIGRATE",
|
||||||
"host port key|"" destination-db timeout [COPY] [REPLACE] [AUTH password] [KEYS key]",
|
"host port key|"" destination-db timeout [COPY] [REPLACE] [AUTH password] [AUTH2 username password] [KEYS key]",
|
||||||
"Atomically transfer a key from a Redis instance to another one.",
|
"Atomically transfer a key from a Redis instance to another one.",
|
||||||
0,
|
0,
|
||||||
"2.6.0" },
|
"2.6.0" },
|
||||||
|
@ -36,9 +36,9 @@
|
|||||||
|
|
||||||
/* The Redis HyperLogLog implementation is based on the following ideas:
|
/* The Redis HyperLogLog implementation is based on the following ideas:
|
||||||
*
|
*
|
||||||
* * The use of a 64 bit hash function as proposed in [1], in order to don't
|
* * The use of a 64 bit hash function as proposed in [1], in order to estimate
|
||||||
* limited to cardinalities up to 10^9, at the cost of just 1 additional
|
* cardinalities larger than 10^9, at the cost of just 1 additional bit per
|
||||||
* bit per register.
|
* register.
|
||||||
* * The use of 16384 6-bit registers for a great level of accuracy, using
|
* * The use of 16384 6-bit registers for a great level of accuracy, using
|
||||||
* a total of 12k per key.
|
* a total of 12k per key.
|
||||||
* * The use of the Redis string data type. No new type is introduced.
|
* * The use of the Redis string data type. No new type is introduced.
|
||||||
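The reworded comment above states the two design points of the HLL implementation: a 64-bit hash so cardinalities well past 10^9 can be estimated, and 2^14 = 16384 six-bit registers, i.e. 12 KB per key. Each hash is split into a register index and a pattern whose run of zeros (plus one) is the value the register may be raised to; a generic sketch of that bookkeeping follows (editor's illustration — std::hash stands in for the real 64-bit hash and the exact Redis bit layout may differ):

    #include <cstdint>
    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    const int P = 14;                   // 2^14 = 16384 registers
    const uint64_t M = 1ULL << P;

    static void hllAdd(std::vector<uint8_t> &reg, const std::string &elem) {
        uint64_t h = std::hash<std::string>{}(elem);    // stand-in 64-bit hash
        uint64_t index = h & (M - 1);                   // low 14 bits -> register
        uint64_t rest = (h >> P) | (1ULL << (64 - P));  // guard bit so the run ends
        uint8_t count = 1;
        while ((rest & 1) == 0) { count++; rest >>= 1; }  // zero-run length + 1
        if (count > reg[index]) reg[index] = count;       // registers only grow
    }

    int main() {
        std::vector<uint8_t> registers(M, 0);
        hllAdd(registers, "foo");
        hllAdd(registers, "bar");
        int nonzero = 0;
        for (uint8_t r : registers) if (r) nonzero++;
        printf("%d of %llu registers set\n", nonzero, (unsigned long long)M);
    }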
@ -281,7 +281,7 @@ static const char *invalid_hll_err = "-INVALIDOBJ Corrupted HLL object detected\
|
|||||||
* So we right shift of 0 bits (no shift in practice) and
|
* So we right shift of 0 bits (no shift in practice) and
|
||||||
* left shift the next byte of 8 bits, even if we don't use it,
|
* left shift the next byte of 8 bits, even if we don't use it,
|
||||||
* but this has the effect of clearing the bits so the result
|
* but this has the effect of clearing the bits so the result
|
||||||
* will not be affacted after the OR.
|
* will not be affected after the OR.
|
||||||
*
|
*
|
||||||
* -------------------------------------------------------------------------
|
* -------------------------------------------------------------------------
|
||||||
*
|
*
|
||||||
@ -299,7 +299,7 @@ static const char *invalid_hll_err = "-INVALIDOBJ Corrupted HLL object detected\
|
|||||||
* |11000000| <- Our byte at b0
|
* |11000000| <- Our byte at b0
|
||||||
* +--------+
|
* +--------+
|
||||||
*
|
*
|
||||||
* To create a AND-mask to clear the bits about this position, we just
|
* To create an AND-mask to clear the bits about this position, we just
|
||||||
* initialize the mask with the value 63, left shift it of "fs" bits,
|
* initialize the mask with the value 63, left shift it of "fs" bits,
|
||||||
* and finally invert the result.
|
* and finally invert the result.
|
||||||
*
|
*
|
||||||
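The comment above walks through updating a 6-bit register that straddles a byte boundary: build an AND-mask by shifting 63 left by the register's bit offset and inverting it, clear those bits, OR in the low part of the new value, and finish in the next byte. A small get/set pair for 6-bit registers packed least-significant-bit first into a byte array (editor's illustration of the generic arithmetic, not a copy of the Redis macros):

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Generic packing, not the Redis macros: register i starts at bit 6*i, LSB first.
    static uint8_t getReg(const std::vector<uint8_t> &p, size_t i) {
        size_t byte = (i * 6) / 8;
        size_t fb   = (i * 6) % 8;                 // first bit inside the byte
        uint8_t b0 = p[byte];
        uint8_t b1 = (fb > 2) ? p[byte + 1] : 0;   // spills over only when fb > 2
        return ((b0 >> fb) | (b1 << (8 - fb))) & 63;
    }

    static void setReg(std::vector<uint8_t> &p, size_t i, uint8_t val) {
        size_t byte = (i * 6) / 8;
        size_t fb   = (i * 6) % 8;
        p[byte] &= ~(63 << fb);                    // the AND-mask from the comment
        p[byte] |= (uint8_t)(val << fb);
        if (fb > 2) {                              // finish in the next byte
            p[byte + 1] &= ~(63 >> (8 - fb));
            p[byte + 1] |= (uint8_t)(val >> (8 - fb));
        }
    }

    int main() {
        std::vector<uint8_t> regs((16384 * 6 + 7) / 8, 0);   // 12288 bytes = 12k
        setReg(regs, 5, 42);
        setReg(regs, 6, 17);
        printf("reg5=%u reg6=%u\n", (unsigned)getReg(regs, 5), (unsigned)getReg(regs, 6));
    }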
@ -775,7 +775,7 @@ int hllSparseSet(robj *o, long index, uint8_t count) {
|
|||||||
* by a ZERO opcode with len > 1, or by an XZERO opcode.
|
* by a ZERO opcode with len > 1, or by an XZERO opcode.
|
||||||
*
|
*
|
||||||
* In those cases the original opcode must be split into multiple
|
* In those cases the original opcode must be split into multiple
|
||||||
* opcodes. The worst case is an XZERO split in the middle resuling into
|
* opcodes. The worst case is an XZERO split in the middle resulting into
|
||||||
* XZERO - VAL - XZERO, so the resulting sequence max length is
|
* XZERO - VAL - XZERO, so the resulting sequence max length is
|
||||||
* 5 bytes.
|
* 5 bytes.
|
||||||
*
|
*
|
||||||
@ -907,7 +907,7 @@ promote: /* Promote to dense representation. */
|
|||||||
* the element belongs to is incremented if needed.
|
* the element belongs to is incremented if needed.
|
||||||
*
|
*
|
||||||
* This function is actually a wrapper for hllSparseSet(), it only performs
|
* This function is actually a wrapper for hllSparseSet(), it only performs
|
||||||
* the hashshing of the elmenet to obtain the index and zeros run length. */
|
* the hashshing of the element to obtain the index and zeros run length. */
|
||||||
int hllSparseAdd(robj *o, unsigned char *ele, size_t elesize) {
|
int hllSparseAdd(robj *o, unsigned char *ele, size_t elesize) {
|
||||||
long index;
|
long index;
|
||||||
uint8_t count = hllPatLen(ele,elesize,&index);
|
uint8_t count = hllPatLen(ele,elesize,&index);
|
||||||
@ -1022,7 +1022,7 @@ uint64_t hllCount(struct hllhdr *hdr, int *invalid) {
|
|||||||
double m = HLL_REGISTERS;
|
double m = HLL_REGISTERS;
|
||||||
double E;
|
double E;
|
||||||
int j;
|
int j;
|
||||||
/* Note that reghisto size could be just HLL_Q+2, becuase HLL_Q+1 is
|
/* Note that reghisto size could be just HLL_Q+2, because HLL_Q+1 is
|
||||||
* the maximum frequency of the "000...1" sequence the hash function is
|
* the maximum frequency of the "000...1" sequence the hash function is
|
||||||
* able to return. However it is slow to check for sanity of the
|
* able to return. However it is slow to check for sanity of the
|
||||||
* input: instead we history array at a safe size: overflows will
|
* input: instead we history array at a safe size: overflows will
|
||||||
|
@ -85,7 +85,7 @@ int THPGetAnonHugePagesSize(void) {
|
|||||||
/* ---------------------------- Latency API --------------------------------- */
|
/* ---------------------------- Latency API --------------------------------- */
|
||||||
|
|
||||||
/* Latency monitor initialization. We just need to create the dictionary
|
/* Latency monitor initialization. We just need to create the dictionary
|
||||||
* of time series, each time serie is created on demand in order to avoid
|
* of time series, each time series is created on demand in order to avoid
|
||||||
* having a fixed list to maintain. */
|
* having a fixed list to maintain. */
|
||||||
void latencyMonitorInit(void) {
|
void latencyMonitorInit(void) {
|
||||||
g_pserver->latency_events = dictCreate(&latencyTimeSeriesDictType,NULL);
@@ -154,7 +154,7 @@ int latencyResetEvent(char *event_to_reset) {

/* Analyze the samples available for a given event and return a structure
* populate with different metrics, average, MAD, min, max, and so forth.
-* Check latency.h definition of struct latenctStat for more info.
+* Check latency.h definition of struct latencyStats for more info.
* If the specified event has no elements the structure is populate with
* zero values. */
void analyzeLatencyForEvent(char *event, struct latencyStats *ls) {
@@ -343,7 +343,7 @@ sds createLatencyReport(void) {
}

if (!strcasecmp(event,"aof-fstat") ||
-!strcasecmp(event,"rdb-unlik-temp-file")) {
+!strcasecmp(event,"rdb-unlink-temp-file")) {
advise_disk_contention = 1;
advise_local_disk = 1;
advices += 2;
@@ -396,7 +396,7 @@ sds createLatencyReport(void) {
/* Better VM. */
report = sdscat(report,"\nI have a few advices for you:\n\n");
if (advise_better_vm) {
-report = sdscat(report,"- If you are using a virtual machine, consider upgrading it with a faster one using an hypervisior that provides less latency during fork() calls. Xen is known to have poor fork() performance. Even in the context of the same VM provider, certain kinds of instances can execute fork faster than others.\n");
+report = sdscat(report,"- If you are using a virtual machine, consider upgrading it with a faster one using a hypervisior that provides less latency during fork() calls. Xen is known to have poor fork() performance. Even in the context of the same VM provider, certain kinds of instances can execute fork faster than others.\n");
}

/* Slow log. */
@@ -416,7 +416,7 @@ sds createLatencyReport(void) {
if (advise_scheduler) {
report = sdscat(report,"- The system is slow to execute Redis code paths not containing system calls. This usually means the system does not provide Redis CPU time to run for long periods. You should try to:\n"
" 1) Lower the system load.\n"
-" 2) Use a computer / VM just for Redis if you are running other softawre in the same system.\n"
+" 2) Use a computer / VM just for Redis if you are running other software in the same system.\n"
" 3) Check if you have a \"noisy neighbour\" problem.\n"
" 4) Check with 'keydb-cli --intrinsic-latency 100' what is the intrinsic latency in your system.\n"
" 5) Check if the problem is allocator-related by recompiling Redis with MALLOC=libc, if you are using Jemalloc. However this may create fragmentation problems.\n");
@@ -432,7 +432,7 @@ sds createLatencyReport(void) {
}

if (advise_data_writeback) {
-report = sdscat(report,"- Mounting ext3/4 filesystems with data=writeback can provide a performance boost compared to data=ordered, however this mode of operation provides less guarantees, and sometimes it can happen that after a hard crash the AOF file will have an half-written command at the end and will require to be repaired before Redis restarts.\n");
+report = sdscat(report,"- Mounting ext3/4 filesystems with data=writeback can provide a performance boost compared to data=ordered, however this mode of operation provides less guarantees, and sometimes it can happen that after a hard crash the AOF file will have a half-written command at the end and will require to be repaired before Redis restarts.\n");
}

if (advise_disk_contention) {
@@ -15,7 +15,7 @@ size_t lazyfreeGetPendingObjectsCount(void) {

/* Return the amount of work needed in order to free an object.
* The return value is not always the actual number of allocations the
-* object is compoesd of, but a number proportional to it.
+* object is composed of, but a number proportional to it.
*
* For strings the function always returns 1.
*
@@ -79,7 +79,7 @@ int dbAsyncDelete(redisDb *db, robj *key) {
/* If the value is composed of a few allocations, to free in a lazy way
* is actually just slower... So under a certain limit we just free
* the object synchronously. */
-dictEntry *de = dictUnlink(db->pdict,ptrFromObj(key));
+dictEntry *de = dictUnlink(db->dict,ptrFromObj(key));
if (de) {
robj *val = (robj*)dictGetVal(de);
if (val->FExpires())
@@ -102,14 +102,14 @@ int dbAsyncDelete(redisDb *db, robj *key) {
if (free_effort > LAZYFREE_THRESHOLD && val->getrefcount(std::memory_order_relaxed) == 1) {
atomicIncr(lazyfree_objects,1);
bioCreateBackgroundJob(BIO_LAZY_FREE,val,NULL,NULL);
-dictSetVal(db->pdict,de,NULL);
+dictSetVal(db->dict,de,NULL);
}
}

/* Release the key-val pair, or just the key if we set the val
* field to NULL in order to lazy free it later. */
if (de) {
-dictFreeUnlinkedEntry(db->pdict,de);
+dictFreeUnlinkedEntry(db->dict,de);
if (g_pserver->cluster_enabled) slotToKeyDel(szFromObj(key));
return 1;
} else {
@@ -132,25 +132,19 @@ void freeObjAsync(robj *o) {
* create a new empty set of hash tables and scheduling the old ones for
* lazy freeing. */
void emptyDbAsync(redisDb *db) {
-dict *oldht1 = db->pdict;
+dict *oldht1 = db->dict;
auto *set = db->setexpire;
db->setexpire = new (MALLOC_LOCAL) expireset();
db->expireitr = db->setexpire->end();
-db->pdict = dictCreate(&dbDictType,NULL);
+db->dict = dictCreate(&dbDictType,NULL);
atomicIncr(lazyfree_objects,dictSize(oldht1));
bioCreateBackgroundJob(BIO_LAZY_FREE,NULL,oldht1,set);
}

-/* Empty the slots-keys map of Redis CLuster by creating a new empty one
-* and scheduiling the old for lazy freeing. */
-void slotToKeyFlushAsync(void) {
-rax *old = g_pserver->cluster->slots_to_keys;
-
-g_pserver->cluster->slots_to_keys = raxNew();
-memset(g_pserver->cluster->slots_keys_count,0,
-sizeof(g_pserver->cluster->slots_keys_count));
-atomicIncr(lazyfree_objects,old->numele);
-bioCreateBackgroundJob(BIO_LAZY_FREE,NULL,NULL,old);
+/* Release the radix tree mapping Redis Cluster keys to slots asynchronously. */
+void freeSlotsToKeysMapAsync(rax *rt) {
+atomicIncr(lazyfree_objects,rt->numele);
+bioCreateBackgroundJob(BIO_LAZY_FREE,NULL,NULL,rt);
}

/* Release objects from the lazyfree thread. It's just decrRefCount()
@@ -161,10 +155,8 @@ void lazyfreeFreeObjectFromBioThread(robj *o) {
}

/* Release a database from the lazyfree thread. The 'db' pointer is the
-* database which was substitutied with a fresh one in the main thread
-* when the database was logically deleted. 'sl' is a skiplist used by
-* Redis Cluster in order to take the hash slots -> keys mapping. This
-* may be NULL if Redis Cluster is disabled. */
+* database which was substituted with a fresh one in the main thread
+* when the database was logically deleted. */
void lazyfreeFreeDatabaseFromBioThread(dict *ht1, expireset *set) {
size_t numkeys = dictSize(ht1);
dictRelease(ht1);
@@ -172,7 +164,7 @@ void lazyfreeFreeDatabaseFromBioThread(dict *ht1, expireset *set) {
atomicDecr(lazyfree_objects,numkeys);
}

-/* Release the skiplist mapping Redis Cluster keys to slots in the
+/* Release the radix tree mapping Redis Cluster keys to slots in the
* lazyfree thread. */
void lazyfreeFreeSlotsMapFromBioThread(rax *rt) {
size_t len = rt->numele;
@@ -405,7 +405,7 @@ unsigned char *lpNext(unsigned char *lp, unsigned char *p) {
}

/* If 'p' points to an element of the listpack, calling lpPrev() will return
-* the pointer to the preivous element (the one on the left), or NULL if 'p'
+* the pointer to the previous element (the one on the left), or NULL if 'p'
* already pointed to the first element of the listpack. */
unsigned char *lpPrev(unsigned char *lp, unsigned char *p) {
if (p-lp == LP_HDR_SIZE) return NULL;
@@ -768,10 +768,10 @@ unsigned char *lpSeek(unsigned char *lp, long index) {
if (numele != LP_HDR_NUMELE_UNKNOWN) {
if (index < 0) index = (long)numele+index;
if (index < 0) return NULL; /* Index still < 0 means out of range. */
-if (index >= numele) return NULL; /* Out of range the other side. */
+if (index >= (long)numele) return NULL; /* Out of range the other side. */
/* We want to scan right-to-left if the element we are looking for
* is past the half of the listpack. */
-if (index > numele/2) {
+if (index > (long)numele/2) {
forward = 0;
/* Right to left scanning always expects a negative index. Convert
* our index to negative form. */
@@ -85,7 +85,7 @@ void lolwutCommand(client *c) {
}

/* ========================== LOLWUT Canvase ===============================
-* Many LOWUT versions will likely print some computer art to the screen.
+* Many LOLWUT versions will likely print some computer art to the screen.
* This is the case with LOLWUT 5 and LOLWUT 6, so here there is a generic
* canvas implementation that can be reused. */

@@ -106,7 +106,7 @@ void lwFreeCanvas(lwCanvas *canvas) {
}

/* Set a pixel to the specified color. Color is 0 or 1, where zero means no
-* dot will be displyed, and 1 means dot will be displayed.
+* dot will be displayed, and 1 means dot will be displayed.
* Coordinates are arranged so that left-top corner is 0,0. You can write
* out of the size of the canvas without issues. */
void lwDrawPixel(lwCanvas *canvas, int x, int y, int color) {
@@ -156,7 +156,7 @@ void lolwut5Command(client *c) {
return;

/* Limits. We want LOLWUT to be always reasonably fast and cheap to execute
-* so we have maximum number of columns, rows, and output resulution. */
+* so we have maximum number of columns, rows, and output resolution. */
if (cols < 1) cols = 1;
if (cols > 1000) cols = 1000;
if (squares_per_row < 1) squares_per_row = 1;
@@ -127,7 +127,7 @@

/*
* Whether to store pointers or offsets inside the hash table. On
-* 64 bit architetcures, pointers take up twice as much space,
+* 64 bit architectures, pointers take up twice as much space,
* and might also be slower. Default is to autodetect.
*/
/*#define LZF_USER_OFFSETS autodetect */
@@ -347,10 +347,15 @@ void memtest_alloc_and_test(size_t megabytes, int passes) {
}

void memtest(size_t megabytes, int passes) {
+#if !defined(__HAIKU__)
if (ioctl(1, TIOCGWINSZ, &ws) == -1) {
ws.ws_col = 80;
ws.ws_row = 20;
}
+#else
+ws.ws_col = 80;
+ws.ws_row = 20;
+#endif
memtest_alloc_and_test(megabytes,passes);
printf("\nYour memory passed this test.\n");
printf("Please if you are still in doubt use the following two tools:\n");
421 src/module.cpp
@@ -52,7 +52,7 @@ typedef struct RedisModuleInfoCtx {
sds info; /* info string we collected so far */
int sections; /* number of sections we collected so far */
int in_section; /* indication if we're in an active section or not */
-int in_dict_field; /* indication that we're curreintly appending to a dict */
+int in_dict_field; /* indication that we're currently appending to a dict */
} RedisModuleInfoCtx;

typedef void (*RedisModuleInfoFunc)(RedisModuleInfoCtx *ctx, int for_crash_report);
@@ -155,8 +155,7 @@ struct RedisModuleCtx {
on keys. */

/* Used if there is the REDISMODULE_CTX_KEYS_POS_REQUEST flag set. */
-int *keys_pos;
-int keys_count;
+getKeysResult *keys_result;

struct RedisModulePoolAllocBlock *pa_head;
redisOpArray saved_oparray; /* When propagating commands in a callback
@@ -166,7 +165,7 @@ struct RedisModuleCtx {
};
typedef struct RedisModuleCtx RedisModuleCtx;

-#define REDISMODULE_CTX_INIT {(void*)(unsigned long)&RM_GetApi, NULL, NULL, NULL, NULL, 0, 0, 0, NULL, 0, NULL, NULL, NULL, 0, NULL, {0}}
+#define REDISMODULE_CTX_INIT {(void*)(unsigned long)&RM_GetApi, NULL, NULL, NULL, NULL, 0, 0, 0, NULL, 0, NULL, NULL, NULL, NULL, {0}}
#define REDISMODULE_CTX_MULTI_EMITTED (1<<0)
#define REDISMODULE_CTX_AUTO_MEMORY (1<<1)
#define REDISMODULE_CTX_KEYS_POS_REQUEST (1<<2)
@@ -685,18 +684,24 @@ void RedisModuleCommandDispatcher(client *c) {
* In order to accomplish its work, the module command is called, flagging
* the context in a way that the command can recognize this is a special
* "get keys" call by calling RedisModule_IsKeysPositionRequest(ctx). */
-int *moduleGetCommandKeysViaAPI(struct redisCommand *cmd, robj **argv, int argc, int *numkeys) {
+int moduleGetCommandKeysViaAPI(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {
RedisModuleCommandProxy *cp = (RedisModuleCommandProxy*)(unsigned long)cmd->getkeys_proc;
RedisModuleCtx ctx = REDISMODULE_CTX_INIT;

ctx.module = cp->module;
ctx.client = NULL;
ctx.flags |= REDISMODULE_CTX_KEYS_POS_REQUEST;

+/* Initialize getKeysResult */
+getKeysPrepareResult(result, MAX_KEYS_BUFFER);
+ctx.keys_result = result;
+
cp->func(&ctx,(void**)argv,argc);
-int *res = ctx.keys_pos;
-if (numkeys) *numkeys = ctx.keys_count;
+/* We currently always use the array allocated by RM_KeyAtPos() and don't try
+* to optimize for the pre-allocated buffer.
+*/
moduleFreeContext(&ctx);
-return res;
+return result->numkeys;
}

/* Return non-zero if a module command, that was declared with the
@@ -721,10 +726,18 @@ int RM_IsKeysPositionRequest(RedisModuleCtx *ctx) {
* keys are at fixed positions. This interface is only used for commands
* with a more complex structure. */
void RM_KeyAtPos(RedisModuleCtx *ctx, int pos) {
-if (!(ctx->flags & REDISMODULE_CTX_KEYS_POS_REQUEST)) return;
+if (!(ctx->flags & REDISMODULE_CTX_KEYS_POS_REQUEST) || !ctx->keys_result) return;
if (pos <= 0) return;
-ctx->keys_pos = (int*)zrealloc(ctx->keys_pos,sizeof(int)*(ctx->keys_count+1), MALLOC_LOCAL);
-ctx->keys_pos[ctx->keys_count++] = pos;
+getKeysResult *res = ctx->keys_result;
+
+/* Check overflow */
+if (res->numkeys == res->size) {
+int newsize = res->size + (res->size > 8192 ? 8192 : res->size);
+getKeysPrepareResult(res, newsize);
+}
+
+res->keys[res->numkeys++] = pos;
}

/* Helper for RM_CreateCommand(). Turns a string representing command
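For context on the getKeysResult change above, a module command declared with the "getkeys-api" flag answers such "get keys" requests itself. Below is a minimal sketch, not part of the patch; the MYMOD.COPY command name and its argument layout are made up for illustration, and redismodule.h is assumed.

    int MyCopy_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc != 4) return RedisModule_WrongArity(ctx);
        if (RedisModule_IsKeysPositionRequest(ctx)) {
            /* Report key positions only; RM_KeyAtPos() appends them into the
             * getKeysResult buffer handled in the hunk above. */
            RedisModule_KeyAtPos(ctx, 1);
            RedisModule_KeyAtPos(ctx, 2);
            return REDISMODULE_OK;
        }
        /* ... normal command logic operating on argv[1] and argv[2] ... */
        return RedisModule_ReplyWithSimpleString(ctx, "OK");
    }

    /* Registered with: RedisModule_CreateCommand(ctx, "mymod.copy",
     * MyCopy_RedisCommand, "write getkeys-api", 0, 0, 0); */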
@@ -921,10 +934,21 @@ int RM_SignalModifiedKey(RedisModuleCtx *ctx, RedisModuleString *keyname) {
* Automatic memory management for modules
* -------------------------------------------------------------------------- */

-/* Enable automatic memory management. See API.md for more information.
+/* Enable automatic memory management.
*
* The function must be called as the first function of a command implementation
-* that wants to use automatic memory. */
+* that wants to use automatic memory.
+*
+* When enabled, automatic memory management tracks and automatically frees
+* keys, call replies and Redis string objects once the command returns. In most
+* cases this eliminates the need of calling the following functions:
+*
+* 1) RedisModule_CloseKey()
+* 2) RedisModule_FreeCallReply()
+* 3) RedisModule_FreeString()
+*
+* These functions can still be used with automatic memory management enabled,
+* to optimize loops that make numerous allocations for example. */
void RM_AutoMemory(RedisModuleCtx *ctx) {
ctx->flags |= REDISMODULE_CTX_AUTO_MEMORY;
}
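As a rough sketch of the automatic memory management described above (the command name and key usage are illustrative, redismodule.h assumed), note that neither the key nor the reply needs an explicit free:

    int MyStrlen_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc != 2) return RedisModule_WrongArity(ctx);
        RedisModule_AutoMemory(ctx); /* must be the first call, as documented above */

        /* The open key is tracked by the context and released automatically
         * when the command returns, so RedisModule_CloseKey() can be skipped. */
        RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);
        size_t len = RedisModule_ValueLength(key);
        return RedisModule_ReplyWithLongLong(ctx, (long long)len);
    }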
@@ -1060,7 +1084,7 @@ RedisModuleString *RM_CreateStringFromLongLong(RedisModuleCtx *ctx, long long ll
}

/* Like RedisModule_CreatString(), but creates a string starting from a double
-* integer instead of taking a buffer and its length.
+* instead of taking a buffer and its length.
*
* The returned string must be released with RedisModule_FreeString() or by
* enabling automatic memory management. */
@ -1981,6 +2005,12 @@ int RM_GetSelectedDb(RedisModuleCtx *ctx) {
|
|||||||
*
|
*
|
||||||
* * REDISMODULE_CTX_FLAGS_ACTIVE_CHILD: There is currently some background
|
* * REDISMODULE_CTX_FLAGS_ACTIVE_CHILD: There is currently some background
|
||||||
* process active (RDB, AUX or module).
|
* process active (RDB, AUX or module).
|
||||||
|
*
|
||||||
|
* * REDISMODULE_CTX_FLAGS_MULTI_DIRTY: The next EXEC will fail due to dirty
|
||||||
|
* CAS (touched keys).
|
||||||
|
*
|
||||||
|
* * REDISMODULE_CTX_FLAGS_IS_CHILD: Redis is currently running inside
|
||||||
|
* background child process.
|
||||||
*/
|
*/
|
||||||
int RM_GetContextFlags(RedisModuleCtx *ctx) {
|
int RM_GetContextFlags(RedisModuleCtx *ctx) {
|
||||||
|
|
||||||
@ -1992,7 +2022,7 @@ int RM_GetContextFlags(RedisModuleCtx *ctx) {
|
|||||||
flags |= REDISMODULE_CTX_FLAGS_LUA;
|
flags |= REDISMODULE_CTX_FLAGS_LUA;
|
||||||
if (ctx->client->flags & CLIENT_MULTI)
|
if (ctx->client->flags & CLIENT_MULTI)
|
||||||
flags |= REDISMODULE_CTX_FLAGS_MULTI;
|
flags |= REDISMODULE_CTX_FLAGS_MULTI;
|
||||||
/* Module command recieved from MASTER, is replicated. */
|
/* Module command received from MASTER, is replicated. */
|
||||||
if (ctx->client->flags & CLIENT_MASTER)
|
if (ctx->client->flags & CLIENT_MASTER)
|
||||||
flags |= REDISMODULE_CTX_FLAGS_REPLICATED;
|
flags |= REDISMODULE_CTX_FLAGS_REPLICATED;
|
||||||
}
|
}
|
||||||
@ -2056,6 +2086,7 @@ int RM_GetContextFlags(RedisModuleCtx *ctx) {
|
|||||||
|
|
||||||
/* Presence of children processes. */
|
/* Presence of children processes. */
|
||||||
if (hasActiveChildProcess()) flags |= REDISMODULE_CTX_FLAGS_ACTIVE_CHILD;
|
if (hasActiveChildProcess()) flags |= REDISMODULE_CTX_FLAGS_ACTIVE_CHILD;
|
||||||
|
if (g_pserver->in_fork_child) flags |= REDISMODULE_CTX_FLAGS_IS_CHILD;
|
||||||
|
|
||||||
return flags;
|
return flags;
|
||||||
}
|
}
|
||||||
@ -2275,7 +2306,7 @@ void RM_ResetDataset(int restart_aof, int async) {
|
|||||||
|
|
||||||
/* Returns the number of keys in the current db. */
|
/* Returns the number of keys in the current db. */
|
||||||
unsigned long long RM_DbSize(RedisModuleCtx *ctx) {
|
unsigned long long RM_DbSize(RedisModuleCtx *ctx) {
|
||||||
return dictSize(ctx->client->db->pdict);
|
return dictSize(ctx->client->db->dict);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Returns a name of a random key, or NULL if current db is empty. */
|
/* Returns a name of a random key, or NULL if current db is empty. */
|
||||||
@ -2994,9 +3025,9 @@ int RM_HashSet(RedisModuleKey *key, int flags, ...) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* Get fields from an hash value. This function is called using a variable
|
/* Get fields from an hash value. This function is called using a variable
|
||||||
* number of arguments, alternating a field name (as a StringRedisModule
|
* number of arguments, alternating a field name (as a RedisModuleString
|
||||||
* pointer) with a pointer to a StringRedisModule pointer, that is set to the
|
* pointer) with a pointer to a RedisModuleString pointer, that is set to the
|
||||||
* value of the field if the field exist, or NULL if the field did not exist.
|
* value of the field if the field exists, or NULL if the field does not exist.
|
||||||
* At the end of the field/value-ptr pairs, NULL must be specified as last
|
* At the end of the field/value-ptr pairs, NULL must be specified as last
|
||||||
* argument to signal the end of the arguments in the variadic function.
|
* argument to signal the end of the arguments in the variadic function.
|
||||||
*
|
*
|
||||||
@ -3009,22 +3040,22 @@ int RM_HashSet(RedisModuleKey *key, int flags, ...) {
|
|||||||
* As with RedisModule_HashSet() the behavior of the command can be specified
|
* As with RedisModule_HashSet() the behavior of the command can be specified
|
||||||
* passing flags different than REDISMODULE_HASH_NONE:
|
* passing flags different than REDISMODULE_HASH_NONE:
|
||||||
*
|
*
|
||||||
* REDISMODULE_HASH_CFIELD: field names as null terminated C strings.
|
* REDISMODULE_HASH_CFIELDS: field names as null terminated C strings.
|
||||||
*
|
*
|
||||||
* REDISMODULE_HASH_EXISTS: instead of setting the value of the field
|
* REDISMODULE_HASH_EXISTS: instead of setting the value of the field
|
||||||
* expecting a RedisModuleString pointer to pointer, the function just
|
* expecting a RedisModuleString pointer to pointer, the function just
|
||||||
* reports if the field exists or not and expects an integer pointer
|
* reports if the field exists or not and expects an integer pointer
|
||||||
* as the second element of each pair.
|
* as the second element of each pair.
|
||||||
*
|
*
|
||||||
* Example of REDISMODULE_HASH_CFIELD:
|
* Example of REDISMODULE_HASH_CFIELDS:
|
||||||
*
|
*
|
||||||
* RedisModuleString *username, *hashedpass;
|
* RedisModuleString *username, *hashedpass;
|
||||||
* RedisModule_HashGet(mykey,"username",&username,"hp",&hashedpass, NULL);
|
* RedisModule_HashGet(mykey,REDISMODULE_HASH_CFIELDS,"username",&username,"hp",&hashedpass, NULL);
|
||||||
*
|
*
|
||||||
* Example of REDISMODULE_HASH_EXISTS:
|
* Example of REDISMODULE_HASH_EXISTS:
|
||||||
*
|
*
|
||||||
* int exists;
|
* int exists;
|
||||||
* RedisModule_HashGet(mykey,argv[1],&exists,NULL);
|
* RedisModule_HashGet(mykey,REDISMODULE_HASH_EXISTS,argv[1],&exists,NULL);
|
||||||
*
|
*
|
||||||
* The function returns REDISMODULE_OK on success and REDISMODULE_ERR if
|
* The function returns REDISMODULE_OK on success and REDISMODULE_ERR if
|
||||||
* the key is not an hash value.
|
* the key is not an hash value.
|
||||||
@ -3115,7 +3146,7 @@ void moduleParseCallReply_SimpleString(RedisModuleCallReply *reply);
|
|||||||
void moduleParseCallReply_Array(RedisModuleCallReply *reply);
|
void moduleParseCallReply_Array(RedisModuleCallReply *reply);
|
||||||
|
|
||||||
/* Do nothing if REDISMODULE_REPLYFLAG_TOPARSE is false, otherwise
|
/* Do nothing if REDISMODULE_REPLYFLAG_TOPARSE is false, otherwise
|
||||||
* use the protcol of the reply in reply->proto in order to fill the
|
* use the protocol of the reply in reply->proto in order to fill the
|
||||||
* reply with parsed data according to the reply type. */
|
* reply with parsed data according to the reply type. */
|
||||||
void moduleParseCallReply(RedisModuleCallReply *reply) {
|
void moduleParseCallReply(RedisModuleCallReply *reply) {
|
||||||
if (!(reply->flags & REDISMODULE_REPLYFLAG_TOPARSE)) return;
|
if (!(reply->flags & REDISMODULE_REPLYFLAG_TOPARSE)) return;
|
||||||
@ -3676,7 +3707,7 @@ void moduleTypeNameByID(char *name, uint64_t moduleid) {
|
|||||||
|
|
||||||
/* Register a new data type exported by the module. The parameters are the
|
/* Register a new data type exported by the module. The parameters are the
|
||||||
* following. Please for in depth documentation check the modules API
|
* following. Please for in depth documentation check the modules API
|
||||||
* documentation, especially the TYPES.md file.
|
* documentation, especially https://redis.io/topics/modules-native-types.
|
||||||
*
|
*
|
||||||
* * **name**: A 9 characters data type name that MUST be unique in the Redis
|
* * **name**: A 9 characters data type name that MUST be unique in the Redis
|
||||||
* Modules ecosystem. Be creative... and there will be no collisions. Use
|
* Modules ecosystem. Be creative... and there will be no collisions. Use
|
||||||
@ -3723,7 +3754,7 @@ void moduleTypeNameByID(char *name, uint64_t moduleid) {
|
|||||||
* * **aux_load**: A callback function pointer that loads out of keyspace data from RDB files.
|
* * **aux_load**: A callback function pointer that loads out of keyspace data from RDB files.
|
||||||
* Similar to aux_save, returns REDISMODULE_OK on success, and ERR otherwise.
|
* Similar to aux_save, returns REDISMODULE_OK on success, and ERR otherwise.
|
||||||
*
|
*
|
||||||
* The **digest* and **mem_usage** methods should currently be omitted since
|
* The **digest** and **mem_usage** methods should currently be omitted since
|
||||||
* they are not yet implemented inside the Redis modules core.
|
* they are not yet implemented inside the Redis modules core.
|
||||||
*
|
*
|
||||||
* Note: the module name "AAAAAAAAA" is reserved and produces an error, it
|
* Note: the module name "AAAAAAAAA" is reserved and produces an error, it
|
||||||
@ -3733,7 +3764,7 @@ void moduleTypeNameByID(char *name, uint64_t moduleid) {
|
|||||||
* and if the module name or encver is invalid, NULL is returned.
|
* and if the module name or encver is invalid, NULL is returned.
|
||||||
* Otherwise the new type is registered into Redis, and a reference of
|
* Otherwise the new type is registered into Redis, and a reference of
|
||||||
* type RedisModuleType is returned: the caller of the function should store
|
* type RedisModuleType is returned: the caller of the function should store
|
||||||
* this reference into a gobal variable to make future use of it in the
|
* this reference into a global variable to make future use of it in the
|
||||||
* modules type API, since a single module may register multiple types.
|
* modules type API, since a single module may register multiple types.
|
||||||
* Example code fragment:
|
* Example code fragment:
|
||||||
*
|
*
|
||||||
@ -3815,7 +3846,7 @@ moduleType *RM_ModuleTypeGetType(RedisModuleKey *key) {
|
|||||||
|
|
||||||
/* Assuming RedisModule_KeyType() returned REDISMODULE_KEYTYPE_MODULE on
|
/* Assuming RedisModule_KeyType() returned REDISMODULE_KEYTYPE_MODULE on
|
||||||
* the key, returns the module type low-level value stored at key, as
|
* the key, returns the module type low-level value stored at key, as
|
||||||
* it was set by the user via RedisModule_ModuleTypeSet().
|
* it was set by the user via RedisModule_ModuleTypeSetValue().
|
||||||
*
|
*
|
||||||
* If the key is NULL, is not associated with a module type, or is empty,
|
* If the key is NULL, is not associated with a module type, or is empty,
|
||||||
* then NULL is returned instead. */
|
* then NULL is returned instead. */
|
||||||
@ -3872,7 +3903,7 @@ int moduleAllDatatypesHandleErrors() {
|
|||||||
|
|
||||||
/* Returns true if any previous IO API failed.
|
/* Returns true if any previous IO API failed.
|
||||||
* for Load* APIs the REDISMODULE_OPTIONS_HANDLE_IO_ERRORS flag must be set with
|
* for Load* APIs the REDISMODULE_OPTIONS_HANDLE_IO_ERRORS flag must be set with
|
||||||
* RediModule_SetModuleOptions first. */
|
* RedisModule_SetModuleOptions first. */
|
||||||
int RM_IsIOError(RedisModuleIO *io) {
|
int RM_IsIOError(RedisModuleIO *io) {
|
||||||
return io->error;
|
return io->error;
|
||||||
}
|
}
|
||||||
@ -4007,7 +4038,7 @@ RedisModuleString *RM_LoadString(RedisModuleIO *io) {
|
|||||||
*
|
*
|
||||||
* The size of the string is stored at '*lenptr' if not NULL.
|
* The size of the string is stored at '*lenptr' if not NULL.
|
||||||
* The returned string is not automatically NULL terminated, it is loaded
|
* The returned string is not automatically NULL terminated, it is loaded
|
||||||
* exactly as it was stored inisde the RDB file. */
|
* exactly as it was stored inside the RDB file. */
|
||||||
char *RM_LoadStringBuffer(RedisModuleIO *io, size_t *lenptr) {
|
char *RM_LoadStringBuffer(RedisModuleIO *io, size_t *lenptr) {
|
||||||
return (char*)moduleLoadString(io,1,lenptr);
|
return (char*)moduleLoadString(io,1,lenptr);
|
||||||
}
|
}
|
||||||
@ -4601,7 +4632,7 @@ int moduleTryServeClientBlockedOnKey(client *c, robj *key) {
|
|||||||
* reply_callback: called after a successful RedisModule_UnblockClient()
|
* reply_callback: called after a successful RedisModule_UnblockClient()
|
||||||
* call in order to reply to the client and unblock it.
|
* call in order to reply to the client and unblock it.
|
||||||
*
|
*
|
||||||
* reply_timeout: called when the timeout is reached in order to send an
|
* timeout_callback: called when the timeout is reached in order to send an
|
||||||
* error to the client.
|
* error to the client.
|
||||||
*
|
*
|
||||||
* free_privdata: called in order to free the private data that is passed
|
* free_privdata: called in order to free the private data that is passed
|
||||||
@ -4628,13 +4659,13 @@ RedisModuleBlockedClient *RM_BlockClient(RedisModuleCtx *ctx, RedisModuleCmdFunc
|
|||||||
* once certain keys become "ready", that is, contain more data.
|
* once certain keys become "ready", that is, contain more data.
|
||||||
*
|
*
|
||||||
* Basically this is similar to what a typical Redis command usually does,
|
* Basically this is similar to what a typical Redis command usually does,
|
||||||
* like BLPOP or ZPOPMAX: the client blocks if it cannot be served ASAP,
|
* like BLPOP or BZPOPMAX: the client blocks if it cannot be served ASAP,
|
||||||
* and later when the key receives new data (a list push for instance), the
|
* and later when the key receives new data (a list push for instance), the
|
||||||
* client is unblocked and served.
|
* client is unblocked and served.
|
||||||
*
|
*
|
||||||
* However in the case of this module API, when the client is unblocked?
|
* However in the case of this module API, when the client is unblocked?
|
||||||
*
|
*
|
||||||
* 1. If you block ok a key of a type that has blocking operations associated,
|
* 1. If you block on a key of a type that has blocking operations associated,
|
||||||
* like a list, a sorted set, a stream, and so forth, the client may be
|
* like a list, a sorted set, a stream, and so forth, the client may be
|
||||||
* unblocked once the relevant key is targeted by an operation that normally
|
* unblocked once the relevant key is targeted by an operation that normally
|
||||||
* unblocks the native blocking operations for that type. So if we block
|
* unblocks the native blocking operations for that type. So if we block
|
||||||
@ -4687,8 +4718,9 @@ RedisModuleBlockedClient *RM_BlockClientOnKeys(RedisModuleCtx *ctx, RedisModuleC
|
|||||||
|
|
||||||
/* This function is used in order to potentially unblock a client blocked
|
/* This function is used in order to potentially unblock a client blocked
|
||||||
* on keys with RedisModule_BlockClientOnKeys(). When this function is called,
|
* on keys with RedisModule_BlockClientOnKeys(). When this function is called,
|
||||||
* all the clients blocked for this key will get their reply callback called,
|
* all the clients blocked for this key will get their reply_callback called.
|
||||||
* and if the callback returns REDISMODULE_OK the client will be unblocked. */
|
*
|
||||||
|
* Note: The function has no effect if the signaled key doesn't exist. */
|
||||||
void RM_SignalKeyAsReady(RedisModuleCtx *ctx, RedisModuleString *key) {
|
void RM_SignalKeyAsReady(RedisModuleCtx *ctx, RedisModuleString *key) {
|
||||||
signalKeyAsReady(ctx->client->db, key);
|
signalKeyAsReady(ctx->client->db, key);
|
||||||
}
|
}
|
||||||
@ -4732,14 +4764,13 @@ int moduleClientIsBlockedOnKeys(client *c) {
|
|||||||
*
|
*
|
||||||
* Note 1: this function can be called from threads spawned by the module.
|
* Note 1: this function can be called from threads spawned by the module.
|
||||||
*
|
*
|
||||||
* Note 2: when we unblock a client that is blocked for keys using
|
* Note 2: when we unblock a client that is blocked for keys using the API
|
||||||
* the API RedisModule_BlockClientOnKeys(), the privdata argument here is
|
* RedisModule_BlockClientOnKeys(), the privdata argument here is not used.
|
||||||
* not used, and the reply callback is called with the privdata pointer that
|
|
||||||
* was passed when blocking the client.
|
|
||||||
*
|
|
||||||
* Unblocking a client that was blocked for keys using this API will still
|
* Unblocking a client that was blocked for keys using this API will still
|
||||||
* require the client to get some reply, so the function will use the
|
* require the client to get some reply, so the function will use the
|
||||||
* "timeout" handler in order to do so. */
|
* "timeout" handler in order to do so (The privdata provided in
|
||||||
|
* RedisModule_BlockClientOnKeys() is accessible from the timeout
|
||||||
|
* callback via RM_GetBlockedClientPrivateData). */
|
||||||
int RM_UnblockClient(RedisModuleBlockedClient *bc, void *privdata) {
|
int RM_UnblockClient(RedisModuleBlockedClient *bc, void *privdata) {
|
||||||
if (bc->blocked_on_keys) {
|
if (bc->blocked_on_keys) {
|
||||||
/* In theory the user should always pass the timeout handler as an
|
/* In theory the user should always pass the timeout handler as an
|
||||||
@ -4899,6 +4930,7 @@ void moduleBlockedClientTimedOut(client *c) {
|
|||||||
ctx.module = bc->module;
|
ctx.module = bc->module;
|
||||||
ctx.client = bc->client;
|
ctx.client = bc->client;
|
||||||
ctx.blocked_client = bc;
|
ctx.blocked_client = bc;
|
||||||
|
ctx.blocked_privdata = bc->privdata;
|
||||||
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
|
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
|
||||||
moduleFreeContext(&ctx);
|
moduleFreeContext(&ctx);
|
||||||
/* For timeout events, we do not want to call the disconnect callback,
|
/* For timeout events, we do not want to call the disconnect callback,
|
||||||
@@ -4966,8 +4998,9 @@ int RM_BlockedClientDisconnected(RedisModuleCtx *ctx) {
* that a blocked client was used when the context was created, otherwise
* no RedisModule_Reply* call should be made at all.
*
-* TODO: thread safe contexts do not inherit the blocked client
-* selected database. */
+* NOTE: If you're creating a detached thread safe context (bc is NULL),
+* consider using `RM_GetDetachedThreadSafeContext` which will also retain
+* the module ID and thus be more useful for logging. */
RedisModuleCtx *RM_GetThreadSafeContext(RedisModuleBlockedClient *bc) {
RedisModuleCtx *ctx = (RedisModuleCtx*)zmalloc(sizeof(*ctx), MALLOC_LOCAL);
RedisModuleCtx empty = REDISMODULE_CTX_INIT;
@@ -4989,6 +5022,21 @@ RedisModuleCtx *RM_GetThreadSafeContext(RedisModuleBlockedClient *bc) {
return ctx;
}

+/* Return a detached thread safe context that is not associated with any
+* specific blocked client, but is associated with the module's context.
+*
+* This is useful for modules that wish to hold a global context over
+* a long term, for purposes such as logging. */
+RedisModuleCtx *RM_GetDetachedThreadSafeContext(RedisModuleCtx *ctx) {
+RedisModuleCtx *new_ctx = (RedisModuleCtx*)zmalloc(sizeof(*new_ctx));
+RedisModuleCtx empty = REDISMODULE_CTX_INIT;
+memcpy(new_ctx,&empty,sizeof(empty));
+new_ctx->module = ctx->module;
+new_ctx->flags |= REDISMODULE_CTX_THREAD_SAFE;
+new_ctx->client = createClient(NULL, IDX_EVENT_LOOP_MAIN);
+return new_ctx;
+}
+
/* Release a thread safe context. */
void RM_FreeThreadSafeContext(RedisModuleCtx *ctx) {
moduleAcquireGIL(false /*fServerThread*/);
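A minimal sketch of how the new detached context is typically held for logging; the module name "example" and the helper function are assumptions for illustration, not part of the patch.

    #include "redismodule.h"

    static RedisModuleCtx *detached_ctx = NULL;

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "example", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        detached_ctx = RedisModule_GetDetachedThreadSafeContext(ctx);
        return REDISMODULE_OK;
    }

    /* Safe to call from a thread spawned by the module. */
    void logFromBackgroundThread(const char *msg) {
        RedisModule_Log(detached_ctx, "notice", "background: %s", msg);
    }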
@@ -5120,7 +5168,7 @@ int moduleGILAcquiredByModule(void) {

/* Subscribe to keyspace notifications. This is a low-level version of the
* keyspace-notifications API. A module can register callbacks to be notified
-* when keyspce events occur.
+* when keyspace events occur.
*
* Notification events are filtered by their type (string events, set events,
* etc), and the subscriber callback receives only events that match a specific
@@ -5565,7 +5613,13 @@ int moduleTimerHandler(struct aeEventLoop *eventLoop, long long id, void *client
raxRemove(Timers,(unsigned char*)ri.key,ri.key_len,NULL);
zfree(timer);
} else {
-next_period = (expiretime-now)/1000; /* Scale to milliseconds. */
+/* We call ustime() again instead of using the cached 'now' so that
+* 'next_period' isn't affected by the time it took to execute
+* previous calls to 'callback.
+* We need to cast 'expiretime' so that the compiler will not treat
+* the difference as unsigned (Causing next_period to be huge) in
+* case expiretime < ustime() */
+next_period = ((long long)expiretime-ustime())/1000; /* Scale to milliseconds. */
break;
}
}
@@ -5578,7 +5632,16 @@ int moduleTimerHandler(struct aeEventLoop *eventLoop, long long id, void *client

/* Create a new timer that will fire after `period` milliseconds, and will call
* the specified function using `data` as argument. The returned timer ID can be
-* used to get information from the timer or to stop it before it fires. */
+* used to get information from the timer or to stop it before it fires.
+* Note that for the common use case of a repeating timer (Re-registration
+* of the timer inside the RedisModuleTimerProc callback) it matters when
+* this API is called:
+* If it is called at the beginning of 'callback' it means
+* the event will triggered every 'period'.
+* If it is called at the end of 'callback' it means
+* there will 'period' milliseconds gaps between events.
+* (If the time it takes to execute 'callback' is negligible the two
+* statements above mean the same) */
RedisModuleTimerID RM_CreateTimer(RedisModuleCtx *ctx, mstime_t period, RedisModuleTimerProc callback, void *data) {
RedisModuleTimer *timer = (RedisModuleTimer*)zmalloc(sizeof(*timer), MALLOC_LOCAL);
timer->module = ctx->module;
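A sketch of the repeating-timer pattern the new comment describes, re-arming at the top of the callback so the tick interval stays close to 'period'; the names and the 1000 ms period are illustrative assumptions.

    static void heartbeatTimer(RedisModuleCtx *ctx, void *data) {
        /* Re-register first, so the next tick is not delayed by the work below. */
        RedisModule_CreateTimer(ctx, 1000, heartbeatTimer, data);
        RedisModule_Log(ctx, "notice", "heartbeat tick");
    }

    /* Armed once, e.g. from a command handler:
     * RedisModuleTimerID id = RedisModule_CreateTimer(ctx, 1000, heartbeatTimer, NULL); */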
@@ -5600,7 +5663,8 @@ RedisModuleTimerID RM_CreateTimer(RedisModuleCtx *ctx, mstime_t period, RedisMod

/* We need to install the main event loop timer if it's not already
* installed, or we may need to refresh its period if we just installed
-* a timer that will expire sooner than any other else. */
+* a timer that will expire sooner than any other else (i.e. the timer
+* we just installed is the first timer in the Timers rax). */
if (aeTimer != -1) {
raxIterator ri;
raxStart(&ri,Timers);
@@ -5696,8 +5760,14 @@ void revokeClientAuthentication(client *c) {

c->puser = DefaultUser;
c->authenticated = 0;
+/* We will write replies to this client later, so we can't close it
+* directly even if async. */
+if (c == serverTL->current_client) {
+c->flags |= CLIENT_CLOSE_AFTER_COMMAND;
+} else {
freeClientAsync(c);
}
+}

/* Cleanup all clients that have been authenticated with this module. This
* is called from onUnload() to give the module a chance to cleanup any
@@ -5791,6 +5861,11 @@ static int authenticateClientWithUser(RedisModuleCtx *ctx, user *user, RedisModu
return REDISMODULE_ERR;
}

+/* Avoid settings which are meaningless and will be lost */
+if (!ctx->client || (ctx->client->flags & CLIENT_MODULE)) {
+return REDISMODULE_ERR;
+}
+
moduleNotifyUserChanged(ctx->client);

ctx->client->puser = user;
@@ -5836,7 +5911,7 @@ int RM_AuthenticateClientWithACLUser(RedisModuleCtx *ctx, const char *name, size
/* Deauthenticate and close the client. The client resources will not be
* be immediately freed, but will be cleaned up in a background job. This is
* the recommended way to deauthenicate a client since most clients can't
-* handle users becomming deauthenticated. Returns REDISMODULE_ERR when the
+* handle users becoming deauthenticated. Returns REDISMODULE_ERR when the
* client doesn't exist and REDISMODULE_OK when the operation was successful.
*
* The client ID is returned from the RM_AuthenticateClientWithUser and
@@ -5855,6 +5930,31 @@ int RM_DeauthenticateAndCloseClient(RedisModuleCtx *ctx, uint64_t client_id) {
return REDISMODULE_OK;
}

+/* Return the X.509 client-side certificate used by the client to authenticate
+* this connection.
+*
+* The return value is an allocated RedisModuleString that is a X.509 certificate
+* encoded in PEM (Base64) format. It should be freed (or auto-freed) by the caller.
+*
+* A NULL value is returned in the following conditions:
+*
+* - Connection ID does not exist
+* - Connection is not a TLS connection
+* - Connection is a TLS connection but no client ceritifcate was used
+*/
+RedisModuleString *RM_GetClientCertificate(RedisModuleCtx *ctx, uint64_t client_id) {
+client *c = lookupClientByID(client_id);
+if (c == NULL) return NULL;
+
+sds cert = connTLSGetPeerCert(c->conn);
+if (!cert) return NULL;
+
+RedisModuleString *s = createObject(OBJ_STRING, cert);
+if (ctx != NULL) autoMemoryAdd(ctx, REDISMODULE_AM_STRING, s);
+
+return s;
+}
+
/* --------------------------------------------------------------------------
* Modules Dictionary API
*
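A sketch of how the new RM_GetClientCertificate API could be used from a command to return the caller's own certificate; the command name is hypothetical, and automatic memory frees the returned string.

    int MyCert_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        if (argc != 1) return RedisModule_WrongArity(ctx);
        RedisModule_AutoMemory(ctx);

        uint64_t id = RedisModule_GetClientId(ctx);
        RedisModuleString *pem = RedisModule_GetClientCertificate(ctx, id);
        if (pem == NULL) return RedisModule_ReplyWithNull(ctx); /* no TLS / no client cert */
        return RedisModule_ReplyWithString(ctx, pem);
    }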
@ -5956,14 +6056,14 @@ int RM_DictDel(RedisModuleDict *d, RedisModuleString *key, void *oldval) {
|
|||||||
return RM_DictDelC(d,ptrFromObj(key),sdslen(szFromObj(key)),oldval);
|
return RM_DictDelC(d,ptrFromObj(key),sdslen(szFromObj(key)),oldval);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Return an interator, setup in order to start iterating from the specified
|
/* Return an iterator, setup in order to start iterating from the specified
|
||||||
* key by applying the operator 'op', which is just a string specifying the
|
* key by applying the operator 'op', which is just a string specifying the
|
||||||
* comparison operator to use in order to seek the first element. The
|
* comparison operator to use in order to seek the first element. The
|
||||||
* operators avalable are:
|
* operators available are:
|
||||||
*
|
*
|
||||||
* "^" -- Seek the first (lexicographically smaller) key.
|
* "^" -- Seek the first (lexicographically smaller) key.
|
||||||
* "$" -- Seek the last (lexicographically biffer) key.
|
* "$" -- Seek the last (lexicographically biffer) key.
|
||||||
* ">" -- Seek the first element greter than the specified key.
|
* ">" -- Seek the first element greater than the specified key.
|
||||||
* ">=" -- Seek the first element greater or equal than the specified key.
|
* ">=" -- Seek the first element greater or equal than the specified key.
|
||||||
* "<" -- Seek the first element smaller than the specified key.
|
* "<" -- Seek the first element smaller than the specified key.
|
||||||
* "<=" -- Seek the first element smaller or equal than the specified key.
|
* "<=" -- Seek the first element smaller or equal than the specified key.
|
||||||
@ -6090,7 +6190,7 @@ RedisModuleString *RM_DictPrev(RedisModuleCtx *ctx, RedisModuleDictIter *di, voi
|
|||||||
* in the loop, as we iterate elements, we can also check if we are still
|
* in the loop, as we iterate elements, we can also check if we are still
|
||||||
* on range.
|
* on range.
|
||||||
*
|
*
|
||||||
* The function returne REDISMODULE_ERR if the iterator reached the
|
* The function return REDISMODULE_ERR if the iterator reached the
|
||||||
* end of elements condition as well. */
|
* end of elements condition as well. */
|
||||||
int RM_DictCompareC(RedisModuleDictIter *di, const char *op, void *key, size_t keylen) {
|
int RM_DictCompareC(RedisModuleDictIter *di, const char *op, void *key, size_t keylen) {
|
||||||
if (raxEOF(&di->ri)) return REDISMODULE_ERR;
|
if (raxEOF(&di->ri)) return REDISMODULE_ERR;
|
||||||
@ -6471,7 +6571,7 @@ int RM_ExportSharedAPI(RedisModuleCtx *ctx, const char *apiname, void *func) {
|
|||||||
* command that requires external APIs: if some API cannot be resolved, the
|
* command that requires external APIs: if some API cannot be resolved, the
|
||||||
* command should return an error.
|
* command should return an error.
|
||||||
*
|
*
|
||||||
* Here is an exmaple:
|
* Here is an example:
|
||||||
*
|
*
|
||||||
* int ... myCommandImplementation() {
|
* int ... myCommandImplementation() {
|
||||||
* if (getExternalAPIs() == 0) {
|
* if (getExternalAPIs() == 0) {
|
||||||
@ -6860,7 +6960,7 @@ void RM_ScanCursorDestroy(RedisModuleScanCursor *cursor) {
|
|||||||
* RedisModule_ScanCursorDestroy(c);
|
* RedisModule_ScanCursorDestroy(c);
|
||||||
*
|
*
|
||||||
* It is also possible to use this API from another thread while the lock
|
* It is also possible to use this API from another thread while the lock
|
||||||
* is acquired durring the actuall call to RM_Scan:
|
* is acquired during the actuall call to RM_Scan:
|
||||||
*
|
*
|
||||||
* RedisModuleCursor *c = RedisModule_ScanCursorCreate();
|
* RedisModuleCursor *c = RedisModule_ScanCursorCreate();
|
||||||
* RedisModule_ThreadSafeContextLock(ctx);
|
* RedisModule_ThreadSafeContextLock(ctx);
|
||||||
@ -6874,7 +6974,7 @@ void RM_ScanCursorDestroy(RedisModuleScanCursor *cursor) {
|
|||||||
* The function will return 1 if there are more elements to scan and
|
* The function will return 1 if there are more elements to scan and
|
||||||
* 0 otherwise, possibly setting errno if the call failed.
|
* 0 otherwise, possibly setting errno if the call failed.
|
||||||
*
|
*
|
||||||
* It is also possible to restart and existing cursor using RM_CursorRestart.
|
* It is also possible to restart an existing cursor using RM_ScanCursorRestart.
|
||||||
*
|
*
|
||||||
* IMPORTANT: This API is very similar to the Redis SCAN command from the
|
* IMPORTANT: This API is very similar to the Redis SCAN command from the
|
||||||
* point of view of the guarantees it provides. This means that the API
|
* point of view of the guarantees it provides. This means that the API
|
||||||
@ -6888,7 +6988,7 @@ void RM_ScanCursorDestroy(RedisModuleScanCursor *cursor) {
|
|||||||
* Moreover playing with the Redis keyspace while iterating may have the
|
* Moreover playing with the Redis keyspace while iterating may have the
|
||||||
* effect of returning more duplicates. A safe pattern is to store the keys
|
* effect of returning more duplicates. A safe pattern is to store the keys
|
||||||
* names you want to modify elsewhere, and perform the actions on the keys
|
* names you want to modify elsewhere, and perform the actions on the keys
|
||||||
* later when the iteration is complete. Howerver this can cost a lot of
|
* later when the iteration is complete. However this can cost a lot of
|
||||||
* memory, so it may make sense to just operate on the current key when
|
* memory, so it may make sense to just operate on the current key when
|
||||||
* possible during the iteration, given that this is safe. */
|
* possible during the iteration, given that this is safe. */
|
||||||
int RM_Scan(RedisModuleCtx *ctx, RedisModuleScanCursor *cursor, RedisModuleScanCB fn, void *privdata) {
|
int RM_Scan(RedisModuleCtx *ctx, RedisModuleScanCursor *cursor, RedisModuleScanCB fn, void *privdata) {
|
||||||
@ -6898,7 +6998,7 @@ int RM_Scan(RedisModuleCtx *ctx, RedisModuleScanCursor *cursor, RedisModuleScanC
|
|||||||
}
|
}
|
||||||
int ret = 1;
|
int ret = 1;
|
||||||
ScanCBData data = { ctx, privdata, fn };
|
ScanCBData data = { ctx, privdata, fn };
|
||||||
cursor->cursor = dictScan(ctx->client->db->pdict, cursor->cursor, moduleScanCallback, NULL, &data);
|
cursor->cursor = dictScan(ctx->client->db->dict, cursor->cursor, moduleScanCallback, NULL, &data);
|
||||||
if (cursor->cursor == 0) {
|
if (cursor->cursor == 0) {
|
||||||
cursor->done = 1;
|
cursor->done = 1;
|
||||||
ret = 0;
|
ret = 0;
|
||||||
@ -6953,8 +7053,8 @@ static void moduleScanKeyCallback(void *privdata, const dictEntry *de) {
|
|||||||
* RedisModule_CloseKey(key);
|
* RedisModule_CloseKey(key);
|
||||||
* RedisModule_ScanCursorDestroy(c);
|
* RedisModule_ScanCursorDestroy(c);
|
||||||
*
|
*
|
||||||
* It is also possible to use this API from another thread while the lock is acquired durring
|
* It is also possible to use this API from another thread while the lock is acquired during
|
||||||
* the actuall call to RM_Scan, and re-opening the key each time:
|
* the actuall call to RM_ScanKey, and re-opening the key each time:
|
||||||
* RedisModuleCursor *c = RedisModule_ScanCursorCreate();
|
* RedisModuleCursor *c = RedisModule_ScanCursorCreate();
|
||||||
* RedisModule_ThreadSafeContextLock(ctx);
|
* RedisModule_ThreadSafeContextLock(ctx);
|
||||||
* RedisModuleKey *key = RedisModule_OpenKey(...)
|
* RedisModuleKey *key = RedisModule_OpenKey(...)
|
||||||
@ -6970,7 +7070,7 @@ static void moduleScanKeyCallback(void *privdata, const dictEntry *de) {
|
|||||||
*
|
*
|
||||||
* The function will return 1 if there are more elements to scan and 0 otherwise,
|
* The function will return 1 if there are more elements to scan and 0 otherwise,
|
||||||
* possibly setting errno if the call failed.
|
* possibly setting errno if the call failed.
|
||||||
* It is also possible to restart and existing cursor using RM_CursorRestart.
|
* It is also possible to restart an existing cursor using RM_ScanCursorRestart.
|
||||||
*
|
*
|
||||||
* NOTE: Certain operations are unsafe while iterating the object. For instance
|
* NOTE: Certain operations are unsafe while iterating the object. For instance
|
||||||
* while the API guarantees to return at least one time all the elements that
|
* while the API guarantees to return at least one time all the elements that
|
||||||
@ -6994,7 +7094,7 @@ int RM_ScanKey(RedisModuleKey *key, RedisModuleScanCursor *cursor, RedisModuleSc
|
|||||||
ht = (dict*)ptrFromObj(o);
|
ht = (dict*)ptrFromObj(o);
|
||||||
} else if (o->type == OBJ_ZSET) {
|
} else if (o->type == OBJ_ZSET) {
|
||||||
if (o->encoding == OBJ_ENCODING_SKIPLIST)
|
if (o->encoding == OBJ_ENCODING_SKIPLIST)
|
||||||
ht = ((zset *)ptrFromObj(o))->pdict;
|
ht = ((zset *)ptrFromObj(o))->dict;
|
||||||
} else {
|
} else {
|
||||||
errno = EINVAL;
|
errno = EINVAL;
|
||||||
return 0;
|
return 0;
|
||||||
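To make the scanning API shown in the hunks above concrete, here is a minimal sketch of a module command that walks the whole keyspace with RedisModule_Scan and then the elements of a single key with RedisModule_ScanKey. The module name, the SCANEXAMPLE.COUNT command and the counting logic are illustrative assumptions; only the RedisModule_Scan*/cursor calls come from the API documented above.

    #include "redismodule.h"

    /* RedisModule_Scan callback: invoked once per key in the keyspace. */
    static void countKeyCallback(RedisModuleCtx *ctx, RedisModuleString *keyname,
                                 RedisModuleKey *key, void *privdata) {
        REDISMODULE_NOT_USED(ctx);
        REDISMODULE_NOT_USED(keyname);
        REDISMODULE_NOT_USED(key);
        (*(long long *)privdata)++;
    }

    /* RedisModule_ScanKey callback: invoked once per element of the scanned key. */
    static void countFieldCallback(RedisModuleKey *key, RedisModuleString *field,
                                   RedisModuleString *value, void *privdata) {
        REDISMODULE_NOT_USED(key);
        REDISMODULE_NOT_USED(field);
        REDISMODULE_NOT_USED(value);
        (*(long long *)privdata)++;
    }

    /* SCANEXAMPLE.COUNT <key> -- reply with [total keys, elements of <key>]. */
    static int ScanExample_Count(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc != 2) return RedisModule_WrongArity(ctx);
        long long keys = 0, fields = 0;

        /* Walk the keyspace; RedisModule_Scan returns 0 once the scan is done. */
        RedisModuleScanCursor *cursor = RedisModule_ScanCursorCreate();
        while (RedisModule_Scan(ctx, cursor, countKeyCallback, &keys)) { }
        RedisModule_ScanCursorDestroy(cursor);

        /* Walk the elements of one key (hash, set or skiplist-encoded zset). */
        RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);
        if (key) {
            cursor = RedisModule_ScanCursorCreate();
            while (RedisModule_ScanKey(key, cursor, countFieldCallback, &fields)) { }
            RedisModule_ScanCursorDestroy(cursor);
            RedisModule_CloseKey(key);
        }

        RedisModule_ReplyWithArray(ctx, 2);
        RedisModule_ReplyWithLongLong(ctx, keys);
        RedisModule_ReplyWithLongLong(ctx, fields);
        return REDISMODULE_OK;
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "scanexample", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        return RedisModule_CreateCommand(ctx, "scanexample.count", ScanExample_Count,
                                         "readonly", 1, 1, 1);
    }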
@ -7073,7 +7173,7 @@ int RM_Fork(RedisModuleForkDoneHandler cb, void *user_data) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
openChildInfoPipe();
|
openChildInfoPipe();
|
||||||
if ((childpid = redisFork()) == 0) {
|
if ((childpid = redisFork(CHILD_TYPE_MODULE)) == 0) {
|
||||||
/* Child */
|
/* Child */
|
||||||
redisSetProcTitle("redis-module-fork");
|
redisSetProcTitle("redis-module-fork");
|
||||||
} else if (childpid == -1) {
|
} else if (childpid == -1) {
|
||||||
@ -7084,6 +7184,7 @@ int RM_Fork(RedisModuleForkDoneHandler cb, void *user_data) {
|
|||||||
g_pserver->module_child_pid = childpid;
|
g_pserver->module_child_pid = childpid;
|
||||||
moduleForkInfo.done_handler = cb;
|
moduleForkInfo.done_handler = cb;
|
||||||
moduleForkInfo.done_handler_user_data = user_data;
|
moduleForkInfo.done_handler_user_data = user_data;
|
||||||
|
updateDictResizePolicy();
|
||||||
serverLog(LL_VERBOSE, "Module fork started pid: %d ", childpid);
|
serverLog(LL_VERBOSE, "Module fork started pid: %d ", childpid);
|
||||||
}
|
}
|
||||||
return childpid;
|
return childpid;
|
||||||
@ -7093,7 +7194,7 @@ int RM_Fork(RedisModuleForkDoneHandler cb, void *user_data) {
|
|||||||
* retcode will be provided to the done handler executed on the parent process.
|
* retcode will be provided to the done handler executed on the parent process.
|
||||||
*/
|
*/
|
||||||
int RM_ExitFromChild(int retcode) {
|
int RM_ExitFromChild(int retcode) {
|
||||||
sendChildCOWInfo(CHILD_INFO_TYPE_MODULE, "Module fork");
|
sendChildCOWInfo(CHILD_TYPE_MODULE, "Module fork");
|
||||||
exitFromChild(retcode);
|
exitFromChild(retcode);
|
||||||
return REDISMODULE_OK;
|
return REDISMODULE_OK;
|
||||||
}
|
}
|
||||||
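The fork-related changes above (redisFork now takes a child type, and RM_ExitFromChild reports CHILD_TYPE_MODULE COW info) are easiest to read alongside how a module would use the API. The sketch below is illustrative: the FORKEXAMPLE.SNAPSHOT command and the work done in the child are made up, while RedisModule_Fork, RedisModule_ExitFromChild and the done handler signature follow the code shown in this diff.

    #include "redismodule.h"
    #include <unistd.h>

    /* Runs in the parent when the forked child terminates. */
    static void forkDone(int exitcode, int bysignal, void *user_data) {
        REDISMODULE_NOT_USED(user_data);
        RedisModule_Log(NULL, "notice", "module fork finished: exitcode=%d bysignal=%d",
                        exitcode, bysignal);
    }

    /* FORKEXAMPLE.SNAPSHOT -- fork and pretend to write a snapshot in the child. */
    static int ForkExample_Snapshot(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        if (argc != 1) return RedisModule_WrongArity(ctx);

        int pid = RedisModule_Fork(forkDone, NULL);
        if (pid == 0) {
            /* Child: work against the copy-on-write view of the dataset, then
             * exit through the module API so the parent gets notified. */
            sleep(1); /* placeholder for real work */
            RedisModule_ExitFromChild(0);
        } else if (pid == -1) {
            return RedisModule_ReplyWithError(ctx, "ERR fork failed");
        }
        /* Parent: reply immediately; forkDone() fires later. */
        return RedisModule_ReplyWithLongLong(ctx, pid);
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "forkexample", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        return RedisModule_CreateCommand(ctx, "forkexample.snapshot", ForkExample_Snapshot,
                                         "write", 0, 0, 0);
    }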
@ -7123,7 +7224,7 @@ int TerminateModuleForkChild(int child_pid, int wait) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* Can be used to kill the forked child process from the parent process.
|
/* Can be used to kill the forked child process from the parent process.
|
||||||
* child_pid whould be the return value of RedisModule_Fork. */
|
* child_pid would be the return value of RedisModule_Fork. */
|
||||||
int RM_KillForkChild(int child_pid) {
|
int RM_KillForkChild(int child_pid) {
|
||||||
/* Kill module child, wait for child exit. */
|
/* Kill module child, wait for child exit. */
|
||||||
if (TerminateModuleForkChild(child_pid,1) == C_OK)
|
if (TerminateModuleForkChild(child_pid,1) == C_OK)
|
||||||
@ -7261,7 +7362,7 @@ void ModuleForkDoneHandler(int exitcode, int bysignal) {
|
|||||||
* REDISMODULE_SUBEVENT_LOADING_FAILED
|
* REDISMODULE_SUBEVENT_LOADING_FAILED
|
||||||
*
|
*
|
||||||
* Note that AOF loading may start with RDB data in case of
|
* Note that AOF loading may start with RDB data in case of
|
||||||
* rdb-preamble, in which case you'll only recieve an AOF_START event.
|
* rdb-preamble, in which case you'll only receive an AOF_START event.
|
||||||
*
|
*
|
||||||
*
|
*
|
||||||
* RedisModuleEvent_ClientChange
|
* RedisModuleEvent_ClientChange
|
||||||
@ -7283,7 +7384,7 @@ void ModuleForkDoneHandler(int exitcode, int bysignal) {
|
|||||||
* This event is called when the instance (that can be both a
|
* This event is called when the instance (that can be both a
|
||||||
* master or a replica) gets a new online replica, or loses a
|
* master or a replica) gets a new online replica, or loses a
|
||||||
* replica when it gets disconnected.
|
* replica when it gets disconnected.
|
||||||
* The following sub events are availble:
|
* The following sub events are available:
|
||||||
*
|
*
|
||||||
* REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE
|
* REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE
|
||||||
* REDISMODULE_SUBEVENT_REPLICA_CHANGE_OFFLINE
|
* REDISMODULE_SUBEVENT_REPLICA_CHANGE_OFFLINE
|
||||||
@ -7321,7 +7422,7 @@ void ModuleForkDoneHandler(int exitcode, int bysignal) {
|
|||||||
* RedisModuleEvent_ModuleChange
|
* RedisModuleEvent_ModuleChange
|
||||||
*
|
*
|
||||||
* This event is called when a new module is loaded or one is unloaded.
|
* This event is called when a new module is loaded or one is unloaded.
|
||||||
* The following sub events are availble:
|
* The following sub events are available:
|
||||||
*
|
*
|
||||||
* REDISMODULE_SUBEVENT_MODULE_LOADED
|
* REDISMODULE_SUBEVENT_MODULE_LOADED
|
||||||
* REDISMODULE_SUBEVENT_MODULE_UNLOADED
|
* REDISMODULE_SUBEVENT_MODULE_UNLOADED
|
||||||
@ -7348,14 +7449,29 @@ void ModuleForkDoneHandler(int exitcode, int bysignal) {
|
|||||||
* int32_t progress; // Approximate progress between 0 and 1024,
|
* int32_t progress; // Approximate progress between 0 and 1024,
|
||||||
* or -1 if unknown.
|
* or -1 if unknown.
|
||||||
*
|
*
|
||||||
* The function returns REDISMODULE_OK if the module was successfully subscrived
|
* RedisModuleEvent_SwapDB
|
||||||
* for the specified event. If the API is called from a wrong context then
|
*
|
||||||
* REDISMODULE_ERR is returned. */
|
* This event is called when a swap db command has been successfully
|
||||||
|
* executed.
|
||||||
|
* There are currently no subevents available for this event.
|
||||||
|
*
|
||||||
|
* The data pointer can be cast to a RedisModuleSwapDbInfo
|
||||||
|
* structure with the following fields:
|
||||||
|
*
|
||||||
|
* int32_t dbnum_first; // Swap Db first dbnum
|
||||||
|
* int32_t dbnum_second; // Swap Db second dbnum
|
||||||
|
*
|
||||||
|
*
|
||||||
|
*
|
||||||
|
* The function returns REDISMODULE_OK if the module was successfully subscribed
|
||||||
|
* for the specified event. If the API is called from a wrong context or unsupported event
|
||||||
|
* is given then REDISMODULE_ERR is returned. */
|
||||||
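As a concrete illustration of the SwapDB event documented above, the sketch below subscribes to RedisModuleEvent_SwapDB and reads the dbnum_first/dbnum_second fields of RedisModuleSwapDbInfo. The module name "swapwatch" and the log message are illustrative; the event constant, callback signature and struct fields come from the documentation in this hunk.

    #include "redismodule.h"

    /* Fired after SWAPDB completes; 'data' points to a RedisModuleSwapDbInfo. */
    static void onSwapDb(RedisModuleCtx *ctx, RedisModuleEvent e,
                         uint64_t subevent, void *data) {
        REDISMODULE_NOT_USED(e);
        REDISMODULE_NOT_USED(subevent);
        RedisModuleSwapDbInfo *si = data;
        RedisModule_Log(ctx, "notice", "SWAPDB %d <-> %d",
                        (int)si->dbnum_first, (int)si->dbnum_second);
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "swapwatch", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        if (RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_SwapDB, onSwapDb)
                == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        return REDISMODULE_OK;
    }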
int RM_SubscribeToServerEvent(RedisModuleCtx *ctx, RedisModuleEvent event, RedisModuleEventCallback callback) {
|
int RM_SubscribeToServerEvent(RedisModuleCtx *ctx, RedisModuleEvent event, RedisModuleEventCallback callback) {
|
||||||
RedisModuleEventListener *el;
|
RedisModuleEventListener *el;
|
||||||
|
|
||||||
/* Protect in case of calls from contexts without a module reference. */
|
/* Protect in case of calls from contexts without a module reference. */
|
||||||
if (ctx->module == NULL) return REDISMODULE_ERR;
|
if (ctx->module == NULL) return REDISMODULE_ERR;
|
||||||
|
if (event.id >= _REDISMODULE_EVENT_NEXT) return REDISMODULE_ERR;
|
||||||
|
|
||||||
/* Search an event matching this module and event ID. */
|
/* Search an event matching this module and event ID. */
|
||||||
listIter li;
|
listIter li;
|
||||||
@ -7387,6 +7503,42 @@ int RM_SubscribeToServerEvent(RedisModuleCtx *ctx, RedisModuleEvent event, Redis
|
|||||||
return REDISMODULE_OK;
|
return REDISMODULE_OK;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* For a given server event and subevent, return zero if the
|
||||||
|
* subevent is not supported and non-zero otherwise.
|
||||||
|
*/
|
||||||
|
int RM_IsSubEventSupported(RedisModuleEvent event, int64_t subevent) {
|
||||||
|
switch (event.id) {
|
||||||
|
case REDISMODULE_EVENT_REPLICATION_ROLE_CHANGED:
|
||||||
|
return subevent < _REDISMODULE_EVENT_REPLROLECHANGED_NEXT;
|
||||||
|
case REDISMODULE_EVENT_PERSISTENCE:
|
||||||
|
return subevent < _REDISMODULE_SUBEVENT_PERSISTENCE_NEXT;
|
||||||
|
case REDISMODULE_EVENT_FLUSHDB:
|
||||||
|
return subevent < _REDISMODULE_SUBEVENT_FLUSHDB_NEXT;
|
||||||
|
case REDISMODULE_EVENT_LOADING:
|
||||||
|
return subevent < _REDISMODULE_SUBEVENT_LOADING_NEXT;
|
||||||
|
case REDISMODULE_EVENT_CLIENT_CHANGE:
|
||||||
|
return subevent < _REDISMODULE_SUBEVENT_CLIENT_CHANGE_NEXT;
|
||||||
|
case REDISMODULE_EVENT_SHUTDOWN:
|
||||||
|
return subevent < _REDISMODULE_SUBEVENT_SHUTDOWN_NEXT;
|
||||||
|
case REDISMODULE_EVENT_REPLICA_CHANGE:
|
||||||
|
return subevent < _REDISMODULE_EVENT_REPLROLECHANGED_NEXT;
|
||||||
|
case REDISMODULE_EVENT_MASTER_LINK_CHANGE:
|
||||||
|
return subevent < _REDISMODULE_SUBEVENT_MASTER_NEXT;
|
||||||
|
case REDISMODULE_EVENT_CRON_LOOP:
|
||||||
|
return subevent < _REDISMODULE_SUBEVENT_CRON_LOOP_NEXT;
|
||||||
|
case REDISMODULE_EVENT_MODULE_CHANGE:
|
||||||
|
return subevent < _REDISMODULE_SUBEVENT_MODULE_NEXT;
|
||||||
|
case REDISMODULE_EVENT_LOADING_PROGRESS:
|
||||||
|
return subevent < _REDISMODULE_SUBEVENT_LOADING_PROGRESS_NEXT;
|
||||||
|
case REDISMODULE_EVENT_SWAPDB:
|
||||||
|
return subevent < _REDISMODULE_SUBEVENT_SWAPDB_NEXT;
|
||||||
|
default:
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
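The new RM_IsSubEventSupported shown above lets a module probe, at load time, whether the running server knows a given subevent before relying on it in an event handler. A minimal sketch follows; the module name, the PERSISTENCE_ENDED subevent chosen for the probe and the handler body are illustrative placeholders, not a prescription.

    #include "redismodule.h"

    static int ended_supported = 0;

    static void onPersistence(RedisModuleCtx *ctx, RedisModuleEvent e,
                              uint64_t subevent, void *data) {
        REDISMODULE_NOT_USED(e);
        REDISMODULE_NOT_USED(data);
        /* Only act on the subevent we verified at load time. */
        if (ended_supported && subevent == REDISMODULE_SUBEVENT_PERSISTENCE_ENDED)
            RedisModule_Log(ctx, "notice", "persistence finished");
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "persistwatch", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;

        ended_supported = RedisModule_IsSubEventSupported(
            RedisModuleEvent_Persistence, REDISMODULE_SUBEVENT_PERSISTENCE_ENDED);

        return RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Persistence,
                                                  onPersistence);
    }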
/* This is called by the Redis internals every time we want to fire an
|
/* This is called by the Redis internals every time we want to fire an
|
||||||
* event that can be intercepted by some module. The pointer 'data' is useful
|
* event that can be intercepted by some module. The pointer 'data' is useful
|
||||||
* in order to populate the event-specific structure when needed, in order
|
* in order to populate the event-specific structure when needed, in order
|
||||||
@ -7400,6 +7552,7 @@ void moduleFireServerEvent(uint64_t eid, int subid, void *data) {
|
|||||||
* cheap if there are no registered modules. */
|
* cheap if there are no registered modules. */
|
||||||
if (listLength(RedisModule_EventListeners) == 0) return;
|
if (listLength(RedisModule_EventListeners) == 0) return;
|
||||||
|
|
||||||
|
int real_client_used = 0;
|
||||||
listIter li;
|
listIter li;
|
||||||
listNode *ln;
|
listNode *ln;
|
||||||
listRewind(RedisModule_EventListeners,&li);
|
listRewind(RedisModule_EventListeners,&li);
|
||||||
@ -7409,7 +7562,15 @@ void moduleFireServerEvent(uint64_t eid, int subid, void *data) {
|
|||||||
RedisModuleCtx ctx = REDISMODULE_CTX_INIT;
|
RedisModuleCtx ctx = REDISMODULE_CTX_INIT;
|
||||||
ctx.module = el->module;
|
ctx.module = el->module;
|
||||||
|
|
||||||
if (ModulesInHooks == 0) {
|
if (eid == REDISMODULE_EVENT_CLIENT_CHANGE) {
|
||||||
|
/* In the case of client changes, we're pushing the real client
|
||||||
|
* so the event handler can mutate it if needed. For example,
|
||||||
|
* to change its authentication state in a way that does not
|
||||||
|
* depend on specific commands executed later.
|
||||||
|
*/
|
||||||
|
ctx.client = (client *) data;
|
||||||
|
real_client_used = 1;
|
||||||
|
} else if (ModulesInHooks == 0) {
|
||||||
ctx.client = moduleFreeContextReusedClient;
|
ctx.client = moduleFreeContextReusedClient;
|
||||||
} else {
|
} else {
|
||||||
ctx.client = createClient(NULL, IDX_EVENT_LOOP_MAIN);
|
ctx.client = createClient(NULL, IDX_EVENT_LOOP_MAIN);
|
||||||
@ -7452,6 +7613,8 @@ void moduleFireServerEvent(uint64_t eid, int subid, void *data) {
|
|||||||
moduledata = data;
|
moduledata = data;
|
||||||
} else if (eid == REDISMODULE_EVENT_CRON_LOOP) {
|
} else if (eid == REDISMODULE_EVENT_CRON_LOOP) {
|
||||||
moduledata = data;
|
moduledata = data;
|
||||||
|
} else if (eid == REDISMODULE_EVENT_SWAPDB) {
|
||||||
|
moduledata = data;
|
||||||
}
|
}
|
||||||
|
|
||||||
ModulesInHooks++;
|
ModulesInHooks++;
|
||||||
@ -7460,7 +7623,7 @@ void moduleFireServerEvent(uint64_t eid, int subid, void *data) {
|
|||||||
el->module->in_hook--;
|
el->module->in_hook--;
|
||||||
ModulesInHooks--;
|
ModulesInHooks--;
|
||||||
|
|
||||||
if (ModulesInHooks != 0) freeClient(ctx.client);
|
if (ModulesInHooks != 0 && !real_client_used) freeClient(ctx.client);
|
||||||
moduleFreeContext(&ctx);
|
moduleFreeContext(&ctx);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -7544,7 +7707,7 @@ void moduleInitModulesSystem(void) {
|
|||||||
g_pserver->loadmodule_queue = listCreate();
|
g_pserver->loadmodule_queue = listCreate();
|
||||||
modules = dictCreate(&modulesDictType,NULL);
|
modules = dictCreate(&modulesDictType,NULL);
|
||||||
|
|
||||||
/* Set up the keyspace notification susbscriber list and static client */
|
/* Set up the keyspace notification subscriber list and static client */
|
||||||
moduleKeyspaceSubscribers = listCreate();
|
moduleKeyspaceSubscribers = listCreate();
|
||||||
moduleFreeContextReusedClient = createClient(NULL, IDX_EVENT_LOOP_MAIN);
|
moduleFreeContextReusedClient = createClient(NULL, IDX_EVENT_LOOP_MAIN);
|
||||||
moduleFreeContextReusedClient->flags |= CLIENT_MODULE;
|
moduleFreeContextReusedClient->flags |= CLIENT_MODULE;
|
||||||
@ -7898,7 +8061,7 @@ size_t moduleCount(void) {
|
|||||||
return dictSize(modules);
|
return dictSize(modules);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Set the key last access time for LRU based eviction. not relevent if the
|
/* Set the key last access time for LRU based eviction. not relevant if the
|
||||||
* server's maxmemory policy is LFU based. Value is idle time in milliseconds.
|
* server's maxmemory policy is LFU based. Value is idle time in milliseconds.
|
||||||
* returns REDISMODULE_OK if the LRU was updated, REDISMODULE_ERR otherwise. */
|
* returns REDISMODULE_OK if the LRU was updated, REDISMODULE_ERR otherwise. */
|
||||||
int RM_SetLRU(RedisModuleKey *key, mstime_t lru_idle) {
|
int RM_SetLRU(RedisModuleKey *key, mstime_t lru_idle) {
|
||||||
@ -7950,6 +8113,46 @@ int RM_GetLFU(RedisModuleKey *key, long long *lfu_freq) {
|
|||||||
return REDISMODULE_OK;
|
return REDISMODULE_OK;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Returns the full ContextFlags mask. Using the return value,
|
||||||
|
* the module can check whether a certain set of flags is supported
|
||||||
|
* by the redis server version in use.
|
||||||
|
* Example:
|
||||||
|
* int supportedFlags = RM_GetContextFlagsAll();
|
||||||
|
* if (supportedFlags & REDISMODULE_CTX_FLAGS_MULTI) {
|
||||||
|
* // REDISMODULE_CTX_FLAGS_MULTI is supported
|
||||||
|
* } else {
|
||||||
|
* // REDISMODULE_CTX_FLAGS_MULTI is not supported
|
||||||
|
* }
|
||||||
|
*/
|
||||||
|
int RM_GetContextFlagsAll() {
|
||||||
|
return _REDISMODULE_CTX_FLAGS_NEXT - 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Returns the full KeyspaceNotification mask. Using the return value,
|
||||||
|
* the module can check whether a certain set of flags is supported
|
||||||
|
* by the redis server version in use.
|
||||||
|
* Example:
|
||||||
|
* int supportedFlags = RM_GetKeyspaceNotificationFlagsAll();
|
||||||
|
* if (supportedFlags & REDISMODULE_NOTIFY_LOADED) {
|
||||||
|
* // REDISMODULE_NOTIFY_LOADED is supported
|
||||||
|
* } else {
|
||||||
|
* // REDISMODULE_NOTIFY_LOADED is not supported
|
||||||
|
* }
|
||||||
|
*/
|
||||||
|
int RM_GetKeyspaceNotificationFlagsAll() {
|
||||||
|
return _REDISMODULE_NOTIFY_NEXT - 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Return the Redis version in the format 0x00MMmmpp.
|
||||||
|
* For example, for 6.0.7 the return value will be 0x00060007.
|
||||||
|
*/
|
||||||
|
int RM_GetServerVersion() {
|
||||||
|
return KEYDB_VERSION_NUM;
|
||||||
|
}
|
||||||
|
|
||||||
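Taken together, the three getters added above (GetContextFlagsAll, GetKeyspaceNotificationFlagsAll and GetServerVersion) allow runtime feature detection from a module. Below is a minimal sketch; the module name, the specific flags probed and the log text are illustrative assumptions, and note that in this fork GetServerVersion returns KEYDB_VERSION_NUM as shown in the hunk above.

    #include "redismodule.h"

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "featuredetect", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;

        /* 0x00MMmmpp, e.g. 0x00060007 for 6.0.7. */
        int version = RedisModule_GetServerVersion();
        RedisModule_Log(ctx, "notice", "server version %d.%d.%d",
                        (version >> 16) & 0xff, (version >> 8) & 0xff, version & 0xff);

        /* Which context flags can GetContextFlags() ever report on this server? */
        int all_ctx_flags = RedisModule_GetContextFlagsAll();
        int has_multi_flag = (all_ctx_flags & REDISMODULE_CTX_FLAGS_MULTI) != 0;

        /* Which keyspace notification classes can we subscribe to? */
        int all_notify_flags = RedisModule_GetKeyspaceNotificationFlagsAll();
        int has_loaded_notify = (all_notify_flags & REDISMODULE_NOTIFY_LOADED) != 0;

        RedisModule_Log(ctx, "notice", "MULTI flag: %s, LOADED notifications: %s",
                        has_multi_flag ? "yes" : "no",
                        has_loaded_notify ? "yes" : "no");
        return REDISMODULE_OK;
    }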
/* Replace the value assigned to a module type.
|
/* Replace the value assigned to a module type.
|
||||||
*
|
*
|
||||||
* The key must be open for writing, have an existing value, and have a moduleType
|
* The key must be open for writing, have an existing value, and have a moduleType
|
||||||
@ -7984,6 +8187,69 @@ int RM_ModuleTypeReplaceValue(RedisModuleKey *key, moduleType *mt, void *new_val
|
|||||||
return REDISMODULE_OK;
|
return REDISMODULE_OK;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/* For a specified command, parse its arguments and return an array that
|
||||||
|
* contains the indexes of all key name arguments. This function is
|
||||||
|
* essentially a more efficient way to do COMMAND GETKEYS.
|
||||||
|
*
|
||||||
|
* A NULL return value indicates the specified command has no keys, or
|
||||||
|
* an error condition. Error conditions are indicated by setting errno
|
||||||
|
* as follows:
|
||||||
|
*
|
||||||
|
* ENOENT: Specified command does not exist.
|
||||||
|
* EINVAL: Invalid command arity specified.
|
||||||
|
*
|
||||||
|
* NOTE: The returned array is not a Redis Module object so it does not
|
||||||
|
* get automatically freed even when auto-memory is used. The caller
|
||||||
|
* must explicitly call RM_Free() to free it.
|
||||||
|
*/
|
||||||
|
int *RM_GetCommandKeys(RedisModuleCtx *ctx, RedisModuleString **argv, int argc, int *num_keys) {
|
||||||
|
UNUSED(ctx);
|
||||||
|
struct redisCommand *cmd;
|
||||||
|
int *res = NULL;
|
||||||
|
|
||||||
|
/* Find command */
|
||||||
|
if ((cmd = lookupCommand((sds)ptrFromObj(argv[0]))) == NULL) {
|
||||||
|
errno = ENOENT;
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Bail out if command has no keys */
|
||||||
|
if (cmd->getkeys_proc == NULL && cmd->firstkey == 0) {
|
||||||
|
errno = 0;
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
|
if ((cmd->arity > 0 && cmd->arity != argc) || (argc < -cmd->arity)) {
|
||||||
|
errno = EINVAL;
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
|
getKeysResult result = GETKEYS_RESULT_INIT;
|
||||||
|
getKeysFromCommand(cmd, argv, argc, &result);
|
||||||
|
|
||||||
|
*num_keys = result.numkeys;
|
||||||
|
if (!result.numkeys) {
|
||||||
|
errno = 0;
|
||||||
|
getKeysFreeResult(&result);
|
||||||
|
return NULL;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (result.keys == result.keysbuf) {
|
||||||
|
/* If the result is using a stack based array, copy it. */
|
||||||
|
unsigned long int size = sizeof(int) * result.numkeys;
|
||||||
|
res = (int*)zmalloc(size);
|
||||||
|
memcpy(res, result.keys, size);
|
||||||
|
} else {
|
||||||
|
/* We return the heap based array and intentionally avoid calling
|
||||||
|
* getKeysFreeResult() here, as it is the caller's responsibility
|
||||||
|
* to free this array.
|
||||||
|
*/
|
||||||
|
res = result.keys;
|
||||||
|
}
|
||||||
|
|
||||||
|
return res;
|
||||||
|
}
|
||||||
|
|
||||||
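A small usage sketch of the RM_GetCommandKeys API added above: a hypothetical KEYS.POSITIONS command that reports which argv indexes of another command line are key names, handling the documented errno conventions and freeing the result explicitly since it is not an auto-memory object. The command and module names are illustrative. Called as KEYS.POSITIONS GET mykey it should reply with the single index 1, the position of mykey in the inner command line.

    #include "redismodule.h"
    #include <errno.h>

    /* KEYS.POSITIONS <cmd> [arg ...] -- reply with the argv indexes that
     * RedisModule_GetCommandKeys reports as key names for that command line. */
    static int KeysPositions(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc < 2) return RedisModule_WrongArity(ctx);

        int num_keys = 0;
        int *indexes = RedisModule_GetCommandKeys(ctx, argv + 1, argc - 1, &num_keys);

        if (indexes == NULL) {
            if (errno == ENOENT)
                return RedisModule_ReplyWithError(ctx, "ERR unknown command");
            if (errno == EINVAL)
                return RedisModule_ReplyWithError(ctx, "ERR wrong arity");
            return RedisModule_ReplyWithEmptyArray(ctx); /* command takes no keys */
        }

        RedisModule_ReplyWithArray(ctx, num_keys);
        for (int i = 0; i < num_keys; i++)
            RedisModule_ReplyWithLongLong(ctx, indexes[i]);

        /* Not an auto-memory object: must be freed explicitly. */
        RedisModule_Free(indexes);
        return REDISMODULE_OK;
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "keysexample", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        return RedisModule_CreateCommand(ctx, "keys.positions", KeysPositions,
                                         "readonly", 0, 0, 0);
    }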
/* Register all the APIs we export. Keep this function at the end of the
|
/* Register all the APIs we export. Keep this function at the end of the
|
||||||
* file so that's easy to seek it to add new entries. */
|
* file so that's easy to seek it to add new entries. */
|
||||||
void moduleRegisterCoreAPI(void) {
|
void moduleRegisterCoreAPI(void) {
|
||||||
@ -8120,6 +8386,7 @@ void moduleRegisterCoreAPI(void) {
|
|||||||
REGISTER_API(AbortBlock);
|
REGISTER_API(AbortBlock);
|
||||||
REGISTER_API(Milliseconds);
|
REGISTER_API(Milliseconds);
|
||||||
REGISTER_API(GetThreadSafeContext);
|
REGISTER_API(GetThreadSafeContext);
|
||||||
|
REGISTER_API(GetDetachedThreadSafeContext);
|
||||||
REGISTER_API(FreeThreadSafeContext);
|
REGISTER_API(FreeThreadSafeContext);
|
||||||
REGISTER_API(ThreadSafeContextLock);
|
REGISTER_API(ThreadSafeContextLock);
|
||||||
REGISTER_API(ThreadSafeContextTryLock);
|
REGISTER_API(ThreadSafeContextTryLock);
|
||||||
@ -8219,4 +8486,10 @@ void moduleRegisterCoreAPI(void) {
|
|||||||
REGISTER_API(DeauthenticateAndCloseClient);
|
REGISTER_API(DeauthenticateAndCloseClient);
|
||||||
REGISTER_API(AuthenticateClientWithACLUser);
|
REGISTER_API(AuthenticateClientWithACLUser);
|
||||||
REGISTER_API(AuthenticateClientWithUser);
|
REGISTER_API(AuthenticateClientWithUser);
|
||||||
|
REGISTER_API(GetContextFlagsAll);
|
||||||
|
REGISTER_API(GetKeyspaceNotificationFlagsAll);
|
||||||
|
REGISTER_API(IsSubEventSupported);
|
||||||
|
REGISTER_API(GetServerVersion);
|
||||||
|
REGISTER_API(GetClientCertificate);
|
||||||
|
REGISTER_API(GetCommandKeys);
|
||||||
}
|
}
|
||||||
|
@ -125,7 +125,7 @@ int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
|
|||||||
cmd_KEYRANGE,"readonly",1,1,0) == REDISMODULE_ERR)
|
cmd_KEYRANGE,"readonly",1,1,0) == REDISMODULE_ERR)
|
||||||
return REDISMODULE_ERR;
|
return REDISMODULE_ERR;
|
||||||
|
|
||||||
/* Create our global dictionray. Here we'll set our keys and values. */
|
/* Create our global dictionary. Here we'll set our keys and values. */
|
||||||
Keyspace = RedisModule_CreateDict(NULL);
|
Keyspace = RedisModule_CreateDict(NULL);
|
||||||
|
|
||||||
return REDISMODULE_OK;
|
return REDISMODULE_OK;
|
||||||
|
@ -91,7 +91,7 @@ int HelloPushCall_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, in
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* HELLO.PUSH.CALL2
|
/* HELLO.PUSH.CALL2
|
||||||
* This is exaxctly as HELLO.PUSH.CALL, but shows how we can reply to the
|
* This is exactly as HELLO.PUSH.CALL, but shows how we can reply to the
|
||||||
* client using directly a reply object that Call() returned. */
|
* client using directly a reply object that Call() returned. */
|
||||||
int HelloPushCall2_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
|
int HelloPushCall2_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
|
||||||
{
|
{
|
||||||
@ -345,7 +345,7 @@ int HelloToggleCase_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv,
|
|||||||
|
|
||||||
/* HELLO.MORE.EXPIRE key milliseconds.
|
/* HELLO.MORE.EXPIRE key milliseconds.
|
||||||
*
|
*
|
||||||
* If they key has already an associated TTL, extends it by "milliseconds"
|
* If the key has already an associated TTL, extends it by "milliseconds"
|
||||||
* milliseconds. Otherwise no operation is performed. */
|
* milliseconds. Otherwise no operation is performed. */
|
||||||
int HelloMoreExpire_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
|
int HelloMoreExpire_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
|
||||||
RedisModule_AutoMemory(ctx); /* Use automatic memory management. */
|
RedisModule_AutoMemory(ctx); /* Use automatic memory management. */
|
||||||
|
@ -89,7 +89,7 @@ void discardTransaction(client *c) {
|
|||||||
unwatchAllKeys(c);
|
unwatchAllKeys(c);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Flag the transacation as DIRTY_EXEC so that EXEC will fail.
|
/* Flag the transaction as DIRTY_EXEC so that EXEC will fail.
|
||||||
* Should be called every time there is an error while queueing a command. */
|
* Should be called every time there is an error while queueing a command. */
|
||||||
void flagTransaction(client *c) {
|
void flagTransaction(client *c) {
|
||||||
if (c->flags & CLIENT_MULTI)
|
if (c->flags & CLIENT_MULTI)
|
||||||
@ -348,32 +348,38 @@ void touchWatchedKey(redisDb *db, robj *key) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/* On FLUSHDB or FLUSHALL all the watched keys that are present before the
|
/* Set CLIENT_DIRTY_CAS to all clients of DB when DB is dirty.
|
||||||
* flush but will be deleted as effect of the flushing operation should
|
* It may happen in the following situations:
|
||||||
* be touched. "dbid" is the DB that's getting the flush. -1 if it is
|
* FLUSHDB, FLUSHALL, SWAPDB
|
||||||
* a FLUSHALL operation (all the DBs flushed). */
|
*
|
||||||
void touchWatchedKeysOnFlush(int dbid) {
|
* replaced_with: for SWAPDB, the WATCH should be invalidated if
|
||||||
listIter li1, li2;
|
* the key exists in either of them, and skipped only if it
|
||||||
|
* doesn't exist in both. */
|
||||||
|
void touchAllWatchedKeysInDb(redisDb *emptied, redisDb *replaced_with) {
|
||||||
|
listIter li;
|
||||||
listNode *ln;
|
listNode *ln;
|
||||||
|
dictEntry *de;
|
||||||
|
|
||||||
serverAssert(GlobalLocksAcquired());
|
serverAssert(GlobalLocksAcquired());
|
||||||
|
|
||||||
/* For every client, check all the waited keys */
|
if (dictSize(emptied->watched_keys) == 0) return;
|
||||||
listRewind(g_pserver->clients,&li1);
|
|
||||||
while((ln = listNext(&li1))) {
|
|
||||||
client *c = (client*)listNodeValue(ln);
|
|
||||||
listRewind(c->watched_keys,&li2);
|
|
||||||
while((ln = listNext(&li2))) {
|
|
||||||
watchedKey *wk = (watchedKey*)listNodeValue(ln);
|
|
||||||
|
|
||||||
/* For every watched key matching the specified DB, if the
|
dictIterator *di = dictGetSafeIterator(emptied->watched_keys);
|
||||||
* key exists, mark the client as dirty, as the key will be
|
while((de = dictNext(di)) != NULL) {
|
||||||
* removed. */
|
robj *key = (robj*)dictGetKey(de);
|
||||||
if (dbid == -1 || wk->db->id == dbid) {
|
list *clients = (list*)dictGetVal(de);
|
||||||
if (dictFind(wk->db->pdict, ptrFromObj(wk->key)) != NULL)
|
if (!clients) continue;
|
||||||
|
listRewind(clients,&li);
|
||||||
|
while((ln = listNext(&li))) {
|
||||||
|
client *c = (client*)listNodeValue(ln);
|
||||||
|
if (dictFind(emptied->dict, ptrFromObj(key))) {
|
||||||
|
c->flags |= CLIENT_DIRTY_CAS;
|
||||||
|
} else if (replaced_with && dictFind(replaced_with->dict, ptrFromObj(key))) {
|
||||||
c->flags |= CLIENT_DIRTY_CAS;
|
c->flags |= CLIENT_DIRTY_CAS;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
dictReleaseIterator(di);
|
||||||
}
|
}
|
||||||
|
|
||||||
void watchCommand(client *c) {
|
void watchCommand(client *c) {
|
||||||
|
@ -50,7 +50,7 @@ size_t sdsZmallocSize(sds s) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* Return the amount of memory used by the sds string at object->ptr
|
/* Return the amount of memory used by the sds string at object->ptr
|
||||||
* for a string object. */
|
* for a string object. This includes internal fragmentation. */
|
||||||
size_t getStringObjectSdsUsedMemory(robj *o) {
|
size_t getStringObjectSdsUsedMemory(robj *o) {
|
||||||
serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);
|
serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);
|
||||||
switch(o->encoding) {
|
switch(o->encoding) {
|
||||||
@ -60,6 +60,17 @@ size_t getStringObjectSdsUsedMemory(robj *o) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/* Return the length of a string object.
|
||||||
|
* This does NOT include internal fragmentation or sds unused space. */
|
||||||
|
size_t getStringObjectLen(robj *o) {
|
||||||
|
serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);
|
||||||
|
switch(o->encoding) {
|
||||||
|
case OBJ_ENCODING_RAW: return sdslen(szFromObj(o));
|
||||||
|
case OBJ_ENCODING_EMBSTR: return sdslen(szFromObj(o));
|
||||||
|
default: return 0; /* Just integer encoding for now. */
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
/* Client.reply list dup and free methods. */
|
/* Client.reply list dup and free methods. */
|
||||||
void *dupClientReplyValue(void *o) {
|
void *dupClientReplyValue(void *o) {
|
||||||
clientReplyBlock *old = (clientReplyBlock*)o;
|
clientReplyBlock *old = (clientReplyBlock*)o;
|
||||||
@ -126,6 +137,7 @@ client *createClient(connection *conn, int iel) {
|
|||||||
c->reqtype = 0;
|
c->reqtype = 0;
|
||||||
c->argc = 0;
|
c->argc = 0;
|
||||||
c->argv = NULL;
|
c->argv = NULL;
|
||||||
|
c->argv_len_sum = 0;
|
||||||
c->cmd = c->lastcmd = NULL;
|
c->cmd = c->lastcmd = NULL;
|
||||||
c->puser = DefaultUser;
|
c->puser = DefaultUser;
|
||||||
c->multibulklen = 0;
|
c->multibulklen = 0;
|
||||||
@ -190,7 +202,7 @@ client *createClient(connection *conn, int iel) {
|
|||||||
return c;
|
return c;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* This funciton puts the client in the queue of clients that should write
|
/* This function puts the client in the queue of clients that should write
|
||||||
* their output buffers to the socket. Note that it does not *yet* install
|
* their output buffers to the socket. Note that it does not *yet* install
|
||||||
* the write handler, to start clients are put in a queue of clients that need
|
* the write handler, to start clients are put in a queue of clients that need
|
||||||
* to write, so we try to do that before returning in the event loop (see the
|
* to write, so we try to do that before returning in the event loop (see the
|
||||||
@ -267,6 +279,9 @@ int prepareClientToWrite(client *c) {
|
|||||||
* handler since there is no socket at all. */
|
* handler since there is no socket at all. */
|
||||||
if (flags & (CLIENT_LUA|CLIENT_MODULE)) return C_OK;
|
if (flags & (CLIENT_LUA|CLIENT_MODULE)) return C_OK;
|
||||||
|
|
||||||
|
/* If CLIENT_CLOSE_ASAP flag is set, we need not write anything. */
|
||||||
|
if (c->flags & CLIENT_CLOSE_ASAP) return C_ERR;
|
||||||
|
|
||||||
/* CLIENT REPLY OFF / SKIP handling: don't send replies. */
|
/* CLIENT REPLY OFF / SKIP handling: don't send replies. */
|
||||||
if (flags & (CLIENT_REPLY_OFF|CLIENT_REPLY_SKIP)) return C_ERR;
|
if (flags & (CLIENT_REPLY_OFF|CLIENT_REPLY_SKIP)) return C_ERR;
|
||||||
|
|
||||||
@ -290,6 +305,9 @@ int prepareClientToWrite(client *c) {
|
|||||||
* Low level functions to add more data to output buffers.
|
* Low level functions to add more data to output buffers.
|
||||||
* -------------------------------------------------------------------------- */
|
* -------------------------------------------------------------------------- */
|
||||||
|
|
||||||
|
/* Attempts to add the reply to the static buffer in the client struct.
|
||||||
|
* Returns C_ERR if the buffer is full, or the reply list is not empty,
|
||||||
|
* in which case the reply must be added to the reply list. */
|
||||||
int _addReplyToBuffer(client *c, const char *s, size_t len) {
|
int _addReplyToBuffer(client *c, const char *s, size_t len) {
|
||||||
if (c->flags.load(std::memory_order_relaxed) & CLIENT_CLOSE_AFTER_REPLY) return C_OK;
|
if (c->flags.load(std::memory_order_relaxed) & CLIENT_CLOSE_AFTER_REPLY) return C_OK;
|
||||||
|
|
||||||
@ -336,6 +354,8 @@ int _addReplyToBuffer(client *c, const char *s, size_t len) {
|
|||||||
return C_OK;
|
return C_OK;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/* Adds the reply to the reply linked list.
|
||||||
|
* Note: some edits to this function need to be relayed to AddReplyFromClient. */
|
||||||
void _addReplyProtoToList(client *c, const char *s, size_t len) {
|
void _addReplyProtoToList(client *c, const char *s, size_t len) {
|
||||||
if (c->flags.load(std::memory_order_relaxed) & CLIENT_CLOSE_AFTER_REPLY) return;
|
if (c->flags.load(std::memory_order_relaxed) & CLIENT_CLOSE_AFTER_REPLY) return;
|
||||||
AssertCorrectThread(c);
|
AssertCorrectThread(c);
|
||||||
@ -343,7 +363,7 @@ void _addReplyProtoToList(client *c, const char *s, size_t len) {
|
|||||||
listNode *ln = listLast(c->reply);
|
listNode *ln = listLast(c->reply);
|
||||||
clientReplyBlock *tail = (clientReplyBlock*) (ln? listNodeValue(ln): NULL);
|
clientReplyBlock *tail = (clientReplyBlock*) (ln? listNodeValue(ln): NULL);
|
||||||
|
|
||||||
/* Note that 'tail' may be NULL even if we have a tail node, becuase when
|
/* Note that 'tail' may be NULL even if we have a tail node, because when
|
||||||
* addReplyDeferredLen() is used, it sets a dummy node to NULL just
|
* addReplyDeferredLen() is used, it sets a dummy node to NULL just
|
||||||
* to fill it later, when the size of the bulk length is set. */
|
* to fill it later, when the size of the bulk length is set. */
|
||||||
|
|
||||||
@ -969,14 +989,40 @@ void addReplySubcommandSyntaxError(client *c) {
|
|||||||
/* Append 'src' client output buffers into 'dst' client output buffers.
|
/* Append 'src' client output buffers into 'dst' client output buffers.
|
||||||
* This function clears the output buffers of 'src' */
|
* This function clears the output buffers of 'src' */
|
||||||
void AddReplyFromClient(client *dst, client *src) {
|
void AddReplyFromClient(client *dst, client *src) {
|
||||||
|
/* If the source client contains a partial response due to client output
|
||||||
|
* buffer limits, propagate that to the dest rather than copy a partial
|
||||||
|
* reply. We don't want to run the risk of copying a partial response in case
|
||||||
|
* for some reason the output limits don't reach the same decision (maybe
|
||||||
|
* they changed) */
|
||||||
|
if (src->flags & CLIENT_CLOSE_ASAP) {
|
||||||
|
sds client = catClientInfoString(sdsempty(),dst);
|
||||||
|
freeClientAsync(dst);
|
||||||
|
serverLog(LL_WARNING,"Client %s scheduled to be closed ASAP for overcoming of output buffer limits.", client);
|
||||||
|
sdsfree(client);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* First add the static buffer (either into the static buffer or reply list) */
|
||||||
|
addReplyProto(dst,src->buf, src->bufpos);
|
||||||
|
|
||||||
|
/* We need to check with prepareClientToWrite again (after addReplyProto)
|
||||||
|
* since addReplyProto may have changed something (like CLIENT_CLOSE_ASAP) */
|
||||||
if (prepareClientToWrite(dst) != C_OK)
|
if (prepareClientToWrite(dst) != C_OK)
|
||||||
return;
|
return;
|
||||||
addReplyProto(dst,src->buf, src->bufpos);
|
|
||||||
|
/* We're bypassing _addReplyProtoToList, so we need to add the pre/post
|
||||||
|
* checks in it. */
|
||||||
|
if (dst->flags & CLIENT_CLOSE_AFTER_REPLY) return;
|
||||||
|
|
||||||
|
/* Concatenate the reply list into the dest */
|
||||||
if (listLength(src->reply))
|
if (listLength(src->reply))
|
||||||
listJoin(dst->reply,src->reply);
|
listJoin(dst->reply,src->reply);
|
||||||
dst->reply_bytes += src->reply_bytes;
|
dst->reply_bytes += src->reply_bytes;
|
||||||
src->reply_bytes = 0;
|
src->reply_bytes = 0;
|
||||||
src->bufpos = 0;
|
src->bufpos = 0;
|
||||||
|
|
||||||
|
/* Check output buffer limits */
|
||||||
|
asyncCloseClientOnOutputBufferLimitReached(dst);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Copy 'src' client output buffers into 'dst' client output buffers.
|
/* Copy 'src' client output buffers into 'dst' client output buffers.
|
||||||
@ -1275,6 +1321,7 @@ static void freeClientArgv(client *c) {
|
|||||||
decrRefCount(c->argv[j]);
|
decrRefCount(c->argv[j]);
|
||||||
c->argc = 0;
|
c->argc = 0;
|
||||||
c->cmd = NULL;
|
c->cmd = NULL;
|
||||||
|
c->argv_len_sum = 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
void disconnectSlavesExcept(unsigned char *uuid)
|
void disconnectSlavesExcept(unsigned char *uuid)
|
||||||
@ -1414,7 +1461,7 @@ bool freeClient(client *c) {
|
|||||||
listDelNode(g_pserver->clients_to_close,ln);
|
listDelNode(g_pserver->clients_to_close,ln);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* If it is our master that's beging disconnected we should make sure
|
/* If it is our master that's being disconnected we should make sure
|
||||||
* to cache the state to try a partial resynchronization later.
|
* to cache the state to try a partial resynchronization later.
|
||||||
*
|
*
|
||||||
* Note that before doing this we make sure that the client is not in
|
* Note that before doing this we make sure that the client is not in
|
||||||
@ -1500,6 +1547,7 @@ bool freeClient(client *c) {
|
|||||||
zfree(c->replyAsync);
|
zfree(c->replyAsync);
|
||||||
if (c->name) decrRefCount(c->name);
|
if (c->name) decrRefCount(c->name);
|
||||||
zfree(c->argv);
|
zfree(c->argv);
|
||||||
|
c->argv_len_sum = 0;
|
||||||
freeClientMultiState(c);
|
freeClientMultiState(c);
|
||||||
sdsfree(c->peerid);
|
sdsfree(c->peerid);
|
||||||
ulock.unlock();
|
ulock.unlock();
|
||||||
@ -1810,6 +1858,9 @@ int handleClientsWithPendingWrites(int iel, int aof_state) {
|
|||||||
|
|
||||||
std::unique_lock<decltype(c->lock)> lock(c->lock);
|
std::unique_lock<decltype(c->lock)> lock(c->lock);
|
||||||
|
|
||||||
|
/* Don't write to clients that are going to be closed anyway. */
|
||||||
|
if (c->flags & CLIENT_CLOSE_ASAP) continue;
|
||||||
|
|
||||||
/* Try to write buffers to the client socket. */
|
/* Try to write buffers to the client socket. */
|
||||||
if (writeToClient(c,0) == C_ERR)
|
if (writeToClient(c,0) == C_ERR)
|
||||||
{
|
{
|
||||||
@ -1865,7 +1916,7 @@ void resetClient(client *c) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/* This funciton is used when we want to re-enter the event loop but there
|
/* This function is used when we want to re-enter the event loop but there
|
||||||
* is the risk that the client we are dealing with will be freed in some
|
* is the risk that the client we are dealing with will be freed in some
|
||||||
* way. This happens for instance in:
|
* way. This happens for instance in:
|
||||||
*
|
*
|
||||||
@ -1881,19 +1932,23 @@ void resetClient(client *c) {
|
|||||||
void protectClient(client *c) {
|
void protectClient(client *c) {
|
||||||
c->flags |= CLIENT_PROTECTED;
|
c->flags |= CLIENT_PROTECTED;
|
||||||
AssertCorrectThread(c);
|
AssertCorrectThread(c);
|
||||||
|
if (c->conn) {
|
||||||
connSetReadHandler(c->conn,NULL);
|
connSetReadHandler(c->conn,NULL);
|
||||||
connSetWriteHandler(c->conn,NULL);
|
connSetWriteHandler(c->conn,NULL);
|
||||||
}
|
}
|
||||||
|
}
|
||||||
|
|
||||||
/* This will undo the client protection done by protectClient() */
|
/* This will undo the client protection done by protectClient() */
|
||||||
void unprotectClient(client *c) {
|
void unprotectClient(client *c) {
|
||||||
AssertCorrectThread(c);
|
AssertCorrectThread(c);
|
||||||
if (c->flags & CLIENT_PROTECTED) {
|
if (c->flags & CLIENT_PROTECTED) {
|
||||||
c->flags &= ~CLIENT_PROTECTED;
|
c->flags &= ~CLIENT_PROTECTED;
|
||||||
|
if (c->conn) {
|
||||||
connSetReadHandler(c->conn,readQueryFromClient, true);
|
connSetReadHandler(c->conn,readQueryFromClient, true);
|
||||||
if (clientHasPendingReplies(c)) clientInstallWriteHandler(c);
|
if (clientHasPendingReplies(c)) clientInstallWriteHandler(c);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
}
|
||||||
|
|
||||||
/* Like processMultibulkBuffer(), but for the inline protocol instead of RESP,
|
/* Like processMultibulkBuffer(), but for the inline protocol instead of RESP,
|
||||||
* this function consumes the client query buffer and creates a command ready
|
* this function consumes the client query buffer and creates a command ready
|
||||||
@ -1949,6 +2004,7 @@ int processInlineBuffer(client *c) {
|
|||||||
* However the is an exception: masters may send us just a newline
|
* However the is an exception: masters may send us just a newline
|
||||||
* to keep the connection active. */
|
* to keep the connection active. */
|
||||||
if (querylen != 0 && c->flags & CLIENT_MASTER) {
|
if (querylen != 0 && c->flags & CLIENT_MASTER) {
|
||||||
|
sdsfreesplitres(argv,argc);
|
||||||
serverLog(LL_WARNING,"WARNING: Receiving inline protocol from master, master stream corruption? Closing the master connection and discarding the cached master.");
|
serverLog(LL_WARNING,"WARNING: Receiving inline protocol from master, master stream corruption? Closing the master connection and discarding the cached master.");
|
||||||
setProtocolError("Master using the inline protocol. Desync?",c);
|
setProtocolError("Master using the inline protocol. Desync?",c);
|
||||||
return C_ERR;
|
return C_ERR;
|
||||||
@ -1961,12 +2017,14 @@ int processInlineBuffer(client *c) {
|
|||||||
if (argc) {
|
if (argc) {
|
||||||
if (c->argv) zfree(c->argv);
|
if (c->argv) zfree(c->argv);
|
||||||
c->argv = (robj**)zmalloc(sizeof(robj*)*argc, MALLOC_LOCAL);
|
c->argv = (robj**)zmalloc(sizeof(robj*)*argc, MALLOC_LOCAL);
|
||||||
|
c->argv_len_sum = 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Create redis objects for all arguments. */
|
/* Create redis objects for all arguments. */
|
||||||
for (c->argc = 0, j = 0; j < argc; j++) {
|
for (c->argc = 0, j = 0; j < argc; j++) {
|
||||||
c->argv[c->argc] = createObject(OBJ_STRING,argv[j]);
|
c->argv[c->argc] = createObject(OBJ_STRING,argv[j]);
|
||||||
c->argc++;
|
c->argc++;
|
||||||
|
c->argv_len_sum += sdslen(argv[j]);
|
||||||
}
|
}
|
||||||
sds_free(argv);
|
sds_free(argv);
|
||||||
return C_OK;
|
return C_OK;
|
||||||
@ -2058,6 +2116,7 @@ int processMultibulkBuffer(client *c) {
|
|||||||
/* Setup argv array on client structure */
|
/* Setup argv array on client structure */
|
||||||
if (c->argv) zfree(c->argv);
|
if (c->argv) zfree(c->argv);
|
||||||
c->argv = (robj**)zmalloc(sizeof(robj*)*c->multibulklen, MALLOC_LOCAL);
|
c->argv = (robj**)zmalloc(sizeof(robj*)*c->multibulklen, MALLOC_LOCAL);
|
||||||
|
c->argv_len_sum = 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
serverAssertWithInfo(c,NULL,c->multibulklen > 0);
|
serverAssertWithInfo(c,NULL,c->multibulklen > 0);
|
||||||
@ -2111,7 +2170,7 @@ int processMultibulkBuffer(client *c) {
|
|||||||
c->qb_pos = 0;
|
c->qb_pos = 0;
|
||||||
/* Hint the sds library about the amount of bytes this string is
|
/* Hint the sds library about the amount of bytes this string is
|
||||||
* going to contain. */
|
* going to contain. */
|
||||||
c->querybuf = sdsMakeRoomFor(c->querybuf,ll+2);
|
c->querybuf = sdsMakeRoomFor(c->querybuf,ll+2-sdslen(c->querybuf));
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
c->bulklen = ll;
|
c->bulklen = ll;
|
||||||
@ -2130,6 +2189,7 @@ int processMultibulkBuffer(client *c) {
|
|||||||
sdslen(c->querybuf) == (size_t)(c->bulklen+2))
|
sdslen(c->querybuf) == (size_t)(c->bulklen+2))
|
||||||
{
|
{
|
||||||
c->argv[c->argc++] = createObject(OBJ_STRING,c->querybuf);
|
c->argv[c->argc++] = createObject(OBJ_STRING,c->querybuf);
|
||||||
|
c->argv_len_sum += c->bulklen;
|
||||||
sdsIncrLen(c->querybuf,-2); /* remove CRLF */
|
sdsIncrLen(c->querybuf,-2); /* remove CRLF */
|
||||||
/* Assume that if we saw a fat argument we'll see another one
|
/* Assume that if we saw a fat argument we'll see another one
|
||||||
* likely... */
|
* likely... */
|
||||||
@ -2138,6 +2198,7 @@ int processMultibulkBuffer(client *c) {
|
|||||||
} else {
|
} else {
|
||||||
c->argv[c->argc++] =
|
c->argv[c->argc++] =
|
||||||
createStringObject(c->querybuf+c->qb_pos,c->bulklen);
|
createStringObject(c->querybuf+c->qb_pos,c->bulklen);
|
||||||
|
c->argv_len_sum += c->bulklen;
|
||||||
c->qb_pos += c->bulklen+2;
|
c->qb_pos += c->bulklen+2;
|
||||||
}
|
}
|
||||||
c->bulklen = -1;
|
c->bulklen = -1;
|
||||||
@ -2437,7 +2498,7 @@ char *getClientPeerId(client *c) {
|
|||||||
return c->peerid;
|
return c->peerid;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Concatenate a string representing the state of a client in an human
|
/* Concatenate a string representing the state of a client in a human
|
||||||
* readable format, into the sds string 's'. */
|
* readable format, into the sds string 's'. */
|
||||||
sds catClientInfoString(sds s, client *client) {
|
sds catClientInfoString(sds s, client *client) {
|
||||||
char flags[16], events[3], conninfo[CONN_INFO_LEN], *p;
|
char flags[16], events[3], conninfo[CONN_INFO_LEN], *p;
|
||||||
@ -2470,8 +2531,21 @@ sds catClientInfoString(sds s, client *client) {
|
|||||||
if (connHasWriteHandler(client->conn)) *p++ = 'w';
|
if (connHasWriteHandler(client->conn)) *p++ = 'w';
|
||||||
}
|
}
|
||||||
*p = '\0';
|
*p = '\0';
|
||||||
|
|
||||||
|
/* Compute the total memory consumed by this client. */
|
||||||
|
size_t obufmem = getClientOutputBufferMemoryUsage(client);
|
||||||
|
size_t total_mem = obufmem;
|
||||||
|
total_mem += zmalloc_size(client); /* includes client->buf */
|
||||||
|
total_mem += sdsZmallocSize(client->querybuf);
|
||||||
|
/* For efficiency (less work keeping track of the argv memory), this does not include unused
|
||||||
|
* sds space or internal fragmentation, just the string length. But this is enough to
|
||||||
|
* spot problematic clients. */
|
||||||
|
total_mem += client->argv_len_sum;
|
||||||
|
if (client->argv)
|
||||||
|
total_mem += zmalloc_size(client->argv);
|
||||||
|
|
||||||
return sdscatfmt(s,
|
return sdscatfmt(s,
|
||||||
"id=%U addr=%s %s name=%s age=%I idle=%I flags=%s db=%i sub=%i psub=%i multi=%i qbuf=%U qbuf-free=%U obl=%U oll=%U omem=%U events=%s cmd=%s user=%s",
|
"id=%U addr=%s %s name=%s age=%I idle=%I flags=%s db=%i sub=%i psub=%i multi=%i qbuf=%U qbuf-free=%U argv-mem=%U obl=%U oll=%U omem=%U tot-mem=%U events=%s cmd=%s user=%s",
|
||||||
(unsigned long long) client->id,
|
(unsigned long long) client->id,
|
||||||
getClientPeerId(client),
|
getClientPeerId(client),
|
||||||
connGetInfo(client->conn, conninfo, sizeof(conninfo)),
|
connGetInfo(client->conn, conninfo, sizeof(conninfo)),
|
||||||
@ -2485,9 +2559,11 @@ sds catClientInfoString(sds s, client *client) {
|
|||||||
(client->flags & CLIENT_MULTI) ? client->mstate.count : -1,
|
(client->flags & CLIENT_MULTI) ? client->mstate.count : -1,
|
||||||
(unsigned long long) sdslen(client->querybuf),
|
(unsigned long long) sdslen(client->querybuf),
|
||||||
(unsigned long long) sdsavail(client->querybuf),
|
(unsigned long long) sdsavail(client->querybuf),
|
||||||
|
(unsigned long long) client->argv_len_sum,
|
||||||
(unsigned long long) client->bufpos,
|
(unsigned long long) client->bufpos,
|
||||||
(unsigned long long) listLength(client->reply),
|
(unsigned long long) listLength(client->reply),
|
||||||
(unsigned long long) getClientOutputBufferMemoryUsage(client),
|
(unsigned long long) obufmem, /* should not include client->buf since we want to see 0 for static clients. */
|
||||||
|
(unsigned long long) total_mem,
|
||||||
events,
|
events,
|
||||||
client->lastcmd ? client->lastcmd->name : "NULL",
|
client->lastcmd ? client->lastcmd->name : "NULL",
|
||||||
client->puser ? client->puser->name : "(superuser)");
|
client->puser ? client->puser->name : "(superuser)");
|
||||||
@ -3040,6 +3116,10 @@ void rewriteClientCommandVector(client *c, int argc, ...) {
|
|||||||
/* Replace argv and argc with our new versions. */
|
/* Replace argv and argc with our new versions. */
|
||||||
c->argv = argv;
|
c->argv = argv;
|
||||||
c->argc = argc;
|
c->argc = argc;
|
||||||
|
c->argv_len_sum = 0;
|
||||||
|
for (j = 0; j < c->argc; j++)
|
||||||
|
if (c->argv[j])
|
||||||
|
c->argv_len_sum += getStringObjectLen(c->argv[j]);
|
||||||
c->cmd = lookupCommandOrOriginal((sds)ptrFromObj(c->argv[0]));
|
c->cmd = lookupCommandOrOriginal((sds)ptrFromObj(c->argv[0]));
|
||||||
serverAssertWithInfo(c,NULL,c->cmd != NULL);
|
serverAssertWithInfo(c,NULL,c->cmd != NULL);
|
||||||
va_end(ap);
|
va_end(ap);
|
||||||
@ -3047,10 +3127,15 @@ void rewriteClientCommandVector(client *c, int argc, ...) {
|
|||||||
|
|
||||||
/* Completely replace the client command vector with the provided one. */
|
/* Completely replace the client command vector with the provided one. */
|
||||||
void replaceClientCommandVector(client *c, int argc, robj **argv) {
|
void replaceClientCommandVector(client *c, int argc, robj **argv) {
|
||||||
|
int j;
|
||||||
freeClientArgv(c);
|
freeClientArgv(c);
|
||||||
zfree(c->argv);
|
zfree(c->argv);
|
||||||
c->argv = argv;
|
c->argv = argv;
|
||||||
c->argc = argc;
|
c->argc = argc;
|
||||||
|
c->argv_len_sum = 0;
|
||||||
|
for (j = 0; j < c->argc; j++)
|
||||||
|
if (c->argv[j])
|
||||||
|
c->argv_len_sum += getStringObjectLen(c->argv[j]);
|
||||||
c->cmd = lookupCommandOrOriginal((sds)ptrFromObj(c->argv[0]));
|
c->cmd = lookupCommandOrOriginal((sds)ptrFromObj(c->argv[0]));
|
||||||
serverAssertWithInfo(c,NULL,c->cmd != NULL);
|
serverAssertWithInfo(c,NULL,c->cmd != NULL);
|
||||||
}
|
}
|
||||||
@ -3075,6 +3160,8 @@ void rewriteClientCommandArgument(client *c, int i, robj *newval) {
|
|||||||
c->argv[i] = NULL;
|
c->argv[i] = NULL;
|
||||||
}
|
}
|
||||||
oldval = c->argv[i];
|
oldval = c->argv[i];
|
||||||
|
if (oldval) c->argv_len_sum -= getStringObjectLen(oldval);
|
||||||
|
if (newval) c->argv_len_sum += getStringObjectLen(newval);
|
||||||
c->argv[i] = newval;
|
c->argv[i] = newval;
|
||||||
incrRefCount(newval);
|
incrRefCount(newval);
|
||||||
if (oldval) decrRefCount(oldval);
|
if (oldval) decrRefCount(oldval);
|
||||||
|
@ -62,7 +62,7 @@ int keyspaceEventsStringToFlags(char *classes) {
|
|||||||
return flags;
|
return flags;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* This function does exactly the revese of the function above: it gets
|
/* This function does exactly the reverse of the function above: it gets
|
||||||
* as input an integer with the xored flags and returns a string representing
|
* as input an integer with the xored flags and returns a string representing
|
||||||
* the selected classes. The string returned is an sds string that needs to
|
* the selected classes. The string returned is an sds string that needs to
|
||||||
* be released with sdsfree(). */
|
* be released with sdsfree(). */
|
||||||
|
@ -149,7 +149,7 @@ robj *createStringObject(const char *ptr, size_t len) {
|
|||||||
/* Create a string object from a long long value. When possible returns a
|
/* Create a string object from a long long value. When possible returns a
|
||||||
* shared integer object, or at least an integer encoded one.
|
* shared integer object, or at least an integer encoded one.
|
||||||
*
|
*
|
||||||
* If valueobj is non zero, the function avoids returning a a shared
|
* If valueobj is non zero, the function avoids returning a shared
|
||||||
* integer, because the object is going to be used as value in the Redis key
|
* integer, because the object is going to be used as value in the Redis key
|
||||||
* space (for instance when the INCR command is used), so we want LFU/LRU
|
* space (for instance when the INCR command is used), so we want LFU/LRU
|
||||||
* values specific for each key. */
|
* values specific for each key. */
|
||||||
@ -273,7 +273,7 @@ robj *createZsetObject(void) {
|
|||||||
zset *zs = (zset*)zmalloc(sizeof(*zs), MALLOC_SHARED);
|
zset *zs = (zset*)zmalloc(sizeof(*zs), MALLOC_SHARED);
|
||||||
robj *o;
|
robj *o;
|
||||||
|
|
||||||
zs->pdict = dictCreate(&zsetDictType,NULL);
|
zs->dict = dictCreate(&zsetDictType,NULL);
|
||||||
zs->zsl = zslCreate();
|
zs->zsl = zslCreate();
|
||||||
o = createObject(OBJ_ZSET,zs);
|
o = createObject(OBJ_ZSET,zs);
|
||||||
o->encoding = OBJ_ENCODING_SKIPLIST;
|
o->encoding = OBJ_ENCODING_SKIPLIST;
|
||||||
@ -335,7 +335,7 @@ void freeZsetObject(robj_roptr o) {
|
|||||||
switch (o->encoding) {
|
switch (o->encoding) {
|
||||||
case OBJ_ENCODING_SKIPLIST:
|
case OBJ_ENCODING_SKIPLIST:
|
||||||
zs = (zset*)ptrFromObj(o);
|
zs = (zset*)ptrFromObj(o);
|
||||||
dictRelease(zs->pdict);
|
dictRelease(zs->dict);
|
||||||
zslFree(zs->zsl);
|
zslFree(zs->zsl);
|
||||||
zfree(zs);
|
zfree(zs);
|
||||||
break;
|
break;
|
||||||
@ -805,6 +805,7 @@ const char *strEncoding(int encoding) {
|
|||||||
case OBJ_ENCODING_INTSET: return "intset";
|
case OBJ_ENCODING_INTSET: return "intset";
|
||||||
case OBJ_ENCODING_SKIPLIST: return "skiplist";
|
case OBJ_ENCODING_SKIPLIST: return "skiplist";
|
||||||
case OBJ_ENCODING_EMBSTR: return "embstr";
|
case OBJ_ENCODING_EMBSTR: return "embstr";
|
||||||
|
case OBJ_ENCODING_STREAM: return "stream";
|
||||||
default: return "unknown";
|
default: return "unknown";
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -851,7 +852,7 @@ size_t objectComputeSize(robj *o, size_t sample_size) {
|
|||||||
if(o->encoding == OBJ_ENCODING_INT) {
|
if(o->encoding == OBJ_ENCODING_INT) {
|
||||||
asize = sizeof(*o);
|
asize = sizeof(*o);
|
||||||
} else if(o->encoding == OBJ_ENCODING_RAW) {
|
} else if(o->encoding == OBJ_ENCODING_RAW) {
|
||||||
asize = sdsAllocSize(szFromObj(o))+sizeof(*o);
|
asize = sdsZmallocSize(szFromObj(o))+sizeof(*o);
|
||||||
} else if(o->encoding == OBJ_ENCODING_EMBSTR) {
|
} else if(o->encoding == OBJ_ENCODING_EMBSTR) {
|
||||||
asize = sdslen(szFromObj(o))+2+sizeof(*o);
|
asize = sdslen(szFromObj(o))+2+sizeof(*o);
|
||||||
} else {
|
} else {
|
||||||
@ -879,7 +880,7 @@ size_t objectComputeSize(robj *o, size_t sample_size) {
|
|||||||
asize = sizeof(*o)+sizeof(dict)+(sizeof(struct dictEntry*)*dictSlots(d));
|
asize = sizeof(*o)+sizeof(dict)+(sizeof(struct dictEntry*)*dictSlots(d));
|
||||||
while((de = dictNext(di)) != NULL && samples < sample_size) {
|
while((de = dictNext(di)) != NULL && samples < sample_size) {
|
||||||
ele = (sds)dictGetKey(de);
|
ele = (sds)dictGetKey(de);
|
||||||
elesize += sizeof(struct dictEntry) + sdsAllocSize(ele);
|
elesize += sizeof(struct dictEntry) + sdsZmallocSize(ele);
|
||||||
samples++;
|
samples++;
|
||||||
}
|
}
|
||||||
dictReleaseIterator(di);
|
dictReleaseIterator(di);
|
||||||
@ -894,14 +895,14 @@ size_t objectComputeSize(robj *o, size_t sample_size) {
|
|||||||
if (o->encoding == OBJ_ENCODING_ZIPLIST) {
|
if (o->encoding == OBJ_ENCODING_ZIPLIST) {
|
||||||
asize = sizeof(*o)+(ziplistBlobLen((unsigned char*)ptrFromObj(o)));
|
asize = sizeof(*o)+(ziplistBlobLen((unsigned char*)ptrFromObj(o)));
|
||||||
} else if (o->encoding == OBJ_ENCODING_SKIPLIST) {
|
} else if (o->encoding == OBJ_ENCODING_SKIPLIST) {
|
||||||
d = ((zset*)ptrFromObj(o))->pdict;
|
d = ((zset*)ptrFromObj(o))->dict;
|
||||||
zskiplist *zsl = ((zset*)ptrFromObj(o))->zsl;
|
zskiplist *zsl = ((zset*)ptrFromObj(o))->zsl;
|
||||||
zskiplistNode *znode = zsl->header->level(0)->forward;
|
zskiplistNode *znode = zsl->header->level(0)->forward;
|
||||||
asize = sizeof(*o)+sizeof(zset)+sizeof(zskiplist)+sizeof(dict)+
|
asize = sizeof(*o)+sizeof(zset)+sizeof(zskiplist)+sizeof(dict)+
|
||||||
(sizeof(struct dictEntry*)*dictSlots(d))+
|
(sizeof(struct dictEntry*)*dictSlots(d))+
|
||||||
zmalloc_size(zsl->header);
|
zmalloc_size(zsl->header);
|
||||||
while(znode != NULL && samples < sample_size) {
|
while(znode != NULL && samples < sample_size) {
|
||||||
elesize += sdsAllocSize(znode->ele);
|
elesize += sdsZmallocSize(znode->ele);
|
||||||
elesize += sizeof(struct dictEntry) + zmalloc_size(znode);
|
elesize += sizeof(struct dictEntry) + zmalloc_size(znode);
|
||||||
samples++;
|
samples++;
|
||||||
znode = znode->level(0)->forward;
|
znode = znode->level(0)->forward;
|
||||||
@ -920,7 +921,7 @@ size_t objectComputeSize(robj *o, size_t sample_size) {
|
|||||||
while((de = dictNext(di)) != NULL && samples < sample_size) {
|
while((de = dictNext(di)) != NULL && samples < sample_size) {
|
||||||
         ele = (sds)dictGetKey(de);
         ele2 = (sds)dictGetVal(de);
-        elesize += sdsAllocSize(ele) + sdsAllocSize(ele2);
+        elesize += sdsZmallocSize(ele) + sdsZmallocSize(ele2);
         elesize += sizeof(struct dictEntry);
         samples++;
     }
@@ -1061,7 +1062,7 @@ struct redisMemOverhead *getMemoryOverheadData(void) {
     mem = 0;
     if (g_pserver->aof_state != AOF_OFF) {
-        mem += sdsalloc(g_pserver->aof_buf);
+        mem += sdsZmallocSize(g_pserver->aof_buf);
         mem += aofRewriteBufferSize();
     }
     mh->aof_buffer = mem;
@@ -1081,16 +1082,16 @@ struct redisMemOverhead *getMemoryOverheadData(void) {
     for (j = 0; j < cserver.dbnum; j++) {
         redisDb *db = g_pserver->db+j;
-        long long keyscount = dictSize(db->pdict);
+        long long keyscount = dictSize(db->dict);
         if (keyscount==0) continue;

         mh->total_keys += keyscount;
         mh->db = (decltype(mh->db))zrealloc(mh->db,sizeof(mh->db[0])*(mh->num_dbs+1), MALLOC_LOCAL);
         mh->db[mh->num_dbs].dbid = j;

-        mem = dictSize(db->pdict) * sizeof(dictEntry) +
-              dictSlots(db->pdict) * sizeof(dictEntry*) +
-              dictSize(db->pdict) * sizeof(robj);
+        mem = dictSize(db->dict) * sizeof(dictEntry) +
+              dictSlots(db->dict) * sizeof(dictEntry*) +
+              dictSize(db->dict) * sizeof(robj);
         mh->db[mh->num_dbs].overhead_ht_main = mem;
         mem_total+=mem;

@@ -1276,24 +1277,21 @@ int objectSetLRUOrLFU(robj *val, long long lfu_freq, long long lru_idle,
 /* This is a helper function for the OBJECT command. We need to lookup keys
  * without any modification of LRU or other parameters. */
-robj *objectCommandLookup(client *c, robj *key) {
-    dictEntry *de;
-
-    if ((de = dictFind(c->db->pdict,ptrFromObj(key))) == NULL) return NULL;
-    return (robj*) dictGetVal(de);
+robj_roptr objectCommandLookup(client *c, robj *key) {
+    return lookupKeyReadWithFlags(c->db,key,LOOKUP_NOTOUCH|LOOKUP_NONOTIFY);
 }

-robj *objectCommandLookupOrReply(client *c, robj *key, robj *reply) {
-    robj *o = objectCommandLookup(c,key);
+robj_roptr objectCommandLookupOrReply(client *c, robj *key, robj *reply) {
+    robj_roptr o = objectCommandLookup(c,key);

     if (!o) addReply(c, reply);
     return o;
 }

-/* Object command allows to inspect the internals of an Redis Object.
+/* Object command allows to inspect the internals of a Redis Object.
  * Usage: OBJECT <refcount|encoding|idletime|freq> <key> */
 void objectCommand(client *c) {
-    robj *o;
+    robj_roptr o;

     if (c->argc == 2 && !strcasecmp(szFromObj(c->argv[1]),"help")) {
         const char *help[] = {
@@ -1306,15 +1304,15 @@ NULL
         addReplyHelp(c, help);
     } else if (!strcasecmp(szFromObj(c->argv[1]),"refcount") && c->argc == 3) {
         if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.null[c->resp]))
-                == NULL) return;
+                == nullptr) return;
         addReplyLongLong(c,o->getrefcount(std::memory_order_relaxed));
     } else if (!strcasecmp(szFromObj(c->argv[1]),"encoding") && c->argc == 3) {
         if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.null[c->resp]))
-                == NULL) return;
+                == nullptr) return;
         addReplyBulkCString(c,strEncoding(o->encoding));
     } else if (!strcasecmp(szFromObj(c->argv[1]),"idletime") && c->argc == 3) {
         if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.null[c->resp]))
-                == NULL) return;
+                == nullptr) return;
         if (g_pserver->maxmemory_policy & MAXMEMORY_FLAG_LFU) {
             addReplyError(c,"An LFU maxmemory policy is selected, idle time not tracked. Please note that when switching between policies at runtime LRU and LFU data will take some time to adjust.");
             return;
@@ -1322,7 +1320,7 @@ NULL
         addReplyLongLong(c,estimateObjectIdleTime(o)/1000);
     } else if (!strcasecmp(szFromObj(c->argv[1]),"freq") && c->argc == 3) {
         if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.null[c->resp]))
-                == NULL) return;
+                == nullptr) return;
         if (!(g_pserver->maxmemory_policy & MAXMEMORY_FLAG_LFU)) {
             addReplyError(c,"An LFU maxmemory policy is not selected, access frequency not tracked. Please note that when switching between policies at runtime LRU and LFU data will take some time to adjust.");
             return;
@@ -1331,10 +1329,10 @@ NULL
          * in case of the key has not been accessed for a long time,
          * because we update the access time only
          * when the key is read or overwritten. */
-        addReplyLongLong(c,LFUDecrAndReturn(o));
+        addReplyLongLong(c,LFUDecrAndReturn(o.unsafe_robjcast()));
     } else if (!strcasecmp(szFromObj(c->argv[1]), "lastmodified") && c->argc == 3) {
         if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.null[c->resp]))
-                == NULL) return;
+                == nullptr) return;
         uint64_t mvcc = mvccFromObj(o);
         addReplyLongLong(c, (g_pserver->mstime - (mvcc >> MVCC_MS_SHIFT)) / 1000);
     } else {
@@ -1377,12 +1375,12 @@ NULL
                 return;
             }
         }
-        if ((de = dictFind(c->db->pdict,ptrFromObj(c->argv[2]))) == NULL) {
+        if ((de = dictFind(c->db->dict,ptrFromObj(c->argv[2]))) == NULL) {
            addReplyNull(c);
            return;
        }
        size_t usage = objectComputeSize((robj*)dictGetVal(de),samples);
-        usage += sdsAllocSize((sds)dictGetKey(de));
+        usage += sdsZmallocSize((sds)dictGetKey(de));
        usage += sizeof(dictEntry);
        addReplyLongLong(c,usage);
     } else if (!strcasecmp(szFromObj(c->argv[1]),"stats") && c->argc == 2) {
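Note on the objectCommandLookup() change above: routing the lookup through lookupKeyReadWithFlags() with LOOKUP_NOTOUCH|LOOKUP_NONOTIFY keeps OBJECT from bumping LRU/LFU state or firing keyspace-miss events, and makes it treat logically expired keys as missing, matching the 6.0.10 behavior changes listed in the release notes. A minimal client-side check of that behavior, sketched with hiredis (key name and timings are illustrative, not from this commit):

    #include <stdio.h>
    #include <unistd.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (!c || c->err) return 1;
        freeReplyObject(redisCommand(c, "SET probe 1 PX 100"));
        usleep(300 * 1000);                      /* key is now logically expired */
        redisReply *r = redisCommand(c, "OBJECT ENCODING probe");
        /* With the fix the reply is a nil, same as TYPE on a missing key. */
        printf("reply type=%d (nil is %d)\n", r->type, REDIS_REPLY_NIL);
        freeReplyObject(r);
        redisFree(c);
        return 0;
    }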
@@ -46,7 +46,7 @@
  * count: 16 bits, max 65536 (max zl bytes is 65k, so max count actually < 32k).
  * encoding: 2 bits, RAW=1, LZF=2.
  * container: 2 bits, NONE=1, ZIPLIST=2.
- * recompress: 1 bit, bool, true if node is temporarry decompressed for usage.
+ * recompress: 1 bit, bool, true if node is temporary decompressed for usage.
  * attempted_compress: 1 bit, boolean, used for verifying during testing.
  * extra: 10 bits, free for future use; pads out the remainder of 32 bits */
 typedef struct quicklistNode {
@@ -105,7 +105,7 @@ typedef struct quicklistBookmark {
 /* quicklist is a 40 byte struct (on 64-bit systems) describing a quicklist.
  * 'count' is the number of total entries.
  * 'len' is the number of quicklist nodes.
- * 'compress' is: -1 if compression disabled, otherwise it's the number
+ * 'compress' is: 0 if compression disabled, otherwise it's the number
  *                of quicklistNodes to leave uncompressed at ends of quicklist.
  * 'fill' is the user-requested (or default) fill factor.
  * 'bookmakrs are an optional feature that is used by realloc this struct,
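The corrected comment matches how the depth value is actually interpreted: 0 disables compression, and any positive value is the number of nodes left uncompressed at each end of the list. A tiny sketch of that rule (illustrative only, not the quicklist code):

    /* Sketch: should the node at position idx (0-based) stay uncompressed in a
     * quicklist of len nodes, given the configured compress depth? */
    static int keep_plain(int idx, int len, int compress) {
        if (compress == 0) return 1;                     /* compression disabled */
        return idx < compress || idx >= len - compress;  /* ends stay plain */
    }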
16 src/rax.c
@@ -628,7 +628,7 @@ int raxGenericInsert(rax *rax, unsigned char *s, size_t len, void *data, void **
  *
  * 3b. IF $SPLITPOS != 0:
  *     Trim the compressed node (reallocating it as well) in order to
- *     contain $splitpos characters. Change chilid pointer in order to link
+ *     contain $splitpos characters. Change child pointer in order to link
  *     to the split node. If new compressed node len is just 1, set
  *     iscompr to 0 (layout is the same). Fix parent's reference.
  *
@@ -1082,7 +1082,7 @@ int raxRemove(rax *rax, unsigned char *s, size_t len, void **old) {
         }
     } else if (h->size == 1) {
         /* If the node had just one child, after the removal of the key
-         * further compression with adjacent nodes is pontentially possible. */
+         * further compression with adjacent nodes is potentially possible. */
         trycompress = 1;
     }

@@ -1329,7 +1329,7 @@ int raxIteratorNextStep(raxIterator *it, int noup) {
     if (!noup && children) {
         debugf("GO DEEPER\n");
         /* Seek the lexicographically smaller key in this subtree, which
-         * is the first one found always going torwards the first child
+         * is the first one found always going towards the first child
          * of every successive node. */
         if (!raxStackPush(&it->stack,it->node)) return 0;
         raxNode **cp = raxNodeFirstChildPtr(it->node);
@@ -1348,7 +1348,7 @@ int raxIteratorNextStep(raxIterator *it, int noup) {
             return 1;
         }
     } else {
-        /* If we finished exporing the previous sub-tree, switch to the
+        /* If we finished exploring the previous sub-tree, switch to the
         * new one: go upper until a node is found where there are
         * children representing keys lexicographically greater than the
         * current key. */
@@ -1510,7 +1510,7 @@ int raxIteratorPrevStep(raxIterator *it, int noup) {
 int raxSeek(raxIterator *it, const char *op, unsigned char *ele, size_t len) {
     int eq = 0, lt = 0, gt = 0, first = 0, last = 0;

-    it->stack.items = 0; /* Just resetting. Intialized by raxStart(). */
+    it->stack.items = 0; /* Just resetting. Initialized by raxStart(). */
     it->flags |= RAX_ITER_JUST_SEEKED;
     it->flags &= ~RAX_ITER_EOF;
     it->key_len = 0;
@@ -1731,7 +1731,7 @@ int raxPrev(raxIterator *it) {
 * tree, expect a disappointing distribution. A random walk produces good
 * random elements if the tree is not sparse, however in the case of a radix
 * tree certain keys will be reported much more often than others. At least
-* this function should be able to expore every possible element eventually. */
+* this function should be able to explore every possible element eventually. */
 int raxRandomWalk(raxIterator *it, size_t steps) {
     if (it->rt->numele == 0) {
         it->flags |= RAX_ITER_EOF;
@@ -1825,7 +1825,7 @@ uint64_t raxSize(rax *rax) {
 /* ----------------------------- Introspection ------------------------------ */

 /* This function is mostly used for debugging and learning purposes.
- * It shows an ASCII representation of a tree on standard output, outling
+ * It shows an ASCII representation of a tree on standard output, outline
 * all the nodes and the contained keys.
 *
 * The representation is as follow:
@@ -1835,7 +1835,7 @@ uint64_t raxSize(rax *rax) {
 * [abc]=0x12345678 (node is a key, pointing to value 0x12345678)
 * [] (a normal empty node)
 *
-* Children are represented in new idented lines, each children prefixed by
+* Children are represented in new indented lines, each children prefixed by
 * the "`-(x)" string, where "x" is the edge byte.
 *
 * [abc]
@@ -68,7 +68,7 @@ extern "C" {
 * successive nodes having a single child are "compressed" into the node
 * itself as a string of characters, each representing a next-level child,
 * and only the link to the node representing the last character node is
-* provided inside the representation. So the above representation is turend
+* provided inside the representation. So the above representation is turned
 * into:
 *
 * ["foo"] ""
@@ -133,7 +133,7 @@ typedef struct raxNode {
 * nodes).
 *
 * If the node has an associated key (iskey=1) and is not NULL
-* (isnull=0), then after the raxNode pointers poiting to the
+* (isnull=0), then after the raxNode pointers pointing to the
 * children, an additional value pointer is present (as you can see
 * in the representation above as "value-ptr" field).
 */
102 src/rdb.cpp
@@ -36,6 +36,7 @@
 #include "cron.h"

 #include <math.h>
+#include <fcntl.h>
 #include <sys/types.h>
 #include <sys/time.h>
 #include <sys/resource.h>
@@ -54,6 +55,9 @@ extern int rdbCheckMode;
 void rdbCheckError(const char *fmt, ...);
 void rdbCheckSetError(const char *fmt, ...);

+#ifdef __GNUC__
+void rdbReportError(int corruption_error, int linenum, const char *reason, ...) __attribute__ ((format (printf, 3, 4)));
+#endif
 void rdbReportError(int corruption_error, int linenum, const char *reason, ...) {
     va_list ap;
     char msg[1024];
@@ -82,7 +86,7 @@ void rdbReportError(int corruption_error, int linenum, const char *reason, ...)
     exit(1);
 }

-static int rdbWriteRaw(rio *rdb, void *p, size_t len) {
+static ssize_t rdbWriteRaw(rio *rdb, void *p, size_t len) {
     if (rdb && rioWrite(rdb,p,len) == 0)
         return -1;
     return len;
@@ -489,7 +493,7 @@ void *rdbGenericLoadStringObject(rio *rdb, int flags, size_t *lenptr) {
     int plain = flags & RDB_LOAD_PLAIN;
     int sds = flags & RDB_LOAD_SDS;
     int isencoded;
-    uint64_t len;
+    unsigned long long len;

     len = rdbLoadLen(rdb,&isencoded);
     if (isencoded) {
@@ -501,8 +505,8 @@ void *rdbGenericLoadStringObject(rio *rdb, int flags, size_t *lenptr) {
         case RDB_ENC_LZF:
             return rdbLoadLzfStringObject(rdb,flags,lenptr);
         default:
-            rdbExitReportCorruptRDB("Unknown RDB string encoding type %d",len);
-            return nullptr; /* Never reached. */
+            rdbExitReportCorruptRDB("Unknown RDB string encoding type %llu",len);
+            return nullptr;
         }
     }

@@ -1205,6 +1209,8 @@ ssize_t rdbSaveSingleModuleAux(rio *rdb, int when, moduleType *mt) {
     /* Save a module-specific aux value. */
     RedisModuleIO io;
     int retval = rdbSaveType(rdb, RDB_OPCODE_MODULE_AUX);
+    if (retval == -1) return -1;
+    io.bytes += retval;

     /* Write the "module" identifier as prefix, so that we'll be able
      * to call the right module during loading. */
@@ -1265,7 +1271,7 @@ int rdbSaveRio(rio *rdb, int *error, int rdbflags, rdbSaveInfo *rsi) {

     for (j = 0; j < cserver.dbnum; j++) {
         redisDb *db = g_pserver->db+j;
-        dict *d = db->pdict;
+        dict *d = db->dict;
         if (dictSize(d) == 0) continue;
         di = dictGetSafeIterator(d);

@@ -1275,7 +1281,7 @@ int rdbSaveRio(rio *rdb, int *error, int rdbflags, rdbSaveInfo *rsi) {

         /* Write the RESIZE DB opcode. */
         uint64_t db_size, expires_size;
-        db_size = dictSize(db->pdict);
+        db_size = dictSize(db->dict);
         expires_size = db->setexpire->size();
         if (rdbSaveType(rdb,RDB_OPCODE_RESIZEDB) == -1) goto werr;
         if (rdbSaveLen(rdb,db_size) == -1) goto werr;
@@ -1392,7 +1398,7 @@ int rdbSave(rdbSaveInfo *rsi)
 int rdbSaveFile(char *filename, rdbSaveInfo *rsi) {
     char tmpfile[256];
     char cwd[MAXPATHLEN]; /* Current working dir path for error messages. */
-    FILE *fp;
+    FILE *fp = NULL;
     rio rdb;
     int error = 0;

@@ -1421,9 +1427,10 @@ int rdbSaveFile(char *filename, rdbSaveInfo *rsi) {
     }

     /* Make sure data will not remain on the OS's output buffers */
-    if (fflush(fp) == EOF) goto werr;
-    if (fsync(fileno(fp)) == -1) goto werr;
-    if (fclose(fp) == EOF) goto werr;
+    if (fflush(fp)) goto werr;
+    if (fsync(fileno(fp))) goto werr;
+    if (fclose(fp)) { fp = NULL; goto werr; }
+    fp = NULL;

     /* Use RENAME to make sure the DB file is changed atomically only
      * if the generate DB file is ok. */
@@ -1450,7 +1457,7 @@ int rdbSaveFile(char *filename, rdbSaveInfo *rsi) {

 werr:
     serverLog(LL_WARNING,"Write error saving DB on disk: %s", strerror(errno));
-    fclose(fp);
+    if (fp) fclose(fp);
     unlink(tmpfile);
     stopSaving(0);
     return C_ERR;
@@ -1465,7 +1472,7 @@ int rdbSaveBackground(rdbSaveInfo *rsi) {
     g_pserver->lastbgsave_try = time(NULL);
     openChildInfoPipe();

-    if ((childpid = redisFork()) == 0) {
+    if ((childpid = redisFork(CHILD_TYPE_RDB)) == 0) {
         int retval;

         /* Child */
@@ -1473,7 +1480,7 @@ int rdbSaveBackground(rdbSaveInfo *rsi) {
         redisSetCpuAffinity(g_pserver->bgsave_cpulist);
         retval = rdbSave(rsi);
         if (retval == C_OK) {
-            sendChildCOWInfo(CHILD_INFO_TYPE_RDB, "RDB");
+            sendChildCOWInfo(CHILD_TYPE_RDB, "RDB");
         }
         exitFromChild((retval == C_OK) ? 0 : 1);
     } else {
@@ -1489,16 +1496,35 @@ int rdbSaveBackground(rdbSaveInfo *rsi) {
         g_pserver->rdb_save_time_start = time(NULL);
         g_pserver->rdb_child_pid = childpid;
         g_pserver->rdb_child_type = RDB_CHILD_TYPE_DISK;
+        updateDictResizePolicy();
         return C_OK;
     }
     return C_OK; /* unreached */
 }

-void rdbRemoveTempFile(pid_t childpid) {
+/* Note that we may call this function in signal handle 'sigShutdownHandler',
+ * so we need guarantee all functions we call are async-signal-safe.
+ * If we call this function from signal handle, we won't call bg_unlik that
+ * is not async-signal-safe. */
+void rdbRemoveTempFile(pid_t childpid, int from_signal) {
     char tmpfile[256];
+    char pid[32];

-    snprintf(tmpfile,sizeof(tmpfile),"temp-%d.rdb", (int) childpid);
+    /* Generate temp rdb file name using aync-signal safe functions. */
+    int pid_len = ll2string(pid, sizeof(pid), childpid);
+    strcpy(tmpfile, "temp-");
+    strncpy(tmpfile+5, pid, pid_len);
+    strcpy(tmpfile+5+pid_len, ".rdb");
+
+    if (from_signal) {
+        /* bg_unlink is not async-signal-safe, but in this case we don't really
+         * need to close the fd, it'll be released when the process exists. */
+        int fd = open(tmpfile, O_RDONLY|O_NONBLOCK);
+        UNUSED(fd);
         unlink(tmpfile);
+    } else {
+        bg_unlink(tmpfile);
+    }
 }

 /* This function is called by rdbLoadObject() when the code is in RDB-check
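The rewritten rdbRemoveTempFile() above deliberately avoids snprintf(), because the function can now run inside the shutdown signal handler and snprintf() is not on the async-signal-safe list; only plain memory copies and an integer-to-string helper are used. A standalone sketch of the same name-building trick (my_ll2string below is a stand-in for the ll2string helper the patch relies on):

    #include <string.h>
    #include <sys/types.h>

    /* Illustrative only: format "temp-<pid>.rdb" with async-signal-safe calls. */
    static int my_ll2string(char *dst, long long v) {
        char tmp[32];
        int n = 0, len = 0;
        if (v == 0) tmp[n++] = '0';
        while (v > 0) { tmp[n++] = (char)('0' + (v % 10)); v /= 10; }
        while (n > 0) dst[len++] = tmp[--n];   /* digits were collected in reverse */
        dst[len] = '\0';
        return len;
    }

    static void temp_rdb_name(char *out, pid_t childpid) {
        char pid[32];
        int pid_len = my_ll2string(pid, (long long)childpid);
        strcpy(out, "temp-");
        memcpy(out + 5, pid, (size_t)pid_len);
        strcpy(out + 5 + pid_len, ".rdb");
    }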
@@ -1625,7 +1651,7 @@ robj *rdbLoadObject(int rdbtype, rio *rdb, sds key, uint64_t mvcc_tstamp) {
         zs = (zset*)ptrFromObj(o);

         if (zsetlen > DICT_HT_INITIAL_SIZE)
-            dictExpand(zs->pdict,zsetlen);
+            dictExpand(zs->dict,zsetlen);

         /* Load every single element of the sorted set. */
         while(zsetlen--) {
@@ -1656,7 +1682,7 @@ robj *rdbLoadObject(int rdbtype, rio *rdb, sds key, uint64_t mvcc_tstamp) {
             if (sdslen(sdsele) > maxelelen) maxelelen = sdslen(sdsele);

             znode = zslInsert(zs->zsl,score,sdsele);
-            dictAdd(zs->pdict,sdsele,&znode->score);
+            dictAdd(zs->dict,sdsele,&znode->score);
         }

         /* Convert *after* loading, since sorted sets are not stored ordered. */
@@ -2288,12 +2314,12 @@ int rdbLoadRio(rio *rdb, int rdbflags, rdbSaveInfo *rsi) {
                 goto eoferr;
             if ((expires_size = rdbLoadLen(rdb,NULL)) == RDB_LENERR)
                 goto eoferr;
-            dictExpand(db->pdict,db_size);
+            dictExpand(db->dict,db_size);
             continue; /* Read next opcode. */
         } else if (type == RDB_OPCODE_AUX) {
             /* AUX: generic string-string fields. Use to add state to RDB
              * which is backward compatible. Implementations of RDB loading
-             * are requierd to skip AUX fields they don't understand.
+             * are required to skip AUX fields they don't understand.
              *
              * An AUX field is composed of two strings: key and value. */
             robj *auxkey, *auxval;
@@ -2321,7 +2347,7 @@ int rdbLoadRio(rio *rdb, int rdbflags, rdbSaveInfo *rsi) {
                 if (luaCreateFunction(NULL,g_pserver->lua,auxval) == NULL) {
                     rdbExitReportCorruptRDB(
                         "Can't load Lua script from RDB file! "
-                        "BODY: %s", ptrFromObj(auxval));
+                        "BODY: %s", (char*)ptrFromObj(auxval));
                 }
             } else if (!strcasecmp(szFromObj(auxkey),"redis-ver")) {
                 serverLog(LL_NOTICE,"Loading RDB produced by version %s",
@@ -2600,7 +2626,7 @@ int rdbLoadFile(const char *filename, rdbSaveInfo *rsi, int rdbflags) {

 /* A background saving child (BGSAVE) terminated its work. Handle this.
  * This function covers the case of actual BGSAVEs. */
-void backgroundSaveDoneHandlerDisk(int exitcode, int bysignal) {
+static void backgroundSaveDoneHandlerDisk(int exitcode, int bysignal) {
     if (!bysignal && exitcode == 0) {
         serverLog(LL_NOTICE,
             "Background saving terminated with success");
@@ -2616,27 +2642,20 @@ void backgroundSaveDoneHandlerDisk(int exitcode, int bysignal) {
         serverLog(LL_WARNING,
             "Background saving terminated by signal %d", bysignal);
         latencyStartMonitor(latency);
-        rdbRemoveTempFile(g_pserver->rdb_child_pid);
+        rdbRemoveTempFile(g_pserver->rdb_child_pid, 0);
         latencyEndMonitor(latency);
         latencyAddSampleIfNeeded("rdb-unlink-temp-file",latency);
         /* SIGUSR1 is whitelisted, so we have a way to kill a child without
-         * tirggering an error condition. */
+         * triggering an error condition. */
         if (bysignal != SIGUSR1)
             g_pserver->lastbgsave_status = C_ERR;
     }
-    g_pserver->rdb_child_pid = -1;
-    g_pserver->rdb_child_type = RDB_CHILD_TYPE_NONE;
-    g_pserver->rdb_save_time_last = time(NULL)-g_pserver->rdb_save_time_start;
-    g_pserver->rdb_save_time_start = -1;
-    /* Possibly there are slaves waiting for a BGSAVE in order to be served
-     * (the first stage of SYNC is a bulk transfer of dump.rdb) */
-    updateSlavesWaitingBgsave((!bysignal && exitcode == 0) ? C_OK : C_ERR, RDB_CHILD_TYPE_DISK);
 }

 /* A background saving child (BGSAVE) terminated its work. Handle this.
  * This function covers the case of RDB -> Slaves socket transfers for
  * diskless replication. */
-void backgroundSaveDoneHandlerSocket(int exitcode, int bysignal) {
+static void backgroundSaveDoneHandlerSocket(int exitcode, int bysignal) {
     serverAssert(GlobalLocksAcquired());

     if (!bysignal && exitcode == 0) {
@@ -2648,15 +2667,11 @@ void backgroundSaveDoneHandlerSocket(int exitcode, int bysignal) {
         serverLog(LL_WARNING,
             "Background transfer terminated by signal %d", bysignal);
     }
-    g_pserver->rdb_child_pid = -1;
-    g_pserver->rdb_child_type = RDB_CHILD_TYPE_NONE;
-    g_pserver->rdb_save_time_start = -1;
-
-    updateSlavesWaitingBgsave((!bysignal && exitcode == 0) ? C_OK : C_ERR, RDB_CHILD_TYPE_SOCKET);
 }

 /* When a background RDB saving/transfer terminates, call the right handler. */
 void backgroundSaveDoneHandler(int exitcode, int bysignal) {
+    int type = g_pserver->rdb_child_type;
     switch(g_pserver->rdb_child_type) {
     case RDB_CHILD_TYPE_DISK:
         backgroundSaveDoneHandlerDisk(exitcode,bysignal);
@@ -2668,6 +2683,14 @@ void backgroundSaveDoneHandler(int exitcode, int bysignal) {
         serverPanic("Unknown RDB child type.");
         break;
     }
+
+    g_pserver->rdb_child_pid = -1;
+    g_pserver->rdb_child_type = RDB_CHILD_TYPE_NONE;
+    g_pserver->rdb_save_time_last = time(NULL)-g_pserver->rdb_save_time_start;
+    g_pserver->rdb_save_time_start = -1;
+    /* Possibly there are slaves waiting for a BGSAVE in order to be served
+     * (the first stage of SYNC is a bulk transfer of dump.rdb) */
+    updateSlavesWaitingBgsave((!bysignal && exitcode == 0) ? C_OK : C_ERR, type);
 }

 /* Kill the RDB saving child using SIGUSR1 (so that the parent will know
@@ -2675,7 +2698,7 @@ void backgroundSaveDoneHandler(int exitcode, int bysignal) {
  * the cleanup needed. */
 void killRDBChild(void) {
     kill(g_pserver->rdb_child_pid,SIGUSR1);
-    rdbRemoveTempFile(g_pserver->rdb_child_pid);
+    rdbRemoveTempFile(g_pserver->rdb_child_pid, 0);
     closeChildInfoPipe();
     updateDictResizePolicy();
 }
@@ -2720,7 +2743,7 @@ int rdbSaveToSlavesSockets(rdbSaveInfo *rsi) {

     /* Create the child process. */
     openChildInfoPipe();
-    if ((childpid = redisFork()) == 0) {
+    if ((childpid = redisFork(CHILD_TYPE_RDB)) == 0) {
         /* Child */
         int retval;
         rio rdb;
@@ -2735,7 +2758,7 @@ int rdbSaveToSlavesSockets(rdbSaveInfo *rsi) {
             retval = C_ERR;

         if (retval == C_OK) {
-            sendChildCOWInfo(CHILD_INFO_TYPE_RDB, "RDB");
+            sendChildCOWInfo(CHILD_TYPE_RDB, "RDB");
         }

         rioFreeFd(&rdb);
@@ -2770,6 +2793,7 @@ int rdbSaveToSlavesSockets(rdbSaveInfo *rsi) {
         g_pserver->rdb_save_time_start = time(NULL);
         g_pserver->rdb_child_pid = childpid;
         g_pserver->rdb_child_type = RDB_CHILD_TYPE_SOCKET;
+        updateDictResizePolicy();
         close(g_pserver->rdb_pipe_write); /* close write in parent so that it can detect the close on the child. */
         aePostFunction(g_pserver->rgthreadvar[IDX_EVENT_LOOP_MAIN].el, []{
             if (aeCreateFileEvent(serverTL->el, g_pserver->rdb_pipe_read, AE_READABLE, rdbPipeReadHandler,NULL) == AE_ERR) {
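Both fork paths above now call updateDictResizePolicy() right after recording the child, mirroring what killRDBChild() already did on the teardown side. The general idea (a sketch of the pattern only, with hypothetical helper names, not the KeyDB code) is to stop dict rehashing while a child exists so the parent dirties fewer pages and copy-on-write stays small:

    /* Sketch: has_active_child_process() is a hypothetical stand-in. */
    void update_dict_resize_policy(void) {
        if (has_active_child_process())   /* an RDB/AOF child is running */
            dictDisableResize();          /* avoid rehashing -> less COW in the parent */
        else
            dictEnableResize();
    }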
@@ -145,7 +145,7 @@ int rdbLoad(rdbSaveInfo *rsi, int rdbflags);
 int rdbLoadFile(const char *filename, rdbSaveInfo *rsi, int rdbflags);
 int rdbSaveBackground(rdbSaveInfo *rsi);
 int rdbSaveToSlavesSockets(rdbSaveInfo *rsi);
-void rdbRemoveTempFile(pid_t childpid);
+void rdbRemoveTempFile(pid_t childpid, int from_signal);
 int rdbSave(rdbSaveInfo *rsi);
 int rdbSaveFile(char *filename, rdbSaveInfo *rsi);
 int rdbSaveFp(FILE *pf, rdbSaveInfo *rsi);
@@ -157,6 +157,7 @@ robj *rdbLoadObject(int type, rio *rdb, sds key, uint64_t mvcc_tstamp);
 void backgroundSaveDoneHandler(int exitcode, int bysignal);
 int rdbSaveKeyValuePair(rio *rdb, robj *key, robj *val, long long expiretime);
 ssize_t rdbSaveSingleModuleAux(rio *rdb, int when, moduleType *mt);
+robj *rdbLoadCheckModuleValue(rio *rdb, char *modulename);
 robj *rdbLoadStringObject(rio *rdb);
 ssize_t rdbSaveStringObject(rio *rdb, robj_roptr obj);
 ssize_t rdbSaveRawString(rio *rdb, const unsigned char *s, size_t len);
@@ -765,7 +765,7 @@ static client createClient(const char *cmd, size_t len, client from, int thread_
             }
             c->stagptr[c->staglen++] = p;
             c->stagfree--;
-            p += 5; /* 12 is strlen("{tag}"). */
+            p += 5; /* 5 is strlen("{tag}"). */
         }
     }
 }
@@ -1525,6 +1525,7 @@ int test_is_selected(const char *name) {
 int main(int argc, const char **argv) {
     int i;
     char *data, *cmd;
+    const char *tag;
     int len;

     client c;
@@ -1576,7 +1577,12 @@ int main(int argc, const char **argv) {

     config.latency = (long long*)zmalloc(sizeof(long long)*config.requests, MALLOC_LOCAL);

+    tag = "";
+
     if (config.cluster_mode) {
+        // We only include the slot placeholder {tag} if cluster mode is enabled
+        tag = ":{tag}";
+
         /* Fetch cluster configuration. */
         if (!fetchClusterConfiguration() || !config.cluster_nodes) {
             if (!config.hostsocket) {
@@ -1690,63 +1696,63 @@ int main(int argc, const char **argv) {
     }

     if (test_is_selected("set")) {
-        len = redisFormatCommand(&cmd,"SET key:{tag}:__rand_int__ %s",data);
+        len = redisFormatCommand(&cmd,"SET key%s:__rand_int__ %s",tag,data);
         benchmark("SET",cmd,len);
         free(cmd);
     }

     if (test_is_selected("get")) {
-        len = redisFormatCommand(&cmd,"GET key:{tag}:__rand_int__");
+        len = redisFormatCommand(&cmd,"GET key%s:__rand_int__",tag);
         benchmark("GET",cmd,len);
         free(cmd);
     }

     if (test_is_selected("incr")) {
-        len = redisFormatCommand(&cmd,"INCR counter:{tag}:__rand_int__");
+        len = redisFormatCommand(&cmd,"INCR counter%s:__rand_int__",tag);
         benchmark("INCR",cmd,len);
         free(cmd);
     }

     if (test_is_selected("lpush")) {
-        len = redisFormatCommand(&cmd,"LPUSH mylist:{tag} %s",data);
+        len = redisFormatCommand(&cmd,"LPUSH mylist%s %s",tag,data);
         benchmark("LPUSH",cmd,len);
         free(cmd);
     }

     if (test_is_selected("rpush")) {
-        len = redisFormatCommand(&cmd,"RPUSH mylist:{tag} %s",data);
+        len = redisFormatCommand(&cmd,"RPUSH mylist%s %s",tag,data);
         benchmark("RPUSH",cmd,len);
         free(cmd);
     }

     if (test_is_selected("lpop")) {
-        len = redisFormatCommand(&cmd,"LPOP mylist:{tag}");
+        len = redisFormatCommand(&cmd,"LPOP mylist%s",tag);
         benchmark("LPOP",cmd,len);
         free(cmd);
     }

     if (test_is_selected("rpop")) {
-        len = redisFormatCommand(&cmd,"RPOP mylist:{tag}");
+        len = redisFormatCommand(&cmd,"RPOP mylist%s",tag);
         benchmark("RPOP",cmd,len);
         free(cmd);
     }

     if (test_is_selected("sadd")) {
         len = redisFormatCommand(&cmd,
-            "SADD myset:{tag} element:__rand_int__");
+            "SADD myset%s element:__rand_int__",tag);
         benchmark("SADD",cmd,len);
         free(cmd);
     }

     if (test_is_selected("hset")) {
         len = redisFormatCommand(&cmd,
-            "HSET myhash:{tag} element:__rand_int__ %s",data);
+            "HSET myhash%s element:__rand_int__ %s",tag,data);
         benchmark("HSET",cmd,len);
         free(cmd);
     }

     if (test_is_selected("spop")) {
-        len = redisFormatCommand(&cmd,"SPOP myset:{tag}");
+        len = redisFormatCommand(&cmd,"SPOP myset%s",tag);
         benchmark("SPOP",cmd,len);
         free(cmd);
     }
@@ -1755,13 +1761,13 @@ int main(int argc, const char **argv) {
         const char *score = "0";
         if (config.randomkeys) score = "__rand_int__";
         len = redisFormatCommand(&cmd,
-            "ZADD myzset:{tag} %s element:__rand_int__",score);
+            "ZADD myzset%s %s element:__rand_int__",tag,score);
         benchmark("ZADD",cmd,len);
         free(cmd);
     }

     if (test_is_selected("zpopmin")) {
-        len = redisFormatCommand(&cmd,"ZPOPMIN myzset:{tag}");
+        len = redisFormatCommand(&cmd,"ZPOPMIN myzset%s",tag);
         benchmark("ZPOPMIN",cmd,len);
         free(cmd);
     }
@@ -1772,45 +1778,47 @@ int main(int argc, const char **argv) {
         test_is_selected("lrange_500") ||
         test_is_selected("lrange_600"))
     {
-        len = redisFormatCommand(&cmd,"LPUSH mylist:{tag} %s",data);
+        len = redisFormatCommand(&cmd,"LPUSH mylist%s %s",tag,data);
         benchmark("LPUSH (needed to benchmark LRANGE)",cmd,len);
         free(cmd);
     }

     if (test_is_selected("lrange") || test_is_selected("lrange_100")) {
-        len = redisFormatCommand(&cmd,"LRANGE mylist:{tag} 0 99");
+        len = redisFormatCommand(&cmd,"LRANGE mylist%s 0 99",tag);
         benchmark("LRANGE_100 (first 100 elements)",cmd,len);
         free(cmd);
     }

     if (test_is_selected("lrange") || test_is_selected("lrange_300")) {
-        len = redisFormatCommand(&cmd,"LRANGE mylist:{tag} 0 299");
+        len = redisFormatCommand(&cmd,"LRANGE mylist%s 0 299",tag);
         benchmark("LRANGE_300 (first 300 elements)",cmd,len);
         free(cmd);
     }

     if (test_is_selected("lrange") || test_is_selected("lrange_500")) {
-        len = redisFormatCommand(&cmd,"LRANGE mylist:{tag} 0 449");
+        len = redisFormatCommand(&cmd,"LRANGE mylist%s 0 449",tag);
         benchmark("LRANGE_500 (first 450 elements)",cmd,len);
         free(cmd);
     }

     if (test_is_selected("lrange") || test_is_selected("lrange_600")) {
-        len = redisFormatCommand(&cmd,"LRANGE mylist:{tag} 0 599");
+        len = redisFormatCommand(&cmd,"LRANGE mylist%s 0 599",tag);
         benchmark("LRANGE_600 (first 600 elements)",cmd,len);
         free(cmd);
     }

     if (test_is_selected("mset")) {
-        const char *argv[21];
-        argv[0] = "MSET";
+        const char *cmd_argv[21];
+        cmd_argv[0] = "MSET";
+        sds key_placeholder = sdscatprintf(sdsnew(""),"key%s:__rand_int__",tag);
         for (i = 1; i < 21; i += 2) {
-            argv[i] = "key:{tag}:__rand_int__";
-            argv[i+1] = data;
+            cmd_argv[i] = key_placeholder;
+            cmd_argv[i+1] = data;
         }
-        len = redisFormatCommandArgv(&cmd,21,argv,NULL);
+        len = redisFormatCommandArgv(&cmd,21,cmd_argv,NULL);
         benchmark("MSET (10 keys)",cmd,len);
         free(cmd);
+        sdsfree(key_placeholder);
     }

     if (!config.csv) printf("\n");
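The benchmark now appends the ":{tag}" hash-tag suffix only in cluster mode and leaves key names plain otherwise. The tag matters in cluster mode because the text inside {...} is what gets hashed for slot assignment, so every benchmark key lands in the same slot and can be served without redirections. A small hiredis sketch of what the formatted command ends up looking like (the tag and payload values are illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        char *cmd;
        const char *tag = ":{tag}";   /* what the benchmark uses in cluster mode; "" otherwise */
        int len = redisFormatCommand(&cmd, "SET key%s:__rand_int__ %s", tag, "xxx");
        fwrite(cmd, 1, (size_t)len, stdout);   /* RESP encoding of: SET key:{tag}:__rand_int__ xxx */
        free(cmd);
        return 0;
    }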
@@ -58,6 +58,7 @@ struct {
 #define RDB_CHECK_DOING_CHECK_SUM 5
 #define RDB_CHECK_DOING_READ_LEN 6
 #define RDB_CHECK_DOING_READ_AUX 7
+#define RDB_CHECK_DOING_READ_MODULE_AUX 8

 const char *rdb_check_doing_string[] = {
     "start",
@@ -67,7 +68,8 @@ const char *rdb_check_doing_string[] = {
     "read-object-value",
     "check-sum",
     "read-len",
-    "read-aux"
+    "read-aux",
+    "read-module-aux"
 };

 const char *rdb_type_string[] = {
@@ -272,6 +274,21 @@ int redis_check_rdb(const char *rdbfilename, FILE *fp) {
             decrRefCount(auxkey);
             decrRefCount(auxval);
             continue; /* Read type again. */
+        } else if (type == RDB_OPCODE_MODULE_AUX) {
+            /* AUX: Auxiliary data for modules. */
+            uint64_t moduleid, when_opcode, when;
+            rdbstate.doing = RDB_CHECK_DOING_READ_MODULE_AUX;
+            if ((moduleid = rdbLoadLen(&rdb,NULL)) == RDB_LENERR) goto eoferr;
+            if ((when_opcode = rdbLoadLen(&rdb,NULL)) == RDB_LENERR) goto eoferr;
+            if ((when = rdbLoadLen(&rdb,NULL)) == RDB_LENERR) goto eoferr;
+
+            char name[10];
+            moduleTypeNameByID(name,moduleid);
+            rdbCheckInfo("MODULE AUX for: %s", name);
+
+            robj *o = rdbLoadCheckModuleValue(&rdb,name);
+            decrRefCount(o);
+            continue; /* Read type again. */
         } else {
             if (!rdbIsObjectType(type)) {
                 rdbCheckError("Invalid object type: %d", type);
@@ -331,7 +348,7 @@ err:
     return 1;
 }

-/* RDB check main: called form redis.c when Redis is executed with the
+/* RDB check main: called form server.c when Redis is executed with the
 * keydb-check-rdb alias, on during RDB loading errors.
 *
 * The function works in two ways: can be called with argc/argv as a
@@ -146,7 +146,7 @@ static void cliRefreshPrompt(void) {

 /* Return the name of the dotfile for the specified 'dotfilename'.
  * Normally it just concatenates user $HOME to the file specified
- * in 'dotfilename'. However if the environment varialbe 'envoverride'
+ * in 'dotfilename'. However if the environment variable 'envoverride'
  * is set, its value is taken as the path.
 *
 * The function returns NULL (if the file is /dev/null or cannot be
@@ -220,12 +220,20 @@ static sds percentDecode(const char *pe, size_t len) {
 static void parseRedisUri(const char *uri) {

     const char *scheme = "redis://";
+    const char *tlsscheme = "rediss://";
     const char *curr = uri;
     const char *end = uri + strlen(uri);
     const char *userinfo, *username, *port, *host, *path;

     /* URI must start with a valid scheme. */
-    if (strncasecmp(scheme, curr, strlen(scheme))) {
+    if (!strncasecmp(tlsscheme, curr, strlen(tlsscheme))) {
+#ifdef USE_OPENSSL
+        config.tls = 1;
+#else
+        fprintf(stderr,"rediss:// is only supported when redis-cli is compiled with OpenSSL\n");
+        exit(1);
+#endif
+    } else if (strncasecmp(scheme, curr, strlen(scheme))) {
         fprintf(stderr,"Invalid URI scheme\n");
         exit(1);
     }
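With the parseRedisUri() change above, a TLS endpoint can be given directly in the URI when the CLI is built with OpenSSL; without OpenSSL the new branch now fails loudly instead of silently treating the URI as plain redis://. An illustrative invocation (host, port and certificate path are placeholders, not from this commit):

    ./src/keydb-cli -u rediss://127.0.0.1:6390 --cacert ./tests/tls/ca.crt PING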
@@ -1060,7 +1068,7 @@ static int cliReadReply(int output_raw_strings) {
     } else {
         if (config.output == OUTPUT_RAW) {
             out = cliFormatReplyRaw(reply);
-            out = sdscat(out,"\n");
+            out = sdscatsds(out, config.cmd_delim);
         } else if (config.output == OUTPUT_STANDARD) {
             out = cliFormatReplyTTY(reply,"");
         } else if (config.output == OUTPUT_CSV) {
@@ -1342,6 +1350,9 @@ static int parseOptions(int argc, char **argv) {
         } else if (!strcmp(argv[i],"-d") && !lastarg) {
             sdsfree(config.mb_delim);
             config.mb_delim = sdsnew(argv[++i]);
+        } else if (!strcmp(argv[i],"-D") && !lastarg) {
+            sdsfree(config.cmd_delim);
+            config.cmd_delim = sdsnew(argv[++i]);
         } else if (!strcmp(argv[i],"--verbose")) {
             config.verbose = 1;
         } else if (!strcmp(argv[i],"--cluster") && !lastarg) {
@@ -1524,7 +1535,7 @@ static void usage(void) {
 " -a <password>      Password to use when connecting to the server.\n"
 "                    You can also use the " REDIS_CLI_AUTH_ENV " environment\n"
 "                    variable to pass this password more safely\n"
-"                    (if both are used, this argument takes predecence).\n"
+"                    (if both are used, this argument takes precedence).\n"
 " --user <username>  Used to send ACL style 'AUTH username pass'. Needs -a.\n"
 " --pass <password>  Alias of -a for consistency with the new --user option.\n"
 " --askpass          Force user to input password with mask from STDIN.\n"
@@ -1537,7 +1548,8 @@ static void usage(void) {
 " -n <db>            Database number.\n"
 " -3                 Start session in RESP3 protocol mode.\n"
 " -x                 Read last argument from STDIN.\n"
-" -d <delimiter>     Multi-bulk delimiter in for raw formatting (default: \\n).\n"
+" -d <delimiter>     Delimiter between response bulks for raw formatting (default: \\n).\n"
+" -D <delimiter>     Delimiter between responses for raw formatting (default: \\n).\n"
 " -c                 Enable cluster mode (follow -ASK and -MOVED redirections).\n"
 #ifdef USE_OPENSSL
 " --tls              Establish a secure TLS connection.\n"
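The new -D flag sets cmd_delim, the separator printed after every raw-formatted reply (previously a hard-coded newline); -d keeps its old role as the separator between bulks inside one multi-bulk reply. Something like the following (illustrative keys, assuming raw output because stdout is not a tty) would dump two values back to back with no separator at all:

    printf 'GET bin1\nGET bin2\n' | ./src/keydb-cli -D ""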
@@ -1954,7 +1966,7 @@ static int evalMode(int argc, char **argv) {
     argv2[2] = sdscatprintf(sdsempty(),"%d",keys);

     /* Call it */
-    int eval_ldb = config.eval_ldb; /* Save it, may be reverteed. */
+    int eval_ldb = config.eval_ldb; /* Save it, may be reverted. */
     retval = issueCommand(argc+3-got_comma, argv2);
     if (eval_ldb) {
         if (!config.eval_ldb) {
@@ -4445,8 +4457,6 @@ static void clusterManagerMode(clusterManagerCommandProc *proc) {
     exit(0);
 cluster_manager_err:
     freeClusterManager();
-    sdsfree(config.hostip);
-    sdsfree(config.mb_delim);
     exit(1);
 }

@@ -5743,13 +5753,13 @@ struct distsamples {
 * samples greater than the previous one, and is also the stop sentinel.
 *
 * "tot' is the total number of samples in the different buckets, so it
-* is the SUM(samples[i].conut) for i to 0 up to the max sample.
+* is the SUM(samples[i].count) for i to 0 up to the max sample.
 *
 * As a side effect the function sets all the buckets count to 0. */
 void showLatencyDistSamples(struct distsamples *samples, long long tot) {
     int j;

-    /* We convert samples into a index inside the palette
+    /* We convert samples into an index inside the palette
     * proportional to the percentage a given bucket represents.
     * This way intensity of the different parts of the spectrum
     * don't change relative to the number of requests, which avoids to
@@ -6147,7 +6157,9 @@ static void getRDB(clusterManagerNode *node) {
     } else {
         fprintf(stderr,"Transfer finished with success.\n");
     }
-    redisFree(s); /* Close the file descriptor ASAP as fsync() may take time. */
+    redisFree(s); /* Close the connection ASAP as fsync() may take time. */
+    if (node)
+        node->context = NULL;
     fsync(fd);
     close(fd);
     fprintf(stderr,"Transfer finished with success.\n");
@@ -6853,7 +6865,7 @@ static void LRUTestMode(void) {
 * Intrisic latency mode.
 *
 * Measure max latency of a running process that does not result from
-* syscalls. Basically this software should provide an hint about how much
+* syscalls. Basically this software should provide a hint about how much
 * time the kernel leaves the process without a chance to run.
 *--------------------------------------------------------------------------- */
@@ -7001,6 +7013,7 @@ int main(int argc, char **argv) {
     else
         config.output = OUTPUT_STANDARD;
     config.mb_delim = sdsnew("\n");
+    config.cmd_delim = sdsnew("\n");

     firstarg = parseOptions(argc,argv);
     argc -= firstarg;
@@ -7024,8 +7037,6 @@ int main(int argc, char **argv) {
     if (CLUSTER_MANAGER_MODE()) {
         clusterManagerCommandProc *proc = validateClusterManagerCommand();
         if (!proc) {
-            sdsfree(config.hostip);
-            sdsfree(config.mb_delim);
             exit(1);
         }
         clusterManagerMode(proc);
|
@@ -171,6 +171,7 @@ extern struct config {
     char *user;
     int output; /* output mode, see OUTPUT_* defines */
     sds mb_delim;
+    sds cmd_delim;
     char prompt[128];
     char *eval;
     int eval_ldb;
@ -116,6 +116,13 @@ extern "C" {
|
|||||||
#define REDISMODULE_CTX_FLAGS_ACTIVE_CHILD (1<<18)
|
#define REDISMODULE_CTX_FLAGS_ACTIVE_CHILD (1<<18)
|
||||||
/* The next EXEC will fail due to dirty CAS (touched keys). */
|
/* The next EXEC will fail due to dirty CAS (touched keys). */
|
||||||
#define REDISMODULE_CTX_FLAGS_MULTI_DIRTY (1<<19)
|
#define REDISMODULE_CTX_FLAGS_MULTI_DIRTY (1<<19)
|
||||||
|
/* Redis is currently running inside background child process. */
|
||||||
|
#define REDISMODULE_CTX_FLAGS_IS_CHILD (1<<20)
|
||||||
|
|
||||||
|
/* Next context flag, must be updated when adding new flags above!
|
||||||
|
This flag should not be used directly by the module.
|
||||||
|
* Use RedisModule_GetContextFlagsAll instead. */
|
||||||
|
#define _REDISMODULE_CTX_FLAGS_NEXT (1<<21)
|
||||||
|
|
||||||
/* Keyspace changes notification classes. Every class is associated with a
|
/* Keyspace changes notification classes. Every class is associated with a
|
||||||
* character for configuration purposes.
|
* character for configuration purposes.
|
||||||
@ -133,6 +140,12 @@ extern "C" {
|
|||||||
#define REDISMODULE_NOTIFY_STREAM (1<<10) /* t */
|
#define REDISMODULE_NOTIFY_STREAM (1<<10) /* t */
|
||||||
#define REDISMODULE_NOTIFY_KEY_MISS (1<<11) /* m (Note: This one is excluded from REDISMODULE_NOTIFY_ALL on purpose) */
|
#define REDISMODULE_NOTIFY_KEY_MISS (1<<11) /* m (Note: This one is excluded from REDISMODULE_NOTIFY_ALL on purpose) */
|
||||||
#define REDISMODULE_NOTIFY_LOADED (1<<12) /* module only key space notification, indicate a key loaded from rdb */
|
#define REDISMODULE_NOTIFY_LOADED (1<<12) /* module only key space notification, indicate a key loaded from rdb */
|
||||||
|
|
||||||
|
/* Next notification flag, must be updated when adding new flags above!
|
||||||
|
This flag should not be used directly by the module.
|
||||||
|
* Use RedisModule_GetKeyspaceNotificationFlagsAll instead. */
|
||||||
|
#define _REDISMODULE_NOTIFY_NEXT (1<<13)
|
||||||
|
|
||||||
#define REDISMODULE_NOTIFY_ALL (REDISMODULE_NOTIFY_GENERIC | REDISMODULE_NOTIFY_STRING | REDISMODULE_NOTIFY_LIST | REDISMODULE_NOTIFY_SET | REDISMODULE_NOTIFY_HASH | REDISMODULE_NOTIFY_ZSET | REDISMODULE_NOTIFY_EXPIRED | REDISMODULE_NOTIFY_EVICTED | REDISMODULE_NOTIFY_STREAM) /* A */
|
#define REDISMODULE_NOTIFY_ALL (REDISMODULE_NOTIFY_GENERIC | REDISMODULE_NOTIFY_STRING | REDISMODULE_NOTIFY_LIST | REDISMODULE_NOTIFY_SET | REDISMODULE_NOTIFY_HASH | REDISMODULE_NOTIFY_ZSET | REDISMODULE_NOTIFY_EXPIRED | REDISMODULE_NOTIFY_EVICTED | REDISMODULE_NOTIFY_STREAM) /* A */
|
||||||
|
|
||||||
/* A special pointer that we can use between the core and the module to signal
|
/* A special pointer that we can use between the core and the module to signal
|
||||||
@ -182,7 +195,9 @@ typedef uint64_t RedisModuleTimerID;
|
|||||||
* are modified from the user's sperspective, to invalidate WATCH. */
|
* are modified from the user's sperspective, to invalidate WATCH. */
|
||||||
#define REDISMODULE_OPTION_NO_IMPLICIT_SIGNAL_MODIFIED (1<<1)
|
#define REDISMODULE_OPTION_NO_IMPLICIT_SIGNAL_MODIFIED (1<<1)
|
||||||
|
|
||||||
/* Server events definitions. */
|
/* Server events definitions.
|
||||||
|
* Those flags should not be used directly by the module, instead
|
||||||
|
* the module should use RedisModuleEvent_* variables */
|
||||||
#define REDISMODULE_EVENT_REPLICATION_ROLE_CHANGED 0
|
#define REDISMODULE_EVENT_REPLICATION_ROLE_CHANGED 0
|
||||||
#define REDISMODULE_EVENT_PERSISTENCE 1
|
#define REDISMODULE_EVENT_PERSISTENCE 1
|
||||||
#define REDISMODULE_EVENT_FLUSHDB 2
|
#define REDISMODULE_EVENT_FLUSHDB 2
|
||||||
@ -194,6 +209,10 @@ typedef uint64_t RedisModuleTimerID;
|
|||||||
#define REDISMODULE_EVENT_CRON_LOOP 8
|
#define REDISMODULE_EVENT_CRON_LOOP 8
|
||||||
#define REDISMODULE_EVENT_MODULE_CHANGE 9
|
#define REDISMODULE_EVENT_MODULE_CHANGE 9
|
||||||
#define REDISMODULE_EVENT_LOADING_PROGRESS 10
|
#define REDISMODULE_EVENT_LOADING_PROGRESS 10
|
||||||
|
#define REDISMODULE_EVENT_SWAPDB 11
|
||||||
|
|
||||||
|
/* Next event flag, should be updated if a new event added. */
|
||||||
|
#define _REDISMODULE_EVENT_NEXT 12
|
||||||
|
|
||||||
typedef struct RedisModuleEvent {
|
typedef struct RedisModuleEvent {
|
||||||
uint64_t id; /* REDISMODULE_EVENT_... defines. */
|
uint64_t id; /* REDISMODULE_EVENT_... defines. */
|
||||||
@ -247,6 +266,10 @@ static const RedisModuleEvent
|
|||||||
RedisModuleEvent_LoadingProgress = {
|
RedisModuleEvent_LoadingProgress = {
|
||||||
REDISMODULE_EVENT_LOADING_PROGRESS,
|
REDISMODULE_EVENT_LOADING_PROGRESS,
|
||||||
1
|
1
|
||||||
|
},
|
||||||
|
RedisModuleEvent_SwapDB = {
|
||||||
|
REDISMODULE_EVENT_SWAPDB,
|
||||||
|
1
|
||||||
};
|
};
|
||||||
|
|
||||||
/* Those are values that are used for the 'subevent' callback argument. */
|
/* Those are values that are used for the 'subevent' callback argument. */
|
||||||
@ -255,33 +278,47 @@ static const RedisModuleEvent
|
|||||||
#define REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_RDB_START 2
|
#define REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_RDB_START 2
|
||||||
#define REDISMODULE_SUBEVENT_PERSISTENCE_ENDED 3
|
#define REDISMODULE_SUBEVENT_PERSISTENCE_ENDED 3
|
||||||
#define REDISMODULE_SUBEVENT_PERSISTENCE_FAILED 4
|
#define REDISMODULE_SUBEVENT_PERSISTENCE_FAILED 4
|
||||||
|
#define _REDISMODULE_SUBEVENT_PERSISTENCE_NEXT 5
|
||||||
|
|
||||||
#define REDISMODULE_SUBEVENT_LOADING_RDB_START 0
|
#define REDISMODULE_SUBEVENT_LOADING_RDB_START 0
|
||||||
#define REDISMODULE_SUBEVENT_LOADING_AOF_START 1
|
#define REDISMODULE_SUBEVENT_LOADING_AOF_START 1
|
||||||
#define REDISMODULE_SUBEVENT_LOADING_REPL_START 2
|
#define REDISMODULE_SUBEVENT_LOADING_REPL_START 2
|
||||||
#define REDISMODULE_SUBEVENT_LOADING_ENDED 3
|
#define REDISMODULE_SUBEVENT_LOADING_ENDED 3
|
||||||
#define REDISMODULE_SUBEVENT_LOADING_FAILED 4
|
#define REDISMODULE_SUBEVENT_LOADING_FAILED 4
|
||||||
|
#define _REDISMODULE_SUBEVENT_LOADING_NEXT 5
|
||||||
|
|
||||||
#define REDISMODULE_SUBEVENT_CLIENT_CHANGE_CONNECTED 0
|
#define REDISMODULE_SUBEVENT_CLIENT_CHANGE_CONNECTED 0
|
||||||
#define REDISMODULE_SUBEVENT_CLIENT_CHANGE_DISCONNECTED 1
|
#define REDISMODULE_SUBEVENT_CLIENT_CHANGE_DISCONNECTED 1
|
||||||
|
#define _REDISMODULE_SUBEVENT_CLIENT_CHANGE_NEXT 2
|
||||||
|
|
||||||
#define REDISMODULE_SUBEVENT_MASTER_LINK_UP 0
|
#define REDISMODULE_SUBEVENT_MASTER_LINK_UP 0
|
||||||
#define REDISMODULE_SUBEVENT_MASTER_LINK_DOWN 1
|
#define REDISMODULE_SUBEVENT_MASTER_LINK_DOWN 1
|
||||||
|
#define _REDISMODULE_SUBEVENT_MASTER_NEXT 2
|
||||||
|
|
||||||
#define REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE 0
|
#define REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE 0
|
||||||
#define REDISMODULE_SUBEVENT_REPLICA_CHANGE_OFFLINE 1
|
#define REDISMODULE_SUBEVENT_REPLICA_CHANGE_OFFLINE 1
|
||||||
|
#define _REDISMODULE_SUBEVENT_REPLICA_CHANGE_NEXT 2
|
||||||
|
|
||||||
#define REDISMODULE_EVENT_REPLROLECHANGED_NOW_MASTER 0
|
#define REDISMODULE_EVENT_REPLROLECHANGED_NOW_MASTER 0
|
||||||
#define REDISMODULE_EVENT_REPLROLECHANGED_NOW_REPLICA 1
|
#define REDISMODULE_EVENT_REPLROLECHANGED_NOW_REPLICA 1
|
||||||
|
#define _REDISMODULE_EVENT_REPLROLECHANGED_NEXT 2
|
||||||
|
|
||||||
#define REDISMODULE_SUBEVENT_FLUSHDB_START 0
|
#define REDISMODULE_SUBEVENT_FLUSHDB_START 0
|
||||||
#define REDISMODULE_SUBEVENT_FLUSHDB_END 1
|
#define REDISMODULE_SUBEVENT_FLUSHDB_END 1
|
||||||
|
#define _REDISMODULE_SUBEVENT_FLUSHDB_NEXT 2
|
||||||
|
|
||||||
#define REDISMODULE_SUBEVENT_MODULE_LOADED 0
|
#define REDISMODULE_SUBEVENT_MODULE_LOADED 0
|
||||||
#define REDISMODULE_SUBEVENT_MODULE_UNLOADED 1
|
#define REDISMODULE_SUBEVENT_MODULE_UNLOADED 1
|
||||||
|
#define _REDISMODULE_SUBEVENT_MODULE_NEXT 2
|
||||||
|
|
||||||
|
|
||||||
#define REDISMODULE_SUBEVENT_LOADING_PROGRESS_RDB 0
|
#define REDISMODULE_SUBEVENT_LOADING_PROGRESS_RDB 0
|
||||||
#define REDISMODULE_SUBEVENT_LOADING_PROGRESS_AOF 1
|
#define REDISMODULE_SUBEVENT_LOADING_PROGRESS_AOF 1
|
||||||
|
#define _REDISMODULE_SUBEVENT_LOADING_PROGRESS_NEXT 2
|
||||||
|
|
||||||
|
#define _REDISMODULE_SUBEVENT_SHUTDOWN_NEXT 0
|
||||||
|
#define _REDISMODULE_SUBEVENT_CRON_LOOP_NEXT 0
|
||||||
|
#define _REDISMODULE_SUBEVENT_SWAPDB_NEXT 0
|
||||||
|
|
||||||
/* RedisModuleClientInfo flags. */
|
/* RedisModuleClientInfo flags. */
|
||||||
#define REDISMODULE_CLIENTINFO_FLAG_SSL (1<<0)
|
#define REDISMODULE_CLIENTINFO_FLAG_SSL (1<<0)
|
||||||
@ -378,6 +415,17 @@ typedef struct RedisModuleLoadingProgressInfo {
|
|||||||
|
|
||||||
#define RedisModuleLoadingProgress RedisModuleLoadingProgressV1
|
#define RedisModuleLoadingProgress RedisModuleLoadingProgressV1
|
||||||
|
|
||||||
|
#define REDISMODULE_SWAPDBINFO_VERSION 1
|
||||||
|
typedef struct RedisModuleSwapDbInfo {
|
||||||
|
uint64_t version; /* Not used since this structure is never passed
|
||||||
|
from the module to the core right now. Here
|
||||||
|
for future compatibility. */
|
||||||
|
int32_t dbnum_first; /* Swap Db first dbnum */
|
||||||
|
int32_t dbnum_second; /* Swap Db second dbnum */
|
||||||
|
} RedisModuleSwapDbInfoV1;
|
||||||
|
|
||||||
|
#define RedisModuleSwapDbInfo RedisModuleSwapDbInfoV1
|
||||||
|
|
||||||
/* ------------------------- End of common defines ------------------------ */
|
/* ------------------------- End of common defines ------------------------ */
|
||||||
|
|
||||||
#ifndef REDISMODULE_CORE
|
#ifndef REDISMODULE_CORE
|
||||||
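
The new RedisModuleSwapDbInfo structure is what a module receives when it subscribes to the SwapDB server event added above. A minimal sketch of such a module, assuming it is built against this header and loaded normally; the module name and log text are made up for illustration:

    #include "redismodule.h"

    static void swapdb_callback(RedisModuleCtx *ctx, RedisModuleEvent e,
                                uint64_t sub, void *data) {
        REDISMODULE_NOT_USED(e);
        REDISMODULE_NOT_USED(sub);
        RedisModuleSwapDbInfo *info = data;   /* RedisModuleSwapDbInfoV1 */
        RedisModule_Log(ctx, "notice", "SWAPDB %d <-> %d",
                        (int)info->dbnum_first, (int)info->dbnum_second);
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "swapdbwatch", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        /* Register for the new event; fails on servers that predate it. */
        return RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_SwapDB,
                                                  swapdb_callback);
    }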
@@ -656,6 +704,10 @@ REDISMODULE_API void (*RedisModule_ScanCursorRestart)(RedisModuleScanCursor *cur
 REDISMODULE_API void (*RedisModule_ScanCursorDestroy)(RedisModuleScanCursor *cursor) REDISMODULE_ATTR;
 REDISMODULE_API int (*RedisModule_Scan)(RedisModuleCtx *ctx, RedisModuleScanCursor *cursor, RedisModuleScanCB fn, void *privdata) REDISMODULE_ATTR;
 REDISMODULE_API int (*RedisModule_ScanKey)(RedisModuleKey *key, RedisModuleScanCursor *cursor, RedisModuleScanKeyCB fn, void *privdata) REDISMODULE_ATTR;
+REDISMODULE_API int (*RedisModule_GetContextFlagsAll)() REDISMODULE_ATTR;
+REDISMODULE_API int (*RedisModule_GetKeyspaceNotificationFlagsAll)() REDISMODULE_ATTR;
+REDISMODULE_API int (*RedisModule_IsSubEventSupported)(RedisModuleEvent event, uint64_t subevent) REDISMODULE_ATTR;
+REDISMODULE_API int (*RedisModule_GetServerVersion)() REDISMODULE_ATTR;
 
 /* Experimental APIs */
 #ifdef REDISMODULE_EXPERIMENTAL_API
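
The four entry points added above let a module probe the server's capabilities at runtime. A hedged sketch of how a module might guard on them; it assumes the code lives in a module source file that includes redismodule.h, and uses the RMAPI_FUNC_SUPPORTED macro introduced further down in this same header:

    /* Returns non-zero only if the server exposes the new introspection
     * calls and supports the LOADING_REPL_START subevent. */
    static int module_can_use_repl_start(RedisModuleCtx *ctx) {
        if (!RMAPI_FUNC_SUPPORTED(RedisModule_IsSubEventSupported))
            return 0; /* older server: the function pointer is NULL */
        if (RMAPI_FUNC_SUPPORTED(RedisModule_GetServerVersion))
            RedisModule_Log(ctx, "notice", "server version 0x%06x",
                            RedisModule_GetServerVersion());
        return RedisModule_IsSubEventSupported(RedisModuleEvent_Loading,
                                               REDISMODULE_SUBEVENT_LOADING_REPL_START);
    }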
@@ -668,6 +720,7 @@ REDISMODULE_API void * (*RedisModule_GetBlockedClientPrivateData)(RedisModuleCtx
 REDISMODULE_API RedisModuleBlockedClient * (*RedisModule_GetBlockedClientHandle)(RedisModuleCtx *ctx) REDISMODULE_ATTR;
 REDISMODULE_API int (*RedisModule_AbortBlock)(RedisModuleBlockedClient *bc) REDISMODULE_ATTR;
 REDISMODULE_API RedisModuleCtx * (*RedisModule_GetThreadSafeContext)(RedisModuleBlockedClient *bc) REDISMODULE_ATTR;
+REDISMODULE_API RedisModuleCtx * (*RedisModule_GetDetachedThreadSafeContext)(RedisModuleCtx *ctx) REDISMODULE_ATTR;
 REDISMODULE_API void (*RedisModule_FreeThreadSafeContext)(RedisModuleCtx *ctx) REDISMODULE_ATTR;
 REDISMODULE_API void (*RedisModule_ThreadSafeContextLock)(RedisModuleCtx *ctx) REDISMODULE_ATTR;
 REDISMODULE_API int (*RedisModule_ThreadSafeContextTryLock)(RedisModuleCtx *ctx) REDISMODULE_ATTR;
@@ -710,6 +763,8 @@ REDISMODULE_API int (*RedisModule_SetModuleUserACL)(RedisModuleUser *user, const
 REDISMODULE_API int (*RedisModule_AuthenticateClientWithACLUser)(RedisModuleCtx *ctx, const char *name, size_t len, RedisModuleUserChangedFunc callback, void *privdata, uint64_t *client_id) REDISMODULE_ATTR;
 REDISMODULE_API int (*RedisModule_AuthenticateClientWithUser)(RedisModuleCtx *ctx, RedisModuleUser *user, RedisModuleUserChangedFunc callback, void *privdata, uint64_t *client_id) REDISMODULE_ATTR;
 REDISMODULE_API int (*RedisModule_DeauthenticateAndCloseClient)(RedisModuleCtx *ctx, uint64_t client_id) REDISMODULE_ATTR;
+REDISMODULE_API RedisModuleString * (*RedisModule_GetClientCertificate)(RedisModuleCtx *ctx, uint64_t id) REDISMODULE_ATTR;
+REDISMODULE_API int *(*RedisModule_GetCommandKeys)(RedisModuleCtx *ctx, RedisModuleString **argv, int argc, int *num_keys) REDISMODULE_ATTR;
 #endif
 
 #define RedisModule_IsAOFClient(id) ((id) == CLIENT_ID_AOF)
@@ -899,9 +954,14 @@ static int RedisModule_Init(RedisModuleCtx *ctx, const char *name, int ver, int
     REDISMODULE_GET_API(ScanCursorDestroy);
     REDISMODULE_GET_API(Scan);
     REDISMODULE_GET_API(ScanKey);
+    REDISMODULE_GET_API(GetContextFlagsAll);
+    REDISMODULE_GET_API(GetKeyspaceNotificationFlagsAll);
+    REDISMODULE_GET_API(IsSubEventSupported);
+    REDISMODULE_GET_API(GetServerVersion);
 
 #ifdef REDISMODULE_EXPERIMENTAL_API
     REDISMODULE_GET_API(GetThreadSafeContext);
+    REDISMODULE_GET_API(GetDetachedThreadSafeContext);
     REDISMODULE_GET_API(FreeThreadSafeContext);
     REDISMODULE_GET_API(ThreadSafeContextLock);
     REDISMODULE_GET_API(ThreadSafeContextTryLock);
@@ -951,6 +1011,8 @@ static int RedisModule_Init(RedisModuleCtx *ctx, const char *name, int ver, int
     REDISMODULE_GET_API(DeauthenticateAndCloseClient);
     REDISMODULE_GET_API(AuthenticateClientWithACLUser);
     REDISMODULE_GET_API(AuthenticateClientWithUser);
+    REDISMODULE_GET_API(GetClientCertificate);
+    REDISMODULE_GET_API(GetCommandKeys);
 #endif
 
     if (RedisModule_IsModuleNameBusy && RedisModule_IsModuleNameBusy(name)) return REDISMODULE_ERR;
@@ -960,6 +1022,8 @@ static int RedisModule_Init(RedisModuleCtx *ctx, const char *name, int ver, int
 
 #define RedisModule_Assert(_e) ((_e)?(void)0 : (RedisModule__Assert(#_e,__FILE__,__LINE__),exit(1)))
 
+#define RMAPI_FUNC_SUPPORTED(func) (func != NULL)
+
 #else
 
 /* Things only defined for the modules core, not exported to modules
@@ -972,4 +1036,4 @@ static int RedisModule_Init(RedisModuleCtx *ctx, const char *name, int ver, int
 }
 #endif
 
-#endif /* REDISMOUDLE_H */
+#endif /* REDISMODULE_H */
@@ -160,16 +160,16 @@ client *replicaFromMaster(redisMaster *mi)
  * the file deletion to the filesystem. This call removes the file in a
  * background thread instead. We actually just do close() in the thread,
  * by using the fact that if there is another instance of the same file open,
- * the foreground unlink() will not really do anything, and deleting the
- * file will only happen once the last reference is lost. */
+ * the foreground unlink() will only remove the fs name, and deleting the
+ * file's storage space will only happen once the last reference is lost. */
 int bg_unlink(const char *filename) {
     int fd = open(filename,O_RDONLY|O_NONBLOCK);
     if (fd == -1) {
         /* Can't open the file? Fall back to unlinking in the main thread. */
         return unlink(filename);
     } else {
-        /* The following unlink() will not do anything since file
-         * is still open. */
+        /* The following unlink() removes the name but doesn't free the
+         * file contents because a process still has it open. */
         int retval = unlink(filename);
         if (retval == -1) {
             /* If we got an unlink error, we just return it, closing the
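
bg_unlink() above relies on the POSIX rule that unlink() only removes the name while the data lives until the last open descriptor is closed. A self-contained sketch of the same trick using a plain detached pthread in place of KeyDB's background-job queue (helper names are illustrative):

    #include <fcntl.h>
    #include <pthread.h>
    #include <unistd.h>

    /* Background close: the name is already gone, the blocks are released
     * (possibly slowly) when this close() drops the last reference. */
    static void *close_in_background(void *arg) {
        close((int)(long)arg);
        return NULL;
    }

    /* Sketch of the bg_unlink idea with a plain pthread instead of a job queue. */
    int bg_unlink_sketch(const char *filename) {
        int fd = open(filename, O_RDONLY | O_NONBLOCK);
        if (fd == -1) return unlink(filename);      /* fall back to foreground */
        int retval = unlink(filename);              /* removes only the name */
        if (retval == -1) { close(fd); return -1; }
        pthread_t tid;
        if (pthread_create(&tid, NULL, close_in_background, (void *)(long)fd) != 0)
            close(fd);                              /* no thread: close here */
        else
            pthread_detach(tid);
        return 0;
    }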
@@ -412,7 +412,7 @@ static int writeProtoNum(char *dst, const size_t cchdst, long long num)
  * as well. This function is used if the instance is a master: we use
  * the commands received by our clients in order to create the replication
  * stream. Instead if the instance is a replica and has sub-slaves attached,
- * we use replicationFeedSlavesFromMaster() */
+ * we use replicationFeedSlavesFromMasterStream() */
 void replicationFeedSlavesCore(list *slaves, int dictid, robj **argv, int argc) {
     int j;
     serverAssert(GlobalLocksAcquired());
@@ -772,7 +772,7 @@ int masterTryPartialResynchronization(client *c) {
         (strcasecmp(master_replid, g_pserver->replid2) ||
          psync_offset > g_pserver->second_replid_offset))
     {
-        /* Run id "?" is used by slaves that want to force a full resync. */
+        /* Replid "?" is used by slaves that want to force a full resync. */
         if (master_replid[0] != '?') {
             if (strcasecmp(master_replid, g_pserver->replid) &&
                 strcasecmp(master_replid, g_pserver->replid2))
@@ -951,7 +951,7 @@ int startBgsaveForReplication(int mincapa) {
     return retval;
 }
 
-/* SYNC and PSYNC command implemenation. */
+/* SYNC and PSYNC command implementation. */
 void syncCommand(client *c) {
     /* ignore SYNC if already replica or in monitor mode */
     if (c->flags & CLIENT_SLAVE) return;
@@ -1781,7 +1781,7 @@ void replicationEmptyDbCallback(void *privdata) {
     }
 }
 
-/* Once we have a link with the master and the synchroniziation was
+/* Once we have a link with the master and the synchronization was
  * performed, this function materializes the master client we store
  * at g_pserver->master, starting from the specified file descriptor. */
 void replicationCreateMasterClient(redisMaster *mi, connection *conn, int dbid) {
@@ -1846,16 +1846,10 @@ static int useDisklessLoad() {
 }
 
 /* Helper function for readSyncBulkPayload() to make backups of the current
- * DBs before socket-loading the new ones. The backups may be restored later
- * or freed by disklessLoadRestoreBackups(). */
-redisDb *disklessLoadMakeBackups(void) {
-    redisDb *backups = (redisDb*)zmalloc(sizeof(redisDb)*cserver.dbnum);
-    for (int i=0; i<cserver.dbnum; i++) {
-        backups[i] = g_pserver->db[i];
-        g_pserver->db[i].pdict = dictCreate(&dbDictType,NULL);
-        g_pserver->db[i].setexpire = new (MALLOC_LOCAL) expireset();
-    }
-    return backups;
+ * databases before socket-loading the new ones. The backups may be restored
+ * by disklessLoadRestoreBackup or freed by disklessLoadDiscardBackup later. */
+dbBackup *disklessLoadMakeBackup(void) {
+    return backupDb();
 }
 
 /* Helper function for readSyncBulkPayload(): when replica-side diskless
@@ -1863,30 +1857,15 @@ redisDb *disklessLoadMakeBackups(void) {
  * before loading the new ones from the socket.
  *
  * If the socket loading went wrong, we want to restore the old backups
- * into the server databases. This function does just that in the case
- * the 'restore' argument (the number of DBs to replace) is non-zero.
- *
- * When instead the loading succeeded we want just to free our old backups,
- * in that case the funciton will do just that when 'restore' is 0. */
-void disklessLoadRestoreBackups(redisDb *backup, int restore, int empty_db_flags)
-{
-    if (restore) {
-        /* Restore. */
-        emptyDbGeneric(g_pserver->db,-1,empty_db_flags,replicationEmptyDbCallback);
-        for (int i=0; i<cserver.dbnum; i++) {
-            dictRelease(g_pserver->db[i].pdict);
-            delete g_pserver->db[i].setexpire;
-            g_pserver->db[i] = backup[i];
-        }
-    } else {
-        /* Delete (Pass EMPTYDB_BACKUP in order to avoid firing module events) . */
-        emptyDbGeneric(backup,-1,empty_db_flags|EMPTYDB_BACKUP,replicationEmptyDbCallback);
-        for (int i=0; i<cserver.dbnum; i++) {
-            dictRelease(backup[i].pdict);
-            delete backup[i].setexpire;
-        }
-    }
-    zfree(backup);
+ * into the server databases. */
+void disklessLoadRestoreBackup(dbBackup *buckup) {
+    restoreDbBackup(buckup);
+}
+
+/* Helper function for readSyncBulkPayload() to discard our old backups
+ * when the loading succeeded. */
+void disklessLoadDiscardBackup(dbBackup *buckup, int flag) {
+    discardDbBackup(buckup, flag, replicationEmptyDbCallback);
 }
 
 /* Asynchronously read the SYNC payload we receive from a master */
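
The refactor above folds the old per-database bookkeeping into a single dbBackup handle with make/restore/discard helpers. A generic, self-contained sketch of that backup-then-swap pattern; the types and functions below are illustrative, not the backupDb()/restoreDbBackup()/discardDbBackup() implementations elsewhere in the tree:

    #include <stdlib.h>

    typedef struct { char *data; } db_t;
    typedef struct { db_t saved; } db_backup_t;

    static db_t live = { NULL };

    static db_backup_t *backup_db(void) {
        db_backup_t *b = malloc(sizeof(*b));
        b->saved = live;                  /* keep the old contents aside */
        live.data = NULL;                 /* start from an empty database */
        return b;
    }
    static void restore_db_backup(db_backup_t *b) {
        free(live.data);                  /* drop the half-loaded data */
        live = b->saved;                  /* put the old contents back */
        free(b);
    }
    static void discard_db_backup(db_backup_t *b) {
        free(b->saved.data);              /* old data is useless now */
        free(b);
    }

    /* Load into a fresh database; roll back on failure, discard on success. */
    int load_with_rollback(int (*load)(db_t *)) {
        db_backup_t *b = backup_db();
        if (load(&live) != 0) { restore_db_backup(b); return -1; }
        discard_db_backup(b);
        return 0;
    }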
@@ -1895,7 +1874,7 @@ void readSyncBulkPayload(connection *conn) {
     char buf[PROTO_IOBUF_LEN];
     ssize_t nread, readlen, nwritten;
     int use_diskless_load = useDisklessLoad();
-    redisDb *diskless_load_backup = NULL;
+    dbBackup *diskless_load_backup = NULL;
     rdbSaveInfo rsi = RDB_SAVE_INFO_INIT;
     int empty_db_flags = g_pserver->repl_slave_lazy_flush ? EMPTYDB_ASYNC :
                                                             EMPTYDB_NO_FLAGS;
@@ -1907,7 +1886,7 @@ void readSyncBulkPayload(connection *conn) {
     serverAssert(GlobalLocksAcquired());
 
     /* Static vars used to hold the EOF mark, and the last bytes received
-     * form the server: when they match, we reached the end of the transfer. */
+     * from the server: when they match, we reached the end of the transfer. */
     static char eofmark[CONFIG_RUN_ID_SIZE];
     static char lastbytes[CONFIG_RUN_ID_SIZE];
     static int usemark = 0;
@@ -2081,11 +2060,11 @@ void readSyncBulkPayload(connection *conn) {
         g_pserver->repl_diskless_load == REPL_DISKLESS_LOAD_SWAPDB)
     {
         /* Create a backup of server.db[] and initialize to empty
-         * dictionaries */
-        diskless_load_backup = disklessLoadMakeBackups();
+         * dictionaries. */
+        diskless_load_backup = disklessLoadMakeBackup();
     }
     /* We call to emptyDb even in case of REPL_DISKLESS_LOAD_SWAPDB
-     * (Where disklessLoadMakeBackups left server.db empty) because we
+     * (Where disklessLoadMakeBackup left server.db empty) because we
      * want to execute all the auxiliary logic of emptyDb (Namely,
      * fire module events) */
     if (!fUpdate) {
@@ -2118,14 +2097,14 @@ void readSyncBulkPayload(connection *conn) {
                 "from socket");
             cancelReplicationHandshake(mi);
             rioFreeConn(&rdb, NULL);
-            if (g_pserver->repl_diskless_load == REPL_DISKLESS_LOAD_SWAPDB) {
-                /* Restore the backed up databases. */
-                disklessLoadRestoreBackups(diskless_load_backup,1,
-                                           empty_db_flags);
-            } else {
             /* Remove the half-loaded data in case we started with
              * an empty replica. */
             emptyDb(-1,empty_db_flags,replicationEmptyDbCallback);
+
+            if (g_pserver->repl_diskless_load == REPL_DISKLESS_LOAD_SWAPDB) {
+                /* Restore the backed up databases. */
+                disklessLoadRestoreBackup(diskless_load_backup);
             }
 
             /* Note that there's no point in restarting the AOF on SYNC
@@ -2140,7 +2119,7 @@ void readSyncBulkPayload(connection *conn) {
         /* Delete the backup databases we created before starting to load
          * the new RDB. Now the RDB was loaded with success so the old
          * data is useless. */
-        disklessLoadRestoreBackups(diskless_load_backup,0,empty_db_flags);
+        disklessLoadDiscardBackup(diskless_load_backup, empty_db_flags);
     }
 
     /* Verify the end mark is correct. */
@@ -2174,6 +2153,17 @@ void readSyncBulkPayload(connection *conn) {
 
     const char *rdb_filename = mi->repl_transfer_tmpfile;
 
+    /* Make sure the new file (also used for persistence) is fully synced
+     * (not covered by earlier calls to rdb_fsync_range). */
+    if (fsync(mi->repl_transfer_fd) == -1) {
+        serverLog(LL_WARNING,
+            "Failed trying to sync the temp DB to disk in "
+            "MASTER <-> REPLICA synchronization: %s",
+            strerror(errno));
+        cancelReplicationHandshake(mi);
+        return;
+    }
+
     /* Rename rdb like renaming rewrite aof asynchronously. */
     if (!fUpdate) {
         int old_rdb_fd = open(g_pserver->rdb_filename,O_RDONLY|O_NONBLOCK);
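
The block added above fsyncs the temporary RDB before it is renamed over the previous one, so a crash cannot leave a non-durable file under the final name. A self-contained sketch of that write-temp/fsync/rename pattern (paths and buffer handling are illustrative):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Write to a temp file, fsync it, then rename: the replacement is durable
     * on disk before it takes over the old name. */
    int replace_file_durably(const char *tmp_path, const char *final_path,
                             const void *buf, size_t len) {
        int fd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd == -1) return -1;
        if (write(fd, buf, len) != (ssize_t)len) { close(fd); return -1; }
        if (fsync(fd) == -1) {                       /* data really on disk? */
            fprintf(stderr, "fsync failed: %s\n", strerror(errno));
            close(fd);
            unlink(tmp_path);
            return -1;
        }
        close(fd);
        return rename(tmp_path, final_path);         /* atomic replacement */
    }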
@@ -2243,7 +2233,7 @@ void readSyncBulkPayload(connection *conn) {
                           REDISMODULE_SUBEVENT_MASTER_LINK_UP,
                           NULL);
 
-    /* After a full resynchroniziation we use the replication ID and
+    /* After a full resynchronization we use the replication ID and
      * offset of the master. The secondary ID / offset are cleared since
      * we are starting a new history. */
     if (fUpdate)
@@ -2351,7 +2341,7 @@ char *sendSynchronousCommand(redisMaster *mi, int flags, connection *conn, ...)
 /* Try a partial resynchronization with the master if we are about to reconnect.
  * If there is no cached master structure, at least try to issue a
  * "PSYNC ? -1" command in order to trigger a full resync using the PSYNC
- * command in order to obtain the master run id and the master replication
+ * command in order to obtain the master replid and the master replication
  * global offset.
  *
  * This function is designed to be called from syncWithMaster(), so the
@@ -2379,7 +2369,7 @@ char *sendSynchronousCommand(redisMaster *mi, int flags, connection *conn, ...)
  *
  * PSYNC_CONTINUE: If the PSYNC command succeeded and we can continue.
  * PSYNC_FULLRESYNC: If PSYNC is supported but a full resync is needed.
- *                   In this case the master run_id and global replication
+ *                   In this case the master replid and global replication
  *                   offset is saved.
  * PSYNC_NOT_SUPPORTED: If the server does not understand PSYNC at all and
  *                      the caller should fall back to SYNC.
@@ -2410,7 +2400,7 @@ int slaveTryPartialResynchronization(redisMaster *mi, connection *conn, int read
     /* Writing half */
     if (!read_reply) {
         /* Initially set master_initial_offset to -1 to mark the current
-         * master run_id and offset as not valid. Later if we'll be able to do
+         * master replid and offset as not valid. Later if we'll be able to do
          * a FULL resync using the PSYNC command we'll set the offset at the
         * right value, so that this information will be propagated to the
         * client structure representing the master into g_pserver->master. */
@@ -2451,7 +2441,7 @@ int slaveTryPartialResynchronization(redisMaster *mi, connection *conn, int read
     if (!strncmp(reply,"+FULLRESYNC",11)) {
         char *replid = NULL, *offset = NULL;
 
-        /* FULL RESYNC, parse the reply in order to extract the run id
+        /* FULL RESYNC, parse the reply in order to extract the replid
         * and the replication offset. */
         replid = strchr(reply,' ');
         if (replid) {
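
Several hunks above rename "run id" to "replid" in the PSYNC handshake comments. As a reminder of what the replica actually parses here, a hedged, self-contained sketch of splitting a "+FULLRESYNC <replid> <offset>" reply; the sample reply and simplified buffer handling are illustrative, not the real client code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define REPLID_LEN 40   /* replication IDs are 40 hex characters */

    /* Returns 0 and fills replid/offset on success, -1 on a malformed reply. */
    int parse_fullresync(const char *reply, char replid[REPLID_LEN + 1],
                         long long *offset) {
        if (strncmp(reply, "+FULLRESYNC ", 12) != 0) return -1;
        const char *p = reply + 12;
        const char *sp = strchr(p, ' ');
        if (!sp || sp - p != REPLID_LEN) return -1;
        memcpy(replid, p, REPLID_LEN);
        replid[REPLID_LEN] = '\0';
        *offset = strtoll(sp + 1, NULL, 10);
        return 0;
    }

    int main(void) {
        char id[REPLID_LEN + 1]; long long off;
        const char *r = "+FULLRESYNC 8de9f5a26af12c2e3b4d9e7a0c1b2d3e4f5a6b7c 12345";
        if (parse_fullresync(r, id, &off) == 0)
            printf("replid=%s offset=%lld\n", id, off);
        return 0;
    }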
@@ -2804,7 +2794,7 @@ void syncWithMaster(connection *conn) {
 
     /* Try a partial resynchonization. If we don't have a cached master
      * slaveTryPartialResynchronization() will at least try to use PSYNC
-     * to start a full resynchronization so that we get the master run id
+     * to start a full resynchronization so that we get the master replid
      * and the global offset, to try a partial resync at the next
      * reconnection attempt. */
     if (mi->repl_state == REPL_STATE_SEND_PSYNC) {
@@ -2973,7 +2963,7 @@ void replicationAbortSyncTransfer(redisMaster *mi) {
     undoConnectWithMaster(mi);
     if (mi->repl_transfer_fd!=-1) {
         close(mi->repl_transfer_fd);
-        unlink(mi->repl_transfer_tmpfile);
+        bg_unlink(mi->repl_transfer_tmpfile);
         zfree(mi->repl_transfer_tmpfile);
         mi->repl_transfer_tmpfile = NULL;
         mi->repl_transfer_fd = -1;
@@ -2987,7 +2977,7 @@ void replicationAbortSyncTransfer(redisMaster *mi) {
 * If there was a replication handshake in progress 1 is returned and
 * the replication state (g_pserver->repl_state) set to REPL_STATE_CONNECT.
 *
-* Otherwise zero is returned and no operation is perforemd at all. */
+* Otherwise zero is returned and no operation is performed at all. */
 int cancelReplicationHandshake(redisMaster *mi) {
     if (mi->repl_state == REPL_STATE_TRANSFER) {
         replicationAbortSyncTransfer(mi);
@@ -3544,7 +3534,7 @@ void refreshGoodSlavesCount(void) {
 *
 * We don't care about taking a different cache for every different replica
 * since to fill the cache again is not very costly, the goal of this code
-* is to avoid that the same big script is trasmitted a big number of times
+* is to avoid that the same big script is transmitted a big number of times
 * per second wasting bandwidth and processor speed, but it is not a problem
 * if we need to rebuild the cache from scratch from time to time, every used
 * script will need to be transmitted a single time to reappear in the cache.
@@ -3554,7 +3544,7 @@ void refreshGoodSlavesCount(void) {
 * 1) Every time a new replica connects, we flush the whole script cache.
 * 2) We only send as EVALSHA what was sent to the master as EVALSHA, without
 *    trying to convert EVAL into EVALSHA specifically for slaves.
-* 3) Every time we trasmit a script as EVAL to the slaves, we also add the
+* 3) Every time we transmit a script as EVAL to the slaves, we also add the
 *    corresponding SHA1 of the script into the cache as we are sure every
 *    replica knows about the script starting from now.
 * 4) On SCRIPT FLUSH command, we replicate the command to all the slaves
@@ -3645,7 +3635,7 @@ int replicationScriptCacheExists(sds sha1) {
 
 /* This just set a flag so that we broadcast a REPLCONF GETACK command
  * to all the slaves in the beforeSleep() function. Note that this way
- * we "group" all the clients that want to wait for synchronouns replication
+ * we "group" all the clients that want to wait for synchronous replication
  * in a given event loop iteration, and send a single GETACK for them all. */
 void replicationRequestAckFromSlaves(void) {
     g_pserver->get_ack_from_slaves = 1;
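
replicationRequestAckFromSlaves() only raises a flag; beforeSleep() later sends a single REPLCONF GETACK for everything requested during the event-loop iteration. A self-contained sketch of that set-a-flag-now, act-once-later batching (names illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    static bool get_ack_from_replicas = false;

    /* Called from many places inside one event-loop iteration. */
    void request_ack_from_replicas(void) {
        get_ack_from_replicas = true;        /* cheap: just remember the request */
    }

    /* Called once, right before the event loop goes back to sleep. */
    void before_sleep(void) {
        if (get_ack_from_replicas) {
            printf("send REPLCONF GETACK * to all replicas\n"); /* one broadcast */
            get_ack_from_replicas = false;
        }
    }

    int main(void) {
        request_ack_from_replicas();
        request_ack_from_replicas();         /* many requests... */
        before_sleep();                      /* ...one GETACK broadcast */
        return 0;
    }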
@@ -72,7 +72,7 @@ struct ldbState {
     list *children; /* All forked debugging sessions pids. */
     int bp[LDB_BREAKPOINTS_MAX]; /* An array of breakpoints line numbers. */
     int bpcount; /* Number of valid entries inside bp. */
-    int step; /* Stop at next line ragardless of breakpoints. */
+    int step; /* Stop at next line regardless of breakpoints. */
     int luabp; /* Stop at next line because redis.breakpoint() was called. */
     sds *src; /* Lua script source code split by line. */
     int lines; /* Number of lines in 'src'. */
@@ -413,9 +413,9 @@ void luaReplyToRedisReply(client *c, lua_State *lua) {
             lua_pushnil(lua); /* Use nil to start iteration. */
             while (lua_next(lua,-2)) {
                 /* Stack now: table, key, value */
-                luaReplyToRedisReply(c, lua); /* Return value. */
-                lua_pushvalue(lua,-1); /* Dup key before consuming. */
+                lua_pushvalue(lua,-2); /* Dup key before consuming. */
                 luaReplyToRedisReply(c, lua); /* Return key. */
+                luaReplyToRedisReply(c, lua); /* Return value. */
                 /* Stack now: table, key. */
                 maplen++;
             }
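
The fix above reorders how key and value are emitted while iterating a Lua table with lua_next(), duplicating the key before converting it so the iterator still finds an untouched key on the stack. A self-contained sketch of the same stack discipline against a stock Lua install (build with the system Lua headers and -llua; it only handles string and number keys/values):

    #include <stdio.h>
    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    /* Print key then value for each table entry without corrupting the key
     * that lua_next() needs on top of the stack for the next step. */
    static void dump_table(lua_State *L) {
        lua_pushnil(L);                        /* first key */
        while (lua_next(L, -2)) {              /* stack: table, key, value */
            lua_pushvalue(L, -2);              /* dup key: lua_tostring may mutate it */
            printf("%s => %s\n", lua_tostring(L, -1), lua_tostring(L, -2));
            lua_pop(L, 2);                     /* pop key copy and value */
        }                                      /* stack: table, key for next round */
    }

    int main(void) {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);
        luaL_dostring(L, "t = { a = 1, b = 'x' }");
        lua_getglobal(L, "t");
        dump_table(L);
        lua_close(L);
        return 0;
    }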
@@ -899,7 +899,7 @@ int luaRedisReplicateCommandsCommand(lua_State *lua) {
 
 /* redis.breakpoint()
  *
- * Allows to stop execution during a debuggign session from within
+ * Allows to stop execution during a debugging session from within
  * the Lua code implementation, like if a breakpoint was set in the code
  * immediately after the function. */
 int luaRedisBreakpointCommand(lua_State *lua) {
@@ -1509,7 +1509,7 @@ void evalGenericCommand(client *c, int evalsha) {
         /* Hash the code if this is an EVAL call */
         sha1hex(funcname+2,(char*)ptrFromObj(c->argv[1]),sdslen((sds)ptrFromObj(c->argv[1])));
     } else {
-        /* We already have the SHA if it is a EVALSHA */
+        /* We already have the SHA if it is an EVALSHA */
         int j;
         char *sha = (char*)ptrFromObj(c->argv[1]);
 
@@ -1645,7 +1645,7 @@ void evalGenericCommand(client *c, int evalsha) {
      * To do so we use a cache of SHA1s of scripts that we already propagated
      * as full EVAL, that's called the Replication Script Cache.
      *
-     * For repliation, everytime a new replica attaches to the master, we need to
+     * For replication, everytime a new replica attaches to the master, we need to
      * flush our cache of scripts that can be replicated as EVALSHA, while
      * for AOF we need to do so every time we rewrite the AOF file. */
     if (evalsha && !g_pserver->lua_replicate_commands) {
@@ -1818,7 +1818,7 @@ void ldbLog(sds entry) {
 }
 
 /* A version of ldbLog() which prevents producing logs greater than
- * ldb.maxlen. The first time the limit is reached an hint is generated
+ * ldb.maxlen. The first time the limit is reached a hint is generated
 * to inform the user that reply trimming can be disabled using the
 * debugger "maxlen" command. */
 void ldbLogWithMaxLen(sds entry) {
@@ -1859,7 +1859,7 @@ void ldbSendLogs(void) {
 }
 
 /* Start a debugging session before calling EVAL implementation.
- * The techique we use is to capture the client socket file descriptor,
+ * The technique we use is to capture the client socket file descriptor,
 * in order to perform direct I/O with it from within Lua hooks. This
 * way we don't have to re-enter Redis in order to handle I/O.
 *
@@ -1873,7 +1873,7 @@ void ldbSendLogs(void) {
 int ldbStartSession(client *c) {
     ldb.forked = (c->flags & CLIENT_LUA_DEBUG_SYNC) == 0;
     if (ldb.forked) {
-        pid_t cp = redisFork();
+        pid_t cp = redisFork(CHILD_TYPE_LDB);
         if (cp == -1) {
             addReplyError(c,"Fork() failed: can't run EVAL in debugging mode.");
             return 0;
@@ -1942,7 +1942,7 @@ void ldbEndSession(client *c) {
     connNonBlock(ldb.conn);
     connSendTimeout(ldb.conn,0);
 
-    /* Close the client connectin after sending the final EVAL reply
+    /* Close the client connection after sending the final EVAL reply
      * in order to signal the end of the debugging session. */
     c->flags |= CLIENT_CLOSE_AFTER_REPLY;
 
@@ -2112,7 +2112,7 @@ void ldbLogSourceLine(int lnum) {
 /* Implement the "list" command of the Lua debugger. If around is 0
  * the whole file is listed, otherwise only a small portion of the file
  * around the specified line is shown. When a line number is specified
- * the amonut of context (lines before/after) is specified via the
+ * the amount of context (lines before/after) is specified via the
  * 'context' argument. */
 void ldbList(int around, int context) {
     int j;
@@ -2123,7 +2123,7 @@ void ldbList(int around, int context) {
     }
 }
 
-/* Append an human readable representation of the Lua value at position 'idx'
+/* Append a human readable representation of the Lua value at position 'idx'
 * on the stack of the 'lua' state, to the SDS string passed as argument.
 * The new SDS string with the represented value attached is returned.
 * Used in order to implement ldbLogStackValue().
@@ -2367,7 +2367,7 @@ char *ldbRedisProtocolToHuman_Double(sds *o, char *reply) {
     return p+2;
 }
 
-/* Log a Redis reply as debugger output, in an human readable format.
+/* Log a Redis reply as debugger output, in a human readable format.
 * If the resulting string is longer than 'len' plus a few more chars
 * used as prefix, it gets truncated. */
 void ldbLogRedisReply(char *reply) {
@@ -2551,7 +2551,7 @@ void ldbTrace(lua_State *lua) {
     }
 }
 
-/* Impleemnts the debugger "maxlen" command. It just queries or sets the
+/* Implements the debugger "maxlen" command. It just queries or sets the
 * ldb.maxlen variable. */
 void ldbMaxlen(sds *argv, int argc) {
     if (argc == 2) {
@@ -2624,8 +2624,8 @@ ldbLog(sdsnew(" mode dataset changes will be retained."));
 ldbLog(sdsnew(""));
 ldbLog(sdsnew("Debugger functions you can call from Lua scripts:"));
 ldbLog(sdsnew("redis.debug() Produce logs in the debugger console."));
-ldbLog(sdsnew("redis.breakpoint() Stop execution like if there was a breakpoing."));
-ldbLog(sdsnew(" in the next line of code."));
+ldbLog(sdsnew("redis.breakpoint() Stop execution like if there was a breakpoint in the"));
+ldbLog(sdsnew(" next line of code."));
 ldbSendLogs();
 } else if (!strcasecmp(argv[0],"s") || !strcasecmp(argv[0],"step") ||
            !strcasecmp(argv[0],"n") || !strcasecmp(argv[0],"next")) {

40 src/sds.c
@@ -444,7 +444,7 @@ sds sdscatlen(sds s, const void *t, size_t len) {
     return s;
 }
 
-/* Append the specified null termianted C string to the sds string 's'.
+/* Append the specified null terminated C string to the sds string 's'.
 *
 * After the call, the passed sds string is no longer valid and all the
 * references must be substituted with the new pointer returned by the call. */
@@ -492,7 +492,7 @@ int sdsll2str(char *s, long long value) {
     size_t l;
 
     /* Generate the string representation, this method produces
-     * an reversed string. */
+     * a reversed string. */
     v = (value < 0) ? -value : value;
     p = s;
     do {
@@ -523,7 +523,7 @@ int sdsull2str(char *s, unsigned long long v) {
     size_t l;
 
     /* Generate the string representation, this method produces
-     * an reversed string. */
+     * a reversed string. */
     p = s;
     do {
         *p++ = '0'+(v%10);
@@ -562,6 +562,7 @@ sds sdscatvprintf(sds s, const char *fmt, va_list ap) {
     va_list cpy;
     char staticbuf[1024], *buf = staticbuf, *t;
     size_t buflen = strlen(fmt)*2;
+    int bufstrlen;
 
     /* We try to start using a static buffer for speed.
      * If not possible we revert to heap allocation. */
@@ -572,16 +573,19 @@ sds sdscatvprintf(sds s, const char *fmt, va_list ap) {
         buflen = sizeof(staticbuf);
     }
 
-    /* Try with buffers two times bigger every time we fail to
+    /* Alloc enough space for buffer and \0 after failing to
      * fit the string in the current buffer size. */
     while(1) {
-        buf[buflen-2] = '\0';
         va_copy(cpy,ap);
-        vsnprintf(buf, buflen, fmt, cpy);
+        bufstrlen = vsnprintf(buf, buflen, fmt, cpy);
         va_end(cpy);
-        if (buf[buflen-2] != '\0') {
+        if (bufstrlen < 0) {
             if (buf != staticbuf) s_free(buf);
-            buflen *= 2;
+            return NULL;
+        }
+        if (((size_t)bufstrlen) >= buflen) {
+            if (buf != staticbuf) s_free(buf);
+            buflen = ((size_t)bufstrlen) + 1;
             buf = s_malloc(buflen, MALLOC_SHARED);
             if (buf == NULL) return NULL;
             continue;
@@ -590,7 +594,7 @@ sds sdscatvprintf(sds s, const char *fmt, va_list ap) {
     }
 
     /* Finally concat the obtained string to the SDS string and return it. */
-    t = sdscat(s, buf);
+    t = sdscatlen(s, buf, bufstrlen);
     if (buf != staticbuf) s_free(buf);
     return t;
 }
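
The rewrite above stops growing the buffer by doubling and instead sizes it from vsnprintf()'s return value, and keeps that length so embedded NUL bytes survive (sdscatlen instead of sdscat). A self-contained sketch of the same pattern with plain malloc; the function name and ownership convention are illustrative:

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Format into a freshly allocated buffer sized from vsnprintf's return
     * value: one probe into a small stack buffer, then at most one heap
     * allocation of exactly the right size. Caller frees the result. */
    char *aprintf(size_t *lenp, const char *fmt, ...) {
        char staticbuf[1024], *buf = staticbuf;
        va_list ap, cpy;

        va_start(ap, fmt);
        va_copy(cpy, ap);
        int len = vsnprintf(buf, sizeof(staticbuf), fmt, cpy);
        va_end(cpy);
        if (len < 0) { va_end(ap); return NULL; }          /* encoding error */

        if ((size_t)len >= sizeof(staticbuf)) {            /* didn't fit: one heap alloc */
            buf = malloc((size_t)len + 1);                 /* +1 for the trailing \0 */
            if (!buf) { va_end(ap); return NULL; }
            vsnprintf(buf, (size_t)len + 1, fmt, ap);
        }
        va_end(ap);

        /* Keep the length around, like sdscatlen() does in the patch above,
         * so results containing embedded \0 bytes are not silently truncated. */
        *lenp = (size_t)len;
        if (buf == staticbuf) {
            char *out = malloc((size_t)len + 1);
            if (out) memcpy(out, buf, (size_t)len + 1);
            return out;
        }
        return buf;
    }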
@@ -645,7 +649,7 @@ sds sdscatfmt(sds s, char const *fmt, ...) {
     /* To avoid continuous reallocations, let's start with a buffer that
      * can hold at least two times the format string itself. It's not the
      * best heuristic but seems to work in practice. */
-    s = sdsMakeRoomFor(s, initlen + strlen(fmt)*2);
+    s = sdsMakeRoomFor(s, strlen(fmt)*2);
     va_start(ap,fmt);
     f = fmt; /* Next format specifier byte to process. */
     i = initlen; /* Position of the next byte to write to dest str. */
@@ -1198,6 +1202,22 @@ int sdsTest(void) {
     test_cond("sdscatprintf() seems working in the base case",
         sdslen(x) == 3 && memcmp(x,"123\0",4) == 0)
 
+    sdsfree(x);
+    x = sdscatprintf(sdsempty(),"a%cb",0);
+    test_cond("sdscatprintf() seems working with \\0 inside of result",
+        sdslen(x) == 3 && memcmp(x,"a\0""b\0",4) == 0)
+
+    {
+        sdsfree(x);
+        char etalon[1024*1024];
+        for (size_t i = 0; i < sizeof(etalon); i++) {
+            etalon[i] = '0';
+        }
+        x = sdscatprintf(sdsempty(),"%0*d",(int)sizeof(etalon),0);
+        test_cond("sdscatprintf() can print 1MB",
+            sdslen(x) == sizeof(etalon) && memcmp(x,etalon,sizeof(etalon)) == 0)
+    }
+
     sdsfree(x);
     x = sdsnew("--");
     x = sdscatfmt(x, "Hello %s World %I,%I--", "Hi!", LLONG_MIN,LLONG_MAX);
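
The new tests above exercise binary-safe formatting. A tiny usage example under the assumption that sds.c/sds.h from this tree are compiled in:

    #include <stdio.h>
    #include "sds.h"

    int main(void) {
        /* "a%cb" with the value 0 yields a 3-byte string containing a NUL. */
        sds x = sdscatprintf(sdsempty(), "a%cb", 0);
        printf("len=%zu first=%c last=%c\n", sdslen(x), x[0], x[2]); /* len=3 a b */
        sdsfree(x);
        return 0;
    }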
@@ -133,13 +133,13 @@ typedef struct sentinelAddr {
 /* The link to a sentinelRedisInstance. When we have the same set of Sentinels
  * monitoring many masters, we have different instances representing the
  * same Sentinels, one per master, and we need to share the hiredis connections
- * among them. Oherwise if 5 Sentinels are monitoring 100 masters we create
+ * among them. Otherwise if 5 Sentinels are monitoring 100 masters we create
  * 500 outgoing connections instead of 5.
  *
  * So this structure represents a reference counted link in terms of the two
  * hiredis connections for commands and Pub/Sub, and the fields needed for
  * failure detection, since the ping/pong time are now local to the link: if
- * the link is available, the instance is avaialbe. This way we don't just
+ * the link is available, the instance is available. This way we don't just
  * have 5 connections instead of 500, we also send 5 pings instead of 500.
  *
  * Links are shared only for Sentinels: master and slave instances have
@ -988,7 +988,7 @@ instanceLink *createInstanceLink(void) {
|
|||||||
return link;
|
return link;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Disconnect an hiredis connection in the context of an instance link. */
|
/* Disconnect a hiredis connection in the context of an instance link. */
|
||||||
void instanceLinkCloseConnection(instanceLink *link, redisAsyncContext *c) {
|
void instanceLinkCloseConnection(instanceLink *link, redisAsyncContext *c) {
|
||||||
if (c == NULL) return;
|
if (c == NULL) return;
|
||||||
|
|
||||||
@ -1127,7 +1127,7 @@ int sentinelUpdateSentinelAddressInAllMasters(sentinelRedisInstance *ri) {
|
|||||||
return reconfigured;
|
return reconfigured;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* This function is called when an hiredis connection reported an error.
|
/* This function is called when a hiredis connection reported an error.
|
||||||
* We set it to NULL and mark the link as disconnected so that it will be
|
* We set it to NULL and mark the link as disconnected so that it will be
|
||||||
* reconnected again.
|
* reconnected again.
|
||||||
*
|
*
|
||||||
@ -2017,7 +2017,7 @@ void sentinelSendAuthIfNeeded(sentinelRedisInstance *ri, redisAsyncContext *c) {
|
|||||||
* The connection type is "cmd" or "pubsub" as specified by 'type'.
|
* The connection type is "cmd" or "pubsub" as specified by 'type'.
|
||||||
*
|
*
|
||||||
* This makes it possible to list all the sentinel instances connected
|
* This makes it possible to list all the sentinel instances connected
|
||||||
* to a Redis servewr with CLIENT LIST, grepping for a specific name format. */
|
* to a Redis server with CLIENT LIST, grepping for a specific name format. */
|
||||||
void sentinelSetClientName(sentinelRedisInstance *ri, redisAsyncContext *c, const char *type) {
|
void sentinelSetClientName(sentinelRedisInstance *ri, redisAsyncContext *c, const char *type) {
|
||||||
char name[64];
|
char name[64];
|
||||||
|
|
||||||
@ -2472,7 +2472,7 @@ void sentinelPublishReplyCallback(redisAsyncContext *c, void *reply, void *privd
|
|||||||
ri->last_pub_time = mstime();
|
ri->last_pub_time = mstime();
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Process an hello message received via Pub/Sub in master or slave instance,
|
/* Process a hello message received via Pub/Sub in master or slave instance,
|
||||||
* or sent directly to this sentinel via the (fake) PUBLISH command of Sentinel.
|
* or sent directly to this sentinel via the (fake) PUBLISH command of Sentinel.
|
||||||
*
|
*
|
||||||
* If the master name specified in the message is not known, the message is
|
* If the master name specified in the message is not known, the message is
|
||||||
@ -2609,7 +2609,7 @@ void sentinelReceiveHelloMessages(redisAsyncContext *c, void *reply, void *privd
|
|||||||
sentinelProcessHelloMessage(r->element[2]->str, r->element[2]->len);
|
sentinelProcessHelloMessage(r->element[2]->str, r->element[2]->len);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Send an "Hello" message via Pub/Sub to the specified 'ri' Redis
|
/* Send a "Hello" message via Pub/Sub to the specified 'ri' Redis
|
||||||
* instance in order to broadcast the current configuration for this
|
* instance in order to broadcast the current configuration for this
|
||||||
* master, and to advertise the existence of this Sentinel at the same time.
|
* master, and to advertise the existence of this Sentinel at the same time.
|
||||||
*
|
*
|
||||||
@ -2663,7 +2663,7 @@ int sentinelSendHello(sentinelRedisInstance *ri) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* Reset last_pub_time in all the instances in the specified dictionary
|
/* Reset last_pub_time in all the instances in the specified dictionary
|
||||||
* in order to force the delivery of an Hello update ASAP. */
|
* in order to force the delivery of a Hello update ASAP. */
|
||||||
void sentinelForceHelloUpdateDictOfRedisInstances(dict *instances) {
|
void sentinelForceHelloUpdateDictOfRedisInstances(dict *instances) {
|
||||||
dictIterator *di;
|
dictIterator *di;
|
||||||
dictEntry *de;
|
dictEntry *de;
|
||||||
@ -2677,13 +2677,13 @@ void sentinelForceHelloUpdateDictOfRedisInstances(dict *instances) {
|
|||||||
dictReleaseIterator(di);
|
dictReleaseIterator(di);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* This function forces the delivery of an "Hello" message (see
|
/* This function forces the delivery of a "Hello" message (see
|
||||||
* sentinelSendHello() top comment for further information) to all the Redis
|
* sentinelSendHello() top comment for further information) to all the Redis
|
||||||
* and Sentinel instances related to the specified 'master'.
|
* and Sentinel instances related to the specified 'master'.
|
||||||
*
|
*
|
||||||
* It is technically not needed since we send an update to every instance
|
* It is technically not needed since we send an update to every instance
|
||||||
* with a period of SENTINEL_PUBLISH_PERIOD milliseconds, however when a
|
* with a period of SENTINEL_PUBLISH_PERIOD milliseconds, however when a
|
||||||
* Sentinel upgrades a configuration it is a good idea to deliever an update
|
* Sentinel upgrades a configuration it is a good idea to deliver an update
|
||||||
* to the other Sentinels ASAP. */
|
* to the other Sentinels ASAP. */
|
||||||
int sentinelForceHelloUpdateForMaster(sentinelRedisInstance *master) {
|
int sentinelForceHelloUpdateForMaster(sentinelRedisInstance *master) {
|
||||||
if (!(master->flags & SRI_MASTER)) return C_ERR;
|
if (!(master->flags & SRI_MASTER)) return C_ERR;
|
||||||
@ -3084,7 +3084,7 @@ void sentinelCommand(client *c) {
|
|||||||
* ip and port are the ip and port of the master we want to be
|
* ip and port are the ip and port of the master we want to be
|
||||||
* checked by Sentinel. Note that the command will not check by
|
* checked by Sentinel. Note that the command will not check by
|
||||||
* name but just by master, in theory different Sentinels may monitor
|
* name but just by master, in theory different Sentinels may monitor
|
||||||
* differnet masters with the same name.
|
* different masters with the same name.
|
||||||
*
|
*
|
||||||
* current-epoch is needed in order to understand if we are allowed
|
* current-epoch is needed in order to understand if we are allowed
|
||||||
* to vote for a failover leader or not. Each Sentinel can vote just
|
* to vote for a failover leader or not. Each Sentinel can vote just
|
||||||
@ -3511,14 +3511,13 @@ void sentinelSetCommand(client *c) {
|
|||||||
"Reconfiguration of scripts path is denied for "
|
"Reconfiguration of scripts path is denied for "
|
||||||
"security reasons. Check the deny-scripts-reconfig "
|
"security reasons. Check the deny-scripts-reconfig "
|
||||||
"configuration directive in your Sentinel configuration");
|
"configuration directive in your Sentinel configuration");
|
||||||
return;
|
goto seterr;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (strlen(value) && access(value,X_OK) == -1) {
|
if (strlen(value) && access(value,X_OK) == -1) {
|
||||||
addReplyError(c,
|
addReplyError(c,
|
||||||
"Notification script seems non existing or non executable");
|
"Notification script seems non existing or non executable");
|
||||||
if (changes) sentinelFlushConfig();
|
goto seterr;
|
||||||
return;
|
|
||||||
}
|
}
|
||||||
sdsfree(ri->notification_script);
|
sdsfree(ri->notification_script);
|
||||||
ri->notification_script = strlen(value) ? sdsnew(value) : NULL;
|
ri->notification_script = strlen(value) ? sdsnew(value) : NULL;
|
||||||
@ -3531,15 +3530,14 @@ void sentinelSetCommand(client *c) {
|
|||||||
"Reconfiguration of scripts path is denied for "
|
"Reconfiguration of scripts path is denied for "
|
||||||
"security reasons. Check the deny-scripts-reconfig "
|
"security reasons. Check the deny-scripts-reconfig "
|
||||||
"configuration directive in your Sentinel configuration");
|
"configuration directive in your Sentinel configuration");
|
||||||
return;
|
goto seterr;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (strlen(value) && access(value,X_OK) == -1) {
|
if (strlen(value) && access(value,X_OK) == -1) {
|
||||||
addReplyError(c,
|
addReplyError(c,
|
||||||
"Client reconfiguration script seems non existing or "
|
"Client reconfiguration script seems non existing or "
|
||||||
"non executable");
|
"non executable");
|
||||||
if (changes) sentinelFlushConfig();
|
goto seterr;
|
||||||
return;
|
|
||||||
}
|
}
|
||||||
sdsfree(ri->client_reconfig_script);
|
sdsfree(ri->client_reconfig_script);
|
||||||
ri->client_reconfig_script = strlen(value) ? sdsnew(value) : NULL;
|
ri->client_reconfig_script = strlen(value) ? sdsnew(value) : NULL;
|
||||||
@ -3589,8 +3587,7 @@ void sentinelSetCommand(client *c) {
|
|||||||
} else {
|
} else {
|
||||||
addReplyErrorFormat(c,"Unknown option or number of arguments for "
|
addReplyErrorFormat(c,"Unknown option or number of arguments for "
|
||||||
"SENTINEL SET '%s'", option);
|
"SENTINEL SET '%s'", option);
|
||||||
if (changes) sentinelFlushConfig();
|
goto seterr;
|
||||||
return;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Log the event. */
|
/* Log the event. */
|
||||||
@ -3616,9 +3613,11 @@ void sentinelSetCommand(client *c) {
|
|||||||
return;
|
return;
|
||||||
|
|
||||||
badfmt: /* Bad format errors */
|
badfmt: /* Bad format errors */
|
||||||
if (changes) sentinelFlushConfig();
|
|
||||||
addReplyErrorFormat(c,"Invalid argument '%s' for SENTINEL SET '%s'",
|
addReplyErrorFormat(c,"Invalid argument '%s' for SENTINEL SET '%s'",
|
||||||
(char*)ptrFromObj(c->argv[badarg]),option);
|
(char*)ptrFromObj(c->argv[badarg]),option);
|
||||||
|
seterr:
|
||||||
|
if (changes) sentinelFlushConfig();
|
||||||
|
return;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Our fake PUBLISH command: it is actually useful only to receive hello messages
|
/* Our fake PUBLISH command: it is actually useful only to receive hello messages
|
||||||
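
The SENTINEL SET hunks above replace several early "return;" error exits with a
single "goto seterr;" label that performs the sentinelFlushConfig() step once on
every error path that already modified state. A hedged, self-contained sketch of
that single-exit pattern, using hypothetical parse_option()/flush_config()
helpers rather than Sentinel's real functions:

    /* Illustration only: every failing branch jumps to one cleanup label,
     * so the "flush if something already changed" step cannot be skipped. */
    #include <stdio.h>
    #include <string.h>

    static int changes = 0;
    static void flush_config(void) { puts("config flushed"); }

    static int parse_option(const char *opt, const char *val) {
        (void)val;
        if (strcmp(opt, "down-after-milliseconds") == 0) { changes++; return 0; }
        return -1; /* unknown option */
    }

    static int set_command(const char *opt, const char *val) {
        if (val == NULL) goto seterr;                 /* bad format */
        if (parse_option(opt, val) != 0) goto seterr; /* unknown option */
        if (changes) flush_config();                  /* success path */
        return 0;
    seterr:
        if (changes) flush_config();                  /* single error exit */
        return -1;
    }

    int main(void) {
        set_command("down-after-milliseconds", "5000");
        set_command("no-such-option", "1");
        return 0;
    }
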
@@ -3997,7 +3996,7 @@ int sentinelSendSlaveOf(sentinelRedisInstance *ri, const char *host, int port) {
 * the following tasks:
 * 1) Reconfigure the instance according to the specified host/port params.
 * 2) Rewrite the configuration.
- * 3) Disconnect all clients (but this one sending the commnad) in order
+ * 3) Disconnect all clients (but this one sending the command) in order
 *    to trigger the ask-master-on-reconnection protocol for connected
 *    clients.
 *
@@ -4552,7 +4551,7 @@ void sentinelHandleDictOfRedisInstances(dict *instances) {
 * difference bigger than SENTINEL_TILT_TRIGGER milliseconds if one of the
 * following conditions happen:
 *
- * 1) The Sentiel process for some time is blocked, for every kind of
+ * 1) The Sentinel process for some time is blocked, for every kind of
 *    random reason: the load is huge, the computer was frozen for some time
 *    in I/O or alike, the process was stopped by a signal. Everything.
 * 2) The system clock was altered significantly.

277
src/server.cpp
@@ -72,6 +72,10 @@ int g_fTestMode = false;
const char *motd_url = "http://api.keydb.dev/motd/motd_server.txt";
const char *motd_cache_file = "/.keydb-server-motd";

+#ifdef __linux__
+#include <sys/mman.h>
+#endif
+
/* Our shared "common" objects */

struct sharedObjectsStruct shared;
@@ -136,7 +140,7 @@ volatile unsigned long lru_clock; /* Server global current LRU time. */
 * write: Write command (may modify the key space).
 *
 * read-only: All the non special commands just reading from keys without
- * changing the content, or returning other informations like
+ * changing the content, or returning other information like
 * the TIME command. Special commands such administrative commands
 * or transaction related commands (multi, exec, discard, ...)
 * are not flagged as read-only commands, since they affect the
@@ -924,7 +928,7 @@ struct redisCommand redisCommandTable[] = {

    /* GEORADIUS has store options that may write. */
    {"georadius",georadiusCommand,-6,
-    "write @geo",
+    "write use-memory @geo",
     0,georadiusGetKeys,1,1,1,0,0,0},

    {"georadius_ro",georadiusroCommand,-6,
@@ -932,7 +936,7 @@ struct redisCommand redisCommandTable[] = {
     0,georadiusGetKeys,1,1,1,0,0,0},

    {"georadiusbymember",georadiusbymemberCommand,-5,
-    "write @geo",
+    "write use-memory @geo",
     0,georadiusGetKeys,1,1,1,0,0,0},

    {"georadiusbymember_ro",georadiusbymemberroCommand,-5,
@@ -1345,7 +1349,7 @@ dictType objectKeyHeapPointerValueDictType = {
    dictVanillaFree /* val destructor */
};

-/* Set dictionary type. Keys are SDS strings, values are ot used. */
+/* Set dictionary type. Keys are SDS strings, values are not used. */
dictType setDictType = {
    dictSdsHash, /* hash function */
    NULL, /* key dup */
@@ -1365,7 +1369,7 @@ dictType zsetDictType = {
    NULL /* val destructor */
};

-/* db->pdict, keys are sds strings, vals are Redis objects. */
+/* db->dict, keys are sds strings, vals are Redis objects. */
dictType dbDictType = {
    dictSdsHash, /* hash function */
    NULL, /* key dup */
@@ -1450,9 +1454,8 @@ dictType clusterNodesBlackListDictType = {
    NULL /* val destructor */
};

-/* Cluster re-addition blacklist. This maps node IDs to the time
- * we can re-add this node. The goal is to avoid readding a removed
- * node for some time. */
+/* Modules system dictionary type. Keys are module name,
+ * values are pointer to RedisModule struct. */
dictType modulesDictType = {
    dictSdsCaseHash, /* hash function */
    NULL, /* key dup */
@@ -1496,21 +1499,21 @@ int htNeedsResize(dict *dict) {
/* If the percentage of used slots in the HT reaches HASHTABLE_MIN_FILL
 * we resize the hash table to save memory */
void tryResizeHashTables(int dbid) {
-    if (htNeedsResize(g_pserver->db[dbid].pdict))
-        dictResize(g_pserver->db[dbid].pdict);
+    if (htNeedsResize(g_pserver->db[dbid].dict))
+        dictResize(g_pserver->db[dbid].dict);
}

/* Our hash table implementation performs rehashing incrementally while
 * we write/read from the hash table. Still if the server is idle, the hash
 * table will use two tables for a long time. So we try to use 1 millisecond
- * of CPU time at every call of this function to perform some rehahsing.
+ * of CPU time at every call of this function to perform some rehashing.
 *
 * The function returns 1 if some rehashing was performed, otherwise 0
 * is returned. */
int incrementallyRehash(int dbid) {
    /* Keys dictionary */
-    if (dictIsRehashing(g_pserver->db[dbid].pdict)) {
-        dictRehashMilliseconds(g_pserver->db[dbid].pdict,1);
+    if (dictIsRehashing(g_pserver->db[dbid].dict)) {
+        dictRehashMilliseconds(g_pserver->db[dbid].dict,1);
        return 1; /* already used our millisecond for this loop... */
    }
    return 0;
@@ -1520,8 +1523,8 @@ int incrementallyRehash(int dbid) {
 * as we want to avoid resizing the hash tables when there is a child in order
 * to play well with copy-on-write (otherwise when a resize happens lots of
 * memory pages are copied). The goal of this function is to update the ability
- * for dict.c to resize the hash tables accordingly to the fact we have o not
- * running childs. */
+ * for dict.c to resize the hash tables accordingly to the fact we have an
+ * active fork child running. */
void updateDictResizePolicy(void) {
    if (!hasActiveChildProcess())
        dictEnableResize();
@@ -1634,7 +1637,7 @@ size_t ClientsPeakMemInput[CLIENTS_PEAK_MEM_USAGE_SLOTS];
size_t ClientsPeakMemOutput[CLIENTS_PEAK_MEM_USAGE_SLOTS];

int clientsCronTrackExpansiveClients(client *c) {
-    size_t in_usage = sdsAllocSize(c->querybuf);
+    size_t in_usage = sdsZmallocSize(c->querybuf) + c->argv_len_sum;
    size_t out_usage = getClientOutputBufferMemoryUsage(c);
    int i = g_pserver->unixtime % CLIENTS_PEAK_MEM_USAGE_SLOTS;
    int zeroidx = (i+1) % CLIENTS_PEAK_MEM_USAGE_SLOTS;
@@ -1669,10 +1672,12 @@ int clientsCronTrackClientsMemUsage(client *c) {
    size_t mem = 0;
    int type = getClientType(c);
    mem += getClientOutputBufferMemoryUsage(c);
-    mem += sdsAllocSize(c->querybuf);
-    mem += sizeof(client);
+    mem += sdsZmallocSize(c->querybuf);
+    mem += zmalloc_size(c);
+    mem += c->argv_len_sum;
+    if (c->argv) mem += zmalloc_size(c->argv);
    /* Now that we have the memory used by the client, remove the old
-     * value from the old categoty, and add it back. */
+     * value from the old category, and add it back. */
    g_pserver->stat_clients_type_memory[c->client_cron_last_memory_type] -=
        c->client_cron_last_memory_usage;
    g_pserver->stat_clients_type_memory[type] += mem;
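
The two clientsCron hunks above change per-client memory accounting to use
allocator-reported sizes (sdsZmallocSize, zmalloc_size) and to include the
cached argv_len_sum plus the argv array allocation. A rough, illustrative
breakdown of what gets summed, using a hypothetical struct rather than KeyDB's
client object:

    /* Illustration only: the components that make up a client's tracked
     * memory footprint after the change. */
    #include <stdio.h>

    typedef struct {
        size_t output_buf_bytes; /* reply buffers */
        size_t querybuf_alloc;   /* allocator size of the query buffer */
        size_t struct_alloc;     /* allocator size of the client struct */
        size_t argv_len_sum;     /* maintained incrementally while parsing argv */
        size_t argv_alloc;       /* allocation backing the argv array itself */
    } client_mem;

    static size_t client_memory_usage(const client_mem *c) {
        return c->output_buf_bytes + c->querybuf_alloc + c->struct_alloc +
               c->argv_len_sum + c->argv_alloc;
    }

    int main(void) {
        client_mem c = { 16384, 8192, 1024, 120, 64 };
        printf("client uses ~%zu bytes\n", client_memory_usage(&c));
        return 0;
    }
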
@@ -2035,8 +2040,8 @@ int serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {
        for (j = 0; j < cserver.dbnum; j++) {
            long long size, used, vkeys;

-            size = dictSlots(g_pserver->db[j].pdict);
-            used = dictSize(g_pserver->db[j].pdict);
+            size = dictSlots(g_pserver->db[j].dict);
+            used = dictSize(g_pserver->db[j].dict);
            vkeys = g_pserver->db[j].setexpire->size();
            if (used || vkeys) {
                serverLog(LL_VERBOSE,"DB %d: %lld keys (%lld volatile) in %lld slots HT.",j,used,vkeys,size);
@@ -2114,6 +2119,9 @@ int serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {
            }
        }
    }
+    /* Just for the sake of defensive programming, to avoid forgeting to
+     * call this function when need. */
+    updateDictResizePolicy();


    /* AOF postponed flush: Try at every cron cycle if the slow fsync
@@ -2123,7 +2131,7 @@ int serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {
    /* AOF write errors: in this case we have a buffer to flush as well and
     * clear the AOF error in case of success to make the DB writable again,
     * however to try every second is enough in case of 'hz' is set to
-     * an higher frequency. */
+     * a higher frequency. */
    run_with_period(1000) {
        if (g_pserver->aof_last_write_status == C_ERR)
            flushAppendOnlyFile(0);
@@ -2238,6 +2246,10 @@ void beforeSleep(struct aeEventLoop *eventLoop) {
    UNUSED(eventLoop);
    int iel = ielFromEventLoop(eventLoop);

+    size_t zmalloc_used = zmalloc_used_memory();
+    if (zmalloc_used > g_pserver->stat_peak_memory)
+        g_pserver->stat_peak_memory = zmalloc_used;
+
    locker.arm();
    serverAssert(g_pserver->repl_batch_offStart < 0);
    runAndPropogateToReplicas(processClients);
@@ -2323,6 +2335,11 @@ void beforeSleep(struct aeEventLoop *eventLoop) {
    /* Close clients that need to be closed asynchronous */
    freeClientsInAsyncFreeQueue(iel);

+    /* Try to process blocked clients every once in while. Example: A module
+     * calls RM_SignalKeyAsReady from within a timer callback (So we don't
+     * visit processCommand() at all). */
+    handleClientsBlockedOnKeys();
+
    /* Before we are going to sleep, let the threads access the dataset by
     * releasing the GIL. Redis main thread will not touch anything at this
     * time. */
@@ -2331,13 +2348,18 @@ void beforeSleep(struct aeEventLoop *eventLoop) {
    if (!fSentReplies)
        handleClientsWithPendingWrites(iel, aof_state);
    if (moduleCount()) moduleReleaseGIL(TRUE /*fServerThread*/);
+
+    /* Do NOT add anything below moduleReleaseGIL !!! */
}

-/* This function is called immadiately after the event loop multiplexing
+/* This function is called immediately after the event loop multiplexing
 * API returned, and the control is going to soon return to Redis by invoking
 * the different events callbacks. */
void afterSleep(struct aeEventLoop *eventLoop) {
    UNUSED(eventLoop);
+    /* Do NOT add anything above moduleAcquireGIL !!! */
+
+    /* Aquire the modules GIL so that their threads won't touch anything. */
    if (moduleCount()) moduleAcquireGIL(TRUE /*fServerThread*/);
}

@@ -2582,7 +2604,7 @@ void initServerConfig(void) {
    R_NegInf = -1.0/R_Zero;
    R_Nan = R_Zero/R_Zero;

-    /* Command table -- we initiialize it here as it is part of the
+    /* Command table -- we initialize it here as it is part of the
     * initial configuration, since command names may be changed via
     * keydb.conf using the rename-command directive. */
    g_pserver->commands = dictCreate(&commandTableDictType,NULL);
@@ -2723,7 +2745,7 @@ static void readOOMScoreAdj(void) {
 */
int setOOMScoreAdj(int process_class) {

-    if (!g_pserver->oom_score_adj) return C_OK;
+    if (g_pserver->oom_score_adj == OOM_SCORE_ADJ_NO) return C_OK;
    if (process_class == -1)
        process_class = (listLength(g_pserver->masters) ? CONFIG_OOM_REPLICA : CONFIG_OOM_MASTER);

@@ -2735,6 +2757,8 @@ int setOOMScoreAdj(int process_class) {
    char buf[64];

    val = g_pserver->oom_score_adj_base + g_pserver->oom_score_adj_values[process_class];
+    if (g_pserver->oom_score_adj == OOM_SCORE_RELATIVE)
+        val += g_pserver->oom_score_adj_base;
    if (val > 1000) val = 1000;
    if (val < -1000) val = -1000;

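
In the setOOMScoreAdj() hunks above, the early return now tests the configured
mode against OOM_SCORE_ADJ_NO, a base offset is applied when the mode is
OOM_SCORE_RELATIVE, and the result is clamped to the kernel's [-1000, 1000]
range. A small illustration of that combine-and-clamp logic, with hypothetical
local names rather than KeyDB's actual fields:

    /* Illustration only: relative values are offset from the inherited base,
     * absolute values are used as-is, and everything is clamped. */
    #include <stdio.h>

    enum { OOM_DISABLED = 0, OOM_RELATIVE = 1, OOM_ABSOLUTE = 2 };

    static int effective_oom_score(int mode, int base, int value) {
        if (mode == OOM_DISABLED) return base; /* leave the inherited score */
        int val = value;
        if (mode == OOM_RELATIVE) val += base;
        if (val > 1000) val = 1000;
        if (val < -1000) val = -1000;
        return val;
    }

    int main(void) {
        printf("%d\n", effective_oom_score(OOM_RELATIVE, 100, 950));   /* 1000 */
        printf("%d\n", effective_oom_score(OOM_ABSOLUTE, 100, -2000)); /* -1000 */
        return 0;
    }
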
@@ -2975,6 +2999,11 @@ void resetServerStats(void) {
    g_pserver->aof_delayed_fsync = 0;
}

+void makeThreadKillable(void) {
+    pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
+    pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, NULL);
+}
+
static void initNetworkingThread(int iel, int fReusePort)
{
    /* Open the TCP listening socket for the user commands. */
@@ -3004,6 +3033,8 @@ static void initNetworkingThread(int iel, int fReusePort)
        }
    }

+    makeThreadKillable();
+
    for (int j = 0; j < g_pserver->rgthreadvar[iel].tlsfd_count; j++) {
        if (aeCreateFileEvent(g_pserver->rgthreadvar[iel].el, g_pserver->rgthreadvar[iel].tlsfd[j], AE_READABLE,
            acceptTLSHandler,NULL) == AE_ERR)
@@ -3104,7 +3135,7 @@ void initServer(void) {
    /* Create the Redis databases, and initialize other internal state. */
    for (int j = 0; j < cserver.dbnum; j++) {
        new (&g_pserver->db[j]) redisDb;
-        g_pserver->db[j].pdict = dictCreate(&dbDictType,NULL);
+        g_pserver->db[j].dict = dictCreate(&dbDictType,NULL);
        g_pserver->db[j].setexpire = new(MALLOC_LOCAL) expireset();
        g_pserver->db[j].expireitr = g_pserver->db[j].setexpire->end();
        g_pserver->db[j].blocking_keys = dictCreate(&keylistDictType,NULL);
@@ -3137,6 +3168,8 @@ void initServer(void) {
    g_pserver->aof_state = g_pserver->aof_enabled ? AOF_ON : AOF_OFF;
    g_pserver->hz = g_pserver->config_hz;
    cserver.pid = getpid();
+    g_pserver->in_fork_child = CHILD_TYPE_NONE;
+    cserver.main_thread_id = pthread_self();
    g_pserver->clients_index = raxNew();
    g_pserver->clients_to_close = listCreate();
    g_pserver->replicaseldb = -1; /* Force to emit the first SELECT command. */
@@ -3319,7 +3352,7 @@ int populateCommandTableParseFlags(struct redisCommand *c, const char *strflags)
}

/* Populates the Redis Command Table starting from the hard coded list
- * we have on top of redis.c file. */
+ * we have on top of server.c file. */
void populateCommandTable(void) {
    int j;
    int numcommands = sizeof(redisCommandTable)/sizeof(struct redisCommand);
@@ -3454,12 +3487,12 @@ void propagate(struct redisCommand *cmd, int dbid, robj **argv, int argc,
 *
 * 'cmd' must be a pointer to the Redis command to replicate, dbid is the
 * database ID the command should be propagated into.
- * Arguments of the command to propagte are passed as an array of redis
+ * Arguments of the command to propagate are passed as an array of redis
 * objects pointers of len 'argc', using the 'argv' vector.
 *
 * The function does not take a reference to the passed 'argv' vector,
 * so it is up to the caller to release the passed argv (but it is usually
- * stack allocated). The function autoamtically increments ref count of
+ * stack allocated). The function automatically increments ref count of
 * passed objects, so the caller does not need to. */
void alsoPropagate(struct redisCommand *cmd, int dbid, robj **argv, int argc,
                   int target)
@@ -3589,6 +3622,13 @@ void call(client *c, int flags) {
    dirty = g_pserver->dirty-dirty;
    if (dirty < 0) dirty = 0;

+    /* After executing command, we will close the client after writing entire
+     * reply if it is set 'CLIENT_CLOSE_AFTER_COMMAND' flag. */
+    if (c->flags & CLIENT_CLOSE_AFTER_COMMAND) {
+        c->flags &= ~CLIENT_CLOSE_AFTER_COMMAND;
+        c->flags |= CLIENT_CLOSE_AFTER_REPLY;
+    }
+
    /* When EVAL is called loading the AOF we don't want commands called
     * from Lua to go into the slowlog or to populate statistics. */
    if (g_pserver->loading && c->flags & CLIENT_LUA)
@@ -3637,7 +3677,7 @@ void call(client *c, int flags) {
        if (c->flags & CLIENT_FORCE_AOF) propagate_flags |= PROPAGATE_AOF;

        /* However prevent AOF / replication propagation if the command
-         * implementations called preventCommandPropagation() or similar,
+         * implementation called preventCommandPropagation() or similar,
         * or if we don't have the call() flags to do so. */
        if (c->flags & CLIENT_PREVENT_REPL_PROP ||
            !(flags & CMD_CALL_PROPAGATE_REPL))
@@ -3719,6 +3759,12 @@ void call(client *c, int flags) {

    g_pserver->stat_numcommands++;
    serverTL->fixed_time_expire--;
+
+    /* Record peak memory after each command and before the eviction that runs
+     * before the next command. */
+    size_t zmalloc_used = zmalloc_used_memory();
+    if (zmalloc_used > g_pserver->stat_peak_memory)
+        g_pserver->stat_peak_memory = zmalloc_used;
}

/* Used when a command that is ready for execution needs to be rejected, due to
@@ -3895,7 +3941,7 @@ int processCommand(client *c, int callFlags) {
    }

    /* Save out_of_memory result at script start, otherwise if we check OOM
-     * untill first write within script, memory used by lua stack and
+     * until first write within script, memory used by lua stack and
     * arguments might interfere. */
    if (c->cmd->proc == evalCommand || c->cmd->proc == evalShaCommand) {
        g_pserver->lua_oom = out_of_memory;
@@ -4090,6 +4136,10 @@ int prepareForShutdown(int flags) {
           overwrite the synchronous saving did by SHUTDOWN. */
        if (g_pserver->rdb_child_pid != -1) {
            serverLog(LL_WARNING,"There is a child saving an .rdb. Killing it!");
+            /* Note that, in killRDBChild, we call rdbRemoveTempFile that will
+             * do close fd(in order to unlink file actully) in background thread.
+             * The temp rdb file fd may won't be closed when redis exits quickly,
+             * but OS will close this fd when process exits. */
            killRDBChild();
        }

@@ -4171,7 +4221,7 @@ int prepareForShutdown(int flags) {

/*================================== Commands =============================== */

-/* Sometimes Redis cannot accept write commands because there is a perstence
+/* Sometimes Redis cannot accept write commands because there is a persistence
 * error with the RDB or AOF file, and Redis is configured in order to stop
 * accepting writes in such situation. This function returns if such a
 * condition is active, and the type of the condition.
@@ -4320,7 +4370,8 @@ NULL
        addReplyLongLong(c, dictSize(g_pserver->commands));
    } else if (!strcasecmp((const char*)ptrFromObj(c->argv[1]),"getkeys") && c->argc >= 3) {
        struct redisCommand *cmd = (redisCommand*)lookupCommand((sds)ptrFromObj(c->argv[2]));
-        int *keys, numkeys, j;
+        getKeysResult result = GETKEYS_RESULT_INIT;
+        int j;

        if (!cmd) {
            addReplyError(c,"Invalid command specified");
@@ -4335,14 +4386,13 @@ NULL
            return;
        }

-        keys = getKeysFromCommand(cmd,c->argv+2,c->argc-2,&numkeys);
-        if (!keys) {
+        if (!getKeysFromCommand(cmd,c->argv+2,c->argc-2,&result)) {
            addReplyError(c,"Invalid arguments specified for command");
        } else {
-            addReplyArrayLen(c,numkeys);
-            for (j = 0; j < numkeys; j++) addReplyBulk(c,c->argv[keys[j]+2]);
-            getKeysFreeResult(keys);
+            addReplyArrayLen(c,result.numkeys);
+            for (j = 0; j < result.numkeys; j++) addReplyBulk(c,c->argv[result.keys[j]+2]);
        }
+        getKeysFreeResult(&result);
    } else {
        addReplySubcommandSyntaxError(c);
    }
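
The COMMAND GETKEYS hunk above moves from a bare int array returned by
getKeysFromCommand() to a getKeysResult value that is initialized with
GETKEYS_RESULT_INIT and always released with getKeysFreeResult(), including on
the error path. A hedged sketch of that result-struct pattern with made-up
names (key_result, KEY_RESULT_INIT), not the real KeyDB API:

    /* Illustration only: one init macro, one free function, safe on both
     * the success and the failure path. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int numkeys;
        int *keys; /* heap allocated only when keys were actually found */
    } key_result;

    #define KEY_RESULT_INIT { 0, NULL }

    static int get_keys(int argc, key_result *r) {
        if (argc < 2) return 0;          /* failure: nothing was allocated */
        r->keys = malloc(sizeof(int));
        if (r->keys == NULL) return 0;
        r->keys[0] = 1;                  /* the key is argv[1] */
        r->numkeys = 1;
        return 1;
    }

    static void key_result_free(key_result *r) {
        free(r->keys);
        r->keys = NULL;
        r->numkeys = 0;
    }

    int main(void) {
        key_result r = KEY_RESULT_INIT;
        if (get_keys(3, &r))
            printf("first key index: %d\n", r.keys[0]);
        key_result_free(&r);             /* safe even if get_keys() failed */
        return 0;
    }
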
@@ -4988,7 +5038,7 @@ sds genRedisInfoString(const char *section) {
        for (j = 0; j < cserver.dbnum; j++) {
            long long keys, vkeys;

-            keys = dictSize(g_pserver->db[j].pdict);
+            keys = dictSize(g_pserver->db[j].dict);
            vkeys = g_pserver->db[j].setexpire->size();

            // Adjust TTL by the current time
@@ -5042,6 +5092,21 @@ void monitorCommand(client *c) {

/* =================================== Main! ================================ */

+int checkIgnoreWarning(const char *warning) {
+    int argc, j;
+    sds *argv = sdssplitargs(g_pserver->ignore_warnings, &argc);
+    if (argv == NULL)
+        return 0;
+
+    for (j = 0; j < argc; j++) {
+        char *flag = argv[j];
+        if (!strcasecmp(flag, warning))
+            break;
+    }
+    sdsfreesplitres(argv,argc);
+    return j < argc;
+}
+
#ifdef __linux__
int linuxOvercommitMemoryValue(void) {
    FILE *fp = fopen("/proc/sys/vm/overcommit_memory","r");
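
checkIgnoreWarning() in the hunk above splits the ignore-warnings config string
and does a case-insensitive match against the warning name. A standalone sketch
of the same idea using plain strtok_r() instead of KeyDB's sdssplitargs(); the
config string and function name here are made up for the example:

    /* Illustration only: return 1 if `warning` appears (case-insensitively)
     * in a space-separated config string, 0 otherwise. */
    #include <stdio.h>
    #include <string.h>
    #include <strings.h> /* strcasecmp */

    static int ignore_warning(const char *config, const char *warning) {
        char buf[256];
        snprintf(buf, sizeof(buf), "%s", config); /* strtok_r modifies its input */
        char *save = NULL;
        for (char *tok = strtok_r(buf, " ", &save); tok != NULL;
             tok = strtok_r(NULL, " ", &save)) {
            if (strcasecmp(tok, warning) == 0) return 1;
        }
        return 0;
    }

    int main(void) {
        printf("%d\n", ignore_warning("ARM64-COW-BUG other", "arm64-cow-bug")); /* 1 */
        printf("%d\n", ignore_warning("", "ARM64-COW-BUG"));                    /* 0 */
        return 0;
    }
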
@@ -5065,6 +5130,113 @@ void linuxMemoryWarnings(void) {
        serverLog(LL_WARNING,"WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with KeyDB. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. KeyDB must be restarted after THP is disabled (set to 'madvise' or 'never').");
    }
}
+
+#ifdef __arm64__
+
+/* Get size in kilobytes of the Shared_Dirty pages of the calling process for the
+ * memory map corresponding to the provided address, or -1 on error. */
+static int smapsGetSharedDirty(unsigned long addr) {
+    int ret, in_mapping = 0, val = -1;
+    unsigned long from, to;
+    char buf[64];
+    FILE *f;
+
+    f = fopen("/proc/self/smaps", "r");
+    serverAssert(f);
+
+    while (1) {
+        if (!fgets(buf, sizeof(buf), f))
+            break;
+
+        ret = sscanf(buf, "%lx-%lx", &from, &to);
+        if (ret == 2)
+            in_mapping = from <= addr && addr < to;
+
+        if (in_mapping && !memcmp(buf, "Shared_Dirty:", 13)) {
+            ret = sscanf(buf, "%*s %d", &val);
+            serverAssert(ret == 1);
+            break;
+        }
+    }
+
+    fclose(f);
+    return val;
+}
+
+/* Older arm64 Linux kernels have a bug that could lead to data corruption
+ * during background save in certain scenarios. This function checks if the
+ * kernel is affected.
+ * The bug was fixed in commit ff1712f953e27f0b0718762ec17d0adb15c9fd0b
+ * titled: "arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect()"
+ * Return 1 if the kernel seems to be affected, and 0 otherwise. */
+int linuxMadvFreeForkBugCheck(void) {
+    int ret, pipefd[2];
+    pid_t pid;
+    char *p, *q, bug_found = 0;
+    const long map_size = 3 * 4096;
+
+    /* Create a memory map that's in our full control (not one used by the allocator). */
+    p = mmap(NULL, map_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+    serverAssert(p != MAP_FAILED);
+
+    q = p + 4096;
+
+    /* Split the memory map in 3 pages by setting their protection as RO|RW|RO to prevent
+     * Linux from merging this memory map with adjacent VMAs. */
+    ret = mprotect(q, 4096, PROT_READ | PROT_WRITE);
+    serverAssert(!ret);
+
+    /* Write to the page once to make it resident */
+    *(volatile char*)q = 0;
+
+    /* Tell the kernel that this page is free to be reclaimed. */
+#ifndef MADV_FREE
+#define MADV_FREE 8
+#endif
+    ret = madvise(q, 4096, MADV_FREE);
+    serverAssert(!ret);
+
+    /* Write to the page after being marked for freeing, this is supposed to take
+     * ownership of that page again. */
+    *(volatile char*)q = 0;
+
+    /* Create a pipe for the child to return the info to the parent. */
+    ret = pipe(pipefd);
+    serverAssert(!ret);
+
+    /* Fork the process. */
+    pid = fork();
+    serverAssert(pid >= 0);
+    if (!pid) {
+        /* Child: check if the page is marked as dirty, expecing 4 (kB).
+         * A value of 0 means the kernel is affected by the bug. */
+        if (!smapsGetSharedDirty((unsigned long)q))
+            bug_found = 1;
+
+        ret = write(pipefd[1], &bug_found, 1);
+        serverAssert(ret == 1);
+
+        exit(0);
+    } else {
+        /* Read the result from the child. */
+        ret = read(pipefd[0], &bug_found, 1);
+        serverAssert(ret == 1);
+
+        /* Reap the child pid. */
+        serverAssert(waitpid(pid, NULL, 0) == pid);
+    }
+
+    /* Cleanup */
+    ret = close(pipefd[0]);
+    serverAssert(!ret);
+    ret = close(pipefd[1]);
+    serverAssert(!ret);
+    ret = munmap(p, map_size);
+    serverAssert(!ret);
+
+    return bug_found;
+}
+#endif /* __arm64__ */
#endif /* __linux__ */

void createPidFile(void) {
@@ -5185,7 +5357,7 @@ static void sigShutdownHandler(int sig) {
     * on disk. */
    if (g_pserver->shutdown_asap && sig == SIGINT) {
        serverLogFromHandler(LL_WARNING, "You insist... exiting now.");
-        rdbRemoveTempFile(getpid());
+        rdbRemoveTempFile(getpid(), 1);
        exit(1); /* Exit with an error since this was not a clean shutdown. */
    } else if (g_pserver->loading) {
        serverLogFromHandler(LL_WARNING, "Received shutdown signal during loading, exiting now.");
@@ -5225,7 +5397,8 @@ void setupSignalHandlers(void) {
 * accepting writes because of a write error condition. */
static void sigKillChildHandler(int sig) {
    UNUSED(sig);
-    serverLogFromHandler(LL_WARNING, "Received SIGUSR1 in child, exiting now.");
+    int level = g_pserver->in_fork_child == CHILD_TYPE_MODULE? LL_VERBOSE: LL_WARNING;
+    serverLogFromHandler(level, "Received SIGUSR1 in child, exiting now.");
    exitFromChild(SERVER_CHILD_NOERROR_RETVAL);
}

@@ -5249,13 +5422,20 @@ void closeClildUnusedResourceAfterFork() {
    closeListeningSockets(0);
    if (g_pserver->cluster_enabled && g_pserver->cluster_config_file_lock_fd != -1)
        close(g_pserver->cluster_config_file_lock_fd); /* don't care if this fails */
+
+    /* Clear cserver.pidfile, this is the parent pidfile which should not
+     * be touched (or deleted) by the child (on exit / crash) */
+    zfree(cserver.pidfile);
+    cserver.pidfile = NULL;
}

-int redisFork() {
+/* purpose is one of CHILD_TYPE_ types */
+int redisFork(int purpose) {
    int childpid;
    long long start = ustime();
    if ((childpid = fork()) == 0) {
        /* Child */
+        g_pserver->in_fork_child = purpose;
        setOOMScoreAdj(CONFIG_OOM_BGCHILD);
        setupChildSignalHandlers();
        closeClildUnusedResourceAfterFork();
@@ -5267,7 +5447,6 @@ int redisFork() {
        if (childpid == -1) {
            return -1;
        }
-        updateDictResizePolicy();
    }
    return childpid;
}
@@ -5529,7 +5708,6 @@ int iAmMaster(void) {
        (g_pserver->cluster_enabled && nodeIsMaster(g_pserver->cluster->myself)));
}

-
int main(int argc, char **argv) {
    struct timeval tv;
    int j;
@@ -5728,7 +5906,16 @@ int main(int argc, char **argv) {
    serverLog(LL_WARNING,"Server initialized");
#ifdef __linux__
    linuxMemoryWarnings();
-#endif
+#if defined (__arm64__)
+    if (linuxMadvFreeForkBugCheck()) {
+        serverLog(LL_WARNING,"WARNING Your kernel has a bug that could lead to data corruption during background save. Please upgrade to the latest stable kernel.");
+        if (!checkIgnoreWarning("ARM64-COW-BUG")) {
+            serverLog(LL_WARNING,"Redis will now exit to prevent data corruption. Note that it is possible to suppress this warning by setting the following config: ignore-warnings ARM64-COW-BUG");
+            exit(1);
+        }
+    }
+#endif /* __arm64__ */
+#endif /* __linux__ */
    moduleLoadFromQueue();
    ACLLoadUsersAtStartup();


143
src/server.h
@ -352,7 +352,7 @@ extern int configOOMScoreAdjValuesDefaults[CONFIG_OOM_COUNT];
|
|||||||
/* Hash table parameters */
|
/* Hash table parameters */
|
||||||
#define HASHTABLE_MIN_FILL 10 /* Minimal hash table fill 10% */
|
#define HASHTABLE_MIN_FILL 10 /* Minimal hash table fill 10% */
|
||||||
|
|
||||||
/* Command flags. Please check the command table defined in the redis.c file
|
/* Command flags. Please check the command table defined in the server.c file
|
||||||
* for more information about the meaning of every flag. */
|
* for more information about the meaning of every flag. */
|
||||||
#define CMD_WRITE (1ULL<<0) /* "write" flag */
|
#define CMD_WRITE (1ULL<<0) /* "write" flag */
|
||||||
#define CMD_READONLY (1ULL<<1) /* "read-only" flag */
|
#define CMD_READONLY (1ULL<<1) /* "read-only" flag */
|
||||||
@ -457,7 +457,9 @@ extern int configOOMScoreAdjValuesDefaults[CONFIG_OOM_COUNT];
|
|||||||
about writes performed by myself.*/
|
about writes performed by myself.*/
|
||||||
#define CLIENT_IN_TO_TABLE (1ULL<<38) /* This client is in the timeout table. */
|
#define CLIENT_IN_TO_TABLE (1ULL<<38) /* This client is in the timeout table. */
|
||||||
#define CLIENT_PROTOCOL_ERROR (1ULL<<39) /* Protocol error chatting with it. */
|
#define CLIENT_PROTOCOL_ERROR (1ULL<<39) /* Protocol error chatting with it. */
|
||||||
#define CLIENT_FORCE_REPLY (1ULL<<40) /* Should addReply be forced to write the text? */
|
#define CLIENT_CLOSE_AFTER_COMMAND (1ULL<<40) /* Close after executing commands
|
||||||
|
* and writing entire reply. */
|
||||||
|
#define CLIENT_FORCE_REPLY (1ULL<<41) /* Should addReply be forced to write the text? */
|
||||||
|
|
||||||
/* Client block type (btype field in client structure)
|
/* Client block type (btype field in client structure)
|
||||||
* if CLIENT_BLOCKED flag is set. */
|
* if CLIENT_BLOCKED flag is set. */
|
||||||
@ -573,6 +575,11 @@ extern int configOOMScoreAdjValuesDefaults[CONFIG_OOM_COUNT];
|
|||||||
#define SET_OP_DIFF 1
|
#define SET_OP_DIFF 1
|
||||||
#define SET_OP_INTER 2
|
#define SET_OP_INTER 2
|
||||||
|
|
||||||
|
/* oom-score-adj defines */
|
||||||
|
#define OOM_SCORE_ADJ_NO 0
|
||||||
|
#define OOM_SCORE_RELATIVE 1
|
||||||
|
#define OOM_SCORE_ADJ_ABSOLUTE 2
|
||||||
|
|
||||||
/* Redis maxmemory strategies. Instead of using just incremental number
|
/* Redis maxmemory strategies. Instead of using just incremental number
|
||||||
* for this defines, we use a set of flags so that testing for certain
|
* for this defines, we use a set of flags so that testing for certain
|
||||||
* properties common to multiple policies is faster. */
|
* properties common to multiple policies is faster. */
|
||||||
@ -922,19 +929,24 @@ struct redisDb {
|
|||||||
|
|
||||||
~redisDb();
|
~redisDb();
|
||||||
|
|
||||||
dict *pdict; /* The keyspace for this DB */
|
::dict *dict; /* The keyspace for this DB */
|
||||||
expireset *setexpire;
|
expireset *setexpire;
|
||||||
expireset::setiter expireitr;
|
expireset::setiter expireitr;
|
||||||
|
|
||||||
dict *blocking_keys; /* Keys with clients waiting for data (BLPOP)*/
|
::dict *blocking_keys; /* Keys with clients waiting for data (BLPOP)*/
|
||||||
dict *ready_keys; /* Blocked keys that received a PUSH */
|
::dict *ready_keys; /* Blocked keys that received a PUSH */
|
||||||
dict *watched_keys; /* WATCHED keys for MULTI/EXEC CAS */
|
::dict *watched_keys; /* WATCHED keys for MULTI/EXEC CAS */
|
||||||
int id; /* Database ID */
|
int id; /* Database ID */
|
||||||
long long last_expire_set; /* when the last expire was set */
|
long long last_expire_set; /* when the last expire was set */
|
||||||
double avg_ttl; /* Average TTL, just for stats */
|
double avg_ttl; /* Average TTL, just for stats */
|
||||||
list *defrag_later; /* List of key names to attempt to defrag one by one, gradually. */
|
list *defrag_later; /* List of key names to attempt to defrag one by one, gradually. */
|
||||||
};
|
};
|
||||||
|
|
||||||
|
/* Declare database backup that include redis main DBs and slots to keys map.
|
||||||
|
* Definition is in db.c. We can't define it here since we define CLUSTER_SLOTS
|
||||||
|
* in cluster.h. */
|
||||||
|
typedef struct dbBackup dbBackup;
|
||||||
|
|
||||||
/* Client MULTI/EXEC state */
|
/* Client MULTI/EXEC state */
|
||||||
typedef struct multiCmd {
|
typedef struct multiCmd {
|
||||||
robj **argv;
|
robj **argv;
|
||||||
@ -963,7 +975,7 @@ typedef struct blockingState {
|
|||||||
* is > timeout then the operation timed out. */
|
* is > timeout then the operation timed out. */
|
||||||
|
|
||||||
/* BLOCKED_LIST, BLOCKED_ZSET and BLOCKED_STREAM */
|
/* BLOCKED_LIST, BLOCKED_ZSET and BLOCKED_STREAM */
|
||||||
dict *keys; /* The keys we are waiting to terminate a blocking
|
::dict *keys; /* The keys we are waiting to terminate a blocking
|
||||||
* operation such as BLPOP or XREAD. Or NULL. */
|
* operation such as BLPOP or XREAD. Or NULL. */
|
||||||
robj *target; /* The key that should receive the element,
|
robj *target; /* The key that should receive the element,
|
||||||
* for BRPOPLPUSH. */
|
* for BRPOPLPUSH. */
|
||||||
@ -1065,6 +1077,7 @@ typedef struct client {
|
|||||||
size_t querybuf_peak; /* Recent (100ms or more) peak of querybuf size. */
|
size_t querybuf_peak; /* Recent (100ms or more) peak of querybuf size. */
|
||||||
int argc; /* Num of arguments of current command. */
|
int argc; /* Num of arguments of current command. */
|
||||||
robj **argv; /* Arguments of current command. */
|
robj **argv; /* Arguments of current command. */
|
||||||
|
size_t argv_len_sum; /* Sum of lengths of objects in argv list. */
|
||||||
struct redisCommand *cmd, *lastcmd; /* Last command executed. */
|
struct redisCommand *cmd, *lastcmd; /* Last command executed. */
|
||||||
user *puser; /* User associated with this connection. If the
|
user *puser; /* User associated with this connection. If the
|
||||||
user is set to NULL the connection can do
|
user is set to NULL the connection can do
|
||||||
@ -1100,7 +1113,7 @@ typedef struct client {
|
|||||||
copying this replica output buffer
|
copying this replica output buffer
|
||||||
should use. */
|
should use. */
|
||||||
char replid[CONFIG_RUN_ID_SIZE+1]; /* Master replication ID (if master). */
|
char replid[CONFIG_RUN_ID_SIZE+1]; /* Master replication ID (if master). */
|
||||||
int slave_listening_port; /* As configured with: SLAVECONF listening-port */
|
int slave_listening_port; /* As configured with: REPLCONF listening-port */
|
||||||
char slave_ip[NET_IP_STR_LEN]; /* Optionally given by REPLCONF ip-address */
|
char slave_ip[NET_IP_STR_LEN]; /* Optionally given by REPLCONF ip-address */
|
||||||
int slave_capa; /* Slave capabilities: SLAVE_CAPA_* bitwise OR. */
|
int slave_capa; /* Slave capabilities: SLAVE_CAPA_* bitwise OR. */
|
||||||
multiState mstate; /* MULTI/EXEC state */
|
multiState mstate; /* MULTI/EXEC state */
|
||||||
@ -1108,7 +1121,7 @@ typedef struct client {
|
|||||||
blockingState bpop; /* blocking state */
|
blockingState bpop; /* blocking state */
|
||||||
long long woff; /* Last write global replication offset. */
|
long long woff; /* Last write global replication offset. */
|
||||||
list *watched_keys; /* Keys WATCHED for MULTI/EXEC CAS */
|
list *watched_keys; /* Keys WATCHED for MULTI/EXEC CAS */
|
||||||
dict *pubsub_channels; /* channels a client is interested in (SUBSCRIBE) */
|
::dict *pubsub_channels; /* channels a client is interested in (SUBSCRIBE) */
|
||||||
list *pubsub_patterns; /* patterns a client is interested in (SUBSCRIBE) */
|
list *pubsub_patterns; /* patterns a client is interested in (SUBSCRIBE) */
|
||||||
sds peerid; /* Cached peer ID. */
|
sds peerid; /* Cached peer ID. */
|
||||||
listNode *client_list_node; /* list node in client list */
|
listNode *client_list_node; /* list node in client list */
|
||||||
@ -1210,7 +1223,7 @@ typedef struct zskiplist {
|
|||||||
} zskiplist;
|
} zskiplist;
|
||||||
|
|
||||||
typedef struct zset {
|
typedef struct zset {
|
||||||
dict *pdict;
|
::dict *dict;
|
||||||
zskiplist *zsl;
|
zskiplist *zsl;
|
||||||
} zset;
|
} zset;
|
||||||
|
|
||||||
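The pdict -> dict member rename above leans on C++ scope qualification: because these sources build as C++ and the member now shares its name with the hash-table type, the type is spelled ::dict wherever the two could collide. A minimal sketch of the resulting access pattern (the helper is invented for illustration, and zobj is assumed to be a skiplist-encoded zset):

    /* Illustrative only: ::dict is the type, zs->dict the renamed member. */
    static unsigned long zsetHashSizeSketch(robj *zobj) {
        zset *zs = (zset*)ptrFromObj(zobj);
        ::dict *d = zs->dict;   /* qualified type, renamed member */
        return dictSize(d);     /* the ordinary dict API is unchanged */
    }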
@ -1235,7 +1248,7 @@ typedef struct redisOp {
|
|||||||
} redisOp;
|
} redisOp;
|
||||||
|
|
||||||
/* Defines an array of Redis operations. There is an API to add to this
|
/* Defines an array of Redis operations. There is an API to add to this
|
||||||
* structure in a easy way.
|
* structure in an easy way.
|
||||||
*
|
*
|
||||||
* redisOpArrayInit();
|
* redisOpArrayInit();
|
||||||
* redisOpArrayAppend();
|
* redisOpArrayAppend();
|
||||||
@ -1342,9 +1355,11 @@ struct clusterState;
|
|||||||
#endif
|
#endif
|
||||||
|
|
||||||
#define CHILD_INFO_MAGIC 0xC17DDA7A12345678LL
|
#define CHILD_INFO_MAGIC 0xC17DDA7A12345678LL
|
||||||
#define CHILD_INFO_TYPE_RDB 0
|
#define CHILD_TYPE_NONE 0
|
||||||
#define CHILD_INFO_TYPE_AOF 1
|
#define CHILD_TYPE_RDB 1
|
||||||
#define CHILD_INFO_TYPE_MODULE 3
|
#define CHILD_TYPE_AOF 2
|
||||||
|
#define CHILD_TYPE_LDB 3
|
||||||
|
#define CHILD_TYPE_MODULE 4
|
||||||
|
|
||||||
#define MAX_EVENT_LOOPS 16
|
#define MAX_EVENT_LOOPS 16
|
||||||
#define IDX_EVENT_LOOP_MAIN 0
|
#define IDX_EVENT_LOOP_MAIN 0
|
||||||
@ -1409,6 +1424,7 @@ struct redisMaster {
|
|||||||
struct redisServerConst {
|
struct redisServerConst {
|
||||||
pid_t pid; /* Main process pid. */
|
pid_t pid; /* Main process pid. */
|
||||||
time_t stat_starttime; /* Server start time */
|
time_t stat_starttime; /* Server start time */
|
||||||
|
pthread_t main_thread_id; /* Main thread id */
|
||||||
char *configfile; /* Absolute config file path, or NULL */
|
char *configfile; /* Absolute config file path, or NULL */
|
||||||
char *executable; /* Absolute executable file path. */
|
char *executable; /* Absolute executable file path. */
|
||||||
char **exec_argv; /* Executable argv vector (copy). */
|
char **exec_argv; /* Executable argv vector (copy). */
|
||||||
@ -1466,9 +1482,10 @@ struct redisServer {
|
|||||||
the actual 'hz' field value if dynamic-hz
|
the actual 'hz' field value if dynamic-hz
|
||||||
is enabled. */
|
is enabled. */
|
||||||
std::atomic<int> hz; /* serverCron() calls frequency in hertz */
|
std::atomic<int> hz; /* serverCron() calls frequency in hertz */
|
||||||
|
int in_fork_child; /* indication that this is a fork child */
|
||||||
redisDb *db;
|
redisDb *db;
|
||||||
dict *commands; /* Command table */
|
::dict *commands; /* Command table */
|
||||||
dict *orig_commands; /* Command table before command renaming. */
|
::dict *orig_commands; /* Command table before command renaming. */
|
||||||
|
|
||||||
struct redisServerThreadVars rgthreadvar[MAX_EVENT_LOOPS];
|
struct redisServerThreadVars rgthreadvar[MAX_EVENT_LOOPS];
|
||||||
|
|
||||||
@ -1481,9 +1498,10 @@ struct redisServer {
|
|||||||
int sentinel_mode; /* True if this instance is a Sentinel. */
|
int sentinel_mode; /* True if this instance is a Sentinel. */
|
||||||
size_t initial_memory_usage; /* Bytes used after initialization. */
|
size_t initial_memory_usage; /* Bytes used after initialization. */
|
||||||
int always_show_logo; /* Show logo even for non-stdout logging. */
|
int always_show_logo; /* Show logo even for non-stdout logging. */
|
||||||
|
char *ignore_warnings; /* Config: warnings that should be ignored. */
|
||||||
/* Modules */
|
/* Modules */
|
||||||
dict *moduleapi; /* Exported core APIs dictionary for modules. */
|
::dict *moduleapi; /* Exported core APIs dictionary for modules. */
|
||||||
dict *sharedapi; /* Like moduleapi but containing the APIs that
|
::dict *sharedapi; /* Like moduleapi but containing the APIs that
|
||||||
modules share with each other. */
|
modules share with each other. */
|
||||||
list *loadmodule_queue; /* List of modules to load at startup. */
|
list *loadmodule_queue; /* List of modules to load at startup. */
|
||||||
pid_t module_child_pid; /* PID of module child */
|
pid_t module_child_pid; /* PID of module child */
|
||||||
@ -1504,7 +1522,7 @@ struct redisServer {
|
|||||||
rax *clients_timeout_table; /* Radix tree for blocked clients timeouts. */
|
rax *clients_timeout_table; /* Radix tree for blocked clients timeouts. */
|
||||||
rax *clients_index; /* Active clients dictionary by client ID. */
|
rax *clients_index; /* Active clients dictionary by client ID. */
|
||||||
mstime_t clients_pause_end_time; /* Time when we undo clients_paused */
|
mstime_t clients_pause_end_time; /* Time when we undo clients_paused */
|
||||||
dict *migrate_cached_sockets;/* MIGRATE cached sockets */
|
::dict *migrate_cached_sockets;/* MIGRATE cached sockets */
|
||||||
std::atomic<uint64_t> next_client_id; /* Next client unique ID. Incremental. */
|
std::atomic<uint64_t> next_client_id; /* Next client unique ID. Incremental. */
|
||||||
int protected_mode; /* Don't accept external connections. */
|
int protected_mode; /* Don't accept external connections. */
|
||||||
long long events_processed_while_blocked; /* processEventsWhileBlocked() */
|
long long events_processed_while_blocked; /* processEventsWhileBlocked() */
|
||||||
@ -1692,7 +1710,7 @@ struct redisServer {
|
|||||||
char *slave_announce_ip; /* Give the master this ip address. */
|
char *slave_announce_ip; /* Give the master this ip address. */
|
||||||
int repl_slave_lazy_flush; /* Lazy FLUSHALL before loading DB? */
|
int repl_slave_lazy_flush; /* Lazy FLUSHALL before loading DB? */
|
||||||
/* Replication script cache. */
|
/* Replication script cache. */
|
||||||
dict *repl_scriptcache_dict; /* SHA1 all slaves are aware of. */
|
::dict *repl_scriptcache_dict; /* SHA1 all slaves are aware of. */
|
||||||
list *repl_scriptcache_fifo; /* First in, first out LRU eviction. */
|
list *repl_scriptcache_fifo; /* First in, first out LRU eviction. */
|
||||||
unsigned int repl_scriptcache_size; /* Max number of elements. */
|
unsigned int repl_scriptcache_size; /* Max number of elements. */
|
||||||
/* Synchronous replication. */
|
/* Synchronous replication. */
|
||||||
@ -1702,7 +1720,7 @@ struct redisServer {
|
|||||||
unsigned int maxclients; /* Max number of simultaneous clients */
|
unsigned int maxclients; /* Max number of simultaneous clients */
|
||||||
unsigned long long maxmemory; /* Max number of memory bytes to use */
|
unsigned long long maxmemory; /* Max number of memory bytes to use */
|
||||||
int maxmemory_policy; /* Policy for key eviction */
|
int maxmemory_policy; /* Policy for key eviction */
|
||||||
int maxmemory_samples; /* Pricision of random sampling */
|
int maxmemory_samples; /* Precision of random sampling */
|
||||||
int lfu_log_factor; /* LFU logarithmic counter factor. */
|
int lfu_log_factor; /* LFU logarithmic counter factor. */
|
||||||
int lfu_decay_time; /* LFU counter decay factor. */
|
int lfu_decay_time; /* LFU counter decay factor. */
|
||||||
long long proto_max_bulk_len; /* Protocol bulk length maximum size. */
|
long long proto_max_bulk_len; /* Protocol bulk length maximum size. */
|
||||||
@ -1741,9 +1759,9 @@ struct redisServer {
|
|||||||
mstime_t mstime; /* 'unixtime' in milliseconds. */
|
mstime_t mstime; /* 'unixtime' in milliseconds. */
|
||||||
ustime_t ustime; /* 'unixtime' in microseconds. */
|
ustime_t ustime; /* 'unixtime' in microseconds. */
|
||||||
/* Pubsub */
|
/* Pubsub */
|
||||||
dict *pubsub_channels; /* Map channels to list of subscribed clients */
|
::dict *pubsub_channels; /* Map channels to list of subscribed clients */
|
||||||
list *pubsub_patterns; /* A list of pubsub_patterns */
|
list *pubsub_patterns; /* A list of pubsub_patterns */
|
||||||
dict *pubsub_patterns_dict; /* A dict of pubsub_patterns */
|
::dict *pubsub_patterns_dict; /* A dict of pubsub_patterns */
|
||||||
int notify_keyspace_events; /* Events to propagate via Pub/Sub. This is an
|
int notify_keyspace_events; /* Events to propagate via Pub/Sub. This is an
|
||||||
xor of NOTIFY_... flags. */
|
xor of NOTIFY_... flags. */
|
||||||
/* Cluster */
|
/* Cluster */
|
||||||
@ -1771,7 +1789,7 @@ struct redisServer {
|
|||||||
lua_State *lua; /* The Lua interpreter. We use just one for all clients */
|
lua_State *lua; /* The Lua interpreter. We use just one for all clients */
|
||||||
client *lua_caller = nullptr; /* The client running EVAL right now, or NULL */
|
client *lua_caller = nullptr; /* The client running EVAL right now, or NULL */
|
||||||
char* lua_cur_script = nullptr; /* SHA1 of the script currently running, or NULL */
|
char* lua_cur_script = nullptr; /* SHA1 of the script currently running, or NULL */
|
||||||
dict *lua_scripts; /* A dictionary of SHA1 -> Lua scripts */
|
::dict *lua_scripts; /* A dictionary of SHA1 -> Lua scripts */
|
||||||
unsigned long long lua_scripts_mem; /* Cached scripts' memory + oh */
|
unsigned long long lua_scripts_mem; /* Cached scripts' memory + oh */
|
||||||
mstime_t lua_time_limit; /* Script timeout in milliseconds */
|
mstime_t lua_time_limit; /* Script timeout in milliseconds */
|
||||||
mstime_t lua_time_start; /* Start time of script, milliseconds time */
|
mstime_t lua_time_start; /* Start time of script, milliseconds time */
|
||||||
@ -1780,7 +1798,7 @@ struct redisServer {
|
|||||||
int lua_random_dirty; /* True if a random command was called during the
|
int lua_random_dirty; /* True if a random command was called during the
|
||||||
execution of the current script. */
|
execution of the current script. */
|
||||||
int lua_replicate_commands; /* True if we are doing single commands repl. */
|
int lua_replicate_commands; /* True if we are doing single commands repl. */
|
||||||
int lua_multi_emitted;/* True if we already proagated MULTI. */
|
int lua_multi_emitted;/* True if we already propagated MULTI. */
|
||||||
int lua_repl; /* Script replication flags for redis.set_repl(). */
|
int lua_repl; /* Script replication flags for redis.set_repl(). */
|
||||||
int lua_timedout; /* True if we reached the time limit for script
|
int lua_timedout; /* True if we reached the time limit for script
|
||||||
execution. */
|
execution. */
|
||||||
@ -1794,7 +1812,7 @@ struct redisServer {
|
|||||||
int lazyfree_lazy_user_del;
|
int lazyfree_lazy_user_del;
|
||||||
/* Latency monitor */
|
/* Latency monitor */
|
||||||
long long latency_monitor_threshold;
|
long long latency_monitor_threshold;
|
||||||
dict *latency_events;
|
::dict *latency_events;
|
||||||
/* ACLs */
|
/* ACLs */
|
||||||
char *acl_filename; /* ACL Users file. NULL if not configured. */
|
char *acl_filename; /* ACL Users file. NULL if not configured. */
|
||||||
unsigned long acllog_max_len; /* Maximum length of the ACL LOG list. */
|
unsigned long acllog_max_len; /* Maximum length of the ACL LOG list. */
|
||||||
@ -1839,8 +1857,21 @@ typedef struct pubsubPattern {
|
|||||||
robj *pattern;
|
robj *pattern;
|
||||||
} pubsubPattern;
|
} pubsubPattern;
|
||||||
|
|
||||||
|
#define MAX_KEYS_BUFFER 256
|
||||||
|
|
||||||
|
/* A result structure for the various getkeys function calls. It lists the
|
||||||
|
* keys as indices to the provided argv.
|
||||||
|
*/
|
||||||
|
typedef struct {
|
||||||
|
int keysbuf[MAX_KEYS_BUFFER]; /* Pre-allocated buffer, to save heap allocations */
|
||||||
|
int *keys; /* Key indices array, points to keysbuf or heap */
|
||||||
|
int numkeys; /* Number of key indices return */
|
||||||
|
int size; /* Available array size */
|
||||||
|
} getKeysResult;
|
||||||
|
#define GETKEYS_RESULT_INIT { {0}, NULL, 0, MAX_KEYS_BUFFER }
|
||||||
|
|
||||||
typedef void redisCommandProc(client *c);
|
typedef void redisCommandProc(client *c);
|
||||||
typedef int *redisGetKeysProc(struct redisCommand *cmd, robj **argv, int argc, int *numkeys);
|
typedef int redisGetKeysProc(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);
|
||||||
struct redisCommand {
|
struct redisCommand {
|
||||||
const char *name;
|
const char *name;
|
||||||
redisCommandProc *proc;
|
redisCommandProc *proc;
|
||||||
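With redisGetKeysProc now returning a count and filling a getKeysResult, a command-specific getkeys callback follows the pattern sketched below. This is only a sketch under assumptions: the command shape (CMD key1 key2) is invented, and it uses the getKeysPrepareResult() helper declared further down in this header.

    /* Hypothetical callback for a command of the form: CMD <key1> <key2>. */
    int exampleGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {
        UNUSED(cmd);
        UNUSED(argv);
        int numkeys = (argc >= 3) ? 2 : 0;
        int *keys = getKeysPrepareResult(result, numkeys); /* grows past keysbuf if needed */
        for (int i = 0; i < numkeys; i++)
            keys[i] = i + 1;            /* the keys are argv[1] and argv[2] */
        result->numkeys = numkeys;
        return numkeys;
    }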
@ -1952,7 +1983,7 @@ extern dictType modulesDictType;
|
|||||||
void moduleInitModulesSystem(void);
|
void moduleInitModulesSystem(void);
|
||||||
int moduleLoad(const char *path, void **argv, int argc);
|
int moduleLoad(const char *path, void **argv, int argc);
|
||||||
void moduleLoadFromQueue(void);
|
void moduleLoadFromQueue(void);
|
||||||
int *moduleGetCommandKeysViaAPI(struct redisCommand *cmd, robj **argv, int argc, int *numkeys);
|
int moduleGetCommandKeysViaAPI(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);
|
||||||
moduleType *moduleTypeLookupModuleByID(uint64_t id);
|
moduleType *moduleTypeLookupModuleByID(uint64_t id);
|
||||||
void moduleTypeNameByID(char *name, uint64_t moduleid);
|
void moduleTypeNameByID(char *name, uint64_t moduleid);
|
||||||
void moduleFreeContext(struct RedisModuleCtx *ctx);
|
void moduleFreeContext(struct RedisModuleCtx *ctx);
|
||||||
@ -2124,7 +2155,7 @@ void initClientMultiState(client *c);
|
|||||||
void freeClientMultiState(client *c);
|
void freeClientMultiState(client *c);
|
||||||
void queueMultiCommand(client *c);
|
void queueMultiCommand(client *c);
|
||||||
void touchWatchedKey(redisDb *db, robj *key);
|
void touchWatchedKey(redisDb *db, robj *key);
|
||||||
void touchWatchedKeysOnFlush(int dbid);
|
void touchAllWatchedKeysInDb(redisDb *emptied, redisDb *replaced_with);
|
||||||
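touchAllWatchedKeysInDb() replaces touchWatchedKeysOnFlush(): instead of a database id it takes the database being emptied and, optionally, the one replacing it, which is what lets SWAPDB invalidate WATCHed keys on both sides (the "SWAPDB invalidates WATCHed keys" item in the release notes). A sketch of a swap-style caller; the surrounding function is invented and only the invalidation pattern is shown:

    /* Not the actual swapdbCommand(); illustration only. */
    static void swapDatabasesSketch(redisDb *db1, redisDb *db2) {
        /* Clients WATCHing keys in either DB must get their MULTI state
         * flagged dirty, because the keyspace they observed is replaced. */
        touchAllWatchedKeysInDb(db1, db2);
        touchAllWatchedKeysInDb(db2, db1);
        /* ... the actual exchange of the two keyspaces happens here ... */
    }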
void discardTransaction(client *c);
|
void discardTransaction(client *c);
|
||||||
void flagTransaction(client *c);
|
void flagTransaction(client *c);
|
||||||
void execCommandAbort(client *c, sds error);
|
void execCommandAbort(client *c, sds error);
|
||||||
@ -2179,7 +2210,7 @@ const char *strEncoding(int encoding);
|
|||||||
int compareStringObjects(robj *a, robj *b);
|
int compareStringObjects(robj *a, robj *b);
|
||||||
int collateStringObjects(robj *a, robj *b);
|
int collateStringObjects(robj *a, robj *b);
|
||||||
int equalStringObjects(robj *a, robj *b);
|
int equalStringObjects(robj *a, robj *b);
|
||||||
unsigned long long estimateObjectIdleTime(robj *o);
|
unsigned long long estimateObjectIdleTime(robj_roptr o);
|
||||||
void trimStringObjectIfNeeded(robj *o);
|
void trimStringObjectIfNeeded(robj *o);
|
||||||
#define sdsEncodedObject(objptr) (objptr->encoding == OBJ_ENCODING_RAW || objptr->encoding == OBJ_ENCODING_EMBSTR)
|
#define sdsEncodedObject(objptr) (objptr->encoding == OBJ_ENCODING_RAW || objptr->encoding == OBJ_ENCODING_EMBSTR)
|
||||||
|
|
||||||
@ -2243,6 +2274,7 @@ int writeCommandsDeniedByDiskError(void);
|
|||||||
/* RDB persistence */
|
/* RDB persistence */
|
||||||
#include "rdb.h"
|
#include "rdb.h"
|
||||||
void killRDBChild(void);
|
void killRDBChild(void);
|
||||||
|
int bg_unlink(const char *filename);
|
||||||
|
|
||||||
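bg_unlink() is added so that large files (typically temporary RDB files) can be removed without stalling the main thread. A sketch of a call site, assuming it follows unlink(2)'s return/errno convention; the wrapper and file name are illustrative:

    static void removeTempRdbFileSketch(const char *tmpfile) {
        if (bg_unlink(tmpfile) == -1 && errno != ENOENT)
            serverLog(LL_WARNING, "Error removing temp RDB file %s: %s",
                      tmpfile, strerror(errno));
    }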
/* AOF persistence */
|
/* AOF persistence */
|
||||||
void flushAppendOnlyFile(int force);
|
void flushAppendOnlyFile(int force);
|
||||||
@ -2266,7 +2298,7 @@ void sendChildInfo(int process_type);
|
|||||||
void receiveChildInfo(void);
|
void receiveChildInfo(void);
|
||||||
|
|
||||||
/* Fork helpers */
|
/* Fork helpers */
|
||||||
int redisFork();
|
int redisFork(int type);
|
||||||
int hasActiveChildProcess();
|
int hasActiveChildProcess();
|
||||||
void sendChildCOWInfo(int ptype, const char *pname);
|
void sendChildCOWInfo(int ptype, const char *pname);
|
||||||
|
|
||||||
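redisFork() now takes one of the CHILD_TYPE_* constants introduced earlier, so the server can track which kind of child (RDB, AOF, LDB, module) is running. A sketch of the caller-side pattern under assumptions: the surrounding function is invented, the helper names are taken from this header, and the real rdbSaveBackground() does considerably more work.

    static int backgroundSaveSketch(void) {
        int childpid = redisFork(CHILD_TYPE_RDB);    /* was: redisFork() */
        if (childpid == 0) {
            /* Child: perform the RDB save, then report copy-on-write usage. */
            /* ... write the RDB file ... */
            sendChildCOWInfo(CHILD_TYPE_RDB, "RDB");
            exitFromChild(0);
        } else if (childpid == -1) {
            return C_ERR;                            /* fork failed */
        }
        return C_OK;                                 /* parent: child tracked by type */
    }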
@ -2314,7 +2346,7 @@ void addACLLogEntry(client *c, int reason, int keypos, sds username);
|
|||||||
/* Flags only used by the ZADD command but not by zsetAdd() API: */
|
/* Flags only used by the ZADD command but not by zsetAdd() API: */
|
||||||
#define ZADD_CH (1<<16) /* Return num of elements added or updated. */
|
#define ZADD_CH (1<<16) /* Return num of elements added or updated. */
|
||||||
|
|
||||||
/* Struct to hold a inclusive/exclusive range spec by score comparison. */
|
/* Struct to hold an inclusive/exclusive range spec by score comparison. */
|
||||||
typedef struct {
|
typedef struct {
|
||||||
double min, max;
|
double min, max;
|
||||||
int minex, maxex; /* are min or max exclusive? */
|
int minex, maxex; /* are min or max exclusive? */
|
||||||
@ -2489,13 +2521,14 @@ robj_roptr lookupKeyReadOrReply(client *c, robj *key, robj *reply);
|
|||||||
robj *lookupKeyWriteOrReply(client *c, robj *key, robj *reply);
|
robj *lookupKeyWriteOrReply(client *c, robj *key, robj *reply);
|
||||||
robj_roptr lookupKeyReadWithFlags(redisDb *db, robj *key, int flags);
|
robj_roptr lookupKeyReadWithFlags(redisDb *db, robj *key, int flags);
|
||||||
robj *lookupKeyWriteWithFlags(redisDb *db, robj *key, int flags);
|
robj *lookupKeyWriteWithFlags(redisDb *db, robj *key, int flags);
|
||||||
robj *objectCommandLookup(client *c, robj *key);
|
robj_roptr objectCommandLookup(client *c, robj *key);
|
||||||
robj *objectCommandLookupOrReply(client *c, robj *key, robj *reply);
|
robj_roptr objectCommandLookupOrReply(client *c, robj *key, robj *reply);
|
||||||
int objectSetLRUOrLFU(robj *val, long long lfu_freq, long long lru_idle,
|
int objectSetLRUOrLFU(robj *val, long long lfu_freq, long long lru_idle,
|
||||||
long long lru_clock, int lru_multiplier);
|
long long lru_clock, int lru_multiplier);
|
||||||
#define LOOKUP_NONE 0
|
#define LOOKUP_NONE 0
|
||||||
#define LOOKUP_NOTOUCH (1<<0)
|
#define LOOKUP_NOTOUCH (1<<0)
|
||||||
#define LOOKUP_UPDATEMVCC (1<<1)
|
#define LOOKUP_NONOTIFY (1<<1)
|
||||||
|
#define LOOKUP_UPDATEMVCC (1<<2)
|
||||||
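LOOKUP_NONOTIFY is new, and KeyDB's LOOKUP_UPDATEMVCC moves to the next bit. Combined with LOOKUP_NOTOUCH it lets read-only introspection paths look a key up without bumping LRU/LFU and without firing keyspace-miss notifications, which appears to be what backs the "EXISTS should not alter LRU" and "OBJECT should not reveal logically expired keys" release-note items. One-line sketch (wrapper name illustrative):

    static robj_roptr lookupForIntrospectionSketch(redisDb *db, robj *key) {
        return lookupKeyReadWithFlags(db, key, LOOKUP_NOTOUCH | LOOKUP_NONOTIFY);
    }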
void dbAdd(redisDb *db, robj *key, robj *val);
|
void dbAdd(redisDb *db, robj *key, robj *val);
|
||||||
int dbAddRDBLoad(redisDb *db, sds key, robj *val);
|
int dbAddRDBLoad(redisDb *db, sds key, robj *val);
|
||||||
void dbOverwrite(redisDb *db, robj *key, robj *val);
|
void dbOverwrite(redisDb *db, robj *key, robj *val);
|
||||||
@ -2510,11 +2543,14 @@ robj *dbUnshareStringValue(redisDb *db, robj *key, robj *o);
|
|||||||
|
|
||||||
#define EMPTYDB_NO_FLAGS 0 /* No flags. */
|
#define EMPTYDB_NO_FLAGS 0 /* No flags. */
|
||||||
#define EMPTYDB_ASYNC (1<<0) /* Reclaim memory in another thread. */
|
#define EMPTYDB_ASYNC (1<<0) /* Reclaim memory in another thread. */
|
||||||
#define EMPTYDB_BACKUP (1<<2) /* DB array is a backup for REPL_DISKLESS_LOAD_SWAPDB. */
|
|
||||||
long long emptyDb(int dbnum, int flags, void(callback)(void*));
|
long long emptyDb(int dbnum, int flags, void(callback)(void*));
|
||||||
long long emptyDbGeneric(redisDb *dbarray, int dbnum, int flags, void(callback)(void*));
|
long long emptyDbStructure(redisDb *dbarray, int dbnum, int async, void(callback)(void*));
|
||||||
void flushAllDataAndResetRDB(int flags);
|
void flushAllDataAndResetRDB(int flags);
|
||||||
long long dbTotalServerKeyCount();
|
long long dbTotalServerKeyCount();
|
||||||
|
dbBackup *backupDb(void);
|
||||||
|
void restoreDbBackup(dbBackup *buckup);
|
||||||
|
void discardDbBackup(dbBackup *buckup, int flags, void(callback)(void*));
|
||||||
|
|
||||||
|
|
||||||
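The EMPTYDB_BACKUP flag is gone; backupDb()/restoreDbBackup()/discardDbBackup() now provide an explicit backup object, used by diskless replica loading with swapdb. A sketch of the intended flow under assumptions: error handling and the actual RDB loading are elided, and the surrounding function is invented.

    static void disklessLoadSwapdbSketch(void) {
        dbBackup *backup = backupDb();              /* keep the current keyspace aside */
        emptyDb(-1, EMPTYDB_NO_FLAGS, NULL);        /* load the master's RDB into empty DBs */
        int load_ok = 0; /* ... = result of loading the RDB stream ... */
        if (load_ok)
            discardDbBackup(backup, EMPTYDB_NO_FLAGS, NULL);  /* success: drop the old data */
        else
            restoreDbBackup(backup);                /* failure: put the old keyspace back */
    }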
int selectDb(client *c, int id);
|
int selectDb(client *c, int id);
|
||||||
void signalModifiedKey(client *c, redisDb *db, robj *key);
|
void signalModifiedKey(client *c, redisDb *db, robj *key);
|
||||||
@ -2527,24 +2563,27 @@ void scanGenericCommand(client *c, robj_roptr o, unsigned long cursor);
|
|||||||
int parseScanCursorOrReply(client *c, robj *o, unsigned long *cursor);
|
int parseScanCursorOrReply(client *c, robj *o, unsigned long *cursor);
|
||||||
void slotToKeyAdd(sds key);
|
void slotToKeyAdd(sds key);
|
||||||
void slotToKeyDel(sds key);
|
void slotToKeyDel(sds key);
|
||||||
void slotToKeyFlush(void);
|
|
||||||
int dbAsyncDelete(redisDb *db, robj *key);
|
int dbAsyncDelete(redisDb *db, robj *key);
|
||||||
void emptyDbAsync(redisDb *db);
|
void emptyDbAsync(redisDb *db);
|
||||||
void slotToKeyFlushAsync(void);
|
void slotToKeyFlush(int async);
|
||||||
size_t lazyfreeGetPendingObjectsCount(void);
|
size_t lazyfreeGetPendingObjectsCount(void);
|
||||||
void freeObjAsync(robj *o);
|
void freeObjAsync(robj *obj);
|
||||||
|
void freeSlotsToKeysMapAsync(rax *rt);
|
||||||
|
void freeSlotsToKeysMap(rax *rt, int async);
|
||||||
|
|
||||||
|
|
||||||
/* API to get key arguments from commands */
|
/* API to get key arguments from commands */
|
||||||
int *getKeysFromCommand(struct redisCommand *cmd, robj **argv, int argc, int *numkeys);
|
int *getKeysPrepareResult(getKeysResult *result, int numkeys);
|
||||||
void getKeysFreeResult(int *result);
|
int getKeysFromCommand(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);
|
||||||
int *zunionInterGetKeys(struct redisCommand *cmd,robj **argv, int argc, int *numkeys);
|
void getKeysFreeResult(getKeysResult *result);
|
||||||
int *evalGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys);
|
int zunionInterGetKeys(struct redisCommand *cmd,robj **argv, int argc, getKeysResult *result);
|
||||||
int *sortGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys);
|
int evalGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);
|
||||||
int *migrateGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys);
|
int sortGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);
|
||||||
int *georadiusGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys);
|
int migrateGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);
|
||||||
int *xreadGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys);
|
int georadiusGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);
|
||||||
int *memoryGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys);
|
int xreadGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);
|
||||||
int *lcsGetKeys(struct redisCommand *cmd, robj **argv, int argc, int *numkeys);
|
int memoryGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);
|
||||||
|
int lcsGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);
|
||||||
|
|
||||||
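On the caller side, the reworked key-extraction API initializes a getKeysResult (pre-allocated buffer first, heap only if needed), reads the argv indices out of it, and releases it with getKeysFreeResult(). Sketch, with the loop body illustrative:

    static void forEachCommandKeySketch(struct redisCommand *cmd, robj **argv, int argc) {
        getKeysResult result = GETKEYS_RESULT_INIT;
        int numkeys = getKeysFromCommand(cmd, argv, argc, &result);
        for (int j = 0; j < numkeys; j++) {
            robj *key = argv[result.keys[j]];   /* keys[] holds argv indices */
            (void)key;                          /* ... inspect or track the key ... */
        }
        getKeysFreeResult(&result);             /* frees only if it spilled to the heap */
    }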
/* Cluster */
|
/* Cluster */
|
||||||
void clusterInit(void);
|
void clusterInit(void);
|
||||||
@ -2911,6 +2950,10 @@ void runAndPropogateToReplicas(FN_PTR *pfn, TARGS... args) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
void killIOThreads(void);
|
||||||
|
void killThreads(void);
|
||||||
|
void makeThreadKillable(void);
|
||||||
|
|
||||||
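killThreads()/makeThreadKillable(), together with the new main_thread_id field above, let the server cancel its background threads on abort paths; each worker thread is expected to opt in at startup. A sketch of a thread entry point, with the body invented:

    static void *workerThreadMainSketch(void *arg) {
        makeThreadKillable();       /* enable asynchronous pthread cancellation */
        for (;;) {
            /* ... wait for and process background jobs ... */
        }
        return arg;
    }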
/* TLS stuff */
|
/* TLS stuff */
|
||||||
void tlsInit(void);
|
void tlsInit(void);
|
||||||
void tlsInitThread();
|
void tlsInitThread();
|
||||||
|
@ -36,6 +36,10 @@
|
|||||||
#include <sys/param.h>
|
#include <sys/param.h>
|
||||||
#include <sys/cpuset.h>
|
#include <sys/cpuset.h>
|
||||||
#endif
|
#endif
|
||||||
|
#ifdef __DragonFly__
|
||||||
|
#include <pthread.h>
|
||||||
|
#include <pthread_np.h>
|
||||||
|
#endif
|
||||||
#ifdef __NetBSD__
|
#ifdef __NetBSD__
|
||||||
#include <pthread.h>
|
#include <pthread.h>
|
||||||
#include <sched.h>
|
#include <sched.h>
|
||||||
@ -72,7 +76,7 @@ void setcpuaffinity(const char *cpulist) {
|
|||||||
#ifdef __linux__
|
#ifdef __linux__
|
||||||
cpu_set_t cpuset;
|
cpu_set_t cpuset;
|
||||||
#endif
|
#endif
|
||||||
#ifdef __FreeBSD__
|
#if defined (__FreeBSD__) || defined(__DragonFly__)
|
||||||
cpuset_t cpuset;
|
cpuset_t cpuset;
|
||||||
#endif
|
#endif
|
||||||
#ifdef __NetBSD__
|
#ifdef __NetBSD__
|
||||||
@ -139,6 +143,9 @@ void setcpuaffinity(const char *cpulist) {
|
|||||||
#ifdef __FreeBSD__
|
#ifdef __FreeBSD__
|
||||||
cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID, -1, sizeof(cpuset), &cpuset);
|
cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID, -1, sizeof(cpuset), &cpuset);
|
||||||
#endif
|
#endif
|
||||||
|
#ifdef __DragonFly__
|
||||||
|
pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);
|
||||||
|
#endif
|
||||||
#ifdef __NetBSD__
|
#ifdef __NetBSD__
|
||||||
pthread_setaffinity_np(pthread_self(), cpuset_size(cpuset), cpuset);
|
pthread_setaffinity_np(pthread_self(), cpuset_size(cpuset), cpuset);
|
||||||
cpuset_destroy(cpuset);
|
cpuset_destroy(cpuset);
|
||||||
|
@ -50,6 +50,10 @@
|
|||||||
#if !HAVE_SETPROCTITLE
|
#if !HAVE_SETPROCTITLE
|
||||||
#if (defined __linux || defined __APPLE__)
|
#if (defined __linux || defined __APPLE__)
|
||||||
|
|
||||||
|
#ifdef __GLIBC__
|
||||||
|
#define HAVE_CLEARENV
|
||||||
|
#endif
|
||||||
|
|
||||||
extern char **environ;
|
extern char **environ;
|
||||||
|
|
||||||
static struct {
|
static struct {
|
||||||
@ -80,11 +84,9 @@ static inline size_t spt_min(size_t a, size_t b) {
|
|||||||
* For discussion on the portability of the various methods, see
|
* For discussion on the portability of the various methods, see
|
||||||
* http://lists.freebsd.org/pipermail/freebsd-stable/2008-June/043136.html
|
* http://lists.freebsd.org/pipermail/freebsd-stable/2008-June/043136.html
|
||||||
*/
|
*/
|
||||||
static int spt_clearenv(void) {
|
int spt_clearenv(void) {
|
||||||
#if __GLIBC__
|
#ifdef HAVE_CLEARENV
|
||||||
clearenv();
|
return clearenv();
|
||||||
|
|
||||||
return 0;
|
|
||||||
#else
|
#else
|
||||||
extern char **environ;
|
extern char **environ;
|
||||||
static char **tmp;
|
static char **tmp;
|
||||||
@ -100,34 +102,62 @@ static int spt_clearenv(void) {
|
|||||||
} /* spt_clearenv() */
|
} /* spt_clearenv() */
|
||||||
|
|
||||||
|
|
||||||
static int spt_copyenv(char *oldenv[]) {
|
static int spt_copyenv(int envc, char *oldenv[]) {
|
||||||
extern char **environ;
|
extern char **environ;
|
||||||
|
char **envcopy = NULL;
|
||||||
char *eq;
|
char *eq;
|
||||||
int i, error;
|
int i, error;
|
||||||
|
int envsize;
|
||||||
|
|
||||||
if (environ != oldenv)
|
if (environ != oldenv)
|
||||||
return 0;
|
return 0;
|
||||||
|
|
||||||
if ((error = spt_clearenv()))
|
/* Copy environ into envcopy before clearing it. Shallow copy is
|
||||||
goto error;
|
* enough as clearenv() only clears the environ array.
|
||||||
|
*/
|
||||||
|
envsize = (envc + 1) * sizeof(char *);
|
||||||
|
envcopy = malloc(envsize);
|
||||||
|
if (!envcopy)
|
||||||
|
return ENOMEM;
|
||||||
|
memcpy(envcopy, oldenv, envsize);
|
||||||
|
|
||||||
for (i = 0; oldenv[i]; i++) {
|
/* Note that the state after clearenv() failure is undefined, but we'll
|
||||||
if (!(eq = strchr(oldenv[i], '=')))
|
* just assume an error means it was left unchanged.
|
||||||
|
*/
|
||||||
|
if ((error = spt_clearenv())) {
|
||||||
|
environ = oldenv;
|
||||||
|
free(envcopy);
|
||||||
|
return error;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Set environ from envcopy */
|
||||||
|
for (i = 0; envcopy[i]; i++) {
|
||||||
|
if (!(eq = strchr(envcopy[i], '=')))
|
||||||
continue;
|
continue;
|
||||||
|
|
||||||
*eq = '\0';
|
*eq = '\0';
|
||||||
error = (0 != setenv(oldenv[i], eq + 1, 1))? errno : 0;
|
error = (0 != setenv(envcopy[i], eq + 1, 1))? errno : 0;
|
||||||
*eq = '=';
|
*eq = '=';
|
||||||
|
|
||||||
if (error)
|
/* On error, do our best to restore state */
|
||||||
goto error;
|
if (error) {
|
||||||
|
#ifdef HAVE_CLEARENV
|
||||||
|
/* We don't assume it is safe to free environ, so we
|
||||||
|
* may leak it. As clearenv() was shallow using envcopy
|
||||||
|
* here is safe.
|
||||||
|
*/
|
||||||
|
environ = envcopy;
|
||||||
|
#else
|
||||||
|
free(envcopy);
|
||||||
|
free(environ); /* Safe to free, we have just alloc'd it */
|
||||||
|
environ = oldenv;
|
||||||
|
#endif
|
||||||
|
return error;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
free(envcopy);
|
||||||
return 0;
|
return 0;
|
||||||
error:
|
|
||||||
environ = oldenv;
|
|
||||||
|
|
||||||
return error;
|
|
||||||
} /* spt_copyenv() */
|
} /* spt_copyenv() */
|
||||||
|
|
||||||
|
|
||||||
@ -148,32 +178,57 @@ static int spt_copyargs(int argc, char *argv[]) {
|
|||||||
return 0;
|
return 0;
|
||||||
} /* spt_copyargs() */
|
} /* spt_copyargs() */
|
||||||
|
|
||||||
|
/* Initialize and populate SPT to allow a future setproctitle()
|
||||||
|
* call.
|
||||||
|
*
|
||||||
|
* As setproctitle() basically needs to overwrite argv[0], we're
|
||||||
|
* trying to determine what is the largest contiguous block
|
||||||
|
* starting at argv[0] we can use for this purpose.
|
||||||
|
*
|
||||||
|
* As this range will overwrite some or all of the argv and environ
|
||||||
|
* strings, a deep copy of these two arrays is performed.
|
||||||
|
*/
|
||||||
void spt_init(int argc, char *argv[]) {
|
void spt_init(int argc, char *argv[]) {
|
||||||
char **envp = environ;
|
char **envp = environ;
|
||||||
char *base, *end, *nul, *tmp;
|
char *base, *end, *nul, *tmp;
|
||||||
int i, error;
|
int i, error, envc;
|
||||||
|
|
||||||
if (!(base = argv[0]))
|
if (!(base = argv[0]))
|
||||||
return;
|
return;
|
||||||
|
|
||||||
|
/* We start with end pointing at the end of argv[0] */
|
||||||
nul = &base[strlen(base)];
|
nul = &base[strlen(base)];
|
||||||
end = nul + 1;
|
end = nul + 1;
|
||||||
|
|
||||||
|
/* Attempt to extend end as far as we can, while making sure
|
||||||
|
* that the range between base and end is only allocated to
|
||||||
|
* argv, or anything that immediately follows argv (presumably
|
||||||
|
* envp).
|
||||||
|
*/
|
||||||
for (i = 0; i < argc || (i >= argc && argv[i]); i++) {
|
for (i = 0; i < argc || (i >= argc && argv[i]); i++) {
|
||||||
if (!argv[i] || argv[i] < end)
|
if (!argv[i] || argv[i] < end)
|
||||||
continue;
|
continue;
|
||||||
|
|
||||||
|
if (end >= argv[i] && end <= argv[i] + strlen(argv[i]))
|
||||||
end = argv[i] + strlen(argv[i]) + 1;
|
end = argv[i] + strlen(argv[i]) + 1;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/* In case the envp array was not an immediate extension to argv,
|
||||||
|
* scan it explicitly.
|
||||||
|
*/
|
||||||
for (i = 0; envp[i]; i++) {
|
for (i = 0; envp[i]; i++) {
|
||||||
if (envp[i] < end)
|
if (envp[i] < end)
|
||||||
continue;
|
continue;
|
||||||
|
|
||||||
|
if (end >= envp[i] && end <= envp[i] + strlen(envp[i]))
|
||||||
end = envp[i] + strlen(envp[i]) + 1;
|
end = envp[i] + strlen(envp[i]) + 1;
|
||||||
}
|
}
|
||||||
|
envc = i;
|
||||||
|
|
||||||
|
/* We're going to deep copy argv[], but argv[0] will still point to
|
||||||
|
* the old memory for the purpose of updating the title so we need
|
||||||
|
* to keep the original value elsewhere.
|
||||||
|
*/
|
||||||
if (!(SPT.arg0 = strdup(argv[0])))
|
if (!(SPT.arg0 = strdup(argv[0])))
|
||||||
goto syerr;
|
goto syerr;
|
||||||
|
|
||||||
@ -194,8 +249,8 @@ void spt_init(int argc, char *argv[]) {
|
|||||||
setprogname(tmp);
|
setprogname(tmp);
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
|
/* Now make a full deep copy of the environment and argv[] */
|
||||||
if ((error = spt_copyenv(envp)))
|
if ((error = spt_copyenv(envc, envp)))
|
||||||
goto error;
|
goto error;
|
||||||
|
|
||||||
if ((error = spt_copyargs(argc, argv)))
|
if ((error = spt_copyargs(argc, argv)))
|
||||||
@ -263,3 +318,14 @@ error:
|
|||||||
|
|
||||||
#endif /* __linux || __APPLE__ */
|
#endif /* __linux || __APPLE__ */
|
||||||
#endif /* !HAVE_SETPROCTITLE */
|
#endif /* !HAVE_SETPROCTITLE */
|
||||||
|
|
||||||
|
#ifdef SETPROCTITLE_TEST_MAIN
|
||||||
|
int main(int argc, char *argv[]) {
|
||||||
|
spt_init(argc, argv);
|
||||||
|
|
||||||
|
printf("SPT.arg0: [%p] '%s'\n", SPT.arg0, SPT.arg0);
|
||||||
|
printf("SPT.base: [%p] '%s'\n", SPT.base, SPT.base);
|
||||||
|
printf("SPT.end: [%p] (%d bytes after base)'\n", SPT.end, (int) (SPT.end - SPT.base));
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
#endif
|
||||||
|
@ -22,7 +22,7 @@
|
|||||||
1. We use SipHash 1-2. This is not believed to be as strong as the
|
1. We use SipHash 1-2. This is not believed to be as strong as the
|
||||||
suggested 2-4 variant, but AFAIK there are not trivial attacks
|
suggested 2-4 variant, but AFAIK there are not trivial attacks
|
||||||
against this reduced-rounds version, and it runs at the same speed
|
against this reduced-rounds version, and it runs at the same speed
|
||||||
as Murmurhash2 that we used previously, why the 2-4 variant slowed
|
as Murmurhash2 that we used previously, while the 2-4 variant slowed
|
||||||
down Redis by a 4% figure more or less.
|
down Redis by a 4% figure more or less.
|
||||||
2. Hard-code rounds in the hope the compiler can optimize it more
|
2. Hard-code rounds in the hope the compiler can optimize it more
|
||||||
in this raw from. Anyway we always want the standard 2-4 variant.
|
in this raw from. Anyway we always want the standard 2-4 variant.
|
||||||
@ -36,7 +36,7 @@
|
|||||||
perform a text transformation in some temporary buffer, which is costly.
|
perform a text transformation in some temporary buffer, which is costly.
|
||||||
5. Remove debugging code.
|
5. Remove debugging code.
|
||||||
6. Modified the original test.c file to be a stand-alone function testing
|
6. Modified the original test.c file to be a stand-alone function testing
|
||||||
the function in the new form (returing an uint64_t) using just the
|
the function in the new form (returning an uint64_t) using just the
|
||||||
relevant test vector.
|
relevant test vector.
|
||||||
*/
|
*/
|
||||||
#include <assert.h>
|
#include <assert.h>
|
||||||
@ -46,7 +46,7 @@
|
|||||||
#include <ctype.h>
|
#include <ctype.h>
|
||||||
|
|
||||||
/* Fast tolower() alike function that does not care about locale
|
/* Fast tolower() alike function that does not care about locale
|
||||||
* but just returns a-z insetad of A-Z. */
|
* but just returns a-z instead of A-Z. */
|
||||||
int siptlw(int c) {
|
int siptlw(int c) {
|
||||||
if (c >= 'A' && c <= 'Z') {
|
if (c >= 'A' && c <= 'Z') {
|
||||||
return c+('a'-'A');
|
return c+('a'-'A');
|
||||||
|
@ -75,7 +75,7 @@ slowlogEntry *slowlogCreateEntry(client *c, robj **argv, int argc, long long dur
|
|||||||
} else if (argv[j]->getrefcount(std::memory_order_relaxed) == OBJ_SHARED_REFCOUNT) {
|
} else if (argv[j]->getrefcount(std::memory_order_relaxed) == OBJ_SHARED_REFCOUNT) {
|
||||||
se->argv[j] = argv[j];
|
se->argv[j] = argv[j];
|
||||||
} else {
|
} else {
|
||||||
/* Here we need to dupliacate the string objects composing the
|
/* Here we need to duplicate the string objects composing the
|
||||||
* argument vector of the command, because those may otherwise
|
* argument vector of the command, because those may otherwise
|
||||||
* end shared with string objects stored into keys. Having
|
* end shared with string objects stored into keys. Having
|
||||||
* shared objects between any part of Redis, and the data
|
* shared objects between any part of Redis, and the data
|
||||||
|
10
src/sort.cpp
@ -116,7 +116,7 @@ robj *lookupKeyByPattern(redisDb *db, robj *pattern, robj *subst, int writeflag)
|
|||||||
if (fieldobj) {
|
if (fieldobj) {
|
||||||
if (o->type != OBJ_HASH) goto noobj;
|
if (o->type != OBJ_HASH) goto noobj;
|
||||||
|
|
||||||
/* Retrieve value from hash by the field name. The returend object
|
/* Retrieve value from hash by the field name. The returned object
|
||||||
* is a new object with refcount already incremented. */
|
* is a new object with refcount already incremented. */
|
||||||
o = hashTypeGetValueObject(o, szFromObj(fieldobj));
|
o = hashTypeGetValueObject(o, szFromObj(fieldobj));
|
||||||
} else {
|
} else {
|
||||||
@ -271,7 +271,7 @@ void sortCommand(client *c) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* Lookup the key to sort. It must be of the right types */
|
/* Lookup the key to sort. It must be of the right types */
|
||||||
if (storekey)
|
if (!storekey)
|
||||||
sortval = lookupKeyRead(c->db,c->argv[1]).unsafe_robjcast();
|
sortval = lookupKeyRead(c->db,c->argv[1]).unsafe_robjcast();
|
||||||
else
|
else
|
||||||
sortval = lookupKeyWrite(c->db,c->argv[1]);
|
sortval = lookupKeyWrite(c->db,c->argv[1]);
|
||||||
@ -317,7 +317,7 @@ void sortCommand(client *c) {
|
|||||||
switch(sortval->type) {
|
switch(sortval->type) {
|
||||||
case OBJ_LIST: vectorlen = listTypeLength(sortval); break;
|
case OBJ_LIST: vectorlen = listTypeLength(sortval); break;
|
||||||
case OBJ_SET: vectorlen = setTypeSize(sortval); break;
|
case OBJ_SET: vectorlen = setTypeSize(sortval); break;
|
||||||
case OBJ_ZSET: vectorlen = dictSize(((zset*)ptrFromObj(sortval))->pdict); break;
|
case OBJ_ZSET: vectorlen = dictSize(((zset*)ptrFromObj(sortval))->dict); break;
|
||||||
default: vectorlen = 0; serverPanic("Bad SORT type"); /* Avoid GCC warning */
|
default: vectorlen = 0; serverPanic("Bad SORT type"); /* Avoid GCC warning */
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -412,7 +412,7 @@ void sortCommand(client *c) {
|
|||||||
|
|
||||||
/* Check if starting point is trivial, before doing log(N) lookup. */
|
/* Check if starting point is trivial, before doing log(N) lookup. */
|
||||||
if (desc) {
|
if (desc) {
|
||||||
long zsetlen = dictSize(((zset*)ptrFromObj(sortval))->pdict);
|
long zsetlen = dictSize(((zset*)ptrFromObj(sortval))->dict);
|
||||||
|
|
||||||
ln = zsl->tail;
|
ln = zsl->tail;
|
||||||
if (start > 0)
|
if (start > 0)
|
||||||
@ -436,7 +436,7 @@ void sortCommand(client *c) {
|
|||||||
end -= start;
|
end -= start;
|
||||||
start = 0;
|
start = 0;
|
||||||
} else if (sortval->type == OBJ_ZSET) {
|
} else if (sortval->type == OBJ_ZSET) {
|
||||||
dict *set = ((zset*)ptrFromObj(sortval))->pdict;
|
dict *set = ((zset*)ptrFromObj(sortval))->dict;
|
||||||
dictIterator *di;
|
dictIterator *di;
|
||||||
dictEntry *setele;
|
dictEntry *setele;
|
||||||
sds sdsele;
|
sds sdsele;
|
||||||
|
@ -92,7 +92,7 @@ void freeSparklineSequence(struct sequence *seq) {
|
|||||||
* ------------------------------------------------------------------------- */
|
* ------------------------------------------------------------------------- */
|
||||||
|
|
||||||
/* Render part of a sequence, so that render_sequence() call call this function
|
/* Render part of a sequence, so that render_sequence() call call this function
|
||||||
* with differnent parts in order to create the full output without overflowing
|
* with different parts in order to create the full output without overflowing
|
||||||
* the current terminal columns. */
|
* the current terminal columns. */
|
||||||
sds sparklineRenderRange(sds output, struct sequence *seq, int rows, int offset, int len, int flags) {
|
sds sparklineRenderRange(sds output, struct sequence *seq, int rows, int offset, int len, int flags) {
|
||||||
int j;
|
int j;
|
||||||
|
@ -74,7 +74,7 @@ typedef struct streamConsumer {
|
|||||||
consumer not yet acknowledged. Keys are
|
consumer not yet acknowledged. Keys are
|
||||||
big endian message IDs, while values are
|
big endian message IDs, while values are
|
||||||
the same streamNACK structure referenced
|
the same streamNACK structure referenced
|
||||||
in the "pel" of the conumser group structure
|
in the "pel" of the consumer group structure
|
||||||
itself, so the value is shared. */
|
itself, so the value is shared. */
|
||||||
} streamConsumer;
|
} streamConsumer;
|
||||||
|
|
||||||
|
@ -630,7 +630,7 @@ void hincrbyfloatCommand(client *c) {
|
|||||||
g_pserver->dirty++;
|
g_pserver->dirty++;
|
||||||
|
|
||||||
/* Always replicate HINCRBYFLOAT as an HSET command with the final value
|
/* Always replicate HINCRBYFLOAT as an HSET command with the final value
|
||||||
* in order to make sure that differences in float pricision or formatting
|
* in order to make sure that differences in float precision or formatting
|
||||||
* will not create differences in replicas or after an AOF restart. */
|
* will not create differences in replicas or after an AOF restart. */
|
||||||
robj *aux, *newobj;
|
robj *aux, *newobj;
|
||||||
aux = createStringObject("HSET",4);
|
aux = createStringObject("HSET",4);
|
||||||
|
@ -724,7 +724,7 @@ void rpoplpushCommand(client *c) {
|
|||||||
* Blocking POP operations
|
* Blocking POP operations
|
||||||
*----------------------------------------------------------------------------*/
|
*----------------------------------------------------------------------------*/
|
||||||
|
|
||||||
/* This is a helper function for handleClientsBlockedOnKeys(). It's work
|
/* This is a helper function for handleClientsBlockedOnKeys(). Its work
|
||||||
* is to serve a specific client (receiver) that is blocked on 'key'
|
* is to serve a specific client (receiver) that is blocked on 'key'
|
||||||
* in the context of the specified 'db', doing the following:
|
* in the context of the specified 'db', doing the following:
|
||||||
*
|
*
|
||||||
@ -815,7 +815,7 @@ void blockingPopGenericCommand(client *c, int where) {
|
|||||||
return;
|
return;
|
||||||
} else {
|
} else {
|
||||||
if (listTypeLength(o) != 0) {
|
if (listTypeLength(o) != 0) {
|
||||||
/* Non empty list, this is like a non normal [LR]POP. */
|
/* Non empty list, this is like a normal [LR]POP. */
|
||||||
const char *event = (where == LIST_HEAD) ? "lpop" : "rpop";
|
const char *event = (where == LIST_HEAD) ? "lpop" : "rpop";
|
||||||
robj *value = listTypePop(o,where);
|
robj *value = listTypePop(o,where);
|
||||||
serverAssert(value != NULL);
|
serverAssert(value != NULL);
|
||||||
@ -851,7 +851,7 @@ void blockingPopGenericCommand(client *c, int where) {
|
|||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* If the list is empty or the key does not exists we must block */
|
/* If the keys do not exist we must block */
|
||||||
blockForKeys(c,BLOCKED_LIST,c->argv + 1,c->argc - 2,timeout,NULL,NULL);
|
blockForKeys(c,BLOCKED_LIST,c->argv + 1,c->argc - 2,timeout,NULL,NULL);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -66,7 +66,7 @@ robj *fetchFromKey(redisDb *db, robj_roptr key) {
|
|||||||
|
|
||||||
dict *d = nullptr;
|
dict *d = nullptr;
|
||||||
if (o == nullptr)
|
if (o == nullptr)
|
||||||
d = db->pdict;
|
d = db->dict;
|
||||||
else
|
else
|
||||||
d = (dict*)ptrFromObj(o);
|
d = (dict*)ptrFromObj(o);
|
||||||
|
|
||||||
@ -105,7 +105,7 @@ bool setWithKey(redisDb *db, robj_roptr key, robj *val, bool fCreateBuckets) {
|
|||||||
|
|
||||||
dict *d = nullptr;
|
dict *d = nullptr;
|
||||||
if (o == nullptr)
|
if (o == nullptr)
|
||||||
d = db->pdict;
|
d = db->dict;
|
||||||
else
|
else
|
||||||
d = (dict*)ptrFromObj(o);
|
d = (dict*)ptrFromObj(o);
|
||||||
|
|
||||||
|
@ -193,7 +193,7 @@ sds setTypeNextObject(setTypeIterator *si) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* Return random element from a non empty set.
|
/* Return random element from a non empty set.
|
||||||
* The returned element can be a int64_t value if the set is encoded
|
* The returned element can be an int64_t value if the set is encoded
|
||||||
* as an "intset" blob of integers, or an SDS string if the set
|
* as an "intset" blob of integers, or an SDS string if the set
|
||||||
* is a regular set.
|
* is a regular set.
|
||||||
*
|
*
|
||||||
@ -447,7 +447,7 @@ void spopWithCountCommand(client *c) {
|
|||||||
dbDelete(c->db,c->argv[1]);
|
dbDelete(c->db,c->argv[1]);
|
||||||
notifyKeyspaceEvent(NOTIFY_GENERIC,"del",c->argv[1],c->db->id);
|
notifyKeyspaceEvent(NOTIFY_GENERIC,"del",c->argv[1],c->db->id);
|
||||||
|
|
||||||
/* Propagate this command as an DEL operation */
|
/* Propagate this command as a DEL operation */
|
||||||
rewriteClientCommandVector(c,2,shared.del,c->argv[1]);
|
rewriteClientCommandVector(c,2,shared.del,c->argv[1]);
|
||||||
signalModifiedKey(c,c->db,c->argv[1]);
|
signalModifiedKey(c,c->db,c->argv[1]);
|
||||||
g_pserver->dirty++;
|
g_pserver->dirty++;
|
||||||
@ -681,7 +681,7 @@ void srandmemberWithCountCommand(client *c) {
|
|||||||
* In this case we create a set from scratch with all the elements, and
|
* In this case we create a set from scratch with all the elements, and
|
||||||
* subtract random elements to reach the requested number of elements.
|
* subtract random elements to reach the requested number of elements.
|
||||||
*
|
*
|
||||||
* This is done because if the number of requsted elements is just
|
* This is done because if the number of requested elements is just
|
||||||
* a bit less than the number of elements in the set, the natural approach
|
* a bit less than the number of elements in the set, the natural approach
|
||||||
* used into CASE 3 is highly inefficient. */
|
* used into CASE 3 is highly inefficient. */
|
||||||
if (count*SRANDMEMBER_SUB_STRATEGY_MUL > size) {
|
if (count*SRANDMEMBER_SUB_STRATEGY_MUL > size) {
|
||||||
|
@ -1207,7 +1207,7 @@ void xaddCommand(client *c) {
|
|||||||
int id_given = 0; /* Was an ID different than "*" specified? */
|
int id_given = 0; /* Was an ID different than "*" specified? */
|
||||||
long long maxlen = -1; /* If left to -1 no trimming is performed. */
|
long long maxlen = -1; /* If left to -1 no trimming is performed. */
|
||||||
int approx_maxlen = 0; /* If 1 only delete whole radix tree nodes, so
|
int approx_maxlen = 0; /* If 1 only delete whole radix tree nodes, so
|
||||||
the maxium length is not applied verbatim. */
|
the maximum length is not applied verbatim. */
|
||||||
int maxlen_arg_idx = 0; /* Index of the count in MAXLEN, for rewriting. */
|
int maxlen_arg_idx = 0; /* Index of the count in MAXLEN, for rewriting. */
|
||||||
|
|
||||||
/* Parse options. */
|
/* Parse options. */
|
||||||
@ -1903,7 +1903,7 @@ NULL
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/* XSETID <stream> <groupname> <id>
|
/* XSETID <stream> <id>
|
||||||
*
|
*
|
||||||
* Set the internal "last ID" of a stream. */
|
* Set the internal "last ID" of a stream. */
|
||||||
void xsetidCommand(client *c) {
|
void xsetidCommand(client *c) {
|
||||||
@ -1992,7 +1992,7 @@ void xackCommand(client *c) {
|
|||||||
*
|
*
|
||||||
* If start and stop are omitted, the command just outputs information about
|
* If start and stop are omitted, the command just outputs information about
|
||||||
* the amount of pending messages for the key/group pair, together with
|
* the amount of pending messages for the key/group pair, together with
|
||||||
* the minimum and maxium ID of pending messages.
|
* the minimum and maximum ID of pending messages.
|
||||||
*
|
*
|
||||||
* If start and stop are provided instead, the pending messages are returned
|
* If start and stop are provided instead, the pending messages are returned
|
||||||
* with informations about the current owner, number of deliveries and last
|
* with informations about the current owner, number of deliveries and last
|
||||||
|
@ -317,7 +317,7 @@ void msetGenericCommand(client *c, int nx) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/* Handle the NX flag. The MSETNX semantic is to return zero and don't
|
/* Handle the NX flag. The MSETNX semantic is to return zero and don't
|
||||||
* set anything if at least one key alerady exists. */
|
* set anything if at least one key already exists. */
|
||||||
if (nx) {
|
if (nx) {
|
||||||
for (j = 1; j < c->argc; j += 2) {
|
for (j = 1; j < c->argc; j += 2) {
|
||||||
if (lookupKeyWrite(c->db,c->argv[j]) != NULL) {
|
if (lookupKeyWrite(c->db,c->argv[j]) != NULL) {
|
||||||
|
@ -245,7 +245,7 @@ int zslDelete(zskiplist *zsl, double score, sds ele, zskiplistNode **node) {
|
|||||||
return 0; /* not found */
|
return 0; /* not found */
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Update the score of an elmenent inside the sorted set skiplist.
|
/* Update the score of an element inside the sorted set skiplist.
|
||||||
* Note that the element must exist and must match 'score'.
|
* Note that the element must exist and must match 'score'.
|
||||||
* This function does not update the score in the hash table side, the
|
* This function does not update the score in the hash table side, the
|
||||||
* caller should take care of it.
|
* caller should take care of it.
|
||||||
@ -1184,7 +1184,7 @@ void zsetConvert(robj *zobj, int encoding) {
|
|||||||
serverPanic("Unknown target encoding");
|
serverPanic("Unknown target encoding");
|
||||||
|
|
||||||
zs = (zset*)zmalloc(sizeof(*zs), MALLOC_SHARED);
|
zs = (zset*)zmalloc(sizeof(*zs), MALLOC_SHARED);
|
||||||
zs->pdict = dictCreate(&zsetDictType,NULL);
|
zs->dict = dictCreate(&zsetDictType,NULL);
|
||||||
zs->zsl = zslCreate();
|
zs->zsl = zslCreate();
|
||||||
|
|
||||||
eptr = ziplistIndex(zl,0);
|
eptr = ziplistIndex(zl,0);
|
||||||
@ -1201,7 +1201,7 @@ void zsetConvert(robj *zobj, int encoding) {
|
|||||||
ele = sdsnewlen((char*)vstr,vlen);
|
ele = sdsnewlen((char*)vstr,vlen);
|
||||||
|
|
||||||
node = zslInsert(zs->zsl,score,ele);
|
node = zslInsert(zs->zsl,score,ele);
|
||||||
serverAssert(dictAdd(zs->pdict,ele,&node->score) == DICT_OK);
|
serverAssert(dictAdd(zs->dict,ele,&node->score) == DICT_OK);
|
||||||
zzlNext(zl,&eptr,&sptr);
|
zzlNext(zl,&eptr,&sptr);
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -1217,7 +1217,7 @@ void zsetConvert(robj *zobj, int encoding) {
|
|||||||
/* Approach similar to zslFree(), since we want to free the skiplist at
|
/* Approach similar to zslFree(), since we want to free the skiplist at
|
||||||
* the same time as creating the ziplist. */
|
* the same time as creating the ziplist. */
|
||||||
zs = (zset*)zobj->m_ptr;
|
zs = (zset*)zobj->m_ptr;
|
||||||
dictRelease(zs->pdict);
|
dictRelease(zs->dict);
|
||||||
node = zs->zsl->header->level(0)->forward;
|
node = zs->zsl->header->level(0)->forward;
|
||||||
zfree(zs->zsl->header);
|
zfree(zs->zsl->header);
|
||||||
zfree(zs->zsl);
|
zfree(zs->zsl);
|
||||||
@ -1260,7 +1260,7 @@ int zsetScore(robj_roptr zobj, sds member, double *score) {
|
|||||||
if (zzlFind((unsigned char*)zobj->m_ptr, member, score) == NULL) return C_ERR;
|
if (zzlFind((unsigned char*)zobj->m_ptr, member, score) == NULL) return C_ERR;
|
||||||
} else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {
|
} else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {
|
||||||
zset *zs = (zset*)zobj->m_ptr;
|
zset *zs = (zset*)zobj->m_ptr;
|
||||||
dictEntry *de = dictFind(zs->pdict, member);
|
dictEntry *de = dictFind(zs->dict, member);
|
||||||
if (de == NULL) return C_ERR;
|
if (de == NULL) return C_ERR;
|
||||||
*score = *(double*)dictGetVal(de);
|
*score = *(double*)dictGetVal(de);
|
||||||
} else {
|
} else {
|
||||||
@ -1373,7 +1373,7 @@ int zsetAdd(robj *zobj, double score, sds ele, int *flags, double *newscore) {
|
|||||||
zskiplistNode *znode;
|
zskiplistNode *znode;
|
||||||
dictEntry *de;
|
dictEntry *de;
|
||||||
|
|
||||||
de = dictFind(zs->pdict,ele);
|
de = dictFind(zs->dict,ele);
|
||||||
if (de != NULL) {
|
if (de != NULL) {
|
||||||
/* NX? Return, same element already exists. */
|
/* NX? Return, same element already exists. */
|
||||||
if (nx) {
|
if (nx) {
|
||||||
@ -1405,7 +1405,7 @@ int zsetAdd(robj *zobj, double score, sds ele, int *flags, double *newscore) {
|
|||||||
} else if (!xx) {
|
} else if (!xx) {
|
||||||
ele = sdsdup(ele);
|
ele = sdsdup(ele);
|
||||||
znode = zslInsert(zs->zsl,score,ele);
|
znode = zslInsert(zs->zsl,score,ele);
|
||||||
serverAssert(dictAdd(zs->pdict,ele,&znode->score) == DICT_OK);
|
serverAssert(dictAdd(zs->dict,ele,&znode->score) == DICT_OK);
|
||||||
*flags |= ZADD_ADDED;
|
*flags |= ZADD_ADDED;
|
||||||
if (newscore) *newscore = score;
|
if (newscore) *newscore = score;
|
||||||
return 1;
|
return 1;
|
||||||
@ -1434,7 +1434,7 @@ int zsetDel(robj *zobj, sds ele) {
|
|||||||
dictEntry *de;
|
dictEntry *de;
|
||||||
double score;
|
double score;
|
||||||
|
|
||||||
de = dictUnlink(zs->pdict,ele);
|
de = dictUnlink(zs->dict,ele);
|
||||||
if (de != NULL) {
|
if (de != NULL) {
|
||||||
             /* Get the score in order to delete from the skiplist later. */
             score = *(double*)dictGetVal(de);
@@ -1444,13 +1444,13 @@ int zsetDel(robj *zobj, sds ele) {
              * actually releases the SDS string representing the element,
              * which is shared between the skiplist and the hash table, so
              * we need to delete from the skiplist as the final step. */
-            dictFreeUnlinkedEntry(zs->pdict,de);
+            dictFreeUnlinkedEntry(zs->dict,de);
 
             /* Delete from skiplist. */
             int retval = zslDelete(zs->zsl,score,ele,NULL);
             serverAssert(retval);
 
-            if (htNeedsResize(zs->pdict)) dictResize(zs->pdict);
+            if (htNeedsResize(zs->dict)) dictResize(zs->dict);
             return 1;
         }
     } else {
@@ -1507,7 +1507,7 @@ long zsetRank(robj_roptr zobj, sds ele, int reverse) {
         dictEntry *de;
         double score;
 
-        de = dictFind(zs->pdict,ele);
+        de = dictFind(zs->dict,ele);
         if (de != NULL) {
             score = *(double*)dictGetVal(de);
             rank = zslGetRank(zsl,score,ele);
@@ -1758,17 +1758,17 @@ void zremrangeGenericCommand(client *c, int rangetype) {
         zset *zs = (zset*)zobj->m_ptr;
         switch(rangetype) {
         case ZRANGE_RANK:
-            deleted = zslDeleteRangeByRank(zs->zsl,start+1,end+1,zs->pdict);
+            deleted = zslDeleteRangeByRank(zs->zsl,start+1,end+1,zs->dict);
             break;
         case ZRANGE_SCORE:
-            deleted = zslDeleteRangeByScore(zs->zsl,&range,zs->pdict);
+            deleted = zslDeleteRangeByScore(zs->zsl,&range,zs->dict);
             break;
         case ZRANGE_LEX:
-            deleted = zslDeleteRangeByLex(zs->zsl,&lexrange,zs->pdict);
+            deleted = zslDeleteRangeByLex(zs->zsl,&lexrange,zs->dict);
             break;
         }
-        if (htNeedsResize(zs->pdict)) dictResize(zs->pdict);
-        if (dictSize(zs->pdict) == 0) {
+        if (htNeedsResize(zs->dict)) dictResize(zs->dict);
+        if (dictSize(zs->dict) == 0) {
             dbDelete(c->db,key);
             keyremoved = 1;
         }
@@ -1817,7 +1817,7 @@ struct zsetopsrc {
                 int ii;
             } is;
             struct {
-                dict *pdict;
+                ::dict *dict;
                 dictIterator *di;
                 dictEntry *de;
             } ht;
@@ -1872,7 +1872,7 @@ void zuiInitIterator(zsetopsrc *op) {
         it->is.is = (intset*)op->subject->m_ptr;
         it->is.ii = 0;
     } else if (op->encoding == OBJ_ENCODING_HT) {
-        it->ht.pdict = (dict*)op->subject->m_ptr;
+        it->ht.dict = (dict*)op->subject->m_ptr;
         it->ht.di = dictGetIterator((dict*)op->subject->m_ptr);
         it->ht.de = dictNext(it->ht.di);
     } else {
@@ -2117,7 +2117,7 @@ int zuiFind(zsetopsrc *op, zsetopval *val, double *score) {
     } else if (op->encoding == OBJ_ENCODING_SKIPLIST) {
         zset *zs = (zset*)op->subject->m_ptr;
         dictEntry *de;
-        if ((de = dictFind(zs->pdict,val->ele)) != NULL) {
+        if ((de = dictFind(zs->dict,val->ele)) != NULL) {
             *score = *(double*)dictGetVal(de);
             return 1;
         } else {
@@ -2303,7 +2303,7 @@ void zunionInterGenericCommand(client *c, robj *dstkey, int op) {
                 if (j == setnum) {
                     tmp = zuiNewSdsFromValue(&zval);
                     znode = zslInsert(dstzset->zsl,score,tmp);
-                    dictAdd(dstzset->pdict,tmp,&znode->score);
+                    dictAdd(dstzset->dict,tmp,&znode->score);
                     if (sdslen(tmp) > maxelelen) maxelelen = sdslen(tmp);
                 }
             }
@@ -2363,13 +2363,13 @@ void zunionInterGenericCommand(client *c, robj *dstkey, int op) {
         /* We now are aware of the final size of the resulting sorted set,
          * let's resize the dictionary embedded inside the sorted set to the
          * right size, in order to save rehashing time. */
-        dictExpand(dstzset->pdict,dictSize(accumulator));
+        dictExpand(dstzset->dict,dictSize(accumulator));
 
         while((de = dictNext(di)) != NULL) {
             sds ele = (sds)dictGetKey(de);
             score = dictGetDoubleVal(de);
             znode = zslInsert(dstzset->zsl,score,ele);
-            dictAdd(dstzset->pdict,ele,&znode->score);
+            dictAdd(dstzset->dict,ele,&znode->score);
         }
         dictReleaseIterator(di);
         dictRelease(accumulator);
src/tls.cpp (45 changed lines)

@@ -39,6 +39,7 @@
 #include <openssl/ssl.h>
 #include <openssl/err.h>
 #include <openssl/rand.h>
+#include <openssl/pem.h>
 
 #define REDIS_TLS_PROTO_TLSv1 (1<<0)
 #define REDIS_TLS_PROTO_TLSv1_1 (1<<1)
@@ -177,8 +178,9 @@ int tlsConfigure(redisTLSContextConfig *ctx_config) {
         goto error;
     }
 
-    if (!ctx_config->ca_cert_file && !ctx_config->ca_cert_dir) {
-        serverLog(LL_WARNING, "Either tls-ca-cert-file or tls-ca-cert-dir must be configured!");
+    if (((g_pserver->tls_auth_clients != TLS_CLIENT_AUTH_NO) || g_pserver->tls_cluster || g_pserver->tls_replication) &&
+            !ctx_config->ca_cert_file && !ctx_config->ca_cert_dir) {
+        serverLog(LL_WARNING, "Either tls-ca-cert-file or tls-ca-cert-dir must be specified when tls-cluster, tls-replication or tls-auth-clients are enabled!");
         goto error;
     }
 
@@ -245,7 +247,8 @@ int tlsConfigure(redisTLSContextConfig *ctx_config) {
         goto error;
     }
 
-    if (SSL_CTX_load_verify_locations(ctx, ctx_config->ca_cert_file, ctx_config->ca_cert_dir) <= 0) {
+    if ((ctx_config->ca_cert_file || ctx_config->ca_cert_dir) &&
+        SSL_CTX_load_verify_locations(ctx, ctx_config->ca_cert_file, ctx_config->ca_cert_dir) <= 0) {
         ERR_error_string_n(ERR_get_error(), errbuf, sizeof(errbuf));
         serverLog(LL_WARNING, "Failed to configure CA certificate(s) file/directory: %s", errbuf);
         goto error;
@@ -491,7 +494,7 @@ void updateSSLEvent(tls_connection *conn) {
 }
 
 void tlsHandleEvent(tls_connection *conn, int mask) {
-    int ret;
+    int ret, conn_error;
     serverAssert(conn->el == serverTL->el);
 
     TLSCONN_DEBUG("tlsEventHandler(): fd=%d, state=%d, mask=%d, r=%d, w=%d, flags=%d",
@@ -502,8 +505,9 @@ void tlsHandleEvent(tls_connection *conn, int mask) {
 
     switch (conn->c.state) {
         case CONN_STATE_CONNECTING:
-            if (connGetSocketError((connection *) conn)) {
-                conn->c.last_errno = errno;
+            conn_error = connGetSocketError((connection *) conn);
+            if (conn_error) {
+                conn->c.last_errno = conn_error;
                 conn->c.state = CONN_STATE_ERROR;
             } else {
                 if (!(conn->flags & TLS_CONN_FLAG_FD_SET)) {
@@ -961,6 +965,30 @@ int tlsProcessPendingData() {
     return processed;
 }
 
+/* Fetch the peer certificate used for authentication on the specified
+ * connection and return it as a PEM-encoded sds.
+ */
+sds connTLSGetPeerCert(connection *conn_) {
+    tls_connection *conn = (tls_connection *) conn_;
+    if (conn_->type->get_type(conn_) != CONN_TYPE_TLS || !conn->ssl) return NULL;
+
+    X509 *cert = SSL_get_peer_certificate(conn->ssl);
+    if (!cert) return NULL;
+
+    BIO *bio = BIO_new(BIO_s_mem());
+    if (bio == NULL || !PEM_write_bio_X509(bio, cert)) {
+        if (bio != NULL) BIO_free(bio);
+        return NULL;
+    }
+
+    const char *bio_ptr;
+    long long bio_len = BIO_get_mem_data(bio, &bio_ptr);
+    sds cert_pem = sdsnewlen(bio_ptr, bio_len);
+    BIO_free(bio);
+
+    return cert_pem;
+}
+
 #else /* USE_OPENSSL */
 
 void tlsInit(void) {
@@ -992,4 +1020,9 @@ int tlsProcessPendingData() {
 
 void tlsInitThread() {}
 
+sds connTLSGetPeerCert(connection *conn_) {
+    (void) conn_;
+    return NULL;
+}
+
 #endif
@@ -134,7 +134,7 @@ void enableTracking(client *c, uint64_t redirect_to, uint64_t options, robj **pr
                   CLIENT_TRACKING_NOLOOP);
     c->client_tracking_redirection = redirect_to;
 
-    /* This may be the first client we ever enable. Crete the tracking
+    /* This may be the first client we ever enable. Create the tracking
      * table if it does not exist. */
     if (TrackingTable == NULL) {
         TrackingTable = raxNew();
@@ -171,9 +171,14 @@ void trackingRememberKeys(client *c) {
     uint64_t caching_given = c->flags & CLIENT_TRACKING_CACHING;
     if ((optin && !caching_given) || (optout && caching_given)) return;
 
-    int numkeys;
-    int *keys = getKeysFromCommand(c->cmd,c->argv,c->argc,&numkeys);
-    if (keys == NULL) return;
+    getKeysResult result = GETKEYS_RESULT_INIT;
+    int numkeys = getKeysFromCommand(c->cmd,c->argv,c->argc,&result);
+    if (!numkeys) {
+        getKeysFreeResult(&result);
+        return;
+    }
+
+    int *keys = result.keys;
 
     for(int j = 0; j < numkeys; j++) {
         int idx = keys[j];
@@ -188,7 +193,7 @@ void trackingRememberKeys(client *c) {
         if (raxTryInsert(ids,(unsigned char*)&c->id,sizeof(c->id),NULL,NULL))
             TrackingTableTotalItems++;
     }
-    getKeysFreeResult(keys);
+    getKeysFreeResult(&result);
 }
 
 /* Given a key name, this function sends an invalidation message in the
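Note: the trackingRememberKeys() hunk above moves from the old getKeysFromCommand() call, which returned a malloc'd index array with NULL meaning "no keys", to the getKeysResult object, where the caller must release the result with getKeysFreeResult() on every path, including the zero-keys early return. The standalone sketch below only models that init/use/free lifecycle; the struct and helper names are simplified stand-ins, not the server's real internals.

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-in for a getKeysResult-style object: it owns the
     * key-index array and must always be released, even with zero keys. */
    typedef struct {
        int *keys;
        int numkeys;
    } keys_result;

    #define KEYS_RESULT_INIT { NULL, 0 }

    /* Hypothetical extractor: pretends every odd argument position is a key. */
    static int get_keys(int argc, keys_result *result) {
        result->numkeys = argc / 2;
        if (result->numkeys)
            result->keys = malloc(sizeof(int) * result->numkeys);
        for (int i = 0; i < result->numkeys; i++)
            result->keys[i] = 1 + 2 * i;
        return result->numkeys;
    }

    static void free_keys_result(keys_result *result) {
        free(result->keys);          /* free(NULL) is a no-op */
        result->keys = NULL;
        result->numkeys = 0;
    }

    int main(void) {
        keys_result result = KEYS_RESULT_INIT;
        int numkeys = get_keys(5, &result);
        if (!numkeys) {              /* mirrors the early-return path in the hunk */
            free_keys_result(&result);
            return 0;
        }
        for (int j = 0; j < numkeys; j++)
            printf("key at argv[%d]\n", result.keys[j]);
        free_keys_result(&result);
        return 0;
    }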
@@ -1,17 +1,17 @@
 {
-   <lzf_unitialized_hash_table>
+   <lzf_uninitialized_hash_table>
   Memcheck:Cond
   fun:lzf_compress
 }
 
 {
-   <lzf_unitialized_hash_table>
+   <lzf_uninitialized_hash_table>
   Memcheck:Value4
   fun:lzf_compress
 }
 
 {
-   <lzf_unitialized_hash_table>
+   <lzf_uninitialized_hash_table>
   Memcheck:Value8
   fun:lzf_compress
 }
@@ -1,3 +1,4 @@
 #define KEYDB_REAL_VERSION "0.0.0"
+#define KEYDB_VERSION_NUM 0x00000000
 extern const char *KEYDB_SET_VERSION; // Unlike real version, this can be overriden by the config
 
@@ -99,7 +99,7 @@
  *      Integer encoded as 24 bit signed (3 bytes).
  * |11111110| - 2 bytes
  *      Integer encoded as 8 bit signed (1 byte).
- * |1111xxxx| - (with xxxx between 0000 and 1101) immediate 4 bit integer.
+ * |1111xxxx| - (with xxxx between 0001 and 1101) immediate 4 bit integer.
  *      Unsigned integer from 0 to 12. The encoded value is actually from
  *      1 to 13 because 0000 and 1111 can not be used, so 1 should be
  *      subtracted from the encoded 4 bit value to obtain the right value.
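Note: the corrected comment describes the ziplist |1111xxxx| immediate encoding, where the low nibble stores value+1 so that only the patterns 0001..1101 appear and the reader subtracts one when decoding. A tiny self-contained check of that arithmetic (not the actual ziplist code) is shown below.

    #include <assert.h>
    #include <stdio.h>

    /* Encode an unsigned integer in 0..12 as a ziplist-style immediate byte:
     * high nibble 1111, low nibble value+1 (so 0000 and 1111 never appear). */
    static unsigned char encode_immediate(unsigned int v) {
        assert(v <= 12);
        return 0xF0 | (unsigned char)(v + 1);
    }

    static unsigned int decode_immediate(unsigned char b) {
        return (b & 0x0F) - 1;
    }

    int main(void) {
        for (unsigned int v = 0; v <= 12; v++) {
            unsigned char b = encode_immediate(v);
            /* low nibble stays within 0001..1101, matching the comment */
            assert((b & 0x0F) >= 0x1 && (b & 0x0F) <= 0xD);
            assert(decode_immediate(b) == v);
        }
        printf("0 -> 0x%02X, 12 -> 0x%02X\n", encode_immediate(0), encode_immediate(12));
        return 0;
    }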
@@ -191,10 +191,10 @@
 #include "redisassert.h"
 
 #define ZIP_END 255 /* Special "end of ziplist" entry. */
-#define ZIP_BIG_PREVLEN 254 /* Max number of bytes of the previous entry, for
-                               the "prevlen" field prefixing each entry, to be
-                               represented with just a single byte. Otherwise
-                               it is represented as FE AA BB CC DD, where
+#define ZIP_BIG_PREVLEN 254 /* ZIP_BIG_PREVLEN - 1 is the max number of bytes of
+                               the previous entry, for the "prevlen" field prefixing
+                               each entry, to be represented with just a single byte.
+                               Otherwise it is represented as FE AA BB CC DD, where
                                AA BB CC DD are a 4 bytes unsigned integer
                                representing the previous entry len. */
 
@@ -317,7 +317,7 @@ unsigned int zipIntSize(unsigned char encoding) {
     return 0;
 }
 
-/* Write the encoidng header of the entry in 'p'. If p is NULL it just returns
+/* Write the encoding header of the entry in 'p'. If p is NULL it just returns
  * the amount of bytes required to encode such a length. Arguments:
  *
  * 'encoding' is the encoding we are using for the entry. It could be
@@ -325,7 +325,7 @@ unsigned int zipIntSize(unsigned char encoding) {
  * for single-byte small immediate integers.
 *
 * 'rawlen' is only used for ZIP_STR_* encodings and is the length of the
- * srting that this entry represents.
+ * string that this entry represents.
 *
 * The function returns the number of bytes used by the encoding/length
 * header stored in 'p'. */
@@ -390,7 +390,7 @@ unsigned int zipStoreEntryEncoding(unsigned char *p, unsigned char encoding, uns
         (lensize) = 1; \
         (len) = zipIntSize(encoding); \
     } \
-} while(0);
+} while(0)
 
 /* Encode the length of the previous entry and write it to "p". This only
  * uses the larger encoding (required in __ziplistCascadeUpdate). */
@@ -426,7 +426,7 @@ unsigned int zipStorePrevEntryLength(unsigned char *p, unsigned int len) {
     } else { \
         (prevlensize) = 5; \
     } \
-} while(0);
+} while(0)
 
 /* Return the length of the previous element, and the number of bytes that
  * are used in order to encode the previous element length.
@@ -444,7 +444,7 @@ unsigned int zipStorePrevEntryLength(unsigned char *p, unsigned int len) {
         memcpy(&(prevlen), ((char*)(ptr)) + 1, 4); \
         memrev32ifbe(&prevlen); \
     } \
-} while(0);
+} while(0)
 
 /* Given a pointer 'p' to the prevlen info that prefixes an entry, this
  * function returns the difference in number of bytes needed to encode
@@ -914,7 +914,7 @@ unsigned char *ziplistMerge(unsigned char **first, unsigned char **second) {
     } else {
         /* !append == prepending to target */
         /* Move target *contents* exactly size of (source - [END]),
-         * then copy source into vacataed space (source - [END]):
+         * then copy source into vacated space (source - [END]):
          * [SOURCE - END, TARGET - HEADER] */
         memmove(target + source_bytes - ZIPLIST_END_SIZE,
                 target + ZIPLIST_HEADER_SIZE,
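Note: the three macro hunks above drop the semicolon after "} while(0)". The do/while(0) idiom exists so a multi-statement macro behaves as a single statement and the caller supplies the terminating semicolon; with the semicolon baked into the macro, "if (...) MACRO(...); else ..." no longer compiles because the extra empty statement orphans the "else". A minimal illustration with a made-up macro (not the ziplist ones):

    #include <stdio.h>

    /* Correct form: no trailing semicolon, the caller adds it. */
    #define LOG_TWICE(msg) do { \
            puts(msg);          \
            puts(msg);          \
        } while(0)

    /* Had the macro ended in "while(0);", the if/else below would expand to
     * "if (...) { ... } while(0); ; else ...", which is a syntax error. */

    int main(int argc, char **argv) {
        (void)argv;
        if (argc > 1)
            LOG_TWICE("got arguments");   /* expands to a single statement */
        else
            LOG_TWICE("no arguments");
        return 0;
    }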
@@ -133,7 +133,7 @@ static unsigned int zipmapEncodeLength(unsigned char *p, unsigned int len) {
  * zipmap. Returns NULL if the key is not found.
 *
 * If NULL is returned, and totlen is not NULL, it is set to the entire
- * size of the zimap, so that the calling function will be able to
+ * size of the zipmap, so that the calling function will be able to
  * reallocate the original zipmap to make room for more entries. */
 static unsigned char *zipmapLookupRaw(unsigned char *zm, unsigned char *key, unsigned int klen, unsigned int *totlen) {
     unsigned char *p = zm+1, *k = NULL;
@@ -186,9 +186,6 @@ void *zrealloc(void *ptr, size_t size, enum MALLOC_CLASS mclass) {
 size_t zmalloc_size(void *ptr) {
     void *realptr = (char*)ptr-PREFIX_SIZE;
     size_t size = *((size_t*)realptr);
-    /* Assume at least that all the allocations are padded at sizeof(long) by
-     * the underlying allocator. */
-    if (size&(sizeof(long)-1)) size += sizeof(long)-(size&(sizeof(long)-1));
     return size+PREFIX_SIZE;
 }
 size_t zmalloc_usable(void *ptr) {
@@ -319,6 +316,26 @@ size_t zmalloc_get_rss(void) {
 
     return 0L;
 }
+#elif defined(__NetBSD__)
+#include <sys/types.h>
+#include <sys/sysctl.h>
+#include <unistd.h>
+
+size_t zmalloc_get_rss(void) {
+    struct kinfo_proc2 info;
+    size_t infolen = sizeof(info);
+    int mib[6];
+    mib[0] = CTL_KERN;
+    mib[1] = KERN_PROC;
+    mib[2] = KERN_PROC_PID;
+    mib[3] = getpid();
+    mib[4] = sizeof(info);
+    mib[5] = 1;
+    if (sysctl(mib, 4, &info, &infolen, NULL, 0) == 0)
+        return (size_t)info.p_vm_rssize;
+
+    return 0L;
+}
 #else
 size_t zmalloc_get_rss(void) {
     /* If we can't get the RSS in an OS-specific way for this system just
@@ -57,6 +57,11 @@ proc CI {n field} {
     get_info_field [R $n cluster info] $field
 }
 
+# Return the value of the specified INFO field.
+proc s {n field} {
+    get_info_field [R $n info] $field
+}
+
 # Assuming nodes are reest, this function performs slots allocation.
 # Only the first 'n' nodes are used.
 proc cluster_allocate_slots {n} {
@@ -1,7 +1,7 @@
 # Failover stress test.
 # In this test a different node is killed in a loop for N
 # iterations. The test checks that certain properties
-# are preseved across iterations.
+# are preserved across iterations.
 
 source "../tests/includes/init-tests.tcl"
 source "../../../tests/support/cli.tcl"
@@ -32,7 +32,7 @@ test "Enable AOF in all the instances" {
     }
 }
 
-# Return nno-zero if the specified PID is about a process still in execution,
+# Return non-zero if the specified PID is about a process still in execution,
 # otherwise 0 is returned.
 proc process_is_running {pid} {
     # PS should return with an error if PID is non existing,
@@ -45,7 +45,7 @@ proc process_is_running {pid} {
 #
 # - N commands are sent to the cluster in the course of the test.
 # - Every command selects a random key from key:0 to key:MAX-1.
-# - The operation RPUSH key <randomvalue> is perforemd.
+# - The operation RPUSH key <randomvalue> is performed.
 # - Tcl remembers into an array all the values pushed to each list.
 # - After N/2 commands, the resharding process is started in background.
 # - The test continues while the resharding is in progress.
@@ -15,6 +15,7 @@ set replica [Rn 1]
 
 test "Cant read from replica without READONLY" {
     $primary SET a 1
+    wait_for_ofs_sync $primary $replica
     catch {$replica GET a} err
     assert {[string range $err 0 4] eq {MOVED}}
 }
@@ -28,6 +29,7 @@ test "Can preform HSET primary and HGET from replica" {
     $primary HSET h a 1
     $primary HSET h b 2
     $primary HSET h c 3
+    wait_for_ofs_sync $primary $replica
     assert {[$replica HGET h a] eq {1}}
     assert {[$replica HGET h b] eq {2}}
     assert {[$replica HGET h c] eq {3}}
@@ -45,4 +47,25 @@ test "MULTI-EXEC with write operations is MOVED" {
     $replica MULTI
     catch {$replica HSET h b 4} err
     assert {[string range $err 0 4] eq {MOVED}}
+    catch {$replica exec} err
+    assert {[string range $err 0 8] eq {EXECABORT}}
+}
+
+test "read-only blocking operations from replica" {
+    set rd [redis_deferring_client redis 1]
+    $rd readonly
+    $rd read
+    $rd XREAD BLOCK 0 STREAMS k 0
+
+    wait_for_condition 1000 50 {
+        [RI 1 blocked_clients] eq {1}
+    } else {
+        fail "client wasn't blocked"
+    }
+
+    $primary XADD k * foo bar
+    set res [$rd read]
+    set res [lindex [lindex [lindex [lindex $res 0] 1] 0] 1]
+    assert {$res eq {foo bar}}
+    $rd close
 }
tests/cluster/tests/17-diskless-load-swapdb.tcl (new file, 79 lines)

@@ -0,0 +1,79 @@
+# Check replica can restore database buckup correctly if fail to diskless load.
+
+source "../tests/includes/init-tests.tcl"
+
+test "Create a primary with a replica" {
+    create_cluster 1 1
+}
+
+test "Cluster should start ok" {
+    assert_cluster_state ok
+}
+
+test "Cluster is writable" {
+    cluster_write_test 0
+}
+
+test "Right to restore backups when fail to diskless load " {
+    set master [Rn 0]
+    set replica [Rn 1]
+    set master_id 0
+    set replica_id 1
+
+    $replica READONLY
+    $replica config set repl-diskless-load swapdb
+    $replica config set appendonly no
+    $replica config set save ""
+    $replica config rewrite
+    $master config set repl-backlog-size 1024
+    $master config set repl-diskless-sync yes
+    $master config set repl-diskless-sync-delay 0
+    $master config set rdb-key-save-delay 10000
+    $master config set rdbcompression no
+    $master config set appendonly no
+    $master config set save ""
+
+    # Write a key that belongs to slot 0
+    set slot0_key "06S"
+    $master set $slot0_key 1
+    after 100
+    assert_equal {1} [$replica get $slot0_key]
+    assert_equal $slot0_key [$replica CLUSTER GETKEYSINSLOT 0 1]
+
+    # Save an RDB and kill the replica
+    $replica save
+    kill_instance redis $replica_id
+
+    # Delete the key from master
+    $master del $slot0_key
+
+    # Replica must full sync with master when start because replication
+    # backlog size is very small, and dumping rdb will cost several seconds.
+    set num 10000
+    set value [string repeat A 1024]
+    set rd [redis_deferring_client redis $master_id]
+    for {set j 0} {$j < $num} {incr j} {
+        $rd set $j $value
+    }
+    for {set j 0} {$j < $num} {incr j} {
+        $rd read
+    }
+
+    # Start the replica again
+    restart_instance redis $replica_id
+    $replica READONLY
+
+    # Start full sync, wait till after db is flushed (backed up)
+    wait_for_condition 500 10 {
+        [s $replica_id loading] eq 1
+    } else {
+        fail "Fail to full sync"
+    }
+
+    # Kill master, abort full sync
+    kill_instance redis $master_id
+
+    # Replica keys and keys to slots map still both are right
+    assert_equal {1} [$replica get $slot0_key]
+    assert_equal $slot0_key [$replica CLUSTER GETKEYSINSLOT 0 1]
+}
@@ -243,6 +243,7 @@ proc parse_options {} {
             puts "--pause-on-error  Pause for manual inspection on error."
             puts "--fail            Simulate a test failure."
             puts "--valgrind        Run with valgrind."
+            puts "--tls             Run tests in TLS mode."
             puts "--help            Shows this help."
             exit 0
         } else {
@@ -322,7 +323,7 @@ proc pause_on_error {} {
                 puts "S <id> cmd ... arg   Call command in Sentinel <id>."
                 puts "R <id> cmd ... arg   Call command in Redis <id>."
                 puts "SI <id> <field>      Show Sentinel <id> INFO <field>."
-                puts "RI <id> <field>      Show Sentinel <id> INFO <field>."
+                puts "RI <id> <field>      Show Redis <id> INFO <field>."
                 puts "continue             Resume test."
             } else {
                 set errcode [catch {eval $line} retval]
@@ -605,3 +606,16 @@ proc restart_instance {type id} {
     }
 }
 
+proc redis_deferring_client {type id} {
+    set port [get_instance_attrib $type $id port]
+    set host [get_instance_attrib $type $id host]
+    set client [redis $host $port 1 $::tls]
+    return $client
+}
+
+proc redis_client {type id} {
+    set port [get_instance_attrib $type $id port]
+    set host [get_instance_attrib $type $id host]
+    set client [redis $host $port 0 $::tls]
+    return $client
+}
@@ -16,7 +16,7 @@ start_server {tags {"repl"}} {
         s 0 role
     } {slave}
 
-    test {Test replication with parallel clients writing in differnet DBs} {
+    test {Test replication with parallel clients writing in different DBs} {
         after 5000
         stop_bg_complex_data $load_handle0
         stop_bg_complex_data $load_handle1
@@ -594,11 +594,11 @@ start_server {tags {"repl"}} {
                 puts "master utime: $master_utime"
                 puts "master stime: $master_stime"
             }
-            if {$all_drop == "all" || $all_drop == "slow"} {
+            if {!$::no_latency && ($all_drop == "all" || $all_drop == "slow")} {
                 assert {$master_utime < 70}
                 assert {$master_stime < 70}
             }
-            if {$all_drop == "none" || $all_drop == "fast"} {
+            if {!$::no_latency && ($all_drop == "none" || $all_drop == "fast")} {
                 assert {$master_utime < 15}
                 assert {$master_stime < 15}
             }
@@ -24,7 +24,8 @@ TEST_MODULES = \
     datatype.so \
     auth.so \
     keyspace_events.so \
-    blockedclient.so
+    blockedclient.so \
+    getkeys.so
 
 
 .PHONY: all
@@ -57,6 +57,14 @@ int acquire_gil(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
     UNUSED(argv);
     UNUSED(argc);
 
+    int flags = RedisModule_GetContextFlags(ctx);
+    int allFlags = RedisModule_GetContextFlagsAll();
+    if ((allFlags & REDISMODULE_CTX_FLAGS_MULTI) &&
+        (flags & REDISMODULE_CTX_FLAGS_MULTI)) {
+        RedisModule_ReplyWithSimpleString(ctx, "Blocked client is not supported inside multi");
+        return REDISMODULE_OK;
+    }
+
     /* This command handler tries to acquire the GIL twice
      * once in the worker thread using "RedisModule_ThreadSafeContextLock"
      * second in the sub-worker thread
@@ -71,6 +79,105 @@ int acquire_gil(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
     return REDISMODULE_OK;
 }
 
+typedef struct {
+    RedisModuleString **argv;
+    int argc;
+    RedisModuleBlockedClient *bc;
+} bg_call_data;
+
+void *bg_call_worker(void *arg) {
+    bg_call_data *bg = arg;
+
+    // Get Redis module context
+    RedisModuleCtx *ctx = RedisModule_GetThreadSafeContext(bg->bc);
+
+    // Acquire GIL
+    RedisModule_ThreadSafeContextLock(ctx);
+
+    // Call the command
+    const char* cmd = RedisModule_StringPtrLen(bg->argv[1], NULL);
+    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, "v", bg->argv + 2, bg->argc - 2);
+
+    // Release GIL
+    RedisModule_ThreadSafeContextUnlock(ctx);
+
+    // Reply to client
+    if (!rep) {
+        RedisModule_ReplyWithError(ctx, "NULL reply returned");
+    } else {
+        RedisModule_ReplyWithCallReply(ctx, rep);
+        RedisModule_FreeCallReply(rep);
+    }
+
+    // Unblock client
+    RedisModule_UnblockClient(bg->bc, NULL);
+
+    /* Free the arguments */
+    for (int i=0; i<bg->argc; i++)
+        RedisModule_FreeString(ctx, bg->argv[i]);
+    RedisModule_Free(bg->argv);
+    RedisModule_Free(bg);
+
+    // Free the Redis module context
+    RedisModule_FreeThreadSafeContext(ctx);
+
+    return NULL;
+}
+
+int do_bg_rm_call(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
+{
+    UNUSED(argv);
+    UNUSED(argc);
+
+    /* Make sure we're not trying to block a client when we shouldn't */
+    int flags = RedisModule_GetContextFlags(ctx);
+    int allFlags = RedisModule_GetContextFlagsAll();
+    if ((allFlags & REDISMODULE_CTX_FLAGS_MULTI) &&
+        (flags & REDISMODULE_CTX_FLAGS_MULTI)) {
+        RedisModule_ReplyWithSimpleString(ctx, "Blocked client is not supported inside multi");
+        return REDISMODULE_OK;
+    }
+
+    /* Make a copy of the arguments and pass them to the thread. */
+    bg_call_data *bg = RedisModule_Alloc(sizeof(bg_call_data));
+    bg->argv = RedisModule_Alloc(sizeof(RedisModuleString*)*argc);
+    bg->argc = argc;
+    for (int i=0; i<argc; i++)
+        bg->argv[i] = RedisModule_HoldString(ctx, argv[i]);
+
+    /* Block the client */
+    bg->bc = RedisModule_BlockClient(ctx, NULL, NULL, NULL, 0);
+
+    /* Start a thread to handle the request */
+    pthread_t tid;
+    int res = pthread_create(&tid, NULL, bg_call_worker, bg);
+    assert(res == 0);
+
+    return REDISMODULE_OK;
+}
+
+int do_rm_call(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){
+    UNUSED(argv);
+    UNUSED(argc);
+
+    if(argc < 2){
+        return RedisModule_WrongArity(ctx);
+    }
+
+    const char* cmd = RedisModule_StringPtrLen(argv[1], NULL);
+
+    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, "v", argv + 2, argc - 2);
+    if(!rep){
+        RedisModule_ReplyWithError(ctx, "NULL reply returned");
+    }else{
+        RedisModule_ReplyWithCallReply(ctx, rep);
+        RedisModule_FreeCallReply(rep);
+    }
+
+    return REDISMODULE_OK;
+}
+
+
 int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
     REDISMODULE_NOT_USED(argv);
     REDISMODULE_NOT_USED(argc);
@@ -81,5 +188,11 @@ int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
     if (RedisModule_CreateCommand(ctx, "acquire_gil", acquire_gil, "", 0, 0, 0) == REDISMODULE_ERR)
         return REDISMODULE_ERR;
 
+    if (RedisModule_CreateCommand(ctx, "do_rm_call", do_rm_call, "", 0, 0, 0) == REDISMODULE_ERR)
+        return REDISMODULE_ERR;
+
+    if (RedisModule_CreateCommand(ctx, "do_bg_rm_call", do_bg_rm_call, "", 0, 0, 0) == REDISMODULE_ERR)
+        return REDISMODULE_ERR;
+
     return REDISMODULE_OK;
 }
@@ -28,6 +28,12 @@ int fork_create(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
         RedisModule_WrongArity(ctx);
         return REDISMODULE_OK;
     }
 
+    if(!RMAPI_FUNC_SUPPORTED(RedisModule_Fork)){
+        RedisModule_ReplyWithError(ctx, "Fork api is not supported in the current redis version");
+        return REDISMODULE_OK;
+    }
+
     RedisModule_StringToLongLong(argv[1], &code_to_exit_with);
     exitted_with_code = -1;
     child_pid = RedisModule_Fork(done_handler, (void*)0xdeadbeef);
tests/modules/getkeys.c (new file, 122 lines)

@@ -0,0 +1,122 @@
+#define REDISMODULE_EXPERIMENTAL_API
+
+#include "redismodule.h"
+#include <strings.h>
+#include <assert.h>
+#include <unistd.h>
+#include <errno.h>
+
+#define UNUSED(V) ((void) V)
+
+/* A sample movable keys command that returns a list of all
+ * arguments that follow a KEY argument, i.e.
+ */
+int getkeys_command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
+{
+    int i;
+    int count = 0;
+
+    /* Handle getkeys-api introspection */
+    if (RedisModule_IsKeysPositionRequest(ctx)) {
+        for (i = 0; i < argc; i++) {
+            size_t len;
+            const char *str = RedisModule_StringPtrLen(argv[i], &len);
+
+            if (len == 3 && !strncasecmp(str, "key", 3) && i + 1 < argc)
+                RedisModule_KeyAtPos(ctx, i + 1);
+        }
+
+        return REDISMODULE_OK;
+    }
+
+    /* Handle real command invocation */
+    RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN);
+    for (i = 0; i < argc; i++) {
+        size_t len;
+        const char *str = RedisModule_StringPtrLen(argv[i], &len);
+
+        if (len == 3 && !strncasecmp(str, "key", 3) && i + 1 < argc) {
+            RedisModule_ReplyWithString(ctx, argv[i+1]);
+            count++;
+        }
+    }
+    RedisModule_ReplySetArrayLength(ctx, count);
+
+    return REDISMODULE_OK;
+}
+
+int getkeys_fixed(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
+{
+    int i;
+
+    RedisModule_ReplyWithArray(ctx, argc - 1);
+    for (i = 1; i < argc; i++) {
+        RedisModule_ReplyWithString(ctx, argv[i]);
+    }
+    return REDISMODULE_OK;
+}
+
+/* Introspect a command using RM_GetCommandKeys() and returns the list
+ * of keys. Essentially this is COMMAND GETKEYS implemented in a module.
+ */
+int getkeys_introspect(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
+{
+    UNUSED(argv);
+    UNUSED(argc);
+
+    if (argc < 3) {
+        RedisModule_WrongArity(ctx);
+        return REDISMODULE_OK;
+    }
+
+    int num_keys;
+    int *keyidx = RedisModule_GetCommandKeys(ctx, &argv[1], argc - 1, &num_keys);
+
+    if (!keyidx) {
+        if (!errno)
+            RedisModule_ReplyWithEmptyArray(ctx);
+        else {
+            char err[100];
+            switch (errno) {
+                case ENOENT:
+                    RedisModule_ReplyWithError(ctx, "ERR ENOENT");
+                    break;
+                case EINVAL:
+                    RedisModule_ReplyWithError(ctx, "ERR EINVAL");
+                    break;
+                default:
+                    snprintf(err, sizeof(err) - 1, "ERR errno=%d", errno);
+                    RedisModule_ReplyWithError(ctx, err);
+                    break;
+            }
+        }
+    } else {
+        int i;
+
+        RedisModule_ReplyWithArray(ctx, num_keys);
+        for (i = 0; i < num_keys; i++)
+            RedisModule_ReplyWithString(ctx, argv[1 + keyidx[i]]);
+
+        RedisModule_Free(keyidx);
+    }
+
+    return REDISMODULE_OK;
+}
+
+int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
+    UNUSED(argv);
+    UNUSED(argc);
+    if (RedisModule_Init(ctx,"getkeys",1,REDISMODULE_APIVER_1)== REDISMODULE_ERR)
+        return REDISMODULE_ERR;
+
+    if (RedisModule_CreateCommand(ctx,"getkeys.command", getkeys_command,"getkeys-api",0,0,0) == REDISMODULE_ERR)
+        return REDISMODULE_ERR;
+
+    if (RedisModule_CreateCommand(ctx,"getkeys.fixed", getkeys_fixed,"",2,4,1) == REDISMODULE_ERR)
+        return REDISMODULE_ERR;
+
+    if (RedisModule_CreateCommand(ctx,"getkeys.introspect", getkeys_introspect,"",0,0,0) == REDISMODULE_ERR)
+        return REDISMODULE_ERR;
+
+    return REDISMODULE_OK;
+}
@@ -256,15 +256,35 @@ void moduleChangeCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub,
     LogStringEvent(ctx, keyname, ei->module_name);
 }
 
+void swapDbCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)
+{
+    REDISMODULE_NOT_USED(e);
+    REDISMODULE_NOT_USED(sub);
+
+    RedisModuleSwapDbInfo *ei = data;
+    LogNumericEvent(ctx, "swapdb-first", ei->dbnum_first);
+    LogNumericEvent(ctx, "swapdb-second", ei->dbnum_second);
+}
+
 /* This function must be present on each Redis module. It is used in order to
  * register the commands into the Redis server. */
 int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
+#define VerifySubEventSupported(e, s) \
+    if (!RedisModule_IsSubEventSupported(e, s)) { \
+        return REDISMODULE_ERR; \
+    }
+
     REDISMODULE_NOT_USED(argv);
     REDISMODULE_NOT_USED(argc);
 
     if (RedisModule_Init(ctx,"testhook",1,REDISMODULE_APIVER_1)
         == REDISMODULE_ERR) return REDISMODULE_ERR;
 
+    /* Example on how to check if a server sub event is supported */
+    if (!RedisModule_IsSubEventSupported(RedisModuleEvent_ReplicationRoleChanged, REDISMODULE_EVENT_REPLROLECHANGED_NOW_MASTER)) {
+        return REDISMODULE_ERR;
+    }
+
     /* replication related hooks */
     RedisModule_SubscribeToServerEvent(ctx,
         RedisModuleEvent_ReplicationRoleChanged, roleChangeCallback);
@@ -290,8 +310,11 @@ int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
         RedisModuleEvent_Shutdown, shutdownCallback);
     RedisModule_SubscribeToServerEvent(ctx,
         RedisModuleEvent_CronLoop, cronLoopCallback);
+
     RedisModule_SubscribeToServerEvent(ctx,
         RedisModuleEvent_ModuleChange, moduleChangeCallback);
+    RedisModule_SubscribeToServerEvent(ctx,
+        RedisModuleEvent_SwapDB, swapDbCallback);
 
     event_log = RedisModule_CreateDict(ctx);
 
@@ -87,6 +87,13 @@ int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
 
     loaded_event_log = RedisModule_CreateDict(ctx);
 
+    int keySpaceAll = RedisModule_GetKeyspaceNotificationFlagsAll();
+
+    if (!(keySpaceAll & REDISMODULE_NOTIFY_LOADED)) {
+        // REDISMODULE_NOTIFY_LOADED event are not supported we can not start
+        return REDISMODULE_ERR;
+    }
+
     if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_LOADED, KeySpace_Notification) != REDISMODULE_OK){
         return REDISMODULE_ERR;
     }
Some files were not shown because too many files have changed in this diff.