commit 49816941a4

README.md (18 changes)
@@ -119,7 +119,7 @@ parameter (the path of the configuration file):

It is possible to alter the Redis configuration by passing parameters directly
as options using the command line. Examples:

-    % ./redis-server --port 9999 --slaveof 127.0.0.1 6379
+    % ./redis-server --port 9999 --replicaof 127.0.0.1 6379
     % ./redis-server /etc/redis/6379.conf --loglevel debug

All the options in redis.conf are also supported as options using the command
@@ -216,7 +216,7 @@ Inside the root are the following important directories:

* `src`: contains the Redis implementation, written in C.
* `tests`: contains the unit tests, implemented in Tcl.
-* `deps`: contains libraries Redis uses. Everything needed to compile Redis is inside this directory; your system just needs to provide `libc`, a POSIX compatible interface and a C compiler. Notably `deps` contains a copy of `jemalloc`, which is the default allocator of Redis under Linux. Note that under `deps` there are also things which started with the Redis project, but for which the main repository is not `anitrez/redis`. An exception to this rule is `deps/geohash-int` which is the low level geocoding library used by Redis: it originated from a different project, but at this point it diverged so much that it is developed as a separated entity directly inside the Redis repository.
+* `deps`: contains libraries Redis uses. Everything needed to compile Redis is inside this directory; your system just needs to provide `libc`, a POSIX compatible interface and a C compiler. Notably `deps` contains a copy of `jemalloc`, which is the default allocator of Redis under Linux. Note that under `deps` there are also things which started with the Redis project, but for which the main repository is not `antirez/redis`.

There are a few more directories but they are not very important for our goals
here. We'll focus mostly on `src`, where the Redis implementation is contained,
@@ -227,7 +227,7 @@ of complexity incrementally.

Note: lately Redis was refactored quite a bit. Function names and file
names have been changed, so you may find that this documentation reflects the
`unstable` branch more closely. For instance in Redis 3.0 the `server.c`
-and `server.h` files were named to `redis.c` and `redis.h`. However the overall
+and `server.h` files were named `redis.c` and `redis.h`. However the overall
structure is the same. Keep in mind that all the new developments and pull
requests should be performed against the `unstable` branch.

@@ -245,7 +245,7 @@ A few important fields in this structure are:

* `server.db` is an array of Redis databases, where data is stored.
* `server.commands` is the command table.
* `server.clients` is a linked list of clients connected to the server.
-* `server.master` is a special client, the master, if the instance is a slave.
+* `server.master` is a special client, the master, if the instance is a replica.

There are tons of other fields. Most fields are commented directly inside
the structure definition.
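To make the field list above concrete, here is a minimal, illustrative sketch of what such a global server structure looks like. It is not the real definition: the actual `struct redisServer` in `server.h` has hundreds of fields and different concrete types.

```c
/* Illustrative only: a stripped-down server structure mirroring the
 * fields described above. The real struct redisServer is far larger. */
typedef struct client client;     /* connected client (incomplete type) */
typedef struct redisDb redisDb;   /* one keyspace/database */
typedef struct dict dict;         /* hash table */
typedef struct list list;         /* doubly linked list */

struct redisServer {
    redisDb *db;       /* server.db: array of databases holding the data */
    int dbnum;         /* how many databases (16 by default) */
    dict *commands;    /* server.commands: command table, name -> handler */
    list *clients;     /* server.clients: linked list of connected clients */
    client *master;    /* server.master: special client when we are a replica */
};
```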
@@ -323,7 +323,7 @@ Inside server.c you can find code that handles other vital things of the Redis s

networking.c
---

-This file defines all the I/O functions with clients, masters and slaves
+This file defines all the I/O functions with clients, masters and replicas
(which in Redis are just special clients):

* `createClient()` allocates and initializes a new client.
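As a rough illustration of what "allocates and initializes a new client" involves, the sketch below allocates a client object for an accepted socket, makes the socket non-blocking, and links the client into a list. It is a simplified stand-in, not the actual `createClient()` from networking.c.

```c
/* Simplified stand-in for a createClient()-style constructor. */
#include <stdlib.h>
#include <fcntl.h>

typedef struct client {
    int fd;                /* socket connected to this client */
    struct client *next;   /* next client in a simple intrusive list */
} client;

static client *clients_head = NULL;   /* stand-in for server.clients */

client *create_client(int fd) {
    client *c = malloc(sizeof(*c));
    if (c == NULL) return NULL;
    if (fd != -1) {
        int flags = fcntl(fd, F_GETFL, 0);
        fcntl(fd, F_SETFL, flags | O_NONBLOCK);   /* real code checks errors */
    }
    c->fd = fd;
    c->next = clients_head;   /* register the client with the server */
    clients_head = c;
    return c;
}
```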
@@ -390,16 +390,16 @@ replication.c

This is one of the most complex files inside Redis, it is recommended to
approach it only after getting a bit familiar with the rest of the code base.
-In this file there is the implementation of both the master and slave role
+In this file there is the implementation of both the master and replica role
of Redis.

-One of the most important functions inside this file is `replicationFeedSlaves()` that writes commands to the clients representing slave instances connected
-to our master, so that the slaves can get the writes performed by the clients:
+One of the most important functions inside this file is `replicationFeedSlaves()` that writes commands to the clients representing replica instances connected
+to our master, so that the replicas can get the writes performed by the clients:
this way their data set will remain synchronized with the one in the master.

This file also implements both the `SYNC` and `PSYNC` commands that are
used in order to perform the first synchronization between masters and
-slaves, or to continue the replication after a disconnection.
+replicas, or to continue the replication after a disconnection.

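As a rough sketch of the propagation step described above (not the actual `replicationFeedSlaves()` code), the idea is simply to append the already-encoded write command to the output buffer of every connected replica; `SYNC`/`PSYNC` then cover the initial or partial resynchronization when a replica (re)connects.

```c
/* Conceptual sketch: append an already RESP-encoded write command to the
 * output buffer of every replica so their data set stays in sync. Types
 * and helpers are simplified stand-ins, not the real Redis structures. */
#include <stddef.h>
#include <string.h>

typedef struct replica {
    struct replica *next;
    char outbuf[16 * 1024];   /* real code uses a dynamic reply list */
    size_t outlen;
} replica;

static void append_to_output(replica *r, const char *buf, size_t len) {
    if (r->outlen + len <= sizeof(r->outbuf)) {
        memcpy(r->outbuf + r->outlen, buf, len);
        r->outlen += len;
    }
}

void feed_replicas(replica *head, const char *cmdbuf, size_t cmdlen) {
    for (replica *r = head; r != NULL; r = r->next)
        append_to_output(r, cmdbuf, cmdlen);
}
```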
Other C files
---
deps/README.md (vendored, 6 changes)
@@ -2,7 +2,6 @@ This directory contains all Redis dependencies, except for the libc that
should be provided by the operating system.

* **Jemalloc** is our memory allocator, used as replacement for libc malloc on Linux by default. It has good performances and excellent fragmentation behavior. This component is upgraded from time to time.
-* **geohash-int** is inside the dependencies directory but is actually part of the Redis project, since it is our private fork (heavily modified) of a library initially developed for Ardb, which is in turn a fork of Redis.
* **hiredis** is the official C client library for Redis. It is used by redis-cli, redis-benchmark and Redis Sentinel. It is part of the Redis official ecosystem but is developed externally from the Redis repository, so we just upgrade it as needed.
* **linenoise** is a readline replacement. It is developed by the same authors of Redis but is managed as a separated project and updated as needed.
* **lua** is Lua 5.1 with minor changes for security and additional libraries.

@@ -42,11 +41,6 @@ the following additional steps:
changed, otherwise you could just copy the old implementation if you are
upgrading just to a similar version of Jemalloc.

-Geohash
----
-
-This is never upgraded since it's part of the Redis project. If there are changes to merge from Ardb there is the need to manually check differences, but at this point the source code is pretty different.
-
Hiredis
---

deps/hiredis/.travis.yml (vendored, 6 changes)
@@ -8,6 +8,12 @@ os:
  - linux
  - osx

+branches:
+  only:
+    - staging
+    - trying
+    - master
+
before_script:
  - if [ "$TRAVIS_OS_NAME" == "osx" ] ; then brew update; brew install redis; fi

deps/hiredis/CHANGELOG.md (vendored, 53 changes)
@ -1,7 +1,51 @@
|
||||
### 1.0.0 (unreleased)
|
||||
|
||||
**Fixes**:
|
||||
**BREAKING CHANGES**:
|
||||
|
||||
* Bulk and multi-bulk lengths less than -1 or greater than `LLONG_MAX` are now
|
||||
protocol errors. This is consistent with the RESP specification. On 32-bit
|
||||
platforms, the upper bound is lowered to `SIZE_MAX`.
|
||||
|
||||
Change `redisReply.len` to `size_t`, as it denotes the size of a string
|
||||
|
||||
User code should compare this to `size_t` values as well. If it was used to
|
||||
compare to other values, casting might be necessary or can be removed, if
|
||||
casting was applied before.
|
||||
|
||||
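A small, illustrative example of the adjustment the `redisReply.len` change asks for: compare `len` against `size_t` values rather than signed integers. The include path assumes an installed copy of hiredis.

```c
#include <stddef.h>
#include <string.h>
#include <hiredis/hiredis.h>   /* assumes hiredis headers are installed */

/* Return non-zero if a string reply equals `expected`. */
int reply_equals(const redisReply *reply, const char *expected) {
    size_t expected_len = strlen(expected);   /* size_t on both sides now */
    return reply->type == REDIS_REPLY_STRING &&
           reply->len == expected_len &&
           memcmp(reply->str, expected, expected_len) == 0;
}
```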
### 0.14.0 (2018-09-25)
|
||||
|
||||
* Make string2ll static to fix conflict with Redis (Tom Lee [c3188b])
|
||||
* Use -dynamiclib instead of -shared for OSX (Ryan Schmidt [a65537])
|
||||
* Use string2ll from Redis w/added tests (Michael Grunder [7bef04, 60f622])
|
||||
* Makefile - OSX compilation fixes (Ryan Schmidt [881fcb, 0e9af8])
|
||||
* Remove redundant NULL checks (Justin Brewer [54acc8, 58e6b8])
|
||||
* Fix bulk and multi-bulk length truncation (Justin Brewer [109197])
|
||||
* Fix SIGSEGV in OpenBSD by checking for NULL before calling freeaddrinfo (Justin Brewer [546d94])
|
||||
* Several POSIX compatibility fixes (Justin Brewer [bbeab8, 49bbaa, d1c1b6])
|
||||
* Makefile - Compatibility fixes (Dimitri Vorobiev [3238cf, 12a9d1])
|
||||
* Makefile - Fix make install on FreeBSD (Zach Shipko [a2ef2b])
|
||||
* Makefile - don't assume $(INSTALL) is cp (Igor Gnatenko [725a96])
|
||||
* Separate side-effect causing function from assert and small cleanup (amallia [b46413, 3c3234])
|
||||
* Don't send negative values to `__redisAsyncCommand` (Frederik Deweerdt [706129])
|
||||
* Fix leak if setsockopt fails (Frederik Deweerdt [e21c9c])
|
||||
* Fix libevent leak (zfz [515228])
|
||||
* Clean up GCC warning (Ichito Nagata [2ec774])
|
||||
* Keep track of errno in `__redisSetErrorFromErrno()` as snprintf may use it (Jin Qing [25cd88])
|
||||
* Solaris compilation fix (Donald Whyte [41b07d])
|
||||
* Reorder linker arguments when building examples (Tustfarm-heart [06eedd])
|
||||
* Keep track of subscriptions in case of rapid subscribe/unsubscribe (Hyungjin Kim [073dc8, be76c5, d46999])
|
||||
* libuv use after free fix (Paul Scott [cbb956])
|
||||
* Properly close socket fd on reconnect attempt (WSL [64d1ec])
|
||||
* Skip valgrind in OSX tests (Jan-Erik Rediger [9deb78])
|
||||
* Various updates for Travis testing OSX (Ted Nyman [fa3774, 16a459, bc0ea5])
|
||||
* Update libevent (Chris Xin [386802])
|
||||
* Change sds.h for building in C++ projects (Ali Volkan ATLI [f5b32e])
|
||||
* Use proper format specifier in redisFormatSdsCommandArgv (Paulino Huerta, Jan-Erik Rediger [360a06, 8655a6])
|
||||
* Better handling of NULL reply in example code (Jan-Erik Rediger [1b8ed3])
|
||||
* Prevent overflow when formatting an error (Jan-Erik Rediger [0335cb])
|
||||
* Compatibility fix for strerror_r (Tom Lee [bb1747])
|
||||
* Properly detect integer parse/overflow errors (Justin Brewer [93421f])
|
||||
* Adds CI for Windows and cygwin fixes (owent, [6c53d6, 6c3e40])
|
||||
* Catch a buffer overflow when formatting the error message
|
||||
* Import latest upstream sds. This breaks applications that are linked against the old hiredis v0.13
|
||||
* Fix warnings, when compiled with -Wshadow
|
||||
@ -9,11 +53,6 @@
|
||||
|
||||
**BREAKING CHANGES**:
|
||||
|
||||
* Change `redisReply.len` to `size_t`, as it denotes the the size of a string
|
||||
|
||||
User code should compare this to `size_t` values as well.
|
||||
If it was used to compare to other values, casting might be necessary or can be removed, if casting was applied before.
|
||||
|
||||
* Remove backwards compatibility macro's
|
||||
|
||||
This removes the following old function aliases, use the new name now:
|
||||
@ -94,7 +133,7 @@ The parser, standalone since v0.12.0, can now be compiled on Windows
|
||||
|
||||
* Add IPv6 support
|
||||
|
||||
* Remove possiblity of multiple close on same fd
|
||||
* Remove possibility of multiple close on same fd
|
||||
|
||||
* Add ability to bind source address on connect
|
||||
|
||||
|
deps/hiredis/Makefile (vendored, 24 changes)
@ -36,13 +36,13 @@ endef
|
||||
export REDIS_TEST_CONFIG
|
||||
|
||||
# Fallback to gcc when $CC is not in $PATH.
|
||||
CC:=$(shell sh -c 'type $(CC) >/dev/null 2>/dev/null && echo $(CC) || echo gcc')
|
||||
CXX:=$(shell sh -c 'type $(CXX) >/dev/null 2>/dev/null && echo $(CXX) || echo g++')
|
||||
CC:=$(shell sh -c 'type $${CC%% *} >/dev/null 2>/dev/null && echo $(CC) || echo gcc')
|
||||
CXX:=$(shell sh -c 'type $${CXX%% *} >/dev/null 2>/dev/null && echo $(CXX) || echo g++')
|
||||
OPTIMIZATION?=-O3
|
||||
WARNINGS=-Wall -W -Wstrict-prototypes -Wwrite-strings
|
||||
DEBUG_FLAGS?= -g -ggdb
|
||||
REAL_CFLAGS=$(OPTIMIZATION) -fPIC $(CFLAGS) $(WARNINGS) $(DEBUG_FLAGS) $(ARCH)
|
||||
REAL_LDFLAGS=$(LDFLAGS) $(ARCH)
|
||||
REAL_CFLAGS=$(OPTIMIZATION) -fPIC $(CPPFLAGS) $(CFLAGS) $(WARNINGS) $(DEBUG_FLAGS)
|
||||
REAL_LDFLAGS=$(LDFLAGS)
|
||||
|
||||
DYLIBSUFFIX=so
|
||||
STLIBSUFFIX=a
|
||||
@ -58,12 +58,11 @@ uname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not')
|
||||
ifeq ($(uname_S),SunOS)
|
||||
REAL_LDFLAGS+= -ldl -lnsl -lsocket
|
||||
DYLIB_MAKE_CMD=$(CC) -G -o $(DYLIBNAME) -h $(DYLIB_MINOR_NAME) $(LDFLAGS)
|
||||
INSTALL= cp -r
|
||||
endif
|
||||
ifeq ($(uname_S),Darwin)
|
||||
DYLIBSUFFIX=dylib
|
||||
DYLIB_MINOR_NAME=$(LIBNAME).$(HIREDIS_SONAME).$(DYLIBSUFFIX)
|
||||
DYLIB_MAKE_CMD=$(CC) -shared -Wl,-install_name,$(DYLIB_MINOR_NAME) -o $(DYLIBNAME) $(LDFLAGS)
|
||||
DYLIB_MAKE_CMD=$(CC) -dynamiclib -Wl,-install_name,$(PREFIX)/$(LIBRARY_PATH)/$(DYLIB_MINOR_NAME) -o $(DYLIBNAME) $(LDFLAGS)
|
||||
endif
|
||||
|
||||
all: $(DYLIBNAME) $(STLIBNAME) hiredis-test $(PKGCONFNAME)
|
||||
@ -94,7 +93,7 @@ hiredis-example-libev: examples/example-libev.c adapters/libev.h $(STLIBNAME)
|
||||
$(CC) -o examples/$@ $(REAL_CFLAGS) $(REAL_LDFLAGS) -I. $< -lev $(STLIBNAME)
|
||||
|
||||
hiredis-example-glib: examples/example-glib.c adapters/glib.h $(STLIBNAME)
|
||||
$(CC) -o examples/$@ $(REAL_CFLAGS) $(REAL_LDFLAGS) $(shell pkg-config --cflags --libs glib-2.0) -I. $< $(STLIBNAME)
|
||||
$(CC) -o examples/$@ $(REAL_CFLAGS) $(REAL_LDFLAGS) -I. $< $(shell pkg-config --cflags --libs glib-2.0) $(STLIBNAME)
|
||||
|
||||
hiredis-example-ivykis: examples/example-ivykis.c adapters/ivykis.h $(STLIBNAME)
|
||||
$(CC) -o examples/$@ $(REAL_CFLAGS) $(REAL_LDFLAGS) -I. $< -livykis $(STLIBNAME)
|
||||
@ -161,11 +160,7 @@ clean:
|
||||
dep:
|
||||
$(CC) -MM *.c
|
||||
|
||||
ifeq ($(uname_S),SunOS)
|
||||
INSTALL?= cp -r
|
||||
endif
|
||||
|
||||
INSTALL?= cp -a
|
||||
INSTALL?= cp -pPR
|
||||
|
||||
$(PKGCONFNAME): hiredis.h
|
||||
@echo "Generating $@ for pkgconfig..."
|
||||
@ -181,8 +176,9 @@ $(PKGCONFNAME): hiredis.h
|
||||
@echo Cflags: -I\$${includedir} -D_FILE_OFFSET_BITS=64 >> $@
|
||||
|
||||
install: $(DYLIBNAME) $(STLIBNAME) $(PKGCONFNAME)
|
||||
mkdir -p $(INSTALL_INCLUDE_PATH) $(INSTALL_LIBRARY_PATH)
|
||||
$(INSTALL) hiredis.h async.h read.h sds.h adapters $(INSTALL_INCLUDE_PATH)
|
||||
mkdir -p $(INSTALL_INCLUDE_PATH) $(INSTALL_INCLUDE_PATH)/adapters $(INSTALL_LIBRARY_PATH)
|
||||
$(INSTALL) hiredis.h async.h read.h sds.h $(INSTALL_INCLUDE_PATH)
|
||||
$(INSTALL) adapters/*.h $(INSTALL_INCLUDE_PATH)/adapters
|
||||
$(INSTALL) $(DYLIBNAME) $(INSTALL_LIBRARY_PATH)/$(DYLIB_MINOR_NAME)
|
||||
cd $(INSTALL_LIBRARY_PATH) && ln -sf $(DYLIB_MINOR_NAME) $(DYLIBNAME)
|
||||
$(INSTALL) $(STLIBNAME) $(INSTALL_LIBRARY_PATH)
|
||||
|
deps/hiredis/adapters/libevent.h (vendored, 4 changes)
@ -73,8 +73,8 @@ static void redisLibeventDelWrite(void *privdata) {
|
||||
|
||||
static void redisLibeventCleanup(void *privdata) {
|
||||
redisLibeventEvents *e = (redisLibeventEvents*)privdata;
|
||||
event_del(e->rev);
|
||||
event_del(e->wev);
|
||||
event_free(e->rev);
|
||||
event_free(e->wev);
|
||||
free(e);
|
||||
}
|
||||
|
||||
|
deps/hiredis/adapters/libuv.h (vendored, 9 changes)
@ -15,15 +15,12 @@ typedef struct redisLibuvEvents {
|
||||
|
||||
static void redisLibuvPoll(uv_poll_t* handle, int status, int events) {
|
||||
redisLibuvEvents* p = (redisLibuvEvents*)handle->data;
|
||||
int ev = (status ? p->events : events);
|
||||
|
||||
if (status != 0) {
|
||||
return;
|
||||
}
|
||||
|
||||
if (p->context != NULL && (events & UV_READABLE)) {
|
||||
if (p->context != NULL && (ev & UV_READABLE)) {
|
||||
redisAsyncHandleRead(p->context);
|
||||
}
|
||||
if (p->context != NULL && (events & UV_WRITABLE)) {
|
||||
if (p->context != NULL && (ev & UV_WRITABLE)) {
|
||||
redisAsyncHandleWrite(p->context);
|
||||
}
|
||||
}
|
||||
|
deps/hiredis/appveyor.yml (vendored, 17 changes)
@ -1,24 +1,13 @@
|
||||
# Appveyor configuration file for CI build of hiredis on Windows (under Cygwin)
|
||||
environment:
|
||||
matrix:
|
||||
- CYG_ROOT: C:\cygwin64
|
||||
CYG_SETUP: setup-x86_64.exe
|
||||
CYG_MIRROR: http://cygwin.mirror.constant.com
|
||||
CYG_CACHE: C:\cygwin64\var\cache\setup
|
||||
CYG_BASH: C:\cygwin64\bin\bash
|
||||
- CYG_BASH: C:\cygwin64\bin\bash
|
||||
CC: gcc
|
||||
- CYG_ROOT: C:\cygwin
|
||||
CYG_SETUP: setup-x86.exe
|
||||
CYG_MIRROR: http://cygwin.mirror.constant.com
|
||||
CYG_CACHE: C:\cygwin\var\cache\setup
|
||||
CYG_BASH: C:\cygwin\bin\bash
|
||||
- CYG_BASH: C:\cygwin\bin\bash
|
||||
CC: gcc
|
||||
TARGET: 32bit
|
||||
TARGET_VARS: 32bit-vars
|
||||
|
||||
# Cache Cygwin files to speed up build
|
||||
cache:
|
||||
- '%CYG_CACHE%'
|
||||
clone_depth: 1
|
||||
|
||||
# Attempt to ensure we don't try to convert line endings to Win32 CRLF as this will cause build to fail
|
||||
@ -27,8 +16,6 @@ init:
|
||||
|
||||
# Install needed build dependencies
|
||||
install:
|
||||
- ps: 'Start-FileDownload "http://cygwin.com/$env:CYG_SETUP" -FileName "$env:CYG_SETUP"'
|
||||
- '%CYG_SETUP% --quiet-mode --no-shortcuts --only-site --root "%CYG_ROOT%" --site "%CYG_MIRROR%" --local-package-dir "%CYG_CACHE%" --packages automake,bison,gcc-core,libtool,make,gettext-devel,gettext,intltool,pkg-config,clang,llvm > NUL 2>&1'
|
||||
- '%CYG_BASH% -lc "cygcheck -dc cygwin"'
|
||||
|
||||
build_script:
|
||||
|
deps/hiredis/async.c (vendored, 67 changes)
@ -336,7 +336,8 @@ static void __redisAsyncDisconnect(redisAsyncContext *ac) {
|
||||
|
||||
if (ac->err == 0) {
|
||||
/* For clean disconnects, there should be no pending callbacks. */
|
||||
assert(__redisShiftCallback(&ac->replies,NULL) == REDIS_ERR);
|
||||
int ret = __redisShiftCallback(&ac->replies,NULL);
|
||||
assert(ret == REDIS_ERR);
|
||||
} else {
|
||||
/* Disconnection is caused by an error, make sure that pending
|
||||
* callbacks cannot call new commands. */
|
||||
@ -364,6 +365,7 @@ void redisAsyncDisconnect(redisAsyncContext *ac) {
|
||||
static int __redisGetSubscribeCallback(redisAsyncContext *ac, redisReply *reply, redisCallback *dstcb) {
|
||||
redisContext *c = &(ac->c);
|
||||
dict *callbacks;
|
||||
redisCallback *cb;
|
||||
dictEntry *de;
|
||||
int pvariant;
|
||||
char *stype;
|
||||
@ -387,16 +389,28 @@ static int __redisGetSubscribeCallback(redisAsyncContext *ac, redisReply *reply,
|
||||
sname = sdsnewlen(reply->element[1]->str,reply->element[1]->len);
|
||||
de = dictFind(callbacks,sname);
|
||||
if (de != NULL) {
|
||||
memcpy(dstcb,dictGetEntryVal(de),sizeof(*dstcb));
|
||||
cb = dictGetEntryVal(de);
|
||||
|
||||
/* If this is a subscribe reply, decrease the pending counter. */
|
||||
if (strcasecmp(stype+pvariant,"subscribe") == 0) {
|
||||
cb->pending_subs -= 1;
|
||||
}
|
||||
|
||||
memcpy(dstcb,cb,sizeof(*dstcb));
|
||||
|
||||
/* If this is an unsubscribe message, remove it. */
|
||||
if (strcasecmp(stype+pvariant,"unsubscribe") == 0) {
|
||||
dictDelete(callbacks,sname);
|
||||
if (cb->pending_subs == 0)
|
||||
dictDelete(callbacks,sname);
|
||||
|
||||
/* If this was the last unsubscribe message, revert to
|
||||
* non-subscribe mode. */
|
||||
assert(reply->element[2]->type == REDIS_REPLY_INTEGER);
|
||||
if (reply->element[2]->integer == 0)
|
||||
|
||||
/* Unset subscribed flag only when no pipelined pending subscribe. */
|
||||
if (reply->element[2]->integer == 0
|
||||
&& dictSize(ac->sub.channels) == 0
|
||||
&& dictSize(ac->sub.patterns) == 0)
|
||||
c->flags &= ~REDIS_SUBSCRIBED;
|
||||
}
|
||||
}
|
||||
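To illustrate why the new `pending_subs` counter is needed (a usage sketch, not code from this patch): with the async API several SUBSCRIBE/UNSUBSCRIBE commands for the same channel may be pipelined, and the callback entry must survive until every pending subscribe has been answered. Event-loop adapter setup is omitted.

```c
#include <hiredis/async.h>   /* assumes installed hiredis headers */

static void on_message(redisAsyncContext *ac, void *reply, void *privdata) {
    (void)ac; (void)reply; (void)privdata;   /* handle pushed messages here */
}

void rapid_resubscribe(redisAsyncContext *ac) {
    redisAsyncCommand(ac, on_message, NULL, "SUBSCRIBE %s", "news");
    redisAsyncCommand(ac, NULL, NULL, "UNSUBSCRIBE %s", "news");
    /* Without pending_subs, the UNSUBSCRIBE reply above could delete the
     * dict entry that this second SUBSCRIBE still relies on. */
    redisAsyncCommand(ac, on_message, NULL, "SUBSCRIBE %s", "news");
}
```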
@ -410,7 +424,7 @@ static int __redisGetSubscribeCallback(redisAsyncContext *ac, redisReply *reply,
|
||||
|
||||
void redisProcessCallbacks(redisAsyncContext *ac) {
|
||||
redisContext *c = &(ac->c);
|
||||
redisCallback cb = {NULL, NULL, NULL};
|
||||
redisCallback cb = {NULL, NULL, 0, NULL};
|
||||
void *reply = NULL;
|
||||
int status;
|
||||
|
||||
@ -492,22 +506,22 @@ void redisProcessCallbacks(redisAsyncContext *ac) {
|
||||
* write event fires. When connecting was not successful, the connect callback
|
||||
* is called with a REDIS_ERR status and the context is free'd. */
|
||||
static int __redisAsyncHandleConnect(redisAsyncContext *ac) {
|
||||
int completed = 0;
|
||||
redisContext *c = &(ac->c);
|
||||
|
||||
if (redisCheckSocketError(c) == REDIS_ERR) {
|
||||
/* Try again later when connect(2) is still in progress. */
|
||||
if (errno == EINPROGRESS)
|
||||
return REDIS_OK;
|
||||
|
||||
if (ac->onConnect) ac->onConnect(ac,REDIS_ERR);
|
||||
if (redisCheckConnectDone(c, &completed) == REDIS_ERR) {
|
||||
/* Error! */
|
||||
redisCheckSocketError(c);
|
||||
if (ac->onConnect) ac->onConnect(ac, REDIS_ERR);
|
||||
__redisAsyncDisconnect(ac);
|
||||
return REDIS_ERR;
|
||||
} else if (completed == 1) {
|
||||
/* connected! */
|
||||
if (ac->onConnect) ac->onConnect(ac, REDIS_OK);
|
||||
c->flags |= REDIS_CONNECTED;
|
||||
return REDIS_OK;
|
||||
} else {
|
||||
return REDIS_OK;
|
||||
}
|
||||
|
||||
/* Mark context as connected. */
|
||||
c->flags |= REDIS_CONNECTED;
|
||||
if (ac->onConnect) ac->onConnect(ac,REDIS_OK);
|
||||
return REDIS_OK;
|
||||
}
|
||||
|
||||
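A brief usage sketch of the connect-callback contract referenced above: hiredis invokes the callback with `REDIS_OK` once the (possibly non-blocking) connect completes, or with `REDIS_ERR` right before it tears the context down. Adapter/event-loop wiring is omitted.

```c
#include <stdio.h>
#include <hiredis/async.h>   /* assumes installed hiredis headers */

static void on_connect(const redisAsyncContext *ac, int status) {
    if (status != REDIS_OK) {
        /* The context is freed by hiredis after this callback returns. */
        fprintf(stderr, "connect failed: %s\n", ac->errstr);
        return;
    }
    printf("connected\n");
}

void register_connect_callback(redisAsyncContext *ac) {
    redisAsyncSetConnectCallback(ac, on_connect);
}
```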
/* This function should be called when the socket is readable.
|
||||
@ -583,6 +597,9 @@ static const char *nextArgument(const char *start, const char **str, size_t *len
|
||||
static int __redisAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *cmd, size_t len) {
|
||||
redisContext *c = &(ac->c);
|
||||
redisCallback cb;
|
||||
struct dict *cbdict;
|
||||
dictEntry *de;
|
||||
redisCallback *existcb;
|
||||
int pvariant, hasnext;
|
||||
const char *cstr, *astr;
|
||||
size_t clen, alen;
|
||||
@ -596,6 +613,7 @@ static int __redisAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void
|
||||
/* Setup callback */
|
||||
cb.fn = fn;
|
||||
cb.privdata = privdata;
|
||||
cb.pending_subs = 1;
|
||||
|
||||
/* Find out which command will be appended. */
|
||||
p = nextArgument(cmd,&cstr,&clen);
|
||||
@ -612,9 +630,18 @@ static int __redisAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void
|
||||
while ((p = nextArgument(p,&astr,&alen)) != NULL) {
|
||||
sname = sdsnewlen(astr,alen);
|
||||
if (pvariant)
|
||||
ret = dictReplace(ac->sub.patterns,sname,&cb);
|
||||
cbdict = ac->sub.patterns;
|
||||
else
|
||||
ret = dictReplace(ac->sub.channels,sname,&cb);
|
||||
cbdict = ac->sub.channels;
|
||||
|
||||
de = dictFind(cbdict,sname);
|
||||
|
||||
if (de != NULL) {
|
||||
existcb = dictGetEntryVal(de);
|
||||
cb.pending_subs = existcb->pending_subs + 1;
|
||||
}
|
||||
|
||||
ret = dictReplace(cbdict,sname,&cb);
|
||||
|
||||
if (ret == 0) sdsfree(sname);
|
||||
}
|
||||
@ -676,6 +703,8 @@ int redisAsyncCommandArgv(redisAsyncContext *ac, redisCallbackFn *fn, void *priv
|
||||
int len;
|
||||
int status;
|
||||
len = redisFormatSdsCommandArgv(&cmd,argc,argv,argvlen);
|
||||
if (len < 0)
|
||||
return REDIS_ERR;
|
||||
status = __redisAsyncCommand(ac,fn,privdata,cmd,len);
|
||||
sdsfree(cmd);
|
||||
return status;
|
||||
|
deps/hiredis/async.h (vendored, 5 changes)
@ -45,6 +45,7 @@ typedef void (redisCallbackFn)(struct redisAsyncContext*, void*, void*);
|
||||
typedef struct redisCallback {
|
||||
struct redisCallback *next; /* simple singly linked list */
|
||||
redisCallbackFn *fn;
|
||||
int pending_subs;
|
||||
void *privdata;
|
||||
} redisCallback;
|
||||
|
||||
@ -92,6 +93,10 @@ typedef struct redisAsyncContext {
|
||||
/* Regular command callbacks */
|
||||
redisCallbackList replies;
|
||||
|
||||
/* Address used for connect() */
|
||||
struct sockaddr *saddr;
|
||||
size_t addrlen;
|
||||
|
||||
/* Subscription callbacks */
|
||||
struct {
|
||||
redisCallbackList invalid;
|
||||
|
deps/hiredis/fmacros.h (vendored, 19 changes)
@ -1,25 +1,12 @@
|
||||
#ifndef __HIREDIS_FMACRO_H
|
||||
#define __HIREDIS_FMACRO_H
|
||||
|
||||
#if defined(__linux__)
|
||||
#define _BSD_SOURCE
|
||||
#define _DEFAULT_SOURCE
|
||||
#endif
|
||||
|
||||
#if defined(__CYGWIN__)
|
||||
#include <sys/cdefs.h>
|
||||
#endif
|
||||
|
||||
#if defined(__sun__)
|
||||
#define _POSIX_C_SOURCE 200112L
|
||||
#else
|
||||
#if !(defined(__APPLE__) && defined(__MACH__)) && !(defined(__FreeBSD__))
|
||||
#define _XOPEN_SOURCE 600
|
||||
#endif
|
||||
#endif
|
||||
#define _POSIX_C_SOURCE 200112L
|
||||
|
||||
#if defined(__APPLE__) && defined(__MACH__)
|
||||
#define _OSX
|
||||
/* Enable TCP_KEEPALIVE */
|
||||
#define _DARWIN_C_SOURCE
|
||||
#endif
|
||||
|
||||
#endif
|
||||
|
deps/hiredis/hiredis.c (vendored, 125 changes)
@ -47,7 +47,9 @@ static redisReply *createReplyObject(int type);
|
||||
static void *createStringObject(const redisReadTask *task, char *str, size_t len);
|
||||
static void *createArrayObject(const redisReadTask *task, int elements);
|
||||
static void *createIntegerObject(const redisReadTask *task, long long value);
|
||||
static void *createDoubleObject(const redisReadTask *task, double value, char *str, size_t len);
|
||||
static void *createNilObject(const redisReadTask *task);
|
||||
static void *createBoolObject(const redisReadTask *task, int bval);
|
||||
|
||||
/* Default set of functions to build the reply. Keep in mind that such a
|
||||
* function returning NULL is interpreted as OOM. */
|
||||
@ -55,7 +57,9 @@ static redisReplyObjectFunctions defaultFunctions = {
|
||||
createStringObject,
|
||||
createArrayObject,
|
||||
createIntegerObject,
|
||||
createDoubleObject,
|
||||
createNilObject,
|
||||
createBoolObject,
|
||||
freeReplyObject
|
||||
};
|
||||
|
||||
@ -82,18 +86,19 @@ void freeReplyObject(void *reply) {
|
||||
case REDIS_REPLY_INTEGER:
|
||||
break; /* Nothing to free */
|
||||
case REDIS_REPLY_ARRAY:
|
||||
case REDIS_REPLY_MAP:
|
||||
case REDIS_REPLY_SET:
|
||||
if (r->element != NULL) {
|
||||
for (j = 0; j < r->elements; j++)
|
||||
if (r->element[j] != NULL)
|
||||
freeReplyObject(r->element[j]);
|
||||
freeReplyObject(r->element[j]);
|
||||
free(r->element);
|
||||
}
|
||||
break;
|
||||
case REDIS_REPLY_ERROR:
|
||||
case REDIS_REPLY_STATUS:
|
||||
case REDIS_REPLY_STRING:
|
||||
if (r->str != NULL)
|
||||
free(r->str);
|
||||
case REDIS_REPLY_DOUBLE:
|
||||
free(r->str);
|
||||
break;
|
||||
}
|
||||
free(r);
|
||||
@ -125,7 +130,9 @@ static void *createStringObject(const redisReadTask *task, char *str, size_t len
|
||||
|
||||
if (task->parent) {
|
||||
parent = task->parent->obj;
|
||||
assert(parent->type == REDIS_REPLY_ARRAY);
|
||||
assert(parent->type == REDIS_REPLY_ARRAY ||
|
||||
parent->type == REDIS_REPLY_MAP ||
|
||||
parent->type == REDIS_REPLY_SET);
|
||||
parent->element[task->idx] = r;
|
||||
}
|
||||
return r;
|
||||
@ -134,7 +141,7 @@ static void *createStringObject(const redisReadTask *task, char *str, size_t len
|
||||
static void *createArrayObject(const redisReadTask *task, int elements) {
|
||||
redisReply *r, *parent;
|
||||
|
||||
r = createReplyObject(REDIS_REPLY_ARRAY);
|
||||
r = createReplyObject(task->type);
|
||||
if (r == NULL)
|
||||
return NULL;
|
||||
|
||||
@ -150,7 +157,9 @@ static void *createArrayObject(const redisReadTask *task, int elements) {
|
||||
|
||||
if (task->parent) {
|
||||
parent = task->parent->obj;
|
||||
assert(parent->type == REDIS_REPLY_ARRAY);
|
||||
assert(parent->type == REDIS_REPLY_ARRAY ||
|
||||
parent->type == REDIS_REPLY_MAP ||
|
||||
parent->type == REDIS_REPLY_SET);
|
||||
parent->element[task->idx] = r;
|
||||
}
|
||||
return r;
|
||||
@ -167,7 +176,41 @@ static void *createIntegerObject(const redisReadTask *task, long long value) {
|
||||
|
||||
if (task->parent) {
|
||||
parent = task->parent->obj;
|
||||
assert(parent->type == REDIS_REPLY_ARRAY);
|
||||
assert(parent->type == REDIS_REPLY_ARRAY ||
|
||||
parent->type == REDIS_REPLY_MAP ||
|
||||
parent->type == REDIS_REPLY_SET);
|
||||
parent->element[task->idx] = r;
|
||||
}
|
||||
return r;
|
||||
}
|
||||
|
||||
static void *createDoubleObject(const redisReadTask *task, double value, char *str, size_t len) {
|
||||
redisReply *r, *parent;
|
||||
|
||||
r = createReplyObject(REDIS_REPLY_DOUBLE);
|
||||
if (r == NULL)
|
||||
return NULL;
|
||||
|
||||
r->dval = value;
|
||||
r->str = malloc(len+1);
|
||||
if (r->str == NULL) {
|
||||
freeReplyObject(r);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/* The double reply also has the original protocol string representing a
|
||||
* double as a null terminated string. This way the caller does not need
|
||||
* to format back for string conversion, especially since Redis makes an effort
* to make the string more human readable, avoiding the classical double
* decimal string conversion artifacts. */
|
||||
memcpy(r->str, str, len);
|
||||
r->str[len] = '\0';
|
||||
|
||||
if (task->parent) {
|
||||
parent = task->parent->obj;
|
||||
assert(parent->type == REDIS_REPLY_ARRAY ||
|
||||
parent->type == REDIS_REPLY_MAP ||
|
||||
parent->type == REDIS_REPLY_SET);
|
||||
parent->element[task->idx] = r;
|
||||
}
|
||||
return r;
|
||||
@ -182,7 +225,28 @@ static void *createNilObject(const redisReadTask *task) {
|
||||
|
||||
if (task->parent) {
|
||||
parent = task->parent->obj;
|
||||
assert(parent->type == REDIS_REPLY_ARRAY);
|
||||
assert(parent->type == REDIS_REPLY_ARRAY ||
|
||||
parent->type == REDIS_REPLY_MAP ||
|
||||
parent->type == REDIS_REPLY_SET);
|
||||
parent->element[task->idx] = r;
|
||||
}
|
||||
return r;
|
||||
}
|
||||
|
||||
static void *createBoolObject(const redisReadTask *task, int bval) {
|
||||
redisReply *r, *parent;
|
||||
|
||||
r = createReplyObject(REDIS_REPLY_BOOL);
|
||||
if (r == NULL)
|
||||
return NULL;
|
||||
|
||||
r->integer = bval != 0;
|
||||
|
||||
if (task->parent) {
|
||||
parent = task->parent->obj;
|
||||
assert(parent->type == REDIS_REPLY_ARRAY ||
|
||||
parent->type == REDIS_REPLY_MAP ||
|
||||
parent->type == REDIS_REPLY_SET);
|
||||
parent->element[task->idx] = r;
|
||||
}
|
||||
return r;
|
||||
@ -432,11 +496,7 @@ cleanup:
|
||||
}
|
||||
|
||||
sdsfree(curarg);
|
||||
|
||||
/* No need to check cmd since it is the last statement that can fail,
|
||||
* but do it anyway to be as defensive as possible. */
|
||||
if (cmd != NULL)
|
||||
free(cmd);
|
||||
free(cmd);
|
||||
|
||||
return error_type;
|
||||
}
|
||||
@ -581,7 +641,7 @@ void __redisSetError(redisContext *c, int type, const char *str) {
|
||||
} else {
|
||||
/* Only REDIS_ERR_IO may lack a description! */
|
||||
assert(type == REDIS_ERR_IO);
|
||||
__redis_strerror_r(errno, c->errstr, sizeof(c->errstr));
|
||||
strerror_r(errno, c->errstr, sizeof(c->errstr));
|
||||
}
|
||||
}
|
||||
|
||||
@ -596,14 +656,8 @@ static redisContext *redisContextInit(void) {
|
||||
if (c == NULL)
|
||||
return NULL;
|
||||
|
||||
c->err = 0;
|
||||
c->errstr[0] = '\0';
|
||||
c->obuf = sdsempty();
|
||||
c->reader = redisReaderCreate();
|
||||
c->tcp.host = NULL;
|
||||
c->tcp.source_addr = NULL;
|
||||
c->unix_sock.path = NULL;
|
||||
c->timeout = NULL;
|
||||
|
||||
if (c->obuf == NULL || c->reader == NULL) {
|
||||
redisFree(c);
|
||||
@ -618,18 +672,14 @@ void redisFree(redisContext *c) {
|
||||
return;
|
||||
if (c->fd > 0)
|
||||
close(c->fd);
|
||||
if (c->obuf != NULL)
|
||||
sdsfree(c->obuf);
|
||||
if (c->reader != NULL)
|
||||
redisReaderFree(c->reader);
|
||||
if (c->tcp.host)
|
||||
free(c->tcp.host);
|
||||
if (c->tcp.source_addr)
|
||||
free(c->tcp.source_addr);
|
||||
if (c->unix_sock.path)
|
||||
free(c->unix_sock.path);
|
||||
if (c->timeout)
|
||||
free(c->timeout);
|
||||
|
||||
sdsfree(c->obuf);
|
||||
redisReaderFree(c->reader);
|
||||
free(c->tcp.host);
|
||||
free(c->tcp.source_addr);
|
||||
free(c->unix_sock.path);
|
||||
free(c->timeout);
|
||||
free(c->saddr);
|
||||
free(c);
|
||||
}
|
||||
|
||||
@ -710,6 +760,8 @@ redisContext *redisConnectNonBlock(const char *ip, int port) {
|
||||
redisContext *redisConnectBindNonBlock(const char *ip, int port,
|
||||
const char *source_addr) {
|
||||
redisContext *c = redisContextInit();
|
||||
if (c == NULL)
|
||||
return NULL;
|
||||
c->flags &= ~REDIS_BLOCK;
|
||||
redisContextConnectBindTcp(c,ip,port,NULL,source_addr);
|
||||
return c;
|
||||
@ -718,6 +770,8 @@ redisContext *redisConnectBindNonBlock(const char *ip, int port,
|
||||
redisContext *redisConnectBindNonBlockWithReuse(const char *ip, int port,
|
||||
const char *source_addr) {
|
||||
redisContext *c = redisContextInit();
|
||||
if (c == NULL)
|
||||
return NULL;
|
||||
c->flags &= ~REDIS_BLOCK;
|
||||
c->flags |= REDIS_REUSEADDR;
|
||||
redisContextConnectBindTcp(c,ip,port,NULL,source_addr);
|
||||
@ -789,7 +843,7 @@ int redisEnableKeepAlive(redisContext *c) {
|
||||
/* Use this function to handle a read event on the descriptor. It will try
|
||||
* and read some bytes from the socket and feed them to the reply parser.
|
||||
*
|
||||
* After this function is called, you may use redisContextReadReply to
|
||||
* After this function is called, you may use redisGetReplyFromReader to
|
||||
* see if there is a reply available. */
|
||||
int redisBufferRead(redisContext *c) {
|
||||
char buf[1024*16];
|
||||
@ -1007,9 +1061,8 @@ void *redisvCommand(redisContext *c, const char *format, va_list ap) {
|
||||
|
||||
void *redisCommand(redisContext *c, const char *format, ...) {
|
||||
va_list ap;
|
||||
void *reply = NULL;
|
||||
va_start(ap,format);
|
||||
reply = redisvCommand(c,format,ap);
|
||||
void *reply = redisvCommand(c,format,ap);
|
||||
va_end(ap);
|
||||
return reply;
|
||||
}
|
||||
|
deps/hiredis/hiredis.h (vendored, 37 changes)
@ -40,9 +40,9 @@
|
||||
#include "sds.h" /* for sds */
|
||||
|
||||
#define HIREDIS_MAJOR 0
|
||||
#define HIREDIS_MINOR 13
|
||||
#define HIREDIS_PATCH 3
|
||||
#define HIREDIS_SONAME 0.13
|
||||
#define HIREDIS_MINOR 14
|
||||
#define HIREDIS_PATCH 0
|
||||
#define HIREDIS_SONAME 0.14
|
||||
|
||||
/* Connection type can be blocking or non-blocking and is set in the
|
||||
* least significant bit of the flags field in redisContext. */
|
||||
@ -80,30 +80,6 @@
|
||||
* SO_REUSEADDR is being used. */
|
||||
#define REDIS_CONNECT_RETRIES 10
|
||||
|
||||
/* strerror_r has two completely different prototypes and behaviors
|
||||
* depending on system issues, so we need to operate on the error buffer
|
||||
* differently depending on which strerror_r we're using. */
|
||||
#ifndef _GNU_SOURCE
|
||||
/* "regular" POSIX strerror_r that does the right thing. */
|
||||
#define __redis_strerror_r(errno, buf, len) \
|
||||
do { \
|
||||
strerror_r((errno), (buf), (len)); \
|
||||
} while (0)
|
||||
#else
|
||||
/* "bad" GNU strerror_r we need to clean up after. */
|
||||
#define __redis_strerror_r(errno, buf, len) \
|
||||
do { \
|
||||
char *err_str = strerror_r((errno), (buf), (len)); \
|
||||
/* If return value _isn't_ the start of the buffer we passed in, \
|
||||
* then GNU strerror_r returned an internal static buffer and we \
|
||||
* need to copy the result into our private buffer. */ \
|
||||
if (err_str != (buf)) { \
|
||||
strncpy((buf), err_str, ((len) - 1)); \
|
||||
buf[(len)-1] = '\0'; \
|
||||
} \
|
||||
} while (0)
|
||||
#endif
|
||||
|
||||
#ifdef __cplusplus
|
||||
extern "C" {
|
||||
#endif
|
||||
@ -112,8 +88,10 @@ extern "C" {
|
||||
typedef struct redisReply {
|
||||
int type; /* REDIS_REPLY_* */
|
||||
long long integer; /* The integer when type is REDIS_REPLY_INTEGER */
|
||||
double dval; /* The double when type is REDIS_REPLY_DOUBLE */
|
||||
size_t len; /* Length of string */
|
||||
char *str; /* Used for both REDIS_REPLY_ERROR and REDIS_REPLY_STRING */
|
||||
char *str; /* Used for REDIS_REPLY_ERROR, REDIS_REPLY_STRING
|
||||
and REDIS_REPLY_DOUBLE (in addition to dval). */
|
||||
size_t elements; /* number of elements, for REDIS_REPLY_ARRAY */
|
||||
struct redisReply **element; /* elements vector for REDIS_REPLY_ARRAY */
|
||||
} redisReply;
|
||||
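An illustrative way a caller might consume the new `REDIS_REPLY_DOUBLE` type added to the struct above: `dval` carries the parsed value while `str`/`len` keep the exact protocol text. This is a sketch, not part of this patch.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>   /* assumes installed hiredis headers */

/* Print a reply, handling the new double type alongside the classic ones. */
void print_reply(const redisReply *reply) {
    switch (reply->type) {
    case REDIS_REPLY_INTEGER:
        printf("%lld\n", reply->integer);
        break;
    case REDIS_REPLY_DOUBLE:
        /* dval is the parsed value; str keeps the original protocol text. */
        printf("%.17g (raw \"%s\")\n", reply->dval, reply->str);
        break;
    case REDIS_REPLY_STRING:
        printf("%.*s\n", (int)reply->len, reply->str);
        break;
    default:
        printf("(reply type %d)\n", reply->type);
    }
}
```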
@ -158,6 +136,9 @@ typedef struct redisContext {
|
||||
char *path;
|
||||
} unix_sock;
|
||||
|
||||
/* For non-blocking connect */
|
||||
struct sockadr *saddr;
|
||||
size_t addrlen;
|
||||
} redisContext;
|
||||
|
||||
redisContext *redisConnect(const char *ip, int port);
|
||||
|
deps/hiredis/net.c (vendored, 75 changes)
@ -65,12 +65,13 @@ static void redisContextCloseFd(redisContext *c) {
|
||||
}
|
||||
|
||||
static void __redisSetErrorFromErrno(redisContext *c, int type, const char *prefix) {
|
||||
int errorno = errno; /* snprintf() may change errno */
|
||||
char buf[128] = { 0 };
|
||||
size_t len = 0;
|
||||
|
||||
if (prefix != NULL)
|
||||
len = snprintf(buf,sizeof(buf),"%s: ",prefix);
|
||||
__redis_strerror_r(errno, (char *)(buf + len), sizeof(buf) - len);
|
||||
strerror_r(errorno, (char *)(buf + len), sizeof(buf) - len);
|
||||
__redisSetError(c,type,buf);
|
||||
}
|
||||
|
||||
@ -135,14 +136,13 @@ int redisKeepAlive(redisContext *c, int interval) {
|
||||
|
||||
val = interval;
|
||||
|
||||
#ifdef _OSX
|
||||
#if defined(__APPLE__) && defined(__MACH__)
|
||||
if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPALIVE, &val, sizeof(val)) < 0) {
|
||||
__redisSetError(c,REDIS_ERR_OTHER,strerror(errno));
|
||||
return REDIS_ERR;
|
||||
}
|
||||
#else
|
||||
#if defined(__GLIBC__) && !defined(__FreeBSD_kernel__)
|
||||
val = interval;
|
||||
if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &val, sizeof(val)) < 0) {
|
||||
__redisSetError(c,REDIS_ERR_OTHER,strerror(errno));
|
||||
return REDIS_ERR;
|
||||
@ -221,8 +221,10 @@ static int redisContextWaitReady(redisContext *c, long msec) {
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
if (redisCheckSocketError(c) != REDIS_OK)
|
||||
if (redisCheckConnectDone(c, &res) != REDIS_OK || res == 0) {
|
||||
redisCheckSocketError(c);
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
return REDIS_OK;
|
||||
}
|
||||
@ -232,8 +234,28 @@ static int redisContextWaitReady(redisContext *c, long msec) {
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
int redisCheckConnectDone(redisContext *c, int *completed) {
|
||||
int rc = connect(c->fd, (const struct sockaddr *)c->saddr, c->addrlen);
|
||||
if (rc == 0) {
|
||||
*completed = 1;
|
||||
return REDIS_OK;
|
||||
}
|
||||
switch (errno) {
|
||||
case EISCONN:
|
||||
*completed = 1;
|
||||
return REDIS_OK;
|
||||
case EALREADY:
|
||||
case EINPROGRESS:
|
||||
case EWOULDBLOCK:
|
||||
*completed = 0;
|
||||
return REDIS_OK;
|
||||
default:
|
||||
return REDIS_ERR;
|
||||
}
|
||||
}
|
||||
|
||||
int redisCheckSocketError(redisContext *c) {
|
||||
int err = 0;
|
||||
int err = 0, errno_saved = errno;
|
||||
socklen_t errlen = sizeof(err);
|
||||
|
||||
if (getsockopt(c->fd, SOL_SOCKET, SO_ERROR, &err, &errlen) == -1) {
|
||||
@ -241,6 +263,10 @@ int redisCheckSocketError(redisContext *c) {
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
if (err == 0) {
|
||||
err = errno_saved;
|
||||
}
|
||||
|
||||
if (err) {
|
||||
errno = err;
|
||||
__redisSetErrorFromErrno(c,REDIS_ERR_IO,NULL);
|
||||
@ -285,8 +311,7 @@ static int _redisContextConnectTcp(redisContext *c, const char *addr, int port,
|
||||
* This is a bit ugly, but atleast it works and doesn't leak memory.
|
||||
**/
|
||||
if (c->tcp.host != addr) {
|
||||
if (c->tcp.host)
|
||||
free(c->tcp.host);
|
||||
free(c->tcp.host);
|
||||
|
||||
c->tcp.host = strdup(addr);
|
||||
}
|
||||
@ -299,8 +324,7 @@ static int _redisContextConnectTcp(redisContext *c, const char *addr, int port,
|
||||
memcpy(c->timeout, timeout, sizeof(struct timeval));
|
||||
}
|
||||
} else {
|
||||
if (c->timeout)
|
||||
free(c->timeout);
|
||||
free(c->timeout);
|
||||
c->timeout = NULL;
|
||||
}
|
||||
|
||||
@ -356,6 +380,7 @@ addrretry:
|
||||
n = 1;
|
||||
if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (char*) &n,
|
||||
sizeof(n)) < 0) {
|
||||
freeaddrinfo(bservinfo);
|
||||
goto error;
|
||||
}
|
||||
}
|
||||
@ -374,12 +399,27 @@ addrretry:
|
||||
goto error;
|
||||
}
|
||||
}
|
||||
|
||||
/* For repeat connection */
|
||||
if (c->saddr) {
|
||||
free(c->saddr);
|
||||
}
|
||||
c->saddr = malloc(p->ai_addrlen);
|
||||
memcpy(c->saddr, p->ai_addr, p->ai_addrlen);
|
||||
c->addrlen = p->ai_addrlen;
|
||||
|
||||
if (connect(s,p->ai_addr,p->ai_addrlen) == -1) {
|
||||
if (errno == EHOSTUNREACH) {
|
||||
redisContextCloseFd(c);
|
||||
continue;
|
||||
} else if (errno == EINPROGRESS && !blocking) {
|
||||
/* This is ok. */
|
||||
} else if (errno == EINPROGRESS) {
|
||||
if (blocking) {
|
||||
goto wait_for_ready;
|
||||
}
|
||||
/* This is ok.
|
||||
* Note that even when it's in blocking mode, we unset blocking
|
||||
* for `connect()`
|
||||
*/
|
||||
} else if (errno == EADDRNOTAVAIL && reuseaddr) {
|
||||
if (++reuses >= REDIS_CONNECT_RETRIES) {
|
||||
goto error;
|
||||
@ -388,6 +428,7 @@ addrretry:
|
||||
goto addrretry;
|
||||
}
|
||||
} else {
|
||||
wait_for_ready:
|
||||
if (redisContextWaitReady(c,timeout_msec) != REDIS_OK)
|
||||
goto error;
|
||||
}
|
||||
@ -411,7 +452,10 @@ addrretry:
|
||||
error:
|
||||
rv = REDIS_ERR;
|
||||
end:
|
||||
freeaddrinfo(servinfo);
|
||||
if(servinfo) {
|
||||
freeaddrinfo(servinfo);
|
||||
}
|
||||
|
||||
return rv; // Need to return REDIS_OK if alright
|
||||
}
|
||||
|
||||
@ -431,7 +475,7 @@ int redisContextConnectUnix(redisContext *c, const char *path, const struct time
|
||||
struct sockaddr_un sa;
|
||||
long timeout_msec = -1;
|
||||
|
||||
if (redisCreateSocket(c,AF_LOCAL) < 0)
|
||||
if (redisCreateSocket(c,AF_UNIX) < 0)
|
||||
return REDIS_ERR;
|
||||
if (redisSetBlocking(c,0) != REDIS_OK)
|
||||
return REDIS_ERR;
|
||||
@ -448,15 +492,14 @@ int redisContextConnectUnix(redisContext *c, const char *path, const struct time
|
||||
memcpy(c->timeout, timeout, sizeof(struct timeval));
|
||||
}
|
||||
} else {
|
||||
if (c->timeout)
|
||||
free(c->timeout);
|
||||
free(c->timeout);
|
||||
c->timeout = NULL;
|
||||
}
|
||||
|
||||
if (redisContextTimeoutMsec(c,&timeout_msec) != REDIS_OK)
|
||||
return REDIS_ERR;
|
||||
|
||||
sa.sun_family = AF_LOCAL;
|
||||
sa.sun_family = AF_UNIX;
|
||||
strncpy(sa.sun_path,path,sizeof(sa.sun_path)-1);
|
||||
if (connect(c->fd, (struct sockaddr*)&sa, sizeof(sa)) == -1) {
|
||||
if (errno == EINPROGRESS && !blocking) {
|
||||
|
deps/hiredis/net.h (vendored, 5 changes)
@ -37,10 +37,6 @@
|
||||
|
||||
#include "hiredis.h"
|
||||
|
||||
#if defined(__sun)
|
||||
#define AF_LOCAL AF_UNIX
|
||||
#endif
|
||||
|
||||
int redisCheckSocketError(redisContext *c);
|
||||
int redisContextSetTimeout(redisContext *c, const struct timeval tv);
|
||||
int redisContextConnectTcp(redisContext *c, const char *addr, int port, const struct timeval *timeout);
|
||||
@ -49,5 +45,6 @@ int redisContextConnectBindTcp(redisContext *c, const char *addr, int port,
|
||||
const char *source_addr);
|
||||
int redisContextConnectUnix(redisContext *c, const char *path, const struct timeval *timeout);
|
||||
int redisKeepAlive(redisContext *c, int interval);
|
||||
int redisCheckConnectDone(redisContext *c, int *completed);
|
||||
|
||||
#endif
|
||||
|
deps/hiredis/read.c (vendored, 233 changes)
@ -29,7 +29,6 @@
|
||||
* POSSIBILITY OF SUCH DAMAGE.
|
||||
*/
|
||||
|
||||
|
||||
#include "fmacros.h"
|
||||
#include <string.h>
|
||||
#include <stdlib.h>
|
||||
@ -39,6 +38,8 @@
|
||||
#include <assert.h>
|
||||
#include <errno.h>
|
||||
#include <ctype.h>
|
||||
#include <limits.h>
|
||||
#include <math.h>
|
||||
|
||||
#include "read.h"
|
||||
#include "sds.h"
|
||||
@ -52,11 +53,9 @@ static void __redisReaderSetError(redisReader *r, int type, const char *str) {
|
||||
}
|
||||
|
||||
/* Clear input buffer on errors. */
|
||||
if (r->buf != NULL) {
|
||||
sdsfree(r->buf);
|
||||
r->buf = NULL;
|
||||
r->pos = r->len = 0;
|
||||
}
|
||||
sdsfree(r->buf);
|
||||
r->buf = NULL;
|
||||
r->pos = r->len = 0;
|
||||
|
||||
/* Reset task stack. */
|
||||
r->ridx = -1;
|
||||
@ -143,33 +142,79 @@ static char *seekNewline(char *s, size_t len) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/* Read a long long value starting at *s, under the assumption that it will be
|
||||
* terminated by \r\n. Ambiguously returns -1 for unexpected input. */
|
||||
static long long readLongLong(char *s) {
|
||||
long long v = 0;
|
||||
int dec, mult = 1;
|
||||
char c;
|
||||
/* Convert a string into a long long. Returns REDIS_OK if the string could be
|
||||
* parsed into a (non-overflowing) long long, REDIS_ERR otherwise. The value
|
||||
* will be set to the parsed value when appropriate.
|
||||
*
|
||||
* Note that this function demands that the string strictly represents
|
||||
* a long long: no spaces or other characters before or after the string
|
||||
* representing the number are accepted, nor zeroes at the start if not
|
||||
* for the string "0" representing the zero number.
|
||||
*
|
||||
* Because of its strictness, it is safe to use this function to check if
|
||||
* you can convert a string into a long long, and obtain back the string
|
||||
* from the number without any loss in the string representation. */
|
||||
static int string2ll(const char *s, size_t slen, long long *value) {
|
||||
const char *p = s;
|
||||
size_t plen = 0;
|
||||
int negative = 0;
|
||||
unsigned long long v;
|
||||
|
||||
if (*s == '-') {
|
||||
mult = -1;
|
||||
s++;
|
||||
} else if (*s == '+') {
|
||||
mult = 1;
|
||||
s++;
|
||||
if (plen == slen)
|
||||
return REDIS_ERR;
|
||||
|
||||
/* Special case: first and only digit is 0. */
|
||||
if (slen == 1 && p[0] == '0') {
|
||||
if (value != NULL) *value = 0;
|
||||
return REDIS_OK;
|
||||
}
|
||||
|
||||
while ((c = *(s++)) != '\r') {
|
||||
dec = c - '0';
|
||||
if (dec >= 0 && dec < 10) {
|
||||
v *= 10;
|
||||
v += dec;
|
||||
} else {
|
||||
/* Should not happen... */
|
||||
return -1;
|
||||
}
|
||||
if (p[0] == '-') {
|
||||
negative = 1;
|
||||
p++; plen++;
|
||||
|
||||
/* Abort on only a negative sign. */
|
||||
if (plen == slen)
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
return mult*v;
|
||||
/* First digit should be 1-9, otherwise the string should just be 0. */
|
||||
if (p[0] >= '1' && p[0] <= '9') {
|
||||
v = p[0]-'0';
|
||||
p++; plen++;
|
||||
} else if (p[0] == '0' && slen == 1) {
|
||||
*value = 0;
|
||||
return REDIS_OK;
|
||||
} else {
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
while (plen < slen && p[0] >= '0' && p[0] <= '9') {
|
||||
if (v > (ULLONG_MAX / 10)) /* Overflow. */
|
||||
return REDIS_ERR;
|
||||
v *= 10;
|
||||
|
||||
if (v > (ULLONG_MAX - (p[0]-'0'))) /* Overflow. */
|
||||
return REDIS_ERR;
|
||||
v += p[0]-'0';
|
||||
|
||||
p++; plen++;
|
||||
}
|
||||
|
||||
/* Return if not all bytes were used. */
|
||||
if (plen < slen)
|
||||
return REDIS_ERR;
|
||||
|
||||
if (negative) {
|
||||
if (v > ((unsigned long long)(-(LLONG_MIN+1))+1)) /* Overflow. */
|
||||
return REDIS_ERR;
|
||||
if (value != NULL) *value = -v;
|
||||
} else {
|
||||
if (v > LLONG_MAX) /* Overflow. */
|
||||
return REDIS_ERR;
|
||||
if (value != NULL) *value = v;
|
||||
}
|
||||
return REDIS_OK;
|
||||
}
|
||||
|
||||
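A few illustrative calls showing the strict contract described in the comment above. Since `string2ll()` is `static` inside read.c, treat this as a snippet that would live in the same file (or in a test compiled with it), not as a public API.

```c
#include <assert.h>
#include "read.h"   /* for REDIS_OK / REDIS_ERR */

static void string2ll_examples(void) {
    long long v;
    assert(string2ll("123", 3, &v) == REDIS_OK && v == 123);
    assert(string2ll("-9223372036854775808", 20, &v) == REDIS_OK);  /* LLONG_MIN */
    assert(string2ll("0123", 4, &v) == REDIS_ERR);   /* leading zero rejected */
    assert(string2ll(" 12", 3, &v) == REDIS_ERR);    /* stray space rejected */
    assert(string2ll("9223372036854775808", 19, &v) == REDIS_ERR); /* > LLONG_MAX */
}
```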
static char *readLine(redisReader *r, int *_len) {
|
||||
@ -198,7 +243,9 @@ static void moveToNextTask(redisReader *r) {
|
||||
|
||||
cur = &(r->rstack[r->ridx]);
|
||||
prv = &(r->rstack[r->ridx-1]);
|
||||
assert(prv->type == REDIS_REPLY_ARRAY);
|
||||
assert(prv->type == REDIS_REPLY_ARRAY ||
|
||||
prv->type == REDIS_REPLY_MAP ||
|
||||
prv->type == REDIS_REPLY_SET);
|
||||
if (cur->idx == prv->elements-1) {
|
||||
r->ridx--;
|
||||
} else {
|
||||
@ -220,10 +267,58 @@ static int processLineItem(redisReader *r) {
|
||||
|
||||
if ((p = readLine(r,&len)) != NULL) {
|
||||
if (cur->type == REDIS_REPLY_INTEGER) {
|
||||
if (r->fn && r->fn->createInteger)
|
||||
obj = r->fn->createInteger(cur,readLongLong(p));
|
||||
else
|
||||
if (r->fn && r->fn->createInteger) {
|
||||
long long v;
|
||||
if (string2ll(p, len, &v) == REDIS_ERR) {
|
||||
__redisReaderSetError(r,REDIS_ERR_PROTOCOL,
|
||||
"Bad integer value");
|
||||
return REDIS_ERR;
|
||||
}
|
||||
obj = r->fn->createInteger(cur,v);
|
||||
} else {
|
||||
obj = (void*)REDIS_REPLY_INTEGER;
|
||||
}
|
||||
} else if (cur->type == REDIS_REPLY_DOUBLE) {
|
||||
if (r->fn && r->fn->createDouble) {
|
||||
char buf[326], *eptr;
|
||||
double d;
|
||||
|
||||
if ((size_t)len >= sizeof(buf)) {
|
||||
__redisReaderSetError(r,REDIS_ERR_PROTOCOL,
|
||||
"Double value is too large");
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
memcpy(buf,p,len);
|
||||
buf[len] = '\0';
|
||||
|
||||
if (strcasecmp(buf,",inf") == 0) {
|
||||
d = 1.0/0.0; /* Positive infinite. */
|
||||
} else if (strcasecmp(buf,",-inf") == 0) {
|
||||
d = -1.0/0.0; /* Negative infinite. */
|
||||
} else {
|
||||
d = strtod((char*)buf,&eptr);
|
||||
if (buf[0] == '\0' || eptr[0] != '\0' || isnan(d)) {
|
||||
__redisReaderSetError(r,REDIS_ERR_PROTOCOL,
|
||||
"Bad double value");
|
||||
return REDIS_ERR;
|
||||
}
|
||||
}
|
||||
obj = r->fn->createDouble(cur,d,buf,len);
|
||||
} else {
|
||||
obj = (void*)REDIS_REPLY_DOUBLE;
|
||||
}
|
||||
} else if (cur->type == REDIS_REPLY_NIL) {
|
||||
if (r->fn && r->fn->createNil)
|
||||
obj = r->fn->createNil(cur);
|
||||
else
|
||||
obj = (void*)REDIS_REPLY_NIL;
|
||||
} else if (cur->type == REDIS_REPLY_BOOL) {
|
||||
int bval = p[0] == 't' || p[0] == 'T';
|
||||
if (r->fn && r->fn->createBool)
|
||||
obj = r->fn->createBool(cur,bval);
|
||||
else
|
||||
obj = (void*)REDIS_REPLY_BOOL;
|
||||
} else {
|
||||
/* Type will be error or status. */
|
||||
if (r->fn && r->fn->createString)
|
||||
@ -250,7 +345,7 @@ static int processBulkItem(redisReader *r) {
|
||||
redisReadTask *cur = &(r->rstack[r->ridx]);
|
||||
void *obj = NULL;
|
||||
char *p, *s;
|
||||
long len;
|
||||
long long len;
|
||||
unsigned long bytelen;
|
||||
int success = 0;
|
||||
|
||||
@ -259,9 +354,20 @@ static int processBulkItem(redisReader *r) {
|
||||
if (s != NULL) {
|
||||
p = r->buf+r->pos;
|
||||
bytelen = s-(r->buf+r->pos)+2; /* include \r\n */
|
||||
len = readLongLong(p);
|
||||
|
||||
if (len < 0) {
|
||||
if (string2ll(p, bytelen - 2, &len) == REDIS_ERR) {
|
||||
__redisReaderSetError(r,REDIS_ERR_PROTOCOL,
|
||||
"Bad bulk string length");
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
if (len < -1 || (LLONG_MAX > SIZE_MAX && len > (long long)SIZE_MAX)) {
|
||||
__redisReaderSetError(r,REDIS_ERR_PROTOCOL,
|
||||
"Bulk string length out of range");
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
if (len == -1) {
|
||||
/* The nil object can always be created. */
|
||||
if (r->fn && r->fn->createNil)
|
||||
obj = r->fn->createNil(cur);
|
||||
@ -299,12 +405,13 @@ static int processBulkItem(redisReader *r) {
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
static int processMultiBulkItem(redisReader *r) {
|
||||
/* Process the array, map and set types. */
|
||||
static int processAggregateItem(redisReader *r) {
|
||||
redisReadTask *cur = &(r->rstack[r->ridx]);
|
||||
void *obj;
|
||||
char *p;
|
||||
long elements;
|
||||
int root = 0;
|
||||
long long elements;
|
||||
int root = 0, len;
|
||||
|
||||
/* Set error for nested multi bulks with depth > 7 */
|
||||
if (r->ridx == 8) {
|
||||
@ -313,10 +420,21 @@ static int processMultiBulkItem(redisReader *r) {
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
if ((p = readLine(r,NULL)) != NULL) {
|
||||
elements = readLongLong(p);
|
||||
if ((p = readLine(r,&len)) != NULL) {
|
||||
if (string2ll(p, len, &elements) == REDIS_ERR) {
|
||||
__redisReaderSetError(r,REDIS_ERR_PROTOCOL,
|
||||
"Bad multi-bulk length");
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
root = (r->ridx == 0);
|
||||
|
||||
if (elements < -1 || elements > INT_MAX) {
|
||||
__redisReaderSetError(r,REDIS_ERR_PROTOCOL,
|
||||
"Multi-bulk length out of range");
|
||||
return REDIS_ERR;
|
||||
}
|
||||
|
||||
if (elements == -1) {
|
||||
if (r->fn && r->fn->createNil)
|
||||
obj = r->fn->createNil(cur);
|
||||
@ -330,10 +448,12 @@ static int processMultiBulkItem(redisReader *r) {
|
||||
|
||||
moveToNextTask(r);
|
||||
} else {
|
||||
if (cur->type == REDIS_REPLY_MAP) elements *= 2;
|
||||
|
||||
if (r->fn && r->fn->createArray)
|
||||
obj = r->fn->createArray(cur,elements);
|
||||
else
|
||||
obj = (void*)REDIS_REPLY_ARRAY;
|
||||
obj = (void*)(long)cur->type;
|
||||
|
||||
if (obj == NULL) {
|
||||
__redisReaderSetErrorOOM(r);
|
||||
@ -381,12 +501,27 @@ static int processItem(redisReader *r) {
|
||||
case ':':
|
||||
cur->type = REDIS_REPLY_INTEGER;
|
||||
break;
|
||||
case ',':
|
||||
cur->type = REDIS_REPLY_DOUBLE;
|
||||
break;
|
||||
case '_':
|
||||
cur->type = REDIS_REPLY_NIL;
|
||||
break;
|
||||
case '$':
|
||||
cur->type = REDIS_REPLY_STRING;
|
||||
break;
|
||||
case '*':
|
||||
cur->type = REDIS_REPLY_ARRAY;
|
||||
break;
|
||||
case '%':
|
||||
cur->type = REDIS_REPLY_MAP;
|
||||
break;
|
||||
case '~':
|
||||
cur->type = REDIS_REPLY_SET;
|
||||
break;
|
||||
case '#':
|
||||
cur->type = REDIS_REPLY_BOOL;
|
||||
break;
|
||||
default:
|
||||
__redisReaderSetErrorProtocolByte(r,*p);
|
||||
return REDIS_ERR;
|
||||
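A quick illustration of the new RESP3 type bytes handled in the switch above, using the public reader API (the same pattern as the tests later in this diff); the include path assumes installed hiredis headers.

```c
#include <assert.h>
#include <hiredis/hiredis.h>   /* redisReaderCreate(), redisReply */

void parse_resp3_double(void) {
    redisReader *reader = redisReaderCreate();
    void *reply = NULL;

    /* ',' introduces a double in RESP3, e.g. ",3.14159\r\n". */
    redisReaderFeed(reader, ",3.14159\r\n", 10);
    assert(redisReaderGetReply(reader, &reply) == REDIS_OK);
    assert(((redisReply*)reply)->type == REDIS_REPLY_DOUBLE);

    freeReplyObject(reply);
    redisReaderFree(reader);
}
```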
@ -402,11 +537,16 @@ static int processItem(redisReader *r) {
|
||||
case REDIS_REPLY_ERROR:
|
||||
case REDIS_REPLY_STATUS:
|
||||
case REDIS_REPLY_INTEGER:
|
||||
case REDIS_REPLY_DOUBLE:
|
||||
case REDIS_REPLY_NIL:
|
||||
case REDIS_REPLY_BOOL:
|
||||
return processLineItem(r);
|
||||
case REDIS_REPLY_STRING:
|
||||
return processBulkItem(r);
|
||||
case REDIS_REPLY_ARRAY:
|
||||
return processMultiBulkItem(r);
|
||||
case REDIS_REPLY_MAP:
|
||||
case REDIS_REPLY_SET:
|
||||
return processAggregateItem(r);
|
||||
default:
|
||||
assert(NULL);
|
||||
return REDIS_ERR; /* Avoid warning. */
|
||||
@ -416,12 +556,10 @@ static int processItem(redisReader *r) {
|
||||
redisReader *redisReaderCreateWithFunctions(redisReplyObjectFunctions *fn) {
|
||||
redisReader *r;
|
||||
|
||||
r = calloc(sizeof(redisReader),1);
|
||||
r = calloc(1,sizeof(redisReader));
|
||||
if (r == NULL)
|
||||
return NULL;
|
||||
|
||||
r->err = 0;
|
||||
r->errstr[0] = '\0';
|
||||
r->fn = fn;
|
||||
r->buf = sdsempty();
|
||||
r->maxbuf = REDIS_READER_MAX_BUF;
|
||||
@ -435,10 +573,11 @@ redisReader *redisReaderCreateWithFunctions(redisReplyObjectFunctions *fn) {
|
||||
}
|
||||
|
||||
void redisReaderFree(redisReader *r) {
|
||||
if (r == NULL)
|
||||
return;
|
||||
if (r->reply != NULL && r->fn && r->fn->freeObject)
|
||||
r->fn->freeObject(r->reply);
|
||||
if (r->buf != NULL)
|
||||
sdsfree(r->buf);
|
||||
sdsfree(r->buf);
|
||||
free(r);
|
||||
}
|
||||
|
||||
|
deps/hiredis/read.h (vendored, 10 changes)
@ -53,6 +53,14 @@
|
||||
#define REDIS_REPLY_NIL 4
|
||||
#define REDIS_REPLY_STATUS 5
|
||||
#define REDIS_REPLY_ERROR 6
|
||||
#define REDIS_REPLY_DOUBLE 7
|
||||
#define REDIS_REPLY_BOOL 8
|
||||
#define REDIS_REPLY_VERB 9
|
||||
#define REDIS_REPLY_MAP 9
|
||||
#define REDIS_REPLY_SET 10
|
||||
#define REDIS_REPLY_ATTR 11
|
||||
#define REDIS_REPLY_PUSH 12
|
||||
#define REDIS_REPLY_BIGNUM 13
|
||||
|
||||
#define REDIS_READER_MAX_BUF (1024*16) /* Default max unused reader buffer. */
|
||||
|
||||
@ -73,7 +81,9 @@ typedef struct redisReplyObjectFunctions {
|
||||
void *(*createString)(const redisReadTask*, char*, size_t);
|
||||
void *(*createArray)(const redisReadTask*, int);
|
||||
void *(*createInteger)(const redisReadTask*, long long);
|
||||
void *(*createDouble)(const redisReadTask*, double, char*, size_t);
|
||||
void *(*createNil)(const redisReadTask*);
|
||||
void *(*createBool)(const redisReadTask*, int);
|
||||
void (*freeObject)(void*);
|
||||
} redisReplyObjectFunctions;
|
||||
|
||||
|
deps/hiredis/sds.c (vendored, 29 changes)
@ -219,7 +219,10 @@ sds sdsMakeRoomFor(sds s, size_t addlen) {
|
||||
hdrlen = sdsHdrSize(type);
|
||||
if (oldtype==type) {
|
||||
newsh = s_realloc(sh, hdrlen+newlen+1);
|
||||
if (newsh == NULL) return NULL;
|
||||
if (newsh == NULL) {
|
||||
s_free(sh);
|
||||
return NULL;
|
||||
}
|
||||
s = (char*)newsh+hdrlen;
|
||||
} else {
|
||||
/* Since the header size changes, need to move the string forward,
|
||||
@ -592,6 +595,7 @@ sds sdscatfmt(sds s, char const *fmt, ...) {
|
||||
/* Make sure there is always space for at least 1 char. */
|
||||
if (sdsavail(s)==0) {
|
||||
s = sdsMakeRoomFor(s,1);
|
||||
if (s == NULL) goto fmt_error;
|
||||
}
|
||||
|
||||
switch(*f) {
|
||||
@ -605,6 +609,7 @@ sds sdscatfmt(sds s, char const *fmt, ...) {
|
||||
l = (next == 's') ? strlen(str) : sdslen(str);
|
||||
if (sdsavail(s) < l) {
|
||||
s = sdsMakeRoomFor(s,l);
|
||||
if (s == NULL) goto fmt_error;
|
||||
}
|
||||
memcpy(s+i,str,l);
|
||||
sdsinclen(s,l);
|
||||
@ -621,6 +626,7 @@ sds sdscatfmt(sds s, char const *fmt, ...) {
|
||||
l = sdsll2str(buf,num);
|
||||
if (sdsavail(s) < l) {
|
||||
s = sdsMakeRoomFor(s,l);
|
||||
if (s == NULL) goto fmt_error;
|
||||
}
|
||||
memcpy(s+i,buf,l);
|
||||
sdsinclen(s,l);
|
||||
@ -638,6 +644,7 @@ sds sdscatfmt(sds s, char const *fmt, ...) {
|
||||
l = sdsull2str(buf,unum);
|
||||
if (sdsavail(s) < l) {
|
||||
s = sdsMakeRoomFor(s,l);
|
||||
if (s == NULL) goto fmt_error;
|
||||
}
|
||||
memcpy(s+i,buf,l);
|
||||
sdsinclen(s,l);
|
||||
@ -662,6 +669,10 @@ sds sdscatfmt(sds s, char const *fmt, ...) {
|
||||
/* Add null-term */
|
||||
s[i] = '\0';
|
||||
return s;
|
||||
|
||||
fmt_error:
|
||||
va_end(ap);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/* Remove the part of the string from left and from right composed just of
|
||||
@ -1018,10 +1029,18 @@ sds *sdssplitargs(const char *line, int *argc) {
|
||||
if (*p) p++;
|
||||
}
|
||||
/* add the token to the vector */
|
||||
vector = s_realloc(vector,((*argc)+1)*sizeof(char*));
|
||||
vector[*argc] = current;
|
||||
(*argc)++;
|
||||
current = NULL;
|
||||
{
|
||||
char **new_vector = s_realloc(vector,((*argc)+1)*sizeof(char*));
|
||||
if (new_vector == NULL) {
|
||||
s_free(vector);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
vector = new_vector;
|
||||
vector[*argc] = current;
|
||||
(*argc)++;
|
||||
current = NULL;
|
||||
}
|
||||
} else {
|
||||
/* Even on empty input string return something not NULL. */
|
||||
if (vector == NULL) vector = s_malloc(sizeof(void*));
|
||||
|
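With the sdssplitargs() change above, a NULL return now covers allocation failure as well as malformed quoting, so callers can keep a single error path. The snippet below is illustrative only (not from the diff); print_args is a hypothetical helper, and it assumes the sds.h header from this directory:

    #include <stdio.h>
    #include "sds.h"

    int print_args(const char *line) {
        int argc;
        sds *argv = sdssplitargs(line, &argc);   /* NULL on parse error or out of memory */
        if (argv == NULL) return -1;
        for (int i = 0; i < argc; i++)
            printf("arg[%d] = %s\n", i, argv[i]);
        sdsfreesplitres(argv, argc);             /* frees every token and the vector itself */
        return 0;
    }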
deps/hiredis/test.c (vendored, 154 changes)

@ -3,7 +3,9 @@
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netdb.h>
#include <assert.h>
#include <unistd.h>
#include <signal.h>

@ -91,7 +93,7 @@ static int disconnect(redisContext *c, int keep_fd) {
    return -1;
}

static redisContext *connect(struct config config) {
static redisContext *do_connect(struct config config) {
    redisContext *c = NULL;

    if (config.type == CONN_TCP) {

@ -248,7 +250,7 @@ static void test_append_formatted_commands(struct config config) {
    char *cmd;
    int len;

    c = connect(config);
    c = do_connect(config);

    test("Append format command: ");

@ -302,6 +304,82 @@ static void test_reply_reader(void) {
        strncasecmp(reader->errstr,"No support for",14) == 0);
    redisReaderFree(reader);

    test("Correctly parses LLONG_MAX: ");
    reader = redisReaderCreate();
    redisReaderFeed(reader, ":9223372036854775807\r\n",22);
    ret = redisReaderGetReply(reader,&reply);
    test_cond(ret == REDIS_OK &&
            ((redisReply*)reply)->type == REDIS_REPLY_INTEGER &&
            ((redisReply*)reply)->integer == LLONG_MAX);
    freeReplyObject(reply);
    redisReaderFree(reader);

    test("Set error when > LLONG_MAX: ");
    reader = redisReaderCreate();
    redisReaderFeed(reader, ":9223372036854775808\r\n",22);
    ret = redisReaderGetReply(reader,&reply);
    test_cond(ret == REDIS_ERR &&
            strcasecmp(reader->errstr,"Bad integer value") == 0);
    freeReplyObject(reply);
    redisReaderFree(reader);

    test("Correctly parses LLONG_MIN: ");
    reader = redisReaderCreate();
    redisReaderFeed(reader, ":-9223372036854775808\r\n",23);
    ret = redisReaderGetReply(reader,&reply);
    test_cond(ret == REDIS_OK &&
            ((redisReply*)reply)->type == REDIS_REPLY_INTEGER &&
            ((redisReply*)reply)->integer == LLONG_MIN);
    freeReplyObject(reply);
    redisReaderFree(reader);

    test("Set error when < LLONG_MIN: ");
    reader = redisReaderCreate();
    redisReaderFeed(reader, ":-9223372036854775809\r\n",23);
    ret = redisReaderGetReply(reader,&reply);
    test_cond(ret == REDIS_ERR &&
            strcasecmp(reader->errstr,"Bad integer value") == 0);
    freeReplyObject(reply);
    redisReaderFree(reader);

    test("Set error when array < -1: ");
    reader = redisReaderCreate();
    redisReaderFeed(reader, "*-2\r\n+asdf\r\n",12);
    ret = redisReaderGetReply(reader,&reply);
    test_cond(ret == REDIS_ERR &&
            strcasecmp(reader->errstr,"Multi-bulk length out of range") == 0);
    freeReplyObject(reply);
    redisReaderFree(reader);

    test("Set error when bulk < -1: ");
    reader = redisReaderCreate();
    redisReaderFeed(reader, "$-2\r\nasdf\r\n",11);
    ret = redisReaderGetReply(reader,&reply);
    test_cond(ret == REDIS_ERR &&
            strcasecmp(reader->errstr,"Bulk string length out of range") == 0);
    freeReplyObject(reply);
    redisReaderFree(reader);

    test("Set error when array > INT_MAX: ");
    reader = redisReaderCreate();
    redisReaderFeed(reader, "*9223372036854775807\r\n+asdf\r\n",29);
    ret = redisReaderGetReply(reader,&reply);
    test_cond(ret == REDIS_ERR &&
            strcasecmp(reader->errstr,"Multi-bulk length out of range") == 0);
    freeReplyObject(reply);
    redisReaderFree(reader);

#if LLONG_MAX > SIZE_MAX
    test("Set error when bulk > SIZE_MAX: ");
    reader = redisReaderCreate();
    redisReaderFeed(reader, "$9223372036854775807\r\nasdf\r\n",28);
    ret = redisReaderGetReply(reader,&reply);
    test_cond(ret == REDIS_ERR &&
            strcasecmp(reader->errstr,"Bulk string length out of range") == 0);
    freeReplyObject(reply);
    redisReaderFree(reader);
#endif

    test("Works with NULL functions for reply: ");
    reader = redisReaderCreate();
    reader->fn = NULL;

@ -358,18 +436,32 @@ static void test_free_null(void) {

static void test_blocking_connection_errors(void) {
    redisContext *c;
    struct addrinfo hints = {.ai_family = AF_INET};
    struct addrinfo *ai_tmp = NULL;
    const char *bad_domain = "idontexist.com";

    test("Returns error when host cannot be resolved: ");
    c = redisConnect((char*)"idontexist.test", 6379);
    test_cond(c->err == REDIS_ERR_OTHER &&
        (strcmp(c->errstr,"Name or service not known") == 0 ||
         strcmp(c->errstr,"Can't resolve: idontexist.test") == 0 ||
         strcmp(c->errstr,"nodename nor servname provided, or not known") == 0 ||
         strcmp(c->errstr,"No address associated with hostname") == 0 ||
         strcmp(c->errstr,"Temporary failure in name resolution") == 0 ||
         strcmp(c->errstr,"hostname nor servname provided, or not known") == 0 ||
         strcmp(c->errstr,"no address associated with name") == 0));
    redisFree(c);
    int rv = getaddrinfo(bad_domain, "6379", &hints, &ai_tmp);
    if (rv != 0) {
        // Address does *not* exist
        test("Returns error when host cannot be resolved: ");
        // First see if this domain name *actually* resolves to NXDOMAIN
        c = redisConnect("dontexist.com", 6379);
        test_cond(
            c->err == REDIS_ERR_OTHER &&
            (strcmp(c->errstr, "Name or service not known") == 0 ||
             strcmp(c->errstr, "Can't resolve: sadkfjaskfjsa.com") == 0 ||
             strcmp(c->errstr,
                    "nodename nor servname provided, or not known") == 0 ||
             strcmp(c->errstr, "No address associated with hostname") == 0 ||
             strcmp(c->errstr, "Temporary failure in name resolution") == 0 ||
             strcmp(c->errstr,
                    "hostname nor servname provided, or not known") == 0 ||
             strcmp(c->errstr, "no address associated with name") == 0));
        redisFree(c);
    } else {
        printf("Skipping NXDOMAIN test. Found evil ISP!\n");
        freeaddrinfo(ai_tmp);
    }

    test("Returns error when the port is not open: ");
    c = redisConnect((char*)"localhost", 1);

@ -387,7 +479,7 @@ static void test_blocking_connection(struct config config) {
    redisContext *c;
    redisReply *reply;

    c = connect(config);
    c = do_connect(config);

    test("Is able to deliver commands: ");
    reply = redisCommand(c,"PING");

@ -468,7 +560,7 @@ static void test_blocking_connection_timeouts(struct config config) {
    const char *cmd = "DEBUG SLEEP 3\r\n";
    struct timeval tv;

    c = connect(config);
    c = do_connect(config);
    test("Successfully completes a command when the timeout is not exceeded: ");
    reply = redisCommand(c,"SET foo fast");
    freeReplyObject(reply);

@ -480,7 +572,7 @@ static void test_blocking_connection_timeouts(struct config config) {
    freeReplyObject(reply);
    disconnect(c, 0);

    c = connect(config);
    c = do_connect(config);
    test("Does not return a reply when the command times out: ");
    s = write(c->fd, cmd, strlen(cmd));
    tv.tv_sec = 0;

@ -514,7 +606,7 @@ static void test_blocking_io_errors(struct config config) {
    int major, minor;

    /* Connect to target given by config. */
    c = connect(config);
    c = do_connect(config);
    {
        /* Find out Redis version to determine the path for the next test */
        const char *field = "redis_version:";

@ -549,7 +641,7 @@ static void test_blocking_io_errors(struct config config) {
        strcmp(c->errstr,"Server closed the connection") == 0);
    redisFree(c);

    c = connect(config);
    c = do_connect(config);
    test("Returns I/O error on socket timeout: ");
    struct timeval tv = { 0, 1000 };
    assert(redisSetTimeout(c,tv) == REDIS_OK);

@ -583,7 +675,7 @@ static void test_invalid_timeout_errors(struct config config) {
}

static void test_throughput(struct config config) {
    redisContext *c = connect(config);
    redisContext *c = do_connect(config);
    redisReply **replies;
    int i, num;
    long long t1, t2;

@ -616,6 +708,17 @@ static void test_throughput(struct config config) {
    free(replies);
    printf("\t(%dx LRANGE with 500 elements: %.3fs)\n", num, (t2-t1)/1000000.0);

    replies = malloc(sizeof(redisReply*)*num);
    t1 = usec();
    for (i = 0; i < num; i++) {
        replies[i] = redisCommand(c, "INCRBY incrkey %d", 1000000);
        assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_INTEGER);
    }
    t2 = usec();
    for (i = 0; i < num; i++) freeReplyObject(replies[i]);
    free(replies);
    printf("\t(%dx INCRBY: %.3fs)\n", num, (t2-t1)/1000000.0);

    num = 10000;
    replies = malloc(sizeof(redisReply*)*num);
    for (i = 0; i < num; i++)

@ -644,6 +747,19 @@ static void test_throughput(struct config config) {
    free(replies);
    printf("\t(%dx LRANGE with 500 elements (pipelined): %.3fs)\n", num, (t2-t1)/1000000.0);

    replies = malloc(sizeof(redisReply*)*num);
    for (i = 0; i < num; i++)
        redisAppendCommand(c,"INCRBY incrkey %d", 1000000);
    t1 = usec();
    for (i = 0; i < num; i++) {
        assert(redisGetReply(c, (void*)&replies[i]) == REDIS_OK);
        assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_INTEGER);
    }
    t2 = usec();
    for (i = 0; i < num; i++) freeReplyObject(replies[i]);
    free(replies);
    printf("\t(%dx INCRBY (pipelined): %.3fs)\n", num, (t2-t1)/1000000.0);

    disconnect(c, 0);
}
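The new pipelined INCRBY benchmark above follows the standard hiredis pattern of queueing commands with redisAppendCommand() and then draining replies with redisGetReply(). As a standalone sketch (not part of the diff), assuming a Redis server reachable on 127.0.0.1:6379:

    #include <assert.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) return 1;

        /* Queue the commands in the local output buffer first... */
        for (int i = 0; i < 1000; i++)
            redisAppendCommand(c, "INCRBY incrkey %d", 1000000);

        /* ...then read the replies back, in the same order. */
        for (int i = 0; i < 1000; i++) {
            redisReply *reply;
            assert(redisGetReply(c, (void**)&reply) == REDIS_OK);
            assert(reply->type == REDIS_REPLY_INTEGER);
            freeReplyObject(reply);
        }
        redisFree(c);
        return 0;
    }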
redis.conf (476 changes)

@ -264,59 +264,75 @@ dir ./

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
#   +------------------+      +---------------+
#   |      Master      | ---> |    Replica    |
#   | (receive writes) |      |  (exact copy) |
#   +------------------+      +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of slaves.
#    a given number of replicas.
# 2) Redis slaves are able to perform a partial resynchronization with the
# 2) Redis replicas are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition slaves automatically try to reconnect to masters
#    network partition replicas automatically try to reconnect to masters
#    and resynchronize with them.
#
# slaveof <masterip> <masterport>
# replicaof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
# refuse the replica request.
#
# masterauth <master-password>

# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# However this is not enough if you are using Redis ACLs (for Redis version
# 6 or greater), and the default user is not capable of running the PSYNC
# command and/or other commands needed for replication. In this case it's
# better to configure a special user to use with replication, and specify the
# masteruser configuration as such:
#
# masteruser <username>
#
# When masteruser is specified, the replica will authenticate against its
# master using the new AUTH form: AUTH <username> <password>.

# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO, SLAVEOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
#    but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
#    SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
#    COMMAND, POST, HOST: and LATENCY.
#
slave-serve-stale-data yes
replica-serve-stale-data yes

# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes
replica-read-only yes

# Replication SYNC strategy: disk or socket.
#

@ -324,25 +340,25 @@ slave-read-only yes
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#                 file on disk. Later the file is transferred by the parent
#                 process to the slaves incrementally.
#                 process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#              RDB file to slave sockets, without touching the disk at all.
#              RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication

@ -351,157 +367,264 @@ repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
# new replicas arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# Replicas send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_replica_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10
# repl-ping-replica-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica.
#
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# replica data when replicas are disconnected for some time, so that when a replica
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# resync is enough, just passing the portion of data the replica missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# The bigger the replication backlog, the longer the time the replica can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
# The backlog is only allocated once there is at least a replica connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected slaves for some time, the backlog
# After a master has no longer connected replicas for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# need to elapse, starting from the time the last replica disconnected, for
# the backlog buffer to be freed.
#
# Note that slaves never free the backlog for timeout, since they may be
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the slaves: hence they should always accumulate backlog.
# resynchronize" with the replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# The replica priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a replica to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100
replica-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
# N replicas connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
# The N replicas need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
# the last ping received from the replica, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.
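For reference, the two renamed directives above can also be changed at runtime with CONFIG SET. The snippet below is illustrative only and not part of the diff; it assumes a Redis 5 or newer server on 127.0.0.1:6379 and uses the hiredis client shown elsewhere in this commit:

    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) return 1;

        /* Require 3 acknowledged replicas with at most 10 seconds of lag. */
        redisReply *r = redisCommand(c, "CONFIG SET min-replicas-to-write %s", "3");
        if (r) { printf("min-replicas-to-write: %s\n", r->str); freeReplyObject(r); }

        r = redisCommand(c, "CONFIG SET min-replicas-max-lag %s", "10");
        if (r) { printf("min-replicas-max-lag: %s\n", r->str); freeReplyObject(r); }

        redisFree(c);
        return 0;
    }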
# A Redis master is able to list the address and port of the attached
# slaves in different ways. For example the "INFO replication" section
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover slave instances.
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a slave is obtained
# The listed IP and address normally reported by a replica is obtained
# in the following way:
#
#   IP: The address is auto detected by checking the peer address
#   of the socket used by the slave to connect with the master.
#   of the socket used by the replica to connect with the master.
#
#   Port: The port is communicated by the slave during the replication
#   handshake, and is normally the port that the slave is using to
#   list for connections.
#   Port: The port is communicated by the replica during the replication
#   handshake, and is normally the port that the replica is using to
#   listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the slave may be actually reachable via different IP and port
# pairs. The following two options can be used by a slave in order to
# used, the replica may be actually reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# slave-announce-ip 5.5.5.5
# slave-announce-port 1234
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
# 1 million passwords per second against a modern box. This means that you
# should use very strong passwords, otherwise they will be very easy to break.
# Note that because the password is really a shared secret between the client
# and the server, and should not be memorized by any human, the password
# can be easily a long string from /dev/urandom or whatever, so by using a
# long and unguessable password no brute force attack will be possible.

# Redis ACL users are defined in the following format:
#
# user <username> ... acl rules ...
#
# For example:
#
# user worker +@list +@connection ~jobs:* on >ffa9203c493aa99
#
# The special username "default" is used for new connections. If this user
# has the "nopass" rule, then new connections will be immediately authenticated
# as the "default" user without the need of any password provided via the
# AUTH command. Otherwise if the "default" user is not flagged with "nopass"
# the connections will start in not authenticated state, and will require
# AUTH (or the HELLO command AUTH option) in order to be authenticated and
# start to work.
#
# The ACL rules that describe what a user can do are the following:
#
#  on            Enable the user: it is possible to authenticate as this user.
#  off           Disable the user: it's no longer possible to authenticate
#                with this user, however the already authenticated connections
#                will still work.
#  +<command>    Allow the execution of that command
#  -<command>    Disallow the execution of that command
#  +@<category>  Allow the execution of all the commands in such category
#                with valid categories being like @admin, @set, @sortedset, ...
#                and so forth, see the full list in the server.c file where
#                the Redis command table is described and defined.
#                The special category @all means all the commands, both the
#                ones currently present in the server and the ones that will
#                be loaded in the future via modules.
#  +<command>|subcommand  Allow a specific subcommand of an otherwise
#                disabled command. Note that this form is not
#                allowed as negative like -DEBUG|SEGFAULT, but
#                only additive starting with "+".
#  allcommands   Alias for +@all. Note that it implies the ability to execute
#                all the future commands loaded via the modules system.
#  nocommands    Alias for -@all.
#  ~<pattern>    Add a pattern of keys that can be mentioned as part of
#                commands. For instance ~* allows all the keys. The pattern
#                is a glob-style pattern like the one of KEYS.
#                It is possible to specify multiple patterns.
#  allkeys       Alias for ~*
#  resetkeys     Flush the list of allowed keys patterns.
#  ><password>   Add this password to the list of valid passwords for the user.
#                For example >mypass will add "mypass" to the list.
#                This directive clears the "nopass" flag (see later).
#  <<password>   Remove this password from the list of valid passwords.
#  nopass        All the set passwords of the user are removed, and the user
#                is flagged as requiring no password: it means that every
#                password will work against this user. If this directive is
#                used for the default user, every new connection will be
#                immediately authenticated with the default user without
#                any explicit AUTH command required. Note that the "resetpass"
#                directive will clear this condition.
#  resetpass     Flush the list of allowed passwords. Moreover removes the
#                "nopass" status. After "resetpass" the user has no associated
#                passwords and there is no way to authenticate without adding
#                some password (or setting it as "nopass" later).
#  reset         Performs the following actions: resetpass, resetkeys, off,
#                -@all. The user returns to the same state it has immediately
#                after its creation.
#
# ACL rules can be specified in any order: for instance you can start with
# passwords, then flags, or key patterns. However note that the additive
# and subtractive rules will CHANGE MEANING depending on the ordering.
# For instance see the following example:
#
# user alice on +@all -DEBUG ~* >somepassword
#
# This will allow "alice" to use all the commands with the exception of the
# DEBUG command, since +@all added all the commands to the set of the commands
# alice can use, and later DEBUG was removed. However if we invert the order
# of two ACL rules the result will be different:
#
# user alice on -DEBUG +@all ~* >somepassword
#
# Now DEBUG was removed when alice had yet no commands in the set of allowed
# commands, later all the commands are added, so the user will be able to
# execute everything.
#
# Basically ACL rules are processed left-to-right.
#
# For more information about ACL configuration please refer to
# the Redis web site at https://redis.io/topics/acl
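The same "alice" rule set shown above can also be applied at runtime rather than in redis.conf. The sketch below is illustrative only (not part of the diff); it assumes a Redis 6 or newer server that supports ACL SETUSER, reached via the hiredis client used elsewhere in this commit:

    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) return 1;

        /* Rules are processed left-to-right, exactly as in the config file. */
        redisReply *r = redisCommand(c,
            "ACL SETUSER alice on +@all -DEBUG ~* >somepassword");
        if (r) { printf("ACL SETUSER: %s\n", r->str); freeReplyObject(r); }

        redisFree(c);
        return 0;
    }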
# Using an external ACL file
#
# Instead of configuring users here in this file, it is possible to use
# a stand-alone file just listing users. The two methods cannot be mixed:
# if you configure users here and at the same time you activate the external
# ACL file, the server will refuse to start.
#
# The format of the external ACL user file is exactly the same as the
# format that is used inside redis.conf to describe users.
#
# aclfile /etc/redis/users.acl

# IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatibility
# layer on top of the new ACL system. The option effect will be just setting
# the password for the default user. Clients will still authenticate using
# AUTH <password> as usual, or more explicitly with AUTH default <password>
# if they follow the new protocol: both will work.
#
# requirepass foobared

# Command renaming.
# Command renaming (DEPRECATED).
#
# ------------------------------------------------------------------------
# WARNING: avoid using this option if possible. Instead use ACLs to remove
# commands from the default user, and put them only in some admin user you
# create for administrative purposes.
# ------------------------------------------------------------------------
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something

@ -518,7 +641,7 @@ slave-priority 100
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
# AOF file or transmitted to replicas may cause problems.

################################### CLIENTS ####################################

@ -547,15 +670,15 @@ slave-priority 100
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>

@ -602,6 +725,26 @@ slave-priority 100
#
# maxmemory-samples 5

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
#
# Note that since the replica by default does not evict, it may end up using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
#
# replica-ignore-maxmemory yes

############################# LAZY FREEING ####################################

# Redis has two primitives to delete keys. One is called DEL and is a blocking

@ -637,7 +780,7 @@ slave-priority 100
# or SORT with STORE option may delete existing keys. The SET command
# itself removes any old content of the specified key in order to replace
# it with the specified string.
# 4) During replication, when a slave performs a full resynchronization with
# 4) During replication, when a replica performs a full resynchronization with
#    its master, the content of the whole database is removed in order to
#    load the RDB file just transferred.
#

@ -649,7 +792,7 @@ slave-priority 100
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
replica-lazy-flush no

############################## APPEND ONLY MODE ###############################

@ -826,42 +969,42 @@ lua-time-limit 5000
#
# cluster-node-timeout 15000

# A slave of a failing master will avoid to start a failover if its data
# A replica of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have an exact measure of
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
#    in order to try to give an advantage to the slave with the best
# 1) If there are multiple replicas able to failover, they exchange messages
#    in order to try to give an advantage to the replica with the best
#    replication offset (more data from the master processed).
#    Slaves will try to get their rank by offset, and apply to the start
#    Replicas will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
# 2) Every single replica computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the slave will not try to failover
#    If the last interaction is too old, the replica will not try to failover
#    at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# The point "2" can be tuned by user. Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * slave-validity-factor) + repl-ping-slave-period
#   (node-timeout * replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# So for example if node-timeout is 30 seconds, and the replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# A large replica-validity-factor may allow replicas with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
# elect a replica at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# For maximum availability, it is possible to set the replica-validity-factor
# to a value of 0, which means, that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).

@ -869,22 +1012,22 @@ lua-time-limit 5000
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10
# cluster-replica-validity-factor 10

# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# Cluster replicas are able to migrate to orphaned masters, that are masters
# that are left without working replicas. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
# in case of failure if it has no working replicas.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. It usually reflects the number of replicas you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#

@ -903,7 +1046,7 @@ lua-time-limit 5000
#
# cluster-require-full-coverage yes

# This option, when set to yes, prevents slaves from trying to failover its
# This option, when set to yes, prevents replicas from trying to failover its
# master during master failures. However the master can still perform a
# manual failover, if forced to do so.
#

@ -911,7 +1054,7 @@ lua-time-limit 5000
# data center operations, where we want one side to never be promoted if not
# in the case of a total DC failure.
#
# cluster-slave-no-failover no
# cluster-replica-no-failover no

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

@ -1040,6 +1183,61 @@ latency-monitor-threshold 0
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

############################### GOPHER SERVER #################################

# Redis contains an implementation of the Gopher protocol, as specified in
# the RFC 1436 (https://www.ietf.org/rfc/rfc1436.txt).
#
# The Gopher protocol was very popular in the late '90s. It is an alternative
# to the web, and the implementation both server and client side is so simple
# that the Redis server has just 100 lines of code in order to implement this
# support.
#
# What do you do with Gopher nowadays? Well Gopher never *really* died, and
# lately there is a movement in order for the Gopher more hierarchical content
# composed of just plain text documents to be resurrected. Some want a simpler
# internet, others believe that the mainstream internet became too much
# controlled, and it's cool to create an alternative space for people that
# want a bit of fresh air.
#
# Anyway for the 10th birthday of Redis, we gave it the Gopher protocol
# as a gift.
#
# --- HOW IT WORKS? ---
#
# The Redis Gopher support uses the inline protocol of Redis, and specifically
# two kinds of inline requests that were anyway illegal: an empty request
# or any request that starts with "/" (there are no Redis commands starting
# with such a slash). Normal RESP2/RESP3 requests are completely out of the
# path of the Gopher protocol implementation and are served as usual as well.
#
# If you open a connection to Redis when Gopher is enabled and send it
# a string like "/foo", if there is a key named "/foo" it is served via the
# Gopher protocol.
#
# In order to create a real Gopher "hole" (the name of a Gopher site in Gopher
# talking), you likely need a script like the following:
#
#   https://github.com/antirez/gopher2redis
#
# --- SECURITY WARNING ---
#
# If you plan to put Redis on the internet in a publicly accessible address
# to serve Gopher pages MAKE SURE TO SET A PASSWORD to the instance.
# Once a password is set:
#
#   1. The Gopher server (when enabled, not by default) will still serve
#      content via Gopher.
#   2. However other commands cannot be called before the client
#      authenticates.
#
# So use the 'requirepass' option to protect your instance.
#
# To enable Gopher support uncomment the following line and set
# the option from no (the default) to yes.
#
# gopher-enabled no

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a

@ -1145,7 +1343,7 @@ activerehashing yes
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# replica -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:

@ -1166,12 +1364,12 @@ activerehashing yes
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Client query buffers accumulate new commands. They are limited to a fixed

@ -1205,6 +1403,22 @@ client-output-buffer-limit pubsub 32mb 8mb 60
# 100 only in environments where very low latency is required.
hz 10

# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid processing too many clients for each background task invocation
# in order to avoid latency spikes.
#
# Since the default HZ value is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporarily rise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used
# as a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
runtest (2 changes)

@ -11,4 +11,4 @@ then
echo "You need tcl 8.5 or newer in order to run the Redis test"
exit 1
fi
$TCLSH tests/test_helper.tcl $*
$TCLSH tests/test_helper.tcl "${@}"
@ -20,6 +20,21 @@
# The port that this sentinel instance will run on
port 26379

# By default Redis Sentinel does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis-sentinel.pid when
# daemonized.
daemonize no

# When running daemonized, Redis Sentinel writes a pid file in
# /var/run/redis-sentinel.pid by default. You can specify a custom pid file
# location here.
pidfile /var/run/redis-sentinel.pid

# Specify the log file name. Also the empty string can be used to force
# Sentinel to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""

# sentinel announce-ip <ip>
# sentinel announce-port <port>
#

@ -58,11 +73,11 @@ dir /tmp
# be elected by the majority of the known Sentinels in order to
# start a failover, so no failover can be performed in minority.
#
# Slaves are auto-discovered, so you don't need to specify slaves in
# Replicas are auto-discovered, so you don't need to specify replicas in
# any way. Sentinel itself will rewrite this configuration file adding
# the slaves using additional configuration options.
# the replicas using additional configuration options.
# Also note that the configuration file is rewritten when a
# slave is promoted to master.
# replica is promoted to master.
#
# Note: master name should not include special characters or spaces.
# The valid charset is A-z 0-9 and the three characters ".-_".

@ -70,11 +85,11 @@ sentinel monitor mymaster 127.0.0.1 6379 2

# sentinel auth-pass <master-name> <password>
#
# Set the password to use to authenticate with the master and slaves.
# Set the password to use to authenticate with the master and replicas.
# Useful if there is a password set in the Redis instances to monitor.
#
# Note that the master password is also used for slaves, so it is not
# possible to set a different password in masters and slaves instances
# Note that the master password is also used for replicas, so it is not
# possible to set a different password in masters and replicas instances
# if you want to be able to monitor these instances with Sentinel.
#
# However you can have Redis instances without the authentication enabled

@ -89,7 +104,7 @@ sentinel monitor mymaster 127.0.0.1 6379 2

# sentinel down-after-milliseconds <master-name> <milliseconds>
#
# Number of milliseconds the master (or any attached slave or sentinel) should
# Number of milliseconds the master (or any attached replica or sentinel) should
# be unreachable (as in, not acceptable reply to PING, continuously, for the
# specified period) in order to consider it in S_DOWN state (Subjectively
# Down).

@ -97,11 +112,11 @@ sentinel monitor mymaster 127.0.0.1 6379 2
# Default is 30 seconds.
sentinel down-after-milliseconds mymaster 30000

# sentinel parallel-syncs <master-name> <numslaves>
# sentinel parallel-syncs <master-name> <numreplicas>
#
# How many slaves we can reconfigure to point to the new slave simultaneously
# during the failover. Use a low number if you use the slaves to serve query
# to avoid that all the slaves will be unreachable at about the same
# How many replicas we can reconfigure to point to the new replica simultaneously
# during the failover. Use a low number if you use the replicas to serve query
# to avoid that all the replicas will be unreachable at about the same
# time while performing the synchronization with the master.
sentinel parallel-syncs mymaster 1

@ -113,18 +128,18 @@ sentinel parallel-syncs mymaster 1
# already tried against the same master by a given Sentinel, is two
# times the failover timeout.
#
# - The time needed for a slave replicating to a wrong master according
# - The time needed for a replica replicating to a wrong master according
# to a Sentinel current configuration, to be forced to replicate
# with the right master, is exactly the failover timeout (counting since
# the moment a Sentinel detected the misconfiguration).
#
# - The time needed to cancel a failover that is already in progress but
# did not produced any configuration change (SLAVEOF NO ONE yet not
# acknowledged by the promoted slave).
# acknowledged by the promoted replica).
#
# - The maximum time a failover in progress waits for all the slaves to be
# reconfigured as slaves of the new master. However even after this time
# the slaves will be reconfigured by the Sentinels anyway, but not with
# - The maximum time a failover in progress waits for all the replicas to be
# reconfigured as replicas of the new master. However even after this time
# the replicas will be reconfigured by the Sentinels anyway, but not with
# the exact parallel-syncs progression as specified.
#
# Default is 3 minutes.

@ -185,7 +200,7 @@ sentinel failover-timeout mymaster 180000
# <role> is either "leader" or "observer"
#
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected slave
# the old address of the master and the new address of the elected replica
# (now a master).
#
# This script should be resistant to multiple invocations.

@ -213,12 +228,17 @@ sentinel deny-scripts-reconfig yes
#
# In such case it is possible to tell Sentinel to use different command names
# instead of the normal ones. For example if the master "mymaster", and the
# associated slaves, have "CONFIG" all renamed to "GUESSME", I could use:
# associated replicas, have "CONFIG" all renamed to "GUESSME", I could use:
#
# sentinel rename-command mymaster CONFIG GUESSME
# SENTINEL rename-command mymaster CONFIG GUESSME
#
# After such configuration is set, every time Sentinel would use CONFIG it will
# use GUESSME instead. Note that there is no actual need to respect the command
# case, so writing "config guessme" is the same in the example above.
#
# SENTINEL SET can also be used in order to perform this configuration at runtime.
#
# In order to set a command back to its original name (undo the renaming), it
# is possible to just rename a command to itsef:
#
# SENTINEL rename-command mymaster CONFIG CONFIG
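The sentinel.conf hunks above document how the down-after-milliseconds window feeds the subjective-down decision. A minimal C sketch of that check follows; it is an illustration only, not Sentinel's actual code, and the struct and function names are hypothetical.

#include <stdint.h>

typedef struct monitored_master {
    int64_t last_valid_ping_reply; /* ms timestamp of the last acceptable PING reply */
    int64_t down_after_ms;         /* e.g. 30000, as configured above */
    int s_down;                    /* subjectively down flag */
} monitored_master;

void checkSubjectivelyDown(monitored_master *m, int64_t now_ms) {
    /* The instance is S_DOWN only if it failed to reply acceptably to PING,
     * continuously, for at least down_after_ms milliseconds. */
    m->s_down = (now_ms - m->last_valid_ping_reply) > m->down_after_ms;
}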
24
src/Makefile
@ -21,6 +21,11 @@ NODEPS:=clean distclean

# Default settings
STD=-std=c99 -pedantic -DREDIS_STATIC=''
ifneq (,$(findstring clang,$(CC)))
ifneq (,$(findstring FreeBSD,$(uname_S)))
STD+=-Wno-c11-extensions
endif
endif
WARN=-Wall -W -Wno-missing-field-initializers
OPT=$(OPTIMIZATION)

@ -41,6 +46,10 @@ endif
# To get ARM stack traces if Redis crashes we need a special C flag.
ifneq (,$(filter aarch64 armv,$(uname_M)))
CFLAGS+=-funwind-tables
else
ifneq (,$(findstring armv,$(uname_M)))
CFLAGS+=-funwind-tables
endif
endif

# Backwards compatibility for selecting an allocator

@ -93,10 +102,20 @@ else
ifeq ($(uname_S),OpenBSD)
# OpenBSD
FINAL_LIBS+= -lpthread
ifeq ($(USE_BACKTRACE),yes)
FINAL_CFLAGS+= -DUSE_BACKTRACE -I/usr/local/include
FINAL_LDFLAGS+= -L/usr/local/lib
FINAL_LIBS+= -lexecinfo
endif

else
ifeq ($(uname_S),FreeBSD)
# FreeBSD
FINAL_LIBS+= -lpthread
FINAL_LIBS+= -lpthread -lexecinfo
else
ifeq ($(uname_S),DragonFly)
# FreeBSD
FINAL_LIBS+= -lpthread -lexecinfo
else
# All the other OSes (notably Linux)
FINAL_LDFLAGS+= -rdynamic

@ -106,6 +125,7 @@ endif
endif
endif
endif
endif
# Include paths to dependencies
FINAL_CFLAGS+= -I../deps/hiredis -I../deps/linenoise -I../deps/lua/src

@ -144,7 +164,7 @@ endif

REDIS_SERVER_NAME=redis-server
REDIS_SENTINEL_NAME=redis-sentinel
REDIS_SERVER_OBJ=adlist.o quicklist.o ae.o anet.o dict.o server.o sds.o zmalloc.o lzf_c.o lzf_d.o pqsort.o zipmap.o sha1.o ziplist.o release.o networking.o util.o object.o db.o replication.o rdb.o t_string.o t_list.o t_set.o t_zset.o t_hash.o config.o aof.o pubsub.o multi.o debug.o sort.o intset.o syncio.o cluster.o crc16.o endianconv.o slowlog.o scripting.o bio.o rio.o rand.o memtest.o crc64.o bitops.o sentinel.o notify.o setproctitle.o blocked.o hyperloglog.o latency.o sparkline.o redis-check-rdb.o redis-check-aof.o geo.o lazyfree.o module.o evict.o expire.o geohash.o geohash_helper.o childinfo.o defrag.o siphash.o rax.o t_stream.o listpack.o localtime.o
REDIS_SERVER_OBJ=adlist.o quicklist.o ae.o anet.o dict.o server.o sds.o zmalloc.o lzf_c.o lzf_d.o pqsort.o zipmap.o sha1.o ziplist.o release.o networking.o util.o object.o db.o replication.o rdb.o t_string.o t_list.o t_set.o t_zset.o t_hash.o config.o aof.o pubsub.o multi.o debug.o sort.o intset.o syncio.o cluster.o crc16.o endianconv.o slowlog.o scripting.o bio.o rio.o rand.o memtest.o crc64.o bitops.o sentinel.o notify.o setproctitle.o blocked.o hyperloglog.o latency.o sparkline.o redis-check-rdb.o redis-check-aof.o geo.o lazyfree.o module.o evict.o expire.o geohash.o geohash_helper.o childinfo.o defrag.o siphash.o rax.o t_stream.o listpack.o localtime.o lolwut.o lolwut5.o acl.o gopher.o
REDIS_CLI_NAME=redis-cli
REDIS_CLI_OBJ=anet.o adlist.o dict.o redis-cli.o zmalloc.o release.o anet.o ae.o crc64.o siphash.o crc16.o
REDIS_BENCHMARK_NAME=redis-benchmark
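The Makefile hunks above link -lexecinfo on FreeBSD, DragonFly and (with USE_BACKTRACE) OpenBSD because the backtrace API lives in libexecinfo there, while on Linux it is in libc. A small standalone demo of that API follows; it is not part of the commit, only an illustration of why the extra library is needed.

#include <execinfo.h>
#include <unistd.h>

int main(void) {
    void *frames[32];
    int n = backtrace(frames, 32);                   /* capture the current call stack */
    backtrace_symbols_fd(frames, n, STDOUT_FILENO);  /* print symbolized frames */
    return 0;
}

Build sketch: cc demo.c on Linux, cc demo.c -lexecinfo on the BSDs listed above.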
2
src/ae.c
@ -351,8 +351,8 @@ static int processTimeEvents(aeEventLoop *eventLoop) {
* if flags has AE_FILE_EVENTS set, file events are processed.
* if flags has AE_TIME_EVENTS set, time events are processed.
* if flags has AE_DONT_WAIT set the function returns ASAP until all
* if flags has AE_CALL_AFTER_SLEEP set, the aftersleep callback is called.
* the events that's possible to process without to wait are processed.
* if flags has AE_CALL_AFTER_SLEEP set, the aftersleep callback is called.
*
* The function returns the number of events processed. */
int aeProcessEvents(aeEventLoop *eventLoop, int flags)
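The ae.c hunk above only reorders the doc comment of aeProcessEvents(). As a reading aid, here is a hedged sketch of how those flags map to a call site, written against the flag names declared in ae.h; the wrapper function is hypothetical and not code from the commit.

#include "ae.h"

void drainPendingEvents(aeEventLoop *loop) {
    /* AE_DONT_WAIT: return ASAP, processing only what is ready right now;
     * AE_CALL_AFTER_SLEEP: also invoke the aftersleep callback if one is set. */
    int processed = aeProcessEvents(loop,
        AE_FILE_EVENTS | AE_TIME_EVENTS | AE_DONT_WAIT | AE_CALL_AFTER_SLEEP);
    (void)processed; /* number of events handled in this pass */
}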
86
src/aof.c
@ -204,7 +204,7 @@ void aof_background_fsync(int fd) {
}

/* Kills an AOFRW child process if exists */
static void killAppendOnlyChild(void) {
void killAppendOnlyChild(void) {
int statloc;
/* No AOFRW child? return. */
if (server.aof_child_pid == -1) return;

@ -221,6 +221,8 @@ static void killAppendOnlyChild(void) {
server.aof_rewrite_time_start = -1;
/* Close pipes used for IPC between the two processes. */
aofClosePipes();
closeChildInfoPipe();
updateDictResizePolicy();
}

/* Called when the user switches from "appendonly yes" to "appendonly no"

@ -645,6 +647,8 @@ struct client *createFakeClient(void) {
c->obuf_soft_limit_reached_time = 0;
c->watched_keys = listCreate();
c->peerid = NULL;
c->resp = 2;
c->user = NULL;
listSetFreeMethod(c->reply,freeClientReplyValue);
listSetDupMethod(c->reply,dupClientReplyValue);
initClientMultiState(c);

@ -677,6 +681,7 @@ int loadAppendOnlyFile(char *filename) {
int old_aof_state = server.aof_state;
long loops = 0;
off_t valid_up_to = 0; /* Offset of latest well-formed command loaded. */
off_t valid_before_multi = 0; /* Offset before MULTI command loaded. */

if (fp == NULL) {
serverLog(LL_WARNING,"Fatal error: can't open the append log file for reading: %s",strerror(errno));

@ -777,16 +782,28 @@ int loadAppendOnlyFile(char *filename) {
/* Command lookup */
cmd = lookupCommand(argv[0]->ptr);
if (!cmd) {
serverLog(LL_WARNING,"Unknown command '%s' reading the append only file", (char*)argv[0]->ptr);
serverLog(LL_WARNING,
"Unknown command '%s' reading the append only file",
(char*)argv[0]->ptr);
exit(1);
}

if (cmd == server.multiCommand) valid_before_multi = valid_up_to;

/* Run the command in the context of a fake client */
fakeClient->cmd = cmd;
cmd->proc(fakeClient);
if (fakeClient->flags & CLIENT_MULTI &&
fakeClient->cmd->proc != execCommand)
{
queueMultiCommand(fakeClient);
} else {
cmd->proc(fakeClient);
}

/* The fake client should not have a reply */
serverAssert(fakeClient->bufpos == 0 && listLength(fakeClient->reply) == 0);
serverAssert(fakeClient->bufpos == 0 &&
listLength(fakeClient->reply) == 0);

/* The fake client should never get blocked */
serverAssert((fakeClient->flags & CLIENT_BLOCKED) == 0);

@ -798,8 +815,15 @@ int loadAppendOnlyFile(char *filename) {
}

/* This point can only be reached when EOF is reached without errors.
* If the client is in the middle of a MULTI/EXEC, log error and quit. */
if (fakeClient->flags & CLIENT_MULTI) goto uxeof;
* If the client is in the middle of a MULTI/EXEC, handle it as it was
* a short read, even if technically the protocol is correct: we want
* to remove the unprocessed tail and continue. */
if (fakeClient->flags & CLIENT_MULTI) {
serverLog(LL_WARNING,
"Revert incomplete MULTI/EXEC transaction in AOF file");
valid_up_to = valid_before_multi;
goto uxeof;
}

loaded_ok: /* DB loaded, cleanup and return C_OK to the caller. */
fclose(fp);

@ -1119,25 +1143,47 @@ int rewriteStreamObject(rio *r, robj *key, robj *o) {
streamID id;
int64_t numfields;

/* Reconstruct the stream data using XADD commands. */
while(streamIteratorGetID(&si,&id,&numfields)) {
/* Emit a two elements array for each item. The first is
* the ID, the second is an array of field-value pairs. */
if (s->length) {
/* Reconstruct the stream data using XADD commands. */
while(streamIteratorGetID(&si,&id,&numfields)) {
/* Emit a two elements array for each item. The first is
* the ID, the second is an array of field-value pairs. */

/* Emit the XADD <key> <id> ...fields... command. */
if (rioWriteBulkCount(r,'*',3+numfields*2) == 0) return 0;
/* Emit the XADD <key> <id> ...fields... command. */
if (rioWriteBulkCount(r,'*',3+numfields*2) == 0) return 0;
if (rioWriteBulkString(r,"XADD",4) == 0) return 0;
if (rioWriteBulkObject(r,key) == 0) return 0;
if (rioWriteBulkStreamID(r,&id) == 0) return 0;
while(numfields--) {
unsigned char *field, *value;
int64_t field_len, value_len;
streamIteratorGetField(&si,&field,&value,&field_len,&value_len);
if (rioWriteBulkString(r,(char*)field,field_len) == 0) return 0;
if (rioWriteBulkString(r,(char*)value,value_len) == 0) return 0;
}
}
} else {
/* Use the XADD MAXLEN 0 trick to generate an empty stream if
* the key we are serializing is an empty string, which is possible
* for the Stream type. */
if (rioWriteBulkCount(r,'*',7) == 0) return 0;
if (rioWriteBulkString(r,"XADD",4) == 0) return 0;
if (rioWriteBulkObject(r,key) == 0) return 0;
if (rioWriteBulkStreamID(r,&id) == 0) return 0;
while(numfields--) {
unsigned char *field, *value;
int64_t field_len, value_len;
streamIteratorGetField(&si,&field,&value,&field_len,&value_len);
if (rioWriteBulkString(r,(char*)field,field_len) == 0) return 0;
if (rioWriteBulkString(r,(char*)value,value_len) == 0) return 0;
}
if (rioWriteBulkString(r,"MAXLEN",6) == 0) return 0;
if (rioWriteBulkString(r,"0",1) == 0) return 0;
if (rioWriteBulkStreamID(r,&s->last_id) == 0) return 0;
if (rioWriteBulkString(r,"x",1) == 0) return 0;
if (rioWriteBulkString(r,"y",1) == 0) return 0;
}

/* Append XSETID after XADD, make sure lastid is correct,
* in case of XDEL lastid. */
if (rioWriteBulkCount(r,'*',3) == 0) return 0;
if (rioWriteBulkString(r,"XSETID",6) == 0) return 0;
if (rioWriteBulkObject(r,key) == 0) return 0;
if (rioWriteBulkStreamID(r,&s->last_id) == 0) return 0;

/* Create all the stream consumer groups. */
if (s->cgroups) {
raxIterator ri;
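The aof.c hunk above rewrites an empty stream with the "XADD ... MAXLEN 0" trick plus a trailing XSETID. The following standalone demo shows the equivalent commands issued through the hiredis client; it is an illustration, not code from the commit, and it assumes a Redis server listening on 127.0.0.1:6379.

#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* MAXLEN 0 trims the entry right after it is added, so the stream ends
     * up existing, empty, with its last generated ID set to 5-5. */
    redisReply *r = redisCommand(c, "XADD emptystream MAXLEN 0 5-5 x y");
    if (r) freeReplyObject(r);

    /* XSETID then pins the exact last ID, covering streams whose tail
     * entries were removed with XDEL. */
    r = redisCommand(c, "XSETID emptystream 5-5");
    if (r) freeReplyObject(r);

    r = redisCommand(c, "XLEN emptystream");
    if (r) { printf("XLEN: %lld\n", r->integer); freeReplyObject(r); }

    redisFree(c);
    return 0;
}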
@ -1,7 +1,7 @@
/* This file implements atomic counters using __atomic or __sync macros if
* available, otherwise synchronizing different threads using a mutex.
*
* The exported interaface is composed of three macros:
* The exported interface is composed of three macros:
*
* atomicIncr(var,count) -- Increment the atomic counter
* atomicGetIncr(var,oldvalue_var,count) -- Get and increment the atomic counter
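The atomic-counters hunk above (src/atomicvar.h) only fixes a typo in the doc comment. For readers unfamiliar with the lock-free path those macros target, here is a sketch using the GCC/Clang __atomic builtins; the wrapper names are mine, not the macros from the header.

#include <stdio.h>

static long counter = 0;

static void incrCounter(long delta) {
    __atomic_fetch_add(&counter, delta, __ATOMIC_RELAXED);
}

static long getAndIncrCounter(long delta) {
    /* Returns the value observed before adding, like atomicGetIncr(). */
    return __atomic_fetch_add(&counter, delta, __ATOMIC_RELAXED);
}

int main(void) {
    incrCounter(10);
    long old = getAndIncrCounter(5);
    printf("old = %ld, new = %ld\n", old, counter);
    return 0;
}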
@ -17,7 +17,7 @@
*
* The design is trivial, we have a structure representing a job to perform
* and a different thread and job queue for every job type.
* Every thread wait for new jobs in its queue, and process every job
* Every thread waits for new jobs in its queue, and process every job
* sequentially.
*
* Jobs of the same type are guaranteed to be processed from the least

@ -204,14 +204,14 @@ void *bioProcessBackgroundJobs(void *arg) {
}
zfree(job);

/* Unblock threads blocked on bioWaitStepOfType() if any. */
pthread_cond_broadcast(&bio_step_cond[type]);

/* Lock again before reiterating the loop, if there are no longer
* jobs to process we'll block again in pthread_cond_wait(). */
pthread_mutex_lock(&bio_mutex[type]);
listDelNode(bio_jobs[type],ln);
bio_pending[type]--;

/* Unblock threads blocked on bioWaitStepOfType() if any. */
pthread_cond_broadcast(&bio_step_cond[type]);
}
}
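The bio.c hunk above moves the condition-variable broadcast so that waiters are only woken after the finished job has been unlinked and the pending counter decremented, all under the mutex. A generic pthread sketch of that single-worker-per-queue pattern follows; names and structure are simplified stand-ins, not the commit's code.

#include <pthread.h>

static pthread_mutex_t q_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_step  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  q_new   = PTHREAD_COND_INITIALIZER;
static int pending = 0;  /* only touched while holding q_mutex */

void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&q_mutex);
    while (1) {
        while (pending == 0) pthread_cond_wait(&q_new, &q_mutex);
        pthread_mutex_unlock(&q_mutex);

        /* ... perform the job outside the lock (one worker per queue) ... */

        /* Re-acquire the lock, unlink the job and update the counter first,
         * and only then wake anyone waiting for this "step" to complete. */
        pthread_mutex_lock(&q_mutex);
        pending--;
        pthread_cond_broadcast(&q_step);
    }
    return NULL;
}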
@ -1002,7 +1002,7 @@ void bitfieldCommand(client *c) {
highest_write_offset)) == NULL) return;
}

addReplyMultiBulkLen(c,numops);
addReplyArrayLen(c,numops);

/* Actually process the operations. */
for (j = 0; j < numops; j++) {

@ -1047,7 +1047,7 @@ void bitfieldCommand(client *c) {
setSignedBitfield(o->ptr,thisop->offset,
thisop->bits,newval);
} else {
addReply(c,shared.nullbulk);
addReplyNull(c);
}
} else {
uint64_t oldval, newval, wrapped, retval;

@ -1076,7 +1076,7 @@ void bitfieldCommand(client *c) {
setUnsignedBitfield(o->ptr,thisop->offset,
thisop->bits,newval);
} else {
addReply(c,shared.nullbulk);
addReplyNull(c);
}
}
changes++;
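The bitops.c hunk above swaps the hard-coded RESP2 null replies for protocol-aware helpers. As a reading aid, here is a sketch of what those helpers abstract: RESP2 has distinct null encodings for bulk strings and arrays, while RESP3 uses a single null type. This is an illustration, not the networking.c implementation.

#include <stdio.h>

static const char *nullBulkFor(int resp)  { return resp == 2 ? "$-1\r\n" : "_\r\n"; }
static const char *nullArrayFor(int resp) { return resp == 2 ? "*-1\r\n" : "_\r\n"; }

int main(void) {
    printf("RESP2 null bulk:  %s", nullBulkFor(2));
    printf("RESP2 null array: %s", nullArrayFor(2));
    printf("RESP3 null:       %s", nullBulkFor(3));
    return 0;
}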
@ -126,12 +126,37 @@ void processUnblockedClients(void) {
* the code is conceptually more correct this way. */
if (!(c->flags & CLIENT_BLOCKED)) {
if (c->querybuf && sdslen(c->querybuf) > 0) {
processInputBuffer(c);
processInputBufferAndReplicate(c);
}
}
}
}

/* This function will schedule the client for reprocessing at a safe time.
*
* This is useful when a client was blocked for some reason (blocking opeation,
* CLIENT PAUSE, or whatever), because it may end with some accumulated query
* buffer that needs to be processed ASAP:
*
* 1. When a client is blocked, its readable handler is still active.
* 2. However in this case it only gets data into the query buffer, but the
* query is not parsed or executed once there is enough to proceed as
* usually (because the client is blocked... so we can't execute commands).
* 3. When the client is unblocked, without this function, the client would
* have to write some query in order for the readable handler to finally
* call processQueryBuffer*() on it.
* 4. With this function instead we can put the client in a queue that will
* process it for queries ready to be executed at a safe time.
*/
void queueClientForReprocessing(client *c) {
/* The client may already be into the unblocked list because of a previous
* blocking operation, don't add back it into the list multiple times. */
if (!(c->flags & CLIENT_UNBLOCKED)) {
c->flags |= CLIENT_UNBLOCKED;
listAddNodeTail(server.unblocked_clients,c);
}
}

/* Unblock a client calling the right function depending on the kind
* of operation the client is blocking for. */
void unblockClient(client *c) {

@ -152,12 +177,7 @@ void unblockClient(client *c) {
server.blocked_clients_by_type[c->btype]--;
c->flags &= ~CLIENT_BLOCKED;
c->btype = BLOCKED_NONE;
/* The client may already be into the unblocked list because of a previous
* blocking operation, don't add back it into the list multiple times. */
if (!(c->flags & CLIENT_UNBLOCKED)) {
c->flags |= CLIENT_UNBLOCKED;
listAddNodeTail(server.unblocked_clients,c);
}
queueClientForReprocessing(c);
}

/* This function gets called when a blocked client timed out in order to

@ -167,7 +187,7 @@ void replyToBlockedClientTimedOut(client *c) {
if (c->btype == BLOCKED_LIST ||
c->btype == BLOCKED_ZSET ||
c->btype == BLOCKED_STREAM) {
addReply(c,shared.nullmultibulk);
addReplyNullArray(c);
} else if (c->btype == BLOCKED_WAIT) {
addReplyLongLong(c,replicationCountAcksByOffset(c->bpop.reploffset));
} else if (c->btype == BLOCKED_MODULE) {

@ -195,7 +215,7 @@ void disconnectAllBlockedClients(void) {
if (c->flags & CLIENT_BLOCKED) {
addReplySds(c,sdsnew(
"-UNBLOCKED force unblock from blocking operation, "
"instance state changed (master -> slave?)\r\n"));
"instance state changed (master -> replica?)\r\n"));
unblockClient(c);
c->flags |= CLIENT_CLOSE_AFTER_REPLY;
}

@ -269,7 +289,7 @@ void handleClientsBlockedOnKeys(void) {
robj *dstkey = receiver->bpop.target;
int where = (receiver->lastcmd &&
receiver->lastcmd->proc == blpopCommand) ?
LIST_HEAD : LIST_TAIL;
LIST_HEAD : LIST_TAIL;
robj *value = listTypePop(o,where);

if (value) {

@ -285,7 +305,7 @@ void handleClientsBlockedOnKeys(void) {
{
/* If we failed serving the client we need
* to also undo the POP operation. */
listTypePush(o,value,where);
listTypePush(o,value,where);
}

if (dstkey) decrRefCount(dstkey);

@ -416,8 +436,12 @@ void handleClientsBlockedOnKeys(void) {
* the name of the stream and the data we
* extracted from it. Wrapped in a single-item
* array, since we have just one key. */
addReplyMultiBulkLen(receiver,1);
addReplyMultiBulkLen(receiver,2);
if (receiver->resp == 2) {
addReplyArrayLen(receiver,1);
addReplyArrayLen(receiver,2);
} else {
addReplyMapLen(receiver,1);
}
addReplyBulk(receiver,rl->key);

streamPropInfo pi = {
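The blocked.c hunks above factor the "queue once" logic into queueClientForReprocessing(), guarded by a flag so a client is never appended to the unblocked list twice. A simplified, self-contained sketch of that guard follows; the types and names are stand-ins, not Redis structures.

#include <stddef.h>

#define FLAG_UNBLOCKED (1 << 0)

struct fakeClient { int flags; struct fakeClient *next; };
static struct fakeClient *unblocked_head = NULL;

void queueForReprocessing(struct fakeClient *c) {
    if (c->flags & FLAG_UNBLOCKED) return;  /* already queued, nothing to do */
    c->flags |= FLAG_UNBLOCKED;
    c->next = unblocked_head;               /* LIFO here; Redis appends to a list tail */
    unblocked_head = c;
}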
@ -1230,7 +1230,7 @@ void clearNodeFailureIfNeeded(clusterNode *node) {
serverLog(LL_NOTICE,
"Clear FAIL state for node %.40s: %s is reachable again.",
node->name,
nodeIsSlave(node) ? "slave" : "master without slots");
nodeIsSlave(node) ? "replica" : "master without slots");
node->flags &= ~CLUSTER_NODE_FAIL;
clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|CLUSTER_TODO_SAVE_CONFIG);
}

@ -1589,6 +1589,12 @@ void clusterUpdateSlotsConfigWith(clusterNode *sender, uint64_t senderConfigEpoc
}
}

/* After updating the slots configuration, don't do any actual change
* in the state of the server if a module disabled Redis Cluster
* keys redirections. */
if (server.cluster_module_flags & CLUSTER_MODULE_FLAG_NO_REDIRECTION)
return;

/* If at least one slot was reassigned from a node to another node
* with a greater configEpoch, it is possible that:
* 1) We are a master left without slots. This means that we were

@ -2059,7 +2065,7 @@ int clusterProcessPacket(clusterLink *link) {
server.cluster->mf_end = mstime() + CLUSTER_MF_TIMEOUT;
server.cluster->mf_slave = sender;
pauseClients(mstime()+(CLUSTER_MF_TIMEOUT*2));
serverLog(LL_WARNING,"Manual failover requested by slave %.40s.",
serverLog(LL_WARNING,"Manual failover requested by replica %.40s.",
sender->name);
} else if (type == CLUSTERMSG_TYPE_UPDATE) {
clusterNode *n; /* The node the update is about. */

@ -2873,7 +2879,7 @@ void clusterLogCantFailover(int reason) {
switch(reason) {
case CLUSTER_CANT_FAILOVER_DATA_AGE:
msg = "Disconnected from master for longer than allowed. "
"Please check the 'cluster-slave-validity-factor' configuration "
"Please check the 'cluster-replica-validity-factor' configuration "
"option.";
break;
case CLUSTER_CANT_FAILOVER_WAITING_DELAY:

@ -3054,7 +3060,7 @@ void clusterHandleSlaveFailover(void) {
server.cluster->failover_auth_time += added_delay;
server.cluster->failover_auth_rank = newrank;
serverLog(LL_WARNING,
"Slave rank updated to #%d, added %lld milliseconds of delay.",
"Replica rank updated to #%d, added %lld milliseconds of delay.",
newrank, added_delay);
}
}

@ -3210,7 +3216,8 @@ void clusterHandleSlaveMigration(int max_slaves) {
* the natural slaves of this instance to advertise their switch from
* the old master to the new one. */
if (target && candidate == myself &&
(mstime()-target->orphaned_time) > CLUSTER_SLAVE_MIGRATION_DELAY)
(mstime()-target->orphaned_time) > CLUSTER_SLAVE_MIGRATION_DELAY &&
!(server.cluster_module_flags & CLUSTER_MODULE_FLAG_NO_FAILOVER))
{
serverLog(LL_WARNING,"Migrating to orphaned master %.40s",
target->name);

@ -3321,14 +3328,18 @@ void clusterCron(void) {
int changed = 0;

if (prev_ip == NULL && curr_ip != NULL) changed = 1;
if (prev_ip != NULL && curr_ip == NULL) changed = 1;
if (prev_ip && curr_ip && strcmp(prev_ip,curr_ip)) changed = 1;
else if (prev_ip != NULL && curr_ip == NULL) changed = 1;
else if (prev_ip && curr_ip && strcmp(prev_ip,curr_ip)) changed = 1;

if (changed) {
if (prev_ip) zfree(prev_ip);
prev_ip = curr_ip;
if (prev_ip) prev_ip = zstrdup(prev_ip);

if (curr_ip) {
/* We always take a copy of the previous IP address, by
* duplicating the string. This way later we can check if
* the address really changed. */
prev_ip = zstrdup(prev_ip);
strncpy(myself->ip,server.cluster_announce_ip,NET_IP_STR_LEN);
myself->ip[NET_IP_STR_LEN-1] = '\0';
} else {

@ -3559,7 +3570,8 @@ void clusterCron(void) {

if (nodeIsSlave(myself)) {
clusterHandleManualFailover();
clusterHandleSlaveFailover();
if (!(server.cluster_module_flags & CLUSTER_MODULE_FLAG_NO_FAILOVER))
clusterHandleSlaveFailover();
/* If there are orphaned slaves, and we are a slave among the masters
* with the max number of non-failing slaves, consider migrating to
* the orphaned masters. Note that it does not make sense to try

@ -3865,6 +3877,11 @@ int verifyClusterConfigWithData(void) {
int j;
int update_config = 0;

/* Return ASAP if a module disabled cluster redirections. In that case
* every master can store keys about every possible hash slot. */
if (server.cluster_module_flags & CLUSTER_MODULE_FLAG_NO_REDIRECTION)
return C_OK;

/* If this node is a slave, don't perform the check at all as we
* completely depend on the replication stream. */
if (nodeIsSlave(myself)) return C_OK;

@ -4109,7 +4126,7 @@ void clusterReplyMultiBulkSlots(client *c) {
*/

int num_masters = 0;
void *slot_replylen = addDeferredMultiBulkLength(c);
void *slot_replylen = addReplyDeferredLen(c);

dictEntry *de;
dictIterator *di = dictGetSafeIterator(server.cluster->nodes);

@ -4129,7 +4146,7 @@ void clusterReplyMultiBulkSlots(client *c) {
}
if (start != -1 && (!bit || j == CLUSTER_SLOTS-1)) {
int nested_elements = 3; /* slots (2) + master addr (1). */
void *nested_replylen = addDeferredMultiBulkLength(c);
void *nested_replylen = addReplyDeferredLen(c);

if (bit && j == CLUSTER_SLOTS-1) j++;

@ -4145,7 +4162,7 @@ void clusterReplyMultiBulkSlots(client *c) {
start = -1;

/* First node reply position is always the master */
addReplyMultiBulkLen(c, 3);
addReplyArrayLen(c, 3);
addReplyBulkCString(c, node->ip);
addReplyLongLong(c, node->port);
addReplyBulkCBuffer(c, node->name, CLUSTER_NAMELEN);

@ -4155,19 +4172,19 @@ void clusterReplyMultiBulkSlots(client *c) {
/* This loop is copy/pasted from clusterGenNodeDescription()
* with modifications for per-slot node aggregation */
if (nodeFailed(node->slaves[i])) continue;
addReplyMultiBulkLen(c, 3);
addReplyArrayLen(c, 3);
addReplyBulkCString(c, node->slaves[i]->ip);
addReplyLongLong(c, node->slaves[i]->port);
addReplyBulkCBuffer(c, node->slaves[i]->name, CLUSTER_NAMELEN);
nested_elements++;
}
setDeferredMultiBulkLength(c, nested_replylen, nested_elements);
setDeferredArrayLen(c, nested_replylen, nested_elements);
num_masters++;
}
}
}
dictReleaseIterator(di);
setDeferredMultiBulkLength(c, slot_replylen, num_masters);
setDeferredArrayLen(c, slot_replylen, num_masters);
}

void clusterCommand(client *c) {

@ -4183,7 +4200,7 @@ void clusterCommand(client *c) {
"COUNT-failure-reports <node-id> -- Return number of failure reports for <node-id>.",
"COUNTKEYSINSLOT <slot> - Return the number of keys in <slot>.",
"DELSLOTS <slot> [slot ...] -- Delete slots information from current node.",
"FAILOVER [force|takeover] -- Promote current slave node to being a master.",
"FAILOVER [force|takeover] -- Promote current replica node to being a master.",
"FORGET <node-id> -- Remove a node from the cluster.",
"GETKEYSINSLOT <slot> <count> -- Return key names stored by current node in a slot.",
"FLUSHSLOTS -- Delete current node own slots information.",

@ -4193,11 +4210,11 @@ void clusterCommand(client *c) {
"MYID -- Return the node id.",
"NODES -- Return cluster configuration seen by node. Output format:",
" <id> <ip:port> <flags> <master> <pings> <pongs> <epoch> <link> <slot> ... <slot>",
"REPLICATE <node-id> -- Configure current node as slave to <node-id>.",
"REPLICATE <node-id> -- Configure current node as replica to <node-id>.",
"RESET [hard|soft] -- Reset current node (default: soft).",
"SET-config-epoch <epoch> - Set config epoch of current node.",
"SETSLOT <slot> (importing|migrating|stable|node <node-id>) -- Set slot state.",
"SLAVES <node-id> -- Return <node-id> slaves.",
"REPLICAS <node-id> -- Return <node-id> replicas.",
"SLOTS -- Return information about slots range mappings. Each range is made of:",
" start, end, master and replicas IP addresses, ports and ids",
NULL

@ -4531,7 +4548,7 @@ NULL

keys = zmalloc(sizeof(robj*)*maxkeys);
numkeys = getKeysInSlot(slot, keys, maxkeys);
addReplyMultiBulkLen(c,numkeys);
addReplyArrayLen(c,numkeys);
for (j = 0; j < numkeys; j++) {
addReplyBulk(c,keys[j]);
decrRefCount(keys[j]);

@ -4574,7 +4591,7 @@ NULL

/* Can't replicate a slave. */
if (nodeIsSlave(n)) {
addReplyError(c,"I can only replicate a master, not a slave.");
addReplyError(c,"I can only replicate a master, not a replica.");
return;
}

@ -4593,7 +4610,8 @@ NULL
clusterSetMaster(n);
clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|CLUSTER_TODO_SAVE_CONFIG);
addReply(c,shared.ok);
} else if (!strcasecmp(c->argv[1]->ptr,"slaves") && c->argc == 3) {
} else if ((!strcasecmp(c->argv[1]->ptr,"slaves") ||
!strcasecmp(c->argv[1]->ptr,"replicas")) && c->argc == 3) {
/* CLUSTER SLAVES <NODE ID> */
clusterNode *n = clusterLookupNode(c->argv[2]->ptr);
int j;

@ -4609,7 +4627,7 @@ NULL
return;
}

addReplyMultiBulkLen(c,n->numslaves);
addReplyArrayLen(c,n->numslaves);
for (j = 0; j < n->numslaves; j++) {
sds ni = clusterGenNodeDescription(n->slaves[j]);
addReplyBulkCString(c,ni);

@ -4647,10 +4665,10 @@ NULL

/* Check preconditions. */
if (nodeIsMaster(myself)) {
addReplyError(c,"You should send CLUSTER FAILOVER to a slave");
addReplyError(c,"You should send CLUSTER FAILOVER to a replica");
return;
} else if (myself->slaveof == NULL) {
addReplyError(c,"I'm a slave but my master is unknown to me");
addReplyError(c,"I'm a replica but my master is unknown to me");
return;
} else if (!force &&
(nodeFailed(myself->slaveof) ||

@ -4818,7 +4836,7 @@ void dumpCommand(client *c) {

/* Check if the key is here. */
if ((o = lookupKeyRead(c->db,c->argv[1])) == NULL) {
addReply(c,shared.nullbulk);
addReplyNull(c);
return;
}

@ -5146,6 +5164,11 @@ try_again:
serverAssertWithInfo(c,NULL,rioWriteBulkLongLong(&cmd,dbid));
}

int non_expired = 0; /* Number of keys that we'll find non expired.
Note that serializing large keys may take some time
so certain keys that were found non expired by the
lookupKey() function, may be expired later. */

/* Create RESTORE payload and generate the protocol to call the command. */
for (j = 0; j < num_keys; j++) {
long long ttl = 0;

@ -5153,8 +5176,17 @@ try_again:

if (expireat != -1) {
ttl = expireat-mstime();
if (ttl < 0) {
continue;
}
if (ttl < 1) ttl = 1;
}

/* Relocate valid (non expired) keys into the array in successive
* positions to remove holes created by the keys that were present
* in the first lookup but are now expired after the second lookup. */
kv[non_expired++] = kv[j];

serverAssertWithInfo(c,NULL,
rioWriteBulkCount(&cmd,'*',replace ? 5 : 4));

@ -5182,6 +5214,9 @@ try_again:
serverAssertWithInfo(c,NULL,rioWriteBulkString(&cmd,"REPLACE",7));
}

/* Fix the actual number of keys we are migrating. */
num_keys = non_expired;

/* Transfer the query to the other node in 64K chunks. */
errno = 0;
{

@ -5217,6 +5252,10 @@ try_again:
int socket_error = 0;
int del_idx = 1; /* Index of the key argument for the replicated DEL op. */

/* Allocate the new argument vector that will replace the current command,
* to propagate the MIGRATE as a DEL command (if no COPY option was given).
* We allocate num_keys+1 because the additional argument is for "DEL"
* command name itself. */
if (!copy) newargv = zmalloc(sizeof(robj*)*(num_keys+1));

for (j = 0; j < num_keys; j++) {

@ -5417,9 +5456,17 @@ clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, in
multiCmd mc;
int i, slot = 0, migrating_slot = 0, importing_slot = 0, missing_keys = 0;

/* Allow any key to be set if a module disabled cluster redirections. */
if (server.cluster_module_flags & CLUSTER_MODULE_FLAG_NO_REDIRECTION)
return myself;

/* Set error code optimistically for the base case. */
if (error_code) *error_code = CLUSTER_REDIR_NONE;

/* Modules can turn off Redis Cluster redirection: this is useful
* when writing a module that implements a completely different
* distributed system. */

/* We handle all the cases as if they were EXEC commands, so we have
* a common code path for everything */
if (cmd->proc == execCommand) {

@ -100,6 +100,13 @@ typedef struct clusterLink {
#define CLUSTERMSG_TYPE_MODULE 9 /* Module cluster API message. */
#define CLUSTERMSG_TYPE_COUNT 10 /* Total number of message types. */

/* Flags that a module can set in order to prevent certain Redis Cluster
* features to be enabled. Useful when implementing a different distributed
* system on top of Redis Cluster message bus, using modules. */
#define CLUSTER_MODULE_FLAG_NONE 0
#define CLUSTER_MODULE_FLAG_NO_FAILOVER (1<<1)
#define CLUSTER_MODULE_FLAG_NO_REDIRECTION (1<<2)

/* This structure represent elements of node->fail_reports. */
typedef struct clusterNodeFailReport {
struct clusterNode *node; /* Node reporting the failure condition. */
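The cluster.c and cluster.h hunks above introduce the CLUSTER_MODULE_FLAG_* bits and test them with a mask before failover and redirection logic. A standalone illustration of that bit-flag check follows; the flag values are copied from the cluster.h hunk, while the helper and the surrounding program are hypothetical.

#include <stdio.h>
#include <stdint.h>

#define CLUSTER_MODULE_FLAG_NONE            0
#define CLUSTER_MODULE_FLAG_NO_FAILOVER     (1<<1)
#define CLUSTER_MODULE_FLAG_NO_REDIRECTION  (1<<2)

static int redirectionDisabled(uint64_t module_flags) {
    return (module_flags & CLUSTER_MODULE_FLAG_NO_REDIRECTION) != 0;
}

int main(void) {
    uint64_t flags = CLUSTER_MODULE_FLAG_NO_FAILOVER;
    printf("redirection disabled: %d\n", redirectionDisabled(flags));
    flags |= CLUSTER_MODULE_FLAG_NO_REDIRECTION;
    printf("redirection disabled: %d\n", redirectionDisabled(flags));
    return 0;
}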
323
src/config.c
@ -120,7 +120,7 @@ const char *configEnumGetName(configEnum *ce, int val) {
return NULL;
}

/* Wrapper for configEnumGetName() returning "unknown" insetad of NULL if
/* Wrapper for configEnumGetName() returning "unknown" instead of NULL if
* there is no match. */
const char *configEnumGetNameOrUnknown(configEnum *ce, int val) {
const char *name = configEnumGetName(ce,val);

@ -216,6 +216,10 @@ void loadServerConfigFromString(char *config) {
if ((server.protected_mode = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;
}
} else if (!strcasecmp(argv[0],"gopher-enabled") && argc == 2) {
if ((server.gopher_enabled = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;
}
} else if (!strcasecmp(argv[0],"port") && argc == 2) {
server.port = atoi(argv[1]);
if (server.port < 0 || server.port > 65535) {

@ -283,6 +287,9 @@ void loadServerConfigFromString(char *config) {
}
fclose(logfp);
}
} else if (!strcasecmp(argv[0],"aclfile") && argc == 2) {
zfree(server.acl_filename);
server.acl_filename = zstrdup(argv[1]);
} else if (!strcasecmp(argv[0],"always-show-logo") && argc == 2) {
if ((server.always_show_logo = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;

@ -344,15 +351,19 @@ void loadServerConfigFromString(char *config) {
err = "lfu-decay-time must be 0 or greater";
goto loaderr;
}
} else if (!strcasecmp(argv[0],"slaveof") && argc == 3) {
} else if ((!strcasecmp(argv[0],"slaveof") ||
!strcasecmp(argv[0],"replicaof")) && argc == 3) {
slaveof_linenum = linenum;
server.masterhost = sdsnew(argv[1]);
server.masterport = atoi(argv[2]);
server.repl_state = REPL_STATE_CONNECT;
} else if (!strcasecmp(argv[0],"repl-ping-slave-period") && argc == 2) {
} else if ((!strcasecmp(argv[0],"repl-ping-slave-period") ||
!strcasecmp(argv[0],"repl-ping-replica-period")) &&
argc == 2)
{
server.repl_ping_slave_period = atoi(argv[1]);
if (server.repl_ping_slave_period <= 0) {
err = "repl-ping-slave-period must be 1 or greater";
err = "repl-ping-replica-period must be 1 or greater";
goto loaderr;
}
} else if (!strcasecmp(argv[0],"repl-timeout") && argc == 2) {

@ -388,17 +399,33 @@ void loadServerConfigFromString(char *config) {
err = "repl-backlog-ttl can't be negative ";
goto loaderr;
}
} else if (!strcasecmp(argv[0],"masteruser") && argc == 2) {
zfree(server.masteruser);
server.masteruser = argv[1][0] ? zstrdup(argv[1]) : NULL;
} else if (!strcasecmp(argv[0],"masterauth") && argc == 2) {
zfree(server.masterauth);
server.masterauth = argv[1][0] ? zstrdup(argv[1]) : NULL;
} else if (!strcasecmp(argv[0],"slave-serve-stale-data") && argc == 2) {
} else if ((!strcasecmp(argv[0],"slave-serve-stale-data") ||
!strcasecmp(argv[0],"replica-serve-stale-data"))
&& argc == 2)
{
if ((server.repl_serve_stale_data = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;
}
} else if (!strcasecmp(argv[0],"slave-read-only") && argc == 2) {
} else if ((!strcasecmp(argv[0],"slave-read-only") ||
!strcasecmp(argv[0],"replica-read-only"))
&& argc == 2)
{
if ((server.repl_slave_ro = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;
}
} else if ((!strcasecmp(argv[0],"slave-ignore-maxmemory") ||
!strcasecmp(argv[0],"replica-ignore-maxmemory"))
&& argc == 2)
{
if ((server.repl_slave_ignore_maxmemory = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;
}
} else if (!strcasecmp(argv[0],"rdbcompression") && argc == 2) {
if ((server.rdb_compression = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;

@ -423,7 +450,9 @@ void loadServerConfigFromString(char *config) {
if ((server.lazyfree_lazy_server_del = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;
}
} else if (!strcasecmp(argv[0],"slave-lazy-flush") && argc == 2) {
} else if ((!strcasecmp(argv[0],"slave-lazy-flush") ||
!strcasecmp(argv[0],"replica-lazy-flush")) && argc == 2)
{
if ((server.repl_slave_lazy_flush = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;
}

@ -440,10 +469,14 @@ void loadServerConfigFromString(char *config) {
if ((server.daemonize = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;
}
} else if (!strcasecmp(argv[0],"dynamic-hz") && argc == 2) {
if ((server.dynamic_hz = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;
}
} else if (!strcasecmp(argv[0],"hz") && argc == 2) {
server.hz = atoi(argv[1]);
if (server.hz < CONFIG_MIN_HZ) server.hz = CONFIG_MIN_HZ;
if (server.hz > CONFIG_MAX_HZ) server.hz = CONFIG_MAX_HZ;
server.config_hz = atoi(argv[1]);
if (server.config_hz < CONFIG_MIN_HZ) server.config_hz = CONFIG_MIN_HZ;
if (server.config_hz > CONFIG_MAX_HZ) server.config_hz = CONFIG_MAX_HZ;
} else if (!strcasecmp(argv[0],"appendonly") && argc == 2) {
int yes;

@ -508,7 +541,12 @@ void loadServerConfigFromString(char *config) {
err = "Password is longer than CONFIG_AUTHPASS_MAX_LEN";
goto loaderr;
}
server.requirepass = argv[1][0] ? zstrdup(argv[1]) : NULL;
/* The old "requirepass" directive just translates to setting
* a password to the default user. */
ACLSetUser(DefaultUser,"resetpass",-1);
sds aclop = sdscatprintf(sdsempty(),">%s",argv[1]);
ACLSetUser(DefaultUser,aclop,sdslen(aclop));
sdsfree(aclop);
} else if (!strcasecmp(argv[0],"pidfile") && argc == 2) {
zfree(server.pidfile);
server.pidfile = zstrdup(argv[1]);

@ -651,15 +689,17 @@ void loadServerConfigFromString(char *config) {
err = "cluster migration barrier must zero or positive";
goto loaderr;
}
} else if (!strcasecmp(argv[0],"cluster-slave-validity-factor")
} else if ((!strcasecmp(argv[0],"cluster-slave-validity-factor") ||
!strcasecmp(argv[0],"cluster-replica-validity-factor"))
&& argc == 2)
{
server.cluster_slave_validity_factor = atoi(argv[1]);
if (server.cluster_slave_validity_factor < 0) {
err = "cluster slave validity factor must be zero or positive";
err = "cluster replica validity factor must be zero or positive";
goto loaderr;
}
} else if (!strcasecmp(argv[0],"cluster-slave-no-failover") &&
} else if ((!strcasecmp(argv[0],"cluster-slave-no-failover") ||
!strcasecmp(argv[0],"cluster-replica-no-failover")) &&
argc == 2)
{
server.cluster_slave_no_failover = yesnotoi(argv[1]);

@ -669,6 +709,8 @@ void loadServerConfigFromString(char *config) {
}
} else if (!strcasecmp(argv[0],"lua-time-limit") && argc == 2) {
server.lua_time_limit = strtoll(argv[1],NULL,10);
} else if (!strcasecmp(argv[0],"lua-replicate-commands") && argc == 2) {
server.lua_always_replicate_commands = yesnotoi(argv[1]);
} else if (!strcasecmp(argv[0],"slowlog-log-slower-than") &&
argc == 2)
{

@ -710,27 +752,37 @@ void loadServerConfigFromString(char *config) {
if ((server.stop_writes_on_bgsave_err = yesnotoi(argv[1])) == -1) {
err = "argument must be 'yes' or 'no'"; goto loaderr;
}
} else if (!strcasecmp(argv[0],"slave-priority") && argc == 2) {
} else if ((!strcasecmp(argv[0],"slave-priority") ||
!strcasecmp(argv[0],"replica-priority")) && argc == 2)
{
server.slave_priority = atoi(argv[1]);
} else if (!strcasecmp(argv[0],"slave-announce-ip") && argc == 2) {
} else if ((!strcasecmp(argv[0],"slave-announce-ip") ||
!strcasecmp(argv[0],"replica-announce-ip")) && argc == 2)
{
zfree(server.slave_announce_ip);
server.slave_announce_ip = zstrdup(argv[1]);
} else if (!strcasecmp(argv[0],"slave-announce-port") && argc == 2) {
} else if ((!strcasecmp(argv[0],"slave-announce-port") ||
!strcasecmp(argv[0],"replica-announce-port")) && argc == 2)
{
server.slave_announce_port = atoi(argv[1]);
if (server.slave_announce_port < 0 ||
server.slave_announce_port > 65535)
{
err = "Invalid port"; goto loaderr;
}
} else if (!strcasecmp(argv[0],"min-slaves-to-write") && argc == 2) {
} else if ((!strcasecmp(argv[0],"min-slaves-to-write") ||
!strcasecmp(argv[0],"min-replicas-to-write")) && argc == 2)
{
server.repl_min_slaves_to_write = atoi(argv[1]);
if (server.repl_min_slaves_to_write < 0) {
err = "Invalid value for min-slaves-to-write."; goto loaderr;
err = "Invalid value for min-replicas-to-write."; goto loaderr;
}
} else if (!strcasecmp(argv[0],"min-slaves-max-lag") && argc == 2) {
} else if ((!strcasecmp(argv[0],"min-slaves-max-lag") ||
!strcasecmp(argv[0],"min-replicas-max-lag")) && argc == 2)
{
server.repl_min_slaves_max_lag = atoi(argv[1]);
if (server.repl_min_slaves_max_lag < 0) {
err = "Invalid value for min-slaves-max-lag."; goto loaderr;
err = "Invalid value for min-replicas-max-lag."; goto loaderr;
}
} else if (!strcasecmp(argv[0],"notify-keyspace-events") && argc == 2) {
int flags = keyspaceEventsStringToFlags(argv[1]);

@ -749,6 +801,16 @@ void loadServerConfigFromString(char *config) {
"Allowed values: 'upstart', 'systemd', 'auto', or 'no'";
goto loaderr;
}
} else if (!strcasecmp(argv[0],"user") && argc >= 2) {
int argc_err;
if (ACLAppendUserForLoading(argv,argc,&argc_err) == C_ERR) {
char buf[1024];
char *errmsg = ACLSetUserStringError();
snprintf(buf,sizeof(buf),"Error in user declaration '%s': %s",
argv[argc_err],errmsg);
err = buf;
goto loaderr;
}
} else if (!strcasecmp(argv[0],"loadmodule") && argc >= 2) {
queueLoadModule(argv[1],&argv[2],argc-2);
} else if (!strcasecmp(argv[0],"sentinel")) {

@ -772,7 +834,7 @@ void loadServerConfigFromString(char *config) {
if (server.cluster_enabled && server.masterhost) {
linenum = slaveof_linenum;
i = linenum-1;
err = "slaveof directive not allowed in cluster mode";
err = "replicaof directive not allowed in cluster mode";
goto loaderr;
}
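Every config-loading hunk above repeats the same pattern: an old "slave-*" directive is also accepted under its new "replica-*" spelling and stored in the same server field. A hypothetical helper condensing that pattern follows; it is a sketch only and not part of the commit.

#include <strings.h>

static int configNameIs(const char *arg, const char *oldname, const char *newname) {
    return !strcasecmp(arg, oldname) || !strcasecmp(arg, newname);
}

/* Usage sketch inside the parser:
 *
 *   } else if (configNameIs(argv[0],"slave-read-only","replica-read-only")
 *              && argc == 2) {
 *       ...
 */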
@ -856,6 +918,10 @@ void loadServerConfig(char *filename, char *options) {
|
||||
#define config_set_special_field(_name) \
|
||||
} else if (!strcasecmp(c->argv[2]->ptr,_name)) {
|
||||
|
||||
#define config_set_special_field_with_alias(_name1,_name2) \
|
||||
} else if (!strcasecmp(c->argv[2]->ptr,_name1) || \
|
||||
!strcasecmp(c->argv[2]->ptr,_name2)) {
|
||||
|
||||
#define config_set_else } else
|
||||
|
||||
void configSetCommand(client *c) {
|
||||
@ -878,8 +944,15 @@ void configSetCommand(client *c) {
|
||||
server.rdb_filename = zstrdup(o->ptr);
|
||||
} config_set_special_field("requirepass") {
|
||||
if (sdslen(o->ptr) > CONFIG_AUTHPASS_MAX_LEN) goto badfmt;
|
||||
zfree(server.requirepass);
|
||||
server.requirepass = ((char*)o->ptr)[0] ? zstrdup(o->ptr) : NULL;
|
||||
/* The old "requirepass" directive just translates to setting
|
||||
* a password to the default user. */
|
||||
ACLSetUser(DefaultUser,"resetpass",-1);
|
||||
sds aclop = sdscatprintf(sdsempty(),">%s",(char*)o->ptr);
|
||||
ACLSetUser(DefaultUser,aclop,sdslen(aclop));
|
||||
sdsfree(aclop);
|
||||
} config_set_special_field("masteruser") {
|
||||
zfree(server.masteruser);
|
||||
server.masteruser = ((char*)o->ptr)[0] ? zstrdup(o->ptr) : NULL;
|
||||
} config_set_special_field("masterauth") {
|
||||
zfree(server.masterauth);
|
||||
server.masterauth = ((char*)o->ptr)[0] ? zstrdup(o->ptr) : NULL;
|
||||
@ -1015,7 +1088,9 @@ void configSetCommand(client *c) {
|
||||
|
||||
if (flags == -1) goto badfmt;
|
||||
server.notify_keyspace_events = flags;
|
||||
} config_set_special_field("slave-announce-ip") {
|
||||
} config_set_special_field_with_alias("slave-announce-ip",
|
||||
"replica-announce-ip")
|
||||
{
|
||||
zfree(server.slave_announce_ip);
|
||||
server.slave_announce_ip = ((char*)o->ptr)[0] ? zstrdup(o->ptr) : NULL;
|
||||
|
||||
@ -1031,6 +1106,8 @@ void configSetCommand(client *c) {
|
||||
"cluster-require-full-coverage",server.cluster_require_full_coverage) {
|
||||
} config_set_bool_field(
|
||||
"cluster-slave-no-failover",server.cluster_slave_no_failover) {
|
||||
} config_set_bool_field(
|
||||
"cluster-replica-no-failover",server.cluster_slave_no_failover) {
|
||||
} config_set_bool_field(
|
||||
"aof-rewrite-incremental-fsync",server.aof_rewrite_incremental_fsync) {
|
||||
} config_set_bool_field(
|
||||
@ -1041,8 +1118,16 @@ void configSetCommand(client *c) {
|
||||
"aof-use-rdb-preamble",server.aof_use_rdb_preamble) {
|
||||
} config_set_bool_field(
|
||||
"slave-serve-stale-data",server.repl_serve_stale_data) {
|
||||
} config_set_bool_field(
|
||||
"replica-serve-stale-data",server.repl_serve_stale_data) {
|
||||
} config_set_bool_field(
|
||||
"slave-read-only",server.repl_slave_ro) {
|
||||
} config_set_bool_field(
|
||||
"replica-read-only",server.repl_slave_ro) {
|
||||
} config_set_bool_field(
|
||||
"slave-ignore-maxmemory",server.repl_slave_ignore_maxmemory) {
|
||||
} config_set_bool_field(
|
||||
"replica-ignore-maxmemory",server.repl_slave_ignore_maxmemory) {
|
||||
} config_set_bool_field(
|
||||
"activerehashing",server.activerehashing) {
|
||||
} config_set_bool_field(
|
||||
@ -1060,6 +1145,8 @@ void configSetCommand(client *c) {
|
||||
#endif
|
||||
} config_set_bool_field(
|
||||
"protected-mode",server.protected_mode) {
|
||||
} config_set_bool_field(
|
||||
"gopher-enabled",server.gopher_enabled) {
|
||||
} config_set_bool_field(
|
||||
"stop-writes-on-bgsave-error",server.stop_writes_on_bgsave_err) {
|
||||
} config_set_bool_field(
|
||||
@ -1070,8 +1157,12 @@ void configSetCommand(client *c) {
|
||||
"lazyfree-lazy-server-del",server.lazyfree_lazy_server_del) {
|
||||
} config_set_bool_field(
|
||||
"slave-lazy-flush",server.repl_slave_lazy_flush) {
|
||||
} config_set_bool_field(
|
||||
"replica-lazy-flush",server.repl_slave_lazy_flush) {
|
||||
} config_set_bool_field(
|
||||
"no-appendfsync-on-rewrite",server.aof_no_fsync_on_rewrite) {
|
||||
} config_set_bool_field(
|
||||
"dynamic-hz",server.dynamic_hz) {
|
||||
|
||||
/* Numerical fields.
|
||||
* config_set_numerical_field(name,var,min,max) */
|
||||
@ -1131,6 +1222,8 @@ void configSetCommand(client *c) {
|
||||
"latency-monitor-threshold",server.latency_monitor_threshold,0,LLONG_MAX){
|
||||
} config_set_numerical_field(
|
||||
"repl-ping-slave-period",server.repl_ping_slave_period,1,INT_MAX) {
|
||||
} config_set_numerical_field(
|
||||
"repl-ping-replica-period",server.repl_ping_slave_period,1,INT_MAX) {
|
||||
} config_set_numerical_field(
|
||||
"repl-timeout",server.repl_timeout,1,INT_MAX) {
|
||||
} config_set_numerical_field(
|
||||
@ -1139,14 +1232,24 @@ void configSetCommand(client *c) {
|
||||
"repl-diskless-sync-delay",server.repl_diskless_sync_delay,0,INT_MAX) {
|
||||
} config_set_numerical_field(
|
||||
"slave-priority",server.slave_priority,0,INT_MAX) {
|
||||
} config_set_numerical_field(
|
||||
"replica-priority",server.slave_priority,0,INT_MAX) {
|
||||
} config_set_numerical_field(
|
||||
"slave-announce-port",server.slave_announce_port,0,65535) {
|
||||
} config_set_numerical_field(
|
||||
"replica-announce-port",server.slave_announce_port,0,65535) {
|
||||
} config_set_numerical_field(
|
||||
"min-slaves-to-write",server.repl_min_slaves_to_write,0,INT_MAX) {
|
||||
refreshGoodSlavesCount();
|
||||
} config_set_numerical_field(
|
||||
"min-replicas-to-write",server.repl_min_slaves_to_write,0,INT_MAX) {
|
||||
refreshGoodSlavesCount();
|
||||
} config_set_numerical_field(
|
||||
"min-slaves-max-lag",server.repl_min_slaves_max_lag,0,INT_MAX) {
|
||||
refreshGoodSlavesCount();
|
||||
} config_set_numerical_field(
|
||||
"min-replicas-max-lag",server.repl_min_slaves_max_lag,0,INT_MAX) {
|
||||
refreshGoodSlavesCount();
|
||||
} config_set_numerical_field(
|
||||
"cluster-node-timeout",server.cluster_node_timeout,0,LLONG_MAX) {
|
||||
} config_set_numerical_field(
|
||||
@ -1158,11 +1261,13 @@ void configSetCommand(client *c) {
|
||||
} config_set_numerical_field(
|
||||
"cluster-slave-validity-factor",server.cluster_slave_validity_factor,0,INT_MAX) {
|
||||
} config_set_numerical_field(
|
||||
"hz",server.hz,0,INT_MAX) {
|
||||
"cluster-replica-validity-factor",server.cluster_slave_validity_factor,0,INT_MAX) {
|
||||
} config_set_numerical_field(
|
||||
"hz",server.config_hz,0,INT_MAX) {
|
||||
/* Hz is more an hint from the user, so we accept values out of range
|
||||
* but cap them to reasonable values. */
|
||||
if (server.hz < CONFIG_MIN_HZ) server.hz = CONFIG_MIN_HZ;
|
||||
if (server.hz > CONFIG_MAX_HZ) server.hz = CONFIG_MAX_HZ;
|
||||
if (server.config_hz < CONFIG_MIN_HZ) server.config_hz = CONFIG_MIN_HZ;
|
||||
if (server.config_hz > CONFIG_MAX_HZ) server.config_hz = CONFIG_MAX_HZ;
|
||||
} config_set_numerical_field(
|
||||
"watchdog-period",ll,0,INT_MAX) {
|
||||
if (ll)
|
||||
@ -1175,9 +1280,9 @@ void configSetCommand(client *c) {
|
||||
} config_set_memory_field("maxmemory",server.maxmemory) {
|
||||
if (server.maxmemory) {
|
||||
if (server.maxmemory < zmalloc_used_memory()) {
|
||||
serverLog(LL_WARNING,"WARNING: the new maxmemory value set via CONFIG SET is smaller than the current memory usage. This will result in keys eviction and/or inability to accept new write commands depending on the maxmemory-policy.");
|
||||
serverLog(LL_WARNING,"WARNING: the new maxmemory value set via CONFIG SET is smaller than the current memory usage. This will result in key eviction and/or the inability to accept new write commands depending on the maxmemory-policy.");
|
||||
}
|
||||
freeMemoryIfNeeded();
|
||||
freeMemoryIfNeededAndSafe();
|
||||
}
|
||||
} config_set_memory_field(
|
||||
"proto-max-bulk-len",server.proto_max_bulk_len) {
@ -1253,7 +1358,7 @@ badfmt: /* Bad format errors */

void configGetCommand(client *c) {
robj *o = c->argv[2];
void *replylen = addDeferredMultiBulkLength(c);
void *replylen = addReplyDeferredLen(c);
char *pattern = o->ptr;
char buf[128];
int matches = 0;
@ -1261,13 +1366,15 @@ void configGetCommand(client *c) {

/* String values */
config_get_string_field("dbfilename",server.rdb_filename);
config_get_string_field("requirepass",server.requirepass);
config_get_string_field("masteruser",server.masteruser);
config_get_string_field("masterauth",server.masterauth);
config_get_string_field("cluster-announce-ip",server.cluster_announce_ip);
config_get_string_field("unixsocket",server.unixsocket);
config_get_string_field("logfile",server.logfile);
config_get_string_field("aclfile",server.acl_filename);
config_get_string_field("pidfile",server.pidfile);
config_get_string_field("slave-announce-ip",server.slave_announce_ip);
config_get_string_field("replica-announce-ip",server.slave_announce_ip);

/* Numerical values */
config_get_numerical_field("maxmemory",server.maxmemory);
@ -1320,19 +1427,25 @@ void configGetCommand(client *c) {
config_get_numerical_field("tcp-backlog",server.tcp_backlog);
config_get_numerical_field("databases",server.dbnum);
config_get_numerical_field("repl-ping-slave-period",server.repl_ping_slave_period);
config_get_numerical_field("repl-ping-replica-period",server.repl_ping_slave_period);
config_get_numerical_field("repl-timeout",server.repl_timeout);
config_get_numerical_field("repl-backlog-size",server.repl_backlog_size);
config_get_numerical_field("repl-backlog-ttl",server.repl_backlog_time_limit);
config_get_numerical_field("maxclients",server.maxclients);
config_get_numerical_field("watchdog-period",server.watchdog_period);
config_get_numerical_field("slave-priority",server.slave_priority);
config_get_numerical_field("replica-priority",server.slave_priority);
config_get_numerical_field("slave-announce-port",server.slave_announce_port);
config_get_numerical_field("replica-announce-port",server.slave_announce_port);
config_get_numerical_field("min-slaves-to-write",server.repl_min_slaves_to_write);
config_get_numerical_field("min-replicas-to-write",server.repl_min_slaves_to_write);
config_get_numerical_field("min-slaves-max-lag",server.repl_min_slaves_max_lag);
config_get_numerical_field("hz",server.hz);
config_get_numerical_field("min-replicas-max-lag",server.repl_min_slaves_max_lag);
config_get_numerical_field("hz",server.config_hz);
config_get_numerical_field("cluster-node-timeout",server.cluster_node_timeout);
config_get_numerical_field("cluster-migration-barrier",server.cluster_migration_barrier);
config_get_numerical_field("cluster-slave-validity-factor",server.cluster_slave_validity_factor);
config_get_numerical_field("cluster-replica-validity-factor",server.cluster_slave_validity_factor);
config_get_numerical_field("repl-diskless-sync-delay",server.repl_diskless_sync_delay);
config_get_numerical_field("tcp-keepalive",server.tcpkeepalive);

@ -1341,12 +1454,22 @@ void configGetCommand(client *c) {
server.cluster_require_full_coverage);
config_get_bool_field("cluster-slave-no-failover",
server.cluster_slave_no_failover);
config_get_bool_field("cluster-replica-no-failover",
server.cluster_slave_no_failover);
config_get_bool_field("no-appendfsync-on-rewrite",
server.aof_no_fsync_on_rewrite);
config_get_bool_field("slave-serve-stale-data",
server.repl_serve_stale_data);
config_get_bool_field("replica-serve-stale-data",
server.repl_serve_stale_data);
config_get_bool_field("slave-read-only",
server.repl_slave_ro);
config_get_bool_field("replica-read-only",
server.repl_slave_ro);
config_get_bool_field("slave-ignore-maxmemory",
server.repl_slave_ignore_maxmemory);
config_get_bool_field("replica-ignore-maxmemory",
server.repl_slave_ignore_maxmemory);
config_get_bool_field("stop-writes-on-bgsave-error",
server.stop_writes_on_bgsave_err);
config_get_bool_field("daemonize", server.daemonize);
@ -1355,6 +1478,7 @@ void configGetCommand(client *c) {
config_get_bool_field("activerehashing", server.activerehashing);
config_get_bool_field("activedefrag", server.active_defrag_enabled);
config_get_bool_field("protected-mode", server.protected_mode);
config_get_bool_field("gopher-enabled", server.gopher_enabled);
config_get_bool_field("repl-disable-tcp-nodelay",
server.repl_disable_tcp_nodelay);
config_get_bool_field("repl-diskless-sync",
@ -1375,6 +1499,10 @@ void configGetCommand(client *c) {
server.lazyfree_lazy_server_del);
config_get_bool_field("slave-lazy-flush",
server.repl_slave_lazy_flush);
config_get_bool_field("replica-lazy-flush",
server.repl_slave_lazy_flush);
config_get_bool_field("dynamic-hz",
server.dynamic_hz);

/* Enum values */
config_get_enum_field("maxmemory-policy",
@ -1446,10 +1574,14 @@ void configGetCommand(client *c) {
addReplyBulkCString(c,buf);
matches++;
}
if (stringmatch(pattern,"slaveof",1)) {
if (stringmatch(pattern,"slaveof",1) ||
stringmatch(pattern,"replicaof",1))
{
char *optname = stringmatch(pattern,"slaveof",1) ?
"slaveof" : "replicaof";
char buf[256];

addReplyBulkCString(c,"slaveof");
addReplyBulkCString(c,optname);
if (server.masterhost)
snprintf(buf,sizeof(buf),"%s %d",
server.masterhost, server.masterport);
@ -1475,7 +1607,17 @@ void configGetCommand(client *c) {
sdsfree(aux);
matches++;
}
setDeferredMultiBulkLength(c,replylen,matches*2);
if (stringmatch(pattern,"requirepass",1)) {
addReplyBulkCString(c,"requirepass");
sds password = ACLDefaultUserFirstPassword();
if (password) {
addReplyBulkCBuffer(c,password,sdslen(password));
} else {
addReplyBulkCString(c,"");
}
matches++;
}
setDeferredMapLen(c,replylen,matches);
}
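Note: after this change CONFIG GET matches the glob pattern against both the legacy and the new option names, while both names read the same server field, and the deferred reply is finalized as a RESP3 map (one count per name/value pair, hence matches instead of matches*2). A hedged illustration of the dual-name emission; emit_aliased_field() is a made-up helper standing in for the config_get_* macros:

    static void emit_aliased_field(client *c, const char *pattern,
                                   const char *oldname, const char *newname,
                                   long long value, int *matches) {
        if (stringmatch(pattern, oldname, 1)) {
            addReplyBulkCString(c, oldname);
            addReplyBulkLongLong(c, value);
            (*matches)++;
        }
        if (stringmatch(pattern, newname, 1)) {
            addReplyBulkCString(c, newname);
            addReplyBulkLongLong(c, value);
            (*matches)++;
        }
    }
    /* e.g. emit_aliased_field(c, pattern, "slave-priority",
     *          "replica-priority", server.slave_priority, &matches); */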

/*-----------------------------------------------------------------------------
@ -1605,8 +1747,20 @@ struct rewriteConfigState *rewriteConfigReadOldFile(char *path) {
/* Now we populate the state according to the content of this line.
* Append the line and populate the option -> line numbers map. */
rewriteConfigAppendLine(state,line);
rewriteConfigAddLineNumberToOption(state,argv[0],linenum);

/* Translate options using the word "slave" to the corresponding name
* "replica", before adding such option to the config name -> lines
* mapping. */
char *p = strstr(argv[0],"slave");
if (p) {
sds alt = sdsempty();
alt = sdscatlen(alt,argv[0],p-argv[0]);;
alt = sdscatlen(alt,"replica",7);
alt = sdscatlen(alt,p+5,strlen(p+5));
sdsfree(argv[0]);
argv[0] = alt;
}
rewriteConfigAddLineNumberToOption(state,argv[0],linenum);
sdsfreesplitres(argv,argc);
}
fclose(fp);
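Note: while reading the old configuration file for CONFIG REWRITE, every option whose name contains "slave" is re-registered under its "replica" spelling, so the rewrite machinery later finds the line no matter which name the file used. A rough standalone equivalent of that string surgery, using plain libc instead of sds (purely illustrative):

    #include <stdlib.h>
    #include <string.h>

    char *option_with_replica_name(const char *opt) {
        const char *p = strstr(opt, "slave");
        if (p == NULL) return NULL;                 /* nothing to translate */
        size_t prefix = (size_t)(p - opt);
        char *alt = malloc(prefix + 7 + strlen(p + 5) + 1);
        if (alt == NULL) return NULL;
        memcpy(alt, opt, prefix);                   /* keep text before "slave" */
        memcpy(alt + prefix, "replica", 7);         /* swap the word */
        strcpy(alt + prefix + 7, p + 5);            /* keep text after "slave" */
        return alt;   /* "repl-ping-slave-period" -> "repl-ping-replica-period" */
    }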
@ -1781,6 +1935,38 @@ void rewriteConfigSaveOption(struct rewriteConfigState *state) {
rewriteConfigMarkAsProcessed(state,"save");
}

/* Rewrite the user option. */
void rewriteConfigUserOption(struct rewriteConfigState *state) {
/* If there is a user file defined we just mark this configuration
* directive as processed, so that all the lines containing users
* inside the config file gets discarded. */
if (server.acl_filename[0] != '\0') {
rewriteConfigMarkAsProcessed(state,"user");
return;
}

/* Otherwise scan the list of users and rewrite every line. Note that
* in case the list here is empty, the effect will just be to comment
* all the users directive inside the config file. */
raxIterator ri;
raxStart(&ri,Users);
raxSeek(&ri,"^",NULL,0);
while(raxNext(&ri)) {
user *u = ri.data;
sds line = sdsnew("user ");
line = sdscatsds(line,u->name);
line = sdscatlen(line," ",1);
sds descr = ACLDescribeUser(u);
line = sdscatsds(line,descr);
sdsfree(descr);
rewriteConfigRewriteLine(state,"user",line,1);
}
raxStop(&ri);

/* Mark "user" as processed in case there are no defined users. */
rewriteConfigMarkAsProcessed(state,"user");
}
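Note: the new rewriteConfigUserOption() emits one "user <name> <rules>" line per defined ACL user (or nothing at all when an external aclfile is configured). A minimal sketch of that emission with a plain array standing in for the Users rax and a fake rules string in place of ACLDescribeUser():

    #include <stdio.h>

    struct fake_user { const char *name; const char *rules; };

    void rewrite_user_lines(const struct fake_user *users, int count,
                            void (*emit)(const char *line)) {
        char line[256];
        for (int i = 0; i < count; i++) {
            /* ACLDescribeUser() would produce something like
             * "on >secret ~cached:* +get +set" for a real user. */
            snprintf(line, sizeof(line), "user %s %s",
                     users[i].name, users[i].rules);
            emit(line);
        }
    }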

/* Rewrite the dir option, always using absolute paths.*/
void rewriteConfigDirOption(struct rewriteConfigState *state) {
char cwd[1024];
@ -1793,15 +1979,14 @@ void rewriteConfigDirOption(struct rewriteConfigState *state) {
}

/* Rewrite the slaveof option. */
void rewriteConfigSlaveofOption(struct rewriteConfigState *state) {
char *option = "slaveof";
void rewriteConfigSlaveofOption(struct rewriteConfigState *state, char *option) {
sds line;

/* If this is a master, we want all the slaveof config options
* in the file to be removed. Note that if this is a cluster instance
* we don't want a slaveof directive inside redis.conf. */
if (server.cluster_enabled || server.masterhost == NULL) {
rewriteConfigMarkAsProcessed(state,"slaveof");
rewriteConfigMarkAsProcessed(state,option);
return;
}
line = sdscatprintf(sdsempty(),"%s %s %d", option,
@ -1843,8 +2028,10 @@ void rewriteConfigClientoutputbufferlimitOption(struct rewriteConfigState *state
rewriteConfigFormatMemory(soft,sizeof(soft),
server.client_obuf_limits[j].soft_limit_bytes);

char *typename = getClientTypeName(j);
if (!strcmp(typename,"slave")) typename = "replica";
line = sdscatprintf(sdsempty(),"%s %s %s %s %ld",
option, getClientTypeName(j), hard, soft,
option, typename, hard, soft,
(long) server.client_obuf_limits[j].soft_limit_seconds);
rewriteConfigRewriteLine(state,option,line,force);
}
@ -1872,6 +2059,26 @@ void rewriteConfigBindOption(struct rewriteConfigState *state) {
rewriteConfigRewriteLine(state,option,line,force);
}

/* Rewrite the requirepass option. */
void rewriteConfigRequirepassOption(struct rewriteConfigState *state, char *option) {
int force = 1;
sds line;
sds password = ACLDefaultUserFirstPassword();

/* If there is no password set, we don't want the requirepass option
* to be present in the configuration at all. */
if (password == NULL) {
rewriteConfigMarkAsProcessed(state,option);
return;
}

line = sdsnew(option);
line = sdscatlen(line, " ", 1);
line = sdscatsds(line, password);

rewriteConfigRewriteLine(state,option,line,force);
}

/* Glue together the configuration lines in the current configuration
* rewrite state into a single string, stripping multiple empty lines. */
sds rewriteConfigGetContentFromState(struct rewriteConfigState *state) {
@ -2022,36 +2229,40 @@ int rewriteConfig(char *path) {
rewriteConfigOctalOption(state,"unixsocketperm",server.unixsocketperm,CONFIG_DEFAULT_UNIX_SOCKET_PERM);
rewriteConfigNumericalOption(state,"timeout",server.maxidletime,CONFIG_DEFAULT_CLIENT_TIMEOUT);
rewriteConfigNumericalOption(state,"tcp-keepalive",server.tcpkeepalive,CONFIG_DEFAULT_TCP_KEEPALIVE);
rewriteConfigNumericalOption(state,"slave-announce-port",server.slave_announce_port,CONFIG_DEFAULT_SLAVE_ANNOUNCE_PORT);
rewriteConfigNumericalOption(state,"replica-announce-port",server.slave_announce_port,CONFIG_DEFAULT_SLAVE_ANNOUNCE_PORT);
rewriteConfigEnumOption(state,"loglevel",server.verbosity,loglevel_enum,CONFIG_DEFAULT_VERBOSITY);
rewriteConfigStringOption(state,"logfile",server.logfile,CONFIG_DEFAULT_LOGFILE);
rewriteConfigStringOption(state,"aclfile",server.acl_filename,CONFIG_DEFAULT_ACL_FILENAME);
rewriteConfigYesNoOption(state,"syslog-enabled",server.syslog_enabled,CONFIG_DEFAULT_SYSLOG_ENABLED);
rewriteConfigStringOption(state,"syslog-ident",server.syslog_ident,CONFIG_DEFAULT_SYSLOG_IDENT);
rewriteConfigSyslogfacilityOption(state);
rewriteConfigSaveOption(state);
rewriteConfigUserOption(state);
rewriteConfigNumericalOption(state,"databases",server.dbnum,CONFIG_DEFAULT_DBNUM);
rewriteConfigYesNoOption(state,"stop-writes-on-bgsave-error",server.stop_writes_on_bgsave_err,CONFIG_DEFAULT_STOP_WRITES_ON_BGSAVE_ERROR);
rewriteConfigYesNoOption(state,"rdbcompression",server.rdb_compression,CONFIG_DEFAULT_RDB_COMPRESSION);
rewriteConfigYesNoOption(state,"rdbchecksum",server.rdb_checksum,CONFIG_DEFAULT_RDB_CHECKSUM);
rewriteConfigStringOption(state,"dbfilename",server.rdb_filename,CONFIG_DEFAULT_RDB_FILENAME);
rewriteConfigDirOption(state);
rewriteConfigSlaveofOption(state);
rewriteConfigStringOption(state,"slave-announce-ip",server.slave_announce_ip,CONFIG_DEFAULT_SLAVE_ANNOUNCE_IP);
rewriteConfigSlaveofOption(state,"replicaof");
rewriteConfigStringOption(state,"replica-announce-ip",server.slave_announce_ip,CONFIG_DEFAULT_SLAVE_ANNOUNCE_IP);
rewriteConfigStringOption(state,"masteruser",server.masteruser,NULL);
rewriteConfigStringOption(state,"masterauth",server.masterauth,NULL);
rewriteConfigStringOption(state,"cluster-announce-ip",server.cluster_announce_ip,NULL);
rewriteConfigYesNoOption(state,"slave-serve-stale-data",server.repl_serve_stale_data,CONFIG_DEFAULT_SLAVE_SERVE_STALE_DATA);
rewriteConfigYesNoOption(state,"slave-read-only",server.repl_slave_ro,CONFIG_DEFAULT_SLAVE_READ_ONLY);
rewriteConfigNumericalOption(state,"repl-ping-slave-period",server.repl_ping_slave_period,CONFIG_DEFAULT_REPL_PING_SLAVE_PERIOD);
rewriteConfigYesNoOption(state,"replica-serve-stale-data",server.repl_serve_stale_data,CONFIG_DEFAULT_SLAVE_SERVE_STALE_DATA);
rewriteConfigYesNoOption(state,"replica-read-only",server.repl_slave_ro,CONFIG_DEFAULT_SLAVE_READ_ONLY);
rewriteConfigYesNoOption(state,"replica-ignore-maxmemory",server.repl_slave_ignore_maxmemory,CONFIG_DEFAULT_SLAVE_IGNORE_MAXMEMORY);
rewriteConfigNumericalOption(state,"repl-ping-replica-period",server.repl_ping_slave_period,CONFIG_DEFAULT_REPL_PING_SLAVE_PERIOD);
rewriteConfigNumericalOption(state,"repl-timeout",server.repl_timeout,CONFIG_DEFAULT_REPL_TIMEOUT);
rewriteConfigBytesOption(state,"repl-backlog-size",server.repl_backlog_size,CONFIG_DEFAULT_REPL_BACKLOG_SIZE);
rewriteConfigBytesOption(state,"repl-backlog-ttl",server.repl_backlog_time_limit,CONFIG_DEFAULT_REPL_BACKLOG_TIME_LIMIT);
rewriteConfigYesNoOption(state,"repl-disable-tcp-nodelay",server.repl_disable_tcp_nodelay,CONFIG_DEFAULT_REPL_DISABLE_TCP_NODELAY);
rewriteConfigYesNoOption(state,"repl-diskless-sync",server.repl_diskless_sync,CONFIG_DEFAULT_REPL_DISKLESS_SYNC);
rewriteConfigNumericalOption(state,"repl-diskless-sync-delay",server.repl_diskless_sync_delay,CONFIG_DEFAULT_REPL_DISKLESS_SYNC_DELAY);
rewriteConfigNumericalOption(state,"slave-priority",server.slave_priority,CONFIG_DEFAULT_SLAVE_PRIORITY);
rewriteConfigNumericalOption(state,"min-slaves-to-write",server.repl_min_slaves_to_write,CONFIG_DEFAULT_MIN_SLAVES_TO_WRITE);
rewriteConfigNumericalOption(state,"min-slaves-max-lag",server.repl_min_slaves_max_lag,CONFIG_DEFAULT_MIN_SLAVES_MAX_LAG);
rewriteConfigStringOption(state,"requirepass",server.requirepass,NULL);
rewriteConfigNumericalOption(state,"replica-priority",server.slave_priority,CONFIG_DEFAULT_SLAVE_PRIORITY);
rewriteConfigNumericalOption(state,"min-replicas-to-write",server.repl_min_slaves_to_write,CONFIG_DEFAULT_MIN_SLAVES_TO_WRITE);
rewriteConfigNumericalOption(state,"min-replicas-max-lag",server.repl_min_slaves_max_lag,CONFIG_DEFAULT_MIN_SLAVES_MAX_LAG);
rewriteConfigRequirepassOption(state,"requirepass");
rewriteConfigNumericalOption(state,"maxclients",server.maxclients,CONFIG_DEFAULT_MAX_CLIENTS);
rewriteConfigBytesOption(state,"maxmemory",server.maxmemory,CONFIG_DEFAULT_MAXMEMORY);
rewriteConfigBytesOption(state,"proto-max-bulk-len",server.proto_max_bulk_len,CONFIG_DEFAULT_PROTO_MAX_BULK_LEN);
@ -2076,10 +2287,10 @@ int rewriteConfig(char *path) {
rewriteConfigYesNoOption(state,"cluster-enabled",server.cluster_enabled,0);
rewriteConfigStringOption(state,"cluster-config-file",server.cluster_configfile,CONFIG_DEFAULT_CLUSTER_CONFIG_FILE);
rewriteConfigYesNoOption(state,"cluster-require-full-coverage",server.cluster_require_full_coverage,CLUSTER_DEFAULT_REQUIRE_FULL_COVERAGE);
rewriteConfigYesNoOption(state,"cluster-slave-no-failover",server.cluster_slave_no_failover,CLUSTER_DEFAULT_SLAVE_NO_FAILOVER);
rewriteConfigYesNoOption(state,"cluster-replica-no-failover",server.cluster_slave_no_failover,CLUSTER_DEFAULT_SLAVE_NO_FAILOVER);
rewriteConfigNumericalOption(state,"cluster-node-timeout",server.cluster_node_timeout,CLUSTER_DEFAULT_NODE_TIMEOUT);
rewriteConfigNumericalOption(state,"cluster-migration-barrier",server.cluster_migration_barrier,CLUSTER_DEFAULT_MIGRATION_BARRIER);
rewriteConfigNumericalOption(state,"cluster-slave-validity-factor",server.cluster_slave_validity_factor,CLUSTER_DEFAULT_SLAVE_VALIDITY);
rewriteConfigNumericalOption(state,"cluster-replica-validity-factor",server.cluster_slave_validity_factor,CLUSTER_DEFAULT_SLAVE_VALIDITY);
rewriteConfigNumericalOption(state,"slowlog-log-slower-than",server.slowlog_log_slower_than,CONFIG_DEFAULT_SLOWLOG_LOG_SLOWER_THAN);
rewriteConfigNumericalOption(state,"latency-monitor-threshold",server.latency_monitor_threshold,CONFIG_DEFAULT_LATENCY_MONITOR_THRESHOLD);
rewriteConfigNumericalOption(state,"slowlog-max-len",server.slowlog_max_len,CONFIG_DEFAULT_SLOWLOG_MAX_LEN);
@ -2097,8 +2308,9 @@ int rewriteConfig(char *path) {
rewriteConfigYesNoOption(state,"activerehashing",server.activerehashing,CONFIG_DEFAULT_ACTIVE_REHASHING);
rewriteConfigYesNoOption(state,"activedefrag",server.active_defrag_enabled,CONFIG_DEFAULT_ACTIVE_DEFRAG);
rewriteConfigYesNoOption(state,"protected-mode",server.protected_mode,CONFIG_DEFAULT_PROTECTED_MODE);
rewriteConfigYesNoOption(state,"gopher-enabled",server.gopher_enabled,CONFIG_DEFAULT_GOPHER_ENABLED);
rewriteConfigClientoutputbufferlimitOption(state);
rewriteConfigNumericalOption(state,"hz",server.hz,CONFIG_DEFAULT_HZ);
rewriteConfigNumericalOption(state,"hz",server.config_hz,CONFIG_DEFAULT_HZ);
rewriteConfigYesNoOption(state,"aof-rewrite-incremental-fsync",server.aof_rewrite_incremental_fsync,CONFIG_DEFAULT_AOF_REWRITE_INCREMENTAL_FSYNC);
rewriteConfigYesNoOption(state,"rdb-save-incremental-fsync",server.rdb_save_incremental_fsync,CONFIG_DEFAULT_RDB_SAVE_INCREMENTAL_FSYNC);
rewriteConfigYesNoOption(state,"aof-load-truncated",server.aof_load_truncated,CONFIG_DEFAULT_AOF_LOAD_TRUNCATED);
@ -2107,7 +2319,8 @@ int rewriteConfig(char *path) {
rewriteConfigYesNoOption(state,"lazyfree-lazy-eviction",server.lazyfree_lazy_eviction,CONFIG_DEFAULT_LAZYFREE_LAZY_EVICTION);
rewriteConfigYesNoOption(state,"lazyfree-lazy-expire",server.lazyfree_lazy_expire,CONFIG_DEFAULT_LAZYFREE_LAZY_EXPIRE);
rewriteConfigYesNoOption(state,"lazyfree-lazy-server-del",server.lazyfree_lazy_server_del,CONFIG_DEFAULT_LAZYFREE_LAZY_SERVER_DEL);
rewriteConfigYesNoOption(state,"slave-lazy-flush",server.repl_slave_lazy_flush,CONFIG_DEFAULT_SLAVE_LAZY_FLUSH);
rewriteConfigYesNoOption(state,"replica-lazy-flush",server.repl_slave_lazy_flush,CONFIG_DEFAULT_SLAVE_LAZY_FLUSH);
rewriteConfigYesNoOption(state,"dynamic-hz",server.dynamic_hz,CONFIG_DEFAULT_DYNAMIC_HZ);

/* Rewrite Sentinel config if in Sentinel mode. */
if (server.sentinel_mode) rewriteConfigSentinelOption(state);

@ -62,7 +62,9 @@
#endif

/* Test for backtrace() */
#if defined(__APPLE__) || (defined(__linux__) && defined(__GLIBC__))
#if defined(__APPLE__) || (defined(__linux__) && defined(__GLIBC__)) || \
defined(__FreeBSD__) || (defined(__OpenBSD__) && defined(USE_BACKTRACE))\
|| defined(__DragonFly__)
#define HAVE_BACKTRACE 1
#endif
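Note: widening the HAVE_BACKTRACE test lets the crash reporting code use backtrace() on the BSDs as well. For reference, the <execinfo.h> API being gated here is used roughly like this (a small sketch, not code from the commit):

    #ifdef HAVE_BACKTRACE
    #include <execinfo.h>
    static void dump_backtrace(void) {
        void *frames[100];
        int n = backtrace(frames, 100);        /* capture return addresses */
        backtrace_symbols_fd(frames, n, 2);    /* print them to stderr */
    }
    #endif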

101
src/db.c
@ -38,6 +38,8 @@
* C-level DB API
*----------------------------------------------------------------------------*/

int keyIsExpired(redisDb *db, robj *key);

/* Update LFU when an object is accessed.
* Firstly, decrement the counter if the decrement time is reached.
* Then logarithmically increment the counter, and update the access time. */
@ -102,7 +104,10 @@ robj *lookupKeyReadWithFlags(redisDb *db, robj *key, int flags) {
/* Key expired. If we are in the context of a master, expireIfNeeded()
* returns 0 only when the key does not exist at all, so it's safe
* to return NULL ASAP. */
if (server.masterhost == NULL) return NULL;
if (server.masterhost == NULL) {
server.stat_keyspace_misses++;
return NULL;
}

/* However if we are in the context of a slave, expireIfNeeded() will
* not really try to expire the key, it only returns information
@ -121,6 +126,7 @@ robj *lookupKeyReadWithFlags(redisDb *db, robj *key, int flags) {
server.current_client->cmd &&
server.current_client->cmd->flags & CMD_READONLY)
{
server.stat_keyspace_misses++;
return NULL;
}
}
@ -184,14 +190,19 @@ void dbOverwrite(redisDb *db, robj *key, robj *val) {
dictEntry *de = dictFind(db->dict,key->ptr);

serverAssertWithInfo(NULL,key,de != NULL);
dictEntry auxentry = *de;
robj *old = dictGetVal(de);
if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
robj *old = dictGetVal(de);
int saved_lru = old->lru;
dictReplace(db->dict, key->ptr, val);
val->lru = saved_lru;
} else {
dictReplace(db->dict, key->ptr, val);
val->lru = old->lru;
}
dictSetVal(db->dict, de, val);

if (server.lazyfree_lazy_server_del) {
freeObjAsync(old);
dictSetVal(db->dict, &auxentry, NULL);
}

dictFreeVal(db->dict, &auxentry);
}
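Note: the reworked dbOverwrite() keeps the old value's LRU/LFU field on the new value and, when lazyfree-lazy-server-del is enabled, hands the old value to the background free path instead of releasing it inline. A simplified sketch of that general pattern with generic stand-in types (not the Redis API):

    struct val { int lru; /* ... payload ... */ };

    void overwrite_value(struct val **slot, struct val *newval, int lazy,
                         void (*free_async)(struct val *),
                         void (*free_now)(struct val *)) {
        struct val *old = *slot;
        newval->lru = old->lru;        /* preserve LRU/LFU accounting */
        *slot = newval;
        if (lazy) free_async(old);     /* reclaimed later by a bio thread */
        else free_now(old);
    }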

/* High level Set operation. This function can be used in order to set
@ -201,7 +212,7 @@ void dbOverwrite(redisDb *db, robj *key, robj *val) {
* 2) clients WATCHing for the destination key notified.
* 3) The expire time of the key is reset (the key is made persistent).
*
* All the new keys in the database should be craeted via this interface. */
* All the new keys in the database should be created via this interface. */
void setKey(redisDb *db, robj *key, robj *val) {
if (lookupKeyWrite(db,key) == NULL) {
dbAdd(db,key,val);
@ -230,7 +241,7 @@ robj *dbRandomKey(redisDb *db) {
sds key;
robj *keyobj;

de = dictGetRandomKey(db->dict);
de = dictGetFairRandomKey(db->dict);
if (de == NULL) return NULL;

key = dictGetKey(de);
@ -329,7 +340,7 @@ robj *dbUnshareStringValue(redisDb *db, robj *key, robj *o) {
* database(s). Otherwise -1 is returned in the specific case the
* DB number is out of range, and errno is set to EINVAL. */
long long emptyDb(int dbnum, int flags, void(callback)(void*)) {
int j, async = (flags & EMPTYDB_ASYNC);
int async = (flags & EMPTYDB_ASYNC);
long long removed = 0;

if (dbnum < -1 || dbnum >= server.dbnum) {
@ -337,8 +348,15 @@ long long emptyDb(int dbnum, int flags, void(callback)(void*)) {
return -1;
}

for (j = 0; j < server.dbnum; j++) {
if (dbnum != -1 && dbnum != j) continue;
int startdb, enddb;
if (dbnum == -1) {
startdb = 0;
enddb = server.dbnum-1;
} else {
startdb = enddb = dbnum;
}

for (int j = startdb; j <= enddb; j++) {
removed += dictSize(server.db[j].dict);
if (async) {
emptyDbAsync(&server.db[j]);
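Note: emptyDb() now turns its dbnum argument into an explicit [startdb, enddb] range instead of looping over every database and skipping. The mapping in isolation (illustrative only):

    /* -1 still means "flush every database". */
    void db_range(int dbnum, int total_dbs, int *startdb, int *enddb) {
        if (dbnum == -1) {
            *startdb = 0;
            *enddb = total_dbs - 1;
        } else {
            *startdb = *enddb = dbnum;   /* a single database */
        }
    }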
@ -430,10 +448,7 @@ void flushallCommand(client *c) {
signalFlushedDb(-1);
server.dirty += emptyDb(-1,flags,NULL);
addReply(c,shared.ok);
if (server.rdb_child_pid != -1) {
kill(server.rdb_child_pid,SIGUSR1);
rdbRemoveTempFile(server.rdb_child_pid);
}
if (server.rdb_child_pid != -1) killRDBChild();
if (server.saveparamslen > 0) {
/* Normally rdbSave() will reset dirty, but we don't want this here
* as otherwise FLUSHALL will not be replicated nor put into the AOF. */
@ -507,7 +522,7 @@ void randomkeyCommand(client *c) {
robj *key;

if ((key = dbRandomKey(c->db)) == NULL) {
addReply(c,shared.nullbulk);
addReplyNull(c);
return;
}

@ -521,7 +536,7 @@ void keysCommand(client *c) {
sds pattern = c->argv[1]->ptr;
int plen = sdslen(pattern), allkeys;
unsigned long numkeys = 0;
void *replylen = addDeferredMultiBulkLength(c);
void *replylen = addReplyDeferredLen(c);

di = dictGetSafeIterator(c->db->dict);
allkeys = (pattern[0] == '*' && pattern[1] == '\0');
@ -531,7 +546,7 @@ void keysCommand(client *c) {

if (allkeys || stringmatchlen(pattern,plen,key,sdslen(key),0)) {
keyobj = createStringObject(key,sdslen(key));
if (expireIfNeeded(c->db,keyobj) == 0) {
if (!keyIsExpired(c->db,keyobj)) {
addReplyBulk(c,keyobj);
numkeys++;
}
@ -539,7 +554,7 @@ void keysCommand(client *c) {
}
}
dictReleaseIterator(di);
setDeferredMultiBulkLength(c,replylen,numkeys);
setDeferredArrayLen(c,replylen,numkeys);
}

/* This callback is used by scanGenericCommand in order to collect elements
@ -764,10 +779,10 @@ void scanGenericCommand(client *c, robj *o, unsigned long cursor) {
}

/* Step 4: Reply to the client. */
addReplyMultiBulkLen(c, 2);
addReplyArrayLen(c, 2);
addReplyBulkLongLong(c,cursor);

addReplyMultiBulkLen(c, listLength(keys));
addReplyArrayLen(c, listLength(keys));
while ((node = listFirst(keys)) != NULL) {
robj *kobj = listNodeValue(node);
addReplyBulk(c, kobj);
@ -1108,6 +1123,25 @@ void propagateExpire(redisDb *db, robj *key, int lazy) {
decrRefCount(argv[1]);
}

/* Check if the key is expired. */
int keyIsExpired(redisDb *db, robj *key) {
mstime_t when = getExpire(db,key);

if (when < 0) return 0; /* No expire for this key */

/* Don't expire anything while loading. It will be done later. */
if (server.loading) return 0;

/* If we are in the context of a Lua script, we pretend that time is
* blocked to when the Lua script started. This way a key can expire
* only the first time it is accessed and not in the middle of the
* script execution, making propagation to slaves / AOF consistent.
* See issue #1525 on Github for more information. */
mstime_t now = server.lua_caller ? server.lua_time_start : mstime();

return now > when;
}
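Note: keyIsExpired() answers "is this key logically expired?" with no side effects, while expireIfNeeded() (refactored just below) may actually delete the key and propagate a DEL/UNLINK. Read paths that must not mutate during iteration, such as the KEYS change above, can use the former. A hedged sketch of that division of labor; lookup_raw() is a hypothetical plain dictionary lookup, not a Redis function:

    robj *read_only_lookup(redisDb *db, robj *key) {
        robj *val = lookup_raw(db, key);           /* hypothetical helper */
        if (val == NULL) return NULL;
        if (keyIsExpired(db, key)) return NULL;    /* pretend it is gone... */
        return val;                                /* ...but do not delete it */
    }

    robj *write_path_lookup(redisDb *db, robj *key) {
        if (expireIfNeeded(db, key)) return NULL;  /* may delete and propagate */
        return lookup_raw(db, key);
    }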

/* This function is called when we are going to perform some operation
* in a given key, but such key may be already logically expired even if
* it still exists in the database. The main way this function is called
@ -1128,32 +1162,17 @@ void propagateExpire(redisDb *db, robj *key, int lazy) {
* The return value of the function is 0 if the key is still valid,
* otherwise the function returns 1 if the key is expired. */
int expireIfNeeded(redisDb *db, robj *key) {
mstime_t when = getExpire(db,key);
mstime_t now;
if (!keyIsExpired(db,key)) return 0;

if (when < 0) return 0; /* No expire for this key */

/* Don't expire anything while loading. It will be done later. */
if (server.loading) return 0;

/* If we are in the context of a Lua script, we pretend that time is
* blocked to when the Lua script started. This way a key can expire
* only the first time it is accessed and not in the middle of the
* script execution, making propagation to slaves / AOF consistent.
* See issue #1525 on Github for more information. */
now = server.lua_caller ? server.lua_time_start : mstime();

/* If we are running in the context of a slave, return ASAP:
/* If we are running in the context of a slave, instead of
* evicting the expired key from the database, we return ASAP:
* the slave key expiration is controlled by the master that will
* send us synthesized DEL operations for expired keys.
*
* Still we try to return the right information to the caller,
* that is, 0 if we think the key should be still valid, 1 if
* we think the key is expired at this time. */
if (server.masterhost != NULL) return now > when;

/* Return when this key has not expired */
if (now <= when) return 0;
if (server.masterhost != NULL) return 1;

/* Delete the key */
server.stat_expiredkeys++;

530
src/debug.c
@ -37,7 +37,11 @@

#ifdef HAVE_BACKTRACE
#include <execinfo.h>
#ifndef __OpenBSD__
#include <ucontext.h>
#else
typedef ucontext_t sigcontext_t;
#endif
#include <fcntl.h>
#include "bio.h"
#include <unistd.h>
@ -70,7 +74,7 @@ void xorDigest(unsigned char *digest, void *ptr, size_t len) {
digest[j] ^= hash[j];
}

void xorObjectDigest(unsigned char *digest, robj *o) {
void xorStringObjectDigest(unsigned char *digest, robj *o) {
o = getDecodedObject(o);
xorDigest(digest,o->ptr,sdslen(o->ptr));
decrRefCount(o);
@ -100,12 +104,151 @@ void mixDigest(unsigned char *digest, void *ptr, size_t len) {
SHA1Final(digest,&ctx);
}

void mixObjectDigest(unsigned char *digest, robj *o) {
void mixStringObjectDigest(unsigned char *digest, robj *o) {
o = getDecodedObject(o);
mixDigest(digest,o->ptr,sdslen(o->ptr));
decrRefCount(o);
}

/* This function computes the digest of a data structure stored in the
* object 'o'. It is the core of the DEBUG DIGEST command: when taking the
* digest of a whole dataset, we take the digest of the key and the value
* pair, and xor all those together.
*
* Note that this function does not reset the initial 'digest' passed, it
* will continue mixing this object digest to anything that was already
* present. */
void xorObjectDigest(redisDb *db, robj *keyobj, unsigned char *digest, robj *o) {
uint32_t aux = htonl(o->type);
mixDigest(digest,&aux,sizeof(aux));
long long expiretime = getExpire(db,keyobj);
char buf[128];

/* Save the key and associated value */
if (o->type == OBJ_STRING) {
mixStringObjectDigest(digest,o);
} else if (o->type == OBJ_LIST) {
listTypeIterator *li = listTypeInitIterator(o,0,LIST_TAIL);
listTypeEntry entry;
while(listTypeNext(li,&entry)) {
robj *eleobj = listTypeGet(&entry);
mixStringObjectDigest(digest,eleobj);
decrRefCount(eleobj);
}
listTypeReleaseIterator(li);
} else if (o->type == OBJ_SET) {
setTypeIterator *si = setTypeInitIterator(o);
sds sdsele;
while((sdsele = setTypeNextObject(si)) != NULL) {
xorDigest(digest,sdsele,sdslen(sdsele));
sdsfree(sdsele);
}
setTypeReleaseIterator(si);
} else if (o->type == OBJ_ZSET) {
unsigned char eledigest[20];

if (o->encoding == OBJ_ENCODING_ZIPLIST) {
unsigned char *zl = o->ptr;
unsigned char *eptr, *sptr;
unsigned char *vstr;
unsigned int vlen;
long long vll;
double score;

eptr = ziplistIndex(zl,0);
serverAssert(eptr != NULL);
sptr = ziplistNext(zl,eptr);
serverAssert(sptr != NULL);

while (eptr != NULL) {
serverAssert(ziplistGet(eptr,&vstr,&vlen,&vll));
score = zzlGetScore(sptr);

memset(eledigest,0,20);
if (vstr != NULL) {
mixDigest(eledigest,vstr,vlen);
} else {
ll2string(buf,sizeof(buf),vll);
mixDigest(eledigest,buf,strlen(buf));
}

snprintf(buf,sizeof(buf),"%.17g",score);
mixDigest(eledigest,buf,strlen(buf));
xorDigest(digest,eledigest,20);
zzlNext(zl,&eptr,&sptr);
}
} else if (o->encoding == OBJ_ENCODING_SKIPLIST) {
zset *zs = o->ptr;
dictIterator *di = dictGetIterator(zs->dict);
dictEntry *de;

while((de = dictNext(di)) != NULL) {
sds sdsele = dictGetKey(de);
double *score = dictGetVal(de);

snprintf(buf,sizeof(buf),"%.17g",*score);
memset(eledigest,0,20);
mixDigest(eledigest,sdsele,sdslen(sdsele));
mixDigest(eledigest,buf,strlen(buf));
xorDigest(digest,eledigest,20);
}
dictReleaseIterator(di);
} else {
serverPanic("Unknown sorted set encoding");
}
} else if (o->type == OBJ_HASH) {
hashTypeIterator *hi = hashTypeInitIterator(o);
while (hashTypeNext(hi) != C_ERR) {
unsigned char eledigest[20];
sds sdsele;

memset(eledigest,0,20);
sdsele = hashTypeCurrentObjectNewSds(hi,OBJ_HASH_KEY);
mixDigest(eledigest,sdsele,sdslen(sdsele));
sdsfree(sdsele);
sdsele = hashTypeCurrentObjectNewSds(hi,OBJ_HASH_VALUE);
mixDigest(eledigest,sdsele,sdslen(sdsele));
sdsfree(sdsele);
xorDigest(digest,eledigest,20);
}
hashTypeReleaseIterator(hi);
} else if (o->type == OBJ_STREAM) {
streamIterator si;
streamIteratorStart(&si,o->ptr,NULL,NULL,0);
streamID id;
int64_t numfields;

while(streamIteratorGetID(&si,&id,&numfields)) {
sds itemid = sdscatfmt(sdsempty(),"%U.%U",id.ms,id.seq);
mixDigest(digest,itemid,sdslen(itemid));
sdsfree(itemid);

while(numfields--) {
unsigned char *field, *value;
int64_t field_len, value_len;
streamIteratorGetField(&si,&field,&value,
&field_len,&value_len);
mixDigest(digest,field,field_len);
mixDigest(digest,value,value_len);
}
}
streamIteratorStop(&si);
} else if (o->type == OBJ_MODULE) {
RedisModuleDigest md;
moduleValue *mv = o->ptr;
moduleType *mt = mv->type;
moduleInitDigestContext(md);
if (mt->digest) {
mt->digest(&md,mv->value);
xorDigest(digest,md.x,sizeof(md.x));
}
} else {
serverPanic("Unknown object type");
}
/* If the key has an expire, add it to the mix */
if (expiretime != -1) xorDigest(digest,"!!expire!!",10);
}
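Note: the digest is built so that aggregates are order independent: each element is mixed into its own small digest and the per-element digests are XORed together, and XOR is commutative and associative. A toy demonstration of that property with a deliberately dumb stand-in for the SHA1-based mixDigest() (not the Redis code):

    #include <stdio.h>
    #include <string.h>

    static void mix20(unsigned char d[20], const char *s) {
        for (size_t i = 0; s[i]; i++) d[i % 20] ^= (unsigned char)(s[i] + i);
    }

    int main(void) {
        const char *a[] = {"field1", "field2", "field3"};
        const char *b[] = {"field3", "field1", "field2"};  /* same set, new order */
        unsigned char d1[20] = {0}, d2[20] = {0};
        for (int i = 0; i < 3; i++) {
            unsigned char e[20] = {0};
            mix20(e, a[i]);                                 /* per-element digest */
            for (int j = 0; j < 20; j++) d1[j] ^= e[j];     /* XOR-combine */
        }
        for (int i = 0; i < 3; i++) {
            unsigned char e[20] = {0};
            mix20(e, b[i]);
            for (int j = 0; j < 20; j++) d2[j] ^= e[j];
        }
        printf("%s\n", memcmp(d1, d2, 20) == 0 ? "digests match" : "bug");
        return 0;
    }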
|
||||
|
||||
/* Compute the dataset digest. Since keys, sets elements, hashes elements
|
||||
* are not ordered, we use a trick: every aggregate digest is the xor
|
||||
* of the digests of their elements. This way the order will not change
|
||||
@ -114,7 +257,6 @@ void mixObjectDigest(unsigned char *digest, robj *o) {
|
||||
* a different digest. */
|
||||
void computeDatasetDigest(unsigned char *final) {
|
||||
unsigned char digest[20];
|
||||
char buf[128];
|
||||
dictIterator *di = NULL;
|
||||
dictEntry *de;
|
||||
int j;
|
||||
@ -137,7 +279,6 @@ void computeDatasetDigest(unsigned char *final) {
|
||||
while((de = dictNext(di)) != NULL) {
|
||||
sds key;
|
||||
robj *keyobj, *o;
|
||||
long long expiretime;
|
||||
|
||||
memset(digest,0,20); /* This key-val digest */
|
||||
key = dictGetKey(de);
|
||||
@ -146,134 +287,8 @@ void computeDatasetDigest(unsigned char *final) {
|
||||
mixDigest(digest,key,sdslen(key));
|
||||
|
||||
o = dictGetVal(de);
|
||||
xorObjectDigest(db,keyobj,digest,o);
|
||||
|
||||
aux = htonl(o->type);
|
||||
mixDigest(digest,&aux,sizeof(aux));
|
||||
expiretime = getExpire(db,keyobj);
|
||||
|
||||
/* Save the key and associated value */
|
||||
if (o->type == OBJ_STRING) {
|
||||
mixObjectDigest(digest,o);
|
||||
} else if (o->type == OBJ_LIST) {
|
||||
listTypeIterator *li = listTypeInitIterator(o,0,LIST_TAIL);
|
||||
listTypeEntry entry;
|
||||
while(listTypeNext(li,&entry)) {
|
||||
robj *eleobj = listTypeGet(&entry);
|
||||
mixObjectDigest(digest,eleobj);
|
||||
decrRefCount(eleobj);
|
||||
}
|
||||
listTypeReleaseIterator(li);
|
||||
} else if (o->type == OBJ_SET) {
|
||||
setTypeIterator *si = setTypeInitIterator(o);
|
||||
sds sdsele;
|
||||
while((sdsele = setTypeNextObject(si)) != NULL) {
|
||||
xorDigest(digest,sdsele,sdslen(sdsele));
|
||||
sdsfree(sdsele);
|
||||
}
|
||||
setTypeReleaseIterator(si);
|
||||
} else if (o->type == OBJ_ZSET) {
|
||||
unsigned char eledigest[20];
|
||||
|
||||
if (o->encoding == OBJ_ENCODING_ZIPLIST) {
|
||||
unsigned char *zl = o->ptr;
|
||||
unsigned char *eptr, *sptr;
|
||||
unsigned char *vstr;
|
||||
unsigned int vlen;
|
||||
long long vll;
|
||||
double score;
|
||||
|
||||
eptr = ziplistIndex(zl,0);
|
||||
serverAssert(eptr != NULL);
|
||||
sptr = ziplistNext(zl,eptr);
|
||||
serverAssert(sptr != NULL);
|
||||
|
||||
while (eptr != NULL) {
|
||||
serverAssert(ziplistGet(eptr,&vstr,&vlen,&vll));
|
||||
score = zzlGetScore(sptr);
|
||||
|
||||
memset(eledigest,0,20);
|
||||
if (vstr != NULL) {
|
||||
mixDigest(eledigest,vstr,vlen);
|
||||
} else {
|
||||
ll2string(buf,sizeof(buf),vll);
|
||||
mixDigest(eledigest,buf,strlen(buf));
|
||||
}
|
||||
|
||||
snprintf(buf,sizeof(buf),"%.17g",score);
|
||||
mixDigest(eledigest,buf,strlen(buf));
|
||||
xorDigest(digest,eledigest,20);
|
||||
zzlNext(zl,&eptr,&sptr);
|
||||
}
|
||||
} else if (o->encoding == OBJ_ENCODING_SKIPLIST) {
|
||||
zset *zs = o->ptr;
|
||||
dictIterator *di = dictGetIterator(zs->dict);
|
||||
dictEntry *de;
|
||||
|
||||
while((de = dictNext(di)) != NULL) {
|
||||
sds sdsele = dictGetKey(de);
|
||||
double *score = dictGetVal(de);
|
||||
|
||||
snprintf(buf,sizeof(buf),"%.17g",*score);
|
||||
memset(eledigest,0,20);
|
||||
mixDigest(eledigest,sdsele,sdslen(sdsele));
|
||||
mixDigest(eledigest,buf,strlen(buf));
|
||||
xorDigest(digest,eledigest,20);
|
||||
}
|
||||
dictReleaseIterator(di);
|
||||
} else {
|
||||
serverPanic("Unknown sorted set encoding");
|
||||
}
|
||||
} else if (o->type == OBJ_HASH) {
|
||||
hashTypeIterator *hi = hashTypeInitIterator(o);
|
||||
while (hashTypeNext(hi) != C_ERR) {
|
||||
unsigned char eledigest[20];
|
||||
sds sdsele;
|
||||
|
||||
memset(eledigest,0,20);
|
||||
sdsele = hashTypeCurrentObjectNewSds(hi,OBJ_HASH_KEY);
|
||||
mixDigest(eledigest,sdsele,sdslen(sdsele));
|
||||
sdsfree(sdsele);
|
||||
sdsele = hashTypeCurrentObjectNewSds(hi,OBJ_HASH_VALUE);
|
||||
mixDigest(eledigest,sdsele,sdslen(sdsele));
|
||||
sdsfree(sdsele);
|
||||
xorDigest(digest,eledigest,20);
|
||||
}
|
||||
hashTypeReleaseIterator(hi);
|
||||
} else if (o->type == OBJ_STREAM) {
|
||||
streamIterator si;
|
||||
streamIteratorStart(&si,o->ptr,NULL,NULL,0);
|
||||
streamID id;
|
||||
int64_t numfields;
|
||||
|
||||
while(streamIteratorGetID(&si,&id,&numfields)) {
|
||||
sds itemid = sdscatfmt(sdsempty(),"%U.%U",id.ms,id.seq);
|
||||
mixDigest(digest,itemid,sdslen(itemid));
|
||||
sdsfree(itemid);
|
||||
|
||||
while(numfields--) {
|
||||
unsigned char *field, *value;
|
||||
int64_t field_len, value_len;
|
||||
streamIteratorGetField(&si,&field,&value,
|
||||
&field_len,&value_len);
|
||||
mixDigest(digest,field,field_len);
|
||||
mixDigest(digest,value,value_len);
|
||||
}
|
||||
}
|
||||
streamIteratorStop(&si);
|
||||
} else if (o->type == OBJ_MODULE) {
|
||||
RedisModuleDigest md;
|
||||
moduleValue *mv = o->ptr;
|
||||
moduleType *mt = mv->type;
|
||||
moduleInitDigestContext(md);
|
||||
if (mt->digest) {
|
||||
mt->digest(&md,mv->value);
|
||||
xorDigest(digest,md.x,sizeof(md.x));
|
||||
}
|
||||
} else {
|
||||
serverPanic("Unknown object type");
|
||||
}
|
||||
/* If the key has an expire, add it to the mix */
|
||||
if (expiretime != -1) xorDigest(digest,"!!expire!!",10);
|
||||
/* We can finally xor the key-val digest to the final digest */
|
||||
xorDigest(final,digest,20);
|
||||
decrRefCount(keyobj);
|
||||
@ -289,7 +304,9 @@ void debugCommand(client *c) {
"CHANGE-REPL-ID -- Change the replication IDs of the instance. Dangerous, should be used only for testing the replication subsystem.",
"CRASH-AND-RECOVER <milliseconds> -- Hard crash and restart after <milliseconds> delay.",
"DIGEST -- Output a hex signature representing the current DB content.",
"DIGEST-VALUE <key-1> ... <key-N>-- Output a hex signature of the values of all the specified keys.",
"ERROR <string> -- Return a Redis protocol error with <string> as message. Useful for clients unit tests to simulate Redis errors.",
"LOG <message> -- write message to the server log.",
"HTSTATS <dbid> -- Return hash table statistics of the specified Redis database.",
"HTSTATS-KEY <key> -- Like htstats but for the hash table stored as key's value.",
"LOADAOF -- Flush the AOF buffers on disk and reload the AOF in memory.",
@ -305,6 +322,7 @@ void debugCommand(client *c) {
"SLEEP <seconds> -- Stop the server for <seconds>. Decimals allowed.",
"STRUCTSIZE -- Return the size of different Redis core C structures.",
"ZIPLIST <key> -- Show low level info about the ziplist encoding.",
"STRINGMATCH-TEST -- Run a fuzz tester against the stringmatchlen() function.",
NULL
};
addReplyHelp(c, help);
@ -331,8 +349,10 @@ NULL
zfree(ptr);
addReply(c,shared.ok);
} else if (!strcasecmp(c->argv[1]->ptr,"assert")) {
if (c->argc >= 3) c->argv[2] = tryObjectEncoding(c->argv[2]);
serverAssertWithInfo(c,c->argv[0],1 == 2);
} else if (!strcasecmp(c->argv[1]->ptr,"log") && c->argc == 3) {
serverLog(LL_WARNING, "DEBUG LOG: %s", (char*)c->argv[2]->ptr);
addReply(c,shared.ok);
} else if (!strcasecmp(c->argv[1]->ptr,"reload")) {
rdbSaveInfo rsi, *rsiptr;
rsiptr = rdbPopulateSaveInfo(&rsi);
@ -341,7 +361,10 @@ NULL
return;
}
emptyDb(-1,EMPTYDB_NO_FLAGS,NULL);
if (rdbLoad(server.rdb_filename,NULL) != C_OK) {
protectClient(c);
int ret = rdbLoad(server.rdb_filename,NULL);
unprotectClient(c);
if (ret != C_OK) {
addReplyError(c,"Error trying to load the RDB dump");
return;
}
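Note: DEBUG RELOAD (and LOADAOF just below) now brackets the blocking load with protectClient()/unprotectClient(), presumably so the calling client cannot be freed or fed more input while the server is busy for a long stretch. The general shape of that pattern, as a hedged sketch rather than the literal debug.c code:

    /* Never leave a client in its normal state across a call that can block
     * the server for seconds. op() stands for rdbLoad()/loadAppendOnlyFile(). */
    void do_blocking_admin_op(client *c, int (*op)(void)) {
        protectClient(c);          /* park the client during the blocking call */
        int ret = op();
        unprotectClient(c);        /* resume normal handling */
        if (ret != C_OK) addReplyError(c, "operation failed");
        else addReply(c, shared.ok);
    }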
@ -350,7 +373,10 @@ NULL
} else if (!strcasecmp(c->argv[1]->ptr,"loadaof")) {
if (server.aof_state != AOF_OFF) flushAppendOnlyFile(1);
emptyDb(-1,EMPTYDB_NO_FLAGS,NULL);
if (loadAppendOnlyFile(server.aof_filename) != C_OK) {
protectClient(c);
int ret = loadAppendOnlyFile(server.aof_filename);
unprotectClient(c);
if (ret != C_OK) {
addReply(c,shared.err);
return;
}
@ -481,15 +507,80 @@ NULL
}
addReply(c,shared.ok);
} else if (!strcasecmp(c->argv[1]->ptr,"digest") && c->argc == 2) {
/* DEBUG DIGEST (form without keys specified) */
unsigned char digest[20];
sds d = sdsempty();
int j;

computeDatasetDigest(digest);
for (j = 0; j < 20; j++)
d = sdscatprintf(d, "%02x",digest[j]);
for (int i = 0; i < 20; i++) d = sdscatprintf(d, "%02x",digest[i]);
addReplyStatus(c,d);
sdsfree(d);
} else if (!strcasecmp(c->argv[1]->ptr,"digest-value") && c->argc >= 2) {
/* DEBUG DIGEST-VALUE key key key ... key. */
addReplyArrayLen(c,c->argc-2);
for (int j = 2; j < c->argc; j++) {
unsigned char digest[20];
memset(digest,0,20); /* Start with a clean result */
robj *o = lookupKeyReadWithFlags(c->db,c->argv[j],LOOKUP_NOTOUCH);
if (o) xorObjectDigest(c->db,c->argv[j],digest,o);

sds d = sdsempty();
for (int i = 0; i < 20; i++) d = sdscatprintf(d, "%02x",digest[i]);
addReplyStatus(c,d);
sdsfree(d);
}
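Note: each DEBUG DIGEST-VALUE reply element is the per-key 20-byte digest rendered as 40 hex characters, which makes it easy to compare a key between two instances. A self-contained, sds-free version of that rendering plus the comparison a consistency test would do (illustrative only):

    #include <stdio.h>
    #include <string.h>

    void digest_to_hex(const unsigned char digest[20], char out[41]) {
        for (int i = 0; i < 20; i++) sprintf(out + i * 2, "%02x", digest[i]);
        out[40] = '\0';
    }

    int digests_equal(const unsigned char a[20], const unsigned char b[20]) {
        return memcmp(a, b, 20) == 0;   /* e.g. master digest vs replica digest */
    }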
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"protocol") && c->argc == 3) {
|
||||
/* DEBUG PROTOCOL [string|integer|double|bignum|null|array|set|map|
|
||||
* attrib|push|verbatim|true|false|state|err|bloberr] */
|
||||
char *name = c->argv[2]->ptr;
|
||||
if (!strcasecmp(name,"string")) {
|
||||
addReplyBulkCString(c,"Hello World");
|
||||
} else if (!strcasecmp(name,"integer")) {
|
||||
addReplyLongLong(c,12345);
|
||||
} else if (!strcasecmp(name,"double")) {
|
||||
addReplyDouble(c,3.14159265359);
|
||||
} else if (!strcasecmp(name,"bignum")) {
|
||||
addReplyProto(c,"(1234567999999999999999999999999999999\r\n",40);
|
||||
} else if (!strcasecmp(name,"null")) {
|
||||
addReplyNull(c);
|
||||
} else if (!strcasecmp(name,"array")) {
|
||||
addReplyArrayLen(c,3);
|
||||
for (int j = 0; j < 3; j++) addReplyLongLong(c,j);
|
||||
} else if (!strcasecmp(name,"set")) {
|
||||
addReplySetLen(c,3);
|
||||
for (int j = 0; j < 3; j++) addReplyLongLong(c,j);
|
||||
} else if (!strcasecmp(name,"map")) {
|
||||
addReplyMapLen(c,3);
|
||||
for (int j = 0; j < 3; j++) {
|
||||
addReplyLongLong(c,j);
|
||||
addReplyBool(c, j == 1);
|
||||
}
|
||||
} else if (!strcasecmp(name,"attrib")) {
|
||||
addReplyAttributeLen(c,1);
|
||||
addReplyBulkCString(c,"key-popularity");
|
||||
addReplyArrayLen(c,2);
|
||||
addReplyBulkCString(c,"key:123");
|
||||
addReplyLongLong(c,90);
|
||||
/* Attributes are not real replies, so a well formed reply should
|
||||
* also have a normal reply type after the attribute. */
|
||||
addReplyBulkCString(c,"Some real reply following the attribute");
|
||||
} else if (!strcasecmp(name,"push")) {
|
||||
addReplyPushLen(c,2);
|
||||
addReplyBulkCString(c,"server-cpu-usage");
|
||||
addReplyLongLong(c,42);
|
||||
/* Push replies are not synchronous replies, so we emit also a
|
||||
* normal reply in order for blocking clients just discarding the
|
||||
* push reply, to actually consume the reply and continue. */
|
||||
addReplyBulkCString(c,"Some real reply following the push reply");
|
||||
} else if (!strcasecmp(name,"true")) {
|
||||
addReplyBool(c,1);
|
||||
} else if (!strcasecmp(name,"false")) {
|
||||
addReplyBool(c,0);
|
||||
} else if (!strcasecmp(name,"verbatim")) {
|
||||
addReplyVerbatim(c,"This is a verbatim\nstring",25,"txt");
|
||||
} else {
|
||||
addReplyError(c,"Wrong protocol type name. Please use one of the following: string|integer|double|bignum|null|array|set|map|attrib|push|verbatim|true|false|state|err|bloberr");
|
||||
}
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"sleep") && c->argc == 3) {
|
||||
double dtime = strtod(c->argv[2]->ptr,NULL);
|
||||
long long utime = dtime*1000000;
|
||||
@ -581,6 +672,10 @@ NULL
|
||||
changeReplicationId();
|
||||
clearReplicationId2();
|
||||
addReply(c,shared.ok);
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"stringmatch-test") && c->argc == 2)
|
||||
{
|
||||
stringmatchlen_fuzz_test();
|
||||
addReplyStatus(c,"Apparently Redis did not crash: test passed");
|
||||
} else {
|
||||
addReplySubcommandSyntaxError(c);
|
||||
return;
|
||||
@ -708,7 +803,7 @@ static void *getMcontextEip(ucontext_t *uc) {
|
||||
#endif
|
||||
#elif defined(__linux__)
|
||||
/* Linux */
|
||||
#if defined(__i386__)
|
||||
#if defined(__i386__) || defined(__ILP32__)
|
||||
return (void*) uc->uc_mcontext.gregs[14]; /* Linux 32 */
|
||||
#elif defined(__X86_64__) || defined(__x86_64__)
|
||||
return (void*) uc->uc_mcontext.gregs[16]; /* Linux 64 */
|
||||
@ -719,6 +814,22 @@ static void *getMcontextEip(ucontext_t *uc) {
|
||||
#elif defined(__aarch64__) /* Linux AArch64 */
|
||||
return (void*) uc->uc_mcontext.pc;
|
||||
#endif
|
||||
#elif defined(__FreeBSD__)
|
||||
/* FreeBSD */
|
||||
#if defined(__i386__)
|
||||
return (void*) uc->uc_mcontext.mc_eip;
|
||||
#elif defined(__x86_64__)
|
||||
return (void*) uc->uc_mcontext.mc_rip;
|
||||
#endif
|
||||
#elif defined(__OpenBSD__)
|
||||
/* OpenBSD */
|
||||
#if defined(__i386__)
|
||||
return (void*) uc->sc_eip;
|
||||
#elif defined(__x86_64__)
|
||||
return (void*) uc->sc_rip;
|
||||
#endif
|
||||
#elif defined(__DragonFly__)
|
||||
return (void*) uc->uc_mcontext.mc_rip;
|
||||
#else
|
||||
return NULL;
|
||||
#endif
|
||||
@ -804,7 +915,7 @@ void logRegisters(ucontext_t *uc) {
|
||||
/* Linux */
|
||||
#elif defined(__linux__)
|
||||
/* Linux x86 */
|
||||
#if defined(__i386__)
|
||||
#if defined(__i386__) || defined(__ILP32__)
|
||||
serverLog(LL_WARNING,
|
||||
"\n"
|
||||
"EAX:%08lx EBX:%08lx ECX:%08lx EDX:%08lx\n"
|
||||
@ -860,6 +971,145 @@ void logRegisters(ucontext_t *uc) {
|
||||
);
|
||||
logStackContent((void**)uc->uc_mcontext.gregs[15]);
|
||||
#endif
|
||||
#elif defined(__FreeBSD__)
|
||||
#if defined(__x86_64__)
|
||||
serverLog(LL_WARNING,
|
||||
"\n"
|
||||
"RAX:%016lx RBX:%016lx\nRCX:%016lx RDX:%016lx\n"
|
||||
"RDI:%016lx RSI:%016lx\nRBP:%016lx RSP:%016lx\n"
|
||||
"R8 :%016lx R9 :%016lx\nR10:%016lx R11:%016lx\n"
|
||||
"R12:%016lx R13:%016lx\nR14:%016lx R15:%016lx\n"
|
||||
"RIP:%016lx EFL:%016lx\nCSGSFS:%016lx",
|
||||
(unsigned long) uc->uc_mcontext.mc_rax,
|
||||
(unsigned long) uc->uc_mcontext.mc_rbx,
|
||||
(unsigned long) uc->uc_mcontext.mc_rcx,
|
||||
(unsigned long) uc->uc_mcontext.mc_rdx,
|
||||
(unsigned long) uc->uc_mcontext.mc_rdi,
|
||||
(unsigned long) uc->uc_mcontext.mc_rsi,
|
||||
(unsigned long) uc->uc_mcontext.mc_rbp,
|
||||
(unsigned long) uc->uc_mcontext.mc_rsp,
|
||||
(unsigned long) uc->uc_mcontext.mc_r8,
|
||||
(unsigned long) uc->uc_mcontext.mc_r9,
|
||||
(unsigned long) uc->uc_mcontext.mc_r10,
|
||||
(unsigned long) uc->uc_mcontext.mc_r11,
|
||||
(unsigned long) uc->uc_mcontext.mc_r12,
|
||||
(unsigned long) uc->uc_mcontext.mc_r13,
|
||||
(unsigned long) uc->uc_mcontext.mc_r14,
|
||||
(unsigned long) uc->uc_mcontext.mc_r15,
|
||||
(unsigned long) uc->uc_mcontext.mc_rip,
|
||||
(unsigned long) uc->uc_mcontext.mc_rflags,
|
||||
(unsigned long) uc->uc_mcontext.mc_cs
|
||||
);
|
||||
logStackContent((void**)uc->uc_mcontext.mc_rsp);
|
||||
#elif defined(__i386__)
|
||||
serverLog(LL_WARNING,
|
||||
"\n"
|
||||
"EAX:%08lx EBX:%08lx ECX:%08lx EDX:%08lx\n"
|
||||
"EDI:%08lx ESI:%08lx EBP:%08lx ESP:%08lx\n"
|
||||
"SS :%08lx EFL:%08lx EIP:%08lx CS:%08lx\n"
|
||||
"DS :%08lx ES :%08lx FS :%08lx GS:%08lx",
|
||||
(unsigned long) uc->uc_mcontext.mc_eax,
|
||||
(unsigned long) uc->uc_mcontext.mc_ebx,
|
||||
(unsigned long) uc->uc_mcontext.mc_ebx,
|
||||
(unsigned long) uc->uc_mcontext.mc_edx,
|
||||
(unsigned long) uc->uc_mcontext.mc_edi,
|
||||
(unsigned long) uc->uc_mcontext.mc_esi,
|
||||
(unsigned long) uc->uc_mcontext.mc_ebp,
|
||||
(unsigned long) uc->uc_mcontext.mc_esp,
|
||||
(unsigned long) uc->uc_mcontext.mc_ss,
|
||||
(unsigned long) uc->uc_mcontext.mc_eflags,
|
||||
(unsigned long) uc->uc_mcontext.mc_eip,
|
||||
(unsigned long) uc->uc_mcontext.mc_cs,
|
||||
(unsigned long) uc->uc_mcontext.mc_es,
|
||||
(unsigned long) uc->uc_mcontext.mc_fs,
|
||||
(unsigned long) uc->uc_mcontext.mc_gs
|
||||
);
|
||||
logStackContent((void**)uc->uc_mcontext.mc_esp);
|
||||
#endif
|
||||
#elif defined(__OpenBSD__)
|
||||
#if defined(__x86_64__)
|
||||
serverLog(LL_WARNING,
|
||||
"\n"
|
||||
"RAX:%016lx RBX:%016lx\nRCX:%016lx RDX:%016lx\n"
|
||||
"RDI:%016lx RSI:%016lx\nRBP:%016lx RSP:%016lx\n"
|
||||
"R8 :%016lx R9 :%016lx\nR10:%016lx R11:%016lx\n"
|
||||
"R12:%016lx R13:%016lx\nR14:%016lx R15:%016lx\n"
|
||||
"RIP:%016lx EFL:%016lx\nCSGSFS:%016lx",
|
||||
(unsigned long) uc->sc_rax,
|
||||
(unsigned long) uc->sc_rbx,
|
||||
(unsigned long) uc->sc_rcx,
|
||||
(unsigned long) uc->sc_rdx,
|
||||
(unsigned long) uc->sc_rdi,
|
||||
(unsigned long) uc->sc_rsi,
|
||||
(unsigned long) uc->sc_rbp,
|
||||
(unsigned long) uc->sc_rsp,
|
||||
(unsigned long) uc->sc_r8,
|
||||
(unsigned long) uc->sc_r9,
|
||||
(unsigned long) uc->sc_r10,
|
||||
(unsigned long) uc->sc_r11,
|
||||
(unsigned long) uc->sc_r12,
|
||||
(unsigned long) uc->sc_r13,
|
||||
(unsigned long) uc->sc_r14,
|
||||
(unsigned long) uc->sc_r15,
|
||||
(unsigned long) uc->sc_rip,
|
||||
(unsigned long) uc->sc_rflags,
|
||||
(unsigned long) uc->sc_cs
|
||||
);
|
||||
logStackContent((void**)uc->sc_rsp);
|
||||
#elif defined(__i386__)
|
||||
serverLog(LL_WARNING,
|
||||
"\n"
|
||||
"EAX:%08lx EBX:%08lx ECX:%08lx EDX:%08lx\n"
|
||||
"EDI:%08lx ESI:%08lx EBP:%08lx ESP:%08lx\n"
|
||||
"SS :%08lx EFL:%08lx EIP:%08lx CS:%08lx\n"
|
||||
"DS :%08lx ES :%08lx FS :%08lx GS:%08lx",
|
||||
(unsigned long) uc->sc_eax,
|
||||
(unsigned long) uc->sc_ebx,
|
||||
(unsigned long) uc->sc_ebx,
|
||||
(unsigned long) uc->sc_edx,
|
||||
(unsigned long) uc->sc_edi,
|
||||
(unsigned long) uc->sc_esi,
|
||||
(unsigned long) uc->sc_ebp,
|
||||
(unsigned long) uc->sc_esp,
|
||||
(unsigned long) uc->sc_ss,
|
||||
(unsigned long) uc->sc_eflags,
|
||||
(unsigned long) uc->sc_eip,
|
||||
(unsigned long) uc->sc_cs,
|
||||
(unsigned long) uc->sc_es,
|
||||
(unsigned long) uc->sc_fs,
|
||||
(unsigned long) uc->sc_gs
|
||||
);
|
||||
logStackContent((void**)uc->sc_esp);
|
||||
#endif
|
||||
#elif defined(__DragonFly__)
|
||||
serverLog(LL_WARNING,
|
||||
"\n"
|
||||
"RAX:%016lx RBX:%016lx\nRCX:%016lx RDX:%016lx\n"
|
||||
"RDI:%016lx RSI:%016lx\nRBP:%016lx RSP:%016lx\n"
|
||||
"R8 :%016lx R9 :%016lx\nR10:%016lx R11:%016lx\n"
|
||||
"R12:%016lx R13:%016lx\nR14:%016lx R15:%016lx\n"
|
||||
"RIP:%016lx EFL:%016lx\nCSGSFS:%016lx",
|
||||
(unsigned long) uc->uc_mcontext.mc_rax,
|
||||
(unsigned long) uc->uc_mcontext.mc_rbx,
|
||||
(unsigned long) uc->uc_mcontext.mc_rcx,
|
||||
(unsigned long) uc->uc_mcontext.mc_rdx,
|
||||
(unsigned long) uc->uc_mcontext.mc_rdi,
|
||||
(unsigned long) uc->uc_mcontext.mc_rsi,
|
||||
(unsigned long) uc->uc_mcontext.mc_rbp,
|
||||
(unsigned long) uc->uc_mcontext.mc_rsp,
|
||||
(unsigned long) uc->uc_mcontext.mc_r8,
|
||||
(unsigned long) uc->uc_mcontext.mc_r9,
|
||||
(unsigned long) uc->uc_mcontext.mc_r10,
|
||||
(unsigned long) uc->uc_mcontext.mc_r11,
|
||||
(unsigned long) uc->uc_mcontext.mc_r12,
|
||||
(unsigned long) uc->uc_mcontext.mc_r13,
|
||||
(unsigned long) uc->uc_mcontext.mc_r14,
|
||||
(unsigned long) uc->uc_mcontext.mc_r15,
|
||||
(unsigned long) uc->uc_mcontext.mc_rip,
|
||||
(unsigned long) uc->uc_mcontext.mc_rflags,
|
||||
(unsigned long) uc->uc_mcontext.mc_cs
|
||||
);
|
||||
logStackContent((void**)uc->uc_mcontext.mc_rsp);
|
||||
#else
|
||||
serverLog(LL_WARNING,
|
||||
" Dumping of registers not supported for this OS/arch");
|
||||
@ -1179,6 +1429,8 @@ void serverLogHexDump(int level, char *descr, void *value, size_t len) {
|
||||
void watchdogSignalHandler(int sig, siginfo_t *info, void *secret) {
|
||||
#ifdef HAVE_BACKTRACE
|
||||
ucontext_t *uc = (ucontext_t*) secret;
|
||||
#else
|
||||
(void)secret;
|
||||
#endif
|
||||
UNUSED(info);
|
||||
UNUSED(sig);
|
||||
|
24
src/dict.c
@ -739,6 +739,30 @@ unsigned int dictGetSomeKeys(dict *d, dictEntry **des, unsigned int count) {
return stored;
}

/* This is like dictGetRandomKey() from the POV of the API, but will do more
* work to ensure a better distribution of the returned element.
*
* This function improves the distribution because the dictGetRandomKey()
* problem is that it selects a random bucket, then it selects a random
* element from the chain in the bucket. However elements being in different
* chain lengths will have different probabilities of being reported. With
* this function instead what we do is to consider a "linear" range of the table
* that may be constituted of N buckets with chains of different lengths
* appearing one after the other. Then we report a random element in the range.
* In this way we smooth away the problem of different chain lenghts. */
#define GETFAIR_NUM_ENTRIES 15
dictEntry *dictGetFairRandomKey(dict *d) {
dictEntry *entries[GETFAIR_NUM_ENTRIES];
unsigned int count = dictGetSomeKeys(d,entries,GETFAIR_NUM_ENTRIES);
/* Note that dictGetSomeKeys() may return zero elements in an unlucky
* run() even if there are actually elements inside the hash table. So
* when we get zero, we call the true dictGetRandomKey() that will always
* yeld the element if the hash table has at least one. */
if (count == 0) return dictGetRandomKey(d);
unsigned int idx = rand() % count;
return entries[idx];
}
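Note: the bias dictGetFairRandomKey() works around is easy to see with two buckets, one holding 1 element and one holding 9: "pick a random bucket, then a random element in its chain" returns the lone element about half the time instead of one time in ten. A toy simulation of the naive scheme (standalone, not Redis code):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int chain_len[2] = {1, 9};          /* bucket 0 has 1 element, bucket 1 has 9 */
        long hits_lone = 0, trials = 1000000;
        srand(42);
        for (long t = 0; t < trials; t++) {
            int bucket = rand() % 2;                 /* uniform over buckets */
            int elem = rand() % chain_len[bucket];   /* uniform inside the chain */
            if (bucket == 0 && elem == 0) hits_lone++;
        }
        /* Prints roughly 0.50 instead of the fair 0.10. Sampling a window of
         * ~15 entries with dictGetSomeKeys() and picking uniformly from that
         * window, as above, gets much closer to the fair value. */
        printf("lone element frequency: %.2f\n", (double)hits_lone / trials);
        return 0;
    }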

/* Function to reverse bits. Algorithm from:
* http://graphics.stanford.edu/~seander/bithacks.html#ReverseParallel */
static unsigned long rev(unsigned long v) {

@ -166,6 +166,7 @@ dictIterator *dictGetSafeIterator(dict *d);
dictEntry *dictNext(dictIterator *iter);
void dictReleaseIterator(dictIterator *iter);
dictEntry *dictGetRandomKey(dict *d);
dictEntry *dictGetFairRandomKey(dict *d);
unsigned int dictGetSomeKeys(dict *d, dictEntry **des, unsigned int count);
void dictGetStats(char *buf, size_t bufsize, dict *d);
uint64_t dictGenHashFunction(const void *key, int len);

17
src/evict.c
@ -364,7 +364,7 @@ size_t freeMemoryGetNotCountedMemory(void) {
|
||||
}
|
||||
}
|
||||
if (server.aof_state != AOF_OFF) {
|
||||
overhead += sdslen(server.aof_buf)+aofRewriteBufferSize();
|
||||
overhead += sdsalloc(server.aof_buf)+aofRewriteBufferSize();
|
||||
}
|
||||
return overhead;
|
||||
}
|
||||
@ -444,6 +444,10 @@ int getMaxmemoryState(size_t *total, size_t *logical, size_t *tofree, float *lev
|
||||
* Otehrwise if we are over the memory limit, but not enough memory
|
||||
* was freed to return back under the limit, the function returns C_ERR. */
|
||||
int freeMemoryIfNeeded(void) {
|
||||
/* By default replicas should ignore maxmemory
|
||||
* and just be masters exact copies. */
|
||||
if (server.masterhost && server.repl_slave_ignore_maxmemory) return C_OK;
|
||||
|
||||
size_t mem_reported, mem_tofree, mem_freed;
|
||||
mstime_t latency, eviction_latency;
|
||||
long long delta;
|
||||
@ -618,3 +622,14 @@ cant_free:
    return C_ERR;
}

/* This is a wrapper for freeMemoryIfNeeded() that only really calls the
 * function if right now there are the conditions to do so safely:
 *
 * - There must be no script in timeout condition.
 * - We must not be loading data right now.
 */
int freeMemoryIfNeededAndSafe(void) {
    if (server.lua_timedout || server.loading) return C_OK;
    return freeMemoryIfNeeded();
}
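As a side note, the guard in freeMemoryIfNeededAndSafe() is the classic "only run the expensive path when it is safe" pattern. A standalone sketch of the same shape (stubbed flags standing in for server.lua_timedout and server.loading; not Redis code):

#include <stdio.h>

#define C_OK  0
#define C_ERR -1

static int lua_timedout = 0;  /* stand-in for server.lua_timedout */
static int loading = 0;       /* stand-in for server.loading */

static int freeMemoryIfNeeded(void) {
    printf("evicting keys...\n");
    return C_OK;
}

/* Same shape as the wrapper above: skip eviction entirely while a script
 * is blocked in timeout or while data is still being loaded. */
static int freeMemoryIfNeededAndSafe(void) {
    if (lua_timedout || loading) return C_OK;
    return freeMemoryIfNeeded();
}

int main(void) {
    loading = 1;
    freeMemoryIfNeededAndSafe();   /* no output: skipped while loading */
    loading = 0;
    freeMemoryIfNeededAndSafe();   /* prints "evicting keys..." */
    return 0;
}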
|
||||
|
src/geo.c (32 changed lines)
@ -466,7 +466,7 @@ void georadiusGeneric(client *c, int flags) {
|
||||
|
||||
/* Look up the requested zset */
|
||||
robj *zobj = NULL;
|
||||
if ((zobj = lookupKeyReadOrReply(c, key, shared.emptymultibulk)) == NULL ||
|
||||
if ((zobj = lookupKeyReadOrReply(c, key, shared.null[c->resp])) == NULL ||
|
||||
checkType(c, zobj, OBJ_ZSET)) {
|
||||
return;
|
||||
}
|
||||
@ -566,7 +566,7 @@ void georadiusGeneric(client *c, int flags) {
|
||||
|
||||
/* If no matching results, the user gets an empty reply. */
|
||||
if (ga->used == 0 && storekey == NULL) {
|
||||
addReply(c, shared.emptymultibulk);
|
||||
addReplyNull(c);
|
||||
geoArrayFree(ga);
|
||||
return;
|
||||
}
|
||||
@ -597,11 +597,11 @@ void georadiusGeneric(client *c, int flags) {
|
||||
if (withhash)
|
||||
option_length++;
|
||||
|
||||
/* The multibulk len we send is exactly result_length. The result is
|
||||
/* The array len we send is exactly result_length. The result is
|
||||
* either all strings of just zset members *or* a nested multi-bulk
|
||||
* reply containing the zset member string _and_ all the additional
|
||||
* options the user enabled for this request. */
|
||||
addReplyMultiBulkLen(c, returned_items);
|
||||
addReplyArrayLen(c, returned_items);
|
||||
|
||||
/* Finally send results back to the caller */
|
||||
int i;
|
||||
@ -613,7 +613,7 @@ void georadiusGeneric(client *c, int flags) {
|
||||
* as a nested multi-bulk. Add 1 to account for result value
|
||||
* itself. */
|
||||
if (option_length)
|
||||
addReplyMultiBulkLen(c, option_length + 1);
|
||||
addReplyArrayLen(c, option_length + 1);
|
||||
|
||||
addReplyBulkSds(c,gp->member);
|
||||
gp->member = NULL;
|
||||
@ -625,7 +625,7 @@ void georadiusGeneric(client *c, int flags) {
|
||||
addReplyLongLong(c, gp->score);
|
||||
|
||||
if (withcoords) {
|
||||
addReplyMultiBulkLen(c, 2);
|
||||
addReplyArrayLen(c, 2);
|
||||
addReplyHumanLongDouble(c, gp->longitude);
|
||||
addReplyHumanLongDouble(c, gp->latitude);
|
||||
}
|
||||
@ -706,11 +706,11 @@ void geohashCommand(client *c) {
|
||||
|
||||
/* Geohash elements one after the other, using a null bulk reply for
|
||||
* missing elements. */
|
||||
addReplyMultiBulkLen(c,c->argc-2);
|
||||
addReplyArrayLen(c,c->argc-2);
|
||||
for (j = 2; j < c->argc; j++) {
|
||||
double score;
|
||||
if (!zobj || zsetScore(zobj, c->argv[j]->ptr, &score) == C_ERR) {
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
} else {
|
||||
/* The internal format we use for geocoding is a bit different
|
||||
* than the standard, since we use as initial latitude range
|
||||
@ -721,7 +721,7 @@ void geohashCommand(client *c) {
|
||||
/* Decode... */
|
||||
double xy[2];
|
||||
if (!decodeGeohash(score,xy)) {
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
continue;
|
||||
}
|
||||
|
||||
@ -759,19 +759,19 @@ void geoposCommand(client *c) {
|
||||
|
||||
/* Report elements one after the other, using a null bulk reply for
|
||||
* missing elements. */
|
||||
addReplyMultiBulkLen(c,c->argc-2);
|
||||
addReplyArrayLen(c,c->argc-2);
|
||||
for (j = 2; j < c->argc; j++) {
|
||||
double score;
|
||||
if (!zobj || zsetScore(zobj, c->argv[j]->ptr, &score) == C_ERR) {
|
||||
addReply(c,shared.nullmultibulk);
|
||||
addReplyNullArray(c);
|
||||
} else {
|
||||
/* Decode... */
|
||||
double xy[2];
|
||||
if (!decodeGeohash(score,xy)) {
|
||||
addReply(c,shared.nullmultibulk);
|
||||
addReplyNullArray(c);
|
||||
continue;
|
||||
}
|
||||
addReplyMultiBulkLen(c,2);
|
||||
addReplyArrayLen(c,2);
|
||||
addReplyHumanLongDouble(c,xy[0]);
|
||||
addReplyHumanLongDouble(c,xy[1]);
|
||||
}
|
||||
@ -797,7 +797,7 @@ void geodistCommand(client *c) {
|
||||
|
||||
/* Look up the requested zset */
|
||||
robj *zobj = NULL;
|
||||
if ((zobj = lookupKeyReadOrReply(c, c->argv[1], shared.nullbulk))
|
||||
if ((zobj = lookupKeyReadOrReply(c, c->argv[1], shared.null[c->resp]))
|
||||
== NULL || checkType(c, zobj, OBJ_ZSET)) return;
|
||||
|
||||
/* Get the scores. We need both otherwise NULL is returned. */
|
||||
@ -805,13 +805,13 @@ void geodistCommand(client *c) {
|
||||
if (zsetScore(zobj, c->argv[2]->ptr, &score1) == C_ERR ||
|
||||
zsetScore(zobj, c->argv[3]->ptr, &score2) == C_ERR)
|
||||
{
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
return;
|
||||
}
|
||||
|
||||
/* Decode & compute the distance. */
|
||||
if (!decodeGeohash(score1,xyxy) || !decodeGeohash(score2,xyxy+2))
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
else
|
||||
addReplyDoubleDistance(c,
|
||||
geohashGetDistance(xyxy[0],xyxy[1],xyxy[2],xyxy[3]) / to_meter);
|
||||
|
@ -127,8 +127,8 @@ int geohashEncode(const GeoHashRange *long_range, const GeoHashRange *lat_range,
|
||||
|
||||
/* Return an error when trying to index outside the supported
|
||||
* constraints. */
|
||||
if (longitude > 180 || longitude < -180 ||
|
||||
latitude > 85.05112878 || latitude < -85.05112878) return 0;
|
||||
if (longitude > GEO_LONG_MAX || longitude < GEO_LONG_MIN ||
|
||||
latitude > GEO_LAT_MAX || latitude < GEO_LAT_MIN) return 0;
|
||||
|
||||
hash->bits = 0;
|
||||
hash->step = step;
|
||||
|
src/gopher.c (new file, 97 lines)
@ -0,0 +1,97 @@
|
||||
/*
|
||||
* Copyright (c) 2019, Salvatore Sanfilippo <antirez at gmail dot com>
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright notice,
|
||||
* this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above copyright
|
||||
* notice, this list of conditions and the following disclaimer in the
|
||||
* documentation and/or other materials provided with the distribution.
|
||||
* * Neither the name of Redis nor the names of its contributors may be used
|
||||
* to endorse or promote products derived from this software without
|
||||
* specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
|
||||
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
||||
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
|
||||
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
|
||||
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
|
||||
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
|
||||
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
|
||||
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
|
||||
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
|
||||
* POSSIBILITY OF SUCH DAMAGE.
|
||||
*/
|
||||
|
||||
#include "server.h"
|
||||
|
||||
/* Emit an item in Gopher directory listing format:
|
||||
* <type><descr><TAB><selector><TAB><hostname><TAB><port>
|
||||
* If descr or selector are NULL, then the "(NULL)" string is used instead. */
|
||||
void addReplyGopherItem(client *c, const char *type, const char *descr,
|
||||
const char *selector, const char *hostname, int port)
|
||||
{
|
||||
sds item = sdscatfmt(sdsempty(),"%s%s\t%s\t%s\t%i\r\n",
|
||||
type, descr,
|
||||
selector ? selector : "(NULL)",
|
||||
hostname ? hostname : "(NULL)",
|
||||
port);
|
||||
addReplyProto(c,item,sdslen(item));
|
||||
sdsfree(item);
|
||||
}
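For reference, the tab-separated menu line built by addReplyGopherItem() has this shape; the snippet below is a standalone illustration with a made-up hostname and selector, not part of the commit:

#include <stdio.h>

int main(void) {
    /* <type><descr><TAB><selector><TAB><hostname><TAB><port><CR><LF>
     * "i" is an informational item, "0" is a plain text file. */
    printf("%s%s\t%s\t%s\t%d\r\n", "i", "Redis Gopher server",
           "(NULL)", "(NULL)", 0);
    printf("%s%s\t%s\t%s\t%d\r\n", "0", "About this server",
           "/about", "example.org", 70);
    return 0;
}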
|
||||
|
||||
/* This is called by processInputBuffer() when an inline request is processed
|
||||
* with Gopher mode enabled, and the request happens to have zero or just one
|
||||
* argument. In such case we get the relevant key and reply using the Gopher
|
||||
* protocol. */
|
||||
void processGopherRequest(client *c) {
|
||||
robj *keyname = c->argc == 0 ? createStringObject("/",1) : c->argv[0];
|
||||
robj *o = lookupKeyRead(c->db,keyname);
|
||||
|
||||
/* If there is no such key, return with a Gopher error. */
|
||||
if (o == NULL || o->type != OBJ_STRING) {
|
||||
char *errstr;
|
||||
if (o == NULL)
|
||||
errstr = "Error: no content at the specified key";
|
||||
else
|
||||
errstr = "Error: selected key type is invalid "
|
||||
"for Gopher output";
|
||||
addReplyGopherItem(c,"i",errstr,NULL,NULL,0);
|
||||
addReplyGopherItem(c,"i","Redis Gopher server",NULL,NULL,0);
|
||||
} else {
|
||||
addReply(c,o);
|
||||
}
|
||||
|
||||
/* Cleanup, also make sure to emit the final ".CRLF" line. Note that
|
||||
* the connection will be closed immediately after this because the client
|
||||
* will be flagged with CLIENT_CLOSE_AFTER_REPLY, in accordance with the
|
||||
* Gopher protocol. */
|
||||
if (c->argc == 0) decrRefCount(keyname);
|
||||
|
||||
/* Note that in theory we should terminate the Gopher request with
|
||||
* ".<CR><LF>" (called Lastline in the RFC) like that:
|
||||
*
|
||||
* addReplyProto(c,".\r\n",3);
|
||||
*
|
||||
* However after examining the current clients landscape, it's probably
|
||||
* going to do more harm than good for several reasons:
|
||||
*
|
||||
* 1. Clients should not have any issue with missing .<CR><LF> as for
|
||||
* specification, and in the real world indeed certain servers
|
||||
* implementations never used to send the terminator.
|
||||
*
|
||||
* 2. Redis does not know if it's serving a text file or a binary file:
|
||||
* at the same time clients will not remove the ".<CR><LF>" bytes at
|
||||
* the end when downloading a binary file from the server, so adding
|
||||
* the "Lastline" terminator without knowing the content is just
|
||||
* dangerous.
|
||||
*
|
||||
* 3. The utility gopher2redis.rb that we provide for Redis, and any
|
||||
* other similar tool you may use as Gopher authoring system for
|
||||
* Redis, can just add the "Lastline" when needed.
|
||||
*/
|
||||
}
|
src/help.h (64 changed lines)
@ -98,6 +98,11 @@ struct commandHelp {
|
||||
"Get the current connection name",
|
||||
9,
|
||||
"2.6.9" },
|
||||
{ "CLIENT ID",
|
||||
"-",
|
||||
"Returns the client ID for the current connection",
|
||||
9,
|
||||
"5.0.0" },
|
||||
{ "CLIENT KILL",
|
||||
"[ip:port] [ID client-id] [TYPE normal|master|slave|pubsub] [ADDR ip:port] [SKIPME yes/no]",
|
||||
"Kill the connection of a client",
|
||||
@ -123,6 +128,11 @@ struct commandHelp {
|
||||
"Set the current connection name",
|
||||
9,
|
||||
"2.6.9" },
|
||||
{ "CLIENT UNBLOCK",
|
||||
"client-id [TIMEOUT|ERROR]",
|
||||
"Unblock a client blocked in a blocking command from a different connection",
|
||||
9,
|
||||
"5.0.0" },
|
||||
{ "CLUSTER ADDSLOTS",
|
||||
"slot [slot ...]",
|
||||
"Assign new hash slots to receiving node",
|
||||
@ -145,7 +155,7 @@ struct commandHelp {
|
||||
"3.0.0" },
|
||||
{ "CLUSTER FAILOVER",
|
||||
"[FORCE|TAKEOVER]",
|
||||
"Forces a slave to perform a manual failover of its master.",
|
||||
"Forces a replica to perform a manual failover of its master.",
|
||||
12,
|
||||
"3.0.0" },
|
||||
{ "CLUSTER FORGET",
|
||||
@ -178,9 +188,14 @@ struct commandHelp {
|
||||
"Get Cluster config for the node",
|
||||
12,
|
||||
"3.0.0" },
|
||||
{ "CLUSTER REPLICAS",
|
||||
"node-id",
|
||||
"List replica nodes of the specified master node",
|
||||
12,
|
||||
"5.0.0" },
|
||||
{ "CLUSTER REPLICATE",
|
||||
"node-id",
|
||||
"Reconfigure a node as a slave of the specified master node",
|
||||
"Reconfigure a node as a replica of the specified master node",
|
||||
12,
|
||||
"3.0.0" },
|
||||
{ "CLUSTER RESET",
|
||||
@ -205,7 +220,7 @@ struct commandHelp {
|
||||
"3.0.0" },
|
||||
{ "CLUSTER SLAVES",
|
||||
"node-id",
|
||||
"List slave nodes of the specified master node",
|
||||
"List replica nodes of the specified master node",
|
||||
12,
|
||||
"3.0.0" },
|
||||
{ "CLUSTER SLOTS",
|
||||
@ -690,12 +705,12 @@ struct commandHelp {
|
||||
"1.0.0" },
|
||||
{ "READONLY",
|
||||
"-",
|
||||
"Enables read queries for a connection to a cluster slave node",
|
||||
"Enables read queries for a connection to a cluster replica node",
|
||||
12,
|
||||
"3.0.0" },
|
||||
{ "READWRITE",
|
||||
"-",
|
||||
"Disables read queries for a connection to a cluster slave node",
|
||||
"Disables read queries for a connection to a cluster replica node",
|
||||
12,
|
||||
"3.0.0" },
|
||||
{ "RENAME",
|
||||
@ -708,6 +723,11 @@ struct commandHelp {
|
||||
"Rename a key, only if the new key does not exist",
|
||||
0,
|
||||
"1.0.0" },
|
||||
{ "REPLICAOF",
|
||||
"host port",
|
||||
"Make the server a replica of another instance, or promote it as master.",
|
||||
9,
|
||||
"5.0.0" },
|
||||
{ "RESTORE",
|
||||
"key ttl serialized-value [REPLACE]",
|
||||
"Create a key using the provided serialized value, previously obtained using DUMP.",
|
||||
@ -845,7 +865,7 @@ struct commandHelp {
|
||||
"1.0.0" },
|
||||
{ "SLAVEOF",
|
||||
"host port",
|
||||
"Make the server a slave of another instance, or promote it as master",
|
||||
"Make the server a replica of another instance, or promote it as master. Deprecated starting with Redis 5. Use REPLICAOF instead.",
|
||||
9,
|
||||
"1.0.0" },
|
||||
{ "SLOWLOG",
|
||||
@ -954,7 +974,7 @@ struct commandHelp {
|
||||
7,
|
||||
"2.2.0" },
|
||||
{ "WAIT",
|
||||
"numslaves timeout",
|
||||
"numreplicas timeout",
|
||||
"Wait for the synchronous replication of all the write commands sent in the context of the current connection",
|
||||
0,
|
||||
"3.0.0" },
|
||||
@ -963,11 +983,36 @@ struct commandHelp {
|
||||
"Watch the given keys to determine execution of the MULTI/EXEC block",
|
||||
7,
|
||||
"2.2.0" },
|
||||
{ "XACK",
|
||||
"key group ID [ID ...]",
|
||||
"Marks a pending message as correctly processed, effectively removing it from the pending entries list of the consumer group. Return value of the command is the number of messages successfully acknowledged, that is, the IDs we were actually able to resolve in the PEL.",
|
||||
14,
|
||||
"5.0.0" },
|
||||
{ "XADD",
|
||||
"key ID field string [field string ...]",
|
||||
"Appends a new entry to a stream",
|
||||
14,
|
||||
"5.0.0" },
|
||||
{ "XCLAIM",
|
||||
"key group consumer min-idle-time ID [ID ...] [IDLE ms] [TIME ms-unix-time] [RETRYCOUNT count] [force] [justid]",
|
||||
"Changes (or acquires) ownership of a message in a consumer group, as if the message was delivered to the specified consumer.",
|
||||
14,
|
||||
"5.0.0" },
|
||||
{ "XDEL",
|
||||
"key ID [ID ...]",
|
||||
"Removes the specified entries from the stream. Returns the number of items actually deleted, that may be different from the number of IDs passed in case certain IDs do not exist.",
|
||||
14,
|
||||
"5.0.0" },
|
||||
{ "XGROUP",
|
||||
"[CREATE key groupname id-or-$] [SETID key id-or-$] [DESTROY key groupname] [DELCONSUMER key groupname consumername]",
|
||||
"Create, destroy, and manage consumer groups.",
|
||||
14,
|
||||
"5.0.0" },
|
||||
{ "XINFO",
|
||||
"[CONSUMERS key groupname] [GROUPS key] [STREAM key] [HELP]",
|
||||
"Get information on streams and consumer groups",
|
||||
14,
|
||||
"5.0.0" },
|
||||
{ "XLEN",
|
||||
"key",
|
||||
"Return the number of entires in a stream",
|
||||
@ -998,6 +1043,11 @@ struct commandHelp {
|
||||
"Return a range of elements in a stream, with IDs matching the specified IDs interval, in reverse order (from greater to smaller IDs) compared to XRANGE",
|
||||
14,
|
||||
"5.0.0" },
|
||||
{ "XTRIM",
|
||||
"key MAXLEN [~] count",
|
||||
"Trims the stream to (approximately if '~' is passed) a certain size",
|
||||
14,
|
||||
"5.0.0" },
|
||||
{ "ZADD",
|
||||
"key [NX|XX] [CH] [INCR] score member [score member ...]",
|
||||
"Add one or more members to a sorted set, or update its score if it already exists",
|
||||
|
@ -1512,7 +1512,7 @@ void pfdebugCommand(client *c) {
|
||||
}
|
||||
|
||||
hdr = o->ptr;
|
||||
addReplyMultiBulkLen(c,HLL_REGISTERS);
|
||||
addReplyArrayLen(c,HLL_REGISTERS);
|
||||
for (j = 0; j < HLL_REGISTERS; j++) {
|
||||
uint8_t val;
|
||||
|
||||
|
@ -123,7 +123,7 @@ static uint8_t intsetSearch(intset *is, int64_t value, uint32_t *pos) {
|
||||
} else {
|
||||
/* Check for the case where we know we cannot find the value,
|
||||
* but do know the insert position. */
|
||||
if (value > _intsetGet(is,intrev32ifbe(is->length)-1)) {
|
||||
if (value > _intsetGet(is,max)) {
|
||||
if (pos) *pos = intrev32ifbe(is->length);
|
||||
return 0;
|
||||
} else if (value < _intsetGet(is,0)) {
|
||||
|
@ -476,19 +476,19 @@ sds createLatencyReport(void) {
|
||||
/* latencyCommand() helper to produce a time-delay reply for all the samples
|
||||
* in memory for the specified time series. */
|
||||
void latencyCommandReplyWithSamples(client *c, struct latencyTimeSeries *ts) {
|
||||
void *replylen = addDeferredMultiBulkLength(c);
|
||||
void *replylen = addReplyDeferredLen(c);
|
||||
int samples = 0, j;
|
||||
|
||||
for (j = 0; j < LATENCY_TS_LEN; j++) {
|
||||
int i = (ts->idx + j) % LATENCY_TS_LEN;
|
||||
|
||||
if (ts->samples[i].time == 0) continue;
|
||||
addReplyMultiBulkLen(c,2);
|
||||
addReplyArrayLen(c,2);
|
||||
addReplyLongLong(c,ts->samples[i].time);
|
||||
addReplyLongLong(c,ts->samples[i].latency);
|
||||
samples++;
|
||||
}
|
||||
setDeferredMultiBulkLength(c,replylen,samples);
|
||||
setDeferredArrayLen(c,replylen,samples);
|
||||
}
|
||||
|
||||
/* latencyCommand() helper to produce the reply for the LATEST subcommand,
|
||||
@ -497,14 +497,14 @@ void latencyCommandReplyWithLatestEvents(client *c) {
|
||||
dictIterator *di;
|
||||
dictEntry *de;
|
||||
|
||||
addReplyMultiBulkLen(c,dictSize(server.latency_events));
|
||||
addReplyArrayLen(c,dictSize(server.latency_events));
|
||||
di = dictGetIterator(server.latency_events);
|
||||
while((de = dictNext(di)) != NULL) {
|
||||
char *event = dictGetKey(de);
|
||||
struct latencyTimeSeries *ts = dictGetVal(de);
|
||||
int last = (ts->idx + LATENCY_TS_LEN - 1) % LATENCY_TS_LEN;
|
||||
|
||||
addReplyMultiBulkLen(c,4);
|
||||
addReplyArrayLen(c,4);
|
||||
addReplyBulkCString(c,event);
|
||||
addReplyLongLong(c,ts->samples[last].time);
|
||||
addReplyLongLong(c,ts->samples[last].latency);
|
||||
@ -560,19 +560,30 @@ sds latencyCommandGenSparkeline(char *event, struct latencyTimeSeries *ts) {
|
||||
|
||||
/* LATENCY command implementations.
|
||||
*
|
||||
* LATENCY SAMPLES: return time-latency samples for the specified event.
|
||||
* LATENCY HISTORY: return time-latency samples for the specified event.
|
||||
* LATENCY LATEST: return the latest latency for all the events classes.
|
||||
* LATENCY DOCTOR: returns an human readable analysis of instance latency.
|
||||
* LATENCY DOCTOR: returns a human readable analysis of instance latency.
|
||||
* LATENCY GRAPH: provide an ASCII graph of the latency of the specified event.
|
||||
* LATENCY RESET: reset data of a specified event or all the data if no event provided.
|
||||
*/
|
||||
void latencyCommand(client *c) {
|
||||
const char *help[] = {
|
||||
"DOCTOR -- Returns a human readable latency analysis report.",
|
||||
"GRAPH <event> -- Returns an ASCII latency graph for the event class.",
|
||||
"HISTORY <event> -- Returns time-latency samples for the event class.",
|
||||
"LATEST -- Returns the latest latency samples for all events.",
|
||||
"RESET [event ...] -- Resets latency data of one or more event classes.",
|
||||
" (default: reset all data for all event classes)",
|
||||
"HELP -- Prints this help.",
|
||||
NULL
|
||||
};
|
||||
struct latencyTimeSeries *ts;
|
||||
|
||||
if (!strcasecmp(c->argv[1]->ptr,"history") && c->argc == 3) {
|
||||
/* LATENCY HISTORY <event> */
|
||||
ts = dictFetchValue(server.latency_events,c->argv[2]->ptr);
|
||||
if (ts == NULL) {
|
||||
addReplyMultiBulkLen(c,0);
|
||||
addReplyArrayLen(c,0);
|
||||
} else {
|
||||
latencyCommandReplyWithSamples(c,ts);
|
||||
}
|
||||
@ -610,8 +621,10 @@ void latencyCommand(client *c) {
|
||||
resets += latencyResetEvent(c->argv[j]->ptr);
|
||||
addReplyLongLong(c,resets);
|
||||
}
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"help") && c->argc >= 2) {
|
||||
addReplyHelp(c, help);
|
||||
} else {
|
||||
addReply(c,shared.syntaxerr);
|
||||
addReplySubcommandSyntaxError(c);
|
||||
}
|
||||
return;
|
||||
|
||||
|
@ -90,6 +90,17 @@ int dbAsyncDelete(redisDb *db, robj *key) {
|
||||
}
|
||||
}
|
||||
|
||||
/* Free an object, if the object is huge enough, free it in async way. */
|
||||
void freeObjAsync(robj *o) {
|
||||
size_t free_effort = lazyfreeGetFreeEffort(o);
|
||||
if (free_effort > LAZYFREE_THRESHOLD && o->refcount == 1) {
|
||||
atomicIncr(lazyfree_objects,1);
|
||||
bioCreateBackgroundJob(BIO_LAZY_FREE,o,NULL,NULL);
|
||||
} else {
|
||||
decrRefCount(o);
|
||||
}
|
||||
}
|
||||
|
||||
/* Empty a Redis DB asynchronously. What the function does actually is to
|
||||
* create a new empty set of hash tables and scheduling the old ones for
|
||||
* lazy freeing. */
|
||||
|
@ -707,6 +707,26 @@ unsigned char *lpInsert(unsigned char *lp, unsigned char *ele, uint32_t size, un
|
||||
}
|
||||
}
|
||||
lpSetTotalBytes(lp,new_listpack_bytes);
|
||||
|
||||
#if 0
|
||||
/* This code path is normally disabled: what it does is to force listpack
|
||||
* to return *always* a new pointer after performing some modification to
|
||||
* the listpack, even if the previous allocation was enough. This is useful
|
||||
* in order to spot bugs in code using listpacks: by doing so we can find
|
||||
* if the caller forgets to set the new pointer where the listpack reference
|
||||
* is stored, after an update. */
|
||||
unsigned char *oldlp = lp;
|
||||
lp = lp_malloc(new_listpack_bytes);
|
||||
memcpy(lp,oldlp,new_listpack_bytes);
|
||||
if (newp) {
|
||||
unsigned long offset = (*newp)-oldlp;
|
||||
*newp = lp + offset;
|
||||
}
|
||||
/* Make sure the old allocation contains garbage. */
|
||||
memset(oldlp,'A',new_listpack_bytes);
|
||||
lp_free(oldlp);
|
||||
#endif
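The disabled block above is a debugging aid: force every update to move the allocation, so that any caller still holding the old listpack pointer misbehaves immediately instead of silently. The same trick works for any buffer-returning API; a standalone sketch (the buf_append() helper is hypothetical, not from the commit):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Append a byte to a growable buffer. In "paranoid" mode the buffer is
 * always moved to a fresh allocation and the old one is poisoned, so code
 * that keeps using a stale pointer is caught quickly. */
static char *buf_append(char *buf, size_t *len, char byte, int paranoid) {
    buf = realloc(buf, *len + 1);
    buf[(*len)++] = byte;
    if (paranoid) {
        char *copy = malloc(*len);
        memcpy(copy, buf, *len);
        memset(buf, 'A', *len);   /* poison the old allocation */
        free(buf);
        buf = copy;
    }
    return buf;
}

int main(void) {
    size_t len = 0;
    char *buf = NULL;
    buf = buf_append(buf, &len, 'h', 1);
    buf = buf_append(buf, &len, 'i', 1);  /* caller must store the new pointer */
    printf("%.*s\n", (int)len, buf);
    free(buf);
    return 0;
}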
|
||||
|
||||
return lp;
|
||||
}
|
||||
|
||||
|
src/lolwut.c (new file, 56 lines)
@ -0,0 +1,56 @@
|
||||
/*
|
||||
* Copyright (c) 2018, Salvatore Sanfilippo <antirez at gmail dot com>
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright notice,
|
||||
* this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above copyright
|
||||
* notice, this list of conditions and the following disclaimer in the
|
||||
* documentation and/or other materials provided with the distribution.
|
||||
* * Neither the name of Redis nor the names of its contributors may be used
|
||||
* to endorse or promote products derived from this software without
|
||||
* specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
|
||||
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
||||
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
|
||||
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
|
||||
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
|
||||
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
|
||||
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
|
||||
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
|
||||
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
|
||||
* POSSIBILITY OF SUCH DAMAGE.
|
||||
*
|
||||
* ----------------------------------------------------------------------------
|
||||
*
|
||||
* This file implements the LOLWUT command. The command should do something
|
||||
* fun and interesting, and should be replaced by a new implementation at
|
||||
* each new version of Redis.
|
||||
*/
|
||||
|
||||
#include "server.h"
|
||||
|
||||
void lolwut5Command(client *c);
|
||||
|
||||
/* The default target for LOLWUT if no matching version was found.
|
||||
* This is what unstable versions of Redis will display. */
|
||||
void lolwutUnstableCommand(client *c) {
|
||||
sds rendered = sdsnew("Redis ver. ");
|
||||
rendered = sdscat(rendered,REDIS_VERSION);
|
||||
rendered = sdscatlen(rendered,"\n",1);
|
||||
addReplyBulkSds(c,rendered);
|
||||
}
|
||||
|
||||
void lolwutCommand(client *c) {
|
||||
char *v = REDIS_VERSION;
|
||||
if ((v[0] == '5' && v[1] == '.') ||
|
||||
(v[0] == '4' && v[1] == '.' && v[2] == '9'))
|
||||
lolwut5Command(c);
|
||||
else
|
||||
lolwutUnstableCommand(c);
|
||||
}
|
src/lolwut5.c (new file, 282 lines)
@ -0,0 +1,282 @@
|
||||
/*
|
||||
* Copyright (c) 2018, Salvatore Sanfilippo <antirez at gmail dot com>
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright notice,
|
||||
* this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above copyright
|
||||
* notice, this list of conditions and the following disclaimer in the
|
||||
* documentation and/or other materials provided with the distribution.
|
||||
* * Neither the name of Redis nor the names of its contributors may be used
|
||||
* to endorse or promote products derived from this software without
|
||||
* specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
|
||||
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
||||
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
|
||||
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
|
||||
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
|
||||
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
|
||||
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
|
||||
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
|
||||
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
|
||||
* POSSIBILITY OF SUCH DAMAGE.
|
||||
*
|
||||
* ----------------------------------------------------------------------------
|
||||
*
|
||||
* This file implements the LOLWUT command. The command should do something
|
||||
* fun and interesting, and should be replaced by a new implementation at
|
||||
* each new version of Redis.
|
||||
*/
|
||||
|
||||
#include "server.h"
|
||||
#include <math.h>
|
||||
|
||||
/* This structure represents our canvas. Drawing functions will take a pointer
|
||||
* to a canvas to write to it. Later the canvas can be rendered to a string
|
||||
* suitable to be printed on the screen, using unicode Braille characters. */
|
||||
typedef struct lwCanvas {
|
||||
int width;
|
||||
int height;
|
||||
char *pixels;
|
||||
} lwCanvas;
|
||||
|
||||
/* Translate a group of 8 pixels (2x4 vertical rectangle) to the corresponding
|
||||
* braille character. The byte should correspond to the pixels arranged as
|
||||
* follows, where 0 is the least significant bit, and 7 the most significant
|
||||
* bit:
|
||||
*
|
||||
* 0 3
|
||||
* 1 4
|
||||
* 2 5
|
||||
* 6 7
|
||||
*
|
||||
* The corresponding utf8 encoded character is set into the three bytes
|
||||
* pointed by 'output'.
|
||||
*/
|
||||
#include <stdio.h>
|
||||
void lwTranslatePixelsGroup(int byte, char *output) {
|
||||
int code = 0x2800 + byte;
|
||||
/* Convert to unicode. This is in the U0800-UFFFF range, so we need to
|
||||
* emit it like this in three bytes:
|
||||
* 1110xxxx 10xxxxxx 10xxxxxx. */
|
||||
output[0] = 0xE0 | (code >> 12); /* 1110-xxxx */
|
||||
output[1] = 0x80 | ((code >> 6) & 0x3F); /* 10-xxxxxx */
|
||||
output[2] = 0x80 | (code & 0x3F); /* 10-xxxxxx */
|
||||
}
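A standalone sketch (not part of the file) that applies the same 0x2800 + byte mapping and three-byte UTF-8 encoding to print a couple of Braille cells on a UTF-8 terminal:

#include <stdio.h>

/* Same mapping as lwTranslatePixelsGroup(): 0x2800 + byte, UTF-8 encoded
 * as 1110xxxx 10xxxxxx 10xxxxxx. */
static void braille(int byte, char *out) {
    int code = 0x2800 + byte;
    out[0] = 0xE0 | (code >> 12);
    out[1] = 0x80 | ((code >> 6) & 0x3F);
    out[2] = 0x80 | (code & 0x3F);
    out[3] = '\0';
}

int main(void) {
    char cell[4];
    braille(0xFF, cell);   /* all 8 dots set */
    printf("full cell: %s\n", cell);
    braille(0x01, cell);   /* only the top-left dot (bit 0) */
    printf("one dot:   %s\n", cell);
    return 0;
}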
|
||||
|
||||
/* Allocate and return a new canvas of the specified size. */
|
||||
lwCanvas *lwCreateCanvas(int width, int height) {
|
||||
lwCanvas *canvas = zmalloc(sizeof(*canvas));
|
||||
canvas->width = width;
|
||||
canvas->height = height;
|
||||
canvas->pixels = zmalloc(width*height);
|
||||
memset(canvas->pixels,0,width*height);
|
||||
return canvas;
|
||||
}
|
||||
|
||||
/* Free the canvas created by lwCreateCanvas(). */
|
||||
void lwFreeCanvas(lwCanvas *canvas) {
|
||||
zfree(canvas->pixels);
|
||||
zfree(canvas);
|
||||
}
|
||||
|
||||
/* Set a pixel to the specified color. Color is 0 or 1, where zero means no
|
||||
* dot will be displayed, and 1 means dot will be displayed.
|
||||
* Coordinates are arranged so that left-top corner is 0,0. You can write
|
||||
* out of the size of the canvas without issues. */
|
||||
void lwDrawPixel(lwCanvas *canvas, int x, int y, int color) {
|
||||
if (x < 0 || x >= canvas->width ||
|
||||
y < 0 || y >= canvas->height) return;
|
||||
canvas->pixels[x+y*canvas->width] = color;
|
||||
}
|
||||
|
||||
/* Return the value of the specified pixel on the canvas. */
|
||||
int lwGetPixel(lwCanvas *canvas, int x, int y) {
|
||||
if (x < 0 || x >= canvas->width ||
|
||||
y < 0 || y >= canvas->height) return 0;
|
||||
return canvas->pixels[x+y*canvas->width];
|
||||
}
|
||||
|
||||
/* Draw a line from x1,y1 to x2,y2 using the Bresenham algorithm. */
|
||||
void lwDrawLine(lwCanvas *canvas, int x1, int y1, int x2, int y2, int color) {
|
||||
int dx = abs(x2-x1);
|
||||
int dy = abs(y2-y1);
|
||||
int sx = (x1 < x2) ? 1 : -1;
|
||||
int sy = (y1 < y2) ? 1 : -1;
|
||||
int err = dx-dy, e2;
|
||||
|
||||
while(1) {
|
||||
lwDrawPixel(canvas,x1,y1,color);
|
||||
if (x1 == x2 && y1 == y2) break;
|
||||
e2 = err*2;
|
||||
if (e2 > -dy) {
|
||||
err -= dy;
|
||||
x1 += sx;
|
||||
}
|
||||
if (e2 < dx) {
|
||||
err += dx;
|
||||
y1 += sy;
|
||||
}
|
||||
}
|
||||
}
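To see the Bresenham loop in isolation, this standalone sketch (not part of the file) runs the same stepping logic on a 10x5 character grid and prints it:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define W 10
#define H 5
static char grid[H][W];

static void draw_line(int x1, int y1, int x2, int y2) {
    int dx = abs(x2-x1), dy = abs(y2-y1);
    int sx = (x1 < x2) ? 1 : -1;
    int sy = (y1 < y2) ? 1 : -1;
    int err = dx-dy, e2;
    while (1) {
        if (x1 >= 0 && x1 < W && y1 >= 0 && y1 < H) grid[y1][x1] = '*';
        if (x1 == x2 && y1 == y2) break;
        e2 = err*2;
        if (e2 > -dy) { err -= dy; x1 += sx; }
        if (e2 < dx)  { err += dx; y1 += sy; }
    }
}

int main(void) {
    memset(grid, '.', sizeof(grid));
    draw_line(0, 0, 9, 4);
    for (int y = 0; y < H; y++) printf("%.*s\n", W, grid[y]);
    return 0;
}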
|
||||
|
||||
/* Draw a square centered at the specified x,y coordinates, with the specified
|
||||
* rotation angle and size. In order to write a rotated square, we use the
|
||||
* trivial fact that the parametric equation:
|
||||
*
|
||||
* x = sin(k)
|
||||
* y = cos(k)
|
||||
*
|
||||
* Describes a circle for values going from 0 to 2*PI. So basically if we start
|
||||
* at 45 degrees, that is k = PI/4, with the first point, and then we find
|
||||
* the other three points incrementing K by PI/2 (90 degrees), we'll have the
|
||||
* points of the square. In order to rotate the square, we just start with
|
||||
* k = PI/4 + rotation_angle, and we are done.
|
||||
*
|
||||
* Of course the vanilla equations above will describe the square inside a
|
||||
* circle of radius 1, so in order to draw larger squares we'll have to
|
||||
* multiply the obtained coordinates, and then translate them. However this
|
||||
* is much simpler than implementing the abstract concept of 2D shape and then
|
||||
* performing the rotation/translation transformation, so for LOLWUT it's
|
||||
* a good approach. */
|
||||
void lwDrawSquare(lwCanvas *canvas, int x, int y, float size, float angle) {
|
||||
int px[4], py[4];
|
||||
|
||||
/* Adjust the desired size according to the fact that the square inscribed
|
||||
* into a circle of radius 1 has the side of length SQRT(2). This way
|
||||
* size becomes a simple multiplication factor we can use with our
|
||||
* coordinates to magnify them. */
|
||||
size /= 1.4142135623;
|
||||
size = round(size);
|
||||
|
||||
/* Compute the four points. */
|
||||
float k = M_PI/4 + angle;
|
||||
for (int j = 0; j < 4; j++) {
|
||||
px[j] = round(sin(k) * size + x);
|
||||
py[j] = round(cos(k) * size + y);
|
||||
k += M_PI/2;
|
||||
}
|
||||
|
||||
/* Draw the square. */
|
||||
for (int j = 0; j < 4; j++)
|
||||
lwDrawLine(canvas,px[j],py[j],px[(j+1)%4],py[(j+1)%4],1);
|
||||
}
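The parametric trick described in the comment above can be checked on its own. This standalone sketch (not from the file) prints the four corners of a square of side 10 centered at (20,20), first unrotated and then rotated by 0.3 radians (link with -lm):

#include <stdio.h>
#include <math.h>

static void corners(float cx, float cy, float size, float angle) {
    size /= 1.4142135623;            /* radius of the circumscribed circle */
    float k = M_PI/4 + angle;        /* start at 45 degrees plus rotation */
    for (int j = 0; j < 4; j++) {
        printf("(%.1f, %.1f) ", sin(k)*size + cx, cos(k)*size + cy);
        k += M_PI/2;                 /* next corner, 90 degrees away */
    }
    printf("\n");
}

int main(void) {
    corners(20, 20, 10, 0);
    corners(20, 20, 10, 0.3);
    return 0;
}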
|
||||
|
||||
/* Schotter, the output of LOLWUT of Redis 5, is a computer graphic art piece
|
||||
* generated by Georg Nees in the 60s. It explores the relationship between
|
||||
* caos and order.
|
||||
*
|
||||
* The function creates the canvas itself, depending on the columns available
|
||||
* in the output display and the number of squares per row and per column
|
||||
* requested by the caller. */
|
||||
lwCanvas *lwDrawSchotter(int console_cols, int squares_per_row, int squares_per_col) {
|
||||
/* Calculate the canvas size. */
|
||||
int canvas_width = console_cols*2;
|
||||
int padding = canvas_width > 4 ? 2 : 0;
|
||||
float square_side = (float)(canvas_width-padding*2) / squares_per_row;
|
||||
int canvas_height = square_side * squares_per_col + padding*2;
|
||||
lwCanvas *canvas = lwCreateCanvas(canvas_width, canvas_height);
|
||||
|
||||
for (int y = 0; y < squares_per_col; y++) {
|
||||
for (int x = 0; x < squares_per_row; x++) {
|
||||
int sx = x * square_side + square_side/2 + padding;
|
||||
int sy = y * square_side + square_side/2 + padding;
|
||||
/* Rotate and translate randomly as we go down to lower
|
||||
* rows. */
|
||||
float angle = 0;
|
||||
if (y > 1) {
|
||||
float r1 = (float)rand() / RAND_MAX / squares_per_col * y;
|
||||
float r2 = (float)rand() / RAND_MAX / squares_per_col * y;
|
||||
float r3 = (float)rand() / RAND_MAX / squares_per_col * y;
|
||||
if (rand() % 2) r1 = -r1;
|
||||
if (rand() % 2) r2 = -r2;
|
||||
if (rand() % 2) r3 = -r3;
|
||||
angle = r1;
|
||||
sx += r2*square_side/3;
|
||||
sy += r3*square_side/3;
|
||||
}
|
||||
lwDrawSquare(canvas,sx,sy,square_side,angle);
|
||||
}
|
||||
}
|
||||
|
||||
return canvas;
|
||||
}
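With the default LOLWUT parameters (66 columns, 8 squares per row, 12 per column) the sizing computed above works out as in this standalone sketch (illustrative only, not part of the file):

#include <stdio.h>

int main(void) {
    int console_cols = 66, squares_per_row = 8, squares_per_col = 12;
    int canvas_width = console_cols*2;                    /* 132 pixels */
    int padding = canvas_width > 4 ? 2 : 0;               /* 2 */
    float square_side = (float)(canvas_width-padding*2) / squares_per_row; /* 16 */
    int canvas_height = square_side * squares_per_col + padding*2;         /* 196 */
    printf("canvas: %dx%d, square side: %.0f\n",
           canvas_width, canvas_height, square_side);
    return 0;
}

Rendered with 2x4 Braille cells, that 132x196 canvas becomes a 66x49 character image.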
|
||||
|
||||
/* Converts the canvas to an SDS string representing the UTF8 characters to
|
||||
* print to the terminal in order to obtain a graphical representation of the
|
||||
* logical canvas. The actual returned string will require a terminal that is
|
||||
* width/2 large and height/4 tall in order to hold the whole image without
|
||||
* overflowing or scrolling, since each Braille character is 2x4. */
|
||||
sds lwRenderCanvas(lwCanvas *canvas) {
|
||||
sds text = sdsempty();
|
||||
for (int y = 0; y < canvas->height; y += 4) {
|
||||
for (int x = 0; x < canvas->width; x += 2) {
|
||||
/* We need to emit groups of 8 bits according to a specific
|
||||
* arrangement. See lwTranslatePixelsGroup() for more info. */
|
||||
int byte = 0;
|
||||
if (lwGetPixel(canvas,x,y)) byte |= (1<<0);
|
||||
if (lwGetPixel(canvas,x,y+1)) byte |= (1<<1);
|
||||
if (lwGetPixel(canvas,x,y+2)) byte |= (1<<2);
|
||||
if (lwGetPixel(canvas,x+1,y)) byte |= (1<<3);
|
||||
if (lwGetPixel(canvas,x+1,y+1)) byte |= (1<<4);
|
||||
if (lwGetPixel(canvas,x+1,y+2)) byte |= (1<<5);
|
||||
if (lwGetPixel(canvas,x,y+3)) byte |= (1<<6);
|
||||
if (lwGetPixel(canvas,x+1,y+3)) byte |= (1<<7);
|
||||
char unicode[3];
|
||||
lwTranslatePixelsGroup(byte,unicode);
|
||||
text = sdscatlen(text,unicode,3);
|
||||
}
|
||||
if (y != canvas->height-1) text = sdscatlen(text,"\n",1);
|
||||
}
|
||||
return text;
|
||||
}
|
||||
|
||||
/* The LOLWUT command:
|
||||
*
|
||||
* LOLWUT [terminal columns] [squares-per-row] [squares-per-col]
|
||||
*
|
||||
* By default the command uses 66 columns, 8 squares per row, 12 squares
|
||||
* per column.
|
||||
*/
|
||||
void lolwut5Command(client *c) {
|
||||
long cols = 66;
|
||||
long squares_per_row = 8;
|
||||
long squares_per_col = 12;
|
||||
|
||||
/* Parse the optional arguments if any. */
|
||||
if (c->argc > 1 &&
|
||||
getLongFromObjectOrReply(c,c->argv[1],&cols,NULL) != C_OK)
|
||||
return;
|
||||
|
||||
if (c->argc > 2 &&
|
||||
getLongFromObjectOrReply(c,c->argv[2],&squares_per_row,NULL) != C_OK)
|
||||
return;
|
||||
|
||||
if (c->argc > 3 &&
|
||||
getLongFromObjectOrReply(c,c->argv[3],&squares_per_col,NULL) != C_OK)
|
||||
return;
|
||||
|
||||
/* Limits. We want LOLWUT to be always reasonably fast and cheap to execute
|
||||
* so we have a maximum number of columns, rows, and output resolution. */
|
||||
if (cols < 1) cols = 1;
|
||||
if (cols > 1000) cols = 1000;
|
||||
if (squares_per_row < 1) squares_per_row = 1;
|
||||
if (squares_per_row > 200) squares_per_row = 200;
|
||||
if (squares_per_col < 1) squares_per_col = 1;
|
||||
if (squares_per_col > 200) squares_per_col = 200;
|
||||
|
||||
/* Generate some computer art and reply. */
|
||||
lwCanvas *canvas = lwDrawSchotter(cols,squares_per_row,squares_per_col);
|
||||
sds rendered = lwRenderCanvas(canvas);
|
||||
rendered = sdscat(rendered,
|
||||
"\nGeorg Nees - schotter, plotter on paper, 1968. Redis ver. ");
|
||||
rendered = sdscat(rendered,REDIS_VERSION);
|
||||
rendered = sdscatlen(rendered,"\n",1);
|
||||
addReplyBulkSds(c,rendered);
|
||||
lwFreeCanvas(canvas);
|
||||
}
|
src/lzf_d.c (11 changed lines)
@ -52,6 +52,10 @@
|
||||
#endif
|
||||
#endif
|
||||
|
||||
#if defined(__GNUC__) && __GNUC__ >= 5
|
||||
#pragma GCC diagnostic push
|
||||
#pragma GCC diagnostic ignored "-Wimplicit-fallthrough"
|
||||
#endif
|
||||
unsigned int
|
||||
lzf_decompress (const void *const in_data, unsigned int in_len,
|
||||
void *out_data, unsigned int out_len)
|
||||
@ -86,8 +90,6 @@ lzf_decompress (const void *const in_data, unsigned int in_len,
|
||||
#ifdef lzf_movsb
|
||||
lzf_movsb (op, ip, ctrl);
|
||||
#else
|
||||
#pragma GCC diagnostic push
|
||||
#pragma GCC diagnostic ignored "-Wimplicit-fallthrough"
|
||||
switch (ctrl)
|
||||
{
|
||||
case 32: *op++ = *ip++; case 31: *op++ = *ip++; case 30: *op++ = *ip++; case 29: *op++ = *ip++;
|
||||
@ -99,7 +101,6 @@ lzf_decompress (const void *const in_data, unsigned int in_len,
|
||||
case 8: *op++ = *ip++; case 7: *op++ = *ip++; case 6: *op++ = *ip++; case 5: *op++ = *ip++;
|
||||
case 4: *op++ = *ip++; case 3: *op++ = *ip++; case 2: *op++ = *ip++; case 1: *op++ = *ip++;
|
||||
}
|
||||
#pragma GCC diagnostic pop
|
||||
#endif
|
||||
}
|
||||
else /* back reference */
|
||||
@ -185,4 +186,6 @@ lzf_decompress (const void *const in_data, unsigned int in_len,
|
||||
|
||||
return op - (u8 *)out_data;
|
||||
}
|
||||
|
||||
#if defined(__GNUC__) && __GNUC__ >= 5
|
||||
#pragma GCC diagnostic pop
|
||||
#endif
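The push/ignore/pop sequence added here is a general pattern for silencing -Wimplicit-fallthrough only around code that falls through on purpose. A minimal standalone example of the same pattern (not from the commit):

#include <stdio.h>

int main(void) {
    int n = 2, copies = 0;
#if defined(__GNUC__) && __GNUC__ >= 5
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wimplicit-fallthrough"
#endif
    switch (n) {              /* intentional fallthrough, warning suppressed */
    case 2: copies++;
    case 1: copies++;
    }
#if defined(__GNUC__) && __GNUC__ >= 5
#pragma GCC diagnostic pop
#endif
    printf("%d\n", copies);   /* prints 2 */
    return 0;
}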
|
||||
|
@ -2,6 +2,9 @@
|
||||
GIT_SHA1=`(git show-ref --head --hash=8 2> /dev/null || echo 00000000) | head -n1`
|
||||
GIT_DIRTY=`git diff --no-ext-diff 2> /dev/null | wc -l`
|
||||
BUILD_ID=`uname -n`"-"`date +%s`
|
||||
if [ -n "$SOURCE_DATE_EPOCH" ]; then
|
||||
BUILD_ID=$(date -u -d "@$SOURCE_DATE_EPOCH" +%s 2>/dev/null || date -u -r "$SOURCE_DATE_EPOCH" +%s 2>/dev/null || date -u %s)
|
||||
fi
|
||||
test -f release.h || touch release.h
|
||||
(cat release.h | grep SHA1 | grep $GIT_SHA1) && \
|
||||
(cat release.h | grep DIRTY | grep $GIT_DIRTY) && exit 0 # Already up-to-date
|
||||
|
src/module.c (477 changed lines)
@ -64,6 +64,7 @@ struct AutoMemEntry {
|
||||
#define REDISMODULE_AM_STRING 1
|
||||
#define REDISMODULE_AM_REPLY 2
|
||||
#define REDISMODULE_AM_FREED 3 /* Explicitly freed by user already. */
|
||||
#define REDISMODULE_AM_DICT 4
|
||||
|
||||
/* The pool allocator block. Redis Modules can allocate memory via this special
|
||||
* allocator that will automatically release it all once the callback returns.
|
||||
@ -241,9 +242,21 @@ typedef struct RedisModuleKeyspaceSubscriber {
|
||||
/* The module keyspace notification subscribers list */
|
||||
static list *moduleKeyspaceSubscribers;
|
||||
|
||||
/* Static client recycled for all notification clients, to avoid allocating
|
||||
* per round. */
|
||||
static client *moduleKeyspaceSubscribersClient;
|
||||
/* Static client recycled for when we need to provide a context with a client
|
||||
* in a situation where there is no client to provide. This avoids allocating
|
||||
* a new client per round. For instance this is used in the keyspace
|
||||
* notifications, timers and cluster messages callbacks. */
|
||||
static client *moduleFreeContextReusedClient;
|
||||
|
||||
/* Data structures related to the exported dictionary data structure. */
|
||||
typedef struct RedisModuleDict {
|
||||
rax *rax; /* The radix tree. */
|
||||
} RedisModuleDict;
|
||||
|
||||
typedef struct RedisModuleDictIter {
|
||||
RedisModuleDict *dict;
|
||||
raxIterator ri;
|
||||
} RedisModuleDictIter;
|
||||
|
||||
/* --------------------------------------------------------------------------
|
||||
* Prototypes
|
||||
@ -256,6 +269,7 @@ robj **moduleCreateArgvFromUserFormat(const char *cmdname, const char *fmt, int
|
||||
void moduleReplicateMultiIfNeeded(RedisModuleCtx *ctx);
|
||||
void RM_ZsetRangeStop(RedisModuleKey *kp);
|
||||
static void zsetKeyReset(RedisModuleKey *key);
|
||||
void RM_FreeDict(RedisModuleCtx *ctx, RedisModuleDict *d);
|
||||
|
||||
/* --------------------------------------------------------------------------
|
||||
* Heap allocation raw functions
|
||||
@ -474,7 +488,7 @@ void moduleHandlePropagationAfterCommandCallback(RedisModuleCtx *ctx) {
|
||||
if (c->flags & CLIENT_LUA) return;
|
||||
|
||||
/* Handle the replication of the final EXEC, since whatever a command
|
||||
* emits is always wrappered around MULTI/EXEC. */
|
||||
* emits is always wrapped around MULTI/EXEC. */
|
||||
if (ctx->flags & REDISMODULE_CTX_MULTI_EMITTED) {
|
||||
robj *propargv[1];
|
||||
propargv[0] = createStringObject("EXEC",4);
|
||||
@ -548,7 +562,7 @@ void RM_KeyAtPos(RedisModuleCtx *ctx, int pos) {
|
||||
ctx->keys_pos[ctx->keys_count++] = pos;
|
||||
}
|
||||
|
||||
/* Helper for RM_CreateCommand(). Truns a string representing command
|
||||
/* Helper for RM_CreateCommand(). Turns a string representing command
|
||||
* flags into the command flags used by the Redis core.
|
||||
*
|
||||
* It returns the set of flags, or -1 if unknown flags are found. */
|
||||
@ -595,7 +609,7 @@ int commandFlagsFromString(char *s) {
|
||||
* And is supposed to always return REDISMODULE_OK.
|
||||
*
|
||||
* The set of flags 'strflags' specify the behavior of the command, and should
|
||||
* be passed as a C string compoesd of space separated words, like for
|
||||
* be passed as a C string composed of space separated words, like for
|
||||
* example "write deny-oom". The set of flags are:
|
||||
*
|
||||
* * **"write"**: The command may modify the data set (it may also read
|
||||
@ -616,7 +630,7 @@ int commandFlagsFromString(char *s) {
|
||||
* * **"allow-stale"**: The command is allowed to run on slaves that don't
|
||||
* serve stale data. Don't use if you don't know what
|
||||
* this means.
|
||||
* * **"no-monitor"**: Don't propoagate the command on monitor. Use this if
|
||||
* * **"no-monitor"**: Don't propagate the command on monitor. Use this if
|
||||
* the command has sensible data among the arguments.
|
||||
* * **"fast"**: The command time complexity is not greater
|
||||
* than O(log(N)) where N is the size of the collection or
|
||||
@ -670,6 +684,7 @@ int RM_CreateCommand(RedisModuleCtx *ctx, const char *name, RedisModuleCmdFunc c
|
||||
cp->rediscmd->calls = 0;
|
||||
dictAdd(server.commands,sdsdup(cmdname),cp->rediscmd);
|
||||
dictAdd(server.orig_commands,sdsdup(cmdname),cp->rediscmd);
|
||||
cp->rediscmd->id = ACLGetCommandID(cmdname); /* ID used for ACL. */
|
||||
return REDISMODULE_OK;
|
||||
}
|
||||
|
||||
@ -777,6 +792,7 @@ void autoMemoryCollect(RedisModuleCtx *ctx) {
|
||||
case REDISMODULE_AM_STRING: decrRefCount(ptr); break;
|
||||
case REDISMODULE_AM_REPLY: RM_FreeCallReply(ptr); break;
|
||||
case REDISMODULE_AM_KEY: RM_CloseKey(ptr); break;
|
||||
case REDISMODULE_AM_DICT: RM_FreeDict(NULL,ptr); break;
|
||||
}
|
||||
}
|
||||
ctx->flags |= REDISMODULE_CTX_AUTO_MEMORY;
|
||||
@ -794,19 +810,26 @@ void autoMemoryCollect(RedisModuleCtx *ctx) {
|
||||
* with RedisModule_FreeString(), unless automatic memory is enabled.
|
||||
*
|
||||
* The string is created by copying the `len` bytes starting
|
||||
* at `ptr`. No reference is retained to the passed buffer. */
|
||||
* at `ptr`. No reference is retained to the passed buffer.
|
||||
*
|
||||
* The module context 'ctx' is optional and may be NULL if you want to create
|
||||
* a string out of the context scope. However in that case, the automatic
|
||||
* memory management will not be available, and the string memory must be
|
||||
* managed manually. */
|
||||
RedisModuleString *RM_CreateString(RedisModuleCtx *ctx, const char *ptr, size_t len) {
|
||||
RedisModuleString *o = createStringObject(ptr,len);
|
||||
autoMemoryAdd(ctx,REDISMODULE_AM_STRING,o);
|
||||
if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_STRING,o);
|
||||
return o;
|
||||
}
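A hedged sketch of what the NULL-context case enables in a module: keeping a string alive outside any callback and freeing it manually later. It assumes the standard redismodule.h API; the function and variable names are made up for illustration:

#include "redismodule.h"

/* A string kept across calls, outside any callback's context scope. */
static RedisModuleString *cached;

void cacheGreeting(void) {
    /* NULL context: no automatic memory management, so we own the string
     * and must release it ourselves later. */
    cached = RedisModule_CreateString(NULL, "hello", 5);
}

void releaseGreeting(void) {
    /* A string created with a NULL context is also freed with a NULL context. */
    RedisModule_FreeString(NULL, cached);
    cached = NULL;
}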
|
||||
|
||||
|
||||
/* Create a new module string object from a printf format and arguments.
|
||||
* The returned string must be freed with RedisModule_FreeString(), unless
|
||||
* automatic memory is enabled.
|
||||
*
|
||||
* The string is created using the sds formatter function sdscatvprintf(). */
|
||||
* The string is created using the sds formatter function sdscatvprintf().
|
||||
*
|
||||
* The passed context 'ctx' may be NULL if necessary, see the
|
||||
* RedisModule_CreateString() documentation for more info. */
|
||||
RedisModuleString *RM_CreateStringPrintf(RedisModuleCtx *ctx, const char *fmt, ...) {
|
||||
sds s = sdsempty();
|
||||
|
||||
@ -816,7 +839,7 @@ RedisModuleString *RM_CreateStringPrintf(RedisModuleCtx *ctx, const char *fmt, .
|
||||
va_end(ap);
|
||||
|
||||
RedisModuleString *o = createObject(OBJ_STRING, s);
|
||||
autoMemoryAdd(ctx,REDISMODULE_AM_STRING,o);
|
||||
if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_STRING,o);
|
||||
|
||||
return o;
|
||||
}
|
||||
@ -826,7 +849,10 @@ RedisModuleString *RM_CreateStringPrintf(RedisModuleCtx *ctx, const char *fmt, .
|
||||
* integer instead of taking a buffer and its length.
|
||||
*
|
||||
* The returned string must be released with RedisModule_FreeString() or by
|
||||
* enabling automatic memory management. */
|
||||
* enabling automatic memory management.
|
||||
*
|
||||
* The passed context 'ctx' may be NULL if necessary, see the
|
||||
* RedisModule_CreateString() documentation for more info. */
|
||||
RedisModuleString *RM_CreateStringFromLongLong(RedisModuleCtx *ctx, long long ll) {
|
||||
char buf[LONG_STR_SIZE];
|
||||
size_t len = ll2string(buf,sizeof(buf),ll);
|
||||
@ -837,10 +863,13 @@ RedisModuleString *RM_CreateStringFromLongLong(RedisModuleCtx *ctx, long long ll
|
||||
* RedisModuleString.
|
||||
*
|
||||
* The returned string must be released with RedisModule_FreeString() or by
|
||||
* enabling automatic memory management. */
|
||||
* enabling automatic memory management.
|
||||
*
|
||||
* The passed context 'ctx' may be NULL if necessary, see the
|
||||
* RedisModule_CreateString() documentation for more info. */
|
||||
RedisModuleString *RM_CreateStringFromString(RedisModuleCtx *ctx, const RedisModuleString *str) {
|
||||
RedisModuleString *o = dupStringObject(str);
|
||||
autoMemoryAdd(ctx,REDISMODULE_AM_STRING,o);
|
||||
if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_STRING,o);
|
||||
return o;
|
||||
}
|
||||
|
||||
@ -849,10 +878,16 @@ RedisModuleString *RM_CreateStringFromString(RedisModuleCtx *ctx, const RedisMod
|
||||
*
|
||||
* It is possible to call this function even when automatic memory management
|
||||
* is enabled. In that case the string will be released ASAP and removed
|
||||
* from the pool of string to release at the end. */
|
||||
* from the pool of string to release at the end.
|
||||
*
|
||||
* If the string was created with a NULL context 'ctx', it is also possible to
|
||||
* pass ctx as NULL when releasing the string (but passing a context will not
|
||||
* create any issue). Strings created with a context should be freed also passing
|
||||
* the context, so if you want to free a string out of context later, make sure
|
||||
* to create it using a NULL context. */
|
||||
void RM_FreeString(RedisModuleCtx *ctx, RedisModuleString *str) {
|
||||
decrRefCount(str);
|
||||
autoMemoryFreed(ctx,REDISMODULE_AM_STRING,str);
|
||||
if (ctx != NULL) autoMemoryFreed(ctx,REDISMODULE_AM_STRING,str);
|
||||
}
|
||||
|
||||
/* Every call to this function, will make the string 'str' requiring
|
||||
@ -876,9 +911,11 @@ void RM_FreeString(RedisModuleCtx *ctx, RedisModuleString *str) {
|
||||
* Note that when memory management is turned off, you don't need
|
||||
* any call to RetainString() since creating a string will always result
|
||||
* into a string that lives after the callback function returns, if
|
||||
* no FreeString() call is performed. */
|
||||
* no FreeString() call is performed.
|
||||
*
|
||||
* It is possible to call this function with a NULL context. */
|
||||
void RM_RetainString(RedisModuleCtx *ctx, RedisModuleString *str) {
|
||||
if (!autoMemoryFreed(ctx,REDISMODULE_AM_STRING,str)) {
|
||||
if (ctx == NULL || !autoMemoryFreed(ctx,REDISMODULE_AM_STRING,str)) {
|
||||
/* Increment the string reference counting only if we can't
|
||||
* just remove the object from the list of objects that should
|
||||
* be reclaimed. Why we do that, instead of just incrementing
|
||||
@ -956,9 +993,9 @@ RedisModuleString *moduleAssertUnsharedString(RedisModuleString *str) {
|
||||
return str;
|
||||
}
|
||||
|
||||
/* Append the specified buffere to the string 'str'. The string must be a
|
||||
/* Append the specified buffer to the string 'str'. The string must be a
|
||||
* string created by the user that is referenced only a single time, otherwise
|
||||
* REDISMODULE_ERR is returend and the operation is not performed. */
|
||||
* REDISMODULE_ERR is returned and the operation is not performed. */
|
||||
int RM_StringAppendBuffer(RedisModuleCtx *ctx, RedisModuleString *str, const char *buf, size_t len) {
|
||||
UNUSED(ctx);
|
||||
str = moduleAssertUnsharedString(str);
|
||||
@ -1087,10 +1124,10 @@ int RM_ReplyWithArray(RedisModuleCtx *ctx, long len) {
|
||||
ctx->postponed_arrays = zrealloc(ctx->postponed_arrays,sizeof(void*)*
|
||||
(ctx->postponed_arrays_count+1));
|
||||
ctx->postponed_arrays[ctx->postponed_arrays_count] =
|
||||
addDeferredMultiBulkLength(c);
|
||||
addReplyDeferredLen(c);
|
||||
ctx->postponed_arrays_count++;
|
||||
} else {
|
||||
addReplyMultiBulkLen(c,len);
|
||||
addReplyArrayLen(c,len);
|
||||
}
|
||||
return REDISMODULE_OK;
|
||||
}
|
||||
@ -1118,7 +1155,7 @@ int RM_ReplyWithArray(RedisModuleCtx *ctx, long len) {
|
||||
*
|
||||
* Note that in the above example there is no reason to postpone the array
|
||||
* length, since we produce a fixed number of elements, but in the practice
|
||||
* the code may use an interator or other ways of creating the output so
|
||||
* the code may use an iterator or other ways of creating the output so
|
||||
* that is not easy to calculate in advance the number of elements.
|
||||
*/
|
||||
void RM_ReplySetArrayLength(RedisModuleCtx *ctx, long len) {
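A hedged module-side sketch of the postponed-length pattern these APIs support (standard module API assumed; the command logic and its name are made up, and registration via RedisModule_CreateCommand is omitted):

#include "redismodule.h"

/* Reply with only the even numbers among the arguments: the final count is
 * unknown up front, so the array length is postponed and set at the end. */
int EvenOnly_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN);
    long emitted = 0;
    for (int j = 1; j < argc; j++) {
        long long v;
        if (RedisModule_StringToLongLong(argv[j], &v) == REDISMODULE_OK &&
            v % 2 == 0)
        {
            RedisModule_ReplyWithLongLong(ctx, v);
            emitted++;
        }
    }
    RedisModule_ReplySetArrayLength(ctx, emitted);
    return REDISMODULE_OK;
}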
|
||||
@ -1133,7 +1170,7 @@ void RM_ReplySetArrayLength(RedisModuleCtx *ctx, long len) {
|
||||
return;
|
||||
}
|
||||
ctx->postponed_arrays_count--;
|
||||
setDeferredMultiBulkLength(c,
|
||||
setDeferredArrayLen(c,
|
||||
ctx->postponed_arrays[ctx->postponed_arrays_count],
|
||||
len);
|
||||
if (ctx->postponed_arrays_count == 0) {
|
||||
@ -1169,7 +1206,7 @@ int RM_ReplyWithString(RedisModuleCtx *ctx, RedisModuleString *str) {
|
||||
int RM_ReplyWithNull(RedisModuleCtx *ctx) {
|
||||
client *c = moduleGetReplyClient(ctx);
|
||||
if (c == NULL) return REDISMODULE_OK;
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
return REDISMODULE_OK;
|
||||
}
|
||||
|
||||
@ -1410,7 +1447,7 @@ int RM_SelectDb(RedisModuleCtx *ctx, int newid) {
|
||||
* to call other APIs with the key handle as argument to perform
|
||||
* operations on the key.
|
||||
*
|
||||
* The return value is the handle repesenting the key, that must be
|
||||
* The return value is the handle representing the key, that must be
|
||||
* closed with RM_CloseKey().
|
||||
*
|
||||
* If the key does not exist and WRITE mode is requested, the handle
|
||||
@ -1664,7 +1701,7 @@ int RM_StringTruncate(RedisModuleKey *key, size_t newlen) {
|
||||
* Key API for List type
|
||||
* -------------------------------------------------------------------------- */
|
||||
|
||||
/* Push an element into a list, on head or tail depending on 'where' argumnet.
|
||||
/* Push an element into a list, on head or tail depending on 'where' argument.
|
||||
* If the key pointer is about an empty key opened for writing, the key
|
||||
* is created. On error (key opened for read-only operations or of the wrong
|
||||
* type) REDISMODULE_ERR is returned, otherwise REDISMODULE_OK is returned. */
|
||||
@ -1769,7 +1806,7 @@ int RM_ZsetAdd(RedisModuleKey *key, double score, RedisModuleString *ele, int *f
|
||||
* The input and output flags, and the return value, have the same exact
|
||||
* meaning, with the only difference that this function will return
|
||||
* REDISMODULE_ERR even when 'score' is a valid double number, but adding it
|
||||
* to the existing score resuts into a NaN (not a number) condition.
|
||||
* to the existing score results into a NaN (not a number) condition.
|
||||
*
|
||||
* This function has an additional field 'newscore', if not NULL is filled
|
||||
* with the new score of the element after the increment, if no error
|
||||
@ -2150,7 +2187,9 @@ int RM_ZsetRangePrev(RedisModuleKey *key) {
|
||||
*
|
||||
* The function is variadic and the user must specify pairs of field
|
||||
* names and values, both as RedisModuleString pointers (unless the
|
||||
* CFIELD option is set, see later).
|
||||
* CFIELD option is set, see later). At the end of the field/value-ptr pairs,
|
||||
* NULL must be specified as last argument to signal the end of the arguments
|
||||
* in the variadic function.
|
||||
*
|
||||
* Example to set the hash argv[1] to the value argv[2]:
|
||||
*
|
||||
@ -2658,6 +2697,7 @@ RedisModuleCallReply *RM_Call(RedisModuleCtx *ctx, const char *cmdname, const ch
|
||||
/* Create the client and dispatch the command. */
|
||||
va_start(ap, fmt);
|
||||
c = createClient(-1);
|
||||
c->user = NULL; /* Root user. */
|
||||
argv = moduleCreateArgvFromUserFormat(cmdname,fmt,&argc,&flags,ap);
|
||||
replicate = flags & REDISMODULE_ARGV_REPLICATE;
|
||||
va_end(ap);
|
||||
@ -2987,7 +3027,7 @@ int RM_ModuleTypeSetValue(RedisModuleKey *key, moduleType *mt, void *value) {
|
||||
}
|
||||
|
||||
/* Assuming RedisModule_KeyType() returned REDISMODULE_KEYTYPE_MODULE on
|
||||
* the key, returns the moduel type pointer of the value stored at key.
|
||||
* the key, returns the module type pointer of the value stored at key.
|
||||
*
|
||||
* If the key is NULL, is not associated with a module type, or is empty,
|
||||
* then NULL is returned instead. */
|
||||
@ -3287,7 +3327,7 @@ void RM_DigestAddLongLong(RedisModuleDigest *md, long long ll) {
|
||||
mixDigest(md->o,buf,len);
|
||||
}
|
||||
|
||||
/* See the doucmnetation for `RedisModule_DigestAddElement()`. */
|
||||
/* See the documentation for `RedisModule_DigestAddElement()`. */
|
||||
void RM_DigestEndSequence(RedisModuleDigest *md) {
|
||||
xorDigest(md->x,md->o,sizeof(md->o));
|
||||
memset(md->o,0,sizeof(md->o));
|
||||
@ -3484,7 +3524,7 @@ void unblockClientFromModule(client *c) {
|
||||
* reply_timeout: called when the timeout is reached in order to send an
|
||||
* error to the client.
|
||||
*
|
||||
* free_privdata: called in order to free the privata data that is passed
|
||||
* free_privdata: called in order to free the private data that is passed
|
||||
* by RedisModule_UnblockClient() call.
|
||||
*/
|
||||
RedisModuleBlockedClient *RM_BlockClient(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback, RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*), long long timeout_ms) {
|
||||
@ -3631,8 +3671,8 @@ void moduleHandleBlockedClients(void) {
|
||||
* free the temporary client we just used for the replies. */
|
||||
if (c) {
|
||||
if (bc->reply_client->bufpos)
|
||||
addReplyString(c,bc->reply_client->buf,
|
||||
bc->reply_client->bufpos);
|
||||
addReplyProto(c,bc->reply_client->buf,
|
||||
bc->reply_client->bufpos);
|
||||
if (listLength(bc->reply_client->reply))
|
||||
listJoin(c->reply,bc->reply_client->reply);
|
||||
c->reply_bytes += bc->reply_client->reply_bytes;
|
||||
@ -3681,7 +3721,7 @@ void moduleBlockedClientTimedOut(client *c) {
|
||||
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
|
||||
moduleFreeContext(&ctx);
|
||||
/* For timeout events, we do not want to call the disconnect callback,
|
||||
* because the blocekd client will be automatically disconnected in
|
||||
* because the blocked client will be automatically disconnected in
|
||||
* this case, and the user can still hook using the timeout callback. */
|
||||
bc->disconnect_callback = NULL;
|
||||
}
|
||||
@ -3698,7 +3738,7 @@ int RM_IsBlockedTimeoutRequest(RedisModuleCtx *ctx) {
|
||||
return (ctx->flags & REDISMODULE_CTX_BLOCKED_TIMEOUT) != 0;
|
||||
}
|
||||
|
||||
/* Get the privata data set by RedisModule_UnblockClient() */
|
||||
/* Get the private data set by RedisModule_UnblockClient() */
|
||||
void *RM_GetBlockedClientPrivateData(RedisModuleCtx *ctx) {
|
||||
return ctx->blocked_privdata;
|
||||
}
|
||||
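/* A module-side sketch of how this is typically used (names illustrative):
* the private data attached by RedisModule_UnblockClient() is fetched in the
* reply callback and turned into the final reply. */
int MyReply_Callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    long long *result = RedisModule_GetBlockedClientPrivateData(ctx);
    return RedisModule_ReplyWithLongLong(ctx,*result);
}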
@ -3793,11 +3833,11 @@ void moduleReleaseGIL(void) {
|
||||
* -------------------------------------------------------------------------- */
|
||||
|
||||
/* Subscribe to keyspace notifications. This is a low-level version of the
|
||||
* keyspace-notifications API. A module cand register callbacks to be notified
|
||||
* keyspace-notifications API. A module can register callbacks to be notified
|
||||
* when keyspace events occur.
|
||||
*
|
||||
* Notification events are filtered by their type (string events, set events,
|
||||
* etc), and the subsriber callback receives only events that match a specific
|
||||
* etc), and the subscriber callback receives only events that match a specific
|
||||
* mask of event types.
|
||||
*
|
||||
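* A minimal module-side sketch (the callback signature and the notification
* flags used here are assumptions, for illustration only):
*
*     int KeyEvent(RedisModuleCtx *ctx, int type, const char *event,
*                  RedisModuleString *key)
*     {
*         RedisModule_Log(ctx,"notice","got event '%s'",event);
*         return REDISMODULE_OK;
*     }
*
*     // In RedisModule_OnLoad():
*     RedisModule_SubscribeToKeyspaceEvents(ctx,
*         REDISMODULE_NOTIFY_STRING|REDISMODULE_NOTIFY_SET,KeyEvent);
*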
* When subscribing to notifications with RedisModule_SubscribeToKeyspaceEvents
|
||||
@ -3832,7 +3872,7 @@ void moduleReleaseGIL(void) {
|
||||
* used to send anything to the client, and has the db number where the event
|
||||
* occurred as its selected db number.
|
||||
*
|
||||
* Notice that it is not necessary to enable norifications in redis.conf for
|
||||
* Notice that it is not necessary to enable notifications in redis.conf for
|
||||
* module notifications to work.
|
||||
*
|
||||
* Warning: the notification callbacks are performed in a synchronous manner,
|
||||
@ -3873,10 +3913,10 @@ void moduleNotifyKeyspaceEvent(int type, const char *event, robj *key, int dbid)
|
||||
if ((sub->event_mask & type) && sub->active == 0) {
|
||||
RedisModuleCtx ctx = REDISMODULE_CTX_INIT;
|
||||
ctx.module = sub->module;
|
||||
ctx.client = moduleKeyspaceSubscribersClient;
|
||||
ctx.client = moduleFreeContextReusedClient;
|
||||
selectDb(ctx.client, dbid);
|
||||
|
||||
/* mark the handler as activer to avoid reentrant loops.
|
||||
/* mark the handler as active to avoid reentrant loops.
|
||||
* If the subscriber performs an action triggering itself,
|
||||
* it will not be notified about it. */
|
||||
sub->active = 1;
|
||||
@ -3936,6 +3976,8 @@ void moduleCallClusterReceivers(const char *sender_id, uint64_t module_id, uint8
|
||||
if (r->module_id == module_id) {
|
||||
RedisModuleCtx ctx = REDISMODULE_CTX_INIT;
|
||||
ctx.module = r->module;
|
||||
ctx.client = moduleFreeContextReusedClient;
|
||||
selectDb(ctx.client, 0);
|
||||
r->callback(&ctx,sender_id,type,payload,len);
|
||||
moduleFreeContext(&ctx);
|
||||
return;
|
||||
@ -4084,7 +4126,7 @@ size_t RM_GetClusterSize(void) {
|
||||
*
|
||||
* * REDISMODULE_NODE_MYSELF This node
|
||||
* * REDISMODULE_NODE_MASTER The node is a master
|
||||
* * REDISMODULE_NODE_SLAVE The ndoe is a slave
|
||||
* * REDISMODULE_NODE_SLAVE The node is a replica
|
||||
* * REDISMODULE_NODE_PFAIL We see the node as failing
|
||||
* * REDISMODULE_NODE_FAIL The cluster agrees the node is failing
|
||||
* * REDISMODULE_NODE_NOFAILOVER The replica is configured to never fail over
|
||||
@ -4126,6 +4168,32 @@ int RM_GetClusterNodeInfo(RedisModuleCtx *ctx, const char *id, char *ip, char *m
|
||||
return REDISMODULE_OK;
|
||||
}
|
||||
|
||||
/* Set Redis Cluster flags in order to change the normal behavior of
|
||||
* Redis Cluster, especially with the goal of disabling certain functions.
|
||||
* This is useful for modules that use the Cluster API in order to create
|
||||
* a different distributed system, but still want to use the Redis Cluster
|
||||
* message bus. Flags that can be set:
|
||||
*
|
||||
* CLUSTER_MODULE_FLAG_NO_FAILOVER
|
||||
* CLUSTER_MODULE_FLAG_NO_REDIRECTION
|
||||
*
|
||||
* With the following effects:
|
||||
*
|
||||
* NO_FAILOVER: prevent Redis Cluster slaves to failover a failing master.
|
||||
* Also disables the replica migration feature.
|
||||
*
|
||||
* NO_REDIRECTION: Every node will accept any key, without trying to perform
|
||||
* partitioning according to the usual Redis Cluster algorithm.
|
||||
* Slot information will still be propagated across the
|
||||
* cluster, but without effects. */
|
||||
void RM_SetClusterFlags(RedisModuleCtx *ctx, uint64_t flags) {
|
||||
UNUSED(ctx);
|
||||
if (flags & REDISMODULE_CLUSTER_FLAG_NO_FAILOVER)
|
||||
server.cluster_module_flags |= CLUSTER_MODULE_FLAG_NO_FAILOVER;
|
||||
if (flags & REDISMODULE_CLUSTER_FLAG_NO_REDIRECTION)
|
||||
server.cluster_module_flags |= CLUSTER_MODULE_FLAG_NO_REDIRECTION;
|
||||
}
|
||||
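/* For instance, a module that only uses the Cluster message bus (as the
* hellocluster example later in this diff does) might set both flags from
* its OnLoad function; a minimal sketch: */

    RedisModule_SetClusterFlags(ctx,
        REDISMODULE_CLUSTER_FLAG_NO_FAILOVER |
        REDISMODULE_CLUSTER_FLAG_NO_REDIRECTION);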
|
||||
/* --------------------------------------------------------------------------
|
||||
* Modules Timers API
|
||||
*
|
||||
@ -4155,6 +4223,7 @@ typedef struct RedisModuleTimer {
|
||||
RedisModule *module; /* Module reference. */
|
||||
RedisModuleTimerProc callback; /* The callback to invoke on expire. */
|
||||
void *data; /* Private data for the callback. */
|
||||
int dbid; /* Database number selected by the original client. */
|
||||
} RedisModuleTimer;
|
||||
|
||||
/* This is the timer handler that is called by the main event loop. We schedule
|
||||
@ -4180,6 +4249,8 @@ int moduleTimerHandler(struct aeEventLoop *eventLoop, long long id, void *client
|
||||
RedisModuleCtx ctx = REDISMODULE_CTX_INIT;
|
||||
|
||||
ctx.module = timer->module;
|
||||
ctx.client = moduleFreeContextReusedClient;
|
||||
selectDb(ctx.client, timer->dbid);
|
||||
timer->callback(&ctx,timer->data);
|
||||
moduleFreeContext(&ctx);
|
||||
raxRemove(Timers,(unsigned char*)ri.key,ri.key_len,NULL);
|
||||
@ -4204,6 +4275,7 @@ RedisModuleTimerID RM_CreateTimer(RedisModuleCtx *ctx, mstime_t period, RedisMod
|
||||
timer->module = ctx->module;
|
||||
timer->callback = callback;
|
||||
timer->data = data;
|
||||
timer->dbid = ctx->client->db->id;
|
||||
uint64_t expiretime = ustime()+period*1000;
|
||||
uint64_t key;
|
||||
|
||||
@ -4243,7 +4315,7 @@ RedisModuleTimerID RM_CreateTimer(RedisModuleCtx *ctx, mstime_t period, RedisMod
|
||||
}
|
||||
|
||||
/* Stop a timer, returns REDISMODULE_OK if the timer was found, belonged to the
|
||||
* calling module, and was stoped, otherwise REDISMODULE_ERR is returned.
|
||||
* calling module, and was stopped, otherwise REDISMODULE_ERR is returned.
|
||||
* If not NULL, the data pointer is set to the value of the data argument when
|
||||
* the timer was created. */
|
||||
int RM_StopTimer(RedisModuleCtx *ctx, RedisModuleTimerID id, void **data) {
|
||||
@ -4260,7 +4332,7 @@ int RM_StopTimer(RedisModuleCtx *ctx, RedisModuleTimerID id, void **data) {
|
||||
* (in milliseconds), and the private data pointer associated with the timer.
|
||||
* If the timer specified does not exist or belongs to a different module
|
||||
* no information is returned and the function returns REDISMODULE_ERR, otherwise
|
||||
* REDISMODULE_OK is returned. The argumnets remaining or data can be NULL if
|
||||
* REDISMODULE_OK is returned. The arguments remaining or data can be NULL if
|
||||
* the caller does not need certain information. */
|
||||
int RM_GetTimerInfo(RedisModuleCtx *ctx, RedisModuleTimerID id, uint64_t *remaining, void **data) {
|
||||
RedisModuleTimer *timer = raxFind(Timers,(unsigned char*)&id,sizeof(id));
|
||||
@ -4275,6 +4347,257 @@ int RM_GetTimerInfo(RedisModuleCtx *ctx, RedisModuleTimerID id, uint64_t *remain
|
||||
return REDISMODULE_OK;
|
||||
}
|
||||
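/* A sketch of the intended module-side usage (callback name and payload are
* illustrative): */

void timerHandler(RedisModuleCtx *ctx, void *data) {
    RedisModule_Log(ctx,"notice","timer fired: %s",(char*)data);
}

int MyCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    /* Fire once, 1000 milliseconds from now. */
    RedisModuleTimerID tid =
        RedisModule_CreateTimer(ctx,1000,timerHandler,(void*)"hello");
    /* The timer can later be cancelled with RedisModule_StopTimer(ctx,tid,
     * &data), which also hands back the original data pointer. */
    return RedisModule_ReplyWithLongLong(ctx,(long long)tid);
}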
|
||||
/* --------------------------------------------------------------------------
|
||||
* Modules Dictionary API
|
||||
*
|
||||
* Implements a sorted dictionary (actually backed by a radix tree) with
|
||||
* the usual get / set / del / num-items API, together with an iterator
|
||||
* capable of going back and forth.
|
||||
* -------------------------------------------------------------------------- */
|
||||
|
||||
/* Create a new dictionary. The 'ctx' pointer can be the current module context
|
||||
* or NULL, depending on what you want. Please follow these rules:
|
||||
*
|
||||
* 1. Use a NULL context if you plan to retain a reference to this dictionary
|
||||
* that will survive the time of the module callback where you created it.
|
||||
* 2. Use a NULL context if no context is available at the time you are creating
|
||||
* the dictionary (of course...).
|
||||
* 3. However use the current callback context as 'ctx' argument if the
|
||||
* dictionary's lifetime is limited to the callback scope. In this
|
||||
* case, if enabled, you can enjoy the automatic memory management that will
|
||||
* reclaim the dictionary memory, as well as the strings returned by the
|
||||
* Next / Prev dictionary iterator calls.
|
||||
*/
|
||||
RedisModuleDict *RM_CreateDict(RedisModuleCtx *ctx) {
|
||||
struct RedisModuleDict *d = zmalloc(sizeof(*d));
|
||||
d->rax = raxNew();
|
||||
if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_DICT,d);
|
||||
return d;
|
||||
}
|
||||
|
||||
/* Free a dictionary created with RM_CreateDict(). You need to pass the
|
||||
* context pointer 'ctx' only if the dictionary was created using the
|
||||
* context instead of passing NULL. */
|
||||
void RM_FreeDict(RedisModuleCtx *ctx, RedisModuleDict *d) {
|
||||
if (ctx != NULL) autoMemoryFreed(ctx,REDISMODULE_AM_DICT,d);
|
||||
raxFree(d->rax);
|
||||
zfree(d);
|
||||
}
|
||||
|
||||
/* Return the size of the dictionary (number of keys). */
|
||||
uint64_t RM_DictSize(RedisModuleDict *d) {
|
||||
return raxSize(d->rax);
|
||||
}
|
||||
|
||||
/* Store the specified key into the dictionary, setting its value to the
|
||||
* pointer 'ptr'. If the key was added with success, since it did not
|
||||
* already exist, REDISMODULE_OK is returned. Otherwise if the key already
|
||||
* exists the function returns REDISMODULE_ERR. */
|
||||
int RM_DictSetC(RedisModuleDict *d, void *key, size_t keylen, void *ptr) {
|
||||
int retval = raxTryInsert(d->rax,key,keylen,ptr,NULL);
|
||||
return (retval == 1) ? REDISMODULE_OK : REDISMODULE_ERR;
|
||||
}
|
||||
|
||||
/* Like RedisModule_DictSetC() but will replace the value stored at the
* key if the key already exists. */
|
||||
int RM_DictReplaceC(RedisModuleDict *d, void *key, size_t keylen, void *ptr) {
|
||||
int retval = raxInsert(d->rax,key,keylen,ptr,NULL);
|
||||
return (retval == 1) ? REDISMODULE_OK : REDISMODULE_ERR;
|
||||
}
|
||||
|
||||
/* Like RedisModule_DictSetC() but takes the key as a RedisModuleString. */
|
||||
int RM_DictSet(RedisModuleDict *d, RedisModuleString *key, void *ptr) {
|
||||
return RM_DictSetC(d,key->ptr,sdslen(key->ptr),ptr);
|
||||
}
|
||||
|
||||
/* Like RedisModule_DictReplaceC() but takes the key as a RedisModuleString. */
|
||||
int RM_DictReplace(RedisModuleDict *d, RedisModuleString *key, void *ptr) {
|
||||
return RM_DictReplaceC(d,key->ptr,sdslen(key->ptr),ptr);
|
||||
}
|
||||
|
||||
/* Return the value stored at the specified key. The function returns NULL
|
||||
* both when the key does not exist and when you actually stored
|
||||
* NULL at the key. So, optionally, if the 'nokey' pointer is not NULL, it will
|
||||
* be set by reference to 1 if the key does not exist, or to 0 if the key
|
||||
* exists. */
|
||||
void *RM_DictGetC(RedisModuleDict *d, void *key, size_t keylen, int *nokey) {
|
||||
void *res = raxFind(d->rax,key,keylen);
|
||||
if (nokey) *nokey = (res == raxNotFound);
|
||||
return (res == raxNotFound) ? NULL : res;
|
||||
}
|
||||
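/* A short sketch, inside a module callback, of how the two cases are told
* apart (key name illustrative): */

    int nokey;
    void *val = RedisModule_DictGetC(d,(void*)"user:1",6,&nokey);
    if (nokey) {
        /* "user:1" is not in the dictionary at all. */
    } else {
        /* The key exists; 'val' may still legitimately be NULL. */
    }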
|
||||
/* Like RedisModule_DictGetC() but takes the key as a RedisModuleString. */
|
||||
void *RM_DictGet(RedisModuleDict *d, RedisModuleString *key, int *nokey) {
|
||||
return RM_DictGetC(d,key->ptr,sdslen(key->ptr),nokey);
|
||||
}
|
||||
|
||||
/* Remove the specified key from the dictionary, returning REDISMODULE_OK if
|
||||
* the key was found and deleted, or REDISMODULE_ERR if instead there was
|
||||
* no such key in the dictionary. When the operation is successful, if
|
||||
* 'oldval' is not NULL, then '*oldval' is set to the value stored at the
|
||||
* key before it was deleted. Using this feature it is possible to get
|
||||
* a pointer to the value (for instance in order to release it), without
|
||||
* having to call RedisModule_DictGet() before deleting the key. */
|
||||
int RM_DictDelC(RedisModuleDict *d, void *key, size_t keylen, void *oldval) {
|
||||
int retval = raxRemove(d->rax,key,keylen,oldval);
|
||||
return retval ? REDISMODULE_OK : REDISMODULE_ERR;
|
||||
}
|
||||
|
||||
/* Like RedisModule_DictDelC() but gets the key as a RedisModuleString. */
|
||||
int RM_DictDel(RedisModuleDict *d, RedisModuleString *key, void *oldval) {
|
||||
return RM_DictDelC(d,key->ptr,sdslen(key->ptr),oldval);
|
||||
}
|
||||
|
||||
/* Return an iterator, set up in order to start iterating from the specified
|
||||
* key by applying the operator 'op', which is just a string specifying the
|
||||
* comparison operator to use in order to seek the first element. The
|
||||
* operators available are:
|
||||
*
|
||||
* "^" -- Seek the first (lexicographically smaller) key.
|
||||
* "$" -- Seek the last (lexicographically biffer) key.
|
||||
* ">" -- Seek the first element greter than the specified key.
|
||||
* ">=" -- Seek the first element greater or equal than the specified key.
|
||||
* "<" -- Seek the first element smaller than the specified key.
|
||||
* "<=" -- Seek the first element smaller or equal than the specified key.
|
||||
* "==" -- Seek the first element matching exactly the specified key.
|
||||
*
|
||||
* Note that for "^" and "$" the passed key is not used, and the user may
|
||||
* just pass NULL with a length of 0.
|
||||
*
|
||||
* If the element to start the iteration cannot be seeked based on the
|
||||
* key and operator passed, RedisModule_DictNext() / Prev() will just return
|
||||
* REDISMODULE_ERR at the first call, otherwise they'll produce elements.
|
||||
*/
|
||||
RedisModuleDictIter *RM_DictIteratorStartC(RedisModuleDict *d, const char *op, void *key, size_t keylen) {
|
||||
RedisModuleDictIter *di = zmalloc(sizeof(*di));
|
||||
di->dict = d;
|
||||
raxStart(&di->ri,d->rax);
|
||||
raxSeek(&di->ri,op,key,keylen);
|
||||
return di;
|
||||
}
|
||||
|
||||
/* Exactly like RedisModule_DictIteratorStartC, but the key is passed as a
|
||||
* RedisModuleString. */
|
||||
RedisModuleDictIter *RM_DictIteratorStart(RedisModuleDict *d, const char *op, RedisModuleString *key) {
|
||||
return RM_DictIteratorStartC(d,op,key->ptr,sdslen(key->ptr));
|
||||
}
|
||||
|
||||
/* Release the iterator created with RedisModule_DictIteratorStart(). This call
|
||||
* is mandatory otherwise a memory leak is introduced in the module. */
|
||||
void RM_DictIteratorStop(RedisModuleDictIter *di) {
|
||||
raxStop(&di->ri);
|
||||
zfree(di);
|
||||
}
|
||||
|
||||
/* After its creation with RedisModule_DictIteratorStart(), it is possible to
|
||||
* change the currently selected element of the iterator by using this
|
||||
* API call. The result based on the operator and key is exactly like
|
||||
* the function RedisModule_DictIteratorStart(), however in this case the
|
||||
* return value is just REDISMODULE_OK in case the seeked element was found,
|
||||
* or REDISMODULE_ERR in case it was not possible to seek the specified
|
||||
* element. It is possible to reseek an iterator as many times as you want. */
|
||||
int RM_DictIteratorReseekC(RedisModuleDictIter *di, const char *op, void *key, size_t keylen) {
|
||||
return raxSeek(&di->ri,op,key,keylen);
|
||||
}
|
||||
|
||||
/* Like RedisModule_DictIteratorReseekC() but takes the key as a
|
||||
* RedisModuleString. */
|
||||
int RM_DictIteratorReseek(RedisModuleDictIter *di, const char *op, RedisModuleString *key) {
|
||||
return RM_DictIteratorReseekC(di,op,key->ptr,sdslen(key->ptr));
|
||||
}
|
||||
|
||||
/* Return the current item of the dictionary iterator 'di' and steps to the
|
||||
* next element. If the iterator already yielded the last element and there
|
||||
* are no other elements to return, NULL is returned, otherwise a pointer
|
||||
* to a string representing the key is provided, and the '*keylen' length
|
||||
* is set by reference (if keylen is not NULL). The '*dataptr', if not NULL
|
||||
* is set to the value of the pointer stored at the returned key as auxiliary
|
||||
* data (as set by the RedisModule_DictSet API).
|
||||
*
|
||||
* Usage example:
|
||||
*
|
||||
* ... create the iterator here ...
|
||||
* char *key;
|
||||
* void *data;
|
||||
* while((key = RedisModule_DictNextC(iter,&keylen,&data)) != NULL) {
|
||||
* printf("%.*s %p\n", (int)keylen, key, data);
|
||||
* }
|
||||
*
|
||||
* The returned pointer is of type void because sometimes it makes sense
* to cast it to a char* and sometimes to an unsigned char*, depending on
* whether it contains binary data, so this API ends up being more
* comfortable to use.
|
||||
*
|
||||
* The returned pointer remains valid until the next call to the
* next/prev iterator step. Also the pointer is no longer valid once the
|
||||
* iterator is released. */
|
||||
void *RM_DictNextC(RedisModuleDictIter *di, size_t *keylen, void **dataptr) {
|
||||
if (!raxNext(&di->ri)) return NULL;
|
||||
if (keylen) *keylen = di->ri.key_len;
|
||||
if (dataptr) *dataptr = di->ri.data;
|
||||
return di->ri.key;
|
||||
}
|
||||
|
||||
/* This function is exactly like RedisModule_DictNext() but after returning
|
||||
* the currently selected element in the iterator, it selects the previous
|
||||
* element (lexicographically smaller) instead of the next one. */
|
||||
void *RM_DictPrevC(RedisModuleDictIter *di, size_t *keylen, void **dataptr) {
|
||||
if (!raxPrev(&di->ri)) return NULL;
|
||||
if (keylen) *keylen = di->ri.key_len;
|
||||
if (dataptr) *dataptr = di->ri.data;
|
||||
return di->ri.key;
|
||||
}
|
||||
|
||||
/* Like RedisModule_DictNextC(), but instead of returning an internally allocated
|
||||
* buffer and key length, it returns directly a module string object allocated
|
||||
* in the specified context 'ctx' (that may be NULL exactly like for the main
|
||||
* API RedisModule_CreateString).
|
||||
*
|
||||
* The returned string object should be deallocated after use, either manually
|
||||
* or by using a context that has automatic memory management active. */
|
||||
RedisModuleString *RM_DictNext(RedisModuleCtx *ctx, RedisModuleDictIter *di, void **dataptr) {
|
||||
size_t keylen;
|
||||
void *key = RM_DictNextC(di,&keylen,dataptr);
|
||||
if (key == NULL) return NULL;
|
||||
return RM_CreateString(ctx,key,keylen);
|
||||
}
|
||||
|
||||
/* Like RedisModule_DictNext() but after returning the currently selected
|
||||
* element in the iterator, it selects the previous element (lexicographically
|
||||
* smaller) instead of the next one. */
|
||||
RedisModuleString *RM_DictPrev(RedisModuleCtx *ctx, RedisModuleDictIter *di, void **dataptr) {
|
||||
size_t keylen;
|
||||
void *key = RM_DictPrevC(di,&keylen,dataptr);
|
||||
if (key == NULL) return NULL;
|
||||
return RM_CreateString(ctx,key,keylen);
|
||||
}
|
||||
|
||||
/* Compare the element currently pointed by the iterator to the specified
|
||||
* element given by key/keylen, according to the operator 'op' (the set of
|
||||
* valid operators are the same valid for RedisModule_DictIteratorStart).
|
||||
* If the comparison is successful the function returns REDISMODULE_OK
|
||||
* otherwise REDISMODULE_ERR is returned.
|
||||
*
|
||||
* This is useful when we want to just emit a lexicographical range, so
|
||||
* in the loop, as we iterate elements, we can also check if we are still
|
||||
* in range.
|
||||
*
|
||||
* The function returns REDISMODULE_ERR if the iterator reached the
|
||||
* end-of-elements condition as well. */
|
||||
int RM_DictCompareC(RedisModuleDictIter *di, const char *op, void *key, size_t keylen) {
|
||||
if (raxEOF(&di->ri)) return REDISMODULE_ERR;
|
||||
int res = raxCompare(&di->ri,op,key,keylen);
|
||||
return res ? REDISMODULE_OK : REDISMODULE_ERR;
|
||||
}
|
||||
|
||||
/* Like RedisModule_DictCompareC but gets the key to compare with the current
|
||||
* iterator key as a RedisModuleString. */
|
||||
int RM_DictCompare(RedisModuleDictIter *di, const char *op, RedisModuleString *key) {
|
||||
if (raxEOF(&di->ri)) return REDISMODULE_ERR;
|
||||
int res = raxCompare(&di->ri,op,key->ptr,sdslen(key->ptr));
|
||||
return res ? REDISMODULE_OK : REDISMODULE_ERR;
|
||||
}
|
||||
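/* Putting the iterator, next and compare calls together, a bounded
* lexicographical scan from the module side might look like this sketch
* (the key bounds are illustrative): */

void scanRange(RedisModuleCtx *ctx, RedisModuleDict *d) {
    RedisModuleDictIter *iter =
        RedisModule_DictIteratorStartC(d,">=",(void*)"key:a",5);
    char *key;
    size_t keylen;
    while ((key = RedisModule_DictNextC(iter,&keylen,NULL)) != NULL) {
        /* Stop once the current key falls past the end of the range. */
        if (RedisModule_DictCompareC(iter,"<=",(void*)"key:z",5)
            == REDISMODULE_ERR) break;
        RedisModule_Log(ctx,"notice","in range: %.*s",(int)keylen,key);
    }
    RedisModule_DictIteratorStop(iter);
}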
|
||||
/* --------------------------------------------------------------------------
|
||||
* Modules utility APIs
|
||||
* -------------------------------------------------------------------------- */
|
||||
@ -4336,8 +4659,9 @@ void moduleInitModulesSystem(void) {
|
||||
|
||||
/* Set up the keyspace notification subscriber list and static client */
|
||||
moduleKeyspaceSubscribers = listCreate();
|
||||
moduleKeyspaceSubscribersClient = createClient(-1);
|
||||
moduleKeyspaceSubscribersClient->flags |= CLIENT_MODULE;
|
||||
moduleFreeContextReusedClient = createClient(-1);
|
||||
moduleFreeContextReusedClient->flags |= CLIENT_MODULE;
|
||||
moduleFreeContextReusedClient->user = NULL; /* root user. */
|
||||
|
||||
moduleRegisterCoreAPI();
|
||||
if (pipe(server.module_blocked_pipe) == -1) {
|
||||
@ -4475,7 +4799,7 @@ int moduleUnload(sds name) {
|
||||
|
||||
moduleUnregisterCommands(module);
|
||||
|
||||
/* Remvoe any noification subscribers this module might have */
|
||||
/* Remove any notification subscribers this module might have */
|
||||
moduleUnsubscribeNotifications(module);
|
||||
|
||||
/* Unregister all the hooks. TODO: Yet no hooks support here. */
|
||||
@ -4497,6 +4821,25 @@ int moduleUnload(sds name) {
|
||||
return REDISMODULE_OK;
|
||||
}
|
||||
|
||||
/* Helper function for the MODULE and HELLO command: send the list of the
|
||||
* loaded modules to the client. */
|
||||
void addReplyLoadedModules(client *c) {
|
||||
dictIterator *di = dictGetIterator(modules);
|
||||
dictEntry *de;
|
||||
|
||||
addReplyArrayLen(c,dictSize(modules));
|
||||
while ((de = dictNext(di)) != NULL) {
|
||||
sds name = dictGetKey(de);
|
||||
struct RedisModule *module = dictGetVal(de);
|
||||
addReplyMapLen(c,2);
|
||||
addReplyBulkCString(c,"name");
|
||||
addReplyBulkCBuffer(c,name,sdslen(name));
|
||||
addReplyBulkCString(c,"ver");
|
||||
addReplyLongLong(c,module->ver);
|
||||
}
|
||||
dictReleaseIterator(di);
|
||||
}
|
||||
|
||||
/* Redis MODULE command.
|
||||
*
|
||||
* MODULE LOAD <path> [args...] */
|
||||
@ -4544,20 +4887,7 @@ NULL
|
||||
addReplyErrorFormat(c,"Error unloading module: %s",errmsg);
|
||||
}
|
||||
} else if (!strcasecmp(subcmd,"list") && c->argc == 2) {
|
||||
dictIterator *di = dictGetIterator(modules);
|
||||
dictEntry *de;
|
||||
|
||||
addReplyMultiBulkLen(c,dictSize(modules));
|
||||
while ((de = dictNext(di)) != NULL) {
|
||||
sds name = dictGetKey(de);
|
||||
struct RedisModule *module = dictGetVal(de);
|
||||
addReplyMultiBulkLen(c,4);
|
||||
addReplyBulkCString(c,"name");
|
||||
addReplyBulkCBuffer(c,name,sdslen(name));
|
||||
addReplyBulkCString(c,"ver");
|
||||
addReplyLongLong(c,module->ver);
|
||||
}
|
||||
dictReleaseIterator(di);
|
||||
addReplyLoadedModules(c);
|
||||
} else {
|
||||
addReplySubcommandSyntaxError(c);
|
||||
return;
|
||||
@ -4700,4 +5030,27 @@ void moduleRegisterCoreAPI(void) {
|
||||
REGISTER_API(BlockedClientDisconnected);
|
||||
REGISTER_API(SetDisconnectCallback);
|
||||
REGISTER_API(GetBlockedClientHandle);
|
||||
REGISTER_API(SetClusterFlags);
|
||||
REGISTER_API(CreateDict);
|
||||
REGISTER_API(FreeDict);
|
||||
REGISTER_API(DictSize);
|
||||
REGISTER_API(DictSetC);
|
||||
REGISTER_API(DictReplaceC);
|
||||
REGISTER_API(DictSet);
|
||||
REGISTER_API(DictReplace);
|
||||
REGISTER_API(DictGetC);
|
||||
REGISTER_API(DictGet);
|
||||
REGISTER_API(DictDelC);
|
||||
REGISTER_API(DictDel);
|
||||
REGISTER_API(DictIteratorStartC);
|
||||
REGISTER_API(DictIteratorStart);
|
||||
REGISTER_API(DictIteratorStop);
|
||||
REGISTER_API(DictIteratorReseekC);
|
||||
REGISTER_API(DictIteratorReseek);
|
||||
REGISTER_API(DictNextC);
|
||||
REGISTER_API(DictPrevC);
|
||||
REGISTER_API(DictNext);
|
||||
REGISTER_API(DictPrev);
|
||||
REGISTER_API(DictCompareC);
|
||||
REGISTER_API(DictCompare);
|
||||
}
|
||||
|
@ -13,7 +13,7 @@ endif
|
||||
|
||||
.SUFFIXES: .c .so .xo .o
|
||||
|
||||
all: helloworld.so hellotype.so helloblock.so testmodule.so hellocluster.so hellotimer.so
|
||||
all: helloworld.so hellotype.so helloblock.so testmodule.so hellocluster.so hellotimer.so hellodict.so
|
||||
|
||||
.c.xo:
|
||||
$(CC) -I. $(CFLAGS) $(SHOBJ_CFLAGS) -fPIC -c $< -o $@
|
||||
@ -43,6 +43,11 @@ hellotimer.xo: ../redismodule.h
|
||||
hellotimer.so: hellotimer.xo
|
||||
$(LD) -o $@ $< $(SHOBJ_LDFLAGS) $(LIBS) -lc
|
||||
|
||||
hellodict.xo: ../redismodule.h
|
||||
|
||||
hellodict.so: hellodict.xo
|
||||
$(LD) -o $@ $< $(SHOBJ_LDFLAGS) $(LIBS) -lc
|
||||
|
||||
testmodule.xo: ../redismodule.h
|
||||
|
||||
testmodule.so: testmodule.xo
|
||||
|
@ -77,7 +77,7 @@ void *HelloBlock_ThreadMain(void *arg) {
|
||||
/* An example blocked client disconnection callback.
|
||||
*
|
||||
* Note that in the case of the HELLO.BLOCK command, the blocked client is now
|
||||
* owned by the thread calling sleep(). In this speciifc case, there is not
|
||||
* owned by the thread calling sleep(). In this specific case, there is not
|
||||
* much we can do, however normally we could instead implement a way to
|
||||
* signal the thread that the client disconnected, and sleep the specified
|
||||
* amount of seconds with a while loop calling sleep(1), so that once we
|
||||
|
@ -69,7 +69,7 @@ int ListCommand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int
|
||||
RedisModule_ReplyWithLongLong(ctx,port);
|
||||
}
|
||||
RedisModule_FreeClusterNodesList(ids);
|
||||
return RedisModule_ReplyWithSimpleString(ctx, "OK");
|
||||
return REDISMODULE_OK;
|
||||
}
|
||||
|
||||
/* Callback for message MSGTYPE_PING */
|
||||
@ -77,6 +77,7 @@ void PingReceiver(RedisModuleCtx *ctx, const char *sender_id, uint8_t type, cons
|
||||
RedisModule_Log(ctx,"notice","PING (type %d) RECEIVED from %.*s: '%.*s'",
|
||||
type,REDISMODULE_NODE_ID_LEN,sender_id,(int)len, payload);
|
||||
RedisModule_SendClusterMessage(ctx,NULL,MSGTYPE_PONG,(unsigned char*)"Ohi!",4);
|
||||
RedisModule_Call(ctx, "INCR", "c", "pings_received");
|
||||
}
|
||||
|
||||
/* Callback for message MSGTYPE_PONG. */
|
||||
@ -102,6 +103,15 @@ int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)
|
||||
ListCommand_RedisCommand,"readonly",0,0,0) == REDISMODULE_ERR)
|
||||
return REDISMODULE_ERR;
|
||||
|
||||
/* Disable Redis Cluster sharding and redirections. This way every node
|
||||
* will be able to access every possible key, regardless of the hash slot,
* so that the PING message handler will be able to increment a specific
* key. Normally you do that in order for the distributed system
|
||||
* you create as a module to have total freedom in the keyspace
|
||||
* manipulation. */
|
||||
RedisModule_SetClusterFlags(ctx,REDISMODULE_CLUSTER_FLAG_NO_REDIRECTION);
|
||||
|
||||
/* Register our handlers for different message types. */
|
||||
RedisModule_RegisterClusterMessageReceiver(ctx,MSGTYPE_PING,PingReceiver);
|
||||
RedisModule_RegisterClusterMessageReceiver(ctx,MSGTYPE_PONG,PongReceiver);
|
||||
return REDISMODULE_OK;
|
||||
|
132  src/modules/hellodict.c  (new file)
@ -0,0 +1,132 @@
|
||||
/* Hellodict -- An example of modules dictionary API
|
||||
*
|
||||
* This module implements a volatile key-value store on top of the
|
||||
* dictionary exported by the Redis modules API.
|
||||
*
|
||||
* -----------------------------------------------------------------------------
|
||||
*
|
||||
* Copyright (c) 2018, Salvatore Sanfilippo <antirez at gmail dot com>
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright notice,
|
||||
* this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above copyright
|
||||
* notice, this list of conditions and the following disclaimer in the
|
||||
* documentation and/or other materials provided with the distribution.
|
||||
* * Neither the name of Redis nor the names of its contributors may be used
|
||||
* to endorse or promote products derived from this software without
|
||||
* specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
|
||||
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
||||
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
|
||||
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
|
||||
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
|
||||
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
|
||||
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
|
||||
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
|
||||
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
|
||||
* POSSIBILITY OF SUCH DAMAGE.
|
||||
*/
|
||||
|
||||
#define REDISMODULE_EXPERIMENTAL_API
|
||||
#include "../redismodule.h"
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <ctype.h>
|
||||
#include <string.h>
|
||||
|
||||
static RedisModuleDict *Keyspace;
|
||||
|
||||
/* HELLODICT.SET <key> <value>
|
||||
*
|
||||
* Set the specified key to the specified value. */
|
||||
int cmd_SET(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
|
||||
if (argc != 3) return RedisModule_WrongArity(ctx);
|
||||
RedisModule_DictSet(Keyspace,argv[1],argv[2]);
|
||||
/* We need to keep a reference to the value stored at the key, otherwise
|
||||
* it would be freed when this callback returns. */
|
||||
RedisModule_RetainString(NULL,argv[2]);
|
||||
return RedisModule_ReplyWithSimpleString(ctx, "OK");
|
||||
}
|
||||
|
||||
/* HELLODICT.GET <key>
|
||||
*
|
||||
* Return the value of the specified key, or a null reply if the key
|
||||
* is not defined. */
|
||||
int cmd_GET(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
|
||||
if (argc != 2) return RedisModule_WrongArity(ctx);
|
||||
RedisModuleString *val = RedisModule_DictGet(Keyspace,argv[1],NULL);
|
||||
if (val == NULL) {
|
||||
return RedisModule_ReplyWithNull(ctx);
|
||||
} else {
|
||||
return RedisModule_ReplyWithString(ctx, val);
|
||||
}
|
||||
}
|
||||
|
||||
/* HELLODICT.KEYRANGE <startkey> <endkey> <count>
|
||||
*
|
||||
* Return a list of matching keys, lexicographically between startkey
|
||||
* and endkey. No more than 'count' items are emitted. */
|
||||
int cmd_KEYRANGE(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
|
||||
if (argc != 4) return RedisModule_WrongArity(ctx);
|
||||
|
||||
/* Parse the count argument. */
|
||||
long long count;
|
||||
if (RedisModule_StringToLongLong(argv[3],&count) != REDISMODULE_OK) {
|
||||
return RedisModule_ReplyWithError(ctx,"ERR invalid count");
|
||||
}
|
||||
|
||||
/* Seek the iterator. */
|
||||
RedisModuleDictIter *iter = RedisModule_DictIteratorStart(
|
||||
Keyspace, ">=", argv[1]);
|
||||
|
||||
/* Reply with the matching items. */
|
||||
char *key;
|
||||
size_t keylen;
|
||||
long long replylen = 0; /* Keep track of the emitted array len. */
|
||||
RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_ARRAY_LEN);
|
||||
while((key = RedisModule_DictNextC(iter,&keylen,NULL)) != NULL) {
|
||||
if (replylen >= count) break;
|
||||
if (RedisModule_DictCompare(iter,"<=",argv[2]) == REDISMODULE_ERR)
|
||||
break;
|
||||
RedisModule_ReplyWithStringBuffer(ctx,key,keylen);
|
||||
replylen++;
|
||||
}
|
||||
RedisModule_ReplySetArrayLength(ctx,replylen);
|
||||
|
||||
/* Cleanup. */
|
||||
RedisModule_DictIteratorStop(iter);
|
||||
return REDISMODULE_OK;
|
||||
}
|
||||
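/* Example interaction from redis-cli (illustrative):
*
*   HELLODICT.SET key:1 a
*   HELLODICT.SET key:2 b
*   HELLODICT.SET key:3 c
*   HELLODICT.KEYRANGE key:1 key:2 10
*   1) "key:1"
*   2) "key:2"
*/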
|
||||
/* This function must be present on each Redis module. It is used in order to
|
||||
* register the commands into the Redis server. */
|
||||
int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
|
||||
REDISMODULE_NOT_USED(argv);
|
||||
REDISMODULE_NOT_USED(argc);
|
||||
|
||||
if (RedisModule_Init(ctx,"hellodict",1,REDISMODULE_APIVER_1)
|
||||
== REDISMODULE_ERR) return REDISMODULE_ERR;
|
||||
|
||||
if (RedisModule_CreateCommand(ctx,"hellodict.set",
|
||||
cmd_SET,"write deny-oom",1,1,0) == REDISMODULE_ERR)
|
||||
return REDISMODULE_ERR;
|
||||
|
||||
if (RedisModule_CreateCommand(ctx,"hellodict.get",
|
||||
cmd_GET,"readonly",1,1,0) == REDISMODULE_ERR)
|
||||
return REDISMODULE_ERR;
|
||||
|
||||
if (RedisModule_CreateCommand(ctx,"hellodict.keyrange",
|
||||
cmd_KEYRANGE,"readonly",1,1,0) == REDISMODULE_ERR)
|
||||
return REDISMODULE_ERR;
|
||||
|
||||
/* Create our global dictionary. Here we'll set our keys and values. */
|
||||
Keyspace = RedisModule_CreateDict(NULL);
|
||||
|
||||
return REDISMODULE_OK;
|
||||
}
|
@ -1,4 +1,4 @@
|
||||
/* Helloworld cluster -- A ping/pong cluster API example.
|
||||
/* Timer API example -- Register and handle timer events
|
||||
*
|
||||
* -----------------------------------------------------------------------------
|
||||
*
|
||||
@ -37,9 +37,6 @@
|
||||
#include <ctype.h>
|
||||
#include <string.h>
|
||||
|
||||
#define MSGTYPE_PING 1
|
||||
#define MSGTYPE_PONG 2
|
||||
|
||||
/* Timer callback. */
|
||||
void timerHandler(RedisModuleCtx *ctx, void *data) {
|
||||
REDISMODULE_NOT_USED(ctx);
|
||||
|
23  src/multi.c
@ -35,6 +35,7 @@
|
||||
void initClientMultiState(client *c) {
|
||||
c->mstate.commands = NULL;
|
||||
c->mstate.count = 0;
|
||||
c->mstate.cmd_flags = 0;
|
||||
}
|
||||
|
||||
/* Release all the resources associated with MULTI/EXEC state */
|
||||
@ -67,6 +68,7 @@ void queueMultiCommand(client *c) {
|
||||
for (j = 0; j < c->argc; j++)
|
||||
incrRefCount(mc->argv[j]);
|
||||
c->mstate.count++;
|
||||
c->mstate.cmd_flags |= c->cmd->flags;
|
||||
}
|
||||
|
||||
void discardTransaction(client *c) {
|
||||
@ -132,7 +134,22 @@ void execCommand(client *c) {
|
||||
* in the second an EXECABORT error is returned. */
|
||||
if (c->flags & (CLIENT_DIRTY_CAS|CLIENT_DIRTY_EXEC)) {
|
||||
addReply(c, c->flags & CLIENT_DIRTY_EXEC ? shared.execaborterr :
|
||||
shared.nullmultibulk);
|
||||
shared.nullarray[c->resp]);
|
||||
discardTransaction(c);
|
||||
goto handle_monitor;
|
||||
}
|
||||
|
||||
/* If there are write commands inside the transaction, and this is a read
* only replica, we want to send an error. This happens when the transaction
* was initiated when the instance was a master or a writable replica and
* then the configuration changed (for example the instance was turned into
* a replica). */
|
||||
if (!server.loading && server.masterhost && server.repl_slave_ro &&
|
||||
!(c->flags & CLIENT_MASTER) && c->mstate.cmd_flags & CMD_WRITE)
|
||||
{
|
||||
addReplyError(c,
|
||||
"Transaction contains write commands but instance "
|
||||
"is now a read-only replica. EXEC aborted.");
|
||||
discardTransaction(c);
|
||||
goto handle_monitor;
|
||||
}
|
||||
@ -142,7 +159,7 @@ void execCommand(client *c) {
|
||||
orig_argv = c->argv;
|
||||
orig_argc = c->argc;
|
||||
orig_cmd = c->cmd;
|
||||
addReplyMultiBulkLen(c,c->mstate.count);
|
||||
addReplyArrayLen(c,c->mstate.count);
|
||||
for (j = 0; j < c->mstate.count; j++) {
|
||||
c->argc = c->mstate.commands[j].argc;
|
||||
c->argv = c->mstate.commands[j].argv;
|
||||
@ -158,7 +175,7 @@ void execCommand(client *c) {
|
||||
must_propagate = 1;
|
||||
}
|
||||
|
||||
call(c,CMD_CALL_FULL);
|
||||
call(c,server.loading ? CMD_CALL_NONE : CMD_CALL_FULL);
|
||||
|
||||
/* Commands may alter argc/argv, restore mstate. */
|
||||
c->mstate.commands[j].argc = c->argc;
|
||||
|
659  src/networking.c  (file diff suppressed because it is too large)
84  src/object.c
@ -185,7 +185,7 @@ robj *createStringObjectFromLongDouble(long double value, int humanfriendly) {
|
||||
/* Duplicate a string object, with the guarantee that the returned object
|
||||
* has the same encoding as the original one.
|
||||
*
|
||||
* This function also guarantees that duplicating a small integere object
|
||||
* This function also guarantees that duplicating a small integer object
|
||||
* (or a string object that contains a representation of a small integer)
|
||||
* will always result in a fresh object that is unshared (refcount == 1).
|
||||
*
|
||||
@ -1011,12 +1011,24 @@ struct redisMemOverhead *getMemoryOverheadData(void) {
|
||||
|
||||
mem = 0;
|
||||
if (server.aof_state != AOF_OFF) {
|
||||
mem += sdslen(server.aof_buf);
|
||||
mem += sdsalloc(server.aof_buf);
|
||||
mem += aofRewriteBufferSize();
|
||||
}
|
||||
mh->aof_buffer = mem;
|
||||
mem_total+=mem;
|
||||
|
||||
mem = server.lua_scripts_mem;
|
||||
mem += dictSize(server.lua_scripts) * sizeof(dictEntry) +
|
||||
dictSlots(server.lua_scripts) * sizeof(dictEntry*);
|
||||
mem += dictSize(server.repl_scriptcache_dict) * sizeof(dictEntry) +
|
||||
dictSlots(server.repl_scriptcache_dict) * sizeof(dictEntry*);
|
||||
if (listLength(server.repl_scriptcache_fifo) > 0) {
|
||||
mem += listLength(server.repl_scriptcache_fifo) * (sizeof(listNode) +
|
||||
sdsZmallocSize(listNodeValue(listFirst(server.repl_scriptcache_fifo))));
|
||||
}
|
||||
mh->lua_caches = mem;
|
||||
mem_total+=mem;
|
||||
|
||||
for (j = 0; j < server.dbnum; j++) {
|
||||
redisDb *db = server.db+j;
|
||||
long long keyscount = dictSize(db->dict);
|
||||
@ -1074,6 +1086,7 @@ sds getMemoryDoctorReport(void) {
|
||||
int high_alloc_rss = 0; /* High rss overhead. */
|
||||
int big_slave_buf = 0; /* Slave buffers are too big. */
|
||||
int big_client_buf = 0; /* Client buffers are too big. */
|
||||
int many_scripts = 0; /* Script cache has too many scripts. */
|
||||
int num_reports = 0;
|
||||
struct redisMemOverhead *mh = getMemoryOverheadData();
|
||||
|
||||
@ -1124,6 +1137,12 @@ sds getMemoryDoctorReport(void) {
|
||||
big_slave_buf = 1;
|
||||
num_reports++;
|
||||
}
|
||||
|
||||
/* Too many scripts are cached? */
|
||||
if (dictSize(server.lua_scripts) > 1000) {
|
||||
many_scripts = 1;
|
||||
num_reports++;
|
||||
}
|
||||
}
|
||||
|
||||
sds s;
|
||||
@ -1153,14 +1172,17 @@ sds getMemoryDoctorReport(void) {
|
||||
s = sdscatprintf(s," * High allocator RSS overhead: This instance has an RSS memory overhead is greater than 1.1 (this means that the Resident Set Size of the allocator is much larger than the sum what the allocator actually holds). This problem is usually due to a large peak memory (check if there is a peak memory entry above in the report), you can try the MEMORY PURGE command to reclaim it.\n\n");
|
||||
}
|
||||
if (high_proc_rss) {
|
||||
s = sdscatprintf(s," * High process RSS overhead: This instance has non-allocator RSS memory overhead is greater than 1.1 (this means that the Resident Set Size of the Redis process is much larger than the RSS the allocator holds). This problem may be due to LUA scripts or Modules.\n\n");
|
||||
s = sdscatprintf(s," * High process RSS overhead: This instance has non-allocator RSS memory overhead is greater than 1.1 (this means that the Resident Set Size of the Redis process is much larger than the RSS the allocator holds). This problem may be due to Lua scripts or Modules.\n\n");
|
||||
}
|
||||
if (big_slave_buf) {
|
||||
s = sdscat(s," * Big slave buffers: The slave output buffers in this instance are greater than 10MB for each slave (on average). This likely means that there is some slave instance that is struggling receiving data, either because it is too slow or because of networking issues. As a result, data piles on the master output buffers. Please try to identify what slave is not receiving data correctly and why. You can use the INFO output in order to check the slaves delays and the CLIENT LIST command to check the output buffers of each slave.\n\n");
|
||||
s = sdscat(s," * Big replica buffers: The replica output buffers in this instance are greater than 10MB for each replica (on average). This likely means that there is some replica instance that is struggling receiving data, either because it is too slow or because of networking issues. As a result, data piles on the master output buffers. Please try to identify what replica is not receiving data correctly and why. You can use the INFO output in order to check the replicas delays and the CLIENT LIST command to check the output buffers of each replica.\n\n");
|
||||
}
|
||||
if (big_client_buf) {
|
||||
s = sdscat(s," * Big client buffers: The clients output buffers in this instance are greater than 200K per client (on average). This may result from different causes, like Pub/Sub clients subscribed to channels bot not receiving data fast enough, so that data piles on the Redis instance output buffer, or clients sending commands with large replies or very large sequences of commands in the same pipeline. Please use the CLIENT LIST command in order to investigate the issue if it causes problems in your instance, or to understand better why certain clients are using a big amount of memory.\n\n");
|
||||
}
|
||||
if (many_scripts) {
|
||||
s = sdscat(s," * Many scripts: There seem to be many cached scripts in this instance (more than 1000). This may be because scripts are generated and `EVAL`ed, instead of being parameterized (with KEYS and ARGV), `SCRIPT LOAD`ed and `EVALSHA`ed. Unless `SCRIPT FLUSH` is called periodically, the scripts' caches may end up consuming most of your memory.\n\n");
|
||||
}
|
||||
s = sdscat(s,"I'm here to keep you safe, Sam. I want to help you.\n");
|
||||
}
|
||||
freeMemoryOverheadData(mh);
|
||||
@ -1226,15 +1248,15 @@ NULL
|
||||
};
|
||||
addReplyHelp(c, help);
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"refcount") && c->argc == 3) {
|
||||
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.nullbulk))
|
||||
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.null[c->resp]))
|
||||
== NULL) return;
|
||||
addReplyLongLong(c,o->refcount);
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"encoding") && c->argc == 3) {
|
||||
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.nullbulk))
|
||||
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.null[c->resp]))
|
||||
== NULL) return;
|
||||
addReplyBulkCString(c,strEncoding(o->encoding));
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"idletime") && c->argc == 3) {
|
||||
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.nullbulk))
|
||||
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.null[c->resp]))
|
||||
== NULL) return;
|
||||
if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
|
||||
addReplyError(c,"An LFU maxmemory policy is selected, idle time not tracked. Please note that when switching between policies at runtime LRU and LFU data will take some time to adjust.");
|
||||
@ -1242,7 +1264,7 @@ NULL
|
||||
}
|
||||
addReplyLongLong(c,estimateObjectIdleTime(o)/1000);
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"freq") && c->argc == 3) {
|
||||
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.nullbulk))
|
||||
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.null[c->resp]))
|
||||
== NULL) return;
|
||||
if (!(server.maxmemory_policy & MAXMEMORY_FLAG_LFU)) {
|
||||
addReplyError(c,"An LFU maxmemory policy is not selected, access frequency not tracked. Please note that when switching between policies at runtime LRU and LFU data will take some time to adjust.");
|
||||
@ -1263,9 +1285,18 @@ NULL
|
||||
*
|
||||
* Usage: MEMORY usage <key> */
|
||||
void memoryCommand(client *c) {
|
||||
robj *o;
|
||||
|
||||
if (!strcasecmp(c->argv[1]->ptr,"usage") && c->argc >= 3) {
|
||||
if (!strcasecmp(c->argv[1]->ptr,"help") && c->argc == 2) {
|
||||
const char *help[] = {
|
||||
"DOCTOR - Return memory problems reports.",
|
||||
"MALLOC-STATS -- Return internal statistics report from the memory allocator.",
|
||||
"PURGE -- Attempt to purge dirty pages for reclamation by the allocator.",
|
||||
"STATS -- Return information about the memory usage of the server.",
|
||||
"USAGE <key> [SAMPLES <count>] -- Return memory in bytes used by <key> and its value. Nested values are sampled up to <count> times (default: 5).",
|
||||
NULL
|
||||
};
|
||||
addReplyHelp(c, help);
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"usage") && c->argc >= 3) {
|
||||
dictEntry *de;
|
||||
long long samples = OBJ_COMPUTE_SIZE_DEF_SAMPLES;
|
||||
for (int j = 3; j < c->argc; j++) {
|
||||
if (!strcasecmp(c->argv[j]->ptr,"samples") &&
|
||||
@ -1284,16 +1315,18 @@ void memoryCommand(client *c) {
|
||||
return;
|
||||
}
|
||||
}
|
||||
if ((o = objectCommandLookupOrReply(c,c->argv[2],shared.nullbulk))
|
||||
== NULL) return;
|
||||
size_t usage = objectComputeSize(o,samples);
|
||||
usage += sdsAllocSize(c->argv[2]->ptr);
|
||||
if ((de = dictFind(c->db->dict,c->argv[2]->ptr)) == NULL) {
|
||||
addReplyNull(c);
|
||||
return;
|
||||
}
|
||||
size_t usage = objectComputeSize(dictGetVal(de),samples);
|
||||
usage += sdsAllocSize(dictGetKey(de));
|
||||
usage += sizeof(dictEntry);
|
||||
addReplyLongLong(c,usage);
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"stats") && c->argc == 2) {
|
||||
struct redisMemOverhead *mh = getMemoryOverheadData();
|
||||
|
||||
addReplyMultiBulkLen(c,(24+mh->num_dbs)*2);
|
||||
addReplyMapLen(c,25+mh->num_dbs);
|
||||
|
||||
addReplyBulkCString(c,"peak.allocated");
|
||||
addReplyLongLong(c,mh->peak_allocated);
|
||||
@ -1316,11 +1349,14 @@ void memoryCommand(client *c) {
|
||||
addReplyBulkCString(c,"aof.buffer");
|
||||
addReplyLongLong(c,mh->aof_buffer);
|
||||
|
||||
addReplyBulkCString(c,"lua.caches");
|
||||
addReplyLongLong(c,mh->lua_caches);
|
||||
|
||||
for (size_t j = 0; j < mh->num_dbs; j++) {
|
||||
char dbname[32];
|
||||
snprintf(dbname,sizeof(dbname),"db.%zd",mh->db[j].dbid);
|
||||
addReplyBulkCString(c,dbname);
|
||||
addReplyMultiBulkLen(c,4);
|
||||
addReplyMapLen(c,2);
|
||||
|
||||
addReplyBulkCString(c,"overhead.hashtable.main");
|
||||
addReplyLongLong(c,mh->db[j].overhead_ht_main);
|
||||
@ -1409,19 +1445,7 @@ void memoryCommand(client *c) {
|
||||
addReply(c, shared.ok);
|
||||
/* Nothing to do for other allocators. */
|
||||
#endif
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"help") && c->argc == 2) {
|
||||
addReplyMultiBulkLen(c,5);
|
||||
addReplyBulkCString(c,
|
||||
"MEMORY DOCTOR - Outputs memory problems report");
|
||||
addReplyBulkCString(c,
|
||||
"MEMORY USAGE <key> [SAMPLES <count>] - Estimate memory usage of key");
|
||||
addReplyBulkCString(c,
|
||||
"MEMORY STATS - Show memory usage details");
|
||||
addReplyBulkCString(c,
|
||||
"MEMORY PURGE - Ask the allocator to release memory");
|
||||
addReplyBulkCString(c,
|
||||
"MEMORY MALLOC-STATS - Show allocator internal stats");
|
||||
} else {
|
||||
addReplyError(c,"Syntax error. Try MEMORY HELP");
|
||||
addReplyErrorFormat(c, "Unknown subcommand or wrong number of arguments for '%s'. Try MEMORY HELP", (char*)c->argv[1]->ptr);
|
||||
}
|
||||
}
|
||||
|
153  src/pubsub.c
@ -29,6 +29,93 @@
|
||||
|
||||
#include "server.h"
|
||||
|
||||
int clientSubscriptionsCount(client *c);
|
||||
|
||||
/*-----------------------------------------------------------------------------
|
||||
* Pubsub client replies API
|
||||
*----------------------------------------------------------------------------*/
|
||||
|
||||
/* Send a pubsub message of type "message" to the client. */
|
||||
void addReplyPubsubMessage(client *c, robj *channel, robj *msg) {
|
||||
if (c->resp == 2)
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
else
|
||||
addReplyPushLen(c,3);
|
||||
addReply(c,shared.messagebulk);
|
||||
addReplyBulk(c,channel);
|
||||
addReplyBulk(c,msg);
|
||||
}
|
||||
|
||||
/* Send a pubsub message of type "pmessage" to the client. The difference
|
||||
* with the "message" type delivered by addReplyPubsubMessage() is that
|
||||
* this message format also includes the pattern that matched the message. */
|
||||
void addReplyPubsubPatMessage(client *c, robj *pat, robj *channel, robj *msg) {
|
||||
if (c->resp == 2)
|
||||
addReply(c,shared.mbulkhdr[4]);
|
||||
else
|
||||
addReplyPushLen(c,4);
|
||||
addReply(c,shared.pmessagebulk);
|
||||
addReplyBulk(c,pat);
|
||||
addReplyBulk(c,channel);
|
||||
addReplyBulk(c,msg);
|
||||
}
|
||||
|
||||
/* Send the pubsub subscription notification to the client. */
|
||||
void addReplyPubsubSubscribed(client *c, robj *channel) {
|
||||
if (c->resp == 2)
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
else
|
||||
addReplyPushLen(c,3);
|
||||
addReply(c,shared.subscribebulk);
|
||||
addReplyBulk(c,channel);
|
||||
addReplyLongLong(c,clientSubscriptionsCount(c));
|
||||
}
|
||||
|
||||
/* Send the pubsub unsubscription notification to the client.
|
||||
* Channel can be NULL: this is useful when the client sends a mass
|
||||
* unsubscribe command but there are no channels to unsubscribe from: we
|
||||
* still send a notification. */
|
||||
void addReplyPubsubUnsubscribed(client *c, robj *channel) {
|
||||
if (c->resp == 2)
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
else
|
||||
addReplyPushLen(c,3);
|
||||
addReply(c,shared.unsubscribebulk);
|
||||
if (channel)
|
||||
addReplyBulk(c,channel);
|
||||
else
|
||||
addReplyNull(c);
|
||||
addReplyLongLong(c,clientSubscriptionsCount(c));
|
||||
}
|
||||
|
||||
/* Send the pubsub pattern subscription notification to the client. */
|
||||
void addReplyPubsubPatSubscribed(client *c, robj *pattern) {
|
||||
if (c->resp == 2)
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
else
|
||||
addReplyPushLen(c,3);
|
||||
addReply(c,shared.psubscribebulk);
|
||||
addReplyBulk(c,pattern);
|
||||
addReplyLongLong(c,clientSubscriptionsCount(c));
|
||||
}
|
||||
|
||||
/* Send the pubsub pattern unsubscription notification to the client.
|
||||
* Pattern can be NULL: this is useful when the client sends a mass
|
||||
* punsubscribe command but there are no patterns to unsubscribe from: we
|
||||
* still send a notification. */
|
||||
void addReplyPubsubPatUnsubscribed(client *c, robj *pattern) {
|
||||
if (c->resp == 2)
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
else
|
||||
addReplyPushLen(c,3);
|
||||
addReply(c,shared.punsubscribebulk);
|
||||
if (pattern)
|
||||
addReplyBulk(c,pattern);
|
||||
else
|
||||
addReplyNull(c);
|
||||
addReplyLongLong(c,clientSubscriptionsCount(c));
|
||||
}
|
||||
|
||||
/*-----------------------------------------------------------------------------
|
||||
* Pubsub low level API
|
||||
*----------------------------------------------------------------------------*/
|
||||
@ -76,10 +163,7 @@ int pubsubSubscribeChannel(client *c, robj *channel) {
|
||||
listAddNodeTail(clients,c);
|
||||
}
|
||||
/* Notify the client */
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
addReply(c,shared.subscribebulk);
|
||||
addReplyBulk(c,channel);
|
||||
addReplyLongLong(c,clientSubscriptionsCount(c));
|
||||
addReplyPubsubSubscribed(c,channel);
|
||||
return retval;
|
||||
}
|
||||
|
||||
@ -111,14 +195,7 @@ int pubsubUnsubscribeChannel(client *c, robj *channel, int notify) {
|
||||
}
|
||||
}
|
||||
/* Notify the client */
|
||||
if (notify) {
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
addReply(c,shared.unsubscribebulk);
|
||||
addReplyBulk(c,channel);
|
||||
addReplyLongLong(c,dictSize(c->pubsub_channels)+
|
||||
listLength(c->pubsub_patterns));
|
||||
|
||||
}
|
||||
if (notify) addReplyPubsubUnsubscribed(c,channel);
|
||||
decrRefCount(channel); /* it is finally safe to release it */
|
||||
return retval;
|
||||
}
|
||||
@ -138,10 +215,7 @@ int pubsubSubscribePattern(client *c, robj *pattern) {
|
||||
listAddNodeTail(server.pubsub_patterns,pat);
|
||||
}
|
||||
/* Notify the client */
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
addReply(c,shared.psubscribebulk);
|
||||
addReplyBulk(c,pattern);
|
||||
addReplyLongLong(c,clientSubscriptionsCount(c));
|
||||
addReplyPubsubPatSubscribed(c,pattern);
|
||||
return retval;
|
||||
}
|
||||
|
||||
@ -162,13 +236,7 @@ int pubsubUnsubscribePattern(client *c, robj *pattern, int notify) {
|
||||
listDelNode(server.pubsub_patterns,ln);
|
||||
}
|
||||
/* Notify the client */
|
||||
if (notify) {
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
addReply(c,shared.punsubscribebulk);
|
||||
addReplyBulk(c,pattern);
|
||||
addReplyLongLong(c,dictSize(c->pubsub_channels)+
|
||||
listLength(c->pubsub_patterns));
|
||||
}
|
||||
if (notify) addReplyPubsubPatUnsubscribed(c,pattern);
|
||||
decrRefCount(pattern);
|
||||
return retval;
|
||||
}
|
||||
@ -186,13 +254,7 @@ int pubsubUnsubscribeAllChannels(client *c, int notify) {
|
||||
count += pubsubUnsubscribeChannel(c,channel,notify);
|
||||
}
|
||||
/* We were subscribed to nothing? Still reply to the client. */
|
||||
if (notify && count == 0) {
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
addReply(c,shared.unsubscribebulk);
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyLongLong(c,dictSize(c->pubsub_channels)+
|
||||
listLength(c->pubsub_patterns));
|
||||
}
|
||||
if (notify && count == 0) addReplyPubsubUnsubscribed(c,NULL);
|
||||
dictReleaseIterator(di);
|
||||
return count;
|
||||
}
|
||||
@ -210,14 +272,7 @@ int pubsubUnsubscribeAllPatterns(client *c, int notify) {
|
||||
|
||||
count += pubsubUnsubscribePattern(c,pattern,notify);
|
||||
}
|
||||
if (notify && count == 0) {
|
||||
/* We were subscribed to nothing? Still reply to the client. */
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
addReply(c,shared.punsubscribebulk);
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyLongLong(c,dictSize(c->pubsub_channels)+
|
||||
listLength(c->pubsub_patterns));
|
||||
}
|
||||
if (notify && count == 0) addReplyPubsubPatUnsubscribed(c,NULL);
|
||||
return count;
|
||||
}
|
||||
|
||||
@ -238,11 +293,7 @@ int pubsubPublishMessage(robj *channel, robj *message) {
|
||||
listRewind(list,&li);
|
||||
while ((ln = listNext(&li)) != NULL) {
|
||||
client *c = ln->value;
|
||||
|
||||
addReply(c,shared.mbulkhdr[3]);
|
||||
addReply(c,shared.messagebulk);
|
||||
addReplyBulk(c,channel);
|
||||
addReplyBulk(c,message);
|
||||
addReplyPubsubMessage(c,channel,message);
|
||||
receivers++;
|
||||
}
|
||||
}
|
||||
@ -256,12 +307,10 @@ int pubsubPublishMessage(robj *channel, robj *message) {
|
||||
if (stringmatchlen((char*)pat->pattern->ptr,
|
||||
sdslen(pat->pattern->ptr),
|
||||
(char*)channel->ptr,
|
||||
sdslen(channel->ptr),0)) {
|
||||
addReply(pat->client,shared.mbulkhdr[4]);
|
||||
addReply(pat->client,shared.pmessagebulk);
|
||||
addReplyBulk(pat->client,pat->pattern);
|
||||
addReplyBulk(pat->client,channel);
|
||||
addReplyBulk(pat->client,message);
|
||||
sdslen(channel->ptr),0))
|
||||
{
|
||||
addReplyPubsubPatMessage(pat->client,
|
||||
pat->pattern,channel,message);
|
||||
receivers++;
|
||||
}
|
||||
}
|
||||
@ -343,7 +392,7 @@ NULL
|
||||
long mblen = 0;
|
||||
void *replylen;
|
||||
|
||||
replylen = addDeferredMultiBulkLength(c);
|
||||
replylen = addReplyDeferredLen(c);
|
||||
while((de = dictNext(di)) != NULL) {
|
||||
robj *cobj = dictGetKey(de);
|
||||
sds channel = cobj->ptr;
|
||||
@ -356,12 +405,12 @@ NULL
|
||||
}
|
||||
}
|
||||
dictReleaseIterator(di);
|
||||
setDeferredMultiBulkLength(c,replylen,mblen);
|
||||
setDeferredArrayLen(c,replylen,mblen);
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"numsub") && c->argc >= 2) {
|
||||
/* PUBSUB NUMSUB [Channel_1 ... Channel_N] */
|
||||
int j;
|
||||
|
||||
addReplyMultiBulkLen(c,(c->argc-2)*2);
|
||||
addReplyArrayLen(c,(c->argc-2)*2);
|
||||
for (j = 2; j < c->argc; j++) {
|
||||
list *l = dictFetchValue(server.pubsub_channels,c->argv[j]);
|
||||
|
||||
|
@ -40,7 +40,7 @@
|
||||
* container: 2 bits, NONE=1, ZIPLIST=2.
|
||||
* recompress: 1 bit, bool, true if node is temporarily decompressed for usage.
|
||||
* attempted_compress: 1 bit, boolean, used for verifying during testing.
|
||||
* extra: 12 bits, free for future use; pads out the remainder of 32 bits */
|
||||
* extra: 10 bits, free for future use; pads out the remainder of 32 bits */
|
||||
typedef struct quicklistNode {
|
||||
struct quicklistNode *prev;
|
||||
struct quicklistNode *next;
|
||||
|
src/rax.c (267 lines changed)
@ -1,6 +1,6 @@
|
||||
/* Rax -- A radix tree implementation.
|
||||
*
|
||||
* Copyright (c) 2017, Salvatore Sanfilippo <antirez at gmail dot com>
|
||||
* Copyright (c) 2017-2018, Salvatore Sanfilippo <antirez at gmail dot com>
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
@ -51,14 +51,18 @@ void *raxNotFound = (void*)"rax-not-found-pointer";
|
||||
|
||||
void raxDebugShowNode(const char *msg, raxNode *n);
|
||||
|
||||
/* Turn debugging messages on/off. */
|
||||
#if 0
|
||||
/* Turn debugging messages on/off by compiling with RAX_DEBUG_MSG macro on.
|
||||
* When RAX_DEBUG_MSG is defined by default Rax operations will emit a lot
|
||||
* of debugging info to the standard output, however you can still turn
|
||||
* debugging on/off in order to enable it only when you suspect there is an
|
||||
* operation causing a bug using the function raxSetDebugMsg(). */
|
||||
#ifdef RAX_DEBUG_MSG
|
||||
#define debugf(...) \
|
||||
do { \
|
||||
if (raxDebugMsg) { \
|
||||
printf("%s:%s:%d:\t", __FILE__, __FUNCTION__, __LINE__); \
|
||||
printf(__VA_ARGS__); \
|
||||
fflush(stdout); \
|
||||
} while (0);
|
||||
}
|
||||
|
||||
#define debugnode(msg,n) raxDebugShowNode(msg,n)
|
||||
#else
|
||||
@ -66,6 +70,16 @@ void raxDebugShowNode(const char *msg, raxNode *n);
|
||||
#define debugnode(msg,n)
|
||||
#endif
|
||||
|
||||
/* By default log debug info if RAX_DEBUG_MSG is defined. */
|
||||
static int raxDebugMsg = 1;
|
||||
|
||||
/* When debug messages are enabled, turn them on/off dynamically. By
|
||||
* default they are enabled. Set the state to 0 to disable, and 1 to
|
||||
* re-enable. */
|
||||
void raxSetDebugMsg(int onoff) {
|
||||
raxDebugMsg = onoff;
|
||||
}
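How the two knobs above combine in practice, as a sketch (not from the diff): rax.c is built with -DRAX_DEBUG_MSG, output is silenced for the bulk of a test run, and re-enabled with raxSetDebugMsg() only around the operation under suspicion.

    #include <stddef.h>
    #include "rax.h"

    /* Sketch: rax.c compiled with -DRAX_DEBUG_MSG; trace one insert only. */
    void insert_with_tracing(rax *t, unsigned char *key, size_t len, void *val) {
        raxSetDebugMsg(0);      /* silence the bulk of the run               */
        /* ... earlier, uninteresting operations ... */
        raxSetDebugMsg(1);      /* trace only the operation under suspicion  */
        raxInsert(t, key, len, val, NULL);
        raxSetDebugMsg(0);
    }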
|
||||
|
||||
/* ------------------------- raxStack functions --------------------------
|
||||
* The raxStack is a simple stack of pointers that is capable of switching
|
||||
* from using a stack-allocated array to dynamic heap once a given number of
|
||||
@ -134,12 +148,43 @@ static inline void raxStackFree(raxStack *ts) {
|
||||
* Radix tree implementation
|
||||
* --------------------------------------------------------------------------*/
|
||||
|
||||
/* Return the padding needed in the characters section of a node having size
|
||||
* 'nodesize'. The padding is needed to store the child pointers to aligned
|
||||
* addresses. Note that we add 4 to the node size because the node has a four
|
||||
* bytes header. */
|
||||
#define raxPadding(nodesize) ((sizeof(void*)-((nodesize+4) % sizeof(void*))) & (sizeof(void*)-1))
|
||||
|
||||
/* Return the pointer to the last child pointer in a node. For the compressed
|
||||
* nodes this is the only child pointer. */
|
||||
#define raxNodeLastChildPtr(n) ((raxNode**) ( \
|
||||
((char*)(n)) + \
|
||||
raxNodeCurrentLength(n) - \
|
||||
sizeof(raxNode*) - \
|
||||
(((n)->iskey && !(n)->isnull) ? sizeof(void*) : 0) \
|
||||
))
|
||||
|
||||
/* Return the pointer to the first child pointer. */
|
||||
#define raxNodeFirstChildPtr(n) ((raxNode**) ( \
|
||||
(n)->data + \
|
||||
(n)->size + \
|
||||
raxPadding((n)->size)))
|
||||
|
||||
/* Return the current total size of the node. Note that the second line
|
||||
* computes the padding after the string of characters, needed in order to
|
||||
* save pointers to aligned addresses. */
|
||||
#define raxNodeCurrentLength(n) ( \
|
||||
sizeof(raxNode)+(n)->size+ \
|
||||
raxPadding((n)->size)+ \
|
||||
((n)->iscompr ? sizeof(raxNode*) : sizeof(raxNode*)*(n)->size)+ \
|
||||
(((n)->iskey && !(n)->isnull)*sizeof(void*)) \
|
||||
)
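The padding arithmetic is easiest to see with concrete sizes; the following standalone sketch (not part of the commit) evaluates the same expression as raxPadding() for a few node sizes. The comments assume a 64-bit build:

    #include <stdio.h>

    #define PADDING(nodesize) \
        ((sizeof(void*) - (((nodesize) + 4) % sizeof(void*))) & (sizeof(void*) - 1))

    int main(void) {
        for (size_t size = 0; size <= 8; size++)
            printf("chars=%zu header+chars=%zu padding=%zu\n",
                   size, size + 4, PADDING(size));
        /* With 8-byte pointers: size=4 uses 8 bytes, padding=0;
         * size=5 uses 9 bytes, padding=7 so child pointers stay aligned. */
        return 0;
    }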
|
||||
|
||||
/* Allocate a new non compressed node with the specified number of children.
|
||||
* If datafiled is true, the allocation is made large enough to hold the
|
||||
* associated data pointer.
|
||||
* Returns the new node pointer. On out of memory NULL is returned. */
|
||||
raxNode *raxNewNode(size_t children, int datafield) {
|
||||
size_t nodesize = sizeof(raxNode)+children+
|
||||
size_t nodesize = sizeof(raxNode)+children+raxPadding(children)+
|
||||
sizeof(raxNode*)*children;
|
||||
if (datafield) nodesize += sizeof(void*);
|
||||
raxNode *node = rax_malloc(nodesize);
|
||||
@ -167,13 +212,6 @@ rax *raxNew(void) {
|
||||
}
|
||||
}
|
||||
|
||||
/* Return the current total size of the node. */
|
||||
#define raxNodeCurrentLength(n) ( \
|
||||
sizeof(raxNode)+(n)->size+ \
|
||||
((n)->iscompr ? sizeof(raxNode*) : sizeof(raxNode*)*(n)->size)+ \
|
||||
(((n)->iskey && !(n)->isnull)*sizeof(void*)) \
|
||||
)
|
||||
|
||||
/* realloc the node to make room for auxiliary data in order
|
||||
* to store an item in that node. On out of memory NULL is returned. */
|
||||
raxNode *raxReallocForData(raxNode *n, void *data) {
|
||||
@ -216,18 +254,17 @@ void *raxGetData(raxNode *n) {
|
||||
raxNode *raxAddChild(raxNode *n, unsigned char c, raxNode **childptr, raxNode ***parentlink) {
|
||||
assert(n->iscompr == 0);
|
||||
|
||||
size_t curlen = sizeof(raxNode)+
|
||||
n->size+
|
||||
sizeof(raxNode*)*n->size;
|
||||
size_t newlen;
|
||||
size_t curlen = raxNodeCurrentLength(n);
|
||||
n->size++;
|
||||
size_t newlen = raxNodeCurrentLength(n);
|
||||
n->size--; /* For now restore the orignal size. We'll update it only on
|
||||
success at the end. */
|
||||
|
||||
/* Alloc the new child we will link to 'n'. */
|
||||
raxNode *child = raxNewNode(0,0);
|
||||
if (child == NULL) return NULL;
|
||||
|
||||
/* Make space in the original node. */
|
||||
if (n->iskey) curlen += sizeof(void*);
|
||||
newlen = curlen+sizeof(raxNode*)+1; /* Add 1 char and 1 pointer. */
|
||||
raxNode *newn = rax_realloc(n,newlen);
|
||||
if (newn == NULL) {
|
||||
rax_free(child);
|
||||
@ -235,14 +272,34 @@ raxNode *raxAddChild(raxNode *n, unsigned char c, raxNode **childptr, raxNode **
|
||||
}
|
||||
n = newn;
|
||||
|
||||
/* After the reallocation, we have 5/9 (depending on the system
|
||||
* pointer size) bytes at the end, that is, the additional char
|
||||
* in the 'data' section, plus one pointer to the new child:
|
||||
/* After the reallocation, we have up to 8/16 (depending on the system
|
||||
* pointer size, and the required node padding) bytes at the end, that is,
|
||||
* the additional char in the 'data' section, plus one pointer to the new
|
||||
* child, plus the padding needed in order to store addresses into aligned
|
||||
* locations.
|
||||
*
|
||||
* [numc][abx][ap][bp][xp]|auxp|.....
|
||||
* So if we start with the following node, having "abde" edges.
|
||||
*
|
||||
* Note:
|
||||
* - We assume 4 bytes pointer for simplicity.
|
||||
* - Each space below corresponds to one byte
|
||||
*
|
||||
* [HDR*][abde][Aptr][Bptr][Dptr][Eptr]|AUXP|
|
||||
*
|
||||
* After the reallocation we need: 1 byte for the new edge character
|
||||
* plus 4 bytes for a new child pointer (assuming 32 bit machine).
|
||||
* However after adding 1 byte to the edge char, the header + the edge
|
||||
* characters are no longer aligned, so we also need 3 bytes of padding.
|
||||
* In total the reallocation will add 1+4+3 bytes = 8 bytes:
|
||||
*
|
||||
* (Blank bytes are represented by ".")
|
||||
*
|
||||
* [HDR*][abde][Aptr][Bptr][Dptr][Eptr]|AUXP|[....][....]
|
||||
*
|
||||
* Let's find where to insert the new child in order to make sure
|
||||
* it is inserted in-place lexicographically. */
|
||||
* it is inserted in-place lexicographically. Assuming we are adding
|
||||
* a child "c" in our case pos will be = 2 after the end of the following
|
||||
* loop. */
|
||||
int pos;
|
||||
for (pos = 0; pos < n->size; pos++) {
|
||||
if (n->data[pos] > c) break;
|
||||
@ -252,55 +309,81 @@ raxNode *raxAddChild(raxNode *n, unsigned char c, raxNode **childptr, raxNode **
|
||||
* so that we can mess with the other data without overwriting it.
|
||||
* We will obtain something like that:
|
||||
*
|
||||
* [numc][abx][ap][bp][xp].....|auxp| */
|
||||
unsigned char *src;
|
||||
* [HDR*][abde][Aptr][Bptr][Dptr][Eptr][....][....]|AUXP|
|
||||
*/
|
||||
unsigned char *src, *dst;
|
||||
if (n->iskey && !n->isnull) {
|
||||
src = n->data+n->size+sizeof(raxNode*)*n->size;
|
||||
memmove(src+1+sizeof(raxNode*),src,sizeof(void*));
|
||||
src = ((unsigned char*)n+curlen-sizeof(void*));
|
||||
dst = ((unsigned char*)n+newlen-sizeof(void*));
|
||||
memmove(dst,src,sizeof(void*));
|
||||
}
|
||||
|
||||
/* Now imagine we are adding a node with edge 'c'. The insertion
|
||||
* point is between 'b' and 'x', so the 'pos' variable value is
|
||||
* To start, move all the child pointers after the insertion point
|
||||
* of 1+sizeof(pointer) bytes on the right, to obtain:
|
||||
/* Compute the "shift", that is, how many bytes we need to move the
|
||||
* pointers section forward because of the addition of the new child
|
||||
* byte in the string section. Note that if we had no padding, that
|
||||
* would be always "1", since we are adding a single byte in the string
|
||||
* section of the node (where now there is "abde" basically).
|
||||
*
|
||||
* [numc][abx][ap][bp].....[xp]|auxp| */
|
||||
src = n->data+n->size+sizeof(raxNode*)*pos;
|
||||
memmove(src+1+sizeof(raxNode*),src,sizeof(raxNode*)*(n->size-pos));
|
||||
* However we have padding, so it could be zero, or up to 8.
|
||||
*
|
||||
* Another way to think at the shift is, how many bytes we need to
|
||||
* move child pointers forward *other than* the obvious sizeof(void*)
|
||||
* needed for the additional pointer itself. */
|
||||
size_t shift = newlen - curlen - sizeof(void*);
|
||||
|
||||
/* We said we are adding a node with edge 'c'. The insertion
|
||||
* point is between 'b' and 'd', so the 'pos' variable value is
|
||||
* the index of the first child pointer that we need to move forward
|
||||
* to make space for our new pointer.
|
||||
*
|
||||
* To start, move all the child pointers after the insertion point
|
||||
* of shift+sizeof(pointer) bytes on the right, to obtain:
|
||||
*
|
||||
* [HDR*][abde][Aptr][Bptr][....][....][Dptr][Eptr]|AUXP|
|
||||
*/
|
||||
src = n->data+n->size+
|
||||
raxPadding(n->size)+
|
||||
sizeof(raxNode*)*pos;
|
||||
memmove(src+shift+sizeof(raxNode*),src,sizeof(raxNode*)*(n->size-pos));
|
||||
|
||||
/* Move the pointers to the left of the insertion position as well. Often
|
||||
* we don't need to do anything if there was already some padding to use. In
|
||||
* that case the final destination of the pointers will be the same, however
|
||||
* in our example there was no pre-existing padding, so we added one byte
|
||||
* plus thre bytes of padding. After the next memmove() things will look
|
||||
* like thata:
|
||||
*
|
||||
* [HDR*][abde][....][Aptr][Bptr][....][Dptr][Eptr]|AUXP|
|
||||
*/
|
||||
if (shift) {
|
||||
src = (unsigned char*) raxNodeFirstChildPtr(n);
|
||||
memmove(src+shift,src,sizeof(raxNode*)*pos);
|
||||
}
|
||||
|
||||
/* Now make the space for the additional char in the data section,
|
||||
* but also move the pointers before the insertion point in the right
|
||||
* by 1 byte, in order to obtain the following:
|
||||
* but also move the pointers before the insertion point to the right
|
||||
* by shift bytes, in order to obtain the following:
|
||||
*
|
||||
* [numc][ab.x][ap][bp]....[xp]|auxp| */
|
||||
* [HDR*][ab.d][e...][Aptr][Bptr][....][Dptr][Eptr]|AUXP|
|
||||
*/
|
||||
src = n->data+pos;
|
||||
memmove(src+1,src,n->size-pos+sizeof(raxNode*)*pos);
|
||||
memmove(src+1,src,n->size-pos);
|
||||
|
||||
/* We can now set the character and its child node pointer to get:
|
||||
*
|
||||
* [numc][abcx][ap][bp][cp]....|auxp|
|
||||
* [numc][abcx][ap][bp][cp][xp]|auxp| */
|
||||
* [HDR*][abcd][e...][Aptr][Bptr][....][Dptr][Eptr]|AUXP|
|
||||
* [HDR*][abcd][e...][Aptr][Bptr][Cptr][Dptr][Eptr]|AUXP|
|
||||
*/
|
||||
n->data[pos] = c;
|
||||
n->size++;
|
||||
raxNode **childfield = (raxNode**)(n->data+n->size+sizeof(raxNode*)*pos);
|
||||
src = (unsigned char*) raxNodeFirstChildPtr(n);
|
||||
raxNode **childfield = (raxNode**)(src+sizeof(raxNode*)*pos);
|
||||
memcpy(childfield,&child,sizeof(child));
|
||||
*childptr = child;
|
||||
*parentlink = childfield;
|
||||
return n;
|
||||
}
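To make the shift computation concrete, here is a standalone sketch (not part of the commit) that redoes the arithmetic for the [HDR*][abde] example in the comments above, assuming a 4-byte node header and 32-bit pointers:

    #include <stdio.h>

    static size_t padding(size_t nchars, size_t ptr) {
        return (ptr - ((nchars + 4) % ptr)) & (ptr - 1);
    }

    /* Length of a non-compressed key node: header + edge chars + padding +
     * one child pointer per char + the value pointer. */
    static size_t node_len(size_t nchars, size_t ptr) {
        return 4 + nchars + padding(nchars, ptr) + ptr * nchars + ptr;
    }

    int main(void) {
        size_t ptr = 4;                    /* assume 32-bit pointers          */
        size_t curlen = node_len(4, ptr);  /* "abde"  -> 28 bytes             */
        size_t newlen = node_len(5, ptr);  /* "abcde" -> 36 bytes (+1+4+3)    */
        printf("curlen=%zu newlen=%zu shift=%zu\n",
               curlen, newlen, newlen - curlen - ptr);   /* shift = 4         */
        return 0;
    }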
|
||||
|
||||
/* Return the pointer to the last child pointer in a node. For the compressed
|
||||
* nodes this is the only child pointer. */
|
||||
#define raxNodeLastChildPtr(n) ((raxNode**) ( \
|
||||
((char*)(n)) + \
|
||||
raxNodeCurrentLength(n) - \
|
||||
sizeof(raxNode*) - \
|
||||
(((n)->iskey && !(n)->isnull) ? sizeof(void*) : 0) \
|
||||
))
|
||||
|
||||
/* Return the pointer to the first child pointer. */
|
||||
#define raxNodeFirstChildPtr(n) ((raxNode**)((n)->data+(n)->size))
|
||||
|
||||
/* Turn the node 'n', that must be a node without any children, into a
|
||||
* compressed node representing a set of nodes linked one after the other
|
||||
* and having exactly one child each. The node can be a key or not: this
|
||||
@ -321,7 +404,7 @@ raxNode *raxCompressNode(raxNode *n, unsigned char *s, size_t len, raxNode **chi
|
||||
if (*child == NULL) return NULL;
|
||||
|
||||
/* Make space in the parent node. */
|
||||
newsize = sizeof(raxNode)+len+sizeof(raxNode*);
|
||||
newsize = sizeof(raxNode)+len+raxPadding(len)+sizeof(raxNode*);
|
||||
if (n->iskey) {
|
||||
data = raxGetData(n); /* To restore it later. */
|
||||
if (!n->isnull) newsize += sizeof(void*);
|
||||
@ -619,13 +702,14 @@ int raxGenericInsert(rax *rax, unsigned char *s, size_t len, void *data, void **
|
||||
raxNode *postfix = NULL;
|
||||
|
||||
if (trimmedlen) {
|
||||
nodesize = sizeof(raxNode)+trimmedlen+sizeof(raxNode*);
|
||||
nodesize = sizeof(raxNode)+trimmedlen+raxPadding(trimmedlen)+
|
||||
sizeof(raxNode*);
|
||||
if (h->iskey && !h->isnull) nodesize += sizeof(void*);
|
||||
trimmed = rax_malloc(nodesize);
|
||||
}
|
||||
|
||||
if (postfixlen) {
|
||||
nodesize = sizeof(raxNode)+postfixlen+
|
||||
nodesize = sizeof(raxNode)+postfixlen+raxPadding(postfixlen)+
|
||||
sizeof(raxNode*);
|
||||
postfix = rax_malloc(nodesize);
|
||||
}
|
||||
@ -701,11 +785,12 @@ int raxGenericInsert(rax *rax, unsigned char *s, size_t len, void *data, void **
|
||||
|
||||
/* Allocate postfix & trimmed nodes ASAP to fail for OOM gracefully. */
|
||||
size_t postfixlen = h->size - j;
|
||||
size_t nodesize = sizeof(raxNode)+postfixlen+sizeof(raxNode*);
|
||||
size_t nodesize = sizeof(raxNode)+postfixlen+raxPadding(postfixlen)+
|
||||
sizeof(raxNode*);
|
||||
if (data != NULL) nodesize += sizeof(void*);
|
||||
raxNode *postfix = rax_malloc(nodesize);
|
||||
|
||||
nodesize = sizeof(raxNode)+j+sizeof(raxNode*);
|
||||
nodesize = sizeof(raxNode)+j+raxPadding(j)+sizeof(raxNode*);
|
||||
if (h->iskey && !h->isnull) nodesize += sizeof(void*);
|
||||
raxNode *trimmed = rax_malloc(nodesize);
|
||||
|
||||
@ -875,7 +960,7 @@ raxNode *raxRemoveChild(raxNode *parent, raxNode *child) {
|
||||
return parent;
|
||||
}
|
||||
|
||||
/* Otherwise we need to scan for the children pointer and memmove()
|
||||
/* Otherwise we need to scan for the child pointer and memmove()
|
||||
* accordingly.
|
||||
*
|
||||
* 1. To start we seek the first element in both the children
|
||||
@ -900,13 +985,21 @@ raxNode *raxRemoveChild(raxNode *parent, raxNode *child) {
|
||||
debugf("raxRemoveChild tail len: %d\n", taillen);
|
||||
memmove(e,e+1,taillen);
|
||||
|
||||
/* Since we have one data byte less, also child pointers start one byte
|
||||
* before now. */
|
||||
memmove(((char*)cp)-1,cp,(parent->size-taillen-1)*sizeof(raxNode**));
|
||||
/* Compute the shift, that is the amount of bytes we should move our
|
||||
* child pointers to the left, since the removal of one edge character
|
||||
* and the corresponding padding change, may change the layout.
|
||||
* We just check if in the old version of the node there was at the
|
||||
* end just a single byte and all padding: in that case removing one char
|
||||
* will remove a whole sizeof(void*) word. */
|
||||
size_t shift = ((parent->size+4) % sizeof(void*)) == 1 ? sizeof(void*) : 0;
|
||||
|
||||
/* Move the remaining "tail" pointer at the right position as well. */
|
||||
/* Move the children pointers before the deletion point. */
|
||||
if (shift)
|
||||
memmove(((char*)cp)-shift,cp,(parent->size-taillen-1)*sizeof(raxNode**));
|
||||
|
||||
/* Move the remaining "tail" pointers at the right position as well. */
|
||||
size_t valuelen = (parent->iskey && !parent->isnull) ? sizeof(void*) : 0;
|
||||
memmove(((char*)c)-1,c+1,taillen*sizeof(raxNode**)+valuelen);
|
||||
memmove(((char*)c)-shift,c+1,taillen*sizeof(raxNode**)+valuelen);
|
||||
|
||||
/* 4. Update size. */
|
||||
parent->size--;
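A quick standalone check of the removal shift used above (not part of the commit): the padded character section only shrinks by a whole word when the old size sat exactly one byte past an aligned boundary.

    #include <stdio.h>

    int main(void) {
        for (unsigned size = 1; size <= 9; size++) {
            size_t shift = ((size + 4) % sizeof(void*)) == 1 ? sizeof(void*) : 0;
            printf("old size=%u -> shift=%zu\n", size, shift);
        }
        /* With 8-byte pointers only size=5 (5+4=9, 9%8==1) gives shift=8 here. */
        return 0;
    }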
|
||||
@ -1072,7 +1165,7 @@ int raxRemove(rax *rax, unsigned char *s, size_t len, void **old) {
|
||||
if (nodes > 1) {
|
||||
/* If we can compress, create the new node and populate it. */
|
||||
size_t nodesize =
|
||||
sizeof(raxNode)+comprsize+sizeof(raxNode*);
|
||||
sizeof(raxNode)+comprsize+raxPadding(comprsize)+sizeof(raxNode*);
|
||||
raxNode *new = rax_malloc(nodesize);
|
||||
/* An out of memory here just means we cannot optimize this
|
||||
* node, but the tree is left in a consistent state. */
|
||||
@ -1313,7 +1406,7 @@ int raxIteratorNextStep(raxIterator *it, int noup) {
|
||||
}
|
||||
}
|
||||
|
||||
/* Seek the grestest key in the subtree at the current node. Return 0 on
|
||||
/* Seek the greatest key in the subtree at the current node. Return 0 on
|
||||
* out of memory, otherwise 1. This is an helper function for different
|
||||
* iteration functions below. */
|
||||
int raxSeekGreatest(raxIterator *it) {
|
||||
@ -1793,6 +1886,7 @@ void raxShow(rax *rax) {
|
||||
|
||||
/* Used by debugnode() macro to show info about a given node. */
|
||||
void raxDebugShowNode(const char *msg, raxNode *n) {
|
||||
if (raxDebugMsg == 0) return;
|
||||
printf("%s: %p [%.*s] key:%d size:%d children:",
|
||||
msg, (void*)n, (int)n->size, (char*)n->data, n->iskey, n->size);
|
||||
int numcld = n->iscompr ? 1 : n->size;
|
||||
@ -1807,4 +1901,43 @@ void raxDebugShowNode(const char *msg, raxNode *n) {
|
||||
fflush(stdout);
|
||||
}
|
||||
|
||||
/* Touch all the nodes of a tree returning a check sum. This is useful
|
||||
* in order to make Valgrind detect if there is something wrong while
|
||||
* reading the data structure.
|
||||
*
|
||||
* This function was used in order to identify Rax bugs after a big refactoring
|
||||
* using this technique:
|
||||
*
|
||||
* 1. The rax-test is executed using Valgrind, adding a printf() so that for
|
||||
* the fuzz tester we see what iteration in the loop we are in.
|
||||
* 2. After every modification of the radix tree made by the fuzz tester
|
||||
* in rax-test.c, we add a call to raxTouch().
|
||||
* 3. Now as soon as an operation will corrupt the tree, raxTouch() will
|
||||
* detect it (via Valgrind) immediately. We can add more calls to narrow
|
||||
* the state.
|
||||
* 4. At this point a good idea is to enable Rax debugging messages immediately
|
||||
* before the moment the tree is corrupted, to see what happens.
|
||||
*/
|
||||
unsigned long raxTouch(raxNode *n) {
|
||||
debugf("Touching %p\n", (void*)n);
|
||||
unsigned long sum = 0;
|
||||
if (n->iskey) {
|
||||
sum += (unsigned long)raxGetData(n);
|
||||
}
|
||||
|
||||
int numchildren = n->iscompr ? 1 : n->size;
|
||||
raxNode **cp = raxNodeFirstChildPtr(n);
|
||||
int count = 0;
|
||||
for (int i = 0; i < numchildren; i++) {
|
||||
if (numchildren > 1) {
|
||||
sum += (long)n->data[i];
|
||||
}
|
||||
raxNode *child;
|
||||
memcpy(&child,cp,sizeof(child));
|
||||
if (child == (void*)0x65d1760) count++;
|
||||
if (count > 1) exit(1);
|
||||
sum += raxTouch(child);
|
||||
cp++;
|
||||
}
|
||||
return sum;
|
||||
}
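A sketch of the Valgrind technique the comment above describes (not part of the commit; the real harness lives in rax-test.c, and t->head is assumed to be the tree's root node as declared in rax.h): after every random mutation the whole tree is walked, so Valgrind points at the first iteration that corrupts it.

    #include <stdio.h>
    #include <stdlib.h>
    #include "rax.h"

    int main(void) {
        rax *t = raxNew();
        for (int i = 0; i < 10000; i++) {
            printf("iteration %d\n", i);            /* locate Valgrind reports */
            unsigned char key[16];
            int len = snprintf((char*)key, sizeof(key), "k%d", rand() % 1000);
            if (rand() & 1)
                raxInsert(t, key, len, (void*)(long)i, NULL);
            else
                raxRemove(t, key, len, NULL);
            raxTouch(t->head);                      /* touch every node/value  */
        }
        raxFree(t);
        return 0;
    }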
|
||||
|
src/rax.h (38 lines changed)
@ -1,3 +1,33 @@
|
||||
/* Rax -- A radix tree implementation.
|
||||
*
|
||||
* Copyright (c) 2017-2018, Salvatore Sanfilippo <antirez at gmail dot com>
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright notice,
|
||||
* this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above copyright
|
||||
* notice, this list of conditions and the following disclaimer in the
|
||||
* documentation and/or other materials provided with the distribution.
|
||||
* * Neither the name of Redis nor the names of its contributors may be used
|
||||
* to endorse or promote products derived from this software without
|
||||
* specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
|
||||
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
||||
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
|
||||
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
|
||||
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
|
||||
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
|
||||
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
|
||||
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
|
||||
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
|
||||
* POSSIBILITY OF SUCH DAMAGE.
|
||||
*/
|
||||
|
||||
#ifndef RAX_H
|
||||
#define RAX_H
|
||||
|
||||
@ -77,16 +107,16 @@ typedef struct raxNode {
|
||||
* Note how the character is not stored in the children but in the
|
||||
* edge of the parents:
|
||||
*
|
||||
* [header strlen=0][abc][a-ptr][b-ptr][c-ptr](value-ptr?)
|
||||
* [header iscompr=0][abc][a-ptr][b-ptr][c-ptr](value-ptr?)
|
||||
*
|
||||
* if node is compressed (strlen != 0) the node has 1 children.
|
||||
* if node is compressed (iscompr bit is 1) the node has 1 children.
|
||||
* In that case the 'size' bytes of the string stored immediately at
|
||||
* the start of the data section, represent a sequence of successive
|
||||
* nodes linked one after the other, for which only the last one in
|
||||
* the sequence is actually represented as a node, and pointed to by
|
||||
* the current compressed node.
|
||||
*
|
||||
* [header strlen=3][xyz][z-ptr](value-ptr?)
|
||||
* [header iscompr=1][xyz][z-ptr](value-ptr?)
|
||||
*
|
||||
* Both compressed and not compressed nodes can represent a key
|
||||
* with associated data in the radix tree at any level (not just terminal
|
||||
@ -176,6 +206,8 @@ void raxStop(raxIterator *it);
|
||||
int raxEOF(raxIterator *it);
|
||||
void raxShow(rax *rax);
|
||||
uint64_t raxSize(rax *rax);
|
||||
unsigned long raxTouch(raxNode *n);
|
||||
void raxSetDebugMsg(int onoff);
|
||||
|
||||
/* Internal API. May be used by the node callback in order to access rax nodes
|
||||
* in a low level way, so this function is exported as well. */
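For orientation, a minimal usage sketch of the public API declared in this header (not part of the commit); raxFind() returns the raxNotFound sentinel when a key is absent:

    #include <stdio.h>
    #include "rax.h"

    int main(void) {
        rax *t = raxNew();
        raxInsert(t, (unsigned char*)"foo", 3, (void*)"value-1", NULL);
        raxInsert(t, (unsigned char*)"foobar", 6, (void*)"value-2", NULL);

        void *v = raxFind(t, (unsigned char*)"foo", 3);
        if (v != raxNotFound) printf("foo -> %s\n", (char*)v);

        printf("%llu keys\n", (unsigned long long)raxSize(t));
        raxFree(t);
        return 0;
    }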
|
||||
|
src/rdb.c (13 lines changed)
@ -1645,6 +1645,9 @@ robj *rdbLoadObject(int rdbtype, rio *rdb) {
|
||||
* node: the entries inside the listpack itself are delta-encoded
|
||||
* relatively to this ID. */
|
||||
sds nodekey = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL);
|
||||
if (nodekey == NULL) {
|
||||
rdbExitReportCorruptRDB("Stream master ID loading failed: invalid encoding or I/O error.");
|
||||
}
|
||||
if (sdslen(nodekey) != sizeof(streamID)) {
|
||||
rdbExitReportCorruptRDB("Stream node key entry is not the "
|
||||
"size of a stream ID");
|
||||
@ -2222,6 +2225,16 @@ void backgroundSaveDoneHandler(int exitcode, int bysignal) {
|
||||
}
|
||||
}
|
||||
|
||||
/* Kill the RDB saving child using SIGUSR1 (so that the parent will know
|
||||
* the child did not exit for an error, but because we wanted), and performs
|
||||
* the cleanup needed. */
|
||||
void killRDBChild(void) {
|
||||
kill(server.rdb_child_pid,SIGUSR1);
|
||||
rdbRemoveTempFile(server.rdb_child_pid);
|
||||
closeChildInfoPipe();
|
||||
updateDictResizePolicy();
|
||||
}
|
||||
|
||||
/* Spawn an RDB child that writes the RDB to the sockets of the slaves
|
||||
* that are currently in SLAVE_STATE_WAIT_BGSAVE_START state. */
|
||||
int rdbSaveToSlavesSockets(rdbSaveInfo *rsi) {
|
||||
src/redis-benchmark.c
@ -39,6 +39,7 @@
|
||||
#include <sys/time.h>
|
||||
#include <signal.h>
|
||||
#include <assert.h>
|
||||
#include <math.h>
|
||||
|
||||
#include <sds.h> /* Use hiredis sds. */
|
||||
#include "ae.h"
|
||||
@ -48,6 +49,7 @@
|
||||
|
||||
#define UNUSED(V) ((void) V)
|
||||
#define RANDPTR_INITIAL_SIZE 8
|
||||
#define MAX_LATENCY_PRECISION 3
|
||||
|
||||
static struct config {
|
||||
aeEventLoop *el;
|
||||
@ -79,6 +81,7 @@ static struct config {
|
||||
sds dbnumstr;
|
||||
char *tests;
|
||||
char *auth;
|
||||
int precision;
|
||||
} config;
|
||||
|
||||
typedef struct _client {
|
||||
@ -428,8 +431,19 @@ static int compareLatency(const void *a, const void *b) {
|
||||
return (*(long long*)a)-(*(long long*)b);
|
||||
}
|
||||
|
||||
static int ipow(int base, int exp) {
|
||||
int result = 1;
|
||||
while (exp) {
|
||||
if (exp & 1) result *= base;
|
||||
exp /= 2;
|
||||
base *= base;
|
||||
}
|
||||
return result;
|
||||
}
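ipow() feeds the precision handling in showLatencyReport() below: latencies are kept in microseconds, the precision setting selects a bucket width of 10^(MAX_LATENCY_PRECISION-precision) microseconds, and the bucket index is printed back as milliseconds. A standalone sketch of that math (not part of the commit):

    #include <stdio.h>
    #include <math.h>

    static int ipow(int base, int exp) {
        int result = 1;
        while (exp) {
            if (exp & 1) result *= base;
            exp /= 2;
            base *= base;
        }
        return result;
    }

    int main(void) {
        long long latency_us = 1234;                 /* a 1.234 ms sample      */
        for (int precision = 0; precision <= 3; precision++) {
            int usbetweenlat = ipow(10, 3 - precision);
            int bucket = (int)(latency_us / usbetweenlat);
            printf("precision=%d -> %.*f milliseconds\n",
                   precision, precision, bucket / pow(10.0, precision));
        }
        return 0;   /* prints 1, 1.2, 1.23, 1.234 */
    }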
|
||||
|
||||
static void showLatencyReport(void) {
|
||||
int i, curlat = 0;
|
||||
int usbetweenlat = ipow(10, MAX_LATENCY_PRECISION-config.precision);
|
||||
float perc, reqpersec;
|
||||
|
||||
reqpersec = (float)config.requests_finished/((float)config.totlatency/1000);
|
||||
@ -444,10 +458,21 @@ static void showLatencyReport(void) {
|
||||
|
||||
qsort(config.latency,config.requests,sizeof(long long),compareLatency);
|
||||
for (i = 0; i < config.requests; i++) {
|
||||
if (config.latency[i]/1000 != curlat || i == (config.requests-1)) {
|
||||
curlat = config.latency[i]/1000;
|
||||
if (config.latency[i]/usbetweenlat != curlat ||
|
||||
i == (config.requests-1))
|
||||
{
|
||||
curlat = config.latency[i]/usbetweenlat;
|
||||
perc = ((float)(i+1)*100)/config.requests;
|
||||
printf("%.2f%% <= %d milliseconds\n", perc, curlat);
|
||||
printf("%.2f%% <= %.*f milliseconds\n", perc, config.precision,
|
||||
curlat/pow(10.0, config.precision));
|
||||
|
||||
/* After the 2 milliseconds latency to have percentages split
|
||||
* by decimals will just add a lot of noise to the output. */
|
||||
if (config.latency[i] > 2000) {
|
||||
config.precision = 0;
|
||||
usbetweenlat = ipow(10,
|
||||
MAX_LATENCY_PRECISION-config.precision);
|
||||
}
|
||||
}
|
||||
}
|
||||
printf("%.2f requests per second\n\n", reqpersec);
|
||||
@ -546,6 +571,11 @@ int parseOptions(int argc, const char **argv) {
|
||||
if (lastarg) goto invalid;
|
||||
config.dbnum = atoi(argv[++i]);
|
||||
config.dbnumstr = sdsfromlonglong(config.dbnum);
|
||||
} else if (!strcmp(argv[i],"--precision")) {
|
||||
if (lastarg) goto invalid;
|
||||
config.precision = atoi(argv[++i]);
|
||||
if (config.precision < 0) config.precision = 0;
|
||||
if (config.precision > MAX_LATENCY_PRECISION) config.precision = MAX_LATENCY_PRECISION;
|
||||
} else if (!strcmp(argv[i],"--help")) {
|
||||
exit_status = 0;
|
||||
goto usage;
|
||||
@ -585,6 +615,7 @@ usage:
|
||||
" -e If server replies with errors, show them on stdout.\n"
|
||||
" (no more than 1 error per second is displayed)\n"
|
||||
" -q Quiet. Just show query/sec values\n"
|
||||
" --precision Number of decimal places to display in latency output (default 0)\n"
|
||||
" --csv Output in CSV format\n"
|
||||
" -l Loop. Run the tests forever\n"
|
||||
" -t <tests> Only run the comma separated list of tests. The test\n"
|
||||
@ -679,6 +710,7 @@ int main(int argc, const char **argv) {
|
||||
config.tests = NULL;
|
||||
config.dbnum = 0;
|
||||
config.auth = NULL;
|
||||
config.precision = 1;
|
||||
|
||||
i = parseOptions(argc,argv);
|
||||
argc -= i;
|
||||
|
src/redis-cli.c (1226 lines changed; diff too large to display here)

src/redismodule.h
@ -117,6 +117,10 @@
|
||||
#define REDISMODULE_NODE_FAIL (1<<4)
|
||||
#define REDISMODULE_NODE_NOFAILOVER (1<<5)
|
||||
|
||||
#define REDISMODULE_CLUSTER_FLAG_NONE 0
|
||||
#define REDISMODULE_CLUSTER_FLAG_NO_FAILOVER (1<<1)
|
||||
#define REDISMODULE_CLUSTER_FLAG_NO_REDIRECTION (1<<2)
|
||||
|
||||
#define REDISMODULE_NOT_USED(V) ((void) V)
|
||||
|
||||
/* This type represents a timer handle, and is returned when a timer is
|
||||
@ -141,6 +145,8 @@ typedef struct RedisModuleType RedisModuleType;
|
||||
typedef struct RedisModuleDigest RedisModuleDigest;
|
||||
typedef struct RedisModuleBlockedClient RedisModuleBlockedClient;
|
||||
typedef struct RedisModuleClusterInfo RedisModuleClusterInfo;
|
||||
typedef struct RedisModuleDict RedisModuleDict;
|
||||
typedef struct RedisModuleDictIter RedisModuleDictIter;
|
||||
|
||||
typedef int (*RedisModuleCmdFunc)(RedisModuleCtx *ctx, RedisModuleString **argv, int argc);
|
||||
typedef void (*RedisModuleDisconnectFunc)(RedisModuleCtx *ctx, RedisModuleBlockedClient *bc);
|
||||
@ -273,6 +279,28 @@ long long REDISMODULE_API_FUNC(RedisModule_Milliseconds)(void);
|
||||
void REDISMODULE_API_FUNC(RedisModule_DigestAddStringBuffer)(RedisModuleDigest *md, unsigned char *ele, size_t len);
|
||||
void REDISMODULE_API_FUNC(RedisModule_DigestAddLongLong)(RedisModuleDigest *md, long long ele);
|
||||
void REDISMODULE_API_FUNC(RedisModule_DigestEndSequence)(RedisModuleDigest *md);
|
||||
RedisModuleDict *REDISMODULE_API_FUNC(RedisModule_CreateDict)(RedisModuleCtx *ctx);
|
||||
void REDISMODULE_API_FUNC(RedisModule_FreeDict)(RedisModuleCtx *ctx, RedisModuleDict *d);
|
||||
uint64_t REDISMODULE_API_FUNC(RedisModule_DictSize)(RedisModuleDict *d);
|
||||
int REDISMODULE_API_FUNC(RedisModule_DictSetC)(RedisModuleDict *d, void *key, size_t keylen, void *ptr);
|
||||
int REDISMODULE_API_FUNC(RedisModule_DictReplaceC)(RedisModuleDict *d, void *key, size_t keylen, void *ptr);
|
||||
int REDISMODULE_API_FUNC(RedisModule_DictSet)(RedisModuleDict *d, RedisModuleString *key, void *ptr);
|
||||
int REDISMODULE_API_FUNC(RedisModule_DictReplace)(RedisModuleDict *d, RedisModuleString *key, void *ptr);
|
||||
void *REDISMODULE_API_FUNC(RedisModule_DictGetC)(RedisModuleDict *d, void *key, size_t keylen, int *nokey);
|
||||
void *REDISMODULE_API_FUNC(RedisModule_DictGet)(RedisModuleDict *d, RedisModuleString *key, int *nokey);
|
||||
int REDISMODULE_API_FUNC(RedisModule_DictDelC)(RedisModuleDict *d, void *key, size_t keylen, void *oldval);
|
||||
int REDISMODULE_API_FUNC(RedisModule_DictDel)(RedisModuleDict *d, RedisModuleString *key, void *oldval);
|
||||
RedisModuleDictIter *REDISMODULE_API_FUNC(RedisModule_DictIteratorStartC)(RedisModuleDict *d, const char *op, void *key, size_t keylen);
|
||||
RedisModuleDictIter *REDISMODULE_API_FUNC(RedisModule_DictIteratorStart)(RedisModuleDict *d, const char *op, RedisModuleString *key);
|
||||
void REDISMODULE_API_FUNC(RedisModule_DictIteratorStop)(RedisModuleDictIter *di);
|
||||
int REDISMODULE_API_FUNC(RedisModule_DictIteratorReseekC)(RedisModuleDictIter *di, const char *op, void *key, size_t keylen);
|
||||
int REDISMODULE_API_FUNC(RedisModule_DictIteratorReseek)(RedisModuleDictIter *di, const char *op, RedisModuleString *key);
|
||||
void *REDISMODULE_API_FUNC(RedisModule_DictNextC)(RedisModuleDictIter *di, size_t *keylen, void **dataptr);
|
||||
void *REDISMODULE_API_FUNC(RedisModule_DictPrevC)(RedisModuleDictIter *di, size_t *keylen, void **dataptr);
|
||||
RedisModuleString *REDISMODULE_API_FUNC(RedisModule_DictNext)(RedisModuleCtx *ctx, RedisModuleDictIter *di, void **dataptr);
|
||||
RedisModuleString *REDISMODULE_API_FUNC(RedisModule_DictPrev)(RedisModuleCtx *ctx, RedisModuleDictIter *di, void **dataptr);
|
||||
int REDISMODULE_API_FUNC(RedisModule_DictCompareC)(RedisModuleDictIter *di, const char *op, void *key, size_t keylen);
|
||||
int REDISMODULE_API_FUNC(RedisModule_DictCompare)(RedisModuleDictIter *di, const char *op, RedisModuleString *key);
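A toy sketch of the new dict API declared above (not from the diff): the command handler name, key and value are made up, and the two RedisModule_ReplyWith* calls are assumed from the existing module reply API.

    #include "redismodule.h"

    int ToyDictDemo_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        RedisModuleDict *d = RedisModule_CreateDict(ctx);
        RedisModule_DictSetC(d, (void*)"greeting", 8, (void*)"hello");
        int nokey;
        char *val = RedisModule_DictGetC(d, (void*)"greeting", 8, &nokey);
        if (nokey)
            RedisModule_ReplyWithNull(ctx);
        else
            RedisModule_ReplyWithSimpleString(ctx, val);
        RedisModule_FreeDict(ctx, d);
        return REDISMODULE_OK;
    }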
|
||||
|
||||
/* Experimental APIs */
|
||||
#ifdef REDISMODULE_EXPERIMENTAL_API
|
||||
@ -303,6 +331,7 @@ size_t REDISMODULE_API_FUNC(RedisModule_GetClusterSize)(void);
|
||||
void REDISMODULE_API_FUNC(RedisModule_GetRandomBytes)(unsigned char *dst, size_t len);
|
||||
void REDISMODULE_API_FUNC(RedisModule_GetRandomHexChars)(char *dst, size_t len);
|
||||
void REDISMODULE_API_FUNC(RedisModule_SetDisconnectCallback)(RedisModuleBlockedClient *bc, RedisModuleDisconnectFunc callback);
|
||||
void REDISMODULE_API_FUNC(RedisModule_SetClusterFlags)(RedisModuleCtx *ctx, uint64_t flags);
|
||||
#endif
|
||||
|
||||
/* This is included inline inside each Redis module. */
|
||||
@ -412,6 +441,28 @@ static int RedisModule_Init(RedisModuleCtx *ctx, const char *name, int ver, int
|
||||
REDISMODULE_GET_API(DigestAddStringBuffer);
|
||||
REDISMODULE_GET_API(DigestAddLongLong);
|
||||
REDISMODULE_GET_API(DigestEndSequence);
|
||||
REDISMODULE_GET_API(CreateDict);
|
||||
REDISMODULE_GET_API(FreeDict);
|
||||
REDISMODULE_GET_API(DictSize);
|
||||
REDISMODULE_GET_API(DictSetC);
|
||||
REDISMODULE_GET_API(DictReplaceC);
|
||||
REDISMODULE_GET_API(DictSet);
|
||||
REDISMODULE_GET_API(DictReplace);
|
||||
REDISMODULE_GET_API(DictGetC);
|
||||
REDISMODULE_GET_API(DictGet);
|
||||
REDISMODULE_GET_API(DictDelC);
|
||||
REDISMODULE_GET_API(DictDel);
|
||||
REDISMODULE_GET_API(DictIteratorStartC);
|
||||
REDISMODULE_GET_API(DictIteratorStart);
|
||||
REDISMODULE_GET_API(DictIteratorStop);
|
||||
REDISMODULE_GET_API(DictIteratorReseekC);
|
||||
REDISMODULE_GET_API(DictIteratorReseek);
|
||||
REDISMODULE_GET_API(DictNextC);
|
||||
REDISMODULE_GET_API(DictPrevC);
|
||||
REDISMODULE_GET_API(DictNext);
|
||||
REDISMODULE_GET_API(DictPrev);
|
||||
REDISMODULE_GET_API(DictCompare);
|
||||
REDISMODULE_GET_API(DictCompareC);
|
||||
|
||||
#ifdef REDISMODULE_EXPERIMENTAL_API
|
||||
REDISMODULE_GET_API(GetThreadSafeContext);
|
||||
@ -440,6 +491,7 @@ static int RedisModule_Init(RedisModuleCtx *ctx, const char *name, int ver, int
|
||||
REDISMODULE_GET_API(GetClusterSize);
|
||||
REDISMODULE_GET_API(GetRandomBytes);
|
||||
REDISMODULE_GET_API(GetRandomHexChars);
|
||||
REDISMODULE_GET_API(SetClusterFlags);
|
||||
#endif
|
||||
|
||||
if (RedisModule_IsModuleNameBusy && RedisModule_IsModuleNameBusy(name)) return REDISMODULE_ERR;
|
||||
src/replication.c
@ -48,7 +48,7 @@ int cancelReplicationHandshake(void);
|
||||
/* Return the pointer to a string representing the slave ip:listening_port
|
||||
* pair. Mostly useful for logging, since we want to log a slave using its
|
||||
* IP address and its listening port which is more clear for the user, for
|
||||
* example: "Closing connection with slave 10.1.2.3:6380". */
|
||||
* example: "Closing connection with replica 10.1.2.3:6380". */
|
||||
char *replicationGetSlaveName(client *c) {
|
||||
static char buf[NET_PEER_ID_LEN];
|
||||
char ip[NET_IP_STR_LEN];
|
||||
@ -64,7 +64,7 @@ char *replicationGetSlaveName(client *c) {
|
||||
if (c->slave_listening_port)
|
||||
anetFormatAddr(buf,sizeof(buf),ip,c->slave_listening_port);
|
||||
else
|
||||
snprintf(buf,sizeof(buf),"%s:<unknown-slave-port>",ip);
|
||||
snprintf(buf,sizeof(buf),"%s:<unknown-replica-port>",ip);
|
||||
} else {
|
||||
snprintf(buf,sizeof(buf),"client id #%llu",
|
||||
(unsigned long long) c->id);
|
||||
@ -263,7 +263,7 @@ void replicationFeedSlaves(list *slaves, int dictid, robj **argv, int argc) {
|
||||
* or are already in sync with the master. */
|
||||
|
||||
/* Add the multi bulk length. */
|
||||
addReplyMultiBulkLen(slave,argc);
|
||||
addReplyArrayLen(slave,argc);
|
||||
|
||||
/* Finally any additional argument that was not stored inside the
|
||||
* static buffer if any (from j to argc). */
|
||||
@ -296,7 +296,7 @@ void replicationFeedSlavesFromMasterStream(list *slaves, char *buf, size_t bufle
|
||||
|
||||
/* Don't feed slaves that are still waiting for BGSAVE to start */
|
||||
if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_START) continue;
|
||||
addReplyString(slave,buf,buflen);
|
||||
addReplyProto(slave,buf,buflen);
|
||||
}
|
||||
}
|
||||
|
||||
@ -344,7 +344,7 @@ void replicationFeedMonitors(client *c, list *monitors, int dictid, robj **argv,
|
||||
long long addReplyReplicationBacklog(client *c, long long offset) {
|
||||
long long j, skip, len;
|
||||
|
||||
serverLog(LL_DEBUG, "[PSYNC] Slave request offset: %lld", offset);
|
||||
serverLog(LL_DEBUG, "[PSYNC] Replica request offset: %lld", offset);
|
||||
|
||||
if (server.repl_backlog_histlen == 0) {
|
||||
serverLog(LL_DEBUG, "[PSYNC] Backlog history len is zero");
|
||||
@ -472,7 +472,7 @@ int masterTryPartialResynchronization(client *c) {
|
||||
strcasecmp(master_replid, server.replid2))
|
||||
{
|
||||
serverLog(LL_NOTICE,"Partial resynchronization not accepted: "
|
||||
"Replication ID mismatch (Slave asked for '%s', my "
|
||||
"Replication ID mismatch (Replica asked for '%s', my "
|
||||
"replication IDs are '%s' and '%s')",
|
||||
master_replid, server.replid, server.replid2);
|
||||
} else {
|
||||
@ -481,7 +481,7 @@ int masterTryPartialResynchronization(client *c) {
|
||||
"up to %lld", psync_offset, server.second_replid_offset);
|
||||
}
|
||||
} else {
|
||||
serverLog(LL_NOTICE,"Full resync requested by slave %s",
|
||||
serverLog(LL_NOTICE,"Full resync requested by replica %s",
|
||||
replicationGetSlaveName(c));
|
||||
}
|
||||
goto need_full_resync;
|
||||
@ -493,10 +493,10 @@ int masterTryPartialResynchronization(client *c) {
|
||||
psync_offset > (server.repl_backlog_off + server.repl_backlog_histlen))
|
||||
{
|
||||
serverLog(LL_NOTICE,
|
||||
"Unable to partial resync with slave %s for lack of backlog (Slave request was: %lld).", replicationGetSlaveName(c), psync_offset);
|
||||
"Unable to partial resync with replica %s for lack of backlog (Replica request was: %lld).", replicationGetSlaveName(c), psync_offset);
|
||||
if (psync_offset > server.master_repl_offset) {
|
||||
serverLog(LL_WARNING,
|
||||
"Warning: slave %s tried to PSYNC with an offset that is greater than the master replication offset.", replicationGetSlaveName(c));
|
||||
"Warning: replica %s tried to PSYNC with an offset that is greater than the master replication offset.", replicationGetSlaveName(c));
|
||||
}
|
||||
goto need_full_resync;
|
||||
}
|
||||
@ -567,7 +567,7 @@ int startBgsaveForReplication(int mincapa) {
|
||||
listNode *ln;
|
||||
|
||||
serverLog(LL_NOTICE,"Starting BGSAVE for SYNC with target: %s",
|
||||
socket_target ? "slaves sockets" : "disk");
|
||||
socket_target ? "replicas sockets" : "disk");
|
||||
|
||||
rdbSaveInfo rsi, *rsiptr;
|
||||
rsiptr = rdbPopulateSaveInfo(&rsi);
|
||||
@ -644,7 +644,7 @@ void syncCommand(client *c) {
|
||||
return;
|
||||
}
|
||||
|
||||
serverLog(LL_NOTICE,"Slave %s asks for synchronization",
|
||||
serverLog(LL_NOTICE,"Replica %s asks for synchronization",
|
||||
replicationGetSlaveName(c));
|
||||
|
||||
/* Try a partial resynchronization if this is a PSYNC command.
|
||||
@ -725,7 +725,7 @@ void syncCommand(client *c) {
|
||||
} else {
|
||||
/* No way, we need to wait for the next BGSAVE in order to
|
||||
* register differences. */
|
||||
serverLog(LL_NOTICE,"Can't attach the slave to the current BGSAVE. Waiting for next BGSAVE for SYNC");
|
||||
serverLog(LL_NOTICE,"Can't attach the replica to the current BGSAVE. Waiting for next BGSAVE for SYNC");
|
||||
}
|
||||
|
||||
/* CASE 2: BGSAVE is in progress, with socket target. */
|
||||
@ -798,7 +798,7 @@ void replconfCommand(client *c) {
|
||||
memcpy(c->slave_ip,ip,sdslen(ip)+1);
|
||||
} else {
|
||||
addReplyErrorFormat(c,"REPLCONF ip-address provided by "
|
||||
"slave instance is too long: %zd bytes", sdslen(ip));
|
||||
"replica instance is too long: %zd bytes", sdslen(ip));
|
||||
return;
|
||||
}
|
||||
} else if (!strcasecmp(c->argv[j]->ptr,"capa")) {
|
||||
@ -858,12 +858,12 @@ void putSlaveOnline(client *slave) {
|
||||
slave->repl_ack_time = server.unixtime; /* Prevent false timeout. */
|
||||
if (aeCreateFileEvent(server.el, slave->fd, AE_WRITABLE,
|
||||
sendReplyToClient, slave) == AE_ERR) {
|
||||
serverLog(LL_WARNING,"Unable to register writable event for slave bulk transfer: %s", strerror(errno));
|
||||
serverLog(LL_WARNING,"Unable to register writable event for replica bulk transfer: %s", strerror(errno));
|
||||
freeClient(slave);
|
||||
return;
|
||||
}
|
||||
refreshGoodSlavesCount();
|
||||
serverLog(LL_NOTICE,"Synchronization with slave %s succeeded",
|
||||
serverLog(LL_NOTICE,"Synchronization with replica %s succeeded",
|
||||
replicationGetSlaveName(slave));
|
||||
}
|
||||
|
||||
@ -880,7 +880,7 @@ void sendBulkToSlave(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
if (slave->replpreamble) {
|
||||
nwritten = write(fd,slave->replpreamble,sdslen(slave->replpreamble));
|
||||
if (nwritten == -1) {
|
||||
serverLog(LL_VERBOSE,"Write error sending RDB preamble to slave: %s",
|
||||
serverLog(LL_VERBOSE,"Write error sending RDB preamble to replica: %s",
|
||||
strerror(errno));
|
||||
freeClient(slave);
|
||||
return;
|
||||
@ -900,14 +900,14 @@ void sendBulkToSlave(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
lseek(slave->repldbfd,slave->repldboff,SEEK_SET);
|
||||
buflen = read(slave->repldbfd,buf,PROTO_IOBUF_LEN);
|
||||
if (buflen <= 0) {
|
||||
serverLog(LL_WARNING,"Read error sending DB to slave: %s",
|
||||
serverLog(LL_WARNING,"Read error sending DB to replica: %s",
|
||||
(buflen == 0) ? "premature EOF" : strerror(errno));
|
||||
freeClient(slave);
|
||||
return;
|
||||
}
|
||||
if ((nwritten = write(fd,buf,buflen)) == -1) {
|
||||
if (errno != EAGAIN) {
|
||||
serverLog(LL_WARNING,"Write error sending DB to slave: %s",
|
||||
serverLog(LL_WARNING,"Write error sending DB to replica: %s",
|
||||
strerror(errno));
|
||||
freeClient(slave);
|
||||
}
|
||||
@ -961,7 +961,7 @@ void updateSlavesWaitingBgsave(int bgsaveerr, int type) {
|
||||
* the slave online. */
|
||||
if (type == RDB_CHILD_TYPE_SOCKET) {
|
||||
serverLog(LL_NOTICE,
|
||||
"Streamed RDB transfer with slave %s succeeded (socket). Waiting for REPLCONF ACK from slave to enable streaming",
|
||||
"Streamed RDB transfer with replica %s succeeded (socket). Waiting for REPLCONF ACK from slave to enable streaming",
|
||||
replicationGetSlaveName(slave));
|
||||
/* Note: we wait for a REPLCONF ACK message from slave in
|
||||
* order to really put it online (install the write handler
|
||||
@ -1080,6 +1080,7 @@ void replicationCreateMasterClient(int fd, int dbid) {
|
||||
server.master->authenticated = 1;
|
||||
server.master->reploff = server.master_initial_offset;
|
||||
server.master->read_reploff = server.master->reploff;
|
||||
server.master->user = NULL; /* This client can do everything. */
|
||||
memcpy(server.master->replid, server.master_replid,
|
||||
sizeof(server.master_replid));
|
||||
/* If master offset is set to -1, this master is old and is not
|
||||
@ -1096,7 +1097,7 @@ void restartAOF() {
|
||||
sleep(1);
|
||||
}
|
||||
if (!retry) {
|
||||
serverLog(LL_WARNING,"FATAL: this slave instance finished the synchronization with its master, but the AOF can't be turned on. Exiting now.");
|
||||
serverLog(LL_WARNING,"FATAL: this replica instance finished the synchronization with its master, but the AOF can't be turned on. Exiting now.");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
@ -1161,12 +1162,12 @@ void readSyncBulkPayload(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
* at the next call. */
|
||||
server.repl_transfer_size = 0;
|
||||
serverLog(LL_NOTICE,
|
||||
"MASTER <-> SLAVE sync: receiving streamed RDB from master");
|
||||
"MASTER <-> REPLICA sync: receiving streamed RDB from master");
|
||||
} else {
|
||||
usemark = 0;
|
||||
server.repl_transfer_size = strtol(buf+1,NULL,10);
|
||||
serverLog(LL_NOTICE,
|
||||
"MASTER <-> SLAVE sync: receiving %lld bytes from master",
|
||||
"MASTER <-> REPLICA sync: receiving %lld bytes from master",
|
||||
(long long) server.repl_transfer_size);
|
||||
}
|
||||
return;
|
||||
@ -1207,7 +1208,7 @@ void readSyncBulkPayload(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
|
||||
server.repl_transfer_lastio = server.unixtime;
|
||||
if ((nwritten = write(server.repl_transfer_fd,buf,nread)) != nread) {
|
||||
serverLog(LL_WARNING,"Write error or short write writing to the DB dump file needed for MASTER <-> SLAVE synchronization: %s",
|
||||
serverLog(LL_WARNING,"Write error or short write writing to the DB dump file needed for MASTER <-> REPLICA synchronization: %s",
|
||||
(nwritten == -1) ? strerror(errno) : "short write");
|
||||
goto error;
|
||||
}
|
||||
@ -1245,12 +1246,24 @@ void readSyncBulkPayload(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
if (eof_reached) {
|
||||
int aof_is_enabled = server.aof_state != AOF_OFF;
|
||||
|
||||
/* Ensure background save doesn't overwrite synced data */
|
||||
if (server.rdb_child_pid != -1) {
|
||||
serverLog(LL_NOTICE,
|
||||
"Replica is about to load the RDB file received from the "
|
||||
"master, but there is a pending RDB child running. "
|
||||
"Killing process %ld and removing its temp file to avoid "
|
||||
"any race",
|
||||
(long) server.rdb_child_pid);
|
||||
killRDBChild();
|
||||
}
|
||||
|
||||
if (rename(server.repl_transfer_tmpfile,server.rdb_filename) == -1) {
|
||||
serverLog(LL_WARNING,"Failed trying to rename the temp DB into dump.rdb in MASTER <-> SLAVE synchronization: %s", strerror(errno));
|
||||
serverLog(LL_WARNING,"Failed trying to rename the temp DB into %s in MASTER <-> REPLICA synchronization: %s",
|
||||
server.rdb_filename, strerror(errno));
|
||||
cancelReplicationHandshake();
|
||||
return;
|
||||
}
|
||||
serverLog(LL_NOTICE, "MASTER <-> SLAVE sync: Flushing old data");
|
||||
serverLog(LL_NOTICE, "MASTER <-> REPLICA sync: Flushing old data");
|
||||
/* We need to stop any AOFRW fork before flusing and parsing
|
||||
* RDB, otherwise we'll create a copy-on-write disaster. */
|
||||
if(aof_is_enabled) stopAppendOnly();
|
||||
@ -1264,7 +1277,7 @@ void readSyncBulkPayload(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
* rdbLoad() will call the event loop to process events from time to
|
||||
* time for non blocking loading. */
|
||||
aeDeleteFileEvent(server.el,server.repl_transfer_s,AE_READABLE);
|
||||
serverLog(LL_NOTICE, "MASTER <-> SLAVE sync: Loading DB in memory");
|
||||
serverLog(LL_NOTICE, "MASTER <-> REPLICA sync: Loading DB in memory");
|
||||
rdbSaveInfo rsi = RDB_SAVE_INFO_INIT;
|
||||
if (rdbLoad(server.rdb_filename,&rsi) != C_OK) {
|
||||
serverLog(LL_WARNING,"Failed trying to load the MASTER synchronization DB from disk");
|
||||
@ -1292,7 +1305,7 @@ void readSyncBulkPayload(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
* masters after a failover. */
|
||||
if (server.repl_backlog == NULL) createReplicationBacklog();
|
||||
|
||||
serverLog(LL_NOTICE, "MASTER <-> SLAVE sync: Finished with success");
|
||||
serverLog(LL_NOTICE, "MASTER <-> REPLICA sync: Finished with success");
|
||||
/* Restart the AOF subsystem now that we finished the sync. This
|
||||
* will trigger an AOF rewrite, and when done will start appending
|
||||
* to the new file. */
|
||||
@ -1648,7 +1661,13 @@ void syncWithMaster(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
|
||||
/* AUTH with the master if required. */
|
||||
if (server.repl_state == REPL_STATE_SEND_AUTH) {
|
||||
if (server.masterauth) {
|
||||
if (server.masteruser && server.masterauth) {
|
||||
err = sendSynchronousCommand(SYNC_CMD_WRITE,fd,"AUTH",
|
||||
server.masteruser,server.masterauth,NULL);
|
||||
if (err) goto write_error;
|
||||
server.repl_state = REPL_STATE_RECEIVE_AUTH;
|
||||
return;
|
||||
} else if (server.masterauth) {
|
||||
err = sendSynchronousCommand(SYNC_CMD_WRITE,fd,"AUTH",server.masterauth,NULL);
|
||||
if (err) goto write_error;
|
||||
server.repl_state = REPL_STATE_RECEIVE_AUTH;
|
||||
@ -1791,7 +1810,7 @@ void syncWithMaster(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
* uninstalling the read handler from the file descriptor. */
|
||||
|
||||
if (psync_result == PSYNC_CONTINUE) {
|
||||
serverLog(LL_NOTICE, "MASTER <-> SLAVE sync: Master accepted a Partial Resynchronization.");
|
||||
serverLog(LL_NOTICE, "MASTER <-> REPLICA sync: Master accepted a Partial Resynchronization.");
|
||||
return;
|
||||
}
|
||||
|
||||
@ -1823,7 +1842,7 @@ void syncWithMaster(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
sleep(1);
|
||||
}
|
||||
if (dfd == -1) {
|
||||
serverLog(LL_WARNING,"Opening the temp file needed for MASTER <-> SLAVE synchronization: %s",strerror(errno));
|
||||
serverLog(LL_WARNING,"Opening the temp file needed for MASTER <-> REPLICA synchronization: %s",strerror(errno));
|
||||
goto error;
|
||||
}
|
||||
|
||||
@ -1997,11 +2016,11 @@ void replicationHandleMasterDisconnection(void) {
|
||||
* the slaves only if we'll have to do a full resync with our master. */
|
||||
}
|
||||
|
||||
void slaveofCommand(client *c) {
|
||||
void replicaofCommand(client *c) {
|
||||
/* SLAVEOF is not allowed in cluster mode as replication is automatically
|
||||
* configured using the current address of the master node. */
|
||||
if (server.cluster_enabled) {
|
||||
addReplyError(c,"SLAVEOF not allowed in cluster mode.");
|
||||
addReplyError(c,"REPLICAOF not allowed in cluster mode.");
|
||||
return;
|
||||
}
|
||||
|
||||
@ -2025,7 +2044,7 @@ void slaveofCommand(client *c) {
|
||||
/* Check if we are already attached to the specified slave */
|
||||
if (server.masterhost && !strcasecmp(server.masterhost,c->argv[1]->ptr)
|
||||
&& server.masterport == port) {
|
||||
serverLog(LL_NOTICE,"SLAVE OF would result into synchronization with the master we are already connected with. No operation performed.");
|
||||
serverLog(LL_NOTICE,"REPLICAOF would result into synchronization with the master we are already connected with. No operation performed.");
|
||||
addReplySds(c,sdsnew("+OK Already connected to specified master\r\n"));
|
||||
return;
|
||||
}
|
||||
@ -2033,7 +2052,7 @@ void slaveofCommand(client *c) {
|
||||
* we can continue. */
|
||||
replicationSetMaster(c->argv[1]->ptr, port);
|
||||
sds client = catClientInfoString(sdsempty(),c);
|
||||
serverLog(LL_NOTICE,"SLAVE OF %s:%d enabled (user request from '%s')",
|
||||
serverLog(LL_NOTICE,"REPLICAOF %s:%d enabled (user request from '%s')",
|
||||
server.masterhost, server.masterport, client);
|
||||
sdsfree(client);
|
||||
}
|
||||
@ -2050,10 +2069,10 @@ void roleCommand(client *c) {
|
||||
void *mbcount;
|
||||
int slaves = 0;
|
||||
|
||||
addReplyMultiBulkLen(c,3);
|
||||
addReplyArrayLen(c,3);
|
||||
addReplyBulkCBuffer(c,"master",6);
|
||||
addReplyLongLong(c,server.master_repl_offset);
|
||||
mbcount = addDeferredMultiBulkLength(c);
|
||||
mbcount = addReplyDeferredLen(c);
|
||||
listRewind(server.slaves,&li);
|
||||
while((ln = listNext(&li))) {
|
||||
client *slave = ln->value;
|
||||
@ -2065,17 +2084,17 @@ void roleCommand(client *c) {
|
||||
slaveip = ip;
|
||||
}
|
||||
if (slave->replstate != SLAVE_STATE_ONLINE) continue;
|
||||
addReplyMultiBulkLen(c,3);
|
||||
addReplyArrayLen(c,3);
|
||||
addReplyBulkCString(c,slaveip);
|
||||
addReplyBulkLongLong(c,slave->slave_listening_port);
|
||||
addReplyBulkLongLong(c,slave->repl_ack_off);
|
||||
slaves++;
|
||||
}
|
||||
setDeferredMultiBulkLength(c,mbcount,slaves);
|
||||
setDeferredArrayLen(c,mbcount,slaves);
|
||||
} else {
|
||||
char *slavestate = NULL;
|
||||
|
||||
addReplyMultiBulkLen(c,5);
|
||||
addReplyArrayLen(c,5);
|
||||
addReplyBulkCBuffer(c,"slave",5);
|
||||
addReplyBulkCString(c,server.masterhost);
|
||||
addReplyLongLong(c,server.masterport);
|
||||
@ -2104,7 +2123,7 @@ void replicationSendAck(void) {
|
||||
|
||||
if (c != NULL) {
|
||||
c->flags |= CLIENT_MASTER_FORCE_REPLY;
|
||||
addReplyMultiBulkLen(c,3);
|
||||
addReplyArrayLen(c,3);
|
||||
addReplyBulkCString(c,"REPLCONF");
|
||||
addReplyBulkCString(c,"ACK");
|
||||
addReplyBulkLongLong(c,c->reploff);
|
||||
@ -2191,7 +2210,7 @@ void replicationCacheMasterUsingMyself(void) {
|
||||
unlinkClient(server.master);
|
||||
server.cached_master = server.master;
|
||||
server.master = NULL;
|
||||
serverLog(LL_NOTICE,"Before turning into a slave, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.");
|
||||
serverLog(LL_NOTICE,"Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.");
|
||||
}
|
||||
|
||||
/* Free a cached master, called when there are no longer the conditions for
|
||||
@ -2407,7 +2426,7 @@ void waitCommand(client *c) {
|
||||
long long offset = c->woff;
|
||||
|
||||
if (server.masterhost) {
|
||||
addReplyError(c,"WAIT cannot be used with slave instances. Please also note that since Redis 4.0 if a slave is configured to be writable (which is not the default) writes to slaves are just local and are not propagated.");
|
||||
addReplyError(c,"WAIT cannot be used with replica instances. Please also note that since Redis 4.0 if a replica is configured to be writable (which is not the default) writes to replicas are just local and are not propagated.");
|
||||
return;
|
||||
}
|
||||
|
||||
@ -2539,7 +2558,7 @@ void replicationCron(void) {
|
||||
serverLog(LL_NOTICE,"Connecting to MASTER %s:%d",
|
||||
server.masterhost, server.masterport);
|
||||
if (connectWithMaster() == C_OK) {
|
||||
serverLog(LL_NOTICE,"MASTER <-> SLAVE sync started");
|
||||
serverLog(LL_NOTICE,"MASTER <-> REPLICA sync started");
|
||||
}
|
||||
}
|
||||
|
||||
@ -2611,7 +2630,7 @@ void replicationCron(void) {
|
||||
if (slave->flags & CLIENT_PRE_PSYNC) continue;
|
||||
if ((server.unixtime - slave->repl_ack_time) > server.repl_timeout)
|
||||
{
|
||||
serverLog(LL_WARNING, "Disconnecting timedout slave: %s",
|
||||
serverLog(LL_WARNING, "Disconnecting timedout replica: %s",
|
||||
replicationGetSlaveName(slave));
|
||||
freeClient(slave);
|
||||
}
|
||||
@ -2641,7 +2660,7 @@ void replicationCron(void) {
|
||||
* be the same as our repl-id.
|
||||
* 3. We, yet as master, receive some updates, that will not
|
||||
* increment the master_repl_offset.
|
||||
* 4. Later we are turned into a slave, connecto to the new
|
||||
* 4. Later we are turned into a slave, connect to the new
|
||||
* master that will accept our PSYNC request by second
|
||||
* replication ID, but there will be data inconsistency
|
||||
* because we received writes. */
|
||||
@ -2650,7 +2669,7 @@ void replicationCron(void) {
|
||||
freeReplicationBacklog();
|
||||
serverLog(LL_NOTICE,
|
||||
"Replication backlog freed after %d seconds "
|
||||
"without connected slaves.",
|
||||
"without connected replicas.",
|
||||
(int) server.repl_backlog_time_limit);
|
||||
}
|
||||
}
|
||||
|
src/scripting.c (148 lines changed)
@ -42,7 +42,7 @@ char *redisProtocolToLuaType_Int(lua_State *lua, char *reply);
|
||||
char *redisProtocolToLuaType_Bulk(lua_State *lua, char *reply);
|
||||
char *redisProtocolToLuaType_Status(lua_State *lua, char *reply);
|
||||
char *redisProtocolToLuaType_Error(lua_State *lua, char *reply);
|
||||
char *redisProtocolToLuaType_MultiBulk(lua_State *lua, char *reply);
|
||||
char *redisProtocolToLuaType_MultiBulk(lua_State *lua, char *reply, int atype);
|
||||
int redis_math_random (lua_State *L);
|
||||
int redis_math_randomseed (lua_State *L);
|
||||
void ldbInit(void);
|
||||
@ -132,7 +132,9 @@ char *redisProtocolToLuaType(lua_State *lua, char* reply) {
|
||||
case '$': p = redisProtocolToLuaType_Bulk(lua,reply); break;
|
||||
case '+': p = redisProtocolToLuaType_Status(lua,reply); break;
|
||||
case '-': p = redisProtocolToLuaType_Error(lua,reply); break;
|
||||
case '*': p = redisProtocolToLuaType_MultiBulk(lua,reply); break;
|
||||
case '*': p = redisProtocolToLuaType_MultiBulk(lua,reply,*p); break;
|
||||
case '%': p = redisProtocolToLuaType_MultiBulk(lua,reply,*p); break;
|
||||
case '~': p = redisProtocolToLuaType_MultiBulk(lua,reply,*p); break;
|
||||
}
|
||||
return p;
|
||||
}
|
||||
@ -180,22 +182,38 @@ char *redisProtocolToLuaType_Error(lua_State *lua, char *reply) {
|
||||
return p+2;
|
||||
}
|
||||
|
||||
char *redisProtocolToLuaType_MultiBulk(lua_State *lua, char *reply) {
|
||||
char *redisProtocolToLuaType_MultiBulk(lua_State *lua, char *reply, int atype) {
|
||||
char *p = strchr(reply+1,'\r');
|
||||
long long mbulklen;
|
||||
int j = 0;
|
||||
|
||||
string2ll(reply+1,p-reply-1,&mbulklen);
|
||||
p += 2;
|
||||
if (mbulklen == -1) {
|
||||
lua_pushboolean(lua,0);
|
||||
return p;
|
||||
}
|
||||
lua_newtable(lua);
|
||||
for (j = 0; j < mbulklen; j++) {
|
||||
lua_pushnumber(lua,j+1);
|
||||
p = redisProtocolToLuaType(lua,p);
|
||||
lua_settable(lua,-3);
|
||||
if (server.lua_caller->resp == 2 || atype == '*') {
|
||||
p += 2;
|
||||
if (mbulklen == -1) {
|
||||
lua_pushboolean(lua,0);
|
||||
return p;
|
||||
}
|
||||
lua_newtable(lua);
|
||||
for (j = 0; j < mbulklen; j++) {
|
||||
lua_pushnumber(lua,j+1);
|
||||
p = redisProtocolToLuaType(lua,p);
|
||||
lua_settable(lua,-3);
|
||||
}
|
||||
} else if (server.lua_caller->resp == 3) {
|
||||
/* Here we handle only Set and Map replies in RESP3 mode, since arrays
|
||||
* follow the above RESP2 code path. */
|
||||
p += 2;
|
||||
lua_newtable(lua);
|
||||
for (j = 0; j < mbulklen; j++) {
|
||||
p = redisProtocolToLuaType(lua,p);
|
||||
if (atype == '%') {
|
||||
p = redisProtocolToLuaType(lua,p);
|
||||
} else {
|
||||
lua_pushboolean(lua,1);
|
||||
}
|
||||
lua_settable(lua,-3);
|
||||
}
|
||||
}
|
||||
return p;
|
||||
}
|
||||
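Aside (illustration, not part of the patch): the new `atype` argument lets the same converter handle RESP2/RESP3 arrays (`*`) as well as the RESP3 map (`%`) and set (`~`) aggregates. A minimal sketch of the wire formats involved, with hand-written reply buffers rather than anything captured from a server:

    #include <stdio.h>

    int main(void) {
        /* Illustrative only: the three aggregate headers the hunk above decodes. */
        const char *array_reply = "*2\r\n$5\r\nfield\r\n$5\r\nvalue\r\n"; /* 2 flat elements */
        const char *map_reply   = "%1\r\n$5\r\nfield\r\n$5\r\nvalue\r\n"; /* 1 key/value pair */
        const char *set_reply   = "~2\r\n$1\r\na\r\n$1\r\nb\r\n";         /* 2 members */
        /* In the RESP3 branch above, a map pair becomes t[key] = value in the
         * resulting Lua table, while a set member becomes t[member] = true. */
        printf("type bytes: %c %c %c\n", array_reply[0], map_reply[0], set_reply[0]);
        return 0;
    }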
@ -282,7 +300,7 @@ void luaReplyToRedisReply(client *c, lua_State *lua) {
|
||||
addReplyBulkCBuffer(c,(char*)lua_tostring(lua,-1),lua_strlen(lua,-1));
|
||||
break;
|
||||
case LUA_TBOOLEAN:
|
||||
addReply(c,lua_toboolean(lua,-1) ? shared.cone : shared.nullbulk);
|
||||
addReply(c,lua_toboolean(lua,-1) ? shared.cone : shared.null[c->resp]);
|
||||
break;
|
||||
case LUA_TNUMBER:
|
||||
addReplyLongLong(c,(long long)lua_tonumber(lua,-1));
|
||||
@ -315,7 +333,7 @@ void luaReplyToRedisReply(client *c, lua_State *lua) {
|
||||
sdsfree(ok);
|
||||
lua_pop(lua,1);
|
||||
} else {
|
||||
void *replylen = addDeferredMultiBulkLength(c);
|
||||
void *replylen = addReplyDeferredLen(c);
|
||||
int j = 1, mbulklen = 0;
|
||||
|
||||
lua_pop(lua,1); /* Discard the 'ok' field value we popped */
|
||||
@ -330,11 +348,11 @@ void luaReplyToRedisReply(client *c, lua_State *lua) {
|
||||
luaReplyToRedisReply(c, lua);
|
||||
mbulklen++;
|
||||
}
|
||||
setDeferredMultiBulkLength(c,replylen,mbulklen);
|
||||
setDeferredArrayLen(c,replylen,mbulklen);
|
||||
}
|
||||
break;
|
||||
default:
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
}
|
||||
lua_pop(lua,1);
|
||||
}
|
||||
@ -442,6 +460,7 @@ int luaRedisGenericCommand(lua_State *lua, int raise_error) {
|
||||
/* Setup our fake client for command execution */
|
||||
c->argv = argv;
|
||||
c->argc = argc;
|
||||
c->user = server.lua_caller->user;
|
||||
|
||||
/* Log the command if debugging is active. */
|
||||
if (ldb.active && ldb.step) {
|
||||
@ -479,10 +498,24 @@ int luaRedisGenericCommand(lua_State *lua, int raise_error) {
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
/* Check the ACLs. */
|
||||
int acl_retval = ACLCheckCommandPerm(c);
|
||||
if (acl_retval != ACL_OK) {
|
||||
if (acl_retval == ACL_DENIED_CMD)
|
||||
luaPushError(lua, "The user executing the script can't run this "
|
||||
"command or subcommand");
|
||||
else
|
||||
luaPushError(lua, "The user executing the script can't access "
|
||||
"at least one of the keys mentioned in the "
|
||||
"command arguments");
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
/* Write commands are forbidden against read-only slaves, or if a
|
||||
* command marked as non-deterministic was already called in the context
|
||||
* of this script. */
|
||||
if (cmd->flags & CMD_WRITE) {
|
||||
int deny_write_type = writeCommandsDeniedByDiskError();
|
||||
if (server.lua_random_dirty && !server.lua_replicate_commands) {
|
||||
luaPushError(lua,
|
||||
"Write commands not allowed after non deterministic commands. Call redis.replicate_commands() at the start of your script in order to switch to single commands replication mode.");
|
||||
@ -493,11 +526,16 @@ int luaRedisGenericCommand(lua_State *lua, int raise_error) {
|
||||
{
|
||||
luaPushError(lua, shared.roslaveerr->ptr);
|
||||
goto cleanup;
|
||||
} else if (server.stop_writes_on_bgsave_err &&
|
||||
server.saveparamslen > 0 &&
|
||||
server.lastbgsave_status == C_ERR)
|
||||
{
|
||||
luaPushError(lua, shared.bgsaveerr->ptr);
|
||||
} else if (deny_write_type != DISK_ERROR_TYPE_NONE) {
|
||||
if (deny_write_type == DISK_ERROR_TYPE_RDB) {
|
||||
luaPushError(lua, shared.bgsaveerr->ptr);
|
||||
} else {
|
||||
sds aof_write_err = sdscatfmt(sdsempty(),
|
||||
"-MISCONF Errors writing to the AOF file: %s\r\n",
|
||||
strerror(server.aof_last_write_errno));
|
||||
luaPushError(lua, aof_write_err);
|
||||
sdsfree(aof_write_err);
|
||||
}
|
||||
goto cleanup;
|
||||
}
|
||||
}
|
||||
@ -506,10 +544,13 @@ int luaRedisGenericCommand(lua_State *lua, int raise_error) {
|
||||
* could enlarge the memory usage are not allowed, but only if this is the
|
||||
* first write in the context of this script, otherwise we can't stop
|
||||
* in the middle. */
|
||||
if (server.maxmemory && server.lua_write_dirty == 0 &&
|
||||
if (server.maxmemory && /* Maxmemory is actually enabled. */
|
||||
!server.loading && /* Don't care about mem if loading. */
|
||||
!server.masterhost && /* Slave must execute the script. */
|
||||
server.lua_write_dirty == 0 && /* Script had no side effects so far. */
|
||||
(cmd->flags & CMD_DENYOOM))
|
||||
{
|
||||
if (freeMemoryIfNeeded() == C_ERR) {
|
||||
if (getMaxmemoryState(NULL,NULL,NULL,NULL) != C_OK) {
|
||||
luaPushError(lua, shared.oomerr->ptr);
|
||||
goto cleanup;
|
||||
}
|
||||
@ -628,6 +669,8 @@ cleanup:
|
||||
argv_size = 0;
|
||||
}
|
||||
|
||||
c->user = NULL;
|
||||
|
||||
if (raise_error) {
|
||||
/* If we are here we should have an error in the stack, in the
|
||||
* form of a table with an "err" field. Extract the string to
|
||||
@ -768,7 +811,7 @@ int luaRedisSetReplCommand(lua_State *lua) {
|
||||
|
||||
flags = lua_tonumber(lua,-1);
|
||||
if ((flags & ~(PROPAGATE_AOF|PROPAGATE_REPL)) != 0) {
|
||||
lua_pushstring(lua, "Invalid replication flags. Use REPL_AOF, REPL_SLAVE, REPL_ALL or REPL_NONE.");
|
||||
lua_pushstring(lua, "Invalid replication flags. Use REPL_AOF, REPL_REPLICA, REPL_ALL or REPL_NONE.");
|
||||
return lua_error(lua);
|
||||
}
|
||||
server.lua_repl = flags;
|
||||
@ -908,7 +951,6 @@ void scriptingInit(int setup) {
|
||||
server.lua_client = NULL;
|
||||
server.lua_caller = NULL;
|
||||
server.lua_timedout = 0;
|
||||
server.lua_always_replicate_commands = 0; /* Only DEBUG can change it.*/
|
||||
ldbInit();
|
||||
}
|
||||
|
||||
@ -919,6 +961,7 @@ void scriptingInit(int setup) {
|
||||
* This is useful for replication, as we need to replicate EVALSHA
|
||||
* as EVAL, so we need to remember the associated script. */
|
||||
server.lua_scripts = dictCreate(&shaScriptObjectDictType,NULL);
|
||||
server.lua_scripts_mem = 0;
|
||||
|
||||
/* Register the redis commands table and fields */
|
||||
lua_newtable(lua);
|
||||
@ -989,6 +1032,10 @@ void scriptingInit(int setup) {
|
||||
lua_pushnumber(lua,PROPAGATE_REPL);
|
||||
lua_settable(lua,-3);
|
||||
|
||||
lua_pushstring(lua,"REPL_REPLICA");
|
||||
lua_pushnumber(lua,PROPAGATE_REPL);
|
||||
lua_settable(lua,-3);
|
||||
|
||||
lua_pushstring(lua,"REPL_ALL");
|
||||
lua_pushnumber(lua,PROPAGATE_AOF|PROPAGATE_REPL);
|
||||
lua_settable(lua,-3);
|
||||
@ -1073,6 +1120,7 @@ void scriptingInit(int setup) {
|
||||
* This function is used in order to reset the scripting environment. */
|
||||
void scriptingRelease(void) {
|
||||
dictRelease(server.lua_scripts);
|
||||
server.lua_scripts_mem = 0;
|
||||
lua_close(server.lua);
|
||||
}
|
||||
|
||||
@ -1207,17 +1255,19 @@ sds luaCreateFunction(client *c, lua_State *lua, robj *body) {
|
||||
* EVALSHA commands as EVAL using the original script. */
|
||||
int retval = dictAdd(server.lua_scripts,sha,body);
|
||||
serverAssertWithInfo(c ? c : server.lua_client,NULL,retval == DICT_OK);
|
||||
server.lua_scripts_mem += sdsZmallocSize(sha) + getStringObjectSdsUsedMemory(body);
|
||||
incrRefCount(body);
|
||||
return sha;
|
||||
}
|
||||
|
||||
/* This is the Lua script "count" hook that we use to detect scripts timeout. */
|
||||
void luaMaskCountHook(lua_State *lua, lua_Debug *ar) {
|
||||
long long elapsed;
|
||||
long long elapsed = mstime() - server.lua_time_start;
|
||||
UNUSED(ar);
|
||||
UNUSED(lua);
|
||||
|
||||
elapsed = mstime() - server.lua_time_start;
|
||||
/* Set the timeout condition if not already set and the maximum
|
||||
* execution time was reached. */
|
||||
if (elapsed >= server.lua_time_limit && server.lua_timedout == 0) {
|
||||
serverLog(LL_WARNING,"Lua slow script detected: still in execution after %lld milliseconds. You can try killing the script using the SCRIPT KILL command.",elapsed);
|
||||
server.lua_timedout = 1;
|
||||
@ -1226,7 +1276,7 @@ void luaMaskCountHook(lua_State *lua, lua_Debug *ar) {
|
||||
* we need to mask the client executing the script from the event loop.
|
||||
* If we don't do that the client may disconnect and could no longer be
|
||||
* here when the EVAL command will return. */
|
||||
aeDeleteFileEvent(server.el, server.lua_caller->fd, AE_READABLE);
|
||||
protectClient(server.lua_caller);
|
||||
}
|
||||
if (server.lua_timedout) processEventsWhileBlocked();
|
||||
if (server.lua_kill) {
|
||||
@ -1240,6 +1290,7 @@ void evalGenericCommand(client *c, int evalsha) {
|
||||
lua_State *lua = server.lua;
|
||||
char funcname[43];
|
||||
long long numkeys;
|
||||
long long initial_server_dirty = server.dirty;
|
||||
int delhook = 0, err;
|
||||
|
||||
/* When we replicate whole scripts, we want the same PRNG sequence at
|
||||
@ -1336,9 +1387,7 @@ void evalGenericCommand(client *c, int evalsha) {
|
||||
server.lua_caller = c;
|
||||
server.lua_time_start = mstime();
|
||||
server.lua_kill = 0;
|
||||
if (server.lua_time_limit > 0 && server.masterhost == NULL &&
|
||||
ldb.active == 0)
|
||||
{
|
||||
if (server.lua_time_limit > 0 && ldb.active == 0) {
|
||||
lua_sethook(lua,luaMaskCountHook,LUA_MASKCOUNT,100000);
|
||||
delhook = 1;
|
||||
} else if (ldb.active) {
|
||||
@ -1355,10 +1404,11 @@ void evalGenericCommand(client *c, int evalsha) {
|
||||
if (delhook) lua_sethook(lua,NULL,0,0); /* Disable hook */
|
||||
if (server.lua_timedout) {
|
||||
server.lua_timedout = 0;
|
||||
/* Restore the readable handler that was unregistered when the
|
||||
* script timeout was detected. */
|
||||
aeCreateFileEvent(server.el,c->fd,AE_READABLE,
|
||||
readQueryFromClient,c);
|
||||
/* Restore the client that was protected when the script timeout
|
||||
* was detected. */
|
||||
unprotectClient(c);
|
||||
if (server.masterhost && server.master)
|
||||
queueClientForReprocessing(server.master);
|
||||
}
|
||||
server.lua_caller = NULL;
|
||||
|
||||
@ -1422,9 +1472,21 @@ void evalGenericCommand(client *c, int evalsha) {
|
||||
|
||||
replicationScriptCacheAdd(c->argv[1]->ptr);
|
||||
serverAssertWithInfo(c,NULL,script != NULL);
|
||||
rewriteClientCommandArgument(c,0,
|
||||
resetRefCount(createStringObject("EVAL",4)));
|
||||
rewriteClientCommandArgument(c,1,script);
|
||||
|
||||
/* If the script did not produce any changes in the dataset we want
|
||||
* just to replicate it as SCRIPT LOAD, otherwise we risk running
|
||||
* an aborted script on slaves (that may then produce results there)
|
||||
* or just running a CPU costly read-only script on the slaves. */
|
||||
if (server.dirty == initial_server_dirty) {
|
||||
rewriteClientCommandVector(c,3,
|
||||
resetRefCount(createStringObject("SCRIPT",6)),
|
||||
resetRefCount(createStringObject("LOAD",4)),
|
||||
script);
|
||||
} else {
|
||||
rewriteClientCommandArgument(c,0,
|
||||
resetRefCount(createStringObject("EVAL",4)));
|
||||
rewriteClientCommandArgument(c,1,script);
|
||||
}
|
||||
forceCommandPropagation(c,PROPAGATE_REPL|PROPAGATE_AOF);
|
||||
}
|
||||
}
|
||||
@ -1459,7 +1521,7 @@ void scriptCommand(client *c) {
|
||||
const char *help[] = {
|
||||
"DEBUG (yes|sync|no) -- Set the debug mode for subsequent scripts executed.",
|
||||
"EXISTS <sha1> [<sha1> ...] -- Return information about the existence of the scripts in the script cache.",
|
||||
"FLUSH -- Flush the Lua scripts cache. Very dangerous on slaves.",
|
||||
"FLUSH -- Flush the Lua scripts cache. Very dangerous on replicas.",
|
||||
"KILL -- Kill the currently executing Lua script.",
|
||||
"LOAD <script> -- Load a script into the scripts cache, without executing it.",
|
||||
NULL
|
||||
@ -1473,7 +1535,7 @@ NULL
|
||||
} else if (c->argc >= 2 && !strcasecmp(c->argv[1]->ptr,"exists")) {
|
||||
int j;
|
||||
|
||||
addReplyMultiBulkLen(c, c->argc-2);
|
||||
addReplyArrayLen(c, c->argc-2);
|
||||
for (j = 2; j < c->argc; j++) {
|
||||
if (dictFind(server.lua_scripts,c->argv[j]->ptr))
|
||||
addReply(c,shared.cone);
|
||||
@ -1488,6 +1550,8 @@ NULL
|
||||
} else if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,"kill")) {
|
||||
if (server.lua_caller == NULL) {
|
||||
addReplySds(c,sdsnew("-NOTBUSY No scripts in execution right now.\r\n"));
|
||||
} else if (server.lua_caller->flags & CLIENT_MASTER) {
|
||||
addReplySds(c,sdsnew("-UNKILLABLE The busy script was sent by a master instance in the context of replication and cannot be killed.\r\n"));
|
||||
} else if (server.lua_write_dirty) {
|
||||
addReplySds(c,sdsnew("-UNKILLABLE Sorry the script already executed write commands against the dataset. You can either wait the script termination or kill the server in a hard way using the SHUTDOWN NOSAVE command.\r\n"));
|
||||
} else {
|
||||
@ -1716,7 +1780,7 @@ int ldbRemoveChild(pid_t pid) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Return the number of children we still did not received termination
|
||||
/* Return the number of children we still did not receive termination
|
||||
* acknowledge via wait() in the parent process. */
|
||||
int ldbPendingChildren(void) {
|
||||
return listLength(ldb.children);
|
||||
|
@ -695,7 +695,7 @@ sds sdscatfmt(sds s, char const *fmt, ...) {
|
||||
* s = sdstrim(s,"Aa. :");
|
||||
* printf("%s\n", s);
|
||||
*
|
||||
* Output will be just "Hello World".
|
||||
* Output will be just "HelloWorld".
|
||||
*/
|
||||
sds sdstrim(sds s, const char *cset) {
|
||||
char *start, *end, *sp, *ep;
|
||||
|
@ -452,13 +452,16 @@ struct redisCommand sentinelcmds[] = {
|
||||
{"info",sentinelInfoCommand,-1,"",0,NULL,0,0,0,0,0},
|
||||
{"role",sentinelRoleCommand,1,"l",0,NULL,0,0,0,0,0},
|
||||
{"client",clientCommand,-2,"rs",0,NULL,0,0,0,0,0},
|
||||
{"shutdown",shutdownCommand,-1,"",0,NULL,0,0,0,0,0}
|
||||
{"shutdown",shutdownCommand,-1,"",0,NULL,0,0,0,0,0},
|
||||
{"auth",authCommand,2,"sltF",0,NULL,0,0,0,0,0},
|
||||
{"hello",helloCommand,-2,"sF",0,NULL,0,0,0,0,0}
|
||||
};
|
||||
|
||||
/* This function overwrites a few normal Redis config default with Sentinel
|
||||
* specific defaults. */
|
||||
void initSentinelConfig(void) {
|
||||
server.port = REDIS_SENTINEL_PORT;
|
||||
server.protected_mode = 0; /* Sentinel must be exposed. */
|
||||
}
|
||||
|
||||
/* Perform the Sentinel mode initialization. */
|
||||
@ -883,17 +886,17 @@ void sentinelPendingScriptsCommand(client *c) {
|
||||
listNode *ln;
|
||||
listIter li;
|
||||
|
||||
addReplyMultiBulkLen(c,listLength(sentinel.scripts_queue));
|
||||
addReplyArrayLen(c,listLength(sentinel.scripts_queue));
|
||||
listRewind(sentinel.scripts_queue,&li);
|
||||
while ((ln = listNext(&li)) != NULL) {
|
||||
sentinelScriptJob *sj = ln->value;
|
||||
int j = 0;
|
||||
|
||||
addReplyMultiBulkLen(c,10);
|
||||
addReplyMapLen(c,5);
|
||||
|
||||
addReplyBulkCString(c,"argv");
|
||||
while (sj->argv[j]) j++;
|
||||
addReplyMultiBulkLen(c,j);
|
||||
addReplyArrayLen(c,j);
|
||||
j = 0;
|
||||
while (sj->argv[j]) addReplyBulkCString(c,sj->argv[j++]);
|
||||
|
||||
@ -1687,16 +1690,18 @@ char *sentinelHandleConfiguration(char **argv, int argc) {
|
||||
ri = sentinelGetMasterByName(argv[1]);
|
||||
if (!ri) return "No such master with specified name.";
|
||||
ri->leader_epoch = strtoull(argv[2],NULL,10);
|
||||
} else if (!strcasecmp(argv[0],"known-slave") && argc == 4) {
|
||||
} else if ((!strcasecmp(argv[0],"known-slave") ||
|
||||
!strcasecmp(argv[0],"known-replica")) && argc == 4)
|
||||
{
|
||||
sentinelRedisInstance *slave;
|
||||
|
||||
/* known-slave <name> <ip> <port> */
|
||||
/* known-replica <name> <ip> <port> */
|
||||
ri = sentinelGetMasterByName(argv[1]);
|
||||
if (!ri) return "No such master with specified name.";
|
||||
if ((slave = createSentinelRedisInstance(NULL,SRI_SLAVE,argv[2],
|
||||
atoi(argv[3]), ri->quorum, ri)) == NULL)
|
||||
{
|
||||
return "Wrong hostname or port for slave.";
|
||||
return "Wrong hostname or port for replica.";
|
||||
}
|
||||
} else if (!strcasecmp(argv[0],"known-sentinel") &&
|
||||
(argc == 4 || argc == 5)) {
|
||||
@ -1854,7 +1859,7 @@ void rewriteConfigSentinelOption(struct rewriteConfigState *state) {
|
||||
if (sentinelAddrIsEqual(slave_addr,master_addr))
|
||||
slave_addr = master->addr;
|
||||
line = sdscatprintf(sdsempty(),
|
||||
"sentinel known-slave %s %s %d",
|
||||
"sentinel known-replica %s %s %d",
|
||||
master->name, slave_addr->ip, slave_addr->port);
|
||||
rewriteConfigRewriteLine(state,"sentinel",line,1);
|
||||
}
|
||||
@ -1939,12 +1944,25 @@ werr:
|
||||
/* Send the AUTH command with the specified master password if needed.
|
||||
* Note that for slaves the password set for the master is used.
|
||||
*
|
||||
* In case this Sentinel requires a password as well, via the "requirepass"
|
||||
* configuration directive, we assume we should use the local password in
|
||||
* order to authenticate when connecting with the other Sentinels as well.
|
||||
* So basically all the Sentinels share the same password and use it to
|
||||
* authenticate reciprocally.
|
||||
*
|
||||
* We don't check at all if the command was successfully transmitted
|
||||
* to the instance as if it fails Sentinel will detect the instance down,
|
||||
* will disconnect and reconnect the link and so forth. */
|
||||
void sentinelSendAuthIfNeeded(sentinelRedisInstance *ri, redisAsyncContext *c) {
|
||||
char *auth_pass = (ri->flags & SRI_MASTER) ? ri->auth_pass :
|
||||
ri->master->auth_pass;
|
||||
char *auth_pass = NULL;
|
||||
|
||||
if (ri->flags & SRI_MASTER) {
|
||||
auth_pass = ri->auth_pass;
|
||||
} else if (ri->flags & SRI_SLAVE) {
|
||||
auth_pass = ri->master->auth_pass;
|
||||
} else if (ri->flags & SRI_SENTINEL) {
|
||||
auth_pass = ACLDefaultUserFirstPassword();
|
||||
}
|
||||
|
||||
if (auth_pass) {
|
||||
if (redisAsyncCommand(c, sentinelDiscardReplyCallback, ri, "%s %s",
|
||||
@ -2628,7 +2646,7 @@ int sentinelSendPing(sentinelRedisInstance *ri) {
|
||||
ri->link->last_ping_time = mstime();
|
||||
/* We update the active ping time only if we received the pong for
|
||||
* the previous ping, otherwise we are technically waiting since the
|
||||
* first ping that did not received a reply. */
|
||||
* first ping that did not receive a reply. */
|
||||
if (ri->link->act_ping_time == 0)
|
||||
ri->link->act_ping_time = ri->link->last_ping_time;
|
||||
return 1;
|
||||
@ -2724,7 +2742,7 @@ void addReplySentinelRedisInstance(client *c, sentinelRedisInstance *ri) {
|
||||
void *mbl;
|
||||
int fields = 0;
|
||||
|
||||
mbl = addDeferredMultiBulkLength(c);
|
||||
mbl = addReplyDeferredLen(c);
|
||||
|
||||
addReplyBulkCString(c,"name");
|
||||
addReplyBulkCString(c,ri->name);
|
||||
@ -2905,7 +2923,7 @@ void addReplySentinelRedisInstance(client *c, sentinelRedisInstance *ri) {
|
||||
fields++;
|
||||
}
|
||||
|
||||
setDeferredMultiBulkLength(c,mbl,fields*2);
|
||||
setDeferredMapLen(c,mbl,fields);
|
||||
}
|
||||
|
||||
/* Output a number of instances contained inside a dictionary as
|
||||
@ -2915,7 +2933,7 @@ void addReplyDictOfRedisInstances(client *c, dict *instances) {
|
||||
dictEntry *de;
|
||||
|
||||
di = dictGetIterator(instances);
|
||||
addReplyMultiBulkLen(c,dictSize(instances));
|
||||
addReplyArrayLen(c,dictSize(instances));
|
||||
while((de = dictNext(di)) != NULL) {
|
||||
sentinelRedisInstance *ri = dictGetVal(de);
|
||||
|
||||
@ -2978,8 +2996,10 @@ void sentinelCommand(client *c) {
|
||||
if ((ri = sentinelGetMasterByNameOrReplyError(c,c->argv[2]))
|
||||
== NULL) return;
|
||||
addReplySentinelRedisInstance(c,ri);
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"slaves")) {
|
||||
/* SENTINEL SLAVES <master-name> */
|
||||
} else if (!strcasecmp(c->argv[1]->ptr,"slaves") ||
|
||||
!strcasecmp(c->argv[1]->ptr,"replicas"))
|
||||
{
|
||||
/* SENTINEL REPLICAS <master-name> */
|
||||
sentinelRedisInstance *ri;
|
||||
|
||||
if (c->argc != 3) goto numargserr;
|
||||
@ -3043,7 +3063,7 @@ void sentinelCommand(client *c) {
|
||||
|
||||
/* Reply with a three-elements multi-bulk reply:
|
||||
* down state, leader, vote epoch. */
|
||||
addReplyMultiBulkLen(c,3);
|
||||
addReplyArrayLen(c,3);
|
||||
addReply(c, isdown ? shared.cone : shared.czero);
|
||||
addReplyBulkCString(c, leader ? leader : "*");
|
||||
addReplyLongLong(c, (long long)leader_epoch);
|
||||
@ -3059,11 +3079,11 @@ void sentinelCommand(client *c) {
|
||||
if (c->argc != 3) goto numargserr;
|
||||
ri = sentinelGetMasterByName(c->argv[2]->ptr);
|
||||
if (ri == NULL) {
|
||||
addReply(c,shared.nullmultibulk);
|
||||
addReplyNullArray(c);
|
||||
} else {
|
||||
sentinelAddr *addr = sentinelGetCurrentMasterAddress(ri);
|
||||
|
||||
addReplyMultiBulkLen(c,2);
|
||||
addReplyArrayLen(c,2);
|
||||
addReplyBulkCString(c,addr->ip);
|
||||
addReplyBulkLongLong(c,addr->port);
|
||||
}
|
||||
@ -3079,7 +3099,7 @@ void sentinelCommand(client *c) {
|
||||
return;
|
||||
}
|
||||
if (sentinelSelectSlave(ri) == NULL) {
|
||||
addReplySds(c,sdsnew("-NOGOODSLAVE No suitable slave to promote\r\n"));
|
||||
addReplySds(c,sdsnew("-NOGOODSLAVE No suitable replica to promote\r\n"));
|
||||
return;
|
||||
}
|
||||
serverLog(LL_WARNING,"Executing user requested FAILOVER of '%s'",
|
||||
@ -3213,7 +3233,7 @@ void sentinelCommand(client *c) {
|
||||
* 3.) other master name
|
||||
* ...
|
||||
*/
|
||||
addReplyMultiBulkLen(c,dictSize(masters_local) * 2);
|
||||
addReplyArrayLen(c,dictSize(masters_local) * 2);
|
||||
|
||||
dictIterator *di;
|
||||
dictEntry *de;
|
||||
@ -3221,25 +3241,25 @@ void sentinelCommand(client *c) {
|
||||
while ((de = dictNext(di)) != NULL) {
|
||||
sentinelRedisInstance *ri = dictGetVal(de);
|
||||
addReplyBulkCBuffer(c,ri->name,strlen(ri->name));
|
||||
addReplyMultiBulkLen(c,dictSize(ri->slaves) + 1); /* +1 for self */
|
||||
addReplyMultiBulkLen(c,2);
|
||||
addReplyArrayLen(c,dictSize(ri->slaves) + 1); /* +1 for self */
|
||||
addReplyArrayLen(c,2);
|
||||
addReplyLongLong(c, now - ri->info_refresh);
|
||||
if (ri->info)
|
||||
addReplyBulkCBuffer(c,ri->info,sdslen(ri->info));
|
||||
else
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
|
||||
dictIterator *sdi;
|
||||
dictEntry *sde;
|
||||
sdi = dictGetIterator(ri->slaves);
|
||||
while ((sde = dictNext(sdi)) != NULL) {
|
||||
sentinelRedisInstance *sri = dictGetVal(sde);
|
||||
addReplyMultiBulkLen(c,2);
|
||||
addReplyArrayLen(c,2);
|
||||
addReplyLongLong(c, now - sri->info_refresh);
|
||||
if (sri->info)
|
||||
addReplyBulkCBuffer(c,sri->info,sdslen(sri->info));
|
||||
else
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
}
|
||||
dictReleaseIterator(sdi);
|
||||
}
|
||||
@ -3261,9 +3281,9 @@ void sentinelCommand(client *c) {
|
||||
sentinel.simfailure_flags |=
|
||||
SENTINEL_SIMFAILURE_CRASH_AFTER_PROMOTION;
|
||||
serverLog(LL_WARNING,"Failure simulation: this Sentinel "
|
||||
"will crash after promoting the selected slave to master");
|
||||
"will crash after promoting the selected replica to master");
|
||||
} else if (!strcasecmp(c->argv[j]->ptr,"help")) {
|
||||
addReplyMultiBulkLen(c,2);
|
||||
addReplyArrayLen(c,2);
|
||||
addReplyBulkCString(c,"crash-after-election");
|
||||
addReplyBulkCString(c,"crash-after-promotion");
|
||||
} else {
|
||||
@ -3363,9 +3383,9 @@ void sentinelRoleCommand(client *c) {
|
||||
dictIterator *di;
|
||||
dictEntry *de;
|
||||
|
||||
addReplyMultiBulkLen(c,2);
|
||||
addReplyArrayLen(c,2);
|
||||
addReplyBulkCBuffer(c,"sentinel",8);
|
||||
addReplyMultiBulkLen(c,dictSize(sentinel.masters));
|
||||
addReplyArrayLen(c,dictSize(sentinel.masters));
|
||||
|
||||
di = dictGetIterator(sentinel.masters);
|
||||
while((de = dictNext(di)) != NULL) {
|
||||
@ -3569,7 +3589,7 @@ void sentinelCheckSubjectivelyDown(sentinelRedisInstance *ri) {
|
||||
(mstime() - ri->link->cc_conn_time) >
|
||||
SENTINEL_MIN_LINK_RECONNECT_PERIOD &&
|
||||
ri->link->act_ping_time != 0 && /* There is a pending ping... */
|
||||
/* The pending ping is delayed, and we did not received
|
||||
/* The pending ping is delayed, and we did not receive
|
||||
* error replies as well. */
|
||||
(mstime() - ri->link->act_ping_time) > (ri->down_after_period/2) &&
|
||||
(mstime() - ri->link->last_pong_time) > (ri->down_after_period/2))
|
||||
@ -3725,7 +3745,7 @@ void sentinelAskMasterStateToOtherSentinels(sentinelRedisInstance *master, int f
|
||||
*
|
||||
* 1) We believe it is down, or there is a failover in progress.
|
||||
* 2) Sentinel is connected.
|
||||
* 3) We did not received the info within SENTINEL_ASK_PERIOD ms. */
|
||||
* 3) We did not receive the info within SENTINEL_ASK_PERIOD ms. */
|
||||
if ((master->flags & SRI_S_DOWN) == 0) continue;
|
||||
if (ri->link->disconnected) continue;
|
||||
if (!(flags & SENTINEL_ASK_FORCED) &&
|
||||
|
1594
src/server.c
File diff suppressed because it is too large
232
src/server.h
@ -78,12 +78,14 @@ typedef long long mstime_t; /* millisecond time type. */
|
||||
#define C_ERR -1
|
||||
|
||||
/* Static server configuration */
|
||||
#define CONFIG_DEFAULT_HZ 10 /* Time interrupt calls/sec. */
|
||||
#define CONFIG_DEFAULT_DYNAMIC_HZ 1 /* Adapt hz to # of clients.*/
|
||||
#define CONFIG_DEFAULT_HZ 10 /* Time interrupt calls/sec. */
|
||||
#define CONFIG_MIN_HZ 1
|
||||
#define CONFIG_MAX_HZ 500
|
||||
#define CONFIG_DEFAULT_SERVER_PORT 6379 /* TCP port */
|
||||
#define CONFIG_DEFAULT_TCP_BACKLOG 511 /* TCP listen backlog */
|
||||
#define CONFIG_DEFAULT_CLIENT_TIMEOUT 0 /* default client timeout: infinite */
|
||||
#define MAX_CLIENTS_PER_CLOCK_TICK 200 /* HZ is adapted based on that. */
|
||||
#define CONFIG_DEFAULT_SERVER_PORT 6379 /* TCP port. */
|
||||
#define CONFIG_DEFAULT_TCP_BACKLOG 511 /* TCP listen backlog. */
|
||||
#define CONFIG_DEFAULT_CLIENT_TIMEOUT 0 /* Default client timeout: infinite */
|
||||
#define CONFIG_DEFAULT_DBNUM 16
|
||||
#define CONFIG_MAX_LINE 1024
|
||||
#define CRON_DBS_PER_CALL 16
|
||||
@ -91,7 +93,7 @@ typedef long long mstime_t; /* millisecond time type. */
|
||||
#define PROTO_SHARED_SELECT_CMDS 10
|
||||
#define OBJ_SHARED_INTEGERS 10000
|
||||
#define OBJ_SHARED_BULKHDR_LEN 32
|
||||
#define LOG_MAX_LEN 1024 /* Default maximum length of syslog messages */
|
||||
#define LOG_MAX_LEN 1024 /* Default maximum length of syslog messages.*/
|
||||
#define AOF_REWRITE_PERC 100
|
||||
#define AOF_REWRITE_MIN_SIZE (64*1024*1024)
|
||||
#define AOF_REWRITE_ITEMS_PER_CMD 64
|
||||
@ -119,6 +121,7 @@ typedef long long mstime_t; /* millisecond time type. */
|
||||
#define CONFIG_DEFAULT_UNIX_SOCKET_PERM 0
|
||||
#define CONFIG_DEFAULT_TCP_KEEPALIVE 300
|
||||
#define CONFIG_DEFAULT_PROTECTED_MODE 1
|
||||
#define CONFIG_DEFAULT_GOPHER_ENABLED 0
|
||||
#define CONFIG_DEFAULT_LOGFILE ""
|
||||
#define CONFIG_DEFAULT_SYSLOG_ENABLED 0
|
||||
#define CONFIG_DEFAULT_STOP_WRITES_ON_BGSAVE_ERROR 1
|
||||
@ -129,6 +132,7 @@ typedef long long mstime_t; /* millisecond time type. */
|
||||
#define CONFIG_DEFAULT_REPL_DISKLESS_SYNC_DELAY 5
|
||||
#define CONFIG_DEFAULT_SLAVE_SERVE_STALE_DATA 1
|
||||
#define CONFIG_DEFAULT_SLAVE_READ_ONLY 1
|
||||
#define CONFIG_DEFAULT_SLAVE_IGNORE_MAXMEMORY 1
|
||||
#define CONFIG_DEFAULT_SLAVE_ANNOUNCE_IP NULL
|
||||
#define CONFIG_DEFAULT_SLAVE_ANNOUNCE_PORT 0
|
||||
#define CONFIG_DEFAULT_REPL_DISABLE_TCP_NODELAY 0
|
||||
@ -145,6 +149,7 @@ typedef long long mstime_t; /* millisecond time type. */
|
||||
#define CONFIG_DEFAULT_RDB_SAVE_INCREMENTAL_FSYNC 1
|
||||
#define CONFIG_DEFAULT_MIN_SLAVES_TO_WRITE 0
|
||||
#define CONFIG_DEFAULT_MIN_SLAVES_MAX_LAG 10
|
||||
#define CONFIG_DEFAULT_ACL_FILENAME ""
|
||||
#define NET_IP_STR_LEN 46 /* INET6_ADDRSTRLEN is 46, but we need to be sure */
|
||||
#define NET_PEER_ID_LEN (NET_IP_STR_LEN+32) /* Must be enough for ip:port */
|
||||
#define CONFIG_BINDADDR_MAX 16
|
||||
@ -199,22 +204,47 @@ typedef long long mstime_t; /* millisecond time type. */
|
||||
|
||||
/* Command flags. Please check the command table defined in the redis.c file
|
||||
* for more information about the meaning of every flag. */
|
||||
#define CMD_WRITE (1<<0) /* "w" flag */
|
||||
#define CMD_READONLY (1<<1) /* "r" flag */
|
||||
#define CMD_DENYOOM (1<<2) /* "m" flag */
|
||||
#define CMD_MODULE (1<<3) /* Command exported by module. */
|
||||
#define CMD_ADMIN (1<<4) /* "a" flag */
|
||||
#define CMD_PUBSUB (1<<5) /* "p" flag */
|
||||
#define CMD_NOSCRIPT (1<<6) /* "s" flag */
|
||||
#define CMD_RANDOM (1<<7) /* "R" flag */
|
||||
#define CMD_SORT_FOR_SCRIPT (1<<8) /* "S" flag */
|
||||
#define CMD_LOADING (1<<9) /* "l" flag */
|
||||
#define CMD_STALE (1<<10) /* "t" flag */
|
||||
#define CMD_SKIP_MONITOR (1<<11) /* "M" flag */
|
||||
#define CMD_ASKING (1<<12) /* "k" flag */
|
||||
#define CMD_FAST (1<<13) /* "F" flag */
|
||||
#define CMD_MODULE_GETKEYS (1<<14) /* Use the modules getkeys interface. */
|
||||
#define CMD_MODULE_NO_CLUSTER (1<<15) /* Deny on Redis Cluster. */
|
||||
#define CMD_WRITE (1ULL<<0) /* "write" flag */
|
||||
#define CMD_READONLY (1ULL<<1) /* "read-only" flag */
|
||||
#define CMD_DENYOOM (1ULL<<2) /* "use-memory" flag */
|
||||
#define CMD_MODULE (1ULL<<3) /* Command exported by module. */
|
||||
#define CMD_ADMIN (1ULL<<4) /* "admin" flag */
|
||||
#define CMD_PUBSUB (1ULL<<5) /* "pub-sub" flag */
|
||||
#define CMD_NOSCRIPT (1ULL<<6) /* "no-script" flag */
|
||||
#define CMD_RANDOM (1ULL<<7) /* "random" flag */
|
||||
#define CMD_SORT_FOR_SCRIPT (1ULL<<8) /* "to-sort" flag */
|
||||
#define CMD_LOADING (1ULL<<9) /* "ok-loading" flag */
|
||||
#define CMD_STALE (1ULL<<10) /* "ok-stale" flag */
|
||||
#define CMD_SKIP_MONITOR (1ULL<<11) /* "no-monitor" flag */
|
||||
#define CMD_ASKING (1ULL<<12) /* "cluster-asking" flag */
|
||||
#define CMD_FAST (1ULL<<13) /* "fast" flag */
|
||||
|
||||
/* Command flags used by the module system. */
|
||||
#define CMD_MODULE_GETKEYS (1ULL<<14) /* Use the modules getkeys interface. */
|
||||
#define CMD_MODULE_NO_CLUSTER (1ULL<<15) /* Deny on Redis Cluster. */
|
||||
|
||||
/* Command flags that describe ACLs categories. */
|
||||
#define CMD_CATEGORY_KEYSPACE (1ULL<<16)
|
||||
#define CMD_CATEGORY_READ (1ULL<<17)
|
||||
#define CMD_CATEGORY_WRITE (1ULL<<18)
|
||||
#define CMD_CATEGORY_SET (1ULL<<19)
|
||||
#define CMD_CATEGORY_SORTEDSET (1ULL<<20)
|
||||
#define CMD_CATEGORY_LIST (1ULL<<21)
|
||||
#define CMD_CATEGORY_HASH (1ULL<<22)
|
||||
#define CMD_CATEGORY_STRING (1ULL<<23)
|
||||
#define CMD_CATEGORY_BITMAP (1ULL<<24)
|
||||
#define CMD_CATEGORY_HYPERLOGLOG (1ULL<<25)
|
||||
#define CMD_CATEGORY_GEO (1ULL<<26)
|
||||
#define CMD_CATEGORY_STREAM (1ULL<<27)
|
||||
#define CMD_CATEGORY_PUBSUB (1ULL<<28)
|
||||
#define CMD_CATEGORY_ADMIN (1ULL<<29)
|
||||
#define CMD_CATEGORY_FAST (1ULL<<30)
|
||||
#define CMD_CATEGORY_SLOW (1ULL<<31)
|
||||
#define CMD_CATEGORY_BLOCKING (1ULL<<32)
|
||||
#define CMD_CATEGORY_DANGEROUS (1ULL<<33)
|
||||
#define CMD_CATEGORY_CONNECTION (1ULL<<34)
|
||||
#define CMD_CATEGORY_TRANSACTION (1ULL<<35)
|
||||
#define CMD_CATEGORY_SCRIPTING (1ULL<<36)
|
||||
|
||||
/* AOF states */
|
||||
#define AOF_OFF 0 /* AOF is off */
|
||||
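Aside (illustration, not part of the patch): the flag macros above move from `int` with `1<<n` to `uint64_t` with `1ULL<<n` because the new ACL category bits go past bit 31, and shifting a plain 32-bit `int` that far is undefined behaviour. A standalone sketch of the difference:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* 1 << 32 would overflow a 32-bit int (undefined behaviour);
         * the 1ULL form is evaluated as unsigned 64-bit and is well defined. */
        uint64_t blocking  = 1ULL << 32;  /* cf. CMD_CATEGORY_BLOCKING above */
        uint64_t scripting = 1ULL << 36;  /* cf. CMD_CATEGORY_SCRIPTING above */
        printf("%llu %llu\n",
               (unsigned long long)blocking, (unsigned long long)scripting);
        return 0;
    }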
@ -253,6 +283,7 @@ typedef long long mstime_t; /* millisecond time type. */
|
||||
#define CLIENT_LUA_DEBUG (1<<25) /* Run EVAL in debug mode. */
|
||||
#define CLIENT_LUA_DEBUG_SYNC (1<<26) /* EVAL debugging without fork() */
|
||||
#define CLIENT_MODULE (1<<27) /* Non connected client used by some module. */
|
||||
#define CLIENT_PROTECTED (1<<28) /* Client should not be freed for now. */
|
||||
|
||||
/* Client block type (btype field in client structure)
|
||||
* if CLIENT_BLOCKED flag is set. */
|
||||
@ -650,6 +681,9 @@ typedef struct multiCmd {
|
||||
typedef struct multiState {
|
||||
multiCmd *commands; /* Array of MULTI commands */
|
||||
int count; /* Total number of MULTI commands */
|
||||
int cmd_flags; /* The accumulated command flags OR-ed together.
|
||||
So if at least a command has a given flag, it
|
||||
will be set in this field. */
|
||||
int minreplicas; /* MINREPLICAS for synchronous replication */
|
||||
time_t minreplicas_timeout; /* MINREPLICAS timeout as unixtime. */
|
||||
} multiState;
|
||||
@ -700,14 +734,58 @@ typedef struct readyList {
|
||||
robj *key;
|
||||
} readyList;
|
||||
|
||||
/* This structure represents a Redis user. This is useful for ACLs, the
|
||||
* user is associated to the connection after the connection is authenticated.
|
||||
* If there is no associated user, the connection uses the default user. */
|
||||
#define USER_COMMAND_BITS_COUNT 1024 /* The total number of command bits
|
||||
in the user structure. The last valid
|
||||
command ID we can set in the user
|
||||
is USER_COMMAND_BITS_COUNT-1. */
|
||||
#define USER_FLAG_ENABLED (1<<0) /* The user is active. */
|
||||
#define USER_FLAG_DISABLED (1<<1) /* The user is disabled. */
|
||||
#define USER_FLAG_ALLKEYS (1<<2) /* The user can mention any key. */
|
||||
#define USER_FLAG_ALLCOMMANDS (1<<3) /* The user can run all commands. */
|
||||
#define USER_FLAG_NOPASS (1<<4) /* The user requires no password, any
|
||||
provided password will work. For the
|
||||
default user, this also means that
|
||||
no AUTH is needed, and every
|
||||
connection is immediately
|
||||
authenticated. */
|
||||
typedef struct user {
|
||||
sds name; /* The username as an SDS string. */
|
||||
uint64_t flags; /* See USER_FLAG_* */
|
||||
|
||||
/* The bit in allowed_commands is set if this user has the right to
|
||||
* execute this command. In commands having subcommands, if this bit is
|
||||
* set, then all the subcommands are also available.
|
||||
*
|
||||
* If the bit for a given command is NOT set and the command has
|
||||
* subcommands, Redis will also check allowed_subcommands in order to
|
||||
* understand if the command can be executed. */
|
||||
uint64_t allowed_commands[USER_COMMAND_BITS_COUNT/64];
|
||||
|
||||
/* This array points, for each command ID (corresponding to the command
|
||||
* bit set in allowed_commands), to an array of SDS strings, terminated by
|
||||
* a NULL pointer, with all the sub commands that can be executed for
|
||||
* this command. When no subcommands matching is used, the field is just
|
||||
* set to NULL to avoid allocating USER_COMMAND_BITS_COUNT pointers. */
|
||||
sds **allowed_subcommands;
|
||||
list *passwords; /* A list of SDS valid passwords for this user. */
|
||||
list *patterns; /* A list of allowed key patterns. If this field is NULL
|
||||
the user cannot mention any key in a command, unless
|
||||
the flag ALLKEYS is set in the user. */
|
||||
} user;
|
||||
|
||||
/* With multiplexing we need to take per-client state.
|
||||
* Clients are taken in a linked list. */
|
||||
typedef struct client {
|
||||
uint64_t id; /* Client incremental unique ID. */
|
||||
int fd; /* Client socket. */
|
||||
int resp; /* RESP protocol version. Can be 2 or 3. */
|
||||
redisDb *db; /* Pointer to currently SELECTed DB. */
|
||||
robj *name; /* As set by CLIENT SETNAME. */
|
||||
sds querybuf; /* Buffer we use to accumulate client queries. */
|
||||
size_t qb_pos; /* The position we have read in querybuf. */
|
||||
sds pending_querybuf; /* If this client is flagged as master, this buffer
|
||||
represents the yet not applied portion of the
|
||||
replication stream that we are receiving from
|
||||
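Aside (illustration, not part of the patch): the comments above describe `allowed_commands` as a bitmap of USER_COMMAND_BITS_COUNT bits keyed by command ID. A hedged sketch of how such a bit can be tested; `userHasCommandBit` is a hypothetical helper, not the ACLCheckCommandPerm() implementation from acl.c:

    #include <stdint.h>

    #define USER_COMMAND_BITS_COUNT 1024

    /* Hypothetical helper: check whether command `id` is enabled in a bitmap
     * laid out like user.allowed_commands (uint64_t words, one bit per ID). */
    static int userHasCommandBit(const uint64_t allowed[USER_COMMAND_BITS_COUNT/64],
                                 unsigned long id) {
        if (id >= USER_COMMAND_BITS_COUNT) return 0;
        return (allowed[id / 64] & (1ULL << (id % 64))) != 0;
    }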
@ -716,6 +794,9 @@ typedef struct client {
|
||||
int argc; /* Num of arguments of current command. */
|
||||
robj **argv; /* Arguments of current command. */
|
||||
struct redisCommand *cmd, *lastcmd; /* Last command executed. */
|
||||
user *user; /* User associated with this connection. If the
|
||||
user is set to NULL the connection can do
|
||||
anything (admin). */
|
||||
int reqtype; /* Request protocol type: PROTO_REQ_* */
|
||||
int multibulklen; /* Number of multi bulk arguments left to read. */
|
||||
long bulklen; /* Length of bulk argument in multi bulk request. */
|
||||
@ -727,7 +808,7 @@ typedef struct client {
|
||||
time_t lastinteraction; /* Time of the last interaction, used for timeout */
|
||||
time_t obuf_soft_limit_reached_time;
|
||||
int flags; /* Client flags: CLIENT_* macros. */
|
||||
int authenticated; /* When requirepass is non-NULL. */
|
||||
int authenticated; /* Needed when the default user requires auth. */
|
||||
int replstate; /* Replication state if this is a slave. */
|
||||
int repl_put_online_on_ack; /* Install slave write handler on ACK. */
|
||||
int repldbfd; /* Replication DB file descriptor. */
|
||||
@ -772,14 +853,14 @@ struct moduleLoadQueueEntry {
|
||||
};
|
||||
|
||||
struct sharedObjectsStruct {
|
||||
robj *crlf, *ok, *err, *emptybulk, *czero, *cone, *cnegone, *pong, *space,
|
||||
*colon, *nullbulk, *nullmultibulk, *queued,
|
||||
*emptymultibulk, *wrongtypeerr, *nokeyerr, *syntaxerr, *sameobjecterr,
|
||||
robj *crlf, *ok, *err, *emptybulk, *czero, *cone, *pong, *space,
|
||||
*colon, *queued, *null[4], *nullarray[4],
|
||||
*emptyarray, *wrongtypeerr, *nokeyerr, *syntaxerr, *sameobjecterr,
|
||||
*outofrangeerr, *noscripterr, *loadingerr, *slowscripterr, *bgsaveerr,
|
||||
*masterdownerr, *roslaveerr, *execaborterr, *noautherr, *noreplicaserr,
|
||||
*busykeyerr, *oomerr, *plus, *messagebulk, *pmessagebulk, *subscribebulk,
|
||||
*unsubscribebulk, *psubscribebulk, *punsubscribebulk, *del, *unlink,
|
||||
*rpop, *lpop, *lpush, *zpopmin, *zpopmax, *emptyscan,
|
||||
*rpop, *lpop, *lpush, *rpoplpush, *zpopmin, *zpopmax, *emptyscan,
|
||||
*select[PROTO_SHARED_SELECT_CMDS],
|
||||
*integers[OBJ_SHARED_INTEGERS],
|
||||
*mbulkhdr[OBJ_SHARED_BULKHDR_LEN], /* "*<value>\r\n" */
|
||||
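Aside (illustration, not part of the patch): `*null[4]` and `*nullarray[4]` replace the old `nullbulk`/`nullmultibulk` objects because the null encoding depends on the client's protocol version, which is why call sites elsewhere in this diff index them with `c->resp`. A sketch of the encodings as I understand RESP; indices 2 and 3 mirror the possible `resp` values:

    #include <stddef.h>

    /* Illustrative only: protocol-dependent null replies, indexed by RESP version. */
    static const char *null_reply[4] = {
        NULL, NULL,
        "$-1\r\n",  /* resp == 2: RESP2 null bulk string */
        "_\r\n"     /* resp == 3: RESP3 null */
    };
    static const char *null_array_reply[4] = {
        NULL, NULL,
        "*-1\r\n",  /* resp == 2: RESP2 null array */
        "_\r\n"     /* resp == 3: RESP3 uses one null type for both */
    };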
@ -851,6 +932,7 @@ struct redisMemOverhead {
|
||||
size_t clients_slaves;
|
||||
size_t clients_normal;
|
||||
size_t aof_buffer;
|
||||
size_t lua_caches;
|
||||
size_t overhead_total;
|
||||
size_t dataset;
|
||||
size_t total_keys;
|
||||
@ -858,11 +940,11 @@ struct redisMemOverhead {
|
||||
float dataset_perc;
|
||||
float peak_perc;
|
||||
float total_frag;
|
||||
size_t total_frag_bytes;
|
||||
ssize_t total_frag_bytes;
|
||||
float allocator_frag;
|
||||
size_t allocator_frag_bytes;
|
||||
ssize_t allocator_frag_bytes;
|
||||
float allocator_rss;
|
||||
size_t allocator_rss_bytes;
|
||||
ssize_t allocator_rss_bytes;
|
||||
float rss_extra;
|
||||
size_t rss_extra_bytes;
|
||||
size_t num_dbs;
|
||||
@ -923,6 +1005,10 @@ struct redisServer {
|
||||
char *configfile; /* Absolute config file path, or NULL */
|
||||
char *executable; /* Absolute executable file path. */
|
||||
char **exec_argv; /* Executable argv vector (copy). */
|
||||
int dynamic_hz; /* Change hz value depending on # of clients. */
|
||||
int config_hz; /* Configured HZ value. May be different than
|
||||
the actual 'hz' field value if dynamic-hz
|
||||
is enabled. */
|
||||
int hz; /* serverCron() calls frequency in hertz */
|
||||
redisDb *db;
|
||||
dict *commands; /* Command table */
|
||||
@ -932,7 +1018,6 @@ struct redisServer {
|
||||
int shutdown_asap; /* SHUTDOWN needed ASAP */
|
||||
int activerehashing; /* Incremental rehash in serverCron() */
|
||||
int active_defrag_running; /* Active defragmentation running (holds current scan aggressiveness) */
|
||||
char *requirepass; /* Pass for AUTH command, or NULL */
|
||||
char *pidfile; /* PID file path */
|
||||
int arch_bits; /* 32 or 64 depending on sizeof(long) */
|
||||
int cronloops; /* Number of times the cron function run */
|
||||
@ -970,6 +1055,8 @@ struct redisServer {
|
||||
dict *migrate_cached_sockets;/* MIGRATE cached sockets */
|
||||
uint64_t next_client_id; /* Next client unique ID. Incremental. */
|
||||
int protected_mode; /* Don't accept external connections. */
|
||||
int gopher_enabled; /* If true the server will reply to gopher
|
||||
queries. Will still serve RESP2 queries. */
|
||||
/* RDB / AOF loading information */
|
||||
int loading; /* We are loading data from disk if true */
|
||||
off_t loading_total_bytes;
|
||||
@ -980,7 +1067,8 @@ struct redisServer {
|
||||
struct redisCommand *delCommand, *multiCommand, *lpushCommand,
|
||||
*lpopCommand, *rpopCommand, *zpopminCommand,
|
||||
*zpopmaxCommand, *sremCommand, *execCommand,
|
||||
*expireCommand, *pexpireCommand, *xclaimCommand;
|
||||
*expireCommand, *pexpireCommand, *xclaimCommand,
|
||||
*xgroupCommand;
|
||||
/* Fields used only for stats */
|
||||
time_t stat_starttime; /* Server start time */
|
||||
long long stat_numcommands; /* Number of processed commands */
|
||||
@ -1132,6 +1220,7 @@ struct redisServer {
|
||||
int repl_diskless_sync; /* Send RDB to slaves sockets directly. */
|
||||
int repl_diskless_sync_delay; /* Delay to start a diskless repl BGSAVE. */
|
||||
/* Replication (slave) */
|
||||
char *masteruser; /* AUTH with this user and masterauth with master */
|
||||
char *masterauth; /* AUTH with this password with master */
|
||||
char *masterhost; /* Hostname of master */
|
||||
int masterport; /* Port of master */
|
||||
@ -1149,6 +1238,7 @@ struct redisServer {
|
||||
time_t repl_transfer_lastio; /* Unix time of the latest read, for timeout */
|
||||
int repl_serve_stale_data; /* Serve stale data when link is down? */
|
||||
int repl_slave_ro; /* Slave is read only? */
|
||||
int repl_slave_ignore_maxmemory; /* If true slaves do not evict. */
|
||||
time_t repl_down_since; /* Unix time at which link with master went down */
|
||||
int repl_disable_tcp_nodelay; /* Disable TCP_NODELAY after SYNC? */
|
||||
int slave_priority; /* Reported in INFO and used by Sentinel. */
|
||||
@ -1222,11 +1312,16 @@ struct redisServer {
|
||||
char *cluster_announce_ip; /* IP address to announce on cluster bus. */
|
||||
int cluster_announce_port; /* base port to announce on cluster bus. */
|
||||
int cluster_announce_bus_port; /* bus port to announce on cluster bus. */
|
||||
int cluster_module_flags; /* Set of flags that Redis modules are able
|
||||
to set in order to suppress certain
|
||||
native Redis Cluster features. Check the
|
||||
REDISMODULE_CLUSTER_FLAG_*. */
|
||||
/* Scripting */
|
||||
lua_State *lua; /* The Lua interpreter. We use just one for all clients */
|
||||
client *lua_client; /* The "fake client" to query Redis from Lua */
|
||||
client *lua_caller; /* The client running EVAL right now, or NULL */
|
||||
dict *lua_scripts; /* A dictionary of SHA1 -> Lua scripts */
|
||||
unsigned long long lua_scripts_mem; /* Cached scripts' memory + oh */
|
||||
mstime_t lua_time_limit; /* Script timeout in milliseconds */
|
||||
mstime_t lua_time_start; /* Start time of script, milliseconds time */
|
||||
int lua_write_dirty; /* True if a write command was called during the
|
||||
@ -1247,6 +1342,8 @@ struct redisServer {
|
||||
/* Latency monitor */
|
||||
long long latency_monitor_threshold;
|
||||
dict *latency_events;
|
||||
/* ACLs */
|
||||
char *acl_filename; /* ACL Users file. NULL if not configured. */
|
||||
/* Assert & bug reporting */
|
||||
const char *assert_failed;
|
||||
const char *assert_file;
|
||||
@ -1274,8 +1371,8 @@ struct redisCommand {
|
||||
char *name;
|
||||
redisCommandProc *proc;
|
||||
int arity;
|
||||
char *sflags; /* Flags as string representation, one char per flag. */
|
||||
int flags; /* The actual flags, obtained from the 'sflags' field. */
|
||||
char *sflags; /* Flags as string representation, one char per flag. */
|
||||
uint64_t flags; /* The actual flags, obtained from the 'sflags' field. */
|
||||
/* Use a function to determine keys arguments in a command line.
|
||||
* Used for Redis Cluster redirect. */
|
||||
redisGetKeysProc *getkeys_proc;
|
||||
@ -1284,6 +1381,11 @@ struct redisCommand {
|
||||
int lastkey; /* The last argument that's a key */
|
||||
int keystep; /* The step between first and last key */
|
||||
long long microseconds, calls;
|
||||
int id; /* Command ID. This is a progressive ID starting from 0 that
|
||||
is assigned at runtime, and is used in order to check
|
||||
ACLs. A connection is able to execute a given command if
|
||||
the user associated to the connection has this command
|
||||
bit set in the bitmap of allowed commands. */
|
||||
};
|
||||
|
||||
struct redisFunctionSym {
|
||||
@ -1404,14 +1506,24 @@ void freeClient(client *c);
|
||||
void freeClientAsync(client *c);
|
||||
void resetClient(client *c);
|
||||
void sendReplyToClient(aeEventLoop *el, int fd, void *privdata, int mask);
|
||||
void *addDeferredMultiBulkLength(client *c);
|
||||
void setDeferredMultiBulkLength(client *c, void *node, long length);
|
||||
void *addReplyDeferredLen(client *c);
|
||||
void setDeferredArrayLen(client *c, void *node, long length);
|
||||
void setDeferredMapLen(client *c, void *node, long length);
|
||||
void setDeferredSetLen(client *c, void *node, long length);
|
||||
void setDeferredAttributeLen(client *c, void *node, long length);
|
||||
void setDeferredPushLen(client *c, void *node, long length);
|
||||
void processInputBuffer(client *c);
|
||||
void processInputBufferAndReplicate(client *c);
|
||||
void processGopherRequest(client *c);
|
||||
void acceptHandler(aeEventLoop *el, int fd, void *privdata, int mask);
|
||||
void acceptTcpHandler(aeEventLoop *el, int fd, void *privdata, int mask);
|
||||
void acceptUnixHandler(aeEventLoop *el, int fd, void *privdata, int mask);
|
||||
void readQueryFromClient(aeEventLoop *el, int fd, void *privdata, int mask);
|
||||
void addReplyString(client *c, const char *s, size_t len);
|
||||
void addReplyNull(client *c);
|
||||
void addReplyNullArray(client *c);
|
||||
void addReplyBool(client *c, int b);
|
||||
void addReplyVerbatim(client *c, const char *s, size_t len, const char *ext);
|
||||
void addReplyProto(client *c, const char *s, size_t len);
|
||||
void addReplyBulk(client *c, robj *obj);
|
||||
void addReplyBulkCString(client *c, const char *s);
|
||||
void addReplyBulkCBuffer(client *c, const void *p, size_t len);
|
||||
@ -1424,9 +1536,14 @@ void addReplyStatus(client *c, const char *status);
|
||||
void addReplyDouble(client *c, double d);
|
||||
void addReplyHumanLongDouble(client *c, long double d);
|
||||
void addReplyLongLong(client *c, long long ll);
|
||||
void addReplyMultiBulkLen(client *c, long length);
|
||||
void addReplyArrayLen(client *c, long length);
|
||||
void addReplyMapLen(client *c, long length);
|
||||
void addReplySetLen(client *c, long length);
|
||||
void addReplyAttributeLen(client *c, long length);
|
||||
void addReplyPushLen(client *c, long length);
|
||||
void addReplyHelp(client *c, const char **help);
|
||||
void addReplySubcommandSyntaxError(client *c);
|
||||
void addReplyLoadedModules(client *c);
|
||||
void copyClientOutputBuffer(client *dst, client *src);
|
||||
size_t sdsZmallocSize(sds s);
|
||||
size_t getStringObjectSdsUsedMemory(robj *o);
|
||||
@ -1457,6 +1574,8 @@ int clientHasPendingReplies(client *c);
|
||||
void unlinkClient(client *c);
|
||||
int writeToClient(int fd, client *c, int handler_installed);
|
||||
void linkClient(client *c);
|
||||
void protectClient(client *c);
|
||||
void unprotectClient(client *c);
|
||||
|
||||
#ifdef __GNUC__
|
||||
void addReplyErrorFormat(client *c, const char *fmt, ...)
|
||||
@ -1583,9 +1702,15 @@ void startLoading(FILE *fp);
|
||||
void loadingProgress(off_t pos);
|
||||
void stopLoading(void);
|
||||
|
||||
#define DISK_ERROR_TYPE_AOF 1 /* Don't accept writes: AOF errors. */
|
||||
#define DISK_ERROR_TYPE_RDB 2 /* Don't accept writes: RDB errors. */
|
||||
#define DISK_ERROR_TYPE_NONE 0 /* No problems, we can accept writes. */
|
||||
int writeCommandsDeniedByDiskError(void);
|
||||
|
||||
/* RDB persistence */
|
||||
#include "rdb.h"
|
||||
int rdbSaveRio(rio *rdb, int *error, int flags, rdbSaveInfo *rsi);
|
||||
void killRDBChild(void);
|
||||
|
||||
/* AOF persistence */
|
||||
void flushAppendOnlyFile(int force);
|
||||
@ -1599,6 +1724,7 @@ void backgroundRewriteDoneHandler(int exitcode, int bysignal);
|
||||
void aofRewriteBufferReset(void);
|
||||
unsigned long aofRewriteBufferSize(void);
|
||||
ssize_t aofReadDiffFromParent(void);
|
||||
void killAppendOnlyChild(void);
|
||||
|
||||
/* Child info */
|
||||
void openChildInfoPipe(void);
|
||||
@ -1606,6 +1732,29 @@ void closeChildInfoPipe(void);
|
||||
void sendChildInfo(int process_type);
|
||||
void receiveChildInfo(void);
|
||||
|
||||
/* acl.c -- Authentication related prototypes. */
|
||||
extern rax *Users;
|
||||
extern user *DefaultUser;
|
||||
void ACLInit(void);
|
||||
/* Return values for ACLCheckUserCredentials(). */
|
||||
#define ACL_OK 0
|
||||
#define ACL_DENIED_CMD 1
|
||||
#define ACL_DENIED_KEY 2
|
||||
int ACLCheckUserCredentials(robj *username, robj *password);
|
||||
int ACLAuthenticateUser(client *c, robj *username, robj *password);
|
||||
unsigned long ACLGetCommandID(const char *cmdname);
|
||||
user *ACLGetUserByName(const char *name, size_t namelen);
|
||||
int ACLCheckCommandPerm(client *c);
|
||||
int ACLSetUser(user *u, const char *op, ssize_t oplen);
|
||||
sds ACLDefaultUserFirstPassword(void);
|
||||
uint64_t ACLGetCommandCategoryFlagByName(const char *name);
|
||||
int ACLAppendUserForLoading(sds *argv, int argc, int *argc_err);
|
||||
char *ACLSetUserStringError(void);
|
||||
int ACLLoadConfiguredUsers(void);
|
||||
sds ACLDescribeUser(user *u);
|
||||
void ACLLoadUsersAtStartup(void);
|
||||
void addReplyCommandCategories(client *c, struct redisCommand *cmd);
|
||||
|
||||
/* Sorted sets data type */
|
||||
|
||||
/* Input flags. */
|
||||
@ -1674,6 +1823,7 @@ int zslLexValueLteMax(sds value, zlexrangespec *spec);
|
||||
int getMaxmemoryState(size_t *total, size_t *logical, size_t *tofree, float *level);
|
||||
size_t freeMemoryGetNotCountedMemory();
|
||||
int freeMemoryIfNeeded(void);
|
||||
int freeMemoryIfNeededAndSafe(void);
|
||||
int processCommand(client *c);
|
||||
void setupSignalHandlers(void);
|
||||
struct redisCommand *lookupCommand(sds name);
|
||||
@ -1822,6 +1972,7 @@ int dbAsyncDelete(redisDb *db, robj *key);
|
||||
void emptyDbAsync(redisDb *db);
|
||||
void slotToKeyFlushAsync(void);
|
||||
size_t lazyfreeGetPendingObjectsCount(void);
|
||||
void freeObjAsync(robj *o);
|
||||
|
||||
/* API to get key arguments from commands */
|
||||
int *getKeysFromCommand(struct redisCommand *cmd, robj **argv, int argc, int *numkeys);
|
||||
@ -1866,6 +2017,7 @@ sds luaCreateFunction(client *c, lua_State *lua, robj *body);
|
||||
void processUnblockedClients(void);
|
||||
void blockClient(client *c, int btype);
|
||||
void unblockClient(client *c);
|
||||
void queueClientForReprocessing(client *c);
|
||||
void replyToBlockedClientTimedOut(client *c);
|
||||
int getTimeoutFromObjectOrReply(client *c, robj *object, mstime_t *timeout, int unit);
|
||||
void disconnectAllBlockedClients(void);
|
||||
@ -1979,7 +2131,7 @@ void ttlCommand(client *c);
|
||||
void touchCommand(client *c);
|
||||
void pttlCommand(client *c);
|
||||
void persistCommand(client *c);
|
||||
void slaveofCommand(client *c);
|
||||
void replicaofCommand(client *c);
|
||||
void roleCommand(client *c);
|
||||
void debugCommand(client *c);
|
||||
void msetCommand(client *c);
|
||||
@ -2051,6 +2203,7 @@ void dumpCommand(client *c);
|
||||
void objectCommand(client *c);
|
||||
void memoryCommand(client *c);
|
||||
void clientCommand(client *c);
|
||||
void helloCommand(client *c);
|
||||
void evalCommand(client *c);
|
||||
void evalShaCommand(client *c);
|
||||
void scriptCommand(client *c);
|
||||
@ -2084,12 +2237,15 @@ void xrevrangeCommand(client *c);
|
||||
void xlenCommand(client *c);
|
||||
void xreadCommand(client *c);
|
||||
void xgroupCommand(client *c);
|
||||
void xsetidCommand(client *c);
|
||||
void xackCommand(client *c);
|
||||
void xpendingCommand(client *c);
|
||||
void xclaimCommand(client *c);
|
||||
void xinfoCommand(client *c);
|
||||
void xdelCommand(client *c);
|
||||
void xtrimCommand(client *c);
|
||||
void lolwutCommand(client *c);
|
||||
void aclCommand(client *c);
|
||||
|
||||
#if defined(__GNUC__)
|
||||
void *calloc(size_t count, size_t size) __attribute__ ((deprecated));
|
||||
|
@ -39,7 +39,7 @@
|
||||
#include <errno.h> /* errno program_invocation_name program_invocation_short_name */
|
||||
|
||||
#if !defined(HAVE_SETPROCTITLE)
|
||||
#if (defined __NetBSD__ || defined __FreeBSD__ || defined __OpenBSD__)
|
||||
#if (defined __NetBSD__ || defined __FreeBSD__ || defined __OpenBSD__ || defined __DragonFly__)
|
||||
#define HAVE_SETPROCTITLE 1
|
||||
#else
|
||||
#define HAVE_SETPROCTITLE 0
|
||||
|
@ -169,23 +169,23 @@ NULL
|
||||
return;
|
||||
|
||||
listRewind(server.slowlog,&li);
|
||||
totentries = addDeferredMultiBulkLength(c);
|
||||
totentries = addReplyDeferredLen(c);
|
||||
while(count-- && (ln = listNext(&li))) {
|
||||
int j;
|
||||
|
||||
se = ln->value;
|
||||
addReplyMultiBulkLen(c,6);
|
||||
addReplyArrayLen(c,6);
|
||||
addReplyLongLong(c,se->id);
|
||||
addReplyLongLong(c,se->time);
|
||||
addReplyLongLong(c,se->duration);
|
||||
addReplyMultiBulkLen(c,se->argc);
|
||||
addReplyArrayLen(c,se->argc);
|
||||
for (j = 0; j < se->argc; j++)
|
||||
addReplyBulk(c,se->argv[j]);
|
||||
addReplyBulkCBuffer(c,se->peerid,sdslen(se->peerid));
|
||||
addReplyBulkCBuffer(c,se->cname,sdslen(se->cname));
|
||||
sent++;
|
||||
}
|
||||
setDeferredMultiBulkLength(c,totentries,sent);
|
||||
setDeferredArrayLen(c,totentries,sent);
|
||||
} else {
|
||||
addReplySubcommandSyntaxError(c);
|
||||
}
|
||||
|
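Aside (illustration, not part of the patch): the SLOWLOG GET hunk above is a typical use of the deferred-length reply API that the renamed functions expose: reserve a placeholder, emit the elements, then backfill the real count. A hedged usage sketch; `emitSomeNumbers` and its arguments are hypothetical, while addReplyDeferredLen(), addReplyLongLong() and setDeferredArrayLen() are the prototypes declared in server.h in this same diff:

    #include "server.h"  /* client, addReplyDeferredLen(), setDeferredArrayLen(), ... */

    /* Hypothetical caller showing the reserve-emit-backfill pattern. */
    void emitSomeNumbers(client *c, long long *values, int avail, int wanted) {
        void *lenptr = addReplyDeferredLen(c);    /* array length not known yet */
        long emitted = 0;
        for (int j = 0; j < avail && emitted < wanted; j++) {
            addReplyLongLong(c, values[j]);
            emitted++;
        }
        setDeferredArrayLen(c, lenptr, emitted);  /* backfill the real length */
    }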
@ -505,7 +505,7 @@ void sortCommand(client *c) {
|
||||
addReplyError(c,"One or more scores can't be converted into double");
|
||||
} else if (storekey == NULL) {
|
||||
/* STORE option not specified, sent the sorting result to client */
|
||||
addReplyMultiBulkLen(c,outputlen);
|
||||
addReplyArrayLen(c,outputlen);
|
||||
for (j = start; j <= end; j++) {
|
||||
listNode *ln;
|
||||
listIter li;
|
||||
@ -519,7 +519,7 @@ void sortCommand(client *c) {
|
||||
|
||||
if (sop->type == SORT_OP_GET) {
|
||||
if (!val) {
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
} else {
|
||||
addReplyBulk(c,val);
|
||||
decrRefCount(val);
|
||||
|
29
src/t_hash.c
@ -641,7 +641,7 @@ static void addHashFieldToReply(client *c, robj *o, sds field) {
|
||||
int ret;
|
||||
|
||||
if (o == NULL) {
|
||||
addReply(c, shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
return;
|
||||
}
|
||||
|
||||
@ -652,7 +652,7 @@ static void addHashFieldToReply(client *c, robj *o, sds field) {
|
||||
|
||||
ret = hashTypeGetFromZiplist(o, field, &vstr, &vlen, &vll);
|
||||
if (ret < 0) {
|
||||
addReply(c, shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
} else {
|
||||
if (vstr) {
|
||||
addReplyBulkCBuffer(c, vstr, vlen);
|
||||
@ -664,7 +664,7 @@ static void addHashFieldToReply(client *c, robj *o, sds field) {
|
||||
} else if (o->encoding == OBJ_ENCODING_HT) {
|
||||
sds value = hashTypeGetFromHashTable(o, field);
|
||||
if (value == NULL)
|
||||
addReply(c, shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
else
|
||||
addReplyBulkCBuffer(c, value, sdslen(value));
|
||||
} else {
|
||||
@ -675,7 +675,7 @@ static void addHashFieldToReply(client *c, robj *o, sds field) {
|
||||
void hgetCommand(client *c) {
|
||||
robj *o;
|
||||
|
||||
if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.nullbulk)) == NULL ||
|
||||
if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp])) == NULL ||
|
||||
checkType(c,o,OBJ_HASH)) return;
|
||||
|
||||
addHashFieldToReply(c, o, c->argv[2]->ptr);
|
||||
@ -693,7 +693,7 @@ void hmgetCommand(client *c) {
|
||||
return;
|
||||
}
|
||||
|
||||
addReplyMultiBulkLen(c, c->argc-2);
|
||||
addReplyArrayLen(c, c->argc-2);
|
||||
for (i = 2; i < c->argc; i++) {
|
||||
addHashFieldToReply(c, o, c->argv[i]->ptr);
|
||||
}
|
||||
@ -766,17 +766,19 @@ static void addHashIteratorCursorToReply(client *c, hashTypeIterator *hi, int wh
void genericHgetallCommand(client *c, int flags) {
robj *o;
hashTypeIterator *hi;
int multiplier = 0;
int length, count = 0;

if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.emptymultibulk)) == NULL
if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp])) == NULL
|| checkType(c,o,OBJ_HASH)) return;

if (flags & OBJ_HASH_KEY) multiplier++;
if (flags & OBJ_HASH_VALUE) multiplier++;

length = hashTypeLength(o) * multiplier;
addReplyMultiBulkLen(c, length);
/* We return a map if the user requested keys and values, like in the
* HGETALL case. Otherwise to use a flat array makes more sense. */
length = hashTypeLength(o);
if (flags & OBJ_HASH_KEY && flags & OBJ_HASH_VALUE) {
addReplyMapLen(c, length);
} else {
addReplyArrayLen(c, length);
}

hi = hashTypeInitIterator(o);
while (hashTypeNext(hi) != C_ERR) {
@ -791,6 +793,9 @@ void genericHgetallCommand(client *c, int flags) {
}

hashTypeReleaseIterator(hi);

/* Make sure we returned the right number of elements. */
if (flags & OBJ_HASH_KEY && flags & OBJ_HASH_VALUE) count /= 2;
serverAssert(count == length);
}

src/t_list.c (35 changed lines)
@ -298,7 +298,7 @@ void linsertCommand(client *c) {
|
||||
server.dirty++;
|
||||
} else {
|
||||
/* Notify client of a failed insert */
|
||||
addReply(c,shared.cnegone);
|
||||
addReplyLongLong(c,-1);
|
||||
return;
|
||||
}
|
||||
|
||||
@ -312,7 +312,7 @@ void llenCommand(client *c) {
|
||||
}
|
||||
|
||||
void lindexCommand(client *c) {
|
||||
robj *o = lookupKeyReadOrReply(c,c->argv[1],shared.nullbulk);
|
||||
robj *o = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp]);
|
||||
if (o == NULL || checkType(c,o,OBJ_LIST)) return;
|
||||
long index;
|
||||
robj *value = NULL;
|
||||
@ -331,7 +331,7 @@ void lindexCommand(client *c) {
|
||||
addReplyBulk(c,value);
|
||||
decrRefCount(value);
|
||||
} else {
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
}
|
||||
} else {
|
||||
serverPanic("Unknown list encoding");
|
||||
@ -365,12 +365,12 @@ void lsetCommand(client *c) {
|
||||
}
|
||||
|
||||
void popGenericCommand(client *c, int where) {
|
||||
robj *o = lookupKeyWriteOrReply(c,c->argv[1],shared.nullbulk);
|
||||
robj *o = lookupKeyWriteOrReply(c,c->argv[1],shared.null[c->resp]);
|
||||
if (o == NULL || checkType(c,o,OBJ_LIST)) return;
|
||||
|
||||
robj *value = listTypePop(o,where);
|
||||
if (value == NULL) {
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
} else {
|
||||
char *event = (where == LIST_HEAD) ? "lpop" : "rpop";
|
||||
|
||||
@ -402,7 +402,7 @@ void lrangeCommand(client *c) {
|
||||
if ((getLongFromObjectOrReply(c, c->argv[2], &start, NULL) != C_OK) ||
|
||||
(getLongFromObjectOrReply(c, c->argv[3], &end, NULL) != C_OK)) return;
|
||||
|
||||
if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.emptymultibulk)) == NULL
|
||||
if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp])) == NULL
|
||||
|| checkType(c,o,OBJ_LIST)) return;
|
||||
llen = listTypeLength(o);
|
||||
|
||||
@ -414,14 +414,14 @@ void lrangeCommand(client *c) {
|
||||
/* Invariant: start >= 0, so this test will be true when end < 0.
|
||||
* The range is empty when start > end or start >= length. */
|
||||
if (start > end || start >= llen) {
|
||||
addReply(c,shared.emptymultibulk);
|
||||
addReplyNull(c);
|
||||
return;
|
||||
}
|
||||
if (end >= llen) end = llen-1;
|
||||
rangelen = (end-start)+1;
|
||||
|
||||
/* Return the result in form of a multi-bulk reply */
|
||||
addReplyMultiBulkLen(c,rangelen);
|
||||
addReplyArrayLen(c,rangelen);
|
||||
if (o->encoding == OBJ_ENCODING_QUICKLIST) {
|
||||
listTypeIterator *iter = listTypeInitIterator(o, start, LIST_TAIL);
|
||||
|
||||
@ -564,13 +564,13 @@ void rpoplpushHandlePush(client *c, robj *dstkey, robj *dstobj, robj *value) {
|
||||
|
||||
void rpoplpushCommand(client *c) {
|
||||
robj *sobj, *value;
|
||||
if ((sobj = lookupKeyWriteOrReply(c,c->argv[1],shared.nullbulk)) == NULL ||
|
||||
checkType(c,sobj,OBJ_LIST)) return;
|
||||
if ((sobj = lookupKeyWriteOrReply(c,c->argv[1],shared.null[c->resp]))
|
||||
== NULL || checkType(c,sobj,OBJ_LIST)) return;
|
||||
|
||||
if (listTypeLength(sobj) == 0) {
|
||||
/* This may only happen after loading very old RDB files. Recent
|
||||
* versions of Redis delete keys of empty lists. */
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
} else {
|
||||
robj *dobj = lookupKeyWrite(c->db,c->argv[2]);
|
||||
robj *touchedkey = c->argv[1];
|
||||
@ -596,6 +596,9 @@ void rpoplpushCommand(client *c) {
|
||||
signalModifiedKey(c->db,touchedkey);
|
||||
decrRefCount(touchedkey);
|
||||
server.dirty++;
|
||||
if (c->cmd->proc == brpoplpushCommand) {
|
||||
rewriteClientCommandVector(c,3,shared.rpoplpush,c->argv[1],c->argv[2]);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -636,10 +639,10 @@ int serveClientBlockedOnList(client *receiver, robj *key, robj *dstkey, redisDb
|
||||
db->id,argv,2,PROPAGATE_AOF|PROPAGATE_REPL);
|
||||
|
||||
/* BRPOP/BLPOP */
|
||||
addReplyMultiBulkLen(receiver,2);
|
||||
addReplyArrayLen(receiver,2);
|
||||
addReplyBulk(receiver,key);
|
||||
addReplyBulk(receiver,value);
|
||||
|
||||
|
||||
/* Notify event. */
|
||||
char *event = (where == LIST_HEAD) ? "lpop" : "rpop";
|
||||
notifyKeyspaceEvent(NOTIFY_LIST,event,key,receiver->db->id);
|
||||
@ -701,7 +704,7 @@ void blockingPopGenericCommand(client *c, int where) {
|
||||
robj *value = listTypePop(o,where);
|
||||
serverAssert(value != NULL);
|
||||
|
||||
addReplyMultiBulkLen(c,2);
|
||||
addReplyArrayLen(c,2);
|
||||
addReplyBulk(c,c->argv[j]);
|
||||
addReplyBulk(c,value);
|
||||
decrRefCount(value);
|
||||
@ -728,7 +731,7 @@ void blockingPopGenericCommand(client *c, int where) {
|
||||
/* If we are inside a MULTI/EXEC and the list is empty the only thing
|
||||
* we can do is treating it as a timeout (even with timeout 0). */
|
||||
if (c->flags & CLIENT_MULTI) {
|
||||
addReply(c,shared.nullmultibulk);
|
||||
addReplyNullArray(c);
|
||||
return;
|
||||
}
|
||||
|
||||
@ -756,7 +759,7 @@ void brpoplpushCommand(client *c) {
|
||||
if (c->flags & CLIENT_MULTI) {
|
||||
/* Blocking against an empty list in a multi state
|
||||
* returns immediately. */
|
||||
addReply(c, shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
} else {
|
||||
/* The list is empty and the client blocks. */
|
||||
blockForKeys(c,BLOCKED_LIST,c->argv + 1,1,timeout,c->argv[2],NULL);
|
||||
|
src/t_set.c (42 changed lines)
@ -207,7 +207,7 @@ sds setTypeNextObject(setTypeIterator *si) {
|
||||
* used field with values which are easy to trap if misused. */
|
||||
int setTypeRandomElement(robj *setobj, sds *sdsele, int64_t *llele) {
|
||||
if (setobj->encoding == OBJ_ENCODING_HT) {
|
||||
dictEntry *de = dictGetRandomKey(setobj->ptr);
|
||||
dictEntry *de = dictGetFairRandomKey(setobj->ptr);
|
||||
*sdsele = dictGetKey(de);
|
||||
*llele = -123456789; /* Not needed. Defensive. */
|
||||
} else if (setobj->encoding == OBJ_ENCODING_INTSET) {
|
||||
@ -415,13 +415,13 @@ void spopWithCountCommand(client *c) {
|
||||
|
||||
/* Make sure a key with the name inputted exists, and that it's type is
|
||||
* indeed a set. Otherwise, return nil */
|
||||
if ((set = lookupKeyReadOrReply(c,c->argv[1],shared.emptymultibulk))
|
||||
if ((set = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp]))
|
||||
== NULL || checkType(c,set,OBJ_SET)) return;
|
||||
|
||||
/* If count is zero, serve an empty multibulk ASAP to avoid special
|
||||
* cases later. */
|
||||
if (count == 0) {
|
||||
addReply(c,shared.emptymultibulk);
|
||||
addReplyNull(c);
|
||||
return;
|
||||
}
|
||||
|
||||
@ -455,7 +455,7 @@ void spopWithCountCommand(client *c) {
|
||||
robj *propargv[3];
|
||||
propargv[0] = createStringObject("SREM",4);
|
||||
propargv[1] = c->argv[1];
|
||||
addReplyMultiBulkLen(c,count);
|
||||
addReplySetLen(c,count);
|
||||
|
||||
/* Common iteration vars. */
|
||||
sds sdsele;
|
||||
@ -516,11 +516,7 @@ void spopWithCountCommand(client *c) {
|
||||
sdsfree(sdsele);
|
||||
}
|
||||
|
||||
/* Assign the new set as the key value. */
|
||||
incrRefCount(set); /* Protect the old set value. */
|
||||
dbOverwrite(c->db,c->argv[1],newset);
|
||||
|
||||
/* Tranfer the old set to the client and release it. */
|
||||
/* Transfer the old set to the client. */
|
||||
setTypeIterator *si;
|
||||
si = setTypeInitIterator(set);
|
||||
while((encoding = setTypeNext(si,&sdsele,&llele)) != -1) {
|
||||
@ -539,7 +535,9 @@ void spopWithCountCommand(client *c) {
|
||||
decrRefCount(objele);
|
||||
}
|
||||
setTypeReleaseIterator(si);
|
||||
decrRefCount(set);
|
||||
|
||||
/* Assign the new set as the key value. */
|
||||
dbOverwrite(c->db,c->argv[1],newset);
|
||||
}
|
||||
|
||||
/* Don't propagate the command itself even if we incremented the
|
||||
@ -568,8 +566,8 @@ void spopCommand(client *c) {
|
||||
|
||||
/* Make sure a key with the name inputted exists, and that it's type is
|
||||
* indeed a set */
|
||||
if ((set = lookupKeyWriteOrReply(c,c->argv[1],shared.nullbulk)) == NULL ||
|
||||
checkType(c,set,OBJ_SET)) return;
|
||||
if ((set = lookupKeyWriteOrReply(c,c->argv[1],shared.null[c->resp]))
|
||||
== NULL || checkType(c,set,OBJ_SET)) return;
|
||||
|
||||
/* Get a random element from the set */
|
||||
encoding = setTypeRandomElement(set,&sdsele,&llele);
|
||||
@ -634,13 +632,13 @@ void srandmemberWithCountCommand(client *c) {
|
||||
uniq = 0;
|
||||
}
|
||||
|
||||
if ((set = lookupKeyReadOrReply(c,c->argv[1],shared.emptymultibulk))
|
||||
if ((set = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp]))
|
||||
== NULL || checkType(c,set,OBJ_SET)) return;
|
||||
size = setTypeSize(set);
|
||||
|
||||
/* If count is zero, serve it ASAP to avoid special cases later. */
|
||||
if (count == 0) {
|
||||
addReply(c,shared.emptymultibulk);
|
||||
addReplyNull(c);
|
||||
return;
|
||||
}
|
||||
|
||||
@ -649,7 +647,7 @@ void srandmemberWithCountCommand(client *c) {
|
||||
* This case is trivial and can be served without auxiliary data
|
||||
* structures. */
|
||||
if (!uniq) {
|
||||
addReplyMultiBulkLen(c,count);
|
||||
addReplySetLen(c,count);
|
||||
while(count--) {
|
||||
encoding = setTypeRandomElement(set,&ele,&llele);
|
||||
if (encoding == OBJ_ENCODING_INTSET) {
|
||||
@ -739,7 +737,7 @@ void srandmemberWithCountCommand(client *c) {
|
||||
dictIterator *di;
|
||||
dictEntry *de;
|
||||
|
||||
addReplyMultiBulkLen(c,count);
|
||||
addReplySetLen(c,count);
|
||||
di = dictGetIterator(d);
|
||||
while((de = dictNext(di)) != NULL)
|
||||
addReplyBulk(c,dictGetKey(de));
|
||||
@ -762,8 +760,8 @@ void srandmemberCommand(client *c) {
|
||||
return;
|
||||
}
|
||||
|
||||
if ((set = lookupKeyReadOrReply(c,c->argv[1],shared.nullbulk)) == NULL ||
|
||||
checkType(c,set,OBJ_SET)) return;
|
||||
if ((set = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp]))
|
||||
== NULL || checkType(c,set,OBJ_SET)) return;
|
||||
|
||||
encoding = setTypeRandomElement(set,&ele,&llele);
|
||||
if (encoding == OBJ_ENCODING_INTSET) {
|
||||
@ -815,7 +813,7 @@ void sinterGenericCommand(client *c, robj **setkeys,
|
||||
}
|
||||
addReply(c,shared.czero);
|
||||
} else {
|
||||
addReply(c,shared.emptymultibulk);
|
||||
addReplyNull(c);
|
||||
}
|
||||
return;
|
||||
}
|
||||
@ -835,7 +833,7 @@ void sinterGenericCommand(client *c, robj **setkeys,
|
||||
* to the output list and save the pointer to later modify it with the
|
||||
* right length */
|
||||
if (!dstkey) {
|
||||
replylen = addDeferredMultiBulkLength(c);
|
||||
replylen = addReplyDeferredLen(c);
|
||||
} else {
|
||||
/* If we have a target key where to store the resulting set
|
||||
* create this key with an empty set inside */
|
||||
@ -913,7 +911,7 @@ void sinterGenericCommand(client *c, robj **setkeys,
|
||||
signalModifiedKey(c->db,dstkey);
|
||||
server.dirty++;
|
||||
} else {
|
||||
setDeferredMultiBulkLength(c,replylen,cardinality);
|
||||
setDeferredSetLen(c,replylen,cardinality);
|
||||
}
|
||||
zfree(sets);
|
||||
}
|
||||
@ -1059,7 +1057,7 @@ void sunionDiffGenericCommand(client *c, robj **setkeys, int setnum,
|
||||
|
||||
/* Output the content of the resulting set, if not in STORE mode */
|
||||
if (!dstkey) {
|
||||
addReplyMultiBulkLen(c,cardinality);
|
||||
addReplySetLen(c,cardinality);
|
||||
si = setTypeInitIterator(dstset);
|
||||
while((ele = setTypeNextObject(si)) != NULL) {
|
||||
addReplyBulkCBuffer(c,ele,sdslen(ele));
|
||||
|
src/t_stream.c (488 changed lines)
File diff suppressed because it is too large
@ -80,7 +80,7 @@ void setGenericCommand(client *c, int flags, robj *key, robj *val, robj *expire,
if ((flags & OBJ_SET_NX && lookupKeyWrite(c->db,key) != NULL) ||
(flags & OBJ_SET_XX && lookupKeyWrite(c->db,key) == NULL))
{
addReply(c, abort_reply ? abort_reply : shared.nullbulk);
addReply(c, abort_reply ? abort_reply : shared.null[c->resp]);
return;
}
setKey(c->db,key,val);
@ -157,7 +157,7 @@ void psetexCommand(client *c) {
int getGenericCommand(client *c) {
robj *o;

if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.nullbulk)) == NULL)
if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp])) == NULL)
return C_OK;

if (o->type != OBJ_STRING) {
@ -285,14 +285,14 @@ void getrangeCommand(client *c) {
void mgetCommand(client *c) {
int j;

addReplyMultiBulkLen(c,c->argc-1);
addReplyArrayLen(c,c->argc-1);
for (j = 1; j < c->argc; j++) {
robj *o = lookupKeyRead(c->db,c->argv[j]);
if (o == NULL) {
addReply(c,shared.nullbulk);
addReplyNull(c);
} else {
if (o->type != OBJ_STRING) {
addReply(c,shared.nullbulk);
addReplyNull(c);
} else {
addReplyBulk(c,o);
}
@ -301,24 +301,22 @@ void mgetCommand(client *c) {
}

void msetGenericCommand(client *c, int nx) {
int j, busykeys = 0;
int j;

if ((c->argc % 2) == 0) {
addReplyError(c,"wrong number of arguments for MSET");
return;
}

/* Handle the NX flag. The MSETNX semantic is to return zero and don't
* set nothing at all if at least one already key exists. */
* set anything if at least one key alerady exists. */
if (nx) {
for (j = 1; j < c->argc; j += 2) {
if (lookupKeyWrite(c->db,c->argv[j]) != NULL) {
busykeys++;
addReply(c, shared.czero);
return;
}
}
if (busykeys) {
addReply(c, shared.czero);
return;
}
}

for (j = 1; j < c->argc; j += 2) {
src/t_zset.c (162 changed lines)
@ -244,6 +244,61 @@ int zslDelete(zskiplist *zsl, double score, sds ele, zskiplistNode **node) {
|
||||
return 0; /* not found */
|
||||
}
|
||||
|
||||
/* Update the score of an elmenent inside the sorted set skiplist.
|
||||
* Note that the element must exist and must match 'score'.
|
||||
* This function does not update the score in the hash table side, the
|
||||
* caller should take care of it.
|
||||
*
|
||||
* Note that this function attempts to just update the node, in case after
|
||||
* the score update, the node would be exactly at the same position.
|
||||
* Otherwise the skiplist is modified by removing and re-adding a new
|
||||
* element, which is more costly.
|
||||
*
|
||||
* The function returns the updated element skiplist node pointer. */
|
||||
zskiplistNode *zslUpdateScore(zskiplist *zsl, double curscore, sds ele, double newscore) {
|
||||
zskiplistNode *update[ZSKIPLIST_MAXLEVEL], *x;
|
||||
int i;
|
||||
|
||||
/* We need to seek to element to update to start: this is useful anyway,
|
||||
* we'll have to update or remove it. */
|
||||
x = zsl->header;
|
||||
for (i = zsl->level-1; i >= 0; i--) {
|
||||
while (x->level[i].forward &&
|
||||
(x->level[i].forward->score < curscore ||
|
||||
(x->level[i].forward->score == curscore &&
|
||||
sdscmp(x->level[i].forward->ele,ele) < 0)))
|
||||
{
|
||||
x = x->level[i].forward;
|
||||
}
|
||||
update[i] = x;
|
||||
}
|
||||
|
||||
/* Jump to our element: note that this function assumes that the
|
||||
* element with the matching score exists. */
|
||||
x = x->level[0].forward;
|
||||
serverAssert(x && curscore == x->score && sdscmp(x->ele,ele) == 0);
|
||||
|
||||
/* If the node, after the score update, would be still exactly
|
||||
* at the same position, we can just update the score without
|
||||
* actually removing and re-inserting the element in the skiplist. */
|
||||
if ((x->backward == NULL || x->backward->score < newscore) &&
|
||||
(x->level[0].forward == NULL || x->level[0].forward->score > newscore))
|
||||
{
|
||||
x->score = newscore;
|
||||
return x;
|
||||
}
|
||||
|
||||
/* No way to reuse the old node: we need to remove and insert a new
|
||||
* one at a different place. */
|
||||
zslDeleteNode(zsl, x, update);
|
||||
zskiplistNode *newnode = zslInsert(zsl,newscore,x->ele);
|
||||
/* We reused the old node x->ele SDS string, free the node now
|
||||
* since zslInsert created a new one. */
|
||||
x->ele = NULL;
|
||||
zslFreeNode(x);
|
||||
return newnode;
|
||||
}
|
||||
|
||||
int zslValueGteMin(double value, zrangespec *spec) {
|
||||
return spec->minex ? (value > spec->min) : (value >= spec->min);
|
||||
}
|
||||
@ -519,12 +574,12 @@ int zslParseLexRangeItem(robj *item, sds *dest, int *ex) {
|
||||
switch(c[0]) {
|
||||
case '+':
|
||||
if (c[1] != '\0') return C_ERR;
|
||||
*ex = 0;
|
||||
*ex = 1;
|
||||
*dest = shared.maxstring;
|
||||
return C_OK;
|
||||
case '-':
|
||||
if (c[1] != '\0') return C_ERR;
|
||||
*ex = 0;
|
||||
*ex = 1;
|
||||
*dest = shared.minstring;
|
||||
return C_OK;
|
||||
case '(':
|
||||
@ -597,9 +652,8 @@ int zslIsInLexRange(zskiplist *zsl, zlexrangespec *range) {
|
||||
zskiplistNode *x;
|
||||
|
||||
/* Test for ranges that will always be empty. */
|
||||
if (sdscmplex(range->min,range->max) > 1 ||
|
||||
(sdscmp(range->min,range->max) == 0 &&
|
||||
(range->minex || range->maxex)))
|
||||
int cmp = sdscmplex(range->min,range->max);
|
||||
if (cmp > 0 || (cmp == 0 && (range->minex || range->maxex)))
|
||||
return 0;
|
||||
x = zsl->tail;
|
||||
if (x == NULL || !zslLexValueGteMin(x->ele,range))
|
||||
@ -872,9 +926,8 @@ int zzlIsInLexRange(unsigned char *zl, zlexrangespec *range) {
|
||||
unsigned char *p;
|
||||
|
||||
/* Test for ranges that will always be empty. */
|
||||
if (sdscmplex(range->min,range->max) > 1 ||
|
||||
(sdscmp(range->min,range->max) == 0 &&
|
||||
(range->minex || range->maxex)))
|
||||
int cmp = sdscmplex(range->min,range->max);
|
||||
if (cmp > 0 || (cmp == 0 && (range->minex || range->maxex)))
|
||||
return 0;
|
||||
|
||||
p = ziplistIndex(zl,-2); /* Last element. */
|
||||
@ -1341,13 +1394,7 @@ int zsetAdd(robj *zobj, double score, sds ele, int *flags, double *newscore) {
|
||||
|
||||
/* Remove and re-insert when score changes. */
|
||||
if (score != curscore) {
|
||||
zskiplistNode *node;
|
||||
serverAssert(zslDelete(zs->zsl,curscore,ele,&node));
|
||||
znode = zslInsert(zs->zsl,score,node->ele);
|
||||
/* We reused the node->ele SDS string, free the node now
|
||||
* since zslInsert created a new one. */
|
||||
node->ele = NULL;
|
||||
zslFreeNode(node);
|
||||
znode = zslUpdateScore(zs->zsl,curscore,ele,score);
|
||||
/* Note that we did not removed the original element from
|
||||
* the hash table representing the sorted set, so we just
|
||||
* update the score. */
|
||||
@ -1591,7 +1638,7 @@ reply_to_client:
|
||||
if (processed)
|
||||
addReplyDouble(c,score);
|
||||
else
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
} else { /* ZADD. */
|
||||
addReplyLongLong(c,ch ? added+updated : added);
|
||||
}
|
||||
@ -2380,7 +2427,7 @@ void zrangeGenericCommand(client *c, int reverse) {
|
||||
return;
|
||||
}
|
||||
|
||||
if ((zobj = lookupKeyReadOrReply(c,key,shared.emptymultibulk)) == NULL
|
||||
if ((zobj = lookupKeyReadOrReply(c,key,shared.null[c->resp])) == NULL
|
||||
|| checkType(c,zobj,OBJ_ZSET)) return;
|
||||
|
||||
/* Sanitize indexes. */
|
||||
@ -2392,14 +2439,19 @@ void zrangeGenericCommand(client *c, int reverse) {
|
||||
/* Invariant: start >= 0, so this test will be true when end < 0.
|
||||
* The range is empty when start > end or start >= length. */
|
||||
if (start > end || start >= llen) {
|
||||
addReply(c,shared.emptymultibulk);
|
||||
addReplyNull(c);
|
||||
return;
|
||||
}
|
||||
if (end >= llen) end = llen-1;
|
||||
rangelen = (end-start)+1;
|
||||
|
||||
/* Return the result in form of a multi-bulk reply */
|
||||
addReplyMultiBulkLen(c, withscores ? (rangelen*2) : rangelen);
|
||||
/* Return the result in form of a multi-bulk reply. RESP3 clients
|
||||
* will receive sub arrays with score->element, while RESP2 returned
|
||||
* a flat array. */
|
||||
if (withscores && c->resp == 2)
|
||||
addReplyArrayLen(c, rangelen*2);
|
||||
else
|
||||
addReplyArrayLen(c, rangelen);
|
||||
|
||||
if (zobj->encoding == OBJ_ENCODING_ZIPLIST) {
|
||||
unsigned char *zl = zobj->ptr;
|
||||
@ -2419,13 +2471,13 @@ void zrangeGenericCommand(client *c, int reverse) {
|
||||
while (rangelen--) {
|
||||
serverAssertWithInfo(c,zobj,eptr != NULL && sptr != NULL);
|
||||
serverAssertWithInfo(c,zobj,ziplistGet(eptr,&vstr,&vlen,&vlong));
|
||||
|
||||
if (withscores && c->resp > 2) addReplyArrayLen(c,2);
|
||||
if (vstr == NULL)
|
||||
addReplyBulkLongLong(c,vlong);
|
||||
else
|
||||
addReplyBulkCBuffer(c,vstr,vlen);
|
||||
|
||||
if (withscores)
|
||||
addReplyDouble(c,zzlGetScore(sptr));
|
||||
if (withscores) addReplyDouble(c,zzlGetScore(sptr));
|
||||
|
||||
if (reverse)
|
||||
zzlPrev(zl,&eptr,&sptr);
|
||||
@ -2453,9 +2505,9 @@ void zrangeGenericCommand(client *c, int reverse) {
|
||||
while(rangelen--) {
|
||||
serverAssertWithInfo(c,zobj,ln != NULL);
|
||||
ele = ln->ele;
|
||||
if (withscores && c->resp > 2) addReplyArrayLen(c,2);
|
||||
addReplyBulkCBuffer(c,ele,sdslen(ele));
|
||||
if (withscores)
|
||||
addReplyDouble(c,ln->score);
|
||||
if (withscores) addReplyDouble(c,ln->score);
|
||||
ln = reverse ? ln->backward : ln->level[0].forward;
|
||||
}
|
||||
} else {
|
||||
@ -2523,7 +2575,7 @@ void genericZrangebyscoreCommand(client *c, int reverse) {
|
||||
}
|
||||
|
||||
/* Ok, lookup the key and get the range */
|
||||
if ((zobj = lookupKeyReadOrReply(c,key,shared.emptymultibulk)) == NULL ||
|
||||
if ((zobj = lookupKeyReadOrReply(c,key,shared.null[c->resp])) == NULL ||
|
||||
checkType(c,zobj,OBJ_ZSET)) return;
|
||||
|
||||
if (zobj->encoding == OBJ_ENCODING_ZIPLIST) {
|
||||
@ -2543,7 +2595,7 @@ void genericZrangebyscoreCommand(client *c, int reverse) {
|
||||
|
||||
/* No "first" element in the specified interval. */
|
||||
if (eptr == NULL) {
|
||||
addReply(c, shared.emptymultibulk);
|
||||
addReplyNull(c);
|
||||
return;
|
||||
}
|
||||
|
||||
@ -2554,7 +2606,7 @@ void genericZrangebyscoreCommand(client *c, int reverse) {
|
||||
/* We don't know in advance how many matching elements there are in the
|
||||
* list, so we push this object that will represent the multi-bulk
|
||||
* length in the output buffer, and will "fix" it later */
|
||||
replylen = addDeferredMultiBulkLength(c);
|
||||
replylen = addReplyDeferredLen(c);
|
||||
|
||||
/* If there is an offset, just traverse the number of elements without
|
||||
* checking the score because that is done in the next loop. */
|
||||
@ -2576,19 +2628,18 @@ void genericZrangebyscoreCommand(client *c, int reverse) {
|
||||
if (!zslValueLteMax(score,&range)) break;
|
||||
}
|
||||
|
||||
/* We know the element exists, so ziplistGet should always succeed */
|
||||
/* We know the element exists, so ziplistGet should always
|
||||
* succeed */
|
||||
serverAssertWithInfo(c,zobj,ziplistGet(eptr,&vstr,&vlen,&vlong));
|
||||
|
||||
rangelen++;
|
||||
if (withscores && c->resp > 2) addReplyArrayLen(c,2);
|
||||
if (vstr == NULL) {
|
||||
addReplyBulkLongLong(c,vlong);
|
||||
} else {
|
||||
addReplyBulkCBuffer(c,vstr,vlen);
|
||||
}
|
||||
|
||||
if (withscores) {
|
||||
addReplyDouble(c,score);
|
||||
}
|
||||
if (withscores) addReplyDouble(c,score);
|
||||
|
||||
/* Move to next node */
|
||||
if (reverse) {
|
||||
@ -2611,14 +2662,14 @@ void genericZrangebyscoreCommand(client *c, int reverse) {
|
||||
|
||||
/* No "first" element in the specified interval. */
|
||||
if (ln == NULL) {
|
||||
addReply(c, shared.emptymultibulk);
|
||||
addReplyNull(c);
|
||||
return;
|
||||
}
|
||||
|
||||
/* We don't know in advance how many matching elements there are in the
|
||||
* list, so we push this object that will represent the multi-bulk
|
||||
* length in the output buffer, and will "fix" it later */
|
||||
replylen = addDeferredMultiBulkLength(c);
|
||||
replylen = addReplyDeferredLen(c);
|
||||
|
||||
/* If there is an offset, just traverse the number of elements without
|
||||
* checking the score because that is done in the next loop. */
|
||||
@ -2639,11 +2690,9 @@ void genericZrangebyscoreCommand(client *c, int reverse) {
|
||||
}
|
||||
|
||||
rangelen++;
|
||||
if (withscores && c->resp > 2) addReplyArrayLen(c,2);
|
||||
addReplyBulkCBuffer(c,ln->ele,sdslen(ln->ele));
|
||||
|
||||
if (withscores) {
|
||||
addReplyDouble(c,ln->score);
|
||||
}
|
||||
if (withscores) addReplyDouble(c,ln->score);
|
||||
|
||||
/* Move to next node */
|
||||
if (reverse) {
|
||||
@ -2656,11 +2705,8 @@ void genericZrangebyscoreCommand(client *c, int reverse) {
|
||||
serverPanic("Unknown sorted set encoding");
|
||||
}
|
||||
|
||||
if (withscores) {
|
||||
rangelen *= 2;
|
||||
}
|
||||
|
||||
setDeferredMultiBulkLength(c, replylen, rangelen);
|
||||
if (withscores && c->resp == 2) rangelen *= 2;
|
||||
setDeferredArrayLen(c, replylen, rangelen);
|
||||
}
|
||||
|
||||
void zrangebyscoreCommand(client *c) {
|
||||
@ -2871,7 +2917,7 @@ void genericZrangebylexCommand(client *c, int reverse) {
|
||||
}
|
||||
|
||||
/* Ok, lookup the key and get the range */
|
||||
if ((zobj = lookupKeyReadOrReply(c,key,shared.emptymultibulk)) == NULL ||
|
||||
if ((zobj = lookupKeyReadOrReply(c,key,shared.null[c->resp])) == NULL ||
|
||||
checkType(c,zobj,OBJ_ZSET))
|
||||
{
|
||||
zslFreeLexRange(&range);
|
||||
@ -2894,7 +2940,7 @@ void genericZrangebylexCommand(client *c, int reverse) {
|
||||
|
||||
/* No "first" element in the specified interval. */
|
||||
if (eptr == NULL) {
|
||||
addReply(c, shared.emptymultibulk);
|
||||
addReplyNull(c);
|
||||
zslFreeLexRange(&range);
|
||||
return;
|
||||
}
|
||||
@ -2906,7 +2952,7 @@ void genericZrangebylexCommand(client *c, int reverse) {
|
||||
/* We don't know in advance how many matching elements there are in the
|
||||
* list, so we push this object that will represent the multi-bulk
|
||||
* length in the output buffer, and will "fix" it later */
|
||||
replylen = addDeferredMultiBulkLength(c);
|
||||
replylen = addReplyDeferredLen(c);
|
||||
|
||||
/* If there is an offset, just traverse the number of elements without
|
||||
* checking the score because that is done in the next loop. */
|
||||
@ -2958,7 +3004,7 @@ void genericZrangebylexCommand(client *c, int reverse) {
|
||||
|
||||
/* No "first" element in the specified interval. */
|
||||
if (ln == NULL) {
|
||||
addReply(c, shared.emptymultibulk);
|
||||
addReplyNull(c);
|
||||
zslFreeLexRange(&range);
|
||||
return;
|
||||
}
|
||||
@ -2966,7 +3012,7 @@ void genericZrangebylexCommand(client *c, int reverse) {
|
||||
/* We don't know in advance how many matching elements there are in the
|
||||
* list, so we push this object that will represent the multi-bulk
|
||||
* length in the output buffer, and will "fix" it later */
|
||||
replylen = addDeferredMultiBulkLength(c);
|
||||
replylen = addReplyDeferredLen(c);
|
||||
|
||||
/* If there is an offset, just traverse the number of elements without
|
||||
* checking the score because that is done in the next loop. */
|
||||
@ -3001,7 +3047,7 @@ void genericZrangebylexCommand(client *c, int reverse) {
|
||||
}
|
||||
|
||||
zslFreeLexRange(&range);
|
||||
setDeferredMultiBulkLength(c, replylen, rangelen);
|
||||
setDeferredArrayLen(c, replylen, rangelen);
|
||||
}
|
||||
|
||||
void zrangebylexCommand(client *c) {
|
||||
@ -3027,11 +3073,11 @@ void zscoreCommand(client *c) {
|
||||
robj *zobj;
|
||||
double score;
|
||||
|
||||
if ((zobj = lookupKeyReadOrReply(c,key,shared.nullbulk)) == NULL ||
|
||||
if ((zobj = lookupKeyReadOrReply(c,key,shared.null[c->resp])) == NULL ||
|
||||
checkType(c,zobj,OBJ_ZSET)) return;
|
||||
|
||||
if (zsetScore(zobj,c->argv[2]->ptr,&score) == C_ERR) {
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
} else {
|
||||
addReplyDouble(c,score);
|
||||
}
|
||||
@ -3043,7 +3089,7 @@ void zrankGenericCommand(client *c, int reverse) {
|
||||
robj *zobj;
|
||||
long rank;
|
||||
|
||||
if ((zobj = lookupKeyReadOrReply(c,key,shared.nullbulk)) == NULL ||
|
||||
if ((zobj = lookupKeyReadOrReply(c,key,shared.null[c->resp])) == NULL ||
|
||||
checkType(c,zobj,OBJ_ZSET)) return;
|
||||
|
||||
serverAssertWithInfo(c,ele,sdsEncodedObject(ele));
|
||||
@ -3051,7 +3097,7 @@ void zrankGenericCommand(client *c, int reverse) {
|
||||
if (rank >= 0) {
|
||||
addReplyLongLong(c,rank);
|
||||
} else {
|
||||
addReply(c,shared.nullbulk);
|
||||
addReplyNull(c);
|
||||
}
|
||||
}
|
||||
|
||||
@ -3109,11 +3155,11 @@ void genericZpopCommand(client *c, robj **keyv, int keyc, int where, int emitkey
|
||||
|
||||
/* No candidate for zpopping, return empty. */
|
||||
if (!zobj) {
|
||||
addReply(c,shared.emptymultibulk);
|
||||
addReplyNull(c);
|
||||
return;
|
||||
}
|
||||
|
||||
void *arraylen_ptr = addDeferredMultiBulkLength(c);
|
||||
void *arraylen_ptr = addReplyDeferredLen(c);
|
||||
long arraylen = 0;
|
||||
|
||||
/* We emit the key only for the blocking variant. */
|
||||
@ -3180,7 +3226,7 @@ void genericZpopCommand(client *c, robj **keyv, int keyc, int where, int emitkey
|
||||
}
|
||||
} while(--count);
|
||||
|
||||
setDeferredMultiBulkLength(c,arraylen_ptr,arraylen + (emitkey != 0));
|
||||
setDeferredArrayLen(c,arraylen_ptr,arraylen + (emitkey != 0));
|
||||
}
|
||||
|
||||
/* ZPOPMIN key [<count>] */
|
||||
@ -3235,7 +3281,7 @@ void blockingGenericZpopCommand(client *c, int where) {
|
||||
/* If we are inside a MULTI/EXEC and the zset is empty the only thing
|
||||
* we can do is treating it as a timeout (even with timeout 0). */
|
||||
if (c->flags & CLIENT_MULTI) {
|
||||
addReply(c,shared.nullmultibulk);
|
||||
addReplyNullArray(c);
|
||||
return;
|
||||
}
|
||||
|
||||
|
src/util.c (48 changed lines)
@ -39,6 +39,7 @@
#include <float.h>
#include <stdint.h>
#include <errno.h>
#include <time.h>

#include "util.h"
#include "sha1.h"
@ -47,7 +48,7 @@
int stringmatchlen(const char *pattern, int patternLen,
const char *string, int stringLen, int nocase)
{
while(patternLen) {
while(patternLen && stringLen) {
switch(pattern[0]) {
case '*':
while (pattern[1] == '*') {
@ -170,6 +171,22 @@ int stringmatch(const char *pattern, const char *string, int nocase) {
return stringmatchlen(pattern,strlen(pattern),string,strlen(string),nocase);
}

/* Fuzz stringmatchlen() trying to crash it with bad input. */
int stringmatchlen_fuzz_test(void) {
char str[32];
char pat[32];
int cycles = 10000000;
int total_matches = 0;
while(cycles--) {
int strlen = rand() % sizeof(str);
int patlen = rand() % sizeof(pat);
for (int j = 0; j < strlen; j++) str[j] = rand() % 128;
for (int j = 0; j < patlen; j++) pat[j] = rand() % 128;
total_matches += stringmatchlen(pat, patlen, str, strlen, 0);
}
return total_matches;
}

/* Convert a string representing an amount of memory into the number of
* bytes, so for instance memtoll("1Gb") will return 1073741824 that is
* (1024*1024*1024).
@ -346,6 +363,7 @@ int string2ll(const char *s, size_t slen, long long *value) {
int negative = 0;
unsigned long long v;

/* A zero length string is not a valid number. */
if (plen == slen)
return 0;

@ -355,6 +373,8 @@ int string2ll(const char *s, size_t slen, long long *value) {
return 1;
}

/* Handle negative numbers: just set a flag and continue like if it
* was a positive number. Later convert into negative. */
if (p[0] == '-') {
negative = 1;
p++; plen++;
@ -368,13 +388,11 @@ int string2ll(const char *s, size_t slen, long long *value) {
if (p[0] >= '1' && p[0] <= '9') {
v = p[0]-'0';
p++; plen++;
} else if (p[0] == '0' && slen == 1) {
*value = 0;
return 1;
} else {
return 0;
}

/* Parse all the other digits, checking for overflow at every step. */
while (plen < slen && p[0] >= '0' && p[0] <= '9') {
if (v > (ULLONG_MAX / 10)) /* Overflow. */
return 0;
@ -391,6 +409,8 @@ int string2ll(const char *s, size_t slen, long long *value) {
if (plen < slen)
return 0;

/* Convert to negative if needed, and do the final overflow check when
* converting from unsigned long long to long long. */
if (negative) {
if (v > ((unsigned long long)(-(LLONG_MIN+1))+1)) /* Overflow. */
return 0;
@ -602,7 +622,7 @@ void getRandomHexChars(char *p, size_t len) {
* already, this will be detected and handled correctly.
*
* The function does not try to normalize everything, but only the obvious
* case of one or more "../" appearning at the start of "filename"
* case of one or more "../" appearing at the start of "filename"
* relative path. */
sds getAbsolutePath(char *filename) {
char cwd[1024];
@ -649,6 +669,24 @@ sds getAbsolutePath(char *filename) {
return abspath;
}

/*
* Gets the proper timezone in a more portable fashion
* i.e timezone variables are linux specific.
*/

unsigned long getTimeZone(void) {
#ifdef __linux__
return timezone;
#else
struct timeval tv;
struct timezone tz;

gettimeofday(&tv, &tz);

return tz.tz_minuteswest * 60UL;
#endif
}

/* Return true if the specified path is just a file basename without any
* relative or absolute path. This function just checks that no / or \
* character exists inside the specified path, that's enough in the
@ -40,6 +40,7 @@

int stringmatchlen(const char *p, int plen, const char *s, int slen, int nocase);
int stringmatch(const char *p, const char *s, int nocase);
int stringmatchlen_fuzz_test(void);
long long memtoll(const char *p, int *err);
uint32_t digits10(uint64_t v);
uint32_t sdigits10(int64_t v);
@ -50,6 +51,7 @@ int string2ld(const char *s, size_t slen, long double *dp);
int d2string(char *buf, size_t len, double value);
int ld2string(char *buf, size_t len, long double value, int humanfriendly);
sds getAbsolutePath(char *filename);
unsigned long getTimeZone(void);
int pathIsBaseName(char *path);

#ifdef REDIS_TEST
@ -164,7 +164,7 @@ void *zrealloc(void *ptr, size_t size) {
if (!newptr) zmalloc_oom_handler(size);

*((size_t*)newptr) = size;
update_zmalloc_stat_free(oldsize);
update_zmalloc_stat_free(oldsize+PREFIX_SIZE);
update_zmalloc_stat_alloc(size+PREFIX_SIZE);
return (char*)newptr+PREFIX_SIZE;
#endif
@ -183,7 +183,7 @@ size_t zmalloc_size(void *ptr) {
return size+PREFIX_SIZE;
}
size_t zmalloc_usable(void *ptr) {
return zmalloc_usable(ptr)-PREFIX_SIZE;
return zmalloc_size(ptr)-PREFIX_SIZE;
}
#endif

@ -438,4 +438,20 @@ size_t zmalloc_get_memory_size(void) {
#endif
}

#ifdef REDIS_TEST
#define UNUSED(x) ((void)(x))
int zmalloc_test(int argc, char **argv) {
void *ptr;

UNUSED(argc);
UNUSED(argv);
printf("Initial used memory: %zu\n", zmalloc_used_memory());
ptr = zmalloc(123);
printf("Allocated 123 bytes; used: %zu\n", zmalloc_used_memory());
ptr = zrealloc(ptr, 456);
printf("Reallocated to 456 bytes; used: %zu\n", zmalloc_used_memory());
zfree(ptr);
printf("Freed pointer; used: %zu\n", zmalloc_used_memory());
return 0;
}
#endif
@ -103,4 +103,8 @@ size_t zmalloc_usable(void *ptr);
#define zmalloc_usable(p) zmalloc_size(p)
#endif

#ifdef REDIS_TEST
int zmalloc_test(int argc, char **argv);
#endif

#endif /* __ZMALLOC_H */
@ -49,7 +49,7 @@ start_server {tags {"repl"}} {
|
||||
set fd [open /tmp/repldump2.txt w]
|
||||
puts -nonewline $fd $csv2
|
||||
close $fd
|
||||
puts "Master - Slave inconsistency"
|
||||
puts "Master - Replica inconsistency"
|
||||
puts "Run diff -u against /tmp/repldump*.txt for more info"
|
||||
}
|
||||
assert_equal [r debug digest] [r -1 debug digest]
|
||||
|
@ -29,7 +29,7 @@ start_server {} {
|
||||
wait_for_condition 50 1000 {
|
||||
[$R(1) dbsize] == 1 && [$R(2) dbsize] == 1
|
||||
} else {
|
||||
fail "Slaves not replicating from master"
|
||||
fail "Replicas not replicating from master"
|
||||
}
|
||||
$R(0) config set repl-backlog-size 10mb
|
||||
$R(1) config set repl-backlog-size 10mb
|
||||
@ -41,12 +41,12 @@ start_server {} {
|
||||
set elapsed [expr {[clock milliseconds]-$cycle_start_time}]
|
||||
if {$elapsed > $duration*1000} break
|
||||
if {rand() < .05} {
|
||||
test "PSYNC2 #3899 regression: kill first slave" {
|
||||
test "PSYNC2 #3899 regression: kill first replica" {
|
||||
$R(1) client kill type master
|
||||
}
|
||||
}
|
||||
if {rand() < .05} {
|
||||
test "PSYNC2 #3899 regression: kill chained slave" {
|
||||
test "PSYNC2 #3899 regression: kill chained replica" {
|
||||
$R(2) client kill type master
|
||||
}
|
||||
}
|
||||
|
@ -33,9 +33,8 @@ start_server {} {
|
||||
|
||||
set cycle 1
|
||||
while {([clock seconds]-$start_time) < $duration} {
|
||||
test "PSYNC2: --- CYCLE $cycle ---" {
|
||||
incr cycle
|
||||
}
|
||||
test "PSYNC2: --- CYCLE $cycle ---" {}
|
||||
incr cycle
|
||||
|
||||
# Create a random replication layout.
|
||||
# Start with switching master (this simulates a failover).
|
||||
@ -96,7 +95,7 @@ start_server {} {
|
||||
if {$disconnect} {
|
||||
$R($slave_id) client kill type master
|
||||
if {$debug_msg} {
|
||||
puts "+++ Breaking link for slave #$slave_id"
|
||||
puts "+++ Breaking link for replica #$slave_id"
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -139,6 +138,11 @@ start_server {} {
|
||||
}
|
||||
assert {$sum == 4}
|
||||
}
|
||||
|
||||
# Limit anyway the maximum number of cycles. This is useful when the
|
||||
# test is skipped via --only option of the test suite. In that case
|
||||
# we don't want to see many seconds of this test being just skipped.
|
||||
if {$cycle > 50} break
|
||||
}
|
||||
|
||||
test "PSYNC2: Bring the master back again for next test" {
|
||||
@ -154,7 +158,7 @@ start_server {} {
|
||||
wait_for_condition 50 1000 {
|
||||
[status $R($master_id) connected_slaves] == 4
|
||||
} else {
|
||||
fail "Slave not reconnecting"
|
||||
fail "Replica not reconnecting"
|
||||
}
|
||||
}
|
||||
|
||||
@ -169,13 +173,13 @@ start_server {} {
|
||||
wait_for_condition 50 1000 {
|
||||
[status $R($master_id) connected_slaves] == 4
|
||||
} else {
|
||||
fail "Slave not reconnecting"
|
||||
fail "Replica not reconnecting"
|
||||
}
|
||||
set new_sync_count [status $R($master_id) sync_full]
|
||||
assert {$sync_count == $new_sync_count}
|
||||
}
|
||||
|
||||
test "PSYNC2: Slave RDB restart with EVALSHA in backlog issue #4483" {
|
||||
test "PSYNC2: Replica RDB restart with EVALSHA in backlog issue #4483" {
|
||||
# Pick a random slave
|
||||
set slave_id [expr {($master_id+1)%5}]
|
||||
set sync_count [status $R($master_id) sync_full]
|
||||
@ -190,7 +194,7 @@ start_server {} {
|
||||
wait_for_condition 50 1000 {
|
||||
[$R($master_id) debug digest] == [$R($slave_id) debug digest]
|
||||
} else {
|
||||
fail "Slave not reconnecting"
|
||||
fail "Replica not reconnecting"
|
||||
}
|
||||
|
||||
# Prevent the slave from receiving master updates, and at
|
||||
@ -224,7 +228,7 @@ start_server {} {
|
||||
wait_for_condition 50 1000 {
|
||||
[status $R($master_id) connected_slaves] == 4
|
||||
} else {
|
||||
fail "Slave not reconnecting"
|
||||
fail "Replica not reconnecting"
|
||||
}
|
||||
set new_sync_count [status $R($master_id) sync_full]
|
||||
assert {$sync_count == $new_sync_count}
|
||||
@ -234,7 +238,7 @@ start_server {} {
|
||||
wait_for_condition 50 1000 {
|
||||
[$R($master_id) debug digest] == [$R($slave_id) debug digest]
|
||||
} else {
|
||||
fail "Debug digest mismatch between master and slave in post-restart handshake"
|
||||
fail "Debug digest mismatch between master and replica in post-restart handshake"
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -16,7 +16,7 @@ start_server {tags {"repl"}} {
|
||||
wait_for_condition 50 100 {
|
||||
[r -1 get foo] eq {12345}
|
||||
} else {
|
||||
fail "Write did not reached slave"
|
||||
fail "Write did not reached replica"
|
||||
}
|
||||
}
|
||||
|
||||
@ -34,7 +34,7 @@ start_server {tags {"repl"}} {
|
||||
wait_for_condition 50 100 {
|
||||
[r -1 get foo] eq {12345}
|
||||
} else {
|
||||
fail "Write did not reached slave"
|
||||
fail "Write did not reached replica"
|
||||
}
|
||||
}
|
||||
|
||||
@ -60,7 +60,7 @@ start_server {tags {"repl"}} {
|
||||
wait_for_condition 50 100 {
|
||||
[r -1 get foo] eq {aaabbb}
|
||||
} else {
|
||||
fail "Write did not reached slave"
|
||||
fail "Write did not reached replica"
|
||||
}
|
||||
}
|
||||
|
||||
@ -81,7 +81,7 @@ start_server {tags {"repl"}} {
|
||||
set fd [open /tmp/repldump2.txt w]
|
||||
puts -nonewline $fd $csv2
|
||||
close $fd
|
||||
puts "Master - Slave inconsistency"
|
||||
puts "Master - Replica inconsistency"
|
||||
puts "Run diff -u against /tmp/repldump*.txt for more info"
|
||||
}
|
||||
assert_equal [r debug digest] [r -1 debug digest]
|
||||
|
@ -25,7 +25,7 @@ start_server {tags {"repl"}} {
|
||||
set fd [open /tmp/repldump2.txt w]
|
||||
puts -nonewline $fd $csv2
|
||||
close $fd
|
||||
puts "Master - Slave inconsistency"
|
||||
puts "Master - Replica inconsistency"
|
||||
puts "Run diff -u against /tmp/repldump*.txt for more info"
|
||||
}
|
||||
assert_equal [r debug digest] [r -1 debug digest]
|
||||
@ -98,7 +98,7 @@ start_server {tags {"repl"}} {
|
||||
set fd [open /tmp/repldump2.txt w]
|
||||
puts -nonewline $fd $csv2
|
||||
close $fd
|
||||
puts "Master - Slave inconsistency"
|
||||
puts "Master - Replica inconsistency"
|
||||
puts "Run diff -u against /tmp/repldump*.txt for more info"
|
||||
}
|
||||
|
||||
|
@ -47,7 +47,7 @@ start_server {tags {"repl"}} {
|
||||
set fd [open /tmp/repldump2.txt w]
|
||||
puts -nonewline $fd $csv2
|
||||
close $fd
|
||||
puts "Master - Slave inconsistency"
|
||||
puts "Master - Replica inconsistency"
|
||||
puts "Run diff -u against /tmp/repldump*.txt for more info"
|
||||
}
|
||||
assert_equal [r debug digest] [r -1 debug digest]
|
||||
|
@ -60,7 +60,7 @@ proc test_psync {descr duration backlog_size backlog_ttl delay cond diskless rec
|
||||
if ($reconnect) {
|
||||
for {set j 0} {$j < $duration*10} {incr j} {
|
||||
after 100
|
||||
# catch {puts "MASTER [$master dbsize] keys, SLAVE [$slave dbsize] keys"}
|
||||
# catch {puts "MASTER [$master dbsize] keys, REPLICA [$slave dbsize] keys"}
|
||||
|
||||
if {($j % 20) == 0} {
|
||||
catch {
|
||||
@ -96,7 +96,7 @@ proc test_psync {descr duration backlog_size backlog_ttl delay cond diskless rec
|
||||
set fd [open /tmp/repldump2.txt w]
|
||||
puts -nonewline $fd $csv2
|
||||
close $fd
|
||||
puts "Master - Slave inconsistency"
|
||||
puts "Master - Replica inconsistency"
|
||||
puts "Run diff -u against /tmp/repldump*.txt for more info"
|
||||
}
|
||||
assert_equal [r debug digest] [r -1 debug digest]
|
||||
|
@ -32,7 +32,7 @@ start_server {tags {"repl"}} {
|
||||
wait_for_condition 50 1000 {
|
||||
[string match *handshake* [$slave role]]
|
||||
} else {
|
||||
fail "Slave does not enter handshake state"
|
||||
fail "Replica does not enter handshake state"
|
||||
}
|
||||
}
|
||||
|
||||
@ -45,7 +45,7 @@ start_server {tags {"repl"}} {
|
||||
wait_for_condition 50 1000 {
|
||||
[log_file_matches $slave_log "*Timeout connecting to the MASTER*"]
|
||||
} else {
|
||||
fail "Slave is not able to detect timeout"
|
||||
fail "Replica is not able to detect timeout"
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -66,7 +66,7 @@ start_server {tags {"repl"}} {
|
||||
[lindex [$A role] 0] eq {slave} &&
|
||||
[string match {*master_link_status:up*} [$A info replication]]
|
||||
} else {
|
||||
fail "Can't turn the instance into a slave"
|
||||
fail "Can't turn the instance into a replica"
|
||||
}
|
||||
}
|
||||
|
||||
@ -77,7 +77,7 @@ start_server {tags {"repl"}} {
|
||||
wait_for_condition 50 100 {
|
||||
[$A debug digest] eq [$B debug digest]
|
||||
} else {
|
||||
fail "Master and slave have different digest: [$A debug digest] VS [$B debug digest]"
|
||||
fail "Master and replica have different digest: [$A debug digest] VS [$B debug digest]"
|
||||
}
|
||||
}
|
||||
|
||||
@ -102,10 +102,10 @@ start_server {tags {"repl"}} {
|
||||
[lindex [$B role] 0] eq {slave} &&
|
||||
[string match {*master_link_status:up*} [$B info replication]]
|
||||
} else {
|
||||
fail "Can't turn the instance into a slave"
|
||||
fail "Can't turn the instance into a replica"
|
||||
}
|
||||
|
||||
# Push elements into the "foo" list of the new slave.
|
||||
# Push elements into the "foo" list of the new replica.
|
||||
# If the client is still attached to the instance, we'll get
|
||||
# a desync between the two instances.
|
||||
$A rpush foo a b c
|
||||
@ -116,7 +116,7 @@ start_server {tags {"repl"}} {
|
||||
[$A lrange foo 0 -1] eq {a b c} &&
|
||||
[$B lrange foo 0 -1] eq {a b c}
|
||||
} else {
|
||||
fail "Master and slave have different digest: [$A debug digest] VS [$B debug digest]"
|
||||
fail "Master and replica have different digest: [$A debug digest] VS [$B debug digest]"
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -135,7 +135,7 @@ start_server {tags {"repl"}} {
|
||||
s master_link_status
|
||||
} {down}
|
||||
|
||||
test {The role should immediately be changed to "slave"} {
|
||||
test {The role should immediately be changed to "replica"} {
|
||||
s role
|
||||
} {slave}
|
||||
|
||||
@ -154,7 +154,7 @@ start_server {tags {"repl"}} {
|
||||
wait_for_condition 500 100 {
|
||||
[r 0 get mykey] eq {bar}
|
||||
} else {
|
||||
fail "SET on master did not propagated on slave"
|
||||
fail "SET on master did not propagated on replica"
|
||||
}
|
||||
}
|
||||
|
||||
@ -201,7 +201,7 @@ foreach dl {no yes} {
|
||||
lappend slaves [srv 0 client]
|
||||
start_server {} {
|
||||
lappend slaves [srv 0 client]
|
||||
test "Connect multiple slaves at the same time (issue #141), diskless=$dl" {
|
||||
test "Connect multiple replicas at the same time (issue #141), diskless=$dl" {
|
||||
# Send SLAVEOF commands to slaves
|
||||
[lindex $slaves 0] slaveof $master_host $master_port
|
||||
[lindex $slaves 1] slaveof $master_host $master_port
|
||||
@ -220,7 +220,7 @@ foreach dl {no yes} {
|
||||
}
|
||||
}
|
||||
if {$retry == 0} {
|
||||
error "assertion:Slaves not correctly synchronized"
|
||||
error "assertion:Replicas not correctly synchronized"
|
||||
}
|
||||
|
||||
# Wait that slaves acknowledge they are online so
|
||||
@ -231,7 +231,7 @@ foreach dl {no yes} {
|
||||
[lindex [[lindex $slaves 1] role] 3] eq {connected} &&
|
||||
[lindex [[lindex $slaves 2] role] 3] eq {connected}
|
||||
} else {
|
||||
fail "Slaves still not connected after some time"
|
||||
fail "Replicas still not connected after some time"
|
||||
}
|
||||
|
||||
# Stop the write load
|
||||
@ -248,7 +248,7 @@ foreach dl {no yes} {
|
||||
[$master dbsize] == [[lindex $slaves 1] dbsize] &&
|
||||
[$master dbsize] == [[lindex $slaves 2] dbsize]
|
||||
} else {
|
||||
fail "Different number of keys between masted and slave after too long time."
|
||||
fail "Different number of keys between masted and replica after too long time."
|
||||
}
|
||||
|
||||
# Check digests
|
||||
@ -266,3 +266,46 @@ foreach dl {no yes} {
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
start_server {tags {"repl"}} {
|
||||
set master [srv 0 client]
|
||||
set master_host [srv 0 host]
|
||||
set master_port [srv 0 port]
|
||||
set load_handle0 [start_write_load $master_host $master_port 3]
|
||||
start_server {} {
|
||||
test "Master stream is correctly processed while the replica has a script in -BUSY state" {
|
||||
set slave [srv 0 client]
|
||||
$slave config set lua-time-limit 500
|
||||
$slave slaveof $master_host $master_port
|
||||
|
||||
# Wait for the slave to be online
|
||||
wait_for_condition 500 100 {
|
||||
[lindex [$slave role] 3] eq {connected}
|
||||
} else {
|
||||
fail "Replica still not connected after some time"
|
||||
}
|
||||
|
||||
# Wait some time to make sure the master is sending data
|
||||
# to the slave.
|
||||
after 5000
|
||||
|
||||
# Stop the ability of the slave to process data by sendig
|
||||
# a script that will put it in BUSY state.
|
||||
$slave eval {for i=1,3000000000 do end} 0
|
||||
|
||||
# Wait some time again so that more master stream will
|
||||
# be processed.
|
||||
after 2000
|
||||
|
||||
# Stop the write load
|
||||
stop_write_load $load_handle0
|
||||
|
||||
# number of keys
|
||||
wait_for_condition 500 100 {
|
||||
[$master debug digest] eq [$slave debug digest]
|
||||
} else {
|
||||
fail "Different datasets between replica and master"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -17,7 +17,7 @@ test "Basic failover works if the master is down" {
|
||||
wait_for_condition 1000 50 {
|
||||
[lindex [S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port
|
||||
} else {
|
||||
fail "At least one Sentinel did not received failover info"
|
||||
fail "At least one Sentinel did not receive failover info"
|
||||
}
|
||||
}
|
||||
restart_instance redis $master_id
|
||||
@ -108,7 +108,7 @@ test "Failover works if we configure for absolute agreement" {
|
||||
wait_for_condition 1000 50 {
|
||||
[lindex [S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port
|
||||
} else {
|
||||
fail "At least one Sentinel did not received failover info"
|
||||
fail "At least one Sentinel did not receive failover info"
|
||||
}
|
||||
}
|
||||
restart_instance redis $master_id
|
||||
|
@ -16,7 +16,7 @@ test "We can failover with Sentinel 1 crashed" {
|
||||
wait_for_condition 1000 50 {
|
||||
[lindex [S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port
|
||||
} else {
|
||||
fail "Sentinel $id did not received failover info"
|
||||
fail "Sentinel $id did not receive failover info"
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -30,7 +30,7 @@ test "After Sentinel 1 is restarted, its config gets updated" {
|
||||
wait_for_condition 1000 50 {
|
||||
[lindex [S 1 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port
|
||||
} else {
|
||||
fail "Restarted Sentinel did not received failover info"
|
||||
fail "Restarted Sentinel did not receive failover info"
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -36,7 +36,7 @@ proc 02_crash_and_failover {} {
|
||||
wait_for_condition 1000 50 {
|
||||
[lindex [S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port
|
||||
} else {
|
||||
fail "At least one Sentinel did not received failover info"
|
||||
fail "At least one Sentinel did not receive failover info"
|
||||
}
|
||||
}
|
||||
restart_instance redis $master_id
|
||||
|
@ -12,7 +12,7 @@ test "Manual failover works" {
|
||||
wait_for_condition 1000 50 {
|
||||
[lindex [S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port
|
||||
} else {
|
||||
fail "At least one Sentinel did not received failover info"
|
||||
fail "At least one Sentinel did not receive failover info"
|
||||
}
|
||||
}
|
||||
set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]
|
||||
|
Some files were not shown because too many files have changed in this diff.