Merge branch 'keydbpro' into PRO_RELEASE_6

Former-commit-id: 6b385bc057d8a01ed57a6c0d89eb30e9832fe1ca
Commit: 08b6ab2a3e

.github/ISSUE_TEMPLATE/question.md (vendored, 21 changed lines)
@@ -1,21 +0,0 @@
---
name: Question
about: Ask the Redis developers
title: '[QUESTION]'
labels: ''
assignees: ''

---

Please keep in mind that this issue tracker should be used for reporting bugs or proposing improvements to the Redis server.

Generally, questions about using Redis should be directed to the [community](https://redis.io/community):

* [the mailing list](https://groups.google.com/forum/#!forum/redis-db)
* [the `redis` tag at StackOverflow](http://stackoverflow.com/questions/tagged/redis)
* [/r/redis subreddit](http://www.reddit.com/r/redis)
* [the irc channel #redis](http://webchat.freenode.net/?channels=redis) on freenode

It is also possible that your question was already asked here, so please do a quick issues search before submitting. Lastly, if your question is about one of Redis' [clients](https://redis.io/clients), you may to contact your client's developers for help.

That said, please feel free to replace all this with your question :)

.gitignore (vendored, 3 changed lines)

@@ -29,6 +29,7 @@ redis-check-rdb
keydb-check-rdb
redis-check-dump
keydb-check-dump
keydb-diagnostic-tool
redis-cli
redis-sentinel
redis-server
@@ -57,4 +58,4 @@ Makefile.dep
.ccls
.ccls-cache/*
compile_commands.json
redis.code-workspace
keydb.code-workspace

@@ -183,7 +183,7 @@ To compile against jemalloc on Mac OS X systems, use:
Monotonic clock
---------------

By default, Redis will build using the POSIX clock_gettime function as the
By default, KeyDB will build using the POSIX clock_gettime function as the
monotonic clock source. On most modern systems, the internal processor clock
can be used to improve performance. Cautions can be found here:
http://oliveryang.net/2015/09/pitfalls-of-TSC-usage/

TLS.md (4 changed lines)

@@ -28,8 +28,8 @@ To manually run a Redis server with TLS mode (assuming `gen-test-certs.sh` was
invoked so sample certificates/keys are available):

    ./src/keydb-server --tls-port 6379 --port 0 \
        --tls-cert-file ./tests/tls/keydb.crt \
        --tls-key-file ./tests/tls/keydb.key \
        --tls-cert-file ./tests/tls/client.crt \
        --tls-key-file ./tests/tls/client.key \
        --tls-ca-cert-file ./tests/tls/ca.crt

To connect to this Redis server with `keydb-cli`:
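(The hunk is cut off here. Not part of the diff: a matching client invocation would look roughly like the sketch below; --tls, --cert, --key and --cacert are standard keydb-cli options, but the exact line in TLS.md may differ.)

    ./src/keydb-cli --tls -p 6379 \
        --cert ./tests/tls/client.crt \
        --key ./tests/tls/client.key \
        --cacert ./tests/tls/ca.crt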

keydb.conf (251 changed lines)

@@ -32,8 +32,17 @@
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# Included paths may contain wildcards. All files matching the wildcards will
# be included in alphabetical order.
# Note that if an include path contains a wildcards but no files match it when
# the server is started, the include statement will be ignored and no error will
# be emitted. It is safe, therefore, to include wildcard files from empty
# directories.
#
# include /path/to/local.conf
# include /path/to/other.conf
# include /path/to/fragments/*.conf
#

################################## MODULES #####################################

@@ -49,23 +58,32 @@
# for connections from all available network interfaces on the host machine.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
# Each address can be prefixed by "-", which means that redis will not fail to
# start if the address is not available. Being not available only refers to
# addresses that does not correspond to any network interfece. Addresses that
# are already in use will always fail, and unsupported protocols will always BE
# silently skipped.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
# bind 192.168.1.100 10.0.0.1     # listens on two specific IPv4 addresses
# bind 127.0.0.1 ::1              # listens on loopback IPv4 and IPv6
# bind * -::*                     # like the default, all available interfaces
#
# ~~~ WARNING ~~~ If the computer running KeyDB is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force KeyDB to listen only on the
# IPv4 loopback interface address (this means KeyDB will only be able to
# IPv4 and IPv6 (if available) loopback interface addresses (this means KeyDB will only be able to
# accept client connections from the same host that it is running on).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT OUT THE FOLLOWING LINE.
#
# You will also need to set a password unless you explicitly disable protected
# mode.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1
bind 127.0.0.1 -::1

# Protected mode is a layer of security protection, in order to avoid that
# KeyDB instances left open on the internet are accessed and exploited.
@@ -125,7 +143,7 @@ timeout 0
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# KeyDB default starting with Redis 3.2.1.
# KeyDB default starting with KeyDB 3.2.1.
tcp-keepalive 300

################################# TLS/SSL #####################################

@@ -141,15 +159,37 @@ tcp-keepalive 300
# server to connected clients, masters or cluster peers. These files should be
# PEM formatted.
#
# tls-cert-file redis.crt
# tls-key-file redis.key
# tls-cert-file keydb.crt
# tls-key-file keydb.key
#
# If the key file is encrypted using a passphrase, it can be included here
# as well.
#
# tls-key-file-pass secret

# Normally KeyDB uses the same certificate for both server functions (accepting
# connections) and client functions (replicating from a master, establishing
# cluster bus connections, etc.).
#
# Sometimes certificates are issued with attributes that designate them as
# client-only or server-only certificates. In that case it may be desired to use
# different certificates for incoming (server) and outgoing (client)
# connections. To do that, use the following directives:
#
# tls-client-cert-file client.crt
# tls-client-key-file client.key
#
# If the key file is encrypted using a passphrase, it can be included here
# as well.
#
# tls-client-key-file-pass secret

# Configure a DH parameters file to enable Diffie-Hellman (DH) key exchange:
#
# tls-dh-params-file redis.dh
# tls-dh-params-file keydb.dh

# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL
# clients and peers. Redis requires an explicit configuration of at least one
# clients and peers. KeyDB requires an explicit configuration of at least one
# of these, and will not implicitly use the system wide configuration.
#
# tls-ca-cert-file ca.crt
@@ -172,7 +212,7 @@ tcp-keepalive 300
#
# tls-replication yes

# By default, the Redis Cluster bus uses a plain TCP connection. To enable
# By default, the KeyDB Cluster bus uses a plain TCP connection. To enable
# TLS for the bus protocol, use the following directive:
#
# tls-cluster yes
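(Not part of the diff: the directives introduced above combine roughly as in the sketch below for a TLS-only instance that also encrypts replication and cluster-bus traffic; the certificate file names are placeholders.)

    port 0
    tls-port 6379
    tls-cert-file keydb.crt
    tls-key-file keydb.key
    tls-ca-cert-file ca.crt
    tls-replication yes
    tls-cluster yes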
@@ -269,6 +309,16 @@ logfile ""
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

# To disable the built in crash log, which will possibly produce cleaner core
# dumps when they are needed, uncomment the following:
#
# crash-log-enabled no

# To disable the fast memory check that's run as part of the crash log, which
# will possibly let keydb terminate sooner, uncomment the following:
#
# crash-memcheck-enabled no

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
@@ -282,9 +332,31 @@ databases 16
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo yes

# By default, KeyDB modifies the process title (as seen in 'top' and 'ps') to
# provide some runtime information. It is possible to disable this and leave
# the process name as executed by setting the following to no.
set-proc-title yes

# Retrieving "message of today" using CURL requests.
#enable-motd yes

# When changing the process title, KeyDB uses the following template to construct
# the modified title.
#
# Template variables are specified in curly brackets. The following variables are
# supported:
#
# {title} Name of process as executed if parent, or type of child process.
# {listen-addr} Bind address or '*' followed by TCP or TLS port listening on, or
# Unix socket if only that's available.
# {server-mode} Special mode, i.e. "[sentinel]" or "[cluster]".
# {port} TCP port listening on, or 0.
# {tls-port} TLS port listening on, or 0.
# {unixsocket} Unix domain socket listening on, or "".
# {config-file} Name of configuration file used.
#
proc-title-template "{title} {listen-addr} {server-mode}"
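(Not part of the diff: combining the template variables listed above, a customized title could look like the sketch below, assuming literal text is allowed alongside the variables.)

    proc-title-template "{title} {listen-addr} {server-mode} cfg={config-file}"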

################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
@@ -299,8 +371,6 @@ always-show-logo yes
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
@@ -341,6 +411,21 @@ rdbcompression yes
# tell the loading code to skip the check.
rdbchecksum yes

# Enables or disables full sanitation checks for ziplist and listpack etc when
# loading an RDB or RESTORE payload. This reduces the chances of a assertion or
# crash later on while processing commands.
# Options:
# no - Never perform full sanitation
# yes - Always perform full sanitation
# clients - Perform full sanitation only for user connections.
# Excludes: RDB files, RESTORE commands received from the master
# connection, and client connections which have the
# skip-sanitize-payload ACL flag.
# The default should be 'clients' but since it currently affects cluster
# resharding via MIGRATE, it is temporarily set to 'no' by default.
#
# sanitize-dump-payload no

# The filename where to dump the DB
dbfilename dump.rdb

@@ -397,7 +482,7 @@ dir ./
#
# masterauth <master-password>
#
# However this is not enough if you are using KeyDB ACLs (for Redis version
# However this is not enough if you are using KeyDB ACLs (for KeyDB version
# 6 or greater), and the default user is not capable of running the PSYNC
# command and/or other commands needed for replication (gathered in the
# @replication group). In this case it's better to configure a special user to
@@ -443,7 +528,7 @@ replica-serve-stale-data yes
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
# Since KeyDB 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
@@ -595,6 +680,18 @@ repl-disable-tcp-nodelay no
# By default the priority is 100.
replica-priority 100

# -----------------------------------------------------------------------------
# By default, KeyDB Sentinel includes all replicas in its reports. A replica
# can be excluded from KeyDB Sentinel's announcements. An unannounced replica
# will be ignored by the 'sentinel replicas <master>' command and won't be
# exposed to KeyDB Sentinel's clients.
#
# This option does not change the behavior of replica-priority. Even with
# replica-announced set to 'no', the replica can be promoted to master. To
# prevent this behavior, set replica-priority to 0.
#
# replica-announced yes

# It is possible for a master to stop accepting writes if there are less than
# N replicas connected, having a lag less or equal than M seconds.
#
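(The hunk is cut off here. Not part of the diff: in the upstream configuration this paragraph introduces the min-replicas-to-write and min-replicas-max-lag directives; the sketch below requires at least 3 reachable replicas lagging no more than 10 seconds.)

    min-replicas-to-write 3
    min-replicas-max-lag 10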
@@ -714,6 +811,8 @@ replica-priority 100
# off Disable the user: it's no longer possible to authenticate
# with this user, however the already authenticated connections
# will still work.
# skip-sanitize-payload RESTORE dump-payload sanitation is skipped.
# sanitize-payload RESTORE dump-payload is sanitized (default).
# +<command> Allow the execution of that command
# -<command> Disallow the execution of that command
# +@<category> Allow the execution of all the commands in such category
@@ -736,6 +835,11 @@ replica-priority 100
# It is possible to specify multiple patterns.
# allkeys Alias for ~*
# resetkeys Flush the list of allowed keys patterns.
# &<pattern> Add a glob-style pattern of Pub/Sub channels that can be
# accessed by the user. It is possible to specify multiple channel
# patterns.
# allchannels Alias for &*
# resetchannels Flush the list of allowed channel patterns.
# ><password> Add this password to the list of valid password for the user.
# For example >mypass will add "mypass" to the list.
# This directive clears the "nopass" flag (see later).
@@ -775,6 +879,40 @@ replica-priority 100
#
# Basically ACL rules are processed left-to-right.
#
# The following is a list of command categories and their meanings:
# * keyspace - Writing or reading from keys, databases, or their metadata
#              in a type agnostic way. Includes DEL, RESTORE, DUMP, RENAME, EXISTS, DBSIZE,
#              KEYS, EXPIRE, TTL, FLUSHALL, etc. Commands that may modify the keyspace,
#              key or metadata will also have `write` category. Commands that only read
#              the keyspace, key or metadata will have the `read` category.
# * read - Reading from keys (values or metadata). Note that commands that don't
#          interact with keys, will not have either `read` or `write`.
# * write - Writing to keys (values or metadata)
# * admin - Administrative commands. Normal applications will never need to use
#           these. Includes REPLICAOF, CONFIG, DEBUG, SAVE, MONITOR, ACL, SHUTDOWN, etc.
# * dangerous - Potentially dangerous (each should be considered with care for
#               various reasons). This includes FLUSHALL, MIGRATE, RESTORE, SORT, KEYS,
#               CLIENT, DEBUG, INFO, CONFIG, SAVE, REPLICAOF, etc.
# * connection - Commands affecting the connection or other connections.
#                This includes AUTH, SELECT, COMMAND, CLIENT, ECHO, PING, etc.
# * blocking - Potentially blocking the connection until released by another
#              command.
# * fast - Fast O(1) commands. May loop on the number of arguments, but not the
#          number of elements in the key.
# * slow - All commands that are not Fast.
# * pubsub - PUBLISH / SUBSCRIBE related
# * transaction - WATCH / MULTI / EXEC related commands.
# * scripting - Scripting related.
# * set - Data type: sets related.
# * sortedset - Data type: zsets related.
# * list - Data type: lists related.
# * hash - Data type: hashes related.
# * string - Data type: strings related.
# * bitmap - Data type: bitmaps related.
# * hyperloglog - Data type: hyperloglog related.
# * geo - Data type: geo related.
# * stream - Data type: streams related.
#
# For more information about ACL configuration please refer to
# the Redis web site at https://redis.io/topics/acl
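(Not part of the diff: a sketch illustrating the rule syntax and categories listed above; the user name, password and patterns are hypothetical.)

    user appuser on >s3cret ~app:* &app:* +@read +@write -@dangerous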
@@ -798,14 +936,38 @@ acllog-max-len 128
#
# aclfile /etc/keydb/users.acl

# IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatibility
# IMPORTANT NOTE: starting with KeyDB 6 "requirepass" is just a compatibility
# layer on top of the new ACL system. The option effect will be just setting
# the password for the default user. Clients will still authenticate using
# AUTH <password> as usually, or more explicitly with AUTH default <password>
# if they follow the new protocol: both will work.
#
# The requirepass is not compatible with aclfile option and the ACL LOAD
# command, these will cause requirepass to be ignored.
#
# requirepass foobared

# New users are initialized with restrictive permissions by default, via the
# equivalent of this ACL rule 'off resetkeys -@all'. Starting with KeyDB 6.2, it
# is possible to manage access to Pub/Sub channels with ACL rules as well. The
# default Pub/Sub channels permission if new users is controlled by the
# acl-pubsub-default configuration directive, which accepts one of these values:
#
# allchannels: grants access to all Pub/Sub channels
# resetchannels: revokes access to all Pub/Sub channels
#
# To ensure backward compatibility while upgrading KeyDB 6.0, acl-pubsub-default
# defaults to the 'allchannels' permission.
#
# Future compatibility note: it is very likely that in a future version of KeyDB
# the directive's default of 'allchannels' will be changed to 'resetchannels' in
# order to provide better out-of-the-box Pub/Sub security. Therefore, it is
# recommended that you explicitly define Pub/Sub permissions for all users
# rather then rely on implicit default values. Once you've set explicit
# Pub/Sub for all existing users, you should uncomment the following line.
#
# acl-pubsub-default resetchannels

# Command renaming (DEPRECATED).
#
# ------------------------------------------------------------------------
@@ -842,7 +1004,7 @@ acllog-max-len 128
# Once the limit is reached KeyDB will close all the new connections sending
# an error 'max number of clients reached'.
#
# IMPORTANT: When Redis Cluster is used, the max number of connections is also
# IMPORTANT: When KeyDB Cluster is used, the max number of connections is also
# shared with the cluster bus: every node in the cluster will use two
# connections, one incoming and another outgoing. It is important to size the
# limit accordingly in case of very large clusters.
@@ -918,7 +1080,15 @@ acllog-max-len 128
#
# maxmemory-samples 5

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# Eviction processing is designed to function well with the default setting.
# If there is an unusually large amount of write traffic, this value may need to
# be increased. Decreasing this value may reduce latency at the risk of
# eviction processing effectiveness
# 0 = minimum latency, 10 = default, 100 = process without regard to latency
#
# maxmemory-eviction-tenacity 10

# Starting from KeyDB 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
@@ -1011,6 +1181,13 @@ replica-lazy-flush no

lazyfree-lazy-user-del no

# FLUSHDB, FLUSHALL, and SCRIPT FLUSH support both asynchronous and synchronous
# deletion, which can be controlled by passing the [SYNC|ASYNC] flags into the
# commands. When neither flag is passed, this directive will be used to determine
# if the data should be deleted asynchronously.

lazyfree-lazy-user-flush no
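(Not part of the diff: the SYNC/ASYNC flags mentioned above are passed on the command itself, for example:)

    FLUSHDB ASYNC
    FLUSHALL SYNC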

############################ KERNEL OOM CONTROL ##############################

# On Linux, it is possible to hint the kernel OOM killer on what processes
@@ -1042,6 +1219,19 @@ oom-score-adj no
# oom-score-adj-values to positive values will always succeed.
oom-score-adj-values 0 200 800


#################### KERNEL transparent hugepage CONTROL ######################

# Usually the kernel Transparent Huge Pages control is set to "madvise" or
# or "never" by default (/sys/kernel/mm/transparent_hugepage/enabled), in which
# case this config has no effect. On systems in which it is set to "always",
# KeyDB will attempt to disable it specifically for the KeyDB process in order
# to avoid latency problems specifically with fork(2) and CoW.
# If for some reason you prefer to keep it enabled, you can set this config to
# "no" and the kernel global to "always".

disable-thp yes

############################## APPEND ONLY MODE ###############################

# By default KeyDB asynchronously dumps the dataset on disk. This mode is
@@ -1269,12 +1459,21 @@ lua-time-limit 5000
# master in your cluster.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# one replica). To disable migration just set it to a very large value or
# set cluster-allow-replica-migration to 'no'.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# Turning off this option allows to use less automatic cluster configuration.
# It both disables migration to orphaned masters and migration from masters
# that became empty.
#
# Default is 'yes' (allow automatic migrations).
#
# cluster-allow-replica-migration yes

# By default KeyDB Cluster nodes stop accepting queries if they detect there
# is at least a hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
@@ -1325,17 +1524,23 @@ lua-time-limit 5000
#
# In order to make KeyDB Cluster working in such environments, a static
# configuration where each node knows its public address is needed. The
# following two options are used for this scope, and are:
# following four options are used for this scope, and are:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-tls-port
# * cluster-announce-bus-port
#
# Each instructs the node about its address, client port, and cluster message
# Each instructs the node about its address, client ports (for connections
# without and with TLS), and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If cluster-tls is set to yes and cluster-announce-tls-port is omitted or set
# to zero, then cluster-announce-port refers to the TLS port. Note also that
# cluster-announce-tls-port has no effect if cluster-tls is set to no.
#
# If the above options are not used, the normal KeyDB Cluster auto-detection
# will be used instead.
#
@@ -1347,7 +1552,8 @@ lua-time-limit 5000
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-tls-port 6379
# cluster-announce-port 0
# cluster-announce-bus-port 6380

################################## SLOW LOG ###################################
@@ -1421,8 +1627,9 @@ latency-monitor-threshold 0
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# t Stream commands
# d Module key type events
# m Key-miss events (Note: It is not included in the 'A' class)
# A Alias for g$lshzxet, so that the "AKE" string means all the events
# A Alias for g$lshzxetd, so that the "AKE" string means all the events
# (Except key-miss events which are excluded from 'A' due to their
# unique nature).
#
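(Not part of the diff: these event classes are combined into a single flag string for the notify-keyspace-events directive this hunk documents; for example, the "AKE" combination mentioned above enables all events:)

    notify-keyspace-events "AKE"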

(File diff suppressed because it is too large.)

@@ -20,12 +20,12 @@
# The port that this sentinel instance will run on
port 26379

# By default Redis Sentinel does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/keydb-sentinel.pid when
# By default KeyDB Sentinel does not run as a daemon. Use 'yes' if you need it.
# Note that KeyDB will write a pid file in /var/run/keydb-sentinel.pid when
# daemonized.
daemonize yes

# When running daemonized, Redis Sentinel writes a pid file in
# When running daemonized, KeyDB Sentinel writes a pid file in
# /var/run/keydb-sentinel.pid by default. You can specify a custom pid file
# location here.
pidfile /var/run/sentinel/keydb-sentinel.pid
@@ -59,7 +59,7 @@ logfile /var/log/keydb/keydb-sentinel.log

# dir <working-directory>
# Every long running process should have a well-defined working directory.
# For Redis Sentinel to chdir to /tmp at startup is the simplest thing
# For KeyDB Sentinel to chdir to /tmp at startup is the simplest thing
# for the process to don't interfere with administrative tasks such as
# unmounting filesystems.
dir /var/lib/keydb
@@ -86,22 +86,34 @@ sentinel monitor mymaster 127.0.0.1 6379 2
# sentinel auth-pass <master-name> <password>
#
# Set the password to use to authenticate with the master and replicas.
# Useful if there is a password set in the Redis instances to monitor.
# Useful if there is a password set in the KeyDB instances to monitor.
#
# Note that the master password is also used for replicas, so it is not
# possible to set a different password in masters and replicas instances
# if you want to be able to monitor these instances with Sentinel.
#
# However you can have Redis instances without the authentication enabled
# mixed with Redis instances requiring the authentication (as long as the
# However you can have KeyDB instances without the authentication enabled
# mixed with KeyDB instances requiring the authentication (as long as the
# password set is the same for all the instances requiring the password) as
# the AUTH command will have no effect in Redis instances with authentication
# the AUTH command will have no effect in KeyDB instances with authentication
# switched off.
#
# Example:
#
# sentinel auth-pass mymaster MySUPER--secret-0123passw0rd

# sentinel auth-user <master-name> <username>
#
# This is useful in order to authenticate to instances having ACL capabilities,
# that is, running KeyDB 6.0 or greater. When just auth-pass is provided the
# Sentinel instance will authenticate to KeyDB using the old "AUTH <pass>"
# method. When also an username is provided, it will use "AUTH <user> <pass>".
# In the KeyDB servers side, the ACL to provide just minimal access to
# Sentinel instances, should be configured along the following lines:
#
#     user sentinel-user >somepassword +client +subscribe +publish \
#                        +ping +info +multi +slaveof +config +client +exec on

# sentinel down-after-milliseconds <master-name> <milliseconds>
#
# Number of milliseconds the master (or any attached replica or sentinel) should
@@ -112,6 +124,73 @@ sentinel monitor mymaster 127.0.0.1 6379 2
# Default is 30 seconds.
sentinel down-after-milliseconds mymaster 30000

# IMPORTANT NOTE: starting with KeyDB 6.2 ACL capability is supported for
# Sentinel mode, please refer to the Redis website https://redis.io/topics/acl
# for more details.

# Sentinel's ACL users are defined in the following format:
#
#   user <username> ... acl rules ...
#
# For example:
#
#   user worker +@admin +@connection ~* on >ffa9203c493aa99
#
# For more information about ACL configuration please refer to the Redis
# website at https://redis.io/topics/acl and KeyDB server configuration
# template keydb.conf.

# ACL LOG
#
# The ACL Log tracks failed commands and authentication events associated
# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked
# by ACLs. The ACL Log is stored in memory. You can reclaim memory with
# ACL LOG RESET. Define the maximum entry length of the ACL Log below.
acllog-max-len 128

# Using an external ACL file
#
# Instead of configuring users here in this file, it is possible to use
# a stand-alone file just listing users. The two methods cannot be mixed:
# if you configure users here and at the same time you activate the external
# ACL file, the server will refuse to start.
#
# The format of the external ACL user file is exactly the same as the
# format that is used inside keydb.conf to describe users.
#
# aclfile /etc/keydb/sentinel-users.acl

# requirepass <password>
#
# You can configure Sentinel itself to require a password, however when doing
# so Sentinel will try to authenticate with the same password to all the
# other Sentinels. So you need to configure all your Sentinels in a given
# group with the same "requirepass" password. Check the following documentation
# for more info: https://redis.io/topics/sentinel
#
# IMPORTANT NOTE: starting with KeyDB 6.2 "requirepass" is a compatibility
# layer on top of the ACL system. The option effect will be just setting
# the password for the default user. Clients will still authenticate using
# AUTH <password> as usually, or more explicitly with AUTH default <password>
# if they follow the new protocol: both will work.
#
# New config files are advised to use separate authentication control for
# incoming connections (via ACL), and for outgoing connections (via
# sentinel-user and sentinel-pass)
#
# The requirepass is not compatable with aclfile option and the ACL LOAD
# command, these will cause requirepass to be ignored.

# sentinel sentinel-user <username>
#
# You can configure Sentinel to authenticate with other Sentinels with specific
# user name.

# sentinel sentinel-pass <password>
#
# The password for Sentinel to authenticate with other Sentinels. If sentinel-user
# is not configured, Sentinel will use 'default' user with sentinel-pass to authenticate.
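(Not part of the diff: a sketch of the two directives described above; the values are placeholders.)

    sentinel sentinel-user sentinel-admin
    sentinel sentinel-pass a-strong-shared-password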

# sentinel parallel-syncs <master-name> <numreplicas>
#
# How many replicas we can reconfigure to point to the new replica simultaneously
@@ -172,7 +251,7 @@ sentinel failover-timeout mymaster 180000
# generated in the WARNING level (for instance -sdown, -odown, and so forth).
# This script should notify the system administrator via email, SMS, or any
# other messaging system, that there is something wrong with the monitored
# Redis systems.
# KeyDB systems.
#
# The script is called with just two arguments: the first is the event type
# and the second the event description.
@@ -182,7 +261,7 @@ sentinel failover-timeout mymaster 180000
#
# Example:
#
# sentinel notification-script mymaster /var/redis/notify.sh
# sentinel notification-script mymaster /var/keydb/notify.sh

# CLIENTS RECONFIGURATION SCRIPT
#
@@ -207,7 +286,7 @@ sentinel failover-timeout mymaster 180000
#
# Example:
#
# sentinel client-reconfig-script mymaster /var/redis/reconfig.sh
# sentinel client-reconfig-script mymaster /var/keydb/reconfig.sh

# SECURITY
#
@@ -218,11 +297,11 @@ sentinel failover-timeout mymaster 180000

sentinel deny-scripts-reconfig yes

# REDIS COMMANDS RENAMING
# KEYDB COMMANDS RENAMING
#
# Sometimes the Redis server has certain commands, that are needed for Sentinel
# Sometimes the KeyDB server has certain commands, that are needed for Sentinel
# to work correctly, renamed to unguessable strings. This is often the case
# of CONFIG and SLAVEOF in the context of providers that provide Redis as
# of CONFIG and SLAVEOF in the context of providers that provide KeyDB as
# a service, and don't want the customers to reconfigure the instances outside
# of the administration console.
#
@@ -239,6 +318,24 @@ sentinel deny-scripts-reconfig yes
# SENTINEL SET can also be used in order to perform this configuration at runtime.
#
# In order to set a command back to its original name (undo the renaming), it
# is possible to just rename a command to itsef:
# is possible to just rename a command to itself:
#
# SENTINEL rename-command mymaster CONFIG CONFIG

# HOSTNAMES SUPPORT
#
# Normally Sentinel uses only IP addresses and requires SENTINEL MONITOR
# to specify an IP address. Also, it requires the KeyDB replica-announce-ip
# keyword to specify only IP addresses.
#
# You may enable hostnames support by enabling resolve-hostnames. Note
# that you must make sure your DNS is configured properly and that DNS
# resolution does not introduce very long delays.
#
SENTINEL resolve-hostnames no

# When resolve-hostnames is enabled, Sentinel still uses IP addresses
# when exposing instances to users, configuration files, etc. If you want
# to retain the hostnames when announced, enable announce-hostnames below.
#
SENTINEL announce-hostnames no
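(Not part of the diff: with resolve-hostnames enabled, a monitor entry may use a DNS name instead of an IP; the host name below is hypothetical.)

    SENTINEL resolve-hostnames yes
    sentinel monitor mymaster keydb-primary.example.internal 6379 2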

pkg/deb/debian/zsh-completion/_keydb-cli (new file, 53 lines)

@@ -0,0 +1,53 @@
#compdef keydb-cli
local -a options
options=(
  '-h[Server hostname (default: 127.0.0.1).]: :_hosts'
  '-p[Server port (default: 6379).]'
  '-s[Server socket (overrides hostname and port).]'
  '-a[Password to use when connecting to the server. You can also use the REDISCLI_AUTH environment variable to pass this password more safely (if both are used, this argument takes precedence).]'
  '--user[Used to send ACL style "AUTH username pass". Needs -a.]'
  '--pass[Alias of -a for consistency with the new --user option.]'
  '--askpass[Force user to input password with mask from STDIN. If this argument is used, "-a" and REDISCLI_AUTH environment variable will be ignored.]'
  '-u[Server URI.]'
  '-r[Execute specified command N times.]'
  '-i[When -r is used, waits <interval> seconds per command. It is possible to specify sub-second times like -i 0.1.]'
  '-n[Database number.]'
  '-3[Start session in RESP3 protocol mode.]'
  '-x[Read last argument from STDIN.]'
  '-d[Delimiter between response bulks for raw formatting (default: \n).]'
  '-D[D <delimiter> Delimiter between responses for raw formatting (default: \n).]'
  '-c[Enable cluster mode (follow -ASK and -MOVED redirections).]'
  '-e[Return exit error code when command execution fails.]'
  '--raw[Use raw formatting for replies (default when STDOUT is not a tty).]'
  '--no-raw[Force formatted output even when STDOUT is not a tty.]'
  '--quoted-input[Force input to be handled as quoted strings.]'
  '--csv[Output in CSV format.]'
  '--show-pushes[Whether to print RESP3 PUSH messages. Enabled by default when STDOUT is a tty but can be overriden with --show-pushes no.]'
  '--stat[Print rolling stats about server: mem, clients, ...]'
  '--latency[Enter a special mode continuously sampling latency. If you use this mode in an interactive session it runs forever displaying real-time stats. Otherwise if --raw or --csv is specified, or if you redirect the output to a non TTY, it samples the latency for 1 second (you can use -i to change the interval), then produces a single output and exits.]'
  '--latency-history[Like --latency but tracking latency changes over time. Default time interval is 15 sec. Change it using -i.]'
  '--latency-dist[Shows latency as a spectrum, requires xterm 256 colors. Default time interval is 1 sec. Change it using -i.]'
  '--lru-test[Simulate a cache workload with an 80-20 distribution.]'
  '--replica[Simulate a replica showing commands received from the master.]'
  '--rdb[Transfer an RDB dump from remote server to local file.]'
  '--pipe[Transfer raw KeyDB protocol from stdin to server.]'
  '--pipe-timeout[In --pipe mode, abort with error if after sending all data. no reply is received within <n> seconds. Default timeout: 30. Use 0 to wait forever.]'
  '--bigkeys[Sample KeyDB keys looking for keys with many elements (complexity).]'
  '--memkeys[Sample KeyDB keys looking for keys consuming a lot of memory.]'
  '--memkeys-samples[Sample KeyDB keys looking for keys consuming a lot of memory. And define number of key elements to sample]'
  '--hotkeys[Sample KeyDB keys looking for hot keys. only works when maxmemory-policy is *lfu.]'
  '--scan[List all keys using the SCAN command.]'
  '--pattern[Keys pattern when using the --scan, --bigkeys or --hotkeys options (default: *).]'
  '--quoted-pattern[Same as --pattern, but the specified string can be quoted, in order to pass an otherwise non binary-safe string.]'
  '--intrinsic-latency[Run a test to measure intrinsic system latency. The test will run for the specified amount of seconds.]'
  '--eval[Send an EVAL command using the Lua script at <file>.]'
  '--ldb[Used with --eval enable the Redis Lua debugger.]'
  '--ldb-sync-mode[Like --ldb but uses the synchronous Lua debugger, in this mode the server is blocked and script changes are not rolled back from the server memory.]'
  '--cluster[<command> args... opts... Cluster Manager command and arguments (see below).]'
  '--verbose[Verbose mode.]'
  '--no-auth-warning[Dont show warning message when using password on command line interface.]'
  '--help[Output this help and exit.]'
  '--version[Output version and exit.]'
)

_arguments -s $options
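(Not part of the diff: once installed, zsh picks a completion file like this up from a directory on $fpath. A manual install is roughly the sketch below, where ~/.zsh/completions is an arbitrary local directory.)

    mkdir -p ~/.zsh/completions
    cp pkg/deb/debian/zsh-completion/_keydb-cli ~/.zsh/completions/
    # in ~/.zshrc, before running compinit:
    fpath=(~/.zsh/completions $fpath)
    autoload -Uz compinit && compinit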

(File diff suppressed because it is too large.)
@ -20,20 +20,20 @@
# The port that this sentinel instance will run on
port 26379

# By default Redis Sentinel does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis-sentinel.pid when
# By default KeyDB Sentinel does not run as a daemon. Use 'yes' if you need it.
# Note that KeyDB will write a pid file in /var/run/keydb-sentinel.pid when
# daemonized.
daemonize no

# When running daemonized, Redis Sentinel writes a pid file in
# /var/run/redis-sentinel.pid by default. You can specify a custom pid file
# When running daemonized, KeyDB Sentinel writes a pid file in
# /var/run/keydb-sentinel.pid by default. You can specify a custom pid file
# location here.
pidfile /var/run/redis-sentinel.pid
pidfile /var/run/sentinel/keydb-sentinel.pid

# Specify the log file name. Also the empty string can be used to force
# Sentinel to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/log/redis/sentinel.log
logfile /var/log/keydb/keydb-sentinel.log

# sentinel announce-ip <ip>
# sentinel announce-port <port>
@@ -59,12 +59,12 @@ logfile /var/log/redis/sentinel.log

# dir <working-directory>
# Every long running process should have a well-defined working directory.
# For Redis Sentinel to chdir to /tmp at startup is the simplest thing
# For KeyDB Sentinel to chdir to /tmp at startup is the simplest thing
# for the process to don't interfere with administrative tasks such as
# unmounting filesystems.
dir /tmp

# sentinel monitor <master-name> <ip> <redis-port> <quorum>
# sentinel monitor <master-name> <ip> <keydb-port> <quorum>
#
# Tells Sentinel to monitor this master, and to consider it in O_DOWN
# (Objectively Down) state only if at least <quorum> sentinels agree.
@@ -86,22 +86,34 @@ sentinel monitor mymaster 127.0.0.1 6379 2
# sentinel auth-pass <master-name> <password>
#
# Set the password to use to authenticate with the master and replicas.
# Useful if there is a password set in the Redis instances to monitor.
# Useful if there is a password set in the KeyDB instances to monitor.
#
# Note that the master password is also used for replicas, so it is not
# possible to set a different password in masters and replicas instances
# if you want to be able to monitor these instances with Sentinel.
#
# However you can have Redis instances without the authentication enabled
# mixed with Redis instances requiring the authentication (as long as the
# However you can have KeyDB instances without the authentication enabled
# mixed with KeyDB instances requiring the authentication (as long as the
# password set is the same for all the instances requiring the password) as
# the AUTH command will have no effect in Redis instances with authentication
# the AUTH command will have no effect in KeyDB instances with authentication
# switched off.
#
# Example:
#
# sentinel auth-pass mymaster MySUPER--secret-0123passw0rd

# sentinel auth-user <master-name> <username>
#
# This is useful in order to authenticate to instances having ACL capabilities,
# that is, running KeyDB 6.0 or greater. When just auth-pass is provided the
# Sentinel instance will authenticate to KeyDB using the old "AUTH <pass>"
# method. When also an username is provided, it will use "AUTH <user> <pass>".
# In the KeyDB servers side, the ACL to provide just minimal access to
# Sentinel instances, should be configured along the following lines:
#
#     user sentinel-user >somepassword +client +subscribe +publish \
#                        +ping +info +multi +slaveof +config +client +exec on

# sentinel down-after-milliseconds <master-name> <milliseconds>
#
# Number of milliseconds the master (or any attached replica or sentinel) should
@@ -112,6 +124,73 @@ sentinel monitor mymaster 127.0.0.1 6379 2
# Default is 30 seconds.
sentinel down-after-milliseconds mymaster 30000

# IMPORTANT NOTE: starting with KeyDB 6.2 ACL capability is supported for
# Sentinel mode, please refer to the Redis website https://redis.io/topics/acl
# for more details.

# Sentinel's ACL users are defined in the following format:
#
#   user <username> ... acl rules ...
#
# For example:
#
#   user worker +@admin +@connection ~* on >ffa9203c493aa99
#
# For more information about ACL configuration please refer to the Redis
# website at https://redis.io/topics/acl and KeyDB server configuration
# template keydb.conf.

# ACL LOG
#
# The ACL Log tracks failed commands and authentication events associated
# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked
# by ACLs. The ACL Log is stored in memory. You can reclaim memory with
# ACL LOG RESET. Define the maximum entry length of the ACL Log below.
acllog-max-len 128

# Using an external ACL file
#
# Instead of configuring users here in this file, it is possible to use
# a stand-alone file just listing users. The two methods cannot be mixed:
# if you configure users here and at the same time you activate the external
# ACL file, the server will refuse to start.
#
# The format of the external ACL user file is exactly the same as the
# format that is used inside keydb.conf to describe users.
#
# aclfile /etc/keydb/sentinel-users.acl

# requirepass <password>
#
# You can configure Sentinel itself to require a password, however when doing
# so Sentinel will try to authenticate with the same password to all the
# other Sentinels. So you need to configure all your Sentinels in a given
# group with the same "requirepass" password. Check the following documentation
# for more info: https://redis.io/topics/sentinel
#
# IMPORTANT NOTE: starting with KeyDB 6.2 "requirepass" is a compatibility
# layer on top of the ACL system. The option effect will be just setting
# the password for the default user. Clients will still authenticate using
# AUTH <password> as usually, or more explicitly with AUTH default <password>
# if they follow the new protocol: both will work.
#
# New config files are advised to use separate authentication control for
# incoming connections (via ACL), and for outgoing connections (via
# sentinel-user and sentinel-pass)
#
# The requirepass is not compatable with aclfile option and the ACL LOAD
# command, these will cause requirepass to be ignored.

# sentinel sentinel-user <username>
#
# You can configure Sentinel to authenticate with other Sentinels with specific
# user name.

# sentinel sentinel-pass <password>
#
# The password for Sentinel to authenticate with other Sentinels. If sentinel-user
# is not configured, Sentinel will use 'default' user with sentinel-pass to authenticate.

# sentinel parallel-syncs <master-name> <numreplicas>
#
# How many replicas we can reconfigure to point to the new replica simultaneously
@@ -172,7 +251,7 @@ sentinel failover-timeout mymaster 180000
# generated in the WARNING level (for instance -sdown, -odown, and so forth).
# This script should notify the system administrator via email, SMS, or any
# other messaging system, that there is something wrong with the monitored
# Redis systems.
# KeyDB systems.
#
# The script is called with just two arguments: the first is the event type
# and the second the event description.
@@ -182,7 +261,7 @@ sentinel failover-timeout mymaster 180000
#
# Example:
#
# sentinel notification-script mymaster /var/redis/notify.sh
# sentinel notification-script mymaster /var/keydb/notify.sh

# CLIENTS RECONFIGURATION SCRIPT
#
@@ -207,7 +286,7 @@ sentinel failover-timeout mymaster 180000
#
# Example:
#
# sentinel client-reconfig-script mymaster /var/redis/reconfig.sh
# sentinel client-reconfig-script mymaster /var/keydb/reconfig.sh

# SECURITY
#
@@ -218,11 +297,11 @@ sentinel failover-timeout mymaster 180000

sentinel deny-scripts-reconfig yes

# REDIS COMMANDS RENAMING
# KEYDB COMMANDS RENAMING
#
# Sometimes the Redis server has certain commands, that are needed for Sentinel
# Sometimes the KeyDB server has certain commands, that are needed for Sentinel
# to work correctly, renamed to unguessable strings. This is often the case
# of CONFIG and SLAVEOF in the context of providers that provide Redis as
# of CONFIG and SLAVEOF in the context of providers that provide KeyDB as
# a service, and don't want the customers to reconfigure the instances outside
# of the administration console.
#
@@ -239,6 +318,24 @@ sentinel deny-scripts-reconfig yes
# SENTINEL SET can also be used in order to perform this configuration at runtime.
#
# In order to set a command back to its original name (undo the renaming), it
# is possible to just rename a command to itsef:
# is possible to just rename a command to itself:
#
# SENTINEL rename-command mymaster CONFIG CONFIG

# HOSTNAMES SUPPORT
#
# Normally Sentinel uses only IP addresses and requires SENTINEL MONITOR
# to specify an IP address. Also, it requires the KeyDB replica-announce-ip
# keyword to specify only IP addresses.
#
# You may enable hostnames support by enabling resolve-hostnames. Note
# that you must make sure your DNS is configured properly and that DNS
# resolution does not introduce very long delays.
#
SENTINEL resolve-hostnames no

# When resolve-hostnames is enabled, Sentinel still uses IP addresses
# when exposing instances to users, configuration files, etc. If you want
# to retain the hostnames when announced, enable announce-hostnames below.
#
SENTINEL announce-hostnames no

runtest (2 changed lines)

@@ -10,7 +10,7 @@ done

if [ -z $TCLSH ]
then
    echo "You need tcl 8.5 or newer in order to run the Redis test"
    echo "You need tcl 8.5 or newer in order to run the KeyDB test"
    exit 1
fi
$TCLSH tests/test_helper.tcl "${@}"

@@ -8,7 +8,7 @@ done

if [ -z $TCLSH ]
then
    echo "You need tcl 8.5 or newer in order to run the Redis Cluster test"
    echo "You need tcl 8.5 or newer in order to run the KeyDB Cluster test"
    exit 1
fi
$TCLSH tests/cluster/run.tcl $*

@@ -9,7 +9,7 @@ done

if [ -z $TCLSH ]
then
    echo "You need tcl 8.5 or newer in order to run the Redis ModuleApi test"
    echo "You need tcl 8.5 or newer in order to run the KeyDB ModuleApi test"
    exit 1
fi

@@ -8,7 +8,7 @@ done

if [ -z $TCLSH ]
then
    echo "You need tcl 8.5 or newer in order to run the Redis Sentinel test"
    echo "You need tcl 8.5 or newer in order to run the KeyDB Sentinel test"
    exit 1
fi
$TCLSH tests/sentinel/run.tcl $*
@ -20,12 +20,12 @@
|
||||
# The port that this sentinel instance will run on
|
||||
port 26379
|
||||
|
||||
# By default Redis Sentinel does not run as a daemon. Use 'yes' if you need it.
|
||||
# Note that Redis will write a pid file in /var/run/keydb-sentinel.pid when
|
||||
# By default KeyDB Sentinel does not run as a daemon. Use 'yes' if you need it.
|
||||
# Note that KeyDB will write a pid file in /var/run/keydb-sentinel.pid when
|
||||
# daemonized.
|
||||
daemonize no
|
||||
|
||||
# When running daemonized, Redis Sentinel writes a pid file in
|
||||
# When running daemonized, KeyDB Sentinel writes a pid file in
|
||||
# /var/run/keydb-sentinel.pid by default. You can specify a custom pid file
|
||||
# location here.
|
||||
pidfile /var/run/keydb-sentinel.pid
|
||||
@ -59,7 +59,7 @@ logfile ""
|
||||
|
||||
# dir <working-directory>
|
||||
# Every long running process should have a well-defined working directory.
|
||||
# For Redis Sentinel to chdir to /tmp at startup is the simplest thing
|
||||
# For KeyDB Sentinel to chdir to /tmp at startup is the simplest thing
|
||||
# for the process to don't interfere with administrative tasks such as
|
||||
# unmounting filesystems.
|
||||
dir /tmp
|
||||
@ -86,16 +86,16 @@ sentinel monitor mymaster 127.0.0.1 6379 2
|
||||
# sentinel auth-pass <master-name> <password>
|
||||
#
|
||||
# Set the password to use to authenticate with the master and replicas.
|
||||
# Useful if there is a password set in the Redis instances to monitor.
|
||||
# Useful if there is a password set in the KeyDB instances to monitor.
|
||||
#
|
||||
# Note that the master password is also used for replicas, so it is not
|
||||
# possible to set a different password in masters and replicas instances
|
||||
# if you want to be able to monitor these instances with Sentinel.
|
||||
#
|
||||
# However you can have Redis instances without the authentication enabled
|
||||
# mixed with Redis instances requiring the authentication (as long as the
|
||||
# However you can have KeyDB instances without the authentication enabled
|
||||
# mixed with KeyDB instances requiring the authentication (as long as the
|
||||
# password set is the same for all the instances requiring the password) as
|
||||
# the AUTH command will have no effect in Redis instances with authentication
|
||||
# the AUTH command will have no effect in KeyDB instances with authentication
|
||||
# switched off.
|
||||
#
|
||||
# Example:
|
||||
@ -105,10 +105,10 @@ sentinel monitor mymaster 127.0.0.1 6379 2
|
||||
# sentinel auth-user <master-name> <username>
|
||||
#
|
||||
# This is useful in order to authenticate to instances having ACL capabilities,
|
||||
# that is, running Redis 6.0 or greater. When just auth-pass is provided the
|
||||
# Sentinel instance will authenticate to Redis using the old "AUTH <pass>"
|
||||
# that is, running KeyDB 6.0 or greater. When just auth-pass is provided the
|
||||
# Sentinel instance will authenticate to KeyDB using the old "AUTH <pass>"
|
||||
# method. When also an username is provided, it will use "AUTH <user> <pass>".
|
||||
# In the Redis servers side, the ACL to provide just minimal access to
|
||||
# In the KeyDB servers side, the ACL to provide just minimal access to
|
||||
# Sentinel instances, should be configured along the following lines:
|
||||
#
|
||||
# user sentinel-user >somepassword +client +subscribe +publish \
|
||||
@ -125,7 +125,7 @@ sentinel monitor mymaster 127.0.0.1 6379 2
|
||||
sentinel down-after-milliseconds mymaster 30000

# IMPORTANT NOTE: starting with KeyDB 6.2 ACL capability is supported for
# Sentinel mode, please refer to the KeyDB website https://redis.io/topics/acl
# Sentinel mode, please refer to the Redis website https://redis.io/topics/acl
# for more details.

# Sentinel's ACL users are defined in the following format:
@ -137,8 +137,8 @@ sentinel down-after-milliseconds mymaster 30000
# user worker +@admin +@connection ~* on >ffa9203c493aa99
#
# For more information about ACL configuration please refer to the Redis
# website at https://redis.io/topics/acl and redis server configuration
# template redis.conf.
# website at https://redis.io/topics/acl and KeyDB server configuration
# template keydb.conf.

# ACL LOG
#
@ -156,9 +156,9 @@ acllog-max-len 128
# ACL file, the server will refuse to start.
#
# The format of the external ACL user file is exactly the same as the
# format that is used inside redis.conf to describe users.
# format that is used inside keydb.conf to describe users.
#
# aclfile /etc/redis/sentinel-users.acl
# aclfile /etc/keydb/sentinel-users.acl

# requirepass <password>
#
@ -168,7 +168,7 @@ acllog-max-len 128
# group with the same "requirepass" password. Check the following documentation
# for more info: https://redis.io/topics/sentinel
#
# IMPORTANT NOTE: starting with Redis 6.2 "requirepass" is a compatibility
# IMPORTANT NOTE: starting with KeyDB 6.2 "requirepass" is a compatibility
# layer on top of the ACL system. The option effect will be just setting
# the password for the default user. Clients will still authenticate using
# AUTH <password> as usually, or more explicitly with AUTH default <password>
@ -251,7 +251,7 @@ sentinel failover-timeout mymaster 180000
# generated in the WARNING level (for instance -sdown, -odown, and so forth).
# This script should notify the system administrator via email, SMS, or any
# other messaging system, that there is something wrong with the monitored
# Redis systems.
# KeyDB systems.
#
# The script is called with just two arguments: the first is the event type
# and the second the event description.
@ -261,7 +261,7 @@ sentinel failover-timeout mymaster 180000
#
# Example:
#
# sentinel notification-script mymaster /var/redis/notify.sh
# sentinel notification-script mymaster /var/keydb/notify.sh

# CLIENTS RECONFIGURATION SCRIPT
#
@ -286,7 +286,7 @@ sentinel failover-timeout mymaster 180000
#
# Example:
#
# sentinel client-reconfig-script mymaster /var/redis/reconfig.sh
# sentinel client-reconfig-script mymaster /var/keydb/reconfig.sh

# SECURITY
#
@ -297,11 +297,11 @@ sentinel failover-timeout mymaster 180000

sentinel deny-scripts-reconfig yes

# REDIS COMMANDS RENAMING
# KEYDB COMMANDS RENAMING
#
# Sometimes the Redis server has certain commands, that are needed for Sentinel
# Sometimes the KeyDB server has certain commands, that are needed for Sentinel
# to work correctly, renamed to unguessable strings. This is often the case
# of CONFIG and SLAVEOF in the context of providers that provide Redis as
# of CONFIG and SLAVEOF in the context of providers that provide KeyDB as
# a service, and don't want the customers to reconfigure the instances outside
# of the administration console.
#
@ -325,7 +325,7 @@ sentinel deny-scripts-reconfig yes
# HOSTNAMES SUPPORT
#
# Normally Sentinel uses only IP addresses and requires SENTINEL MONITOR
# to specify an IP address. Also, it requires the Redis replica-announce-ip
# to specify an IP address. Also, it requires the KeyDB replica-announce-ip
# keyword to specify only IP addresses.
#
# You may enable hostnames support by enabling resolve-hostnames. Note
47
src/Makefile
@ -3,11 +3,11 @@
# This file is released under the BSD license, see the COPYING file
#
# The Makefile composes the final FINAL_CFLAGS and FINAL_LDFLAGS using
# what is needed for Redis plus the standard CFLAGS and LDFLAGS passed.
# what is needed for KeyDB plus the standard CFLAGS and LDFLAGS passed.
# However when building the dependencies (Jemalloc, Lua, Hiredis, ...)
# CFLAGS and LDFLAGS are propagated to the dependencies, so to pass
# flags only to be used when compiling / linking Redis itself REDIS_CFLAGS
# and REDIS_LDFLAGS are used instead (this is the case of 'make gcov').
# flags only to be used when compiling / linking KeyDB itself KEYDB_CFLAGS
# and KEYDB_LDFLAGS are used instead (this is the case of 'make gcov').
#
# Dependencies are stored in the Makefile.dep file. To rebuild this file
# Just use 'make dep', but this is only needed by developers.
@ -29,7 +29,7 @@ ifneq (,$(findstring FreeBSD,$(uname_S)))
STD+=-Wno-c11-extensions
endif
endif
WARN=-Wall -W -Wno-missing-field-initializers
WARN=-Wall -W -Wno-missing-field-initializers -Wno-address-of-packed-member -Wno-atomic-alignment
OPT=$(OPTIMIZATION)

# Detect if the compiler supports C11 _Atomic
@ -89,7 +89,7 @@ ifeq ($(COMPILER_NAME),clang)
LDFLAGS+= -latomic
endif

# To get ARM stack traces if Redis crashes we need a special C flag.
# To get ARM stack traces if KeyDB crashes we need a special C flag.
ifneq (,$(filter aarch64 armv,$(uname_M)))
CFLAGS+=-funwind-tables
CXXFLAGS+=-funwind-tables
@ -131,9 +131,9 @@ endif
# Override default settings if possible
-include .make-settings

FINAL_CFLAGS=$(STD) $(WARN) $(OPT) $(DEBUG) $(CFLAGS) $(REDIS_CFLAGS)
FINAL_CXXFLAGS=$(CXX_STD) $(WARN) $(OPT) $(DEBUG) $(CXXFLAGS) $(REDIS_CFLAGS)
FINAL_LDFLAGS=$(LDFLAGS) $(REDIS_LDFLAGS) $(DEBUG)
FINAL_CFLAGS=$(STD) $(WARN) $(OPT) $(DEBUG) $(CFLAGS) $(KEYDB_CFLAGS) $(REDIS_CFLAGS)
FINAL_CXXFLAGS=$(CXX_STD) $(WARN) $(OPT) $(DEBUG) $(CXXFLAGS) $(KEYDB_CFLAGS) $(REDIS_CFLAGS)
FINAL_LDFLAGS=$(LDFLAGS) $(KEYDB_LDFLAGS) $(DEBUG)
FINAL_LIBS+=-lm -lz -latomic -L$(LICENSE_LIB_DIR) -lkey -lcrypto -lbz2 -lzstd -llz4 -lsnappy
DEBUG=-g -ggdb

@ -273,6 +273,7 @@ endif
ifeq ($(BUILD_WITH_SYSTEMD),yes)
FINAL_LIBS+=$(LIBSYSTEMD_LIBS)
FINAL_CFLAGS+= -DHAVE_LIBSYSTEMD
FINAL_CXXFLAGS+= -DHAVE_LIBSYSTEMD
endif

ifeq ($(MALLOC),tcmalloc)
@ -331,6 +332,14 @@ else
endef
endif

# Alpine OS doesn't have support for the execinfo backtrace library we use for debug, so we provide an alternate implementation using libunwind.
OS := $(shell cat /etc/os-release | grep ID= | head -n 1 | cut -d'=' -f2)
ifeq ($(OS),alpine)
FINAL_CXXFLAGS+=-DUNW_LOCAL_ONLY
FINAL_LIBS += -lunwind
endif


REDIS_CC=$(QUIET_CC)$(CC) $(FINAL_CFLAGS)
REDIS_CXX=$(QUIET_CC)$(CXX) $(FINAL_CXXFLAGS)
KEYDB_AS=$(QUIET_CC) as --64 -g
@ -360,8 +369,10 @@ REDIS_BENCHMARK_NAME=keydb-benchmark$(PROG_SUFFIX)
REDIS_BENCHMARK_OBJ=ae.o anet.o redis-benchmark.o adlist.o dict.o zmalloc.o release.o crcspeed.o crc64.o siphash.o redis-benchmark.o storage-lite.o fastlock.o new.o monotonic.o cli_common.o mt19937-64.o $(ASM_OBJ)
REDIS_CHECK_RDB_NAME=keydb-check-rdb$(PROG_SUFFIX)
REDIS_CHECK_AOF_NAME=keydb-check-aof$(PROG_SUFFIX)
KEYDB_DIAGNOSTIC_NAME=keydb-diagnostic-tool$(PROG_SUFFIX)
KEYDB_DIAGNOSTIC_OBJ=ae.o anet.o keydb-diagnostic-tool.o adlist.o dict.o zmalloc.o release.o crcspeed.o crc64.o siphash.o keydb-diagnostic-tool.o storage-lite.o fastlock.o new.o monotonic.o cli_common.o mt19937-64.o $(ASM_OBJ)

all: $(REDIS_SERVER_NAME) $(REDIS_SENTINEL_NAME) $(REDIS_CLI_NAME) $(REDIS_BENCHMARK_NAME) $(REDIS_CHECK_RDB_NAME) $(REDIS_CHECK_AOF_NAME)
all: $(REDIS_SERVER_NAME) $(REDIS_SENTINEL_NAME) $(REDIS_CLI_NAME) $(REDIS_BENCHMARK_NAME) $(REDIS_CHECK_RDB_NAME) $(REDIS_CHECK_AOF_NAME) $(KEYDB_DIAGNOSTIC_NAME)
@echo ""
@echo "Hint: It's a good idea to run 'make test' ;)"
@echo ""
@ -385,9 +396,9 @@ persist-settings: distclean
echo CFLAGS=$(CFLAGS) >> .make-settings
echo CXXFLAGS=$(CXXFLAGS) >> .make-settings
echo LDFLAGS=$(LDFLAGS) >> .make-settings
echo REDIS_CFLAGS=$(REDIS_CFLAGS) >> .make-settings
echo REDIS_CXXFLAGS=$(REDIS_CXXFLAGS) >> .make-settings
echo REDIS_LDFLAGS=$(REDIS_LDFLAGS) >> .make-settings
echo KEYDB_CFLAGS=$(KEYDB_CFLAGS) >> .make-settings
echo KEYDB_CXXFLAGS=$(KEYDB_CXXFLAGS) >> .make-settings
echo KEYDB_LDFLAGS=$(KEYDB_LDFLAGS) >> .make-settings
echo PREV_FINAL_CFLAGS=$(FINAL_CFLAGS) >> .make-settings
echo PREV_FINAL_CXXFLAGS=$(FINAL_CXXFLAGS) >> .make-settings
echo PREV_FINAL_LDFLAGS=$(FINAL_LDFLAGS) >> .make-settings
@ -433,6 +444,10 @@ $(REDIS_CLI_NAME): $(REDIS_CLI_OBJ)
$(REDIS_BENCHMARK_NAME): $(REDIS_BENCHMARK_OBJ)
$(REDIS_LD) -o $@ $^ ../deps/hiredis/libhiredis.a ../deps/hdr_histogram/hdr_histogram.o $(FINAL_LIBS)

# keydb-diagnostic-tool
$(KEYDB_DIAGNOSTIC_NAME): $(KEYDB_DIAGNOSTIC_OBJ)
$(REDIS_LD) -o $@ $^ ../deps/hiredis/libhiredis.a $(FINAL_LIBS)

DEP = $(REDIS_SERVER_OBJ:%.o=%.d) $(REDIS_CLI_OBJ:%.o=%.d) $(REDIS_BENCHMARK_OBJ:%.o=%.d)
-include $(DEP)

@ -455,7 +470,7 @@ motd_server.o: motd.cpp .make-prerequisites
$(KEYDB_AS) $< -o $@

clean:
rm -rf $(REDIS_SERVER_NAME) $(REDIS_SENTINEL_NAME) $(REDIS_CLI_NAME) $(REDIS_BENCHMARK_NAME) $(REDIS_CHECK_RDB_NAME) $(REDIS_CHECK_AOF_NAME) *.o *.gcda *.gcno *.gcov KeyDB.info lcov-html Makefile.dep
rm -rf $(REDIS_SERVER_NAME) $(REDIS_SENTINEL_NAME) $(REDIS_CLI_NAME) $(REDIS_BENCHMARK_NAME) $(REDIS_CHECK_RDB_NAME) $(REDIS_CHECK_AOF_NAME) $(KEYDB_DIAGNOSTIC_NAME) *.o *.gcda *.gcno *.gcov KeyDB.info lcov-html Makefile.dep
rm -rf storage/*.o
rm -rf keydb-server
rm -f $(DEP)
@ -497,7 +512,7 @@ bench: $(REDIS_BENCHMARK_NAME)
$(MAKE) CXXFLAGS="-m32" CFLAGS="-m32" LDFLAGS="-m32"

gcov:
$(MAKE) REDIS_CXXFLAGS="-fprofile-arcs -ftest-coverage -DCOVERAGE_TEST" REDIS_CFLAGS="-fprofile-arcs -ftest-coverage -DCOVERAGE_TEST" REDIS_LDFLAGS="-fprofile-arcs -ftest-coverage"
$(MAKE) KEYDB_CXXFLAGS="-fprofile-arcs -ftest-coverage -DCOVERAGE_TEST" KEYDB_CFLAGS="-fprofile-arcs -ftest-coverage -DCOVERAGE_TEST" KEYDB_LDFLAGS="-fprofile-arcs -ftest-coverage"

noopt:
$(MAKE) OPTIMIZATION="-O0"
@ -506,7 +521,7 @@ valgrind:
$(MAKE) OPTIMIZATION="-O0" USEASM="false" MALLOC="libc" CFLAGS="-DSANITIZE" CXXFLAGS="-DSANITIZE"

helgrind:
$(MAKE) OPTIMIZATION="-O0" MALLOC="libc" CFLAGS="-D__ATOMIC_VAR_FORCE_SYNC_MACROS" REDIS_CFLAGS="-I/usr/local/include" REDIS_LDFLAGS="-L/usr/local/lib"
$(MAKE) OPTIMIZATION="-O0" MALLOC="libc" CFLAGS="-D__ATOMIC_VAR_FORCE_SYNC_MACROS" KEYDB_CFLAGS="-I/usr/local/include" KEYDB_LDFLAGS="-L/usr/local/lib"

src/help.h:
@../utils/generate-command-help.rb > help.h
@ -521,4 +536,4 @@ install: all
@ln -sf $(REDIS_SERVER_NAME) $(INSTALL_BIN)/$(REDIS_SENTINEL_NAME)

uninstall:
rm -f $(INSTALL_BIN)/{$(REDIS_SERVER_NAME),$(REDIS_BENCHMARK_NAME),$(REDIS_CLI_NAME),$(REDIS_CHECK_RDB_NAME),$(REDIS_CHECK_AOF_NAME),$(REDIS_SENTINEL_NAME)}
rm -f $(INSTALL_BIN)/{$(REDIS_SERVER_NAME),$(REDIS_BENCHMARK_NAME),$(REDIS_CLI_NAME),$(REDIS_CHECK_RDB_NAME),$(REDIS_CHECK_AOF_NAME),$(REDIS_SENTINEL_NAME),$(KEYDB_DIAGNOSTIC_NAME)}
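As a usage sketch of the renamed variables (flag values below are illustrative, not taken from this commit): KEYDB_CFLAGS/KEYDB_LDFLAGS are meant to reach only the KeyDB objects, while plain CFLAGS/LDFLAGS also propagate to the bundled dependencies, and the new diagnostic tool can be built on its own assuming the default empty PROG_SUFFIX:

    make KEYDB_CFLAGS="-DMY_EXTRA_DEFINE" KEYDB_LDFLAGS="-L/opt/extra/lib"
    make keydb-diagnostic-tool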
@ -175,12 +175,11 @@ void queueClientForReprocessing(client *c) {
/* The client may already be into the unblocked list because of a previous
* blocking operation, don't add back it into the list multiple times. */
serverAssert(GlobalLocksAcquired());
fastlock_lock(&c->lock);
std::unique_lock<fastlock> ul(c->lock);
if (!(c->flags & CLIENT_UNBLOCKED)) {
c->flags |= CLIENT_UNBLOCKED;
listAddNodeTail(g_pserver->rgthreadvar[c->iel].unblocked_clients,c);
}
fastlock_unlock(&c->lock);
}

/* Unblock a client calling the right function depending on the kind

@ -561,7 +561,7 @@ void clusterInit(void) {

serverAssert(serverTL == &g_pserver->rgthreadvar[IDX_EVENT_LOOP_MAIN]);
if (createSocketAcceptHandler(&g_pserver->cfd, clusterAcceptHandler) != C_OK) {
serverPanic("Unrecoverable error creating Redis Cluster socket accept handler.");
serverPanic("Unrecoverable error creating KeyDB Cluster socket accept handler.");
}

/* The slots -> keys map is a radix tree. Initialize it here. */

@ -5172,11 +5172,12 @@ void dumpCommand(client *c) {

/* KEYDB.MVCCRESTORE key mvcc expire serialized-value */
void mvccrestoreCommand(client *c) {
long long mvcc, expire;
long long expire;
uint64_t mvcc;
robj *key = c->argv[1], *obj = nullptr;
int type;

if (getLongLongFromObjectOrReply(c, c->argv[2], &mvcc, "Invalid MVCC Tstamp") != C_OK)
if (getUnsignedLongLongFromObjectOrReply(c, c->argv[2], &mvcc, "Invalid MVCC Tstamp") != C_OK)
return;

if (getLongLongFromObjectOrReply(c, c->argv[3], &expire, "Invalid expire") != C_OK)
@ -456,6 +456,9 @@ void connSetThreadAffinity(connection *conn, int cpu) {
{
serverLog(LL_WARNING, "Failed to set socket affinity");
}
#else
(void)conn;
(void)cpu;
#endif
}

@ -1669,9 +1669,8 @@ void copyCommand(client *c) {
}

dbAdd(dst,newkey,newobj);
if (expire != nullptr) {
if (expire != nullptr) setExpire(c, dst, newkey, expire->duplicate());
}
if (expire != nullptr)
setExpire(c, dst, newkey, expire->duplicate());

/* OK! key copied */
signalModifiedKey(c,dst,c->argv[2]);
|
||||
#include <cxxabi.h>
|
||||
#endif /* HAVE_BACKTRACE */
|
||||
|
||||
//UNW_LOCAL_ONLY being set means we use libunwind for backtraces instead of execinfo
|
||||
#ifdef UNW_LOCAL_ONLY
|
||||
#include <libunwind.h>
|
||||
#include <cxxabi.h>
|
||||
#endif
|
||||
|
||||
#ifdef __CYGWIN__
|
||||
#ifndef SA_ONSTACK
|
||||
#define SA_ONSTACK 0x08000000
|
||||
@ -944,7 +950,7 @@ void _serverAssert(const char *estr, const char *file, int line) {
|
||||
serverLog(LL_WARNING,"==> %s:%d '%s' is not true",file,line,estr);
|
||||
|
||||
if (g_pserver->crashlog_enabled) {
|
||||
#ifdef HAVE_BACKTRACE
|
||||
#if defined HAVE_BACKTRACE || defined UNW_LOCAL_ONLY
|
||||
logStackTrace(NULL, 1);
|
||||
#endif
|
||||
printCrashReport();
|
||||
@ -1035,14 +1041,13 @@ void _serverPanic(const char *file, int line, const char *msg, ...) {
|
||||
vsnprintf(fmtmsg,sizeof(fmtmsg),msg,ap);
|
||||
va_end(ap);
|
||||
|
||||
g_fInCrash = true;
|
||||
bugReportStart();
|
||||
serverLog(LL_WARNING,"------------------------------------------------");
|
||||
serverLog(LL_WARNING,"!!! Software Failure. Press left mouse button to continue");
|
||||
serverLog(LL_WARNING,"Guru Meditation: %s #%s:%d",fmtmsg,file,line);
|
||||
|
||||
if (g_pserver->crashlog_enabled) {
|
||||
#ifdef HAVE_BACKTRACE
|
||||
#if defined HAVE_BACKTRACE || defined UNW_LOCAL_ONLY
|
||||
logStackTrace(NULL, 1);
|
||||
#endif
|
||||
printCrashReport();
|
||||
@ -1597,6 +1602,65 @@ void safe_write(int fd, const void *pv, ssize_t cb)
|
||||
} while (offset < cb);
|
||||
}
|
||||
|
||||
#ifdef UNW_LOCAL_ONLY
|
||||
|
||||
/* Logs the stack trace using the libunwind call.
|
||||
* The eip argument is unused as libunwind only gets local context.
|
||||
* The uplevel argument indicates how many of the calling functions to skip.
|
||||
*/
|
||||
void logStackTrace(void * eip, int uplevel) {
|
||||
(void)eip;//UNUSED
|
||||
const char *msg;
|
||||
int fd = openDirectLogFiledes();
|
||||
|
||||
if (fd == -1) return; /* If we can't log there is anything to do. */
|
||||
|
||||
msg = "\n------ STACK TRACE ------\n";
|
||||
if (write(fd,msg,strlen(msg)) == -1) {/* Avoid warning. */};
|
||||
unw_cursor_t cursor;
|
||||
unw_context_t context;
|
||||
|
||||
unw_getcontext(&context);
|
||||
unw_init_local(&cursor, &context);
|
||||
|
||||
/* Write symbols to log file */
|
||||
msg = "\nBacktrace:\n";
|
||||
if (write(fd,msg,strlen(msg)) == -1) {/* Avoid warning. */};
|
||||
|
||||
for (int i = 0; i < uplevel; i++) {
|
||||
unw_step(&cursor);
|
||||
}
|
||||
|
||||
while ( unw_step(&cursor) ) {
|
||||
unw_word_t ip, sp, off;
|
||||
|
||||
unw_get_reg(&cursor, UNW_REG_IP, &ip);
|
||||
unw_get_reg(&cursor, UNW_REG_SP, &sp);
|
||||
|
||||
char symbol[256] = {"<unknown>"};
|
||||
char *name = symbol;
|
||||
|
||||
if ( !unw_get_proc_name(&cursor, symbol, sizeof(symbol), &off) ) {
|
||||
int status;
|
||||
if ( (name = abi::__cxa_demangle(symbol, NULL, NULL, &status)) == 0 )
|
||||
name = symbol;
|
||||
}
|
||||
|
||||
dprintf(fd, "%s(+0x%" PRIxPTR ") [0x%016" PRIxPTR "] sp=0x%016" PRIxPTR "\n",
|
||||
name,
|
||||
static_cast<uintptr_t>(off),
|
||||
static_cast<uintptr_t>(ip),
|
||||
static_cast<uintptr_t>(sp));
|
||||
|
||||
if ( name != symbol )
|
||||
free(name);
|
||||
}
|
||||
}
|
||||
|
||||
#endif /* UNW_LOCAL_ONLY */
|
||||
|
||||
#ifdef HAVE_BACKTRACE

void backtrace_symbols_demangle_fd(void **trace, size_t csym, int fd)
{
char **syms = backtrace_symbols(trace, csym);
@ -1640,8 +1704,6 @@ void backtrace_symbols_demangle_fd(void **trace, size_t csym, int fd)
free(syms);
}

#ifdef HAVE_BACKTRACE

/* Logs the stack trace using the backtrace() call. This function is designed
* to be called from signal handlers safely.
* The eip argument is optional (can take NULL).
@ -1930,6 +1992,9 @@ void sigsegvHandler(int sig, siginfo_t *info, void *secret) {

logRegisters(uc);
#endif
#ifdef UNW_LOCAL_ONLY
logStackTrace(NULL, 1);
#endif

printCrashReport();

@ -2024,6 +2089,8 @@ void watchdogSignalHandler(int sig, siginfo_t *info, void *secret) {
serverLogFromHandler(LL_WARNING,"\n--- WATCHDOG TIMER EXPIRED ---");
#ifdef HAVE_BACKTRACE
logStackTrace(getMcontextEip(uc), 1);
#elif defined UNW_LOCAL_ONLY
logStackTrace(NULL, 1);
#else
serverLogFromHandler(LL_WARNING,"Sorry: no support for backtrace().");
#endif
@ -826,8 +826,8 @@ void expireEntryFat::expireSubKey(const char *szSubkey, long long when)
fFound = true;
}
if (fFound) {
m_vecexpireEntries.erase(itr);
dictDelete(m_dictIndex, szSubkey);
m_vecexpireEntries.erase(itr);
break;
}
++itr;
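The reordering above presumably matters because the subkey being looked up can be owned by the entry that the vector erase destroys; deleting from the index first keeps the lookup key alive. A minimal standalone illustration of that ordering hazard, using plain standard containers rather than KeyDB's actual structures:

    #include <map>
    #include <string>
    #include <vector>

    struct SubExpire { std::string subkey; long long when; };

    int main() {
        std::vector<SubExpire> entries{{"field1", 1000}};
        std::map<std::string, int> index{{"field1", 0}};

        const std::string &key = entries.front().subkey; // refers into the vector element
        index.erase(key);                 // safe: 'key' still points at live storage
        entries.erase(entries.begin());   // now the owning entry can go away
        return 0;
    }

Erasing the vector element first would leave 'key' dangling before the index lookup, which is the shape of bug the swapped lines avoid.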

967
src/keydb-diagnostic-tool.cpp
Normal file
@ -0,0 +1,967 @@
|
||||
/* KeyDB diagnostic utility.
|
||||
*
|
||||
* Copyright (c) 2009-2021, Salvatore Sanfilippo <antirez at gmail dot com>
|
||||
* Copyright (c) 2021, EQ Alpha Technology Ltd. <john at eqalpha dot com>
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright notice,
|
||||
* this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above copyright
|
||||
* notice, this list of conditions and the following disclaimer in the
|
||||
* documentation and/or other materials provided with the distribution.
|
||||
* * Neither the name of Redis nor the names of its contributors may be used
|
||||
* to endorse or promote products derived from this software without
|
||||
* specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
|
||||
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
||||
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
|
||||
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
|
||||
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
|
||||
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
|
||||
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
|
||||
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
|
||||
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
|
||||
* POSSIBILITY OF SUCH DAMAGE.
|
||||
*/
|
||||
|
||||
#include "fmacros.h"
|
||||
|
||||
#include <stdio.h>
|
||||
#include <string.h>
|
||||
#include <stdlib.h>
|
||||
#include <unistd.h>
|
||||
#include <errno.h>
|
||||
#include <time.h>
|
||||
#include <sys/resource.h>
|
||||
#include <sys/time.h>
|
||||
#include <signal.h>
|
||||
#include <assert.h>
|
||||
#include <math.h>
|
||||
#include <pthread.h>
|
||||
#include <deque>
|
||||
extern "C" {
|
||||
#include <sds.h> /* Use hiredis sds. */
|
||||
#include <sdscompat.h>
|
||||
#include "hiredis.h"
|
||||
}
|
||||
#include "ae.h"
|
||||
#include "adlist.h"
|
||||
#include "dict.h"
|
||||
#include "zmalloc.h"
|
||||
#include "storage.h"
|
||||
#include "atomicvar.h"
|
||||
#include "crc16_slottable.h"
|
||||
|
||||
#define UNUSED(V) ((void) V)
|
||||
#define RANDPTR_INITIAL_SIZE 8
|
||||
#define MAX_LATENCY_PRECISION 3
|
||||
#define MAX_THREADS 500
|
||||
#define CLUSTER_SLOTS 16384
|
||||
|
||||
#define CLIENT_GET_EVENTLOOP(c) \
|
||||
(c->thread_id >= 0 ? config.threads[c->thread_id]->el : config.el)
|
||||
|
||||
struct benchmarkThread;
|
||||
struct clusterNode;
|
||||
struct redisConfig;
|
||||
|
||||
int g_fTestMode = false;
|
||||
|
||||
static struct config {
|
||||
aeEventLoop *el;
|
||||
const char *hostip;
|
||||
int hostport;
|
||||
const char *hostsocket;
|
||||
int numclients;
|
||||
int liveclients;
|
||||
int period_ms;
|
||||
int requests;
|
||||
int requests_issued;
|
||||
int requests_finished;
|
||||
int keysize;
|
||||
int datasize;
|
||||
int randomkeys;
|
||||
int randomkeys_keyspacelen;
|
||||
int keepalive;
|
||||
int pipeline;
|
||||
int showerrors;
|
||||
long long start;
|
||||
long long totlatency;
|
||||
long long *latency;
|
||||
const char *title;
|
||||
list *clients;
|
||||
int quiet;
|
||||
int csv;
|
||||
int loop;
|
||||
int idlemode;
|
||||
int dbnum;
|
||||
sds dbnumstr;
|
||||
char *tests;
|
||||
char *auth;
|
||||
const char *user;
|
||||
int precision;
|
||||
int max_threads;
|
||||
struct benchmarkThread **threads;
|
||||
int cluster_mode;
|
||||
int cluster_node_count;
|
||||
struct clusterNode **cluster_nodes;
|
||||
struct redisConfig *redis_config;
|
||||
int is_fetching_slots;
|
||||
int is_updating_slots;
|
||||
int slots_last_update;
|
||||
int enable_tracking;
|
||||
/* Thread mutexes to be used as fallbacks by atomicvar.h */
|
||||
pthread_mutex_t requests_issued_mutex;
|
||||
pthread_mutex_t requests_finished_mutex;
|
||||
pthread_mutex_t liveclients_mutex;
|
||||
pthread_mutex_t is_fetching_slots_mutex;
|
||||
pthread_mutex_t is_updating_slots_mutex;
|
||||
pthread_mutex_t updating_slots_mutex;
|
||||
pthread_mutex_t slots_last_update_mutex;
|
||||
} config;
|
||||
|
||||
typedef struct _client {
|
||||
redisContext *context;
|
||||
sds obuf;
|
||||
char **randptr; /* Pointers to :rand: strings inside the command buf */
|
||||
size_t randlen; /* Number of pointers in client->randptr */
|
||||
size_t randfree; /* Number of unused pointers in client->randptr */
|
||||
char **stagptr; /* Pointers to slot hashtags (cluster mode only) */
|
||||
size_t staglen; /* Number of pointers in client->stagptr */
|
||||
size_t stagfree; /* Number of unused pointers in client->stagptr */
|
||||
size_t written; /* Bytes of 'obuf' already written */
|
||||
long long start; /* Start time of a request */
|
||||
long long latency; /* Request latency */
|
||||
int pending; /* Number of pending requests (replies to consume) */
|
||||
int prefix_pending; /* If non-zero, number of pending prefix commands. Commands
|
||||
such as auth and select are prefixed to the pipeline of
|
||||
benchmark commands and discarded after the first send. */
|
||||
int prefixlen; /* Size in bytes of the pending prefix commands */
|
||||
int thread_id;
|
||||
struct clusterNode *cluster_node;
|
||||
int slots_last_update;
|
||||
redisReply *lastReply;
|
||||
} *client;
|
||||
|
||||
/* Threads. */
|
||||
|
||||
typedef struct benchmarkThread {
|
||||
int index;
|
||||
pthread_t thread;
|
||||
aeEventLoop *el;
|
||||
} benchmarkThread;
|
||||
|
||||
/* Cluster. */
|
||||
typedef struct clusterNode {
|
||||
char *ip;
|
||||
int port;
|
||||
sds name;
|
||||
int flags;
|
||||
sds replicate; /* Master ID if node is a replica */
|
||||
int *slots;
|
||||
int slots_count;
|
||||
int current_slot_index;
|
||||
int *updated_slots; /* Used by updateClusterSlotsConfiguration */
|
||||
int updated_slots_count; /* Used by updateClusterSlotsConfiguration */
|
||||
int replicas_count;
|
||||
sds *migrating; /* An array of sds where even strings are slots and odd
|
||||
* strings are the destination node IDs. */
|
||||
sds *importing; /* An array of sds where even strings are slots and odd
|
||||
* strings are the source node IDs. */
|
||||
int migrating_count; /* Length of the migrating array (migrating slots*2) */
|
||||
int importing_count; /* Length of the importing array (importing slots*2) */
|
||||
struct redisConfig *redis_config;
|
||||
} clusterNode;
|
||||
|
||||
typedef struct redisConfig {
|
||||
sds save;
|
||||
sds appendonly;
|
||||
} redisConfig;
|
||||
|
||||
int g_fInCrash = false;
|
||||
|
||||
/* Prototypes */
|
||||
static void writeHandler(aeEventLoop *el, int fd, void *privdata, int mask);
|
||||
static benchmarkThread *createBenchmarkThread(int index);
|
||||
static void freeBenchmarkThread(benchmarkThread *thread);
|
||||
static void freeBenchmarkThreads();
|
||||
static redisContext *getRedisContext(const char *ip, int port,
|
||||
const char *hostsocket);
|
||||
|
||||
/* Implementation */
|
||||
static long long ustime(void) {
|
||||
struct timeval tv;
|
||||
long long ust;
|
||||
|
||||
gettimeofday(&tv, NULL);
|
||||
ust = ((long)tv.tv_sec)*1000000;
|
||||
ust += tv.tv_usec;
|
||||
return ust;
|
||||
}
|
||||
|
||||
/* _serverAssert is needed by dict */
|
||||
extern "C" void _serverAssert(const char *estr, const char *file, int line) {
|
||||
fprintf(stderr, "=== ASSERTION FAILED ===");
|
||||
fprintf(stderr, "==> %s:%d '%s' is not true",file,line,estr);
|
||||
*((char*)-1) = 'x';
|
||||
}
|
||||
|
||||
static redisContext *getRedisContext(const char *ip, int port,
|
||||
const char *hostsocket)
|
||||
{
|
||||
redisContext *ctx = NULL;
|
||||
redisReply *reply = NULL;
|
||||
if (hostsocket == NULL)
|
||||
ctx = redisConnect(ip, port);
|
||||
else
|
||||
ctx = redisConnectUnix(hostsocket);
|
||||
if (ctx == NULL || ctx->err) {
|
||||
fprintf(stderr,"Could not connect to Redis at ");
|
||||
const char *err = (ctx != NULL ? ctx->errstr : "");
|
||||
if (hostsocket == NULL)
|
||||
fprintf(stderr,"%s:%d: %s\n",ip,port,err);
|
||||
else
|
||||
fprintf(stderr,"%s: %s\n",hostsocket,err);
|
||||
goto cleanup;
|
||||
}
|
||||
if (config.auth == NULL)
|
||||
return ctx;
|
||||
if (config.user == NULL)
|
||||
reply = (redisReply*)redisCommand(ctx,"AUTH %s", config.auth);
|
||||
else
|
||||
reply = (redisReply*)redisCommand(ctx,"AUTH %s %s", config.user, config.auth);
|
||||
if (reply != NULL) {
|
||||
if (reply->type == REDIS_REPLY_ERROR) {
|
||||
if (hostsocket == NULL)
|
||||
fprintf(stderr, "Node %s:%d replied with error:\n%s\n", ip, port, reply->str);
|
||||
else
|
||||
fprintf(stderr, "Node %s replied with error:\n%s\n", hostsocket, reply->str);
|
||||
goto cleanup;
|
||||
}
|
||||
freeReplyObject(reply);
|
||||
return ctx;
|
||||
}
|
||||
fprintf(stderr, "ERROR: failed to fetch reply from ");
|
||||
if (hostsocket == NULL)
|
||||
fprintf(stderr, "%s:%d\n", ip, port);
|
||||
else
|
||||
fprintf(stderr, "%s\n", hostsocket);
|
||||
cleanup:
|
||||
freeReplyObject(reply);
|
||||
redisFree(ctx);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void freeClient(client c) {
|
||||
aeEventLoop *el = CLIENT_GET_EVENTLOOP(c);
|
||||
listNode *ln;
|
||||
aeDeleteFileEvent(el,c->context->fd,AE_WRITABLE);
|
||||
aeDeleteFileEvent(el,c->context->fd,AE_READABLE);
|
||||
if (c->thread_id >= 0) {
|
||||
int requests_finished = 0;
|
||||
atomicGet(config.requests_finished, requests_finished);
|
||||
if (requests_finished >= config.requests) {
|
||||
aeStop(el);
|
||||
}
|
||||
}
|
||||
redisFree(c->context);
|
||||
sdsfree(c->obuf);
|
||||
zfree(c->randptr);
|
||||
zfree(c->stagptr);
|
||||
zfree(c);
|
||||
if (config.max_threads) pthread_mutex_lock(&(config.liveclients_mutex));
|
||||
config.liveclients--;
|
||||
ln = listSearchKey(config.clients,c);
|
||||
assert(ln != NULL);
|
||||
listDelNode(config.clients,ln);
|
||||
if (config.max_threads) pthread_mutex_unlock(&(config.liveclients_mutex));
|
||||
}
|
||||
|
||||
static void freeAllClients(void) {
|
||||
listNode *ln = config.clients->head, *next;
|
||||
|
||||
while(ln) {
|
||||
next = ln->next;
|
||||
freeClient((client)ln->value);
|
||||
ln = next;
|
||||
}
|
||||
}
|
||||
|
||||
static void resetClient(client c) {
|
||||
aeEventLoop *el = CLIENT_GET_EVENTLOOP(c);
|
||||
aeDeleteFileEvent(el,c->context->fd,AE_WRITABLE);
|
||||
aeDeleteFileEvent(el,c->context->fd,AE_READABLE);
|
||||
aeCreateFileEvent(el,c->context->fd,AE_WRITABLE,writeHandler,c);
|
||||
c->written = 0;
|
||||
c->pending = config.pipeline;
|
||||
}
|
||||
|
||||
static void randomizeClientKey(client c) {
|
||||
size_t i;
|
||||
|
||||
for (i = 0; i < c->randlen; i++) {
|
||||
char *p = c->randptr[i]+11;
|
||||
size_t r = 0;
|
||||
if (config.randomkeys_keyspacelen != 0)
|
||||
r = random() % config.randomkeys_keyspacelen;
|
||||
size_t j;
|
||||
|
||||
for (j = 0; j < 12; j++) {
|
||||
*p = '0'+r%10;
|
||||
r/=10;
|
||||
p--;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void readHandler(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
client c = (client)privdata;
|
||||
void *reply = NULL;
|
||||
UNUSED(el);
|
||||
UNUSED(fd);
|
||||
UNUSED(mask);
|
||||
|
||||
/* Calculate latency only for the first read event. This means that the
|
||||
* server already sent the reply and we need to parse it. Parsing overhead
|
||||
* is not part of the latency, so calculate it only once, here. */
|
||||
if (c->latency < 0) c->latency = ustime()-(c->start);
|
||||
|
||||
if (redisBufferRead(c->context) != REDIS_OK) {
|
||||
fprintf(stderr,"Error: %s\n",c->context->errstr);
|
||||
exit(1);
|
||||
} else {
|
||||
while(c->pending) {
|
||||
if (redisGetReply(c->context,&reply) != REDIS_OK) {
|
||||
fprintf(stderr,"Error: %s\n",c->context->errstr);
|
||||
exit(1);
|
||||
}
|
||||
if (reply != NULL) {
|
||||
if (reply == (void*)REDIS_REPLY_ERROR) {
|
||||
fprintf(stderr,"Unexpected error reply, exiting...\n");
|
||||
exit(1);
|
||||
}
|
||||
redisReply *r = (redisReply*)reply;
|
||||
int is_err = (r->type == REDIS_REPLY_ERROR);
|
||||
|
||||
if (is_err && config.showerrors) {
|
||||
/* TODO: static lasterr_time not thread-safe */
|
||||
static time_t lasterr_time = 0;
|
||||
time_t now = time(NULL);
|
||||
if (lasterr_time != now) {
|
||||
lasterr_time = now;
|
||||
if (c->cluster_node) {
|
||||
printf("Error from server %s:%d: %s\n",
|
||||
c->cluster_node->ip,
|
||||
c->cluster_node->port,
|
||||
r->str);
|
||||
} else printf("Error from server: %s\n", r->str);
|
||||
}
|
||||
}
|
||||
|
||||
freeReplyObject(reply);
|
||||
/* This is an OK for prefix commands such as auth and select.*/
|
||||
if (c->prefix_pending > 0) {
|
||||
c->prefix_pending--;
|
||||
c->pending--;
|
||||
/* Discard prefix commands on first response.*/
|
||||
if (c->prefixlen > 0) {
|
||||
size_t j;
|
||||
sdsrange(c->obuf, c->prefixlen, -1);
|
||||
/* We also need to fix the pointers to the strings
|
||||
* we need to randomize. */
|
||||
for (j = 0; j < c->randlen; j++)
|
||||
c->randptr[j] -= c->prefixlen;
|
||||
c->prefixlen = 0;
|
||||
}
|
||||
continue;
|
||||
}
|
||||
int requests_finished = 0;
|
||||
atomicGetIncr(config.requests_finished, requests_finished, 1);
|
||||
if (requests_finished < config.requests)
|
||||
config.latency[requests_finished] = c->latency;
|
||||
c->pending--;
|
||||
if (c->pending == 0) {
|
||||
resetClient(c);
|
||||
break;
|
||||
}
|
||||
} else {
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void writeHandler(aeEventLoop *el, int fd, void *privdata, int mask) {
|
||||
client c = (client)privdata;
|
||||
UNUSED(el);
|
||||
UNUSED(fd);
|
||||
UNUSED(mask);
|
||||
|
||||
/* Initialize request when nothing was written. */
|
||||
if (c->written == 0) {
|
||||
/* Really initialize: randomize keys and set start time. */
|
||||
if (config.randomkeys) randomizeClientKey(c);
|
||||
atomicGet(config.slots_last_update, c->slots_last_update);
|
||||
c->start = ustime();
|
||||
c->latency = -1;
|
||||
}
|
||||
if (sdslen(c->obuf) > c->written) {
|
||||
void *ptr = c->obuf+c->written;
|
||||
ssize_t nwritten = write(c->context->fd,ptr,sdslen(c->obuf)-c->written);
|
||||
if (nwritten == -1) {
|
||||
if (errno != EPIPE)
|
||||
fprintf(stderr, "Writing to socket: %s\n", strerror(errno));
|
||||
freeClient(c);
|
||||
return;
|
||||
}
|
||||
c->written += nwritten;
|
||||
if (sdslen(c->obuf) == c->written) {
|
||||
aeDeleteFileEvent(el,c->context->fd,AE_WRITABLE);
|
||||
aeCreateFileEvent(el,c->context->fd,AE_READABLE,readHandler,c);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/* Create a benchmark client, configured to send the command passed as 'cmd' of
|
||||
* 'len' bytes.
|
||||
*
|
||||
* The command is copied N times in the client output buffer (that is reused
|
||||
* again and again to send the request to the server) accordingly to the configured
|
||||
* pipeline size.
|
||||
*
|
||||
* Also an initial SELECT command is prepended in order to make sure the right
|
||||
* database is selected, if needed. The initial SELECT will be discarded as soon
|
||||
* as the first reply is received.
|
||||
*
|
||||
* To create a client from scratch, the 'from' pointer is set to NULL. If instead
|
||||
* we want to create a client using another client as reference, the 'from' pointer
|
||||
* points to the client to use as reference. In such a case the following
|
||||
* information is take from the 'from' client:
|
||||
*
|
||||
* 1) The command line to use.
|
||||
* 2) The offsets of the __rand_int__ elements inside the command line, used
|
||||
* for arguments randomization.
|
||||
*
|
||||
* Even when cloning another client, prefix commands are applied if needed.*/
|
||||
static client createClient(const char *cmd, size_t len, client from, int thread_id) {
|
||||
int j;
|
||||
int is_cluster_client = (config.cluster_mode && thread_id >= 0);
|
||||
client c = (client)zmalloc(sizeof(struct _client), MALLOC_LOCAL);
|
||||
|
||||
const char *ip = NULL;
|
||||
int port = 0;
|
||||
c->cluster_node = NULL;
|
||||
if (config.hostsocket == NULL || is_cluster_client) {
|
||||
if (!is_cluster_client) {
|
||||
ip = config.hostip;
|
||||
port = config.hostport;
|
||||
} else {
|
||||
int node_idx = 0;
|
||||
if (config.max_threads < config.cluster_node_count)
|
||||
node_idx = config.liveclients % config.cluster_node_count;
|
||||
else
|
||||
node_idx = thread_id % config.cluster_node_count;
|
||||
clusterNode *node = config.cluster_nodes[node_idx];
|
||||
assert(node != NULL);
|
||||
ip = (const char *) node->ip;
|
||||
port = node->port;
|
||||
c->cluster_node = node;
|
||||
}
|
||||
c->context = redisConnectNonBlock(ip,port);
|
||||
} else {
|
||||
c->context = redisConnectUnixNonBlock(config.hostsocket);
|
||||
}
|
||||
if (c->context->err) {
|
||||
fprintf(stderr,"Could not connect to Redis at ");
|
||||
if (config.hostsocket == NULL || is_cluster_client)
|
||||
fprintf(stderr,"%s:%d: %s\n",ip,port,c->context->errstr);
|
||||
else
|
||||
fprintf(stderr,"%s: %s\n",config.hostsocket,c->context->errstr);
|
||||
exit(1);
|
||||
}
|
||||
c->thread_id = thread_id;
|
||||
/* Suppress hiredis cleanup of unused buffers for max speed. */
|
||||
c->context->reader->maxbuf = 0;
|
||||
|
||||
/* Build the request buffer:
|
||||
* Queue N requests accordingly to the pipeline size, or simply clone
|
||||
* the example client buffer. */
|
||||
c->obuf = sdsempty();
|
||||
/* Prefix the request buffer with AUTH and/or SELECT commands, if applicable.
|
||||
* These commands are discarded after the first response, so if the client is
|
||||
* reused the commands will not be used again. */
|
||||
c->prefix_pending = 0;
|
||||
if (config.auth) {
|
||||
char *buf = NULL;
|
||||
int len;
|
||||
if (config.user == NULL)
|
||||
len = redisFormatCommand(&buf, "AUTH %s", config.auth);
|
||||
else
|
||||
len = redisFormatCommand(&buf, "AUTH %s %s",
|
||||
config.user, config.auth);
|
||||
c->obuf = sdscatlen(c->obuf, buf, len);
|
||||
free(buf);
|
||||
c->prefix_pending++;
|
||||
}
|
||||
|
||||
if (config.enable_tracking) {
|
||||
char *buf = NULL;
|
||||
int len = redisFormatCommand(&buf, "CLIENT TRACKING on");
|
||||
c->obuf = sdscatlen(c->obuf, buf, len);
|
||||
free(buf);
|
||||
c->prefix_pending++;
|
||||
}
|
||||
|
||||
/* If a DB number different than zero is selected, prefix our request
|
||||
* buffer with the SELECT command, that will be discarded the first
|
||||
* time the replies are received, so if the client is reused the
|
||||
* SELECT command will not be used again. */
|
||||
if (config.dbnum != 0 && !is_cluster_client) {
|
||||
c->obuf = sdscatprintf(c->obuf,"*2\r\n$6\r\nSELECT\r\n$%d\r\n%s\r\n",
|
||||
(int)sdslen(config.dbnumstr),config.dbnumstr);
|
||||
c->prefix_pending++;
|
||||
}
|
||||
c->prefixlen = sdslen(c->obuf);
|
||||
/* Append the request itself. */
|
||||
if (from) {
|
||||
c->obuf = sdscatlen(c->obuf,
|
||||
from->obuf+from->prefixlen,
|
||||
sdslen(from->obuf)-from->prefixlen);
|
||||
} else {
|
||||
for (j = 0; j < config.pipeline; j++)
|
||||
c->obuf = sdscatlen(c->obuf,cmd,len);
|
||||
}
|
||||
|
||||
c->written = 0;
|
||||
c->pending = config.pipeline+c->prefix_pending;
|
||||
c->randptr = NULL;
|
||||
c->randlen = 0;
|
||||
c->stagptr = NULL;
|
||||
c->staglen = 0;
|
||||
|
||||
/* Find substrings in the output buffer that need to be randomized. */
|
||||
if (config.randomkeys) {
|
||||
if (from) {
|
||||
c->randlen = from->randlen;
|
||||
c->randfree = 0;
|
||||
c->randptr = (char**)zmalloc(sizeof(char*)*c->randlen, MALLOC_LOCAL);
|
||||
/* copy the offsets. */
|
||||
for (j = 0; j < (int)c->randlen; j++) {
|
||||
c->randptr[j] = c->obuf + (from->randptr[j]-from->obuf);
|
||||
/* Adjust for the different select prefix length. */
|
||||
c->randptr[j] += c->prefixlen - from->prefixlen;
|
||||
}
|
||||
} else {
|
||||
char *p = c->obuf;
|
||||
|
||||
c->randlen = 0;
|
||||
c->randfree = RANDPTR_INITIAL_SIZE;
|
||||
c->randptr = (char**)zmalloc(sizeof(char*)*c->randfree, MALLOC_LOCAL);
|
||||
while ((p = strstr(p,"__rand_int__")) != NULL) {
|
||||
if (c->randfree == 0) {
|
||||
c->randptr = (char**)zrealloc(c->randptr,sizeof(char*)*c->randlen*2, MALLOC_LOCAL);
|
||||
c->randfree += c->randlen;
|
||||
}
|
||||
c->randptr[c->randlen++] = p;
|
||||
c->randfree--;
|
||||
p += 12; /* 12 is strlen("__rand_int__"). */
|
||||
}
|
||||
}
|
||||
}
|
||||
/* If cluster mode is enabled, set slot hashtags pointers. */
|
||||
if (config.cluster_mode) {
|
||||
if (from) {
|
||||
c->staglen = from->staglen;
|
||||
c->stagfree = 0;
|
||||
c->stagptr = (char**)zmalloc(sizeof(char*)*c->staglen, MALLOC_LOCAL);
|
||||
/* copy the offsets. */
|
||||
for (j = 0; j < (int)c->staglen; j++) {
|
||||
c->stagptr[j] = c->obuf + (from->stagptr[j]-from->obuf);
|
||||
/* Adjust for the different select prefix length. */
|
||||
c->stagptr[j] += c->prefixlen - from->prefixlen;
|
||||
}
|
||||
} else {
|
||||
char *p = c->obuf;
|
||||
|
||||
c->staglen = 0;
|
||||
c->stagfree = RANDPTR_INITIAL_SIZE;
|
||||
c->stagptr = (char**)zmalloc(sizeof(char*)*c->stagfree, MALLOC_LOCAL);
|
||||
while ((p = strstr(p,"{tag}")) != NULL) {
|
||||
if (c->stagfree == 0) {
|
||||
c->stagptr = (char**)zrealloc(c->stagptr,
|
||||
sizeof(char*) * c->staglen*2, MALLOC_LOCAL);
|
||||
c->stagfree += c->staglen;
|
||||
}
|
||||
c->stagptr[c->staglen++] = p;
|
||||
c->stagfree--;
|
||||
p += 5; /* 5 is strlen("{tag}"). */
|
||||
}
|
||||
}
|
||||
}
|
||||
aeEventLoop *el = NULL;
|
||||
if (thread_id < 0) el = config.el;
|
||||
else {
|
||||
benchmarkThread *thread = config.threads[thread_id];
|
||||
el = thread->el;
|
||||
}
|
||||
if (config.idlemode == 0)
|
||||
aeCreateFileEvent(el,c->context->fd,AE_WRITABLE,writeHandler,c);
|
||||
listAddNodeTail(config.clients,c);
|
||||
atomicIncr(config.liveclients, 1);
|
||||
atomicGet(config.slots_last_update, c->slots_last_update);
|
||||
return c;
|
||||
}
|
||||
|
||||
static void initBenchmarkThreads() {
|
||||
int i;
|
||||
if (config.threads) freeBenchmarkThreads();
|
||||
config.threads = (benchmarkThread**)zmalloc(config.max_threads * sizeof(benchmarkThread*), MALLOC_LOCAL);
|
||||
for (i = 0; i < config.max_threads; i++) {
|
||||
benchmarkThread *thread = createBenchmarkThread(i);
|
||||
config.threads[i] = thread;
|
||||
}
|
||||
}
|
||||
|
||||
/* Thread functions. */
|
||||
|
||||
static benchmarkThread *createBenchmarkThread(int index) {
|
||||
benchmarkThread *thread = (benchmarkThread*)zmalloc(sizeof(*thread), MALLOC_LOCAL);
|
||||
if (thread == NULL) return NULL;
|
||||
thread->index = index;
|
||||
thread->el = aeCreateEventLoop(1024*10);
|
||||
return thread;
|
||||
}
|
||||
|
||||
static void freeBenchmarkThread(benchmarkThread *thread) {
|
||||
if (thread->el) aeDeleteEventLoop(thread->el);
|
||||
zfree(thread);
|
||||
}
|
||||
|
||||
static void freeBenchmarkThreads() {
|
||||
int i = 0;
|
||||
for (; i < config.max_threads; i++) {
|
||||
benchmarkThread *thread = config.threads[i];
|
||||
if (thread) freeBenchmarkThread(thread);
|
||||
}
|
||||
zfree(config.threads);
|
||||
config.threads = NULL;
|
||||
}
|
||||
|
||||
static void *execBenchmarkThread(void *ptr) {
|
||||
benchmarkThread *thread = (benchmarkThread *) ptr;
|
||||
aeMain(thread->el);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
void initConfigDefaults() {
|
||||
config.numclients = 50;
|
||||
config.requests = 100000;
|
||||
config.liveclients = 0;
|
||||
config.el = aeCreateEventLoop(1024*10);
|
||||
config.keepalive = 1;
|
||||
config.datasize = 3;
|
||||
config.pipeline = 1;
|
||||
config.period_ms = 5000;
|
||||
config.showerrors = 0;
|
||||
config.randomkeys = 0;
|
||||
config.randomkeys_keyspacelen = 0;
|
||||
config.quiet = 0;
|
||||
config.csv = 0;
|
||||
config.loop = 0;
|
||||
config.idlemode = 0;
|
||||
config.latency = NULL;
|
||||
config.clients = listCreate();
|
||||
config.hostip = "127.0.0.1";
|
||||
config.hostport = 6379;
|
||||
config.hostsocket = NULL;
|
||||
config.tests = NULL;
|
||||
config.dbnum = 0;
|
||||
config.auth = NULL;
|
||||
config.precision = 1;
|
||||
config.max_threads = MAX_THREADS;
|
||||
config.threads = NULL;
|
||||
config.cluster_mode = 0;
|
||||
config.cluster_node_count = 0;
|
||||
config.cluster_nodes = NULL;
|
||||
config.redis_config = NULL;
|
||||
config.is_fetching_slots = 0;
|
||||
config.is_updating_slots = 0;
|
||||
config.slots_last_update = 0;
|
||||
config.enable_tracking = 0;
|
||||
}
|
||||
|
||||
/* Returns number of consumed options. */
|
||||
int parseOptions(int argc, const char **argv) {
|
||||
int i;
|
||||
int lastarg;
|
||||
int exit_status = 1;
|
||||
|
||||
for (i = 1; i < argc; i++) {
|
||||
lastarg = (i == (argc-1));
|
||||
|
||||
if (!strcmp(argv[i],"-c") || !strcmp(argv[i],"--clients")) {
|
||||
if (lastarg) goto invalid;
|
||||
config.numclients = atoi(argv[++i]);
|
||||
} else if (!strcmp(argv[i],"--time")) {
|
||||
if (lastarg) goto invalid;
|
||||
config.period_ms = atoi(argv[++i]);
|
||||
if (config.period_ms <= 0) {
|
||||
printf("Warning: Invalid value for thread time. Defaulting to 5000ms.\n");
|
||||
config.period_ms = 5000;
|
||||
}
|
||||
} else if (!strcmp(argv[i],"-h") || !strcmp(argv[i],"--host")) {
|
||||
if (lastarg) goto invalid;
|
||||
config.hostip = strdup(argv[++i]);
|
||||
} else if (!strcmp(argv[i],"-p") || !strcmp(argv[i],"--port")) {
|
||||
if (lastarg) goto invalid;
|
||||
config.hostport = atoi(argv[++i]);
|
||||
} else if (!strcmp(argv[i],"-s")) {
|
||||
if (lastarg) goto invalid;
|
||||
config.hostsocket = strdup(argv[++i]);
|
||||
} else if (!strcmp(argv[i],"--password") ) {
|
||||
if (lastarg) goto invalid;
|
||||
config.auth = strdup(argv[++i]);
|
||||
} else if (!strcmp(argv[i],"--user")) {
|
||||
if (lastarg) goto invalid;
|
||||
config.user = argv[++i];
|
||||
} else if (!strcmp(argv[i],"--dbnum")) {
|
||||
if (lastarg) goto invalid;
|
||||
config.dbnum = atoi(argv[++i]);
|
||||
config.dbnumstr = sdsfromlonglong(config.dbnum);
|
||||
} else if (!strcmp(argv[i],"-t") || !strcmp(argv[i],"--threads")) {
|
||||
if (lastarg) goto invalid;
|
||||
config.max_threads = atoi(argv[++i]);
|
||||
if (config.max_threads > MAX_THREADS) {
|
||||
printf("Warning: Too many threads, limiting threads to %d.\n", MAX_THREADS);
|
||||
config.max_threads = MAX_THREADS;
|
||||
} else if (config.max_threads <= 0) {
|
||||
printf("Warning: Invalid value for max threads. Defaulting to %d.\n", MAX_THREADS);
|
||||
config.max_threads = MAX_THREADS;
|
||||
}
|
||||
} else if (!strcmp(argv[i],"--help")) {
|
||||
exit_status = 0;
|
||||
goto usage;
|
||||
} else {
|
||||
/* Assume the user meant to provide an option when the arg starts
|
||||
* with a dash. We're done otherwise and should use the remainder
|
||||
* as the command and arguments for running the benchmark. */
|
||||
if (argv[i][0] == '-') goto invalid;
|
||||
return i;
|
||||
}
|
||||
}
|
||||
|
||||
return i;
|
||||
|
||||
invalid:
|
||||
printf("Invalid option \"%s\" or option argument missing\n\n",argv[i]);
|
||||
|
||||
usage:
|
||||
printf(
|
||||
"Usage: keydb-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]\n\n"
|
||||
" -h, --host <hostname> Server hostname (default 127.0.0.1)\n"
|
||||
" -p, --port <port> Server port (default 6379)\n"
|
||||
" -c <clients> Number of parallel connections (default 50)\n"
|
||||
" -t, --threads <threads> Maximum number of threads to start before ending\n"
|
||||
" --time <time> Time between spinning up new client threads, in milliseconds\n"
|
||||
" --dbnum <db> Select the specified DB number (default 0)\n"
|
||||
" --user <username> Used to send ACL style 'AUTH username pass'. Needs -a.\n"
|
||||
" --password <password> Password for Redis Auth\n\n"
|
||||
);
|
||||
exit(exit_status);
|
||||
}
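For reference, an invocation assembled from the options parsed above might look like the following (host, port and limits are placeholders):

    ./src/keydb-diagnostic-tool -h 127.0.0.1 -p 6379 -c 50 -t 16 --time 5000

Each --time interval the tool starts one more benchmark thread with -c clients until it detects full server CPU load or a stagnating load gain (it also warns once the total client count passes 2000).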
|
||||
|
||||
int extractPropertyFromInfo(const char *info, const char *key, double &val) {
|
||||
char *line = strstr((char*)info, key);
|
||||
if (line == nullptr) return 1;
|
||||
line += strlen(key) + 1; // Skip past key name and following colon
|
||||
char *newline = strchr(line, '\n');
|
||||
*newline = 0; // Terminate string after relevant line
|
||||
val = strtod(line, nullptr);
|
||||
return 0;
|
||||
}
|
||||
|
||||
int extractPropertyFromInfo(const char *info, const char *key, unsigned int &val) {
|
||||
char *line = strstr((char*)info, key);
|
||||
if (line == nullptr) return 1;
|
||||
line += strlen(key) + 1; // Skip past key name and following colon
|
||||
char *newline = strchr(line, '\n');
|
||||
*newline = 0; // Terminate string after relevant line
|
||||
val = atoi(line);
|
||||
return 0;
|
||||
}
|
||||
|
||||
double getSelfCpuTime(struct rusage *self_ru) {
|
||||
getrusage(RUSAGE_SELF, self_ru);
|
||||
double user_time = self_ru->ru_utime.tv_sec + (self_ru->ru_utime.tv_usec / (double)1000000);
|
||||
double system_time = self_ru->ru_stime.tv_sec + (self_ru->ru_stime.tv_usec / (double)1000000);
|
||||
return user_time + system_time;
|
||||
}
|
||||
|
||||
double getServerCpuTime(redisContext *ctx) {
|
||||
redisReply *reply = (redisReply*)redisCommand(ctx, "INFO CPU");
|
||||
if (reply->type != REDIS_REPLY_STRING) {
|
||||
freeReplyObject(reply);
|
||||
printf("Error executing INFO command. Exiting.\n");
|
||||
return -1;
|
||||
}
|
||||
|
||||
double used_cpu_user, used_cpu_sys;
|
||||
if (extractPropertyFromInfo(reply->str, "used_cpu_user", used_cpu_user)) {
|
||||
printf("Error reading user CPU usage from INFO command. Exiting.\n");
|
||||
return -1;
|
||||
}
|
||||
if (extractPropertyFromInfo(reply->str, "used_cpu_sys", used_cpu_sys)) {
|
||||
printf("Error reading system CPU usage from INFO command. Exiting.\n");
|
||||
return -1;
|
||||
}
|
||||
freeReplyObject(reply);
|
||||
return used_cpu_user + used_cpu_sys;
|
||||
}
|
||||
|
||||
double getMean(std::deque<double> *q) {
|
||||
double sum = 0;
|
||||
for (long unsigned int i = 0; i < q->size(); i++) {
|
||||
sum += (*q)[i];
|
||||
}
|
||||
return sum / q->size();
|
||||
}
|
||||
|
||||
bool isAtFullLoad(double cpuPercent, unsigned int threads) {
|
||||
return cpuPercent / threads >= 96;
|
||||
}
|
||||
|
||||
int main(int argc, const char **argv) {
|
||||
int i;
|
||||
|
||||
storage_init(NULL, 0);
|
||||
|
||||
srandom(time(NULL));
|
||||
signal(SIGHUP, SIG_IGN);
|
||||
signal(SIGPIPE, SIG_IGN);
|
||||
|
||||
initConfigDefaults();
|
||||
|
||||
i = parseOptions(argc,argv);
|
||||
argc -= i;
|
||||
argv += i;
|
||||
|
||||
config.latency = (long long*)zmalloc(sizeof(long long)*config.requests, MALLOC_LOCAL);
|
||||
|
||||
if (config.max_threads > 0) {
|
||||
int err = 0;
|
||||
err |= pthread_mutex_init(&(config.requests_issued_mutex), NULL);
|
||||
err |= pthread_mutex_init(&(config.requests_finished_mutex), NULL);
|
||||
err |= pthread_mutex_init(&(config.liveclients_mutex), NULL);
|
||||
err |= pthread_mutex_init(&(config.is_fetching_slots_mutex), NULL);
|
||||
err |= pthread_mutex_init(&(config.is_updating_slots_mutex), NULL);
|
||||
err |= pthread_mutex_init(&(config.updating_slots_mutex), NULL);
|
||||
err |= pthread_mutex_init(&(config.slots_last_update_mutex), NULL);
|
||||
if (err != 0)
|
||||
{
|
||||
perror("Failed to initialize mutex");
|
||||
exit(EXIT_FAILURE);
|
||||
}
|
||||
}
|
||||
|
||||
const char *set_value = "abcdefghijklmnopqrstuvwxyz";
|
||||
int self_threads = 0;
|
||||
char command[63];
|
||||
|
||||
initBenchmarkThreads();
|
||||
redisContext *ctx = getRedisContext(config.hostip, config.hostport, config.hostsocket);
|
||||
double server_cpu_time, last_server_cpu_time = getServerCpuTime(ctx);
|
||||
struct rusage self_ru;
|
||||
double self_cpu_time, last_self_cpu_time = getSelfCpuTime(&self_ru);
|
||||
double server_cpu_load, last_server_cpu_load = 0, self_cpu_load, server_cpu_gain;
|
||||
std::deque<double> load_gain_history = {};
|
||||
double current_gain_avg, peak_gain_avg = 0;
|
||||
|
||||
redisReply *reply = (redisReply*)redisCommand(ctx, "INFO CPU");
|
||||
if (reply->type != REDIS_REPLY_STRING) {
|
||||
freeReplyObject(reply);
|
||||
printf("Error executing INFO command. Exiting.\r\n");
|
||||
return 1;
|
||||
}
|
||||
unsigned int server_threads;
|
||||
if (extractPropertyFromInfo(reply->str, "server_threads", server_threads)) {
|
||||
printf("Error reading server threads from INFO command. Exiting.\r\n");
|
||||
return 1;
|
||||
}
|
||||
freeReplyObject(reply);
|
||||
|
||||
printf("Server has %d threads.\nStarting...\n", server_threads);
|
||||
fflush(stdout);
|
||||
|
||||
while (self_threads < config.max_threads) {
|
||||
for (int i = 0; i < config.numclients; i++) {
|
||||
sprintf(command, "SET %d %s\r\n", self_threads * config.numclients + i, set_value);
|
||||
createClient(command, strlen(command), NULL,self_threads);
|
||||
}
|
||||
|
||||
benchmarkThread *t = config.threads[self_threads];
|
||||
if (pthread_create(&(t->thread), NULL, execBenchmarkThread, t)){
|
||||
fprintf(stderr, "FATAL: Failed to start thread %d. Exiting.\n", self_threads);
|
||||
exit(1);
|
||||
}
|
||||
self_threads++;
|
||||
|
||||
usleep(config.period_ms * 1000);
|
||||
|
||||
server_cpu_time = getServerCpuTime(ctx);
|
||||
self_cpu_time = getSelfCpuTime(&self_ru);
|
||||
server_cpu_load = (server_cpu_time - last_server_cpu_time) * 100000 / config.period_ms;
|
||||
self_cpu_load = (self_cpu_time - last_self_cpu_time) * 100000 / config.period_ms;
|
||||
if (server_cpu_time < 0) {
|
||||
break;
|
||||
}
|
||||
printf("%d threads, %d total clients. CPU Usage Self: %.1f%% (%.1f%% per thread), Server: %.1f%% (%.1f%% per thread)\r",
|
||||
self_threads,
|
||||
self_threads * config.numclients,
|
||||
self_cpu_load,
|
||||
self_cpu_load / self_threads,
|
||||
server_cpu_load,
|
||||
server_cpu_load / server_threads);
|
||||
fflush(stdout);
|
||||
server_cpu_gain = server_cpu_load - last_server_cpu_load;
|
||||
load_gain_history.push_back(server_cpu_gain);
|
||||
if (load_gain_history.size() > 5) {
|
||||
load_gain_history.pop_front();
|
||||
}
|
||||
current_gain_avg = getMean(&load_gain_history);
|
||||
if (current_gain_avg > peak_gain_avg) {
|
||||
peak_gain_avg = current_gain_avg;
|
||||
}
|
||||
last_server_cpu_time = server_cpu_time;
|
||||
last_self_cpu_time = self_cpu_time;
|
||||
last_server_cpu_load = server_cpu_load;
|
||||
|
||||
if (isAtFullLoad(server_cpu_load, server_threads)) {
|
||||
printf("\nServer is at full CPU load. If higher performance is expected, check server configuration.\n");
|
||||
break;
|
||||
}
|
||||
|
||||
if (current_gain_avg <= 0.05 * peak_gain_avg) {
|
||||
printf("\nServer CPU load appears to have stagnated with increasing clients.\n"
|
||||
"Server does not appear to be at full load. Check network for throughput.\n");
|
||||
break;
|
||||
}
|
||||
|
||||
if (self_threads * config.numclients > 2000) {
|
||||
printf("\nClient limit of 2000 reached. Server is not at full load and appears to be increasing.\n"
|
||||
"2000 clients should be more than enough to reach a bottleneck. Check all configuration.\n");
|
||||
}
|
||||
}
|
||||
|
||||
printf("Done.\n");
|
||||
|
||||
freeAllClients();
|
||||
freeBenchmarkThreads();
|
||||
|
||||
return 0;
|
||||
}
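To make the load calculation in the loop above concrete, server_cpu_load converts the CPU-seconds consumed during one sampling window into a percentage of that window. A worked example with assumed numbers:

    // Assumed sample: the server burned 3.2 CPU-seconds during a 5000 ms window.
    // server_cpu_load = 3.2 * 100000 / 5000 = 64.0   (64% of one core)
    // With server_threads = 8, isAtFullLoad() tests 64 / 8 = 8% per thread,
    // far below the 96% per-thread cutoff, so the ramp-up continues.

The 96% per-thread threshold and the 5%-of-peak stagnation check are the two exit conditions reported by the messages above.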
@ -426,7 +426,7 @@ sds createLatencyReport(void) {
}

if (advise_slowlog_inspect) {
report = sdscat(report,"- Check your Slow Log to understand what are the commands you are running which are too slow to execute. Please check https://redis.io/commands/slowlog for more information.\n");
report = sdscat(report,"- Check your Slow Log to understand what are the commands you are running which are too slow to execute. Please check https://docs.keydb.dev/docs/commands#slowlog for more information.\n");
}

/* Intrinsic latency. */
@ -612,6 +612,19 @@ int moduleDelKeyIfEmpty(RedisModuleKey *key) {
}
}

/* This function is used to set the thread local variables (serverTL) for
* arbitrary module threads. All incoming module threads share the same set of
* thread local variables (modulethreadvar).
*
* This is needed as some KeyDB functions use thread local variables to do things,
* and we don't want to share the thread local variables of existing server threads */
void moduleSetThreadVariablesIfNeeded(void) {
if (serverTL == nullptr) {
serverTL = &g_pserver->modulethreadvar;
g_fModuleThread = true;
}
}

/* --------------------------------------------------------------------------
* Service API exported to modules
*
@ -2265,6 +2278,7 @@ int RM_GetContextFlags(RedisModuleCtx *ctx) {
* periodically in timer callbacks or other periodic callbacks.
*/
int RM_AvoidReplicaTraffic() {
moduleSetThreadVariablesIfNeeded();
return checkClientPauseTimeoutAndReturnIfPaused();
}

@ -2341,8 +2355,11 @@ void *RM_OpenKey(RedisModuleCtx *ctx, robj *keyname, int mode) {
/* Destroy a RedisModuleKey struct (freeing is the responsibility of the caller). */
static void moduleCloseKey(RedisModuleKey *key) {
int signal = SHOULD_SIGNAL_MODIFIED_KEYS(key->ctx);
moduleAcquireGIL(false);
if ((key->mode & REDISMODULE_WRITE) && signal)
signalModifiedKey(key->ctx->client,key->db,key->key);
/* TODO: if (key->iter) RM_KeyIteratorStop(kp); */
moduleReleaseGIL(false);
if (key->iter) zfree(key->iter);
RM_ZsetRangeStop(key);
if (key && key->value && key->value->type == OBJ_STREAM &&
@ -5596,10 +5613,7 @@ int moduleClientIsBlockedOnKeys(client *c) {
* RedisModule_BlockClientOnKeys() is accessible from the timeout
* callback via RM_GetBlockedClientPrivateData). */
int RM_UnblockClient(RedisModuleBlockedClient *bc, void *privdata) {
if (serverTL == nullptr) {
serverTL = &g_pserver->modulethreadvar;
g_fModuleThread = true;
}
moduleSetThreadVariablesIfNeeded();
if (bc->blocked_on_keys) {
/* In theory the user should always pass the timeout handler as an
* argument, but better to be safe than sorry. */
@ -5899,10 +5913,7 @@ void RM_FreeThreadSafeContext(RedisModuleCtx *ctx) {
* a blocked client connected to the thread safe context. */
void RM_ThreadSafeContextLock(RedisModuleCtx *ctx) {
UNUSED(ctx);
if (serverTL == nullptr) {
serverTL = &g_pserver->modulethreadvar;
g_fModuleThread = true;
}
moduleSetThreadVariablesIfNeeded();
moduleAcquireGIL(FALSE /*fServerThread*/, true /*fExclusive*/);
}

@ -763,6 +763,20 @@ int getLongLongFromObjectOrReply(client *c, robj *o, long long *target, const ch
return C_OK;
}

int getUnsignedLongLongFromObjectOrReply(client *c, robj *o, uint64_t *target, const char *msg) {
uint64_t value;
if (getUnsignedLongLongFromObject(o, &value) != C_OK) {
if (msg != NULL) {
addReplyError(c,(char*)msg);
} else {
addReplyError(c,"value is not an integer or out of range");
}
return C_ERR;
}
*target = value;
return C_OK;
}

int getLongFromObjectOrReply(client *c, robj *o, long *target, const char *msg) {
long long value;
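The new helper mirrors getLongLongFromObjectOrReply() for unsigned 64-bit values. A minimal usage sketch, assuming a hypothetical command handler (the command name and error text are illustrative, not part of this change):

    /* Hypothetical caller: parse argv[1] as an unsigned 64-bit integer and echo it back.
     * On a bad argument the helper has already sent the error reply, so we just return. */
    void mycounterCommand(client *c) {
        uint64_t count;
        if (getUnsignedLongLongFromObjectOrReply(c, c->argv[1], &count,
                "count must be an unsigned 64-bit integer") != C_OK)
            return;
        addReplyLongLong(c, (long long)count);
    }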
@ -2850,18 +2850,19 @@ void syncWithMaster(connection *conn) {
goto error;
}

retry_connect:
/* Send a PING to check the master is able to reply without errors. */
if (mi->repl_state == REPL_STATE_CONNECTING) {
if (mi->repl_state == REPL_STATE_CONNECTING || mi->repl_state == REPL_STATE_RETRY_NOREPLPING) {
serverLog(LL_NOTICE,"Non blocking connect for SYNC fired the event.");
/* Delete the writable event so that the readable event remains
 * registered and we can wait for the PONG reply. */
connSetReadHandler(conn, syncWithMaster);
connSetWriteHandler(conn, NULL);
mi->repl_state = REPL_STATE_RECEIVE_PING_REPLY;
/* Send the PING, don't check for errors at all, we have the timeout
 * that will take care about this. */
err = sendCommand(conn,"PING",NULL);
err = sendCommand(conn,mi->repl_state == REPL_STATE_RETRY_NOREPLPING ? "PING" : "REPLPING",NULL);
if (err) goto write_error;
mi->repl_state = REPL_STATE_RECEIVE_PING_REPLY;
return;
}

@ -2874,7 +2875,13 @@ void syncWithMaster(connection *conn) {
 * Note that older versions of Redis replied with "operation not
 * permitted" instead of using a proper error code, so we test
 * both. */
if (err[0] != '+' &&
if (strncmp(err,"-ERR unknown command",20) == 0) {
serverLog(LL_NOTICE,"Master does not support REPLPING, sending PING instead...");
mi->repl_state = REPL_STATE_RETRY_NOREPLPING;
sdsfree(err);
err = NULL;
goto retry_connect;
} else if (err[0] != '+' &&
strncmp(err,"-NOAUTH",7) != 0 &&
strncmp(err,"-NOPERM",7) != 0 &&
strncmp(err,"-ERR operation not permitted",28) != 0)
@ -4948,7 +4955,7 @@ void replicationNotifyLoadedKey(redisDb *db, robj_roptr key, robj_roptr val, lon
redisObjectStack objTtl;
initStaticStringObject(objTtl, sdscatprintf(sdsempty(), "%lld", expire));
redisObjectStack objMvcc;
initStaticStringObject(objMvcc, sdscatprintf(sdsempty(), "%lu", mvccFromObj(val)));
initStaticStringObject(objMvcc, sdscatprintf(sdsempty(), "%" PRIu64, mvccFromObj(val)));
redisObject *argv[5] = {shared.mvccrestore, key.unsafe_robjcast(), &objMvcc, &objTtl, &objPayload};

replicationFeedSlaves(g_pserver->slaves, db->id, argv, 5);

@ -57,7 +57,6 @@
#include <limits.h>
#include <float.h>
#include <math.h>
#include <sys/resource.h>
#include <sys/utsname.h>
#include <locale.h>
#include <sys/socket.h>
@ -69,7 +68,6 @@
#include "keycheck.h"
#include "motd.h"
#include "t_nhash.h"
#include <sys/resource.h>
#ifdef __linux__
#include <sys/prctl.h>
#include <sys/mman.h>
@ -761,6 +759,10 @@ struct redisCommand redisCommandTable[] = {
"ok-stale ok-loading fast @connection @replication",
0,NULL,0,0,0,0,0,0},

{"replping",pingCommand,-1,
"ok-stale fast @connection @replication",
0,NULL,0,0,0,0,0,0},

{"echo",echoCommand,2,
"fast @connection",
0,NULL,0,0,0,0,0,0},
@ -3114,6 +3116,7 @@ void createSharedObjects(void) {
shared.lastid = makeObjectShared("LASTID",6);
shared.default_username = makeObjectShared("default",7);
shared.ping = makeObjectShared("ping",4);
shared.replping = makeObjectShared("replping", 8);
shared.setid = makeObjectShared("SETID",5);
shared.keepttl = makeObjectShared("KEEPTTL",7);
shared.load = makeObjectShared("LOAD",4);
@ -5009,11 +5012,7 @@ int prepareForShutdown(int flags) {
overwrite the synchronous save done by SHUTDOWN. */
if (g_pserver->FRdbSaveInProgress()) {
serverLog(LL_WARNING,"There is a child saving an .rdb. Killing it!");
/* Note that, in killRDBChild, we call rdbRemoveTempFile that will
 * close the fd (in order to actually unlink the file) in a background thread.
 * The temp rdb file fd may not be closed when redis exits quickly,
 * but the OS will close this fd when the process exits. */
killRDBChild(true);
killRDBChild();
/* Note that killRDBChild normally has backgroundSaveDoneHandler
 * doing its cleanup, but in this case this code will not be reached,
 * so we need to call rdbRemoveTempFile which will close fd(in order
@ -7358,7 +7357,7 @@ int main(int argc, char **argv) {
serverLog(LL_WARNING, "Failed to test the kernel for a bug that could lead to data corruption during background save. "
"Your system could be affected, please report this error.");
if (!checkIgnoreWarning("ARM64-COW-BUG")) {
serverLog(LL_WARNING,"Redis will now exit to prevent data corruption. "
serverLog(LL_WARNING,"KeyDB will now exit to prevent data corruption. "
"Note that it is possible to suppress this warning by setting the following config: ignore-warnings ARM64-COW-BUG");
exit(1);
}
@ -571,6 +571,7 @@ typedef enum {
REPL_STATE_NONE = 0, /* No active replication */
REPL_STATE_CONNECT, /* Must connect to master */
REPL_STATE_CONNECTING, /* Connecting to master */
REPL_STATE_RETRY_NOREPLPING, /* Master does not support REPLPING, retry with PING */
/* --- Handshake states, must be ordered --- */
REPL_STATE_RECEIVE_PING_REPLY, /* Wait for PING reply */
REPL_STATE_SEND_HANDSHAKE, /* Send handshake sequence to master */
@ -1698,7 +1699,7 @@ struct sharedObjectsStruct {
*emptyscan, *multi, *exec, *left, *right, *hset, *srem, *xgroup, *xclaim,
*script, *replconf, *eval, *persist, *set, *pexpireat, *pexpire,
*time, *pxat, *px, *retrycount, *force, *justid,
*lastid, *ping, *setid, *keepttl, *load, *createconsumer,
*lastid, *ping, *replping, *setid, *keepttl, *load, *createconsumer,
*getack, *special_asterick, *special_equals, *default_username,
*hdel, *zrem, *mvccrestore, *pexpirememberat,
*select[PROTO_SHARED_SELECT_CMDS],
@ -2953,6 +2954,7 @@ robj *createZsetZiplistObject(void);
robj *createStreamObject(void);
robj *createModuleObject(moduleType *mt, void *value);
int getLongFromObjectOrReply(client *c, robj *o, long *target, const char *msg);
int getUnsignedLongLongFromObjectOrReply(client *c, robj *o, uint64_t *target, const char *msg);
int getPositiveLongFromObjectOrReply(client *c, robj *o, long *target, const char *msg);
int getRangeLongFromObjectOrReply(client *c, robj *o, long min, long max, long *target, const char *msg);
int checkType(client *c, robj_roptr o, int type);
@ -512,7 +512,7 @@ void spopWithCountCommand(client *c) {
const char *sdsele;
robj *objele;
int encoding;
int64_t llele;
int64_t llele = 0;
unsigned long remaining = size-count; /* Elements left after SPOP. */

/* If we are here, the number of requested elements is less than the
@ -664,7 +664,7 @@ void srandmemberWithCountCommand(client *c) {
int uniq = 1;
robj_roptr set;
const char *ele;
int64_t llele;
int64_t llele = 0;
int encoding;

dict *d;
@ -813,7 +813,7 @@ void srandmemberWithCountCommand(client *c) {
void srandmemberCommand(client *c) {
robj_roptr set;
const char *ele;
int64_t llele;
int64_t llele = 0;
int encoding;

if (c->argc == 3) {
@ -813,7 +813,7 @@ int64_t streamTrim(stream *s, streamAddTrimArgs *args) {
}
deleted += deleted_from_lp;

/* Now we the entries/deleted counters. */
/* Now we update the entries/deleted counters. */
p = lpFirst(lp);
lp = lpReplaceInteger(lp,&p,entries-deleted_from_lp);
p = lpNext(lp,p); /* Skip deleted field. */
@ -842,7 +842,7 @@ int64_t streamTrim(stream *s, streamAddTrimArgs *args) {

/* Trims a stream by length. Returns the number of deleted items. */
int64_t streamTrimByLength(stream *s, long long maxlen, int approx) {
streamAddTrimArgs args = {0};
streamAddTrimArgs args = {{0}};
args.trim_strategy = TRIM_STRATEGY_MAXLEN;
args.approx_trim = approx;
args.limit = approx ? 100 * g_pserver->stream_node_max_entries : 0;
@ -852,7 +852,7 @@ int64_t streamTrimByLength(stream *s, long long maxlen, int approx) {

/* Trims a stream by minimum ID. Returns the number of deleted items. */
int64_t streamTrimByID(stream *s, streamID minid, int approx) {
streamAddTrimArgs args = {0};
streamAddTrimArgs args = {{0}};
args.trim_strategy = TRIM_STRATEGY_MINID;
args.approx_trim = approx;
args.limit = approx ? 100 * g_pserver->stream_node_max_entries : 0;
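The {0} to {{0}} change above is presumably about initializer warnings rather than behaviour: the first member of streamAddTrimArgs is itself a struct (a stream ID), so some compilers raise -Wmissing-braces for the single-brace form even though both forms zero-initialize the whole aggregate. A simplified illustration with stand-in types (not the real definitions):

    /* Simplified stand-ins for streamID / streamAddTrimArgs, for illustration only. */
    typedef struct { unsigned long long ms, seq; } id_example;
    typedef struct { id_example id; int approx_trim; long long maxlen; } trim_args_example;

    trim_args_example a = {0};    /* zero-initializes everything, but may warn (-Wmissing-braces) */
    trim_args_example b = {{0}};  /* same result, nested braces made explicit */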
@ -1,9 +1,9 @@
# Redis configuration for testing.
# KeyDB configuration for testing.

always-show-logo yes
notify-keyspace-events KEA
daemonize no
pidfile /var/run/redis.pid
pidfile /var/run/keydb.pid
port 6379
timeout 0
bind 127.0.0.1

@ -1,5 +1,5 @@
# Minimal configuration for testing.
always-show-logo yes
daemonize no
pidfile /var/run/redis.pid
pidfile /var/run/keydb.pid
loglevel verbose
@ -1,4 +1,4 @@
source tests/support/redis.tcl
source tests/support/keydb.tcl
source tests/support/util.tcl

set ::tlsdir "tests/tls"

@ -1,4 +1,4 @@
source tests/support/redis.tcl
source tests/support/keydb.tcl
source tests/support/util.tcl

set ::tlsdir "tests/tls"

@ -1,4 +1,4 @@
source tests/support/redis.tcl
source tests/support/keydb.tcl

set ::tlsdir "tests/tls"

@ -10,7 +10,7 @@
package require Tcl 8.5

set tcl_precision 17
source ../support/redis.tcl
source ../support/keydb.tcl
source ../support/util.tcl
source ../support/server.tcl
source ../support/test.tcl
@ -36,7 +36,7 @@ set ::run_matching {} ; # If non empty, only tests matching pattern are run.

if {[catch {cd tmp}]} {
puts "tmp directory not found."
puts "Please run this test from the Redis source root."
puts "Please run this test from the KeyDB source root."
exit 1
}

@ -92,7 +92,7 @@ proc spawn_instance {type base_port count {conf {}} {base_conf_file ""}} {
puts $cfg [format "tls-key-file %s/../../tls/server.key" [pwd]]
puts $cfg [format "tls-client-cert-file %s/../../tls/client.crt" [pwd]]
puts $cfg [format "tls-client-key-file %s/../../tls/client.key" [pwd]]
puts $cfg [format "tls-dh-params-file %s/../../tls/redis.dh" [pwd]]
puts $cfg [format "tls-dh-params-file %s/../../tls/keydb.dh" [pwd]]
puts $cfg [format "tls-ca-cert-file %s/../../tls/ca.crt" [pwd]]
puts $cfg "loglevel debug"
} else {
@ -303,7 +303,7 @@ proc pause_on_error {} {
set count 10
if {[lindex $argv 1] ne {}} {set count [lindex $argv 1]}
foreach_redis_id id {
puts "=== REDIS $id ===="
puts "=== KeyDB $id ===="
puts [exec tail -$count redis_$id/log.txt]
puts "---------------------\n"
}
@ -317,7 +317,7 @@ proc pause_on_error {} {
}
} elseif {$cmd eq {ls}} {
foreach_redis_id id {
puts -nonewline "Redis $id"
puts -nonewline "KeyDB $id"
set errcode [catch {
set str {}
append str "@[RI $id tcp_port]: "
@ -348,13 +348,13 @@ proc pause_on_error {} {
}
}
} elseif {$cmd eq {help}} {
puts "ls List Sentinel and Redis instances."
puts "ls List Sentinel and KeyDB instances."
puts "show-sentinel-logs \[N\] Show latest N lines of logs."
puts "show-keydb-logs \[N\] Show latest N lines of logs."
puts "S <id> cmd ... arg Call command in Sentinel <id>."
puts "R <id> cmd ... arg Call command in Redis <id>."
puts "R <id> cmd ... arg Call command in KeyDB <id>."
puts "SI <id> <field> Show Sentinel <id> INFO <field>."
puts "RI <id> <field> Show Redis <id> INFO <field>."
puts "RI <id> <field> Show KeyDB <id> INFO <field>."
puts "continue Resume test."
} else {
set errcode [catch {eval $line} retval]
@ -1,12 +1,10 @@
set system_name [string tolower [exec uname -s]]
# ldd --version returns 1 under musl for unknown reasons. If this check stops working, that may be why
set is_musl [catch {exec ldd --version}]
set system_supported 0

# We only support darwin or Linux with glibc
if {$system_name eq {darwin}} {
set system_supported 1
} elseif {$system_name eq {linux} && $is_musl eq 0} {
} elseif {$system_name eq {linux}} {
# Avoid the test on libmusl, which does not support backtrace
set ldd [exec ldd src/keydb-server]
if {![string match {*libc.musl*} $ldd]} {

@ -4,7 +4,7 @@ proc show_cluster_status {} {
# The following is the regexp we use to match the log line
# time info. Logs are in the following form:
#
# 11296:M 25 May 2020 17:37:14.652 # Server initialized
# 11296:11296:M 25 May 2020 17:37:14.652 # Server initialized
set log_regexp {^[0-9]+:^[0-9]+:[A-Z] [0-9]+ [A-z]+ [0-9]+ ([0-9:.]+) .*}
set repl_regexp {(master|repl|sync|backlog|meaningful|offset)}
@ -355,7 +355,7 @@ proc start_server {options {code undefined}} {
dict set config "tls-key-file" [format "%s/tests/tls/server.key" [pwd]]
dict set config "tls-client-cert-file" [format "%s/tests/tls/client.crt" [pwd]]
dict set config "tls-client-key-file" [format "%s/tests/tls/client.key" [pwd]]
dict set config "tls-dh-params-file" [format "%s/tests/tls/redis.dh" [pwd]]
dict set config "tls-dh-params-file" [format "%s/tests/tls/keydb.dh" [pwd]]
dict set config "tls-ca-cert-file" [format "%s/tests/tls/ca.crt" [pwd]]
dict set config "loglevel" "debug"
}

@ -5,7 +5,7 @@
package require Tcl 8.5

set tcl_precision 17
source tests/support/redis.tcl
source tests/support/keydb.tcl
source tests/support/server.tcl
source tests/support/tmpfile.tcl
source tests/support/test.tcl
@ -57,8 +57,8 @@ set ::all_tests {
integration/psync2-reg
integration/psync2-pingoff
integration/failover
integration/redis-cli
integration/redis-benchmark
integration/keydb-cli
integration/keydb-benchmark
unit/pubsub
unit/slowlog
unit/scripting
@ -62,7 +62,6 @@ start_server {overrides {save ""} tags {"other"}} {
} {*index is out of range*}

tags {consistency} {
if {true} {
if {$::accurate} {set numops 10000} else {set numops 1000}
test {Check consistency of different data types after a reload} {
r flushdb
@ -113,7 +112,6 @@ start_server {overrides {save ""} tags {"other"}} {
}
} {1}
}
}

test {EXPIRES after a reload (snapshot + append only file rewrite)} {
r flushdb

@ -100,8 +100,8 @@ start_server {tags {"tls"}} {
set master_port [srv 0 port]

# Use a non-restricted client/server cert for the replica
set redis_crt [format "%s/tests/tls/redis.crt" [pwd]]
set redis_key [format "%s/tests/tls/redis.key" [pwd]]
set redis_crt [format "%s/tests/tls/keydb.crt" [pwd]]
set redis_key [format "%s/tests/tls/keydb.key" [pwd]]

start_server [list overrides [list tls-cert-file $redis_crt tls-key-file $redis_key] \
omit [list tls-client-cert-file tls-client-key-file]] {
40
utils/compare_config.sh
Normal file
@ -0,0 +1,40 @@
#! /bin/bash

if [[ "$1" == "--help" ]] || [[ "$1" == "-h" ]] || [[ "$#" -ne 2 ]] ; then
echo "This script is used to compare different KeyDB configuration files."
echo ""
echo " Usage: compare_config.sh [keydb1.conf] [keydb2.conf]"
echo ""
echo "Output: a side by side sorted list of all active parameters, followed by a summary of the differences."
exit 0
fi

conf_1=$(mktemp)
conf_2=$(mktemp)

echo "----------------------------------------------------"
echo "--- display all active parameters in config files---"
echo "----------------------------------------------------"
echo ""
echo "--- $1 ---" > $conf_1
echo "" >> $conf_1
grep -ve "^#" -ve "^$" $1 | sort >> $conf_1
echo "--- $2 ---" >> $conf_2
echo "" >> $conf_2
grep -ve "^#" -ve "^$" $2 | sort >> $conf_2

pr -T --merge $conf_1 $conf_2

echo ""
echo ""
echo "--------------------------------------------"
echo "--- display config file differences only ---"
echo "--------------------------------------------"
echo ""

sdiff --suppress-common-lines $conf_1 $conf_2

rm $conf_1
rm $conf_2

exit 0
@ -3,10 +3,10 @@
# Generate some test certificates which are used by the regression test suite:
#
# tests/tls/ca.{crt,key} Self signed CA certificate.
# tests/tls/redis.{crt,key} A certificate with no key usage/policy restrictions.
# tests/tls/keydb.{crt,key} A certificate with no key usage/policy restrictions.
# tests/tls/client.{crt,key} A certificate restricted for SSL client usage.
# tests/tls/server.{crt,key} A certificate restricted for SSL server usage.
# tests/tls/redis.dh DH Params file.
# tests/tls/keydb.dh DH Params file.

generate_cert() {
local name=$1
@ -19,7 +19,7 @@ generate_cert() {
[ -f $keyfile ] || openssl genrsa -out $keyfile 2048
openssl req \
-new -sha256 \
-subj "/O=Redis Test/CN=$cn" \
-subj "/O=KeyDB Test/CN=$cn" \
-key $keyfile | \
openssl x509 \
-req -sha256 \
@ -38,7 +38,7 @@ openssl req \
-x509 -new -nodes -sha256 \
-key tests/tls/ca.key \
-days 3650 \
-subj '/O=Redis Test/CN=Certificate Authority' \
-subj '/O=KeyDB Test/CN=Certificate Authority' \
-out tests/tls/ca.crt

cat > tests/tls/openssl.cnf <<_END_
@ -53,6 +53,6 @@ _END_

generate_cert server "Server-only" "-extfile tests/tls/openssl.cnf -extensions server_cert"
generate_cert client "Client-only" "-extfile tests/tls/openssl.cnf -extensions client_cert"
generate_cert redis "Generic-cert"
generate_cert keydb "Generic-cert"

[ -f tests/tls/redis.dh ] || openssl dhparam -out tests/tls/redis.dh 2048
[ -f tests/tls/keydb.dh ] || openssl dhparam -out tests/tls/keydb.dh 2048

@ -2,7 +2,7 @@
# Copyright (C) 2011 Salvatore Sanfilippo
# Released under the BSD license like Redis itself

source ../tests/support/redis.tcl
source ../tests/support/keydb.tcl
set ::port 12123
set ::tests {PING,SET,GET,INCR,LPUSH,LPOP,SADD,SPOP,LRANGE_100,LRANGE_600,MSET}
set ::datasize 16