483 Commits

Yossi Gottlieb
70c823a64e Add oom-score-adj configuration option to control Linux OOM killer. (#1690)
Add Linux kernel OOM killer control option.

This adds the ability to control the Linux OOM killer oom_score_adj
parameter for all Redis processes, depending on the process role (i.e.
master, replica, background child).

An oom-score-adj global boolean flag controls this feature. In addition,
specific values can be configured using oom-score-adj-values if
additional tuning is required.
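
As a redis.conf sketch (the three values apply to the master, replica and
background child roles respectively; numbers illustrative):

    oom-score-adj yes
    oom-score-adj-values 0 200 800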
2020-08-12 17:58:56 +03:00
Tyson Andre
6e17afa80a Implement SMISMEMBER key member [member ...] (#7615)
This is a rebased version of #3078 originally by shaharmor
with the following patches by TysonAndre made after rebasing
to work with the updated C API:

1. Add 2 more unit tests
   (wrong argument count error message, integer over 64 bits)
2. Use addReplyArrayLen instead of addReplyMultiBulkLen.
3. Undo changes to src/help.h - for the ZMSCORE PR,
   I heard those should instead be automatically
   generated from the redis-doc repo if it gets updated

Motivations:

- Example use case: Client code to efficiently check if each element of a set
  of 1000 items is a member of a set of 10 million items.
  (Similar to reasons for working on #7593)
- HMGET and ZMSCORE already exist. Without multi-member support for sets,
  developers may decide to store data that is best suited to a regular set
  in a sorted set or hash map instead, just for the multi-get support.

Currently, using MULTI or Lua scripting to call SISMEMBER multiple times
would almost certainly be less efficient than a native SMISMEMBER,
for the following reasons:

- Need to look up the set from the key every time
  instead of reusing the C pointer.
- Using pipelining or multi-commands would result in more bytes sent
  and received by the client for the repeated SISMEMBER KEY sections.
- Need to specially encode the data and decode it from the client
  for lua-based solutions.
- Proposed solutions using Lua or SADD/SDIFF could trigger writes to
  the keyspace, which is undesirable on a redis replica server
  or when commands get replicated to replicas.
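
A quick usage sketch (hypothetical key and members):

    redis> SADD myset "one" "three"
    (integer) 2
    redis> SMISMEMBER myset "one" "two" "three"
    1) (integer) 1
    2) (integer) 0
    3) (integer) 1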

Co-Authored-By: Shahar Mor <shahar@peer5.com>
Co-Authored-By: Tyson Andre <tysonandre775@hotmail.com>
2020-08-11 11:55:06 +03:00
Rajat Pawar
8644c26487 Fix comment about ACLGetCommandPerm() 2020-08-10 23:11:26 -07:00
杨博东
51aa08d26d Avoid redundant calls to signalKeyAsReady (#7625)
signalKeyAsReady has some overhead (namely dictFind), so we should
only call it when there are clients blocked on the relevant type (BLOCKED_*).
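
A minimal C sketch of the idea at a list call site (illustrative, not the
exact patch; server.blocked_clients_by_type is the existing per-type counter):

    /* Skip signalKeyAsReady (and its dictFind) entirely when no client
     * is blocked on a list command anyway. */
    if (server.blocked_clients_by_type[BLOCKED_LIST])
        signalKeyAsReady(db, key);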
2020-08-11 08:18:09 +03:00
Wang Yuan
921c633df9 Fix applying zero offset to null pointer when creating moduleFreeContextReusedClient (#7323)
Before this fix we were attempting to select a DB before creating the DB, see: #7323

This issue doesn't seem to have any implications, since the selected DB index is 0,
the db pointer remains NULL, and will later be correctly set before using this dummy
client for the first time.

As we know, we call 'moduleInitModulesSystem()' before 'initServer()'. We
allocate memory for server.db in 'initServer()', but 'moduleInitModulesSystem()'
calls 'createClient()', which calls 'selectDb()', before the databases were
created. Instead, we should call 'createClient()' for
moduleFreeContextReusedClient after 'initServer()'.
2020-08-08 14:36:41 +03:00
Oran Agra
cad93ed273 Accelerate diskless master connections, and general re-connections (#6271)
Diskless master has some inherent latencies.
1) fork starts with delay from cron rather than immediately
2) replica is put online only after an ACK, but the ACK
   was sent only once a second.
3) even if the ACK arrived immediately, it would not
   register if cron hadn't yet detected that the fork is done.

Besides that, when a replica disconnects, it doesn't immediately
attempt to re-connect, it waits for replication cron (once per second).
In case it was already online, it may be important to try to re-connect
as soon as possible, so that the backlog at the master doesn't vanish.

In case it disconnected during rdb transfer, one can argue that it's
not very important to re-connect immediately, but this is needed for the
"diskless loading short read" test to be able to run 100 iterations in 5
seconds, rather than 3 (waiting for replication cron re-connection).

changes in this commit:
1) the SYNC command starts a fork immediately if no sync_delay is configured
   (see the config sketch after this list)
2) the replica sends REPLCONF ACK when done reading the rdb (rather than on the 1s cron)
3) when a replica unexpectedly disconnects, it immediately tries to
   re-connect rather than waiting 1s
4) when a child exits, if there is another replica waiting, we spawn a new
   one right away, instead of waiting for the 1s replicationCron
5) added a call to connectWithMaster from replicationSetMaster, which is called
   from the REPLICAOF command but also in 3 places in cluster.c; in all of
   these the connection attempt will now be immediate instead of delayed by 1
   second.
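
For reference, change (1) interacts with these existing redis.conf knobs
(values illustrative):

    repl-diskless-sync yes
    repl-diskless-sync-delay 0   # no delay: SYNC forks immediately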

side note:
we can add a call to rdbPipeReadHandler in replconfCommand when getting
a REPLCONF ACK from the replica, to solve a race where the replica got
the entire rdb and EOF marker before we detected that the pipe was
closed.
In the test I did see this race happen in about one of some 300 runs,
but I concluded that this race is unlikely in real life (where the
replica is on another host and we're more likely to first detect the
pipe was closed).
The test runs 100 iterations in 3 seconds, so in some cases it'll take 4
seconds instead (waiting for another REPLCONF ACK).

Removing unneeded startBgsaveForReplication from updateSlavesWaitingForBgsave:
now that checkChildrenDone is calling the new replicationStartPendingFork
(extracted from serverCron) there's actually no need to call
startBgsaveForReplication from updateSlavesWaitingForBgsave anymore,
since as soon as updateSlavesWaitingForBgsave returns, checkChildrenDone is
calling replicationStartPendingFork, which handles that anyway.
The code in updateSlavesWaitingForBgsave had a bug in which it ignored
repl-diskless-sync-delay, but removing that code shows that this bug was
hiding another bug, which is that max_idle should have used >= and
not >. This one-second delay has a big impact on my new test.
2020-08-06 16:53:06 +03:00
Oran Agra
58ca0b0be2 Assertion and panic, print crash log without generating SIGSEGV
This makes it possible to add tests that generate assertions, and run
them with valgrind, making sure that there are no memory violations
prior to the assertion.

New config options:
- crash-log-enabled - can be disabled for cleaner core dumps
- crash-memcheck-enabled - useful for faster termination after a crash
- use-exit-on-panic - to be used by the test suite so that valgrind can
  detect leaks and memory corruptions
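
As a redis.conf sketch (values shown are the assumed defaults; the test
suite would flip use-exit-on-panic to yes):

    crash-log-enabled yes
    crash-memcheck-enabled yes
    use-exit-on-panic no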

Other changes:
- Crash log is printed even on systems that don't HAVE_BACKTRACE, i.e. in
  both SIGSEGV and assert / panic
- Assertion and panic won't print registers and code around EIP (which
  was useless), but will do a fast memory test (which may still indicate
  that the assertion was due to memory corruption)

I had to reshuffle code in order to re-use it, so I extracted some code
into functions without actually doing any changes to the code:
- logServerInfo
- logModulesInfo
- doFastMemoryTest (with the exception of it being conditional)
- dumpCodeAroundEIP

changes to the crash report on segfault:
- logRegisters is called right after the stack trace (before info); done
  just in order to have more re-usable code
- stack trace skips the first two items on the stack (the crash log and
  signal handler functions)
2020-08-06 16:47:27 +03:00
Tyson Andre
486e39e86e Add a ZMSCORE command returning an array of scores. (#7593)
Syntax: `ZMSCORE KEY MEMBER [MEMBER ...]`

This is an extension of #2359
amended by Tyson Andre to work with the changed unstable API,
add more tests, and consistently return an array.

- It seemed as if it would be more likely to get reviewed
  after updating the implementation.

Currently, using MULTI or Lua scripting to call ZSCORE multiple times
would almost certainly be less efficient than a native ZMSCORE,
for the following reasons:

- Need to look up the sorted set from the key every time instead of reusing
  the C pointer.
- Using pipelining or multi-commands would result in more bytes sent by
  the client for the repeated `ZSCORE KEY` sections.
- Need to specially encode the data and decode it from the client
  for lua-based solutions.
- The fastest solution I've seen for large sets (thousands or millions)
  involves Lua and a variadic ZADD, then a ZINTERSTORE, then a ZRANGE 0 -1,
  then UNLINK of a temporary set (or Lua). This is still inefficient.
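
A quick usage sketch (hypothetical key and members):

    redis> ZADD myzset 1 "one" 2 "two"
    (integer) 2
    redis> ZMSCORE myzset "one" "missing" "two"
    1) "1"
    2) (nil)
    3) "2"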

Co-authored-by: Tyson Andre <tysonandre775@hotmail.com>
2020-08-04 17:49:33 +03:00
Arun Ranganathan
444b53e640 Show threading configuration in INFO output (#7446)
Co-authored-by: Oran Agra <oran@redislabs.com>
2020-07-29 08:46:44 +03:00
Jiayuan Chen
198770751f Add optional tls verification (#7502)
Adds an `optional` value to the previously boolean `tls-auth-clients` configuration keyword.
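
As a redis.conf sketch (semantics of `optional`, as I understand this
change: a client certificate is validated only if the client sends one):

    tls-auth-clients optional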

Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
2020-07-28 10:45:21 +03:00
grishaf
f8751d03ba Fix prepareForShutdown function declaration (#7566) 2020-07-26 08:27:30 +03:00
Meir Shpilraien (Spielrein)
73198c5019 This PR introduces a new loaded keyspace event (#7536)
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Itamar Haber <itamar@redislabs.com>
2020-07-23 12:38:51 +03:00
Yossi Gottlieb
c611a836f6 TLS: Session caching configuration support. (#7420)
* TLS: Session caching configuration support.
* TLS: Remove redundant config initialization.
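
A hedged redis.conf sketch of the resulting options (names from this
change; values are the assumed defaults):

    tls-session-caching yes
    tls-session-cache-size 20480
    tls-session-cache-timeout 300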
2020-07-10 11:33:47 +03:00
Oran Agra
1c35f8741b RESTORE ABSTTL won't store expired keys into the db (#7472)
Similarly to EXPIREAT with a TTL in the past, which implicitly deletes the
key and returns success, RESTORE should not store keys that are already
expired into the db.
When used together with REPLACE it should emit a DEL to the keyspace
notification and replication stream.
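
A usage sketch (serialized payload elided; an ABSTTL of 1000 ms since the
epoch is long past, so per this change the key is simply not stored):

    redis> RESTORE newkey 1000 "<serialized-payload>" ABSTTL
    OK
    redis> EXISTS newkey
    (integer) 0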
2020-07-10 10:02:37 +03:00
Salvatore Sanfilippo
54a422f6e0 Merge pull request #7390 from oranagra/exec_fails_abort
EXEC always fails with EXECABORT and multi-state is cleared
2020-06-23 13:12:52 +02:00
Oran Agra
fe8d6fe749 EXEC always fails with EXECABORT and multi-state is cleared
In order to support the use of multi-exec in pipeline, it is important that
MULTI and EXEC are never rejected and it is easy for the client to know if the
connection is still in multi state.

It was easy to make sure MULTI and DISCARD never fail (done by previous
commits) since these only change the client state and don't do any actual
change in the server, but EXEC is a different story.

Since in the past, it was possible for clients to handle some EXEC errors and
retry the EXEC, we now can't afford to return any error on EXEC other than
EXECABORT, which now carries with it the real reason for the abort too.

Other fixes in this commit:
- Some checks that were performed at the time of queuing need to be re-
  validated when EXEC runs; for instance, if the transaction contains write
  commands, it needs to be aborted. There was one check that was already done
  in execCommand (-READONLY), but other checks were missing: -OOM, -MISCONF,
  -NOREPLICAS, -MASTERDOWN
- When a command is rejected by processCommand it was rejected with addReply,
  which was not recognized as an error in case the bad command came from the
  master. This will make it possible to count or MONITOR these errors in the future.
- Make it easier for tests to create additional (non-deferred) clients.
- Add tests for the fixes of this commit.
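
A sketch of the resulting client-visible behavior (illustrative session):

    redis> MULTI
    OK
    redis> NOSUCHCMD
    (error) ERR unknown command 'NOSUCHCMD' ...
    redis> EXEC
    (error) EXECABORT Transaction discarded because of previous errors.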
2020-06-23 12:01:33 +03:00
antirez
5766a69a41 LPOS: implement the final design. 2020-06-10 12:49:15 +02:00
Paul Spooren
cc04253b1d LRANK: Add command (the command will be renamed LPOS).
The `LRANK` command returns the index (position) of a given element
within a list. Using the `direction` argument it is possible to specify
going from head to tail (ascending, 1) or from tail to head (descending,
-1). Only the first found index is returned. The complexity is O(N).
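
A usage sketch under the semantics described above (hypothetical list;
the return value is assumed to be the 0-based index from the head):

    redis> RPUSH jobs "a" "b" "c" "b"
    (integer) 4
    redis> LRANK jobs "b" 1      # first match scanning head to tail
    (integer) 1
    redis> LRANK jobs "b" -1     # first match scanning tail to head
    (integer) 3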

When using lists as a queue it can be of interest at what position a
given element is, for instance to monitor a job processing through a
work queue. This came up within the Python `rq` project which is based
on Redis[0].

[0]: https://github.com/rq/rq/issues/1197

Signed-off-by: Paul Spooren <mail@aparcar.org>
2020-06-10 12:07:40 +02:00
antirez
0cd63f06b4 Replication: showLatestBacklog() refactored out. 2020-05-28 10:08:16 +02:00
antirez
858845ad56 Remove the meaningful offset feature.
After a closer look, the Redis core developers all believe that this was
too fragile, caused many bugs that we didn't expect and that were very
hard to track. Better to find an alternative solution that is simpler.
2020-05-27 12:06:33 +02:00
antirez
7f994abc48 Make disconnectSlaves() synchronous in the base case.
Otherwise we run into this:

Backtrace:
src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
src/redis-server 127.0.0.1:21322[0x4552d4]
src/redis-server 127.0.0.1:21322[0x4c5593]
src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]

We disconnect all the replicas and free the replication backlog in
certain replication paths, and the code that frees the replication
backlog expects that no replica is connected.

However we still need to free the replicas asynchronously in certain
cases, as documented in the top comment of disconnectSlaves().
2020-05-22 19:29:09 +02:00
antirez
146201c694 Cache master without checking of deferred close flags.
The context is issue #7205: since the introduction of threaded I/O we close
clients asynchronously by default from readQueryFromClient(). So we
should no longer prevent the caching of the master client, to later
PSYNC incrementally, if such flags are set. However we also don't want
the master client to be cached with such flags (would be closed
immediately after being restored). And yet we want a way to understand
if a master was closed because of a protocol error, and in that case
prevent the caching.
2020-05-15 10:19:13 +02:00
antirez
e3166461ed Track events processed while blocked globally.
Related to #7234.
2020-05-14 10:06:27 +02:00
Titouan Christophe
405f435b37 make struct user anonymous (only typedefed)
This works because this struct is never referenced by its struct tag,
but always via its typedef.

This prevents a conflict with struct user from <sys/user.h>
when compiling against uclibc.
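
An illustrative C sketch of the change (fields hypothetical):

    #include <stdint.h>

    /* Before (the tag collides with struct user from <sys/user.h>
     * when compiling against uclibc):
     *     typedef struct user { ... } user;
     * After: anonymous struct, referenced only through the typedef. */
    typedef struct {
        char *name;       /* illustrative fields, not the real layout */
        uint64_t flags;
    } user;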

Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>
2020-05-05 11:35:03 +02:00
Salvatore Sanfilippo
a39bfa8c41 Merge pull request #7192 from hwware/trackingprefix
Client Side Caching: Add Number of Tracking Prefix Stats in Server Info
2020-05-04 11:06:44 +02:00
hwware
7a4e8aafc3 Client Side Caching: Add Tracking Prefix Number Stats in Server Info 2020-05-02 19:20:44 -04:00
zhenwei pi
2c853869bf Support setcpuaffinity on linux/bsd
Currently, there are several types of threads/child processes of a
redis server. Sometimes we need to deeply optimise the performance of
redis, so we would like to isolate threads/processes.

There was some discussion about cpu affinity cases in the issue:
https://github.com/antirez/redis/issues/2863

So this patch implements cpu affinity setting via redis.conf; we can
then configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/
bgsave_cpulist by cpu list.

Examples of cpulist in redis.conf:
server_cpulist 0-7:2      means cpu affinity 0,2,4,6
bio_cpulist 1,3           means cpu affinity 1,3
aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
bgsave_cpulist 1,10-11    means cpu affinity 1,10,11

Tested on linux/freebsd, both work fine.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
2020-05-02 21:19:47 +08:00
Oran Agra
5633862924 Keep track of meaningful replication offset in replicas too
Now both master and replicas keep track of the last replication offset
that contains meaningful data (ignoring the trailing pings), and both
trim that tail from the replication backlog, and from the offset they
use when trying to psync.

The implication is that if someone missed some pings, or even has extra
pings that the promoted replica doesn't have, it'll still be able to
psync (avoid full sync).

The downside (which was already committed) is that replicas running old
code may fail to psync, since the promoted replica trims pings from its
backlog.

This commit adds a test that reproduces several cases of promotions and
demotions with stale and non-stale pings.

Background:
The meaningful offset on the master was added recently to solve a problem where
the master is left all alone, injecting PINGs into its backlog when no one is
listening, and then gets demoted and tries to replicate from a replica that didn't
have any of the PINGs (or at least not the last ones).

However, consider this case:
master A has two replicas (B and C) replicating directly from it.
There's no traffic at all, and also no network issues, just many pings in the
tail of the backlog. Now B gets promoted, A becomes a replica of B, and C
remains a replica of A. When A gets demoted, it trims the pings from its
backlog, and successfully replicates from B. However, C is still aware of
these PINGs; when it disconnects and re-connects to A, it'll ask for something
that's not in the backlog anymore (since A trimmed the tail of its backlog),
and be forced to do a full sync (something it didn't have to do before the
meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and there; it
turns out the reason was PINGs. Investigating it shows the following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is replica of #1
now we see that when #1 is demoted it prints:
17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
and when #3 connects to the demoted #2, #2 says:
17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

So the issue here is that the meaningful offset feature saved the day for the
demoted master (since it needs to sync from a replica that didn't get the last
ping), but it didn't help one of the other replicas which did get the last ping.
2020-04-27 15:52:23 +02:00
antirez
ae3cf7911a LCS -> STRALGO LCS.
STRALGO should be a container for mostly read-only string
algorithms in Redis. The algorithms should have two main
characteristics:

1. They should be non-trivial to compute, and often not part of
programming language standard libraries.
2. They should be fast enough that it is a good idea to have optimized C
implementations.
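
For example, with LCS (the session mirrors the documented behavior):

    redis> STRALGO LCS STRINGS ohmytext mynewtext
    "mytext"
    redis> STRALGO LCS STRINGS ohmytext mynewtext LEN
    (integer) 6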

Next thing I would love to see? A small strings compression algorithm.
2020-04-24 16:54:32 +02:00
antirez
d3d5108c7d Tracking: NOLOOP internals implementation. 2020-04-21 10:51:46 +02:00
antirez
ce7bfb275d Use the special static refcount for stack objects. 2020-04-09 16:25:30 +02:00
antirez
9d24b6f810 RDB: refactor some RDB loading code into dbAddRDBLoad(). 2020-04-09 16:21:48 +02:00
antirez
df90feb894 incrRefCount(): abort on statically allocated object. 2020-04-09 16:20:41 +02:00
antirez
f9c0953fdd RDB: load files faster avoiding useless free+realloc.
Reloading of the RDB generated by

    DEBUG POPULATE 5000000
    SAVE

is now 25% faster.

This commit also prepares the ability to have more flexibility when
loading stuff from the RDB, since we no longer use dbAdd() but can
control exactly how things are added in the database.
2020-04-09 10:24:46 +02:00
antirez
45d7fcec22 Speedup INFO by counting client memory incrementally.
Related to #5145.

Design note: clients may change type when they turn into replicas or are
moved into the Pub/Sub category and so forth. Moreover the recomputation
of the bytes used is problematic for obvious reasons: it changes
continuously, so as a conservative way to avoid accumulating errors,
each client remembers the contribution it gave to the sum, and removes
it when it is freed or before updating it with the new memory usage.
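
A hedged, self-contained C sketch of the pattern (names illustrative,
not the actual Redis fields):

    #include <stddef.h>

    /* Each client remembers its last contribution to the per-type sum,
     * so an update is O(1) and the total never accumulates drift. */
    typedef struct {
        int    last_type;
        size_t last_usage;
    } client_mem;

    static size_t type_memory_sum[8];   /* one slot per client type */

    static void track_client_mem(client_mem *c, int type, size_t usage_now) {
        type_memory_sum[c->last_type] -= c->last_usage;  /* remove old share */
        type_memory_sum[type] += usage_now;              /* add new share */
        c->last_type = type;
        c->last_usage = usage_now;
    }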
2020-04-07 12:07:54 +02:00
Salvatore Sanfilippo
a58de4cff0 Merge pull request #6243 from soloestoy/expand-lazy-free-server-del
lazyfree: add a new configuration lazyfree-lazy-user-del
2020-04-06 17:27:39 +02:00
antirez
f543150d0b Merge branch 'lcs' into unstable 2020-04-06 13:51:55 +02:00
Salvatore Sanfilippo
60251ac868 Merge pull request #6797 from patpatbear/issue_#6565_memory_borderline
Check OOM at script start to get stable lua OOM state.
2020-04-06 11:59:01 +02:00
Salvatore Sanfilippo
9a64bcf30a Merge pull request #6694 from oranagra/signal_modified_key
modules don't signalModifiedKey in setKey() since that's done (optionally) in RM_CloseKey
2020-04-02 19:00:20 +02:00
antirez
d77fd23ae2 LCS: initial functionality implemented. 2020-04-01 16:13:18 +02:00
Salvatore Sanfilippo
2399ed885b Merge pull request #7037 from guybe7/fix_module_replicate_multi
Modules: Test MULTI/EXEC replication of RM_Replicate
2020-03-31 17:00:57 +02:00
Guy Benoish
fd914fdd52 Modules: Test MULTI/EXEC replication of RM_Replicate
Make sure call() doesn't wrap replicated commands with
a redundant MULTI/EXEC

Other, unrelated changes:
1. Fix a formatting compiler warning in INFO CLIENTS
2. Use CLIENT_ID_AOF instead of UINT64_MAX
2020-03-31 13:55:51 +03:00
antirez
84dc7c0f65 Merge branch 'pubsub_patterns_boost' of https://github.com/leeyiw/redis into leeyiw-pubsub_patterns_boost 2020-03-31 12:40:08 +02:00
antirez
ab89ab5173 Fix module commands propagation double MULTI bug.
b512cb40 introduced automatic wrapping of MULTI/EXEC for the
alsoPropagate API. However this collides with the built-in mechanism
already present in module.c. To avoid complex changes near Redis 6 GA
this commit introduces the ability to exclude call() MULTI/EXEC wrapping
for alsoPropagate, in order to continue to use the old code paths in
module.c.
2020-03-31 11:00:45 +02:00
antirez
aa3efaa1d2 timeout.c created: move client timeouts code there. 2020-03-27 16:35:03 +01:00
antirez
ab10e0f2fb Precise timeouts: clean up the table on unblock.
Now that this mechanism is the sole one used for blocked clients
timeouts, it is wiser to clean up the table when the client unblocks
for any reason. We use a flag: CLIENT_IN_TO_TABLE, in order to avoid a
radix tree lookup when the client was already removed from the table
because we processed it by scanning the radix tree.
2020-03-27 16:35:03 +01:00
antirez
2ee65aff6b Precise timeouts: fix comments after functional change. 2020-03-27 16:35:03 +01:00
antirez
0264a78063 Precise timeouts: use only radix tree for timeouts. 2020-03-27 16:35:03 +01:00
antirez
f232cb6670 Precise timeouts: working initial implementation. 2020-03-27 16:35:03 +01:00
antirez
86bd36d93a Precise timeouts: refactor unblocking on timeout. 2020-03-27 16:35:02 +01:00