1127 Commits

Author SHA1 Message Date
Oran Agra
7c88eca1e6 fix valgrind test failure in replication test
In 00323f342 I added more keys to that test to make it run longer,
but under valgrind this now means the test times out; give valgrind
more time.
2020-05-18 10:26:53 +03:00
antirez
857bf0b4b9 Merge branch 'unstable' of github.com:/antirez/redis into unstable 2020-05-17 18:24:48 +02:00
antirez
5781712458 Improve the PSYNC2 test reliability. 2020-05-17 18:24:34 +02:00
Oran Agra
ba6f40ea94 add regression test for the race in #7205
With the original version of 6.0.0, this test detects an excessive
full sync.
With the fix in 146201c69, this test detects memory corruption,
especially when using the libc allocator, with or without valgrind.
2020-05-17 18:26:02 +03:00
Salvatore Sanfilippo
3c8824aa54 Merge pull request #7229 from yossigo/tls-fails-on-recent-debian
TLS: Fix test failures on recent Debian/Ubuntu.
2020-05-14 18:15:17 +02:00
antirez
17546e831e Merge branch 'free_clients_during_loading' into unstable 2020-05-14 11:28:08 +02:00
antirez
94e78d22fd Regression test for #7249. 2020-05-14 11:27:31 +02:00
Oran Agra
00323f342d fix unstable replication test
This test, which covers various flows of the diskless master, was
failing randomly from time to time.

the failure was:
[err]: diskless all replicas drop during rdb pipe in tests/integration/replication.tcl
log message of '*Diskless rdb transfer, last replica dropped, killing fork child*' not found

What seems to have happened is that the master didn't detect that all
replicas had dropped by the time the replication ended; it thought that
one replica was still connected.

Now the test takes a few seconds longer, but it seems stable.
2020-05-12 08:59:09 +03:00
Oran Agra
b1913ae504 fix redis 6.0 not freeing closed connections during loading.
This bug was introduced by a recent change in which readQueryFromClient
uses freeClientAsync, and although freeClientsInAsyncFreeQueue is now
called from beforeSleep, that's not enough, since beforeSleep is not
called during loading in processEventsWhileBlocked. Furthermore,
afterSleep was called in that case but beforeSleep wasn't.

This bug also caused slowness, since the level-triggered mode of epoll
kept signaling these connections as readable, causing us to call
connRead again and again for all of them as they kept accumulating.

Now both beforeSleep and afterSleep are called, but not all of their
actions are performed during loading; some are reserved for the main
loop.

fixes issue #7215
2020-05-11 11:33:46 +03:00
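
For illustration, a minimal sketch of the idea behind this fix, with
hypothetical stand-in function names (not the actual Redis code): while
blocked on loading, the server still pumps the event loop and must reap
asynchronously freed clients there too.

    #include <stdio.h>

    /* Hypothetical stand-ins for the real event-loop pieces. */
    static void freeClientsInAsyncFreeQueueStub(void) {
        /* Reap connections queued by freeClientAsync(); without this,
         * level-triggered epoll keeps reporting the dead fds readable. */
    }
    static void processPendingEventsStub(void) {
        /* Poll and dispatch readable/writable file descriptors. */
    }

    /* While blocked (e.g. loading an RDB/AOF), run the subset of the
     * beforeSleep()/afterSleep() work that is safe during loading;
     * the rest stays reserved for the main loop. */
    void processEventsWhileBlockedSketch(void) {
        for (int iterations = 0; iterations < 4; iterations++) {
            freeClientsInAsyncFreeQueueStub();  /* beforeSleep() subset */
            processPendingEventsStub();
            /* the afterSleep() subset would run here */
        }
    }

    int main(void) { processEventsWhileBlockedSketch(); return 0; }
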
WuYunlong
ed326b743a Handle keys with hash tag when computing hash slot using tcl cluster client. 2020-05-11 13:14:18 +08:00
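
The hash-tag rule the tcl client needed is the standard Redis Cluster
one: if the key contains a {...} section with at least one character
inside, only that substring is hashed. A self-contained sketch in C for
illustration (the commit itself patches the Tcl test client; the
bitwise CRC16-XMODEM below stands in for Redis's table-driven version):

    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC16-XMODEM (poly 0x1021, init 0), the variant Redis
     * Cluster uses; the real code is table-driven. */
    static uint16_t crc16(const char *buf, int len) {
        uint16_t crc = 0;
        for (int i = 0; i < len; i++) {
            crc ^= (uint16_t)(unsigned char)buf[i] << 8;
            for (int j = 0; j < 8; j++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* Hash only the {...} section when present and non-empty. */
    static int keyHashSlot(const char *key, int keylen) {
        int s, e;
        for (s = 0; s < keylen; s++)
            if (key[s] == '{') break;
        if (s == keylen) return crc16(key, keylen) & 16383;
        for (e = s + 1; e < keylen; e++)
            if (key[e] == '}') break;
        if (e == keylen || e == s + 1) return crc16(key, keylen) & 16383;
        return crc16(key + s + 1, e - s - 1) & 16383;
    }

    int main(void) {
        /* Both keys land on the same slot, because only "user1000"
         * inside the braces is hashed. */
        printf("%d\n", keyHashSlot("{user1000}.following", 20));
        printf("%d\n", keyHashSlot("{user1000}.followers", 20));
        return 0;
    }
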
WuYunlong
82752a96b1 Add a test to prove current tcl cluster client can not handle keys with hash tag. 2020-05-11 13:14:18 +08:00
Yossi Gottlieb
e2b59c13a3 TLS: Fix test failures on recent Debian/Ubuntu.
It seems that on some systems choosing specific TLS v1/v1.1 versions
no longer works as expected. The test is reduced to v1.2 now, which is
still good enough to test the mechanism, and matters most anyway.
2020-05-10 17:38:04 +03:00
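
The underlying idea, sketched with the OpenSSL >= 1.1.0 C API for
illustration (the actual test drives this through the test suite's TLS
options, not C code):

    #include <openssl/ssl.h>

    /* Pin a context to TLSv1.2 only; recent Debian/Ubuntu builds
     * increasingly disable TLSv1/TLSv1.1 at the system level. */
    SSL_CTX *make_tls12_only_ctx(void) {
        SSL_CTX *ctx = SSL_CTX_new(TLS_method());
        if (ctx == NULL) return NULL;
        SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);
        SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION);
        return ctx;
    }

    int main(void) {
        SSL_CTX *ctx = make_tls12_only_ctx();
        if (ctx) SSL_CTX_free(ctx);
        return ctx ? 0 : 1;
    }
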
antirez
e7c236b187 Test: --dont-clean should do first cleanup. 2020-05-05 13:18:53 +02:00
Salvatore Sanfilippo
de51f56d3d Merge pull request #7179 from bytedance/cpu-affinity
Support setcpuaffinity on linux/bsd
2020-05-04 10:56:20 +02:00
Oran Agra
2b9d070df3 add daily github actions with libc malloc and valgrind
* fix memory leaks with diskless replica short read.
* fix a few timing issues with valgrind runs
* fix issue with valgrind and watchdog schedule signal

About the valgrind WD (watchdog) issue:
the stack trace test in logging.tcl has issues with valgrind:
==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
==28808==   too small or bad protection modes

It seems to be a valgrind bug with SA_ONSTACK.
SA_ONSTACK seems unneeded since the WD is not recursive (SA_NODEFER was
removed); also, it's not clear it is even valid without a call to
sigaltstack().
2020-05-04 09:52:20 +03:00
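
A minimal sketch of the signal-handler setup in question, assuming the
general shape of Redis's SIGALRM-based watchdog; the point is what is
not in sa_flags:

    #define _POSIX_C_SOURCE 200809L
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void watchdogHandler(int sig, siginfo_t *info, void *secret) {
        (void)sig; (void)info; (void)secret;
        /* Redis logs a stack trace here; write() keeps this demo
         * handler async-signal-safe. */
        const char msg[] = "--- WATCHDOG TIMER EXPIRED ---\n";
        (void)!write(STDERR_FILENO, msg, sizeof(msg) - 1);
    }

    int main(void) {
        struct sigaction act;
        memset(&act, 0, sizeof(act));
        /* SA_SIGINFO only: no SA_ONSTACK (which trips the valgrind
         * stack-extension error above) and no SA_NODEFER (the handler
         * is not meant to re-enter itself). */
        act.sa_flags = SA_SIGINFO;
        act.sa_sigaction = watchdogHandler;
        sigaction(SIGALRM, &act, NULL);
        alarm(1);  /* arm a 1-second watchdog for the demo */
        pause();   /* wait for the signal to arrive */
        return 0;
    }
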
zhenwei pi
2c853869bf Support setcpuaffinity on linux/bsd
Currently, there are several types of threads/child processes in a
redis server. Sometimes we need to deeply optimise the performance of
redis, so we would like to isolate these threads/processes.

There was some discussion about cpu affinity cases in this issue:
https://github.com/antirez/redis/issues/2863

So this patch implements cpu affinity settings via redis.conf; we can
then configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/
bgsave_cpulist with a cpu list.

Examples of cpulist in redis.conf:
server_cpulist 0-7:2      means cpu affinity 0,2,4,6
bio_cpulist 1,3           means cpu affinity 1,3
aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
bgsave_cpulist 1,10-11    means cpu affinity 1,10,11

Tested on linux/freebsd; both work fine.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
2020-05-02 21:19:47 +08:00
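
A rough, Linux-only sketch of what a cpulist option boils down to
(hypothetical helper, not the patch's actual parser; the real code also
covers the BSD cpuset API):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Parse a cpulist such as "0-7:2" or "1,10-11" into a cpu_set_t
     * and pin the calling process to it. */
    static int setCpuAffinityFromList(const char *cpulist) {
        cpu_set_t set;
        CPU_ZERO(&set);
        char *copy = strdup(cpulist), *save = NULL;
        for (char *tok = strtok_r(copy, ",", &save); tok != NULL;
             tok = strtok_r(NULL, ",", &save)) {
            int lo, hi, step = 1;
            int n = sscanf(tok, "%d-%d:%d", &lo, &hi, &step);
            if (n >= 2) {                 /* range, optional ":step" */
                for (int c = lo; c <= hi; c += step) CPU_SET(c, &set);
            } else if (n == 1) {          /* single cpu, e.g. "1" */
                CPU_SET(lo, &set);
            }
        }
        free(copy);
        return sched_setaffinity(0, sizeof(set), &set); /* 0 = self */
    }

    int main(void) {
        if (setCpuAffinityFromList("0-7:2") != 0)  /* cpus 0,2,4,6 */
            perror("sched_setaffinity");
        return 0;
    }
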
Salvatore Sanfilippo
a8561470ca Merge pull request #7134 from guybe7/xstate_command
Extend XINFO STREAM output
2020-04-28 16:31:00 +02:00
Guy Benoish
b83ae07117 Extend XINFO STREAM output
Introducing XINFO STREAM <key> FULL
2020-04-28 13:03:43 +03:00
Oran Agra
a29e617381 fix loading race in psync2 tests 2020-04-28 09:18:01 +03:00
Oran Agra
5633862924 Keep track of meaningful replication offset in replicas too
Now both the master and the replicas keep track of the last
replication offset that contains meaningful data (ignoring trailing
pings), and both trim that tail from the replication backlog and from
the offset they try to use for psync.

The implication is that if a replica missed some pings, or even has
extra pings that the promoted replica doesn't have, it will still be
able to psync (and avoid a full sync).

The downside (which was already committed) is that replicas running old
code may fail to psync, since the promoted replica trims pings from its
backlog.

This commit adds a test that reproduces several cases of promotions
and demotions with stale and non-stale pings.

Background:
The meaningful offset on the master was added recently to solve a
problem where the master is left all alone, injecting PINGs into its
backlog when no one is listening, and then gets demoted and tries to
replicate from a replica that didn't have any of the PINGs (or at
least not the last ones).

However, consider this case:
master A has two replicas (B and C) replicating directly from it.
There's no traffic at all and no network issues, just many pings in the
tail of the backlog. Now B gets promoted, A becomes a replica of B, and
C remains a replica of A. When A gets demoted, it trims the pings from
its backlog and successfully replicates from B. However, C is still
aware of these PINGs; when it disconnects and re-connects to A, it will
ask for something that's no longer in the backlog (since A trimmed the
tail of its backlog), and be forced to do a full sync (something it
didn't have to do before the meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and
there; it turns out the reason was PINGs. Investigating it shows the
following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is replica of #1
Now we see that when #1 is demoted, it prints:
17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
and when #3 connects to the demoted #2, #2 says:
17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

So the issue here is that the meaningful offset feature saved the day
for the demoted master (since it needs to sync from a replica that
didn't get the last ping), but it didn't help the other replica, which
did get the last ping.
2020-04-27 15:52:23 +02:00
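
A conceptual sketch of the bookkeeping described above, with invented
field and function names (the actual implementation lives in Redis's
replication code):

    #include <stdio.h>

    /* Both roles track the last offset that carried real data, so the
     * trailing-PING tail can be ignored when attempting psync. */
    typedef struct replState {
        long long master_repl_offset; /* all bytes written, PINGs too */
        long long meaningful_offset;  /* offset before trailing PINGs */
    } replState;

    /* On every feed of the replication stream, only non-PING data
     * advances the meaningful offset. */
    static void feedReplStream(replState *st, int is_ping, long long len) {
        st->master_repl_offset += len;
        if (!is_ping) st->meaningful_offset = st->master_repl_offset;
    }

    int main(void) {
        replState st = {0, 0};
        feedReplStream(&st, 0, 100); /* real writes */
        feedReplStream(&st, 1, 14);  /* a trailing PING */
        /* On demotion, psync would be attempted from the meaningful
         * offset, excluding the 14-byte PING tail (as in the log
         * lines above). */
        printf("repl=%lld meaningful=%lld\n",
               st.master_repl_offset, st.meaningful_offset);
        return 0;
    }
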
antirez
75cf725568 Merge branch 'unstable' of github.com:/antirez/redis into unstable 2020-04-24 16:59:56 +02:00
antirez
ae3cf7911a LCS -> STRALGO LCS.
STRALGO should be a container for mostly read-only string
algorithms in Redis. The algorithms should have two main
characteristics:

1. They should be non-trivial to compute, and often not part of
programming language standard libraries.
2. They should be fast enough that it is a good idea to have optimized C
implementations.

Next thing I would love to see? A small strings compression algorithm.
2020-04-24 16:54:32 +02:00
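
For reference, the textbook dynamic-programming recurrence behind an
LCS length computation, as a standalone C sketch (STRALGO LCS itself
also supports options such as IDX that are not shown here):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Length-only LCS via dynamic programming: O(n*m) time, O(m)
     * memory with two rolling rows. */
    static size_t lcsLength(const char *a, const char *b) {
        size_t n = strlen(a), m = strlen(b);
        size_t *prev = calloc(m + 1, sizeof(size_t));
        size_t *cur = calloc(m + 1, sizeof(size_t));
        for (size_t i = 1; i <= n; i++) {
            for (size_t j = 1; j <= m; j++) {
                if (a[i - 1] == b[j - 1])
                    cur[j] = prev[j - 1] + 1;  /* extend a match */
                else
                    cur[j] = prev[j] > cur[j - 1] ? prev[j] : cur[j - 1];
            }
            memcpy(prev, cur, (m + 1) * sizeof(size_t));
        }
        size_t result = prev[m];
        free(prev);
        free(cur);
        return result;
    }

    int main(void) {
        /* "mytext" is the longest common subsequence, so this
         * prints 6. */
        printf("%zu\n", lcsLength("ohmytext", "mynewtext"));
        return 0;
    }
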
Salvatore Sanfilippo
8887bd306c Merge pull request #7114 from guybe7/stream_tag_xsetid
Add the stream tag to XSETID tests
2020-04-23 16:29:46 +02:00
Salvatore Sanfilippo
0dd08d746e Merge pull request #7123 from fayadexinqing/optimizeClusterSlots
Optimize the command of cluster slots
2020-04-23 16:18:22 +02:00
antirez
4222dfae6e Tracking: test expired keys notifications. 2020-04-22 11:45:34 +02:00
antirez
3b98f3f2ce Tracking: NOLOOP tests. 2020-04-22 11:24:19 +02:00
yanhui13
fc3d393607 add tcl test for cluster slots 2020-04-21 16:56:10 +08:00
Guy Benoish
b43d2fa48b Add the stream tag to XSETID tests 2020-04-19 15:59:58 +03:00
antirez
c97372af78 A few comments and name changes for #7103. 2020-04-17 10:51:12 +02:00
Oran Agra
4ccb0cee4b testsuite run the defrag latency test solo
This test is time sensitive and sometimes fails to pass below the
latency threshold, even on strong machines.

This test was the reason we were running just 2 parallel tests in the
github actions CI; reverting this.
2020-04-16 18:09:22 +03:00
antirez
f543150d0b Merge branch 'lcs' into unstable 2020-04-06 13:51:55 +02:00
antirez
9cd8e2d749 LCS: more tests. 2020-04-06 13:51:49 +02:00
antirez
2f1c1ffa0b LCS tests. 2020-04-06 13:45:37 +02:00
Oran Agra
98208c3f30 different fix for runtest --host --port 2020-04-06 09:41:14 +03:00
Guy Benoish
0529bc185a Try to fix time-sensitive tests in blockonkeys.tcl
There is an inherent race between the deferring client and the
"main" client of the test: while the deferring client issues a blocking
command, we can't know for sure that by the time the "main" client
tries to issue another command (usually one that unblocks the deferring
client) the deferring client is even blocked...
For lack of a better choice this commit uses TCL's 'after' in order
to give the deferring client some time to issue its blocking
command before the "main" client does its thing.
This problem probably exists in many other tests, but this commit
tries to fix blockonkeys.tcl.
2020-04-03 14:51:45 +03:00
Salvatore Sanfilippo
d07c340a58 Merge pull request #7030 from valentinogeron/xread-in-lua
XREAD and XREADGROUP should not be allowed from scripts when BLOCK op…
2020-04-03 11:14:13 +02:00
Guy Benoish
74daccece3 Fix no-negative-zero test 2020-04-02 18:41:29 +03:00
Salvatore Sanfilippo
ee34e741f0 Merge pull request #6546 from guybe7/fix_neg_zero
Make sure Redis does not reply with negative zero
2020-04-02 16:26:57 +02:00
Salvatore Sanfilippo
079cf0ece4 Merge pull request #7029 from valentinogeron/fix-xack
XACK should be executed in an "all or nothing" fashion.
2020-04-02 11:23:23 +02:00
Guy Benoish
bc8c56a71a Fix memory corruption in moduleHandleBlockedClients
By using a "circular BRPOPLPUSH"-like scenario it was
possible to get the same client on db->blocking_keys
twice (see comment in moduleTryServeClientBlockedOnKey).

The fix was actually already implemented in
moduleTryServeClientBlockedOnKey, but it had a bug:
the function should return 0 or 1 (not OK or ERR).

Other changes:
1. Added two commands to the blockonkeys.c test module (to
   reproduce the case described above)
2. Simplified blockonkeys.c in order to make testing easier
3. Cast raxSize() to avoid a warning with the format spec
2020-04-01 12:53:26 +03:00
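
The return-value bug above boils down to a classic status-code versus
boolean mixup; a tiny illustration (the C_OK/C_ERR values match Redis's
server.h, the function is a made-up stand-in):

    #include <stdio.h>

    #define C_OK   0   /* as in Redis's server.h */
    #define C_ERR -1

    /* A predicate-style caller asks "was the client served?", so the
     * function must answer 0/1. Returning C_OK (0) on success would
     * read as "not served" and lead the caller astray. */
    static int tryServeClientSketch(int servable) {
        if (!servable) return 0;  /* not served */
        /* ... serve the blocked client ... */
        return 1;                 /* served: 1, NOT C_OK */
    }

    int main(void) {
        printf("%d %d\n", tryServeClientSketch(0), tryServeClientSketch(1));
        return 0;
    }
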
Salvatore Sanfilippo
2399ed885b Merge pull request #7037 from guybe7/fix_module_replicate_multi
Modules: Test MULTI/EXEC replication of RM_Replicate
2020-03-31 17:00:57 +02:00
Guy Benoish
e5309fea93 RENAME can unblock XREADGROUP
Other changes:
Support stream in serverLogObjectDebugInfo
2020-03-31 17:41:10 +03:00
Guy Benoish
fd914fdd52 Modules: Test MULTI/EXEC replication of RM_Replicate
Make sure call() doesn't wrap replicated commands with
a redundant MULTI/EXEC

Other, unrelated changes:
1. Fix a formatting compiler warning in INFO CLIENTS
2. Use CLIENT_ID_AOF instead of UINT64_MAX
2020-03-31 13:55:51 +03:00
antirez
0b7cdb9dfb Fix the propagate Tcl test after module changes. 2020-03-31 12:09:38 +02:00
antirez
55a751597a Modify the propagate unit test to show more cases. 2020-03-31 12:04:06 +02:00
antirez
ab89ab5173 Fix module commands propagation double MULTI bug.
b512cb40 introduced automatic wrapping of MULTI/EXEC for the
alsoPropagate API. However, this collides with the built-in mechanism
already present in module.c. To avoid complex changes near Redis 6 GA,
this commit introduces the ability to exclude call()'s MULTI/EXEC
wrapping for alsoPropagate, in order to continue using the old code
paths in module.c.
2020-03-31 11:00:45 +02:00
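
A toy sketch of the escape hatch described above (the flag name is
invented; the real plumbing is in Redis's call()/alsoPropagate code):

    #include <stdio.h>

    #define PROPAGATE_NO_WRAP (1 << 0)  /* hypothetical flag */

    /* call() only adds its own MULTI/EXEC around multiple propagated
     * commands when the caller has not opted out; module.c opts out
     * and keeps doing its own wrapping. */
    static void propagatePending(int flags, int pendingCommands) {
        int wrap = pendingCommands > 1 && !(flags & PROPAGATE_NO_WRAP);
        if (wrap) puts("MULTI");
        for (int i = 0; i < pendingCommands; i++)
            printf("  ...propagated command %d...\n", i + 1);
        if (wrap) puts("EXEC");
    }

    int main(void) {
        propagatePending(0, 2);                 /* wrapped by call() */
        propagatePending(PROPAGATE_NO_WRAP, 2); /* module.c: no wrap */
        return 0;
    }
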
Valentino Geron
53a7041535 XREAD and XREADGROUP should not be allowed from scripts when the BLOCK option is being used 2020-03-26 15:46:31 +02:00
Valentino Geron
81c1e22b8d XACK should be executed in an "all or nothing" fashion.
First, we must parse the IDs so that we abort ASAP.
The return value of this command cannot be an error if
the client successfully acknowledged some messages,
so it should be executed in an "all or nothing" fashion.
2020-03-26 15:40:23 +02:00
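
The "all or nothing" shape, as a generic two-pass sketch (the ID parser
and the acknowledgement step are stand-ins, not the real XACK code):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { unsigned long long ms, seq; } streamID;

    /* Stand-in for the real stream ID parser. */
    static int parseID(const char *s, streamID *id) {
        return sscanf(s, "%llu-%llu", &id->ms, &id->seq) == 2 ? 0 : -1;
    }

    /* Pass 1 parses every ID so a bad argument aborts before any side
     * effect; pass 2 acknowledges and can no longer fail, so the reply
     * is never an error after partial work. */
    static int xackAllOrNothing(int argc, char **argv) {
        streamID *ids = malloc(sizeof(*ids) * argc);
        for (int i = 0; i < argc; i++) {
            if (parseID(argv[i], &ids[i]) != 0) { free(ids); return -1; }
        }
        int acked = 0;
        for (int i = 0; i < argc; i++) {
            /* stand-in for removing the entry from the PEL */
            printf("ack %llu-%llu\n", ids[i].ms, ids[i].seq);
            acked++;
        }
        free(ids);
        return acked;
    }

    int main(void) {
        char *ids[] = {"1-1", "2-0"};
        printf("acked=%d\n", xackAllOrNothing(2, ids));
        return 0;
    }
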
Salvatore Sanfilippo
307ff72522 Merge pull request #6644 from oranagra/stream_aofrw
AOFRW on an empty stream created with MKSTREAM loads badly
2020-03-26 11:12:44 +01:00
Oran Agra
1ed18d7cd7 AOFRW on an empty stream created with MKSTREAM loads badly
The AOF will be loaded successfully, but the stream will be missing,
i.e. there are inconsistencies with the original db.

This was because XADD with an id of 0-0 would error.

Add a test to reproduce.
2020-03-25 21:47:57 +02:00