1994 Commits

Author SHA1 Message Date
WuYunlong
eb2c8b2c61 Add a test to prove the current tcl cluster client cannot handle keys with hash tags. 2020-05-22 12:37:49 +02:00
Oran Agra
00d8b92b89 fix valgrind test failure in replication test
In b4416280c I added more keys to that test to make it run longer,
but under valgrind this now means the test times out; give valgrind
more time.
2020-05-22 12:37:49 +02:00
Oran Agra
5e17e6276c add regression test for the race in #7205
With the original version of 6.0.0, this test detects an excessive full
sync.
With the fix in 1a7cd2c0e, it detects memory corruption, especially
when using the libc allocator, with or without valgrind.
2020-05-22 12:37:49 +02:00
antirez
96e7c011e2 Improve the PSYNC2 test reliability. 2020-05-22 12:37:49 +02:00
John Sully
27eb239f1a Fix bad merge in CI.yml
Former-commit-id: 6311d709c39b3bacaeab77b18033010f1b548f81
2020-05-21 22:09:06 -04:00
Qu Chen
42f5da5d2d Always disconnect chained replicas when the replica performs PSYNC with the master, to avoid replication offset mismatch between the master and chained replicas. 2020-05-21 18:42:10 -07:00
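A minimal sketch of the shape of this fix: disconnectSlaves() is Redis's
real helper in replication.c, but the hook name below is hypothetical
and the body is stubbed.

#include <stdio.h>

static void disconnectSlaves(void) {
    /* Stand-in for the real disconnectSlaves() in replication.c,
     * which closes every sub-replica connection. */
    puts("closing all sub-replica links");
}

/* Called whenever this replica performs PSYNC with its master (full or
 * partial): the dropped sub-replicas reconnect and resync, so their
 * offsets cannot drift from the master's. */
void afterPsyncWithMaster(void) {
    disconnectSlaves();
}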
John Sully
3324d4dc0f Merge commit '5719b3054a534e62c25ae97680ecd4f7238ba484' into unstable
Former-commit-id: 3e03f308b564cd94f4a6407c80792d080e0f83c5
2020-05-21 17:55:09 -04:00
John Sully
b6500a08dc Merge commit '026cc11b056f063631d990f1a9db45b4e583974e' into unstable
Former-commit-id: 95cecb0229af0278cf614ffd746ba829ae7c897c
2020-05-21 17:45:15 -04:00
John Sully
c4db71f971 Merge commit '024c380b9da02bc4112822c0f5f9ac1388b4205b' into unstable
Former-commit-id: 7676f5b15f24a044257250b8891d23b14642da48
2020-05-21 17:36:53 -04:00
John Sully
24322b9b6d Merge commit 'eba28e2cea0b2632cf751426ada02adf24f273db' into unstable
Former-commit-id: d5b057534a3dbf50f94465332107da2490811946
2020-05-21 17:32:53 -04:00
John Sully
ba7483c20b Merge commit 'c35a53169ffdbdf73f91224b6e62f6129fb9a838' into unstable
Former-commit-id: 7a15f6dfc7331d6759201137971e7c2965672be8
2020-05-21 17:30:56 -04:00
Oran Agra
88d71f4793 fix a rare active defrag edge case bug leading to stagnation
There's a rare case which leads to stagnation in the defragger, causing
it to keep scanning the keyspace and do nothing (not moving any
allocation). This happens when all the allocator slabs of a certain bin
have the same % utilization, but the slab from which new allocations
are made has a lower utilization.

This commit fixes it by removing the current slab from the overall
average utilization of the bin, eliminating any precision loss in the
utilization calculation, and moving the defrag decision to reside
inside jemalloc.

It also adds a test that consistently reproduces this issue.
2020-05-20 16:04:42 +03:00
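A simplified, self-contained sketch of the decision described above (the
struct and function names are hypothetical, not the actual jemalloc
patch): the slab serving new allocations is never a source, it is
excluded from the bin's average, and the comparison is cross-multiplied
so no precision is lost to integer division.

#include <stdbool.h>
#include <stddef.h>

typedef struct {
    size_t bin_regs_total;  /* regions across all slabs of the bin      */
    size_t bin_regs_used;   /* used regions across all slabs of the bin */
    size_t cur_regs_total;  /* regions in the slab serving new allocs   */
    size_t cur_regs_used;   /* used regions in that slab                */
} bin_stats_t;

/* Should an allocation living in a slab with slab_used out of
 * slab_total regions be moved? Moving anything out of the current slab
 * is pointless (the new copy would land right back in it), and
 * including that slab in the bin average is what caused the
 * stagnation: when it was the only slab with lower utilization, every
 * other slab looked "above average" and nothing ever moved. */
static bool should_defrag(const bin_stats_t *b, bool is_current_slab,
                          size_t slab_used, size_t slab_total) {
    if (is_current_slab) return false;
    size_t other_used  = b->bin_regs_used  - b->cur_regs_used;
    size_t other_total = b->bin_regs_total - b->cur_regs_total;
    if (other_total == 0) return false;
    /* slab_used/slab_total <= other_used/other_total, cross-multiplied
     * to avoid integer-division precision loss; <= rather than < so
     * slabs matching the average still feed the underfull current
     * slab. */
    return slab_used * other_total <= other_used * slab_total;
}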
Salvatore Sanfilippo
9fba05f758 Merge pull request #7232 from trevor211/handleHashTagWhenComputingHashSlot
Tcl client: support hash-tagged keys.
2020-05-19 09:23:44 +02:00
Oran Agra
75c11d7fec fix valgrind test failure in replication test
In b4416280c I added more keys to that test to make it run longer,
but under valgrind this now means the test times out; give valgrind
more time.
2020-05-18 10:26:53 +03:00
antirez
0ca2f4f824 Merge branch 'unstable' of github.com:/antirez/redis into unstable 2020-05-17 18:24:48 +02:00
antirez
96bb0c9471 Improve the PSYNC2 test reliability. 2020-05-17 18:24:34 +02:00
Oran Agra
357aace895 add regression test for the race in #7205
With the original version of 6.0.0, this test detects an excessive full
sync.
With the fix in 1a7cd2c0e, it detects memory corruption, especially
when using the libc allocator, with or without valgrind.
2020-05-17 18:26:02 +03:00
Yossi Gottlieb
16ba33c05b TLS: Fix test failures on recent Debian/Ubuntu.
It seems that on some systems choosing specific TLS v1/v1.1 versions no
longer works as expected. The test is reduced to v1.2 now, which is
still good enough to test the mechanism, and matters most anyway.
2020-05-15 22:23:24 +02:00
Salvatore Sanfilippo
8f5c2bc8aa Merge pull request #7229 from yossigo/tls-fails-on-recent-debian
TLS: Fix test failures on recent Debian/Ubuntu.
2020-05-14 18:15:17 +02:00
Oran Agra
9da134cd88 fix redis 6.0 not freeing closed connections during loading.
This bug was introduced by a recent change in which readQueryFromClient
uses freeClientAsync, and despite the fact that
freeClientsInAsyncFreeQueue is now in beforeSleep, that's not enough
since it's not called during loading in processEventsWhileBlocked.
Furthermore, afterSleep was called in that case but beforeSleep wasn't.

This bug also caused slowness, since the level-triggered mode of epoll
kept signaling these connections as readable, causing us to keep doing
connRead again and again for all of these, which kept accumulating.

Now both beforeSleep and afterSleep are called, but not all of their
actions are performed during loading; some are reserved for the main
loop.

fixes issue #7215
2020-05-14 11:29:43 +02:00
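A self-contained sketch of the shape of the fix (stubbed, hypothetical
names; not the actual Redis source): the loading-time event loop now
runs a reduced beforeSleep/afterSleep pair, whose essential action is
draining the async-free queue.

#include <stdio.h>

static int ProcessingEventsWhileBlocked = 0;

static void freeClientsInAsyncFreeQueue(void) {
    /* In Redis this walks server.clients_to_close and frees each
     * client that readQueryFromClient scheduled via freeClientAsync(). */
    puts("freeing async-closed clients");
}

static void processEvents(void) {
    /* Stand-in for aeProcessEvents(): poll sockets, fire handlers. */
}

static void beforeSleepLite(void) {
    /* Only the subset of beforeSleep() that is safe during loading;
     * the rest is reserved for the main loop. Without this call,
     * level-triggered epoll keeps reporting the dead connections. */
    freeClientsInAsyncFreeQueue();
}

static void afterSleepLite(void) { /* reduced afterSleep() */ }

void processEventsWhileBlocked(void) {
    ProcessingEventsWhileBlocked = 1;
    for (int i = 0; i < 4; i++) {  /* a few iterations per call */
        beforeSleepLite();
        processEvents();
        afterSleepLite();
    }
    ProcessingEventsWhileBlocked = 0;
}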
antirez
f7f219a137 Regression test for #7249. 2020-05-14 11:29:43 +02:00
Oran Agra
5c41802d55 fix unstable replication test
This test, which has coverage for various flows of the diskless master,
was failing randomly from time to time.

the failure was:
[err]: diskless all replicas drop during rdb pipe in tests/integration/replication.tcl
log message of '*Diskless rdb transfer, last replica dropped, killing fork child*' not found

What seems to have happened is that the master didn't detect that all
replicas had dropped by the time the replication ended; it thought that
one replica was still connected.

Now the test takes a few seconds longer, but it seems stable.
2020-05-14 11:29:43 +02:00
antirez
c38fd1f661 Merge branch 'free_clients_during_loading' into unstable 2020-05-14 11:28:08 +02:00
antirez
dec6fd3adc Regression test for #7249. 2020-05-14 11:27:31 +02:00
Oran Agra
b4416280cf fix unstable replication test
This test, which has coverage for various flows of the diskless master,
was failing randomly from time to time.

the failure was:
[err]: diskless all replicas drop during rdb pipe in tests/integration/replication.tcl
log message of '*Diskless rdb transfer, last replica dropped, killing fork child*' not found

What seems to have happened is that the master didn't detect that all
replicas had dropped by the time the replication ended; it thought that
one replica was still connected.

Now the test takes a few seconds longer, but it seems stable.
2020-05-12 08:59:09 +03:00
John
181fadb708 more reliability fixes for multimaster
Former-commit-id: 3543a3c763de91a4d76bca89659fec9bf6b7a1c8
2020-05-11 05:38:21 -04:00
John
daaf82b673 more reliability fixes for multimaster
Former-commit-id: fd5b541260908423c35227ff9e42a83f96ace6c0
2020-05-11 09:37:42 +00:00
John
3d6f990104 Make multimaster tests more reliable
Former-commit-id: 3122912920973cb433d625a09b183c3f538e2523
2020-05-11 05:23:47 -04:00
John
0e024808c2 Make multimaster tests more reliable
Former-commit-id: 4fe59ba11b720864ea0124885b358cb72127cc2d
2020-05-11 09:22:27 +00:00
Oran Agra
905e28ee87 fix redis 6.0 not freeing closed connections during loading.
This bug was introduced by a recent change in which readQueryFromClient
uses freeClientAsync, and despite the fact that
freeClientsInAsyncFreeQueue is now in beforeSleep, that's not enough
since it's not called during loading in processEventsWhileBlocked.
Furthermore, afterSleep was called in that case but beforeSleep wasn't.

This bug also caused slowness, since the level-triggered mode of epoll
kept signaling these connections as readable, causing us to keep doing
connRead again and again for all of these, which kept accumulating.

Now both beforeSleep and afterSleep are called, but not all of their
actions are performed during loading; some are reserved for the main
loop.

fixes issue #7215
2020-05-11 11:33:46 +03:00
WuYunlong
70c4851f2c Handle keys with hash tags when computing the hash slot in the tcl cluster client. 2020-05-11 13:14:18 +08:00
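For reference, the rule the tcl client has to mirror is keyHashSlot()
from Redis's cluster.c, reproduced below (crc16() is Redis's own, from
crc16.c): if the key contains a non-empty {...} section, only the text
between the first { and the next } is hashed, so e.g.
{user1000}.following and {user1000}.followers land on the same slot.

#include <stdint.h>

uint16_t crc16(const char *buf, int len); /* provided by crc16.c */

unsigned int keyHashSlot(char *key, int keylen) {
    int s, e; /* start-end indexes of { and } */

    for (s = 0; s < keylen; s++)
        if (key[s] == '{') break;

    /* No '{'? Hash the whole key. This is the base case. */
    if (s == keylen) return crc16(key,keylen) & 0x3FFF;

    /* '{' found? Check if we have the corresponding '}'. */
    for (e = s+1; e < keylen; e++)
        if (key[e] == '}') break;

    /* No '}' or nothing between {}? Hash the whole key. */
    if (e == keylen || e == s+1) return crc16(key,keylen) & 0x3FFF;

    /* If we are here there is both a { and a } on its right. Hash
     * what is in the middle between { and }. */
    return crc16(key+s+1,e-s-1) & 0x3FFF;
}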
WuYunlong
bc9efb577b Add a test to prove the current tcl cluster client cannot handle keys with hash tags. 2020-05-11 13:14:18 +08:00
John Sully
d1948ab944 Merge branch 'unstable' into keydbpro
Former-commit-id: d89c15518f984c1d4d4e7638a4e8ac5aa499632a
2020-05-11 00:53:38 -04:00
John Sully
ec82755227 hrename tests
Former-commit-id: f77c227b2d34b7ec74c1fc993e03f063dcbfa090
2020-05-10 17:14:44 -04:00
Yossi Gottlieb
4d1178cc24 TLS: Fix test failures on recent Debian/Ubuntu.
It seems that on some systems choosing specific TLS v1/v1.1 versions no
longer works as expected. The test is reduced to v1.2 now, which is
still good enough to test the mechanism, and matters most anyway.
2020-05-10 17:38:04 +03:00
antirez
e48c37316e Test: --dont-clean should do first cleanup. 2020-05-08 10:37:36 +02:00
Oran Agra
3d3861dd88 add daily github actions with libc malloc and valgrind
* fix memory leaks with diskless replica short read.
* fix a few timing issues with valgrind runs
* fix issue with valgrind and watchdog schedule signal

About the valgrind WD issue:
the stack trace test in logging.tcl has issues with valgrind:
==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
==28808==   too small or bad protection modes

It seems to be some valgrind bug with SA_ONSTACK.
SA_ONSTACK seems unneeded since the WD is not recursive (SA_NODEFER was
removed); also, it's not clear it is even valid without a call to
sigaltstack().
2020-05-08 10:37:35 +02:00
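A minimal sketch of the resulting watchdog signal setup (the handler
body is stubbed; in Redis it logs the blocked thread's stack trace):
the handler is installed without SA_ONSTACK, and without SA_NODEFER,
which had already been removed.

#define _POSIX_C_SOURCE 199309L
#include <signal.h>
#include <string.h>

static void watchdogSignalHandler(int sig, siginfo_t *info, void *ctx) {
    (void)sig; (void)info; (void)ctx;
    /* Log the current stack trace here. With no SA_ONSTACK, valgrind
     * no longer needs to deliver the signal on an alternate stack. */
}

void setupWatchdogSignal(void) {
    struct sigaction act;
    memset(&act, 0, sizeof(act));
    sigemptyset(&act.sa_mask);
    act.sa_flags = SA_SIGINFO;        /* no SA_ONSTACK, no SA_NODEFER */
    act.sa_sigaction = watchdogSignalHandler;
    sigaction(SIGALRM, &act, NULL);   /* the watchdog fires via SIGALRM */
}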
zhenwei pi
d6436eb7cf Support setcpuaffinity on linux/bsd
Currently, there are several types of threads/child processes of a
redis server. Sometimes we need to deeply optimise the performance of
redis, so we would like to isolate threads/processes.

There was some discussion about cpu affinity cases in this issue:
https://github.com/antirez/redis/issues/2863

This patch implements cpu affinity settings via redis.conf, so we can
configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/
bgsave_cpulist as cpu lists.

Examples of cpulist in redis.conf:
server_cpulist 0-7:2      means cpu affinity 0,2,4,6
bio_cpulist 1,3           means cpu affinity 1,3
aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
bgsave_cpulist 1,10-11    means cpu affinity 1,10,11

Tested on linux/freebsd; both work fine.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
2020-05-08 10:37:35 +02:00
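A hedged sketch of what such a cpulist turns into (Linux-only and
simplified; the real setcpuaffinity() also covers the BSD cpuset API):
each comma-separated token is a single cpu or a lo-hi range with an
optional :step, collected into a cpu_set_t and applied with
sched_setaffinity().

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void setcpuaffinity_sketch(const char *cpulist) {
    cpu_set_t set;
    CPU_ZERO(&set);

    char *dup = strdup(cpulist), *tok, *save = NULL;
    for (tok = strtok_r(dup, ",", &save); tok;
         tok = strtok_r(NULL, ",", &save)) {
        int lo = 0, hi = 0, step = 1;
        if (sscanf(tok, "%d-%d:%d", &lo, &hi, &step) >= 2) {
            for (int c = lo; c <= hi; c += step) CPU_SET(c, &set);
        } else if (sscanf(tok, "%d", &lo) == 1) {
            CPU_SET(lo, &set);           /* single cpu, e.g. "3" */
        }
    }
    free(dup);

    /* pid 0 = the calling thread; the server, bio threads, AOF rewrite
     * and bgsave children would each call this with their own list. */
    if (sched_setaffinity(0, sizeof(set), &set) == -1)
        perror("sched_setaffinity");
}

Calling setcpuaffinity_sketch("0-7:2") pins the caller to cpus 0,2,4,6,
matching the first example above.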
antirez
e49d97298a Test: --dont-clean should do first cleanup. 2020-05-05 13:18:53 +02:00
Salvatore Sanfilippo
acf566b291 Merge pull request #7179 from bytedance/cpu-affinity
Support setcpuaffinity on linux/bsd
2020-05-04 10:56:20 +02:00
Oran Agra
deee2c1ef2 add daily github actions with libc malloc and valgrind
* fix memory leaks with diskless replica short read.
* fix a few timing issues with valgrind runs
* fix issue with valgrind and watchdog schedule signal

About the valgrind WD issue:
the stack trace test in logging.tcl has issues with valgrind:
==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
==28808==   too small or bad protection modes

It seems to be some valgrind bug with SA_ONSTACK.
SA_ONSTACK seems unneeded since the WD is not recursive (SA_NODEFER was
removed); also, it's not clear it is even valid without a call to
sigaltstack().
2020-05-04 09:52:20 +03:00
zhenwei pi
1a0deab2a5 Support setcpuaffinity on linux/bsd
Currently, there are several types of threads/child processes of a
redis server. Sometimes we need to deeply optimise the performance of
redis, so we would like to isolate threads/processes.

There was some discussion about cpu affinity cases in this issue:
https://github.com/antirez/redis/issues/2863

This patch implements cpu affinity settings via redis.conf, so we can
configure server_cpulist/bio_cpulist/aof_rewrite_cpulist/
bgsave_cpulist as cpu lists.

Examples of cpulist in redis.conf:
server_cpulist 0-7:2      means cpu affinity 0,2,4,6
bio_cpulist 1,3           means cpu affinity 1,3
aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
bgsave_cpulist 1,10-11    means cpu affinity 1,10,11

Tested on linux/freebsd; both work fine.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
2020-05-02 21:19:47 +08:00
Guy Benoish
6c0bc608a1 Extend XINFO STREAM output
Introducing XINFO STREAM <key> FULL
2020-04-30 13:02:58 +02:00
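A small usage sketch of the new form, assuming hiredis as the client
library and "mystream" as an arbitrary example key: FULL returns the
stream's entries, groups and consumers in one nested reply instead of
just the summary counters.

#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* Put one entry in the stream so FULL has something to show. */
    redisReply *r = redisCommand(c, "XADD mystream * field value");
    if (r) freeReplyObject(r);

    /* New in this commit: the FULL variant. */
    r = redisCommand(c, "XINFO STREAM mystream FULL");
    if (r && r->type == REDIS_REPLY_ARRAY) {
        /* The reply alternates field names and values (length,
         * radix-tree-keys, ..., entries, groups). */
        printf("top-level elements: %zu\n", r->elements);
    }
    if (r) freeReplyObject(r);
    redisFree(c);
    return 0;
}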
Salvatore Sanfilippo
e0de7c0852 Merge pull request #7134 from guybe7/xstate_command
Extend XINFO STREAM output
2020-04-28 16:31:00 +02:00
Guy Benoish
1e2aee3919 Extend XINFO STREAM output
Introducing XINFO STREAM <key> FULL
2020-04-28 13:03:43 +03:00
Oran Agra
ea63aea72d fix loading race in psync2 tests 2020-04-28 11:20:15 +02:00
Oran Agra
d31c0c5264 fix loading race in psync2 tests 2020-04-28 09:18:01 +03:00
Oran Agra
e4d2bb62b2 Keep track of meaningful replication offset in replicas too
Now both master and replicas keep track of the last replication offset
that contains meaningful data (ignoring the trailing pings); both trim
that tail from the replication backlog, and from the offset they try
to use for psync.

The implication is that if a replica missed some pings, or even has
excess pings that the promoted replica doesn't have, it'll still be
able to psync (avoiding a full sync).

The downside (which was already committed) is that replicas running old
code may fail to psync, since the promoted replica trims pings from its
backlog.

This commit adds a test that reproduces several cases of promotions and
demotions with stale and non-stale pings.

Background:
The meaningful offset on the master was added recently to solve a
problem where the master is left all alone, injecting PINGs into its
backlog when no one is listening, and then gets demoted and tries to
replicate from a replica that didn't have any of the PINGs (or at
least not the last ones).

However, consider this case:
master A has two replicas (B and C) replicating directly from it.
There's no traffic at all, and also no network issues, just many pings
in the tail of the backlog. Now B gets promoted, A becomes a replica of
B, and C remains a replica of A. When A gets demoted, it trims the
pings from its backlog and successfully replicates from B. However, C
is still aware of these PINGs; when it disconnects and re-connects to
A, it'll ask for something that's no longer in the backlog (since A
trimmed the tail of its backlog), and be forced to do a full sync
(something it didn't have to do before the meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and
there; it turns out the reason was PINGs. Investigating it shows the
following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is replica of #1
Now we see that when #1 is demoted it prints:
17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
and when #3 connects to the demoted #2, #2 says:
17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

So the issue here is that the meaningful offset feature saved the day
for the demoted master (since it needs to sync from a replica that
didn't get the last ping), but it didn't help one of the other
replicas, which did get the last ping.
2020-04-27 15:52:49 +02:00
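A stubbed illustration of the mechanism (field and function names are
hypothetical, not the Redis source): the server remembers the offset of
the last non-PING byte it produced, and on demotion trims the PING tail
from the backlog so its PSYNC request starts at meaningful data.

typedef struct {
    long long master_repl_offset;   /* offset including trailing PINGs  */
    long long repl_meaningful_off;  /* offset of the last non-PING byte */
    long long repl_backlog_histlen; /* bytes currently in the backlog   */
} repl_state_t;

/* Called on every write to the replication stream. */
void feedReplicationStream(repl_state_t *r, long long len, int is_ping) {
    r->master_repl_offset += len;
    if (!is_ping) r->repl_meaningful_off = r->master_repl_offset;
}

/* Called when this node is demoted to replica: drop the PING tail so
 * the offset asked for in PSYNC matches the last byte of real data.
 * The scenario above shows the cost: a sub-replica that did receive
 * the PINGs now asks for an offset past the trimmed backlog. */
void trimPingTail(repl_state_t *r) {
    long long tail = r->master_repl_offset - r->repl_meaningful_off;
    if (tail > 0 && tail <= r->repl_backlog_histlen) {
        r->repl_backlog_histlen -= tail;
        r->master_repl_offset = r->repl_meaningful_off;
    }
}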
Guy Benoish
43329c9b64 Add the stream tag to XSETID tests 2020-04-27 15:52:49 +02:00
Oran Agra
4447ddc8bb Keep track of meaningful replication offset in replicas too
Now both master and replicas keep track of the last replication offset
that contains meaningful data (ignoring the trailing pings); both trim
that tail from the replication backlog, and from the offset they try
to use for psync.

The implication is that if a replica missed some pings, or even has
excess pings that the promoted replica doesn't have, it'll still be
able to psync (avoiding a full sync).

The downside (which was already committed) is that replicas running old
code may fail to psync, since the promoted replica trims pings from its
backlog.

This commit adds a test that reproduces several cases of promotions and
demotions with stale and non-stale pings.

Background:
The meaningful offset on the master was added recently to solve a
problem where the master is left all alone, injecting PINGs into its
backlog when no one is listening, and then gets demoted and tries to
replicate from a replica that didn't have any of the PINGs (or at
least not the last ones).

However, consider this case:
master A has two replicas (B and C) replicating directly from it.
There's no traffic at all, and also no network issues, just many pings
in the tail of the backlog. Now B gets promoted, A becomes a replica of
B, and C remains a replica of A. When A gets demoted, it trims the
pings from its backlog and successfully replicates from B. However, C
is still aware of these PINGs; when it disconnects and re-connects to
A, it'll ask for something that's no longer in the backlog (since A
trimmed the tail of its backlog), and be forced to do a full sync
(something it didn't have to do before the meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and
there; it turns out the reason was PINGs. Investigating it shows the
following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is replica of #1
Now we see that when #1 is demoted it prints:
17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
and when #3 connects to the demoted #2, #2 says:
17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

So the issue here is that the meaningful offset feature saved the day
for the demoted master (since it needs to sync from a replica that
didn't get the last ping), but it didn't help one of the other
replicas, which did get the last ping.
2020-04-27 15:52:23 +02:00