1034 Commits

Author SHA1 Message Date
antirez
6666d30f6a Tracking: test expired keys notifications. 2020-04-24 10:14:48 +02:00
antirez
1b28c0cbab Tracking: NOLOOP tests. 2020-04-24 10:14:48 +02:00
antirez
d4d2d24745 A few comments and name changes for #7103. 2020-04-17 13:02:40 +02:00
Oran Agra
f37a123687 testsuite run the defrag latency test solo
this test is time sensitive and it sometimes fails to pass below the
latency threshold, even on strong machines.

this test was the reason we're running just 2 parallel tests in the
github actions CI, so this commit reverts that.
2020-04-17 13:02:40 +02:00
antirez
87924d6731 LCS: more tests. 2020-04-07 16:52:57 +02:00
antirez
106501fab0 LCS tests. 2020-04-07 16:52:57 +02:00
Oran Agra
dcd6726366 different fix for runtest --host --port 2020-04-07 16:52:28 +02:00
Guy Benoish
728c620562 Try to fix time-sensitive tests in blockonkey.tcl
There is an inherent race between the deferring client and the
"main" client of the test: While the deferring client issues a blocking
command, we can't know for sure that by the time the "main" client
tries to issue another command (Usually one that unblocks the deferring
client) the deferring client is even blocked...
For lack of a better choice this commit uses TCL's 'after' in order
to give the deferring client some time to issue its blocking
command before the "main" client does its thing.
This problem probably exists in many other tests but this commit
tries to fix blockonkeys.tcl
2020-04-07 16:52:28 +02:00
Valentino Geron
dc38619afb XREAD and XREADGROUP should not be allowed from scripts when BLOCK option is being used 2020-04-07 16:52:04 +02:00
Guy Benoish
7cb94fd6cc Fix no-negative-zero test 2020-04-07 16:52:04 +02:00
Guy Benoish
3a063f58c8 Make sure Redis does not reply with negative zero 2020-04-07 16:52:04 +02:00
Valentino Geron
1f6160ccb2 XACK should be executed in an "all or nothing" fashion.
First, we must parse the IDs, so that we abort ASAP.
The return value of this command cannot be an error if
the client successfully acknowledged some messages,
so it should be executed in an "all or nothing" fashion.
2020-04-07 16:52:04 +02:00
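
A minimal redis-cli sketch of the intended behavior (stream, group and IDs are
illustrative, and assume 1526569495631-0 is a pending entry): if any ID fails
to parse, the whole call errors out and nothing gets acknowledged.

127.0.0.1:6379> XACK mystream mygroup 1526569495631-0 not-an-id
(error) ERR Invalid stream ID specified as stream command argument
127.0.0.1:6379> XACK mystream mygroup 1526569495631-0
(integer) 1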
Guy Benoish
f1ed8d93a0 Fix memory corruption in moduleHandleBlockedClients
By using a "circular BRPOPLPUSH"-like scenario it was
possible to get the same client on db->blocking_keys
twice (See comment in moduleTryServeClientBlockedOnKey)

The fix was actually already implemented in
moduleTryServeClientBlockedOnKey but it had a bug:
the function should return 0 or 1 (not OK or ERR)

Other changes:
1. Added two commands to blockonkeys.c test module (To
   reproduce the case described above)
2. Simplify blockonkeys.c in order to make testing easier
3. cast raxSize() to avoid warning with format spec
2020-04-07 16:52:03 +02:00
Guy Benoish
5c23cd55d4 Modules: Test MULTI/EXEC replication of RM_Replicate
Make sure call() doesn't wrap replicated commands with
a redundant MULTI/EXEC

Other, unrelated changes:
1. Formatting compiler warning in INFO CLIENTS
2. Use CLIENT_ID_AOF instead of UINT64_MAX
2020-03-31 17:12:19 +02:00
Guy Benoish
07acd86c7e RENAME can unblock XREADGROUP
Other changes:
Support stream in serverLogObjectDebugInfo
2020-03-31 16:57:20 +02:00
antirez
a9706f96f4 Fix the propagate Tcl test after module changes. 2020-03-31 16:57:20 +02:00
antirez
6354b71663 Modify the propagate unit test to show more cases. 2020-03-31 16:57:20 +02:00
antirez
514ca204bb Fix module commands propagation double MULTI bug.
b512cb40 introduced automatic wrapping of MULTI/EXEC for the
alsoPropagate API. However this collides with the built-in mechanism
already present in module.c. To avoid complex changes near Redis 6 GA
this commit introduces the ability to exclude call() MULTI/EXEC wrapping
for also propagate in order to continue to use the old code paths in
module.c.
2020-03-31 16:57:20 +02:00
Oran Agra
454e12cb89 AOFRW on an empty stream created with MKSTREAM loads badly
the AOF will be loaded successfully, but the stream will be missing,
i.e. inconsistent with the original db.

this was because XADD with id of 0-0 would error.

add a test to reproduce.
2020-03-31 16:57:20 +02:00
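
A rough reproduction sketch in redis-cli (key and group names illustrative,
and DEBUG LOADAOF assumes the rewrite already finished); before the fix the
final EXISTS would return 0, i.e. the empty stream was lost on AOF reload.

127.0.0.1:6379> XGROUP CREATE events grp $ MKSTREAM
OK
127.0.0.1:6379> BGREWRITEAOF
Background append only file rewriting started
127.0.0.1:6379> DEBUG LOADAOF
OK
127.0.0.1:6379> EXISTS events
(integer) 1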
antirez
28d402d31b PSYNC2: meaningful offset test. 2020-03-25 15:55:24 +01:00
Oran Agra
e1e2f91589 MULTI/EXEC during LUA script timeout are messed up
Redis refusing to run MULTI or EXEC during script timeout may cause partial
transactions to run.

1) if the client sends MULTI+commands+EXEC in pipeline without waiting for
response, but these arrive to the shards partially while there's a busy script,
and partially after it eventually finishes: we'll end up running only part of
the transaction (since multi was ignored, and exec would fail).

2) similar to the above: if EXEC arrives during a busy script, it'll be ignored and
the client state remains in a transaction.

the 3rd test, which I added for the case where MULTI and EXEC are ok and
only the body arrives during a busy script, was already handled correctly
since processCommand calls flagTransaction
2020-03-25 15:55:24 +01:00
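
A rough sketch of case (1) above, as seen from the client pipelining the
transaction while another connection runs a script past its time limit
(key names illustrative, timing approximate): only the tail of the intended
transaction ends up executing.

127.0.0.1:6379> MULTI
(error) BUSY Redis is busy running a script. You can only call SCRIPT KILL or SHUTDOWN NOSAVE.
127.0.0.1:6379> SET key1 1
(error) BUSY Redis is busy running a script. You can only call SCRIPT KILL or SHUTDOWN NOSAVE.
(the script finishes here)
127.0.0.1:6379> SET key2 2
OK
127.0.0.1:6379> EXEC
(error) ERR EXEC without MULTI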
antirez
4cfceac287 Fix BITFIELD_RO test. 2020-03-25 15:55:24 +01:00
bodong.ybd
015d1cb2ff Added BITFIELD_RO variants for read-only operations. 2020-03-25 15:55:24 +01:00
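
A quick usage sketch (key and bit field encoding illustrative); BITFIELD_RO
accepts only the GET subcommand, so presumably it can run where the regular
BITFIELD write command cannot, e.g. against read-only replicas.

127.0.0.1:6379> BITFIELD_RO mykey GET u8 0
1) (integer) 0
127.0.0.1:6379> BITFIELD_RO mykey SET u8 0 255
(error) ERR BITFIELD_RO only supports the GET subcommand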
WuYunlong
23ccd476b7 Add 14-consistency-check.tcl to prove there is a data consistency issue. 2020-03-25 15:54:34 +01:00
antirez
91af3f73e1 Regression test for #7011. 2020-03-25 15:54:34 +01:00
bodong.ybd
c609bf3f2c Fix bug of tcl test using external server 2020-03-25 15:54:34 +01:00
Oran Agra
cde46df309 fix for flaky psync2 test
*** [err]: PSYNC2: total sum of full synchronizations is exactly 4 in tests/integration/psync2.tcl
Expected 5 == 4 (context: type eval line 6 cmd {assert {$sum == 4}} proc ::test)

issue was that sometimes the test got an unexpected full sync since it
tried to switch to the replica before it was in sync with its master.
2020-03-12 15:53:47 +01:00
Oran Agra
5daed7baee fix race in module api test for fork
in some cases we were trying to kill the fork before it got created
2020-02-27 18:02:30 +01:00
Oran Agra
7390aa673f fix github actions failing latency test for active defrag - part 2
it seems that running two clients at a time is ok too, reducing the action
time from 20 minutes to 10. we'll use this for now, and if one day it
won't be enough we'll have to run just the sensitive tests one by one,
separately from the others.

this commit also fixes an issue with the defrag test that appears to be
very rare.
2020-02-27 18:02:30 +01:00
Oran Agra
7fec53a9af fix github actions failing latency test for active defrag
seems that github actions are slow, using just one client to reduce
false positives.

also adding verbose output, testing only on the latest ubuntu, and building on
an older one.

when doing that, i can reduce the test threshold back to something saner
2020-02-27 18:02:30 +01:00
Oran Agra
c44794bd3c Fix latency sensitivity of new defrag test
I saw that the new defrag test for list was failing in CI recently, so I
relaxed its threshold from 12 to 60.

besides that, I add / improve the latency test for the other two defrag
tests (add a sensitive latency check and digest / save checks)

and fix bad usage of debug populate (it can't override existing keys).
this was the original intention, which creates higher fragmentation.
2020-02-27 18:02:30 +01:00
antirez
2542ff9c25 Test engine: experimental change to avoid busy port problems. 2020-02-27 18:02:30 +01:00
antirez
22ad06eafd Test engine: detect timeout when checking for Redis startup. 2020-02-27 18:02:30 +01:00
antirez
2d9a144515 Test engine: better tracking of what workers are doing. 2020-02-27 18:00:47 +01:00
Guy Benoish
5a15c0edf9 Fix memory leak in test_ld_conv 2020-02-27 18:00:47 +01:00
Oran Agra
79e8b17d7b Defrag big lists in portions to avoid latency and freeze
When active defrag kicks in and finds a big list, it will create a bookmark to
a node so that it is able to resume iteration from that node later.

The quicklist manages that bookmark, and updates it in case that node is deleted.

This will increase memory usage, by 16 bytes, only on lists of over 1000
quicklist nodes (1000 ziplists, not 1000 items; see
active-defrag-max-scan-fields).

In 32-bit builds, this change reduces the maximum effective config of
list-compress-depth and list-max-ziplist-size (from 32767 to 8191).
2020-02-27 18:00:47 +01:00
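
For reference, the knobs mentioned above can be inspected at runtime, e.g.
(values illustrative, and CONFIG SET activedefrag yes assumes a build whose
allocator supports active defrag):

127.0.0.1:6379> CONFIG SET activedefrag yes
OK
127.0.0.1:6379> CONFIG GET active-defrag-max-scan-fields
1) "active-defrag-max-scan-fields"
2) "1000"
127.0.0.1:6379> CONFIG GET list-max-ziplist-size
1) "list-max-ziplist-size"
2) "128"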
Guy Benoish
d1be7aaa18 XGROUP DESTROY should unblock XREADGROUP with -NOGROUP 2020-02-27 18:00:47 +01:00
antirez
54237ff024 Test is more complex now, increase default timeout. 2020-02-27 18:00:47 +01:00
antirez
aa556cb7dd Tracking: first set of tests for the feature. 2020-02-27 18:00:46 +01:00
antirez
491949ee5b ACL LOG: make max log entries configurable. 2020-02-12 14:15:35 +01:00
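
A short sketch of the new knob together with the commands it bounds (value
illustrative; the config name is acllog-max-len):

127.0.0.1:6379> CONFIG SET acllog-max-len 10
OK
127.0.0.1:6379> ACL LOG
(empty array)
127.0.0.1:6379> ACL LOG RESET
OK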
antirez
c30fe3a93f ACL LOG: test for AUTH reason. 2020-02-12 14:15:35 +01:00
antirez
c60351e489 ACL LOG: implement a few basic tests. 2020-02-12 14:15:35 +01:00
WuYunlong
7d63a7d128 Add tcl regression test in scripting.tcl to reproduce memory leak. 2020-02-04 10:23:48 +01:00
Guy Benoish
2ad427f862 ld2string should fail if string contains \0 in the middle
This bug affected RM_StringToLongDouble and HINCRBYFLOAT.
I added tests for both cases.

Main changes:
1. Fixed string2ld to fail if string contains \0 in the middle
2. Use string2ld in getLongDoubleFromObject - no point in
   having duplicated code here

The two changes above broke RM_SaveLongDouble/RM_LoadLongDouble
because the long double string was saved with length+1 (An innocent
mistake, but it's actually a bug - The length passed to
RM_SaveLongDouble should not include the last \0).
2020-02-04 10:23:48 +01:00
Guy Benoish
d9f508d527 Modules: Fix blocked-client-related memory leak
If a blocked module client times out (or disconnects, is unblocked
by the CLIENT command, etc.) we need to call moduleUnblockClient
in order to free memory allocated by the module sub-system
and the blocked-client private data

Other changes:
Made blockonkeys.tcl tests a bit more aggressive in order
to smoke out potential memory leaks
2020-02-04 10:23:48 +01:00
Guy Benoish
3913ffd5bc Blocking XREAD[GROUP] should always reply with valid data (or timeout)
This commit solves the following bug:
127.0.0.1:6379> XGROUP CREATE x grp $ MKSTREAM
OK
127.0.0.1:6379> XADD x 666 f v
"666-0"
127.0.0.1:6379> XREADGROUP GROUP grp Alice BLOCK 0 STREAMS x >
1) 1) "x"
   2) 1) 1) "666-0"
         2) 1) "f"
            2) "v"
127.0.0.1:6379> XADD x 667 f v
"667-0"
127.0.0.1:6379> XDEL x 667
(integer) 1
127.0.0.1:6379> XREADGROUP GROUP grp Alice BLOCK 0 STREAMS x >
1) 1) "x"
   2) (empty array)

The root cause is that we use s->last_id in streamCompareID
while we should use the last *valid* ID
2020-01-10 13:16:14 +01:00
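
With the fix, the last XREADGROUP above is expected to block (and, with a
finite BLOCK timeout, eventually return nil) instead of replying with an
empty array, e.g.:

127.0.0.1:6379> XREADGROUP GROUP grp Alice BLOCK 100 STREAMS x >
(nil)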
Guy Benoish
444dd1075c Stream: Handle streamID-related edge cases
This commit solves several edge cases that are related to
exhausting the streamID limits: we should correctly calculate
the succeeding streamID instead of blindly incrementing 'seq'.
This affects both XREAD and XADD.

Other (unrelated) changes:
Reply with a better error message when trying to add an entry
to a stream that has exhausted last_id
2019-12-29 15:46:31 +01:00
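
A minimal sketch of the exhausted-last_id case mentioned above (key name
illustrative; the error text is approximate, per the improved message
referenced in this commit):

127.0.0.1:6379> XADD x 18446744073709551615-18446744073709551615 f v
"18446744073709551615-18446744073709551615"
127.0.0.1:6379> XADD x * f v
(error) ERR The stream has exhausted the last possible ID, unable to add more items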
Oran Agra
f866b31a74 config.c adjust config limits and mutability
- make lua-replicate-commands mutable (it never was, but I don't see why not)
- make tcp-backlog immutable (fix a recent refactoring mistake)
- increase the max limit of a few configs to match what they were before
the recent refactoring
2019-12-29 15:46:31 +01:00
antirez
e7ddea016d Revert "Geo: output 10 chars of geohash, not 11."
This reverts commit bdca9e882d1619bde19cbad78549f3167ddc3d41.
2019-12-18 12:54:46 +01:00
zhaozhao.zz
5c56f82bc3 incrbyfloat: fix issue #5256 ttl lost after propagate 2019-12-18 15:44:51 +08:00