The execution of the RPOPLPUSH command by the fuzzer created junk keys
that were later selected by RANDOMKEY and modified.
This also meant that lists were statistically tested more than other
types.
Fix the fuzzer not to pass junk key names to RPOPLPUSH, and add a check
that the fuzzer does not add new keys, to detect similar issues in the
future.
(cherry picked from commit 3f3f678a4741e6af18230ee1862d9ced7af79faf)
- promote the code in DEBUG PROTOCOL to addReplyBigNum
- DEBUG PROTOCOL ATTRIB skips the attribute when the client is RESP2
- networking.c addReply for push and attribute types generates an
  assertion when called on a RESP2 client; anything else would produce
  a broken protocol that clients can't handle.
(cherry picked from commit 6a5bac309e868deef749c36949723b415de2496f)
This makes it possible to distinguish between a null response and an empty
array (currently the test infra translates both to an empty string/list)
(cherry picked from commit 7103367ad44b4241e59a709771cb464aa2a86b20)
When a test stops a 'load handler' by killing the process that generates the load,
some commands already in the input buffer might still be processed by the server.
This can cause instability in tests that rely on no more commands being
processed after the 'load handler' is stopped.
In this commit, a new proc, 'wait_load_handlers_disconnected', is added to verify that no more
commands from any 'load handler' are processed, by checking that the clients
generating the load have disconnected.
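A minimal sketch of what such a proc can look like, assuming the load handlers register themselves with CLIENT SETNAME (the name and retry counts here are illustrative):
```
# poll CLIENT LIST until no load-handler connection remains
# (assumes each load handler ran: r client setname LOAD_HANDLER)
proc wait_load_handlers_disconnected {{level 0}} {
    wait_for_condition 50 100 {
        ![string match {*name=LOAD_HANDLER*} [r $level client list]]
    } else {
        fail "load_handler(s) still connected after too long time."
    }
}
```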
Also, replace the check of dbsize with wait_for_ofs_sync before comparing debug digests, as
the old check would fail if the last key the workload wrote was an overridden key (not a new one).
Affected tests:
Race fix:
- failover command to specific replica works
- Connect multiple replicas at the same time (issue #141), master diskless=$mdl, replica diskless=$sdl
- AOF rewrite during write load: RDB preamble=$rdbpre
Cleanup and speedup:
- Test replication with blocking lists and sorted sets operations
- Test replication with parallel clients writing in different DBs
- Test replication partial resync: $descr (diskless: $mdl, $sdl, reconnect: $reconnect)
(cherry picked from commit 32a2584e079a1b3c2d1e6649e38239381a73a459)
Adding a new type mask for keyspace notifications, REDISMODULE_NOTIFY_MODULE, to enable unique notifications from commands on REDISMODULE_KEYTYPE_MODULE type keys (which is currently unsupported).
Modules can subscribe to module key keyspace notifications with RM_SubscribeToKeyspaceEvents,
and clients via notify-keyspace-events in redis.conf or CONFIG SET, using the characters 'd' or 'A'
(the REDISMODULE_NOTIFY_MODULE type mask is part of the 'A' ("All") notation for keyspace notifications).
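On the client side, enabling this class might look like the following sketch (the pattern and db number are illustrative; the module-side RM_SubscribeToKeyspaceEvents part is omitted):
```
# 'K' enables keyspace notifications, 'd' enables the module key
# class added by this commit; 'A' would include 'd' as well
r config set notify-keyspace-events Kd
set rd [redis_deferring_client]
$rd psubscribe __keyspace@0__:*
$rd read ;# consume the subscribe confirmation
```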
Refactor: move some pubsub test infra from pubsub.tcl to util.tcl to be re-used by other tests.
1. the `dump_logs` option would have printed only logs of servers that were
   spawned before the test proc started, and not ones that the test proc
   started inside it.
2. when a server proc catches an exception it should normally forward the
   exception upwards, specifically when it's an assertion that should be
   caught by a test proc above. However, in `durable` mode, we caught all
   exceptions, printed them to stdout and let the code continue.
   This was wrong for assertions, which should still have been propagated
   to the test function.
3. don't bother to search for a crash log to print if we printed the
   entire log anyway
4. if no crash log was found, no need to print anything (i.e. the fact that
   it wasn't found)
5. rename warnings_from_file to crashlog_from_file
Problem:
Currently, when performing random distribution verification, we determine
the probability of each element occurring in the sample, but the probability is
only an estimate; these tests had rare sporadic failures, and we cannot bound
the probability of failure.
Solution:
Using the chi-square statistic instead of the original ad-hoc random distribution
validation makes the test more rigorous and makes problems easier to find.
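For illustration, a self-contained Tcl sketch of the statistic (the helper name and the uniform-expectation assumption are ours, not necessarily the suite's exact code):
```
# chi-square statistic for a list of samples, assuming a uniform
# expected distribution over the distinct values observed
proc chi_square_value {data} {
    unset -nocomplain counts
    foreach x $data { incr counts($x) }
    set expected [expr {double([llength $data]) / [array size counts]}]
    set chi 0.0
    foreach {value observed} [array get counts] {
        set residual [expr {$observed - $expected}]
        set chi [expr {$chi + $residual * $residual / $expected}]
    }
    return $chi ;# compared against a critical value for the given df
}
```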
The cluster bus is established over TLS or non-TLS depending on the configuration tls-cluster. The client ports distributed in the cluster and sent to clients are assumed to be TLS or non-TLS also depending on tls-cluster.
The cluster bus is now extended to also contain the non-TLS port of clients in a TLS cluster, when available. The non-TLS port of a cluster node, when available, is sent to clients connected without TLS in responses to CLUSTER SLOTS, CLUSTER NODES, CLUSTER SLAVES and MOVED and ASK redirects, instead of the TLS port.
The user was already able to override the client port by defining cluster-announce-port. Now cluster-announce-tls-port is added as well, so the user can define separate announce ports for TLS and non-TLS clients.
Fixes #8134
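A hedged sketch of the relevant configuration in the test suite's start_server form (port numbers invented):
```
# a TLS cluster node that also announces a non-TLS port: non-TLS
# clients get 7001 back in CLUSTER SLOTS/NODES and in MOVED/ASK
# redirects, while TLS clients keep getting 7000
start_server {tags {"tls"} overrides {
    cluster-enabled yes
    tls-cluster yes
    cluster-announce-tls-port 7000
    cluster-announce-port 7001
}} {
    # assertions on [r cluster slots] would go here
}
```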
Add tests for fixing a migrating slot at all stages (see the sketch after this list):
1. when migration is half initialized on the "migrating" node
2. when migration is half initialized on the "importing" node
3. migration initialized, but not finished
4. migration is half finished on the "migrating" node
5. migration is half finished on the "importing" node
Also add tests for many simultaneous slot migrations.
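For reference, a sketch of what "half initialized" means in these tests, using the cluster test helper R (the slot number is invented):
```
set slot 100
# half initialized on the "migrating" node: only the source side
# has been told about the migration
R 0 cluster setslot $slot migrating [R 1 cluster myid]
# the mirror image, half initialized on the "importing" node:
# R 1 cluster setslot $slot importing [R 0 cluster myid]
# a fully initialized (but unfinished) migration sets both
```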
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
In certain scenarios start_server may think it failed to start a redis server
although it started successfully. In these cases it won't terminate it, and
the server will remain running when the test is over.
In start_server, if the config doesn't have a bind directive (like the minimal.conf
in introspection.tcl), redis will try to bind both IPv4 and IPv6. One may succeed
while the other fails, in which case the log contains
"Could not create server TCP listening socket".
wait_server_started uses this message to check whether the instance started
successfully, so it considered the start a failure even though redis started successfully.
Additionally, in some cases it wasn't clear to users why the server exited,
since the warning message printed to the log could be harmless in some cases
and fatal in others.
This PR makes a clear distinction between a warning log message and
a fatal one, and changes the test suite to look for the fatal message.
When redis responds with a tracking-redir-broken push message (RESP3),
it was responding with a broken protocol: an array of 3 elements, but only
pushing 2 elements.
Some bugs in the test made this pass: reading the push reply would
consume an extra reply, because the declared reply length is 3 but there
are only two elements, so the next reply would be treated as the third
element. The test is corrected too.
Other changes:
* checkPrefixCollisionsOrReply should return 1 on success instead of -1;
  this bug didn't have any implications.
* improve client tracking tests to validate more of the responses they read.
Changes to HRANDFIELD and ZRANDMEMBER:
* Fix risk of OOM panic when a client queries a very big negative count (avoid allocating a huge temporary buffer).
* Fix uneven random distribution in HRANDFIELD with negative count (wasn't using dictGetFairRandomKey).
* Add tests to check an even random distribution (HRANDFIELD, SRANDMEMBER, ZRANDMEMBER).
Co-authored-by: Oran Agra <oran@redislabs.com>
* The corrupt dump fuzzer found a division by zero.
* in some cases the random fields from the HRANDFIELD tests produced
  fields with newlines and other special chars (due to the \ char); this caused
  the TCL tests to see a bulk response with a newline in it and wrap it in {},
  which could later be mistaken for a nested list. In fact the `alpha` random
  string generator doesn't use spaces or newlines, so it should not use `\`
  either.
New commands:
`HRANDFIELD <key> [<count> [WITHVALUES]]`
`ZRANDMEMBER <key> [<count> [WITHSCORES]]`
Algorithms are similar to the one in SRANDMEMBER.
Both return a simple bulk response when no arguments are given, and an array otherwise.
In case values/scores are requested, RESP2 returns a long array, and RESP3 a nested array.
Note: in all 3 commands, the only option that also provides random order is the one with a negative count.
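A brief usage sketch (key names invented):
```
r hset myhash f1 v1 f2 v2 f3 v3
r hrandfield myhash                ;# bulk reply: one random field
r hrandfield myhash 2              ;# array of 2 distinct fields
r hrandfield myhash -5 withvalues  ;# 5 field-value pairs, repeats
                                   ;# allowed, random order
r zadd myzset 1 a 2 b 3 c
r zrandmember myzset 2 withscores
```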
Changes to SRANDMEMBER:
* Optimization when count is 1: we can use the more efficient non-unique random algorithm
* Optimization: work with sds strings rather than robj
Other changes:
* zzlGetScore: when a zset needs to convert a string to a double, use the safer memcpy (in
  case the buffer is too small)
* Solve a "bug" in the SRANDMEMBER test: it intended to test a positive count (case 3 or
  case 4) but by accident used a negative count
Co-authored-by: xinluton <xinluton@qq.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
It was confusing as to why these don't return a map type.
The reason is that order matters, so we need to make sure the client
library knows to respect it.
Added comments in the implementation and tests to cover it.
This adds basic coverage for IO threads by running the cluster tests and a few selected Redis test suite tests with IO threads enabled.
Also provides some necessary additional improvements to the test suite:
* Add --config to sentinel/cluster tests for arbitrary configuration.
* Fix --tags whitelisting which was broken.
* Add a `network` tag to some tests that are more network intensive. This is work in progress and more tests should be properly tagged in the future.
When client tracking is enabled, signalModifiedKey can increase memory usage;
this could cause the loop in performEvictions to keep running, since it was
measuring the memory usage impact of signalModifiedKey.
The section that measures the memory impact of the eviction should cover just dbDelete,
excluding keyspace notification, client tracking, and propagation to AOF and replicas.
This resolves part of the problem described in #8069
p.s. fix took 1 minute, test took about 3 hours to write.
(cherry picked from commit b640e2944e42759412ac67228cf64c43dfbed9c3)
When a Lua script returns a map to redis (a feature which was added in
redis 6 together with RESP3), it would have returned the value first and
the key second.
If the client was using RESP2, it was getting them out of order, and if
the client was in RESP3, it was getting a map of value => key.
This was happening regardless of the Lua script using redis.setresp(3)
or not.
This also affects a case where the script was returning a map which it got
from redis by doing something like: redis.setresp(3); return redis.call()
This fix is a breaking change for redis 6.0 users who happened to rely
on the wrong order (either ones that used redis.setresp(3), or ones that
returned a map explicitly).
This commit also includes two other changes in the tests:
1. The test suite now handles RESP3 maps as dicts rather than nested
lists
2. Remove some redundant (duplicate) tests from tracking.tcl
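A sketch of the corrected behavior (key and field names invented):
```
r hset myhash field value
# the script receives a RESP3 map from redis and returns it as-is
set res [r eval {
    redis.setresp(3)
    return redis.call('hgetall', KEYS[1])
} 1 myhash]
# a RESP2 client should now see the key first: {field value}
assert_equal {field value} $res
```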
(cherry picked from commit bcde1d978d74f84cb4a66dc8bbd746842217f8b2)
Add INFO field, rdb_active_cow_size, to report COW of a live fork child while
it's active.
- once every 1024 keys, check the time, and if more than one second has passed
  since the last report, send a report to the parent via the pipe.
- refactor the child_info_data struct; it's an implementation detail that
  shouldn't be in the server struct, and is not used to communicate data between
  caller and callee
- remove the magic value from that struct (not sure what it was good for), and
  instead add handling of short reads.
- add another value to the structure, cow_type, to indicate whether the report is
  for the new rdb_active_cow_size field or is the last report of a
  successful operation
- add a new Module API to report the active COW
- add more assert variants to test.tcl
This commit pushes forward the observability of overall error statistics and command statistics within redis-server:
It extends INFO COMMANDSTATS to have
- failed_calls - so we can keep track of errors that happen from the command itself, broken down by command.
- rejected_calls - so we can keep track of errors that were triggered outside the command processing per se
It adds a new section to INFO, named ERRORSTATS, that enables keeping track of the different errors that
occur within redis (within processCommand and call) based on the reply error prefix (the first word
after the "-", up to the first space).
This commit also fixes RM_ReplyWithError so that it can be correctly identified as an error reply.
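A sketch of how the new fields surface (the counts assume a freshly reset stats section):
```
r config resetstat
r set foo bar
catch {r incr foo} ;# fails inside the command: counted as failed_calls
assert_match {*failed_calls=1*} [r info commandstats]
# the "-ERR ..." reply prefix is aggregated under errorstat_ERR
assert_match {*errorstat_ERR:count=1*} [r info errorstats]
```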
Apparently the "leaks" took reports a different error string about process
that's not found in each version of MacOS.
This cause the test suite to fail on some OS versions, since some tests terminate
the process before looking for leaks.
Instead of looking at the error string, we now look at the (documented) exit code.
Add a new set of defrag functions that take a defrag context and allow
defragmenting memory blocks and RedisModuleStrings.
Modules can register a defrag callback which will be invoked when the
defrag process handles globals.
Modules with custom data types can also register a datatype-specific
defrag callback which is invoked for keys that require defragmentation.
The callback and associated functions support both one-step and
multi-step options, depending on the complexity of the key as exposed by
the free_effort callback.
This adds new `tls-client-cert-file` and `tls-client-key-file`
configuration directives, which make it possible to use different
certificates for the TLS-server and TLS-client functions of Redis.
These directives are optional. If they are not specified, the `tls-cert-file`
and `tls-key-file` directives are used for TLS client functions as well.
Also, `utils/gen-test-certs.sh` now creates additional server-only and client-only certs and will skip intensive operations if target files already exist.
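A hedged example (the cert paths are whatever gen-test-certs.sh produced in your checkout):
```
# use a dedicated certificate for outgoing TLS connections (e.g. a
# replica connecting to its master), while keeping the server
# certificate for incoming connections
start_server {tags {"tls"} overrides {
    tls-cert-file ./tests/tls/server.crt
    tls-key-file ./tests/tls/server.key
    tls-client-cert-file ./tests/tls/client.crt
    tls-client-key-file ./tests/tls/client.key
}} {
    # without the tls-client-* pair, tls-cert-file/tls-key-file
    # would also be used for the client-side connections
}
```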
when using --baseport to run two test suites in parallel (different
folders), we need to also make sure the port used by the test suite to
communicate with its workers is unique. Otherwise the attempt to find a
free port connects to the other test suite and messes it up.
Maybe one day we should attempt to bind, rather than connect, when trying
to find a free port.
The test creates keys with various encodings, DUMPs them, corrupts the payload
and RESTOREs it.
It utilizes the recently added use-exit-on-panic config to distinguish between
asserts and segfaults.
If the restore succeeds, it runs random commands on the key to attempt to
trigger a crash.
It runs in two modes, one with deep sanitization enabled and one without.
In the first we don't expect any assertions or segfaults; in the second
we expect assertions, but no segfaults.
We also check for leaks and invalid reads using valgrind, and if we find them
we print the commands that led to the issue.
Changes in the code (other than the test):
- Replace a few NPD (null pointer dereference) flows and divisions by zero with
  assertions, so that they don't fail the test (since we set the server to use
  `exit` rather than `abort` on assertion).
- Fix quite a lot of flows in rdb.c that could have led to memory leaks in the
  RESTORE command (since it now responds with an error rather than panicking)
- Add a DEBUG flag, SET-SKIP-CHECKSUM-VALIDATION, so that the test doesn't need
  to bother with faking a valid checksum
- Remove a pile of code in serverLogObjectDebugInfo which is actually unsafe to
  run in the crash report (see comments in the code)
- Fix a missing boundary check in lzf_decompress
test suite infra improvements:
- be able to run valgrind checks before the process terminates
- rotate log files when restarting servers
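A condensed sketch of one fuzzing iteration (the corrupted byte offset is arbitrary):
```
r lpush src a b c
set payload [r dump src]
# corrupt a single byte somewhere inside the serialized payload
set corrupt [string replace $payload 4 4 "\xff"]
# skip checksum validation so the corruption reaches the decoder
r debug set-skip-checksum-validation 1
if {[catch {r restore dst 0 $corrupt} err]} {
    # rejected cleanly (e.g. "Bad data format"): the desired outcome
} else {
    # restore succeeded: hammer the key with random commands and
    # rely on valgrind / crash detection to flag any damage
}
```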
When loading an encoded payload we will at least do a shallow validation to
check that the size that's encoded in the payload matches the size of the
allocation.
This lets us later use this encoded size to make sure the various offsets
inside the encoded payload don't reach outside the allocation; if they do, we'll
assert/panic, but at least we won't segfault or smear memory.
We can also do 'deep' validation which runs on all the records of the encoded
payload and validates that they don't contain invalid offsets. This lets us
detect corruptions early and reject a RESTORE command rather than accepting
it and asserting (crashing) later when accessing that payload via some command.
configuration:
- adding ACL flag skip-sanitize-payload
- adding config sanitize-dump-payload [yes/no/clients]
For now, we don't have a good way to ensure MIGRATE in cluster resharding isn't
being slowed down by this sanitization, so I'm setting the default value to `no`,
but later on it should be set to `clients` by default.
changes:
- changing rdbReportError not to `exit` in the RESTORE command
- adding a new stat to be able to later check whether cluster MIGRATE is being
  slowed down by sanitization.
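Usage sketch of the new config:
```
# deep-validate only payloads arriving on client connections
# (RESTORE), leaving replication links and MIGRATE at full speed;
# `yes` validates everything, `no` keeps only shallow validation
r config set sanitize-dump-payload clients
```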
Test support for the new map, null and push message types. Map objects are parsed as a list of {key value} lists.
For instance, a map of user => john, password => 123
will be parsed into the following TCL list:
{{user john} {password 123}}
Also added the following tests:
- Redirection still works with RESP3
- Able to use a RESP3 client as a redirection client
- No duplicate invalidation messages when turning BCAST mode on after normal tracking
- Server is able to evacuate enough keys when the number of keys surpasses the limit by more than the defined initial effort
- Different clients using different protocols can track the same key
- OPTOUT tests
- OPTIN tests
- Clients can redirect to the same connection
- tracking-redir-broken test
- HELLO 3 checks
- Invalidation messages still work when using RESP3, with and without redirection
- Switching to RESP3 doesn't disturb previously tracked keys
- Tracking info is correct
- FLUSHALL and FLUSHDB produce invalidation messages
These tests achieve 100% line coverage for tracking.c using lcov.
The tests sometimes fail to find a log message.
Recently I added a print that shows the log files that are searched,
and it shows that the message was indeed there.
The only reason I can think of for this search to fail is that we
happened to read an incomplete line, which didn't match our pattern, and
then on the next iteration we would continue reading from the line after
it.
The fix is to always re-evaluate the previous line.
(cherry picked from commit 35eb8ec6f3f6a2deab49ff70b0e9a8587adfd6de)
- add test suite coverage for redis-benchmark
- add --version (similar to what redis-cli has)
- fix a bug sending more requests than intended when pipeline > 1.
- when done sending requests, avoid freeing the client in the write handler, in theory before
  responses are received (probably dead code since the read handler will call clientDone first)
Co-authored-by: Oran Agra <oran@redislabs.com>
Adding [B]LMOVE <src> <dst> RIGHT|LEFT RIGHT|LEFT, deprecating [B]RPOPLPUSH.
Note that when receiving a BRPOPLPUSH we'll still propagate an RPOPLPUSH,
but on BLMOVE RIGHT LEFT we'll propagate an LMOVE
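A quick usage sketch (key names invented):
```
r rpush src a b c
r lmove src dst left right ;# pops "a" from the left end of src
                           ;# and pushes it to the right end of dst
assert_equal {a} [r lrange dst 0 -1]
assert_equal {b c} [r lrange src 0 -1]
```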
Improvements to existing tests:
- Replace "after 1000" with "wait_for_condition" when waiting for
  clients to block/unblock.
- Add a pre-existing element to the target list in basic tests so
  that we can check whether the new element was added to the correct
  side of the list.
- Check command stats on the replica to make sure the right
  command was replicated.
Co-authored-by: Oran Agra <oran@redislabs.com>
if there are nested tests and nested servers, we need to restore the
previous value of cur_test when a test exits.
example:
```
test {test 1} {
    start_server {
        test {test 1.1 - master only} {
        }
        start_server {
            test {test 1.2 - with replication} {
            }
        }
    }
}
```
when `test 1.1 - master only` exits, we're still inside `test 1`
(cherry picked from commit 610b4ff16a62062338588c4508a73784fb962c0b)