The Redis sentinel would crash with a segfault after a few minutes
because it tried to read from a page without read permissions. Check up
front whether the sds is long enough to contain redis:slave or
redis:master before calling memcmp(), as is done everywhere else in
sentinelRefreshInstanceInfo().
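A hedged sketch of that pattern (the helper name is made up, and the string literal is just one of the two named above; the real check in sentinel.c may differ in details):
```
#include <string.h>
#include "sds.h"

/* Only compare if the sds line is long enough to hold the expected token.
 * Without the length check, memcmp() may read past the end of the sds
 * buffer into an unmapped page, which is the crash described above. */
static int lineSaysSlave(sds l) {
    return sdslen(l) >= 11 && memcmp(l, "redis:slave", 11) == 0;
}
```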
Bug report and commit message from Theo Buehler. Fix from Nam Nguyen.
Co-authored-by: Nam Nguyen <namn@berkeley.edu>
(cherry picked from commit 8c03eb90da3951ed92d8d2729fc6d2ce4feb9e45)
valsize is taken from c->argv[4] and is not modified inside the for loop below, so there is no need to perform the check inside the loop. Moreover, moving the check outside the loop also avoids a memory leak: had the check stayed inside the loop, the original code would have needed to call decrRefCount(key) before returning.
(cherry picked from commit 2afa308306fc641204f10a2bbe2fe35e28b6d259)
Fix the consistency test added in 0c9916d00, which did not take the
redis-cli TLS configuration into account.
(cherry picked from commit 675b00c7e0b7d68bafa11fcc7f66a394c3c3cd36)
The connection API may create an accepted connection object in an error
state, and callers are expected to check it before attempting to use it.
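A hedged sketch of the caller-side check described above (the handler name is illustrative, and the exact state the real code tests may differ):
```
#include "server.h"  /* connection API, serverLog() */

static void acceptHandlerSketch(connection *conn) {
    /* An accepted connection may already be in an error state; bail out
     * before registering handlers or creating a client for it. */
    if (connGetState(conn) == CONN_STATE_ERROR) {
        serverLog(LL_WARNING, "Error accepting client connection: %s",
                  connGetLastError(conn));
        connClose(conn);
        return;
    }
    /* ... safe to proceed with the connection here ... */
}
```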
Co-authored-by: mrpre <mrpre@163.com>
(cherry picked from commit bc450c5f63d39d0f0b8c97fa91d15bb8d688b86d)
- the test now waits for a specific set of log messages rather than waiting
with a timeout for just one message.
- we don't want to sample the current length of the log after an action, due
to a race; instead we need to start the search from the line number of the
last message we were waiting for.
- when attempting to trigger a full sync, use multi-exec to avoid a race
where the replica manages to re-connect before we complete the set of
actions that should force a full sync.
- fix verify_log_message, which was broken and unused
(cherry picked from commit 06aaeabaea9d9b248e8a790dde352cd14d66628a)
Adds an `optional` value to the previously boolean `tls-auth-clients` configuration keyword.
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
(cherry picked from commit 198770751fdc4c46eb4971ead9b5787fd6ce39fd)
Initialize and configure OpenSSL even when tls-port is not used, because
we may still have tls-cluster or tls-replication.
Also, make sure to reconfigure OpenSSL when these parameters are changed
as TLS could have been enabled for the first time.
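A hedged sketch of the resulting condition (the helper name is made up; field and function names follow Redis conventions but may not match the real code exactly):
```
#include "server.h"  /* server struct, tlsInit() */

static void initTlsIfNeeded(void) {
    /* OpenSSL must be ready whenever any TLS feature is enabled,
     * not only when a TLS listening port is configured. */
    if (server.tls_port || server.tls_replication || server.tls_cluster)
        tlsInit();
}
```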
(cherry picked from commit b76a93c362091daafd8a8d15a45d527b7437d013)
on ci.redis.io the test fails a lot, reporting that bgsave didn't end.
increasing the timeout we wait for that bgsave to get aborted.
in addition to that, i also verify that it indeed got aborted by
checking that the save counter wasn't reset.
add another test to verify that a successful bgsave indeed resets the
change counter.
(cherry picked from commit 49d4aebce0a0b94cd2b302d276be95d1a1ce8610)
in cases where you have
test name {
    start_server {
        start_server {
            assert
        }
    }
}
the exception will be thrown to the test proc, and the servers are
supposed to be killed on the way out. but it seems there was always a
bug of not cleaning the server stack, and recently (#7404) we started
relying on that stack in order to kill them, so with that bug sometimes
we would have tried to kill the same server twice, and leave one alive.
luckily, in most cases the pattern is:
start_server {
    test name {
    }
}
(cherry picked from commit bb170fa06e5909dd816b6530121952d57c8209a0)
This re-implements the redis-cli --pipe test so it no longer depends on a close feature available only in TCL 8.6.
Basically, this test generates a bunch of commands, pipes them through redis-cli --pipe, and inspects the result both in Redis and in the redis-cli output.
To do that, we need to close stdin for redis-cli to indicate we're done, so it can flush its buffers and exit. However, TCL channels are bi-directional, and a way to "one-way close" a channel only exists starting with TCL 8.6. To work around that, we now generate the commands into a file and feed that file to redis-cli directly.
As we're writing to an actual file, the number of commands is now reduced.
(cherry picked from commit dbc0a64843ccd07515ac41ca80497a9e5ffd107a)
Since the dynamic allocations in raxIterator are only used for deep walks, a memory
leak due to a missing call to raxStop can only happen for a rax with key names longer
than 32 bytes.
Out of all the missing calls, the only ones that may lead to a leak are the ones iterating
the rax of consumer groups and consumers, and these were only in AOFRW and rdbSave, which
normally only happen in a forked child or at shutdown.
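For reference, a sketch of the iteration pattern the fix enforces (illustrative, not a specific call site from this commit):
```
#include "rax.h"

static void walkAllKeys(rax *r) {
    raxIterator ri;
    raxStart(&ri, r);
    raxSeek(&ri, "^", NULL, 0);          /* seek to the smallest key */
    while (raxNext(&ri)) {
        /* ri.key, ri.key_len and ri.data describe the current element. */
    }
    raxStop(&ri);  /* frees the heap memory the iterator allocates for
                    * deep walks (keys longer than 32 bytes, as noted above) */
}
```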
(cherry picked from commit 0b8d47a9857142203c03846fc4284746695d3dc3)
This fixes the issue described in CVE-2014-5461. At this time we cannot
confirm that the original issue has a real impact on Redis, but it is
included as an extra safety measure.
The previous algorithm is of O(n^2) time complexity.
It would have run through the ziplist entries one by one, each time doing a `realloc` and a
`memmove` (moving the entire tail of the ziplist).
The new algorithm is O(n): it runs over all the records once, computing the size of the `realloc`
needed, then does one `realloc`, and runs through the records again doing many smaller `memmove`s,
each time moving just one record.
So this change reduces the number of reallocs to one, and moves each record just once.
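To make the idea concrete, here is a minimal generic sketch (not the actual ziplist code; ziplist entries are variable-length, so fixed-size records are used here only to keep the example short):
```
#include <stdlib.h>
#include <string.h>

/* Grow each of n fixed-size records by `extra` bytes using a single realloc
 * and exactly one memmove per record, instead of one realloc plus one
 * whole-tail memmove per record. */
static unsigned char *grow_records(unsigned char *buf, size_t n,
                                   size_t oldsz, size_t extra) {
    size_t newsz = oldsz + extra;
    unsigned char *p = realloc(buf, n * newsz);   /* one realloc for everything */
    if (p == NULL) return NULL;
    /* Walk backwards so no record is overwritten before it is moved. */
    for (size_t i = n; i-- > 0; ) {
        memmove(p + i * newsz, p + i * oldsz, oldsz);  /* move record i once */
        memset(p + i * newsz + oldsz, 0, extra);       /* space for the new bytes */
    }
    return p;
}
```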
Co-authored-by: zhumaohua <zhumaohua@megvii.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
This update adds a general source for retrieving a monotonic time.
In addition, AE has been updated to utilize the new monotonic
clock for timer processing.
This performance improvement is **not** enabled in a default build due to various H/W compatibility
concerns, see README.md for details. It does, however, replace the default use of gettimeofday with
clock_gettime, which somewhat improves performance.
This update provides the following:
1. An interface for retrieving a monotonic clock. getMonotonicUs returns a uint64_t (aka monotime)
with the number of microseconds from an arbitrary point. No more messing with tv_sec/tv_usec.
Simple routines are provided for measuring elapsed milliseconds or elapsed microseconds (the
most common use case for a monotonic timer). No worries about time moving backwards. (A minimal
usage sketch follows this list.)
2. High-speed assembler implementation for x86 and ARM. The standard method for retrieving the
monotonic clock is POSIX.1b (1993): clock_gettime(CLOCK_MONOTONIC, timespec*). However, most
modern processors provide a constant speed instruction clock which can be retrieved in a fraction
of the time that it takes to call clock_gettime. For x86, this is provided by the RDTSC
instruction. For ARM, this is provided by the CNTVCT_EL0 instruction. As a compile-time option,
these high-speed timers can be chosen. (Default is POSIX clock_gettime.)
3. Refactor of event loop timers. The timer processing in ae.c has been refactored to use the new
monotonic clock interface. This results in simpler/cleaner logic and improved performance.
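A minimal usage sketch based on the description above (it assumes the monotonic clock was already initialized at startup; the dedicated elapsed-time helpers mentioned in item 1 may have different names):
```
#include "monotonic.h"  /* monotime (uint64_t microseconds), getMonotonicUs() */

static void timeSomething(void) {
    monotime start = getMonotonicUs();
    /* ... work to be timed ... */
    monotime elapsed_us = getMonotonicUs() - start;  /* cannot go backwards */
    (void)elapsed_us;  /* e.g. feed into latency stats */
}
```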
fe8d6fe74 (released in 6.0.6) has a side effect: when processCommand
rejects a command with a pre-made shared object error string, it trims the
newlines from the end of the string. If that string is later used with
addReply, the newline will be missing, breaking the protocol and
leaving the client hung.
It seems that the only scenario in which this happens is when replying with
-LOADING to some command, and later using that reply from the CONFIG
SET command (still during loading). This results in a hung client.
Refactor the code to avoid trimming these newlines from
shared string objects, and do the newline trimming only in the other cases
where it's needed.
Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
During a long AOF or RDB loading, the memory stats were not updated, and
INFO would return stale data, specifically about fragmentation and RSS.
In the past some of these were sampled directly inside the INFO command,
but were moved to cron as an optimization.
This commit introduces a concept of loadingCron which should take
some of the responsibilities of serverCron.
It attempts to limit its rate to approximately the server Hz, but may
not be very accurate.
In order to avoid too many system calls, we use the cached ustime, and
also make sure to update it in both AOF loading and RDB loading inside
processEventsWhileBlocked (it seems AOF loading was missing it).
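A rough sketch of the rate limiting described above, with stand-in globals (the real code uses fields on the server struct, and the exact mechanism may differ):
```
#include <stdint.h>

static int64_t cached_ustime;    /* stand-in for the cached ustime updated in
                                  * processEventsWhileBlocked() */
static int hz = 10;              /* stand-in for server.hz */
static int64_t next_cron_us = 0;

static void loadingCronSketch(void) {
    if (cached_ustime < next_cron_us) return;    /* called too soon, skip */
    next_cron_us = cached_ustime + 1000000 / hz; /* roughly hz runs per second */
    /* ... sample RSS and fragmentation stats here, as serverCron does ... */
}
```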
If the server gets a MULTI command followed by only read
commands, and right before it gets the EXEC it reaches OOM,
the client will get an OOM response.
So, from now on, it will get an OOM response only if there was
at least one command that was tagged with the `use-memory` flag.
A first step toward enabling a consistent full percentile analysis of query latency, so that we can fully understand the performance and stability characteristics of the redis-server system we are measuring. It also improves the instantaneous reported metrics and the CSV output format.
Otherwise, it is treated as a single allocation and freed synchronously. The following logic is used for estimating the effort in constant-ish time complexity:
1. Check the number of nodes.
2. Add an allocation for each consumer group registered inside the stream.
3. Check the number of PELs in the first CG, and then add this count times the number of CGs.
4. Check the number of consumers in the first CG, and then add this count times the number of CGs.
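A hedged sketch of that estimate, using Redis' stream and rax types (the helper name and field access are assumptions; the real estimator may differ in details):
```
#include "server.h"  /* stream, streamCG, rax iterator API */

static size_t streamEffortSketch(stream *s) {
    size_t effort = s->rax->numnodes;               /* 1. radix tree nodes */
    if (s->cgroups == NULL) return effort;
    size_t groups = raxSize(s->cgroups);
    effort += groups;                               /* 2. one per consumer group */
    raxIterator ri;
    raxStart(&ri, s->cgroups);
    raxSeek(&ri, "^", NULL, 0);
    if (raxNext(&ri)) {                             /* sample the first CG only */
        streamCG *cg = ri.data;
        effort += raxSize(cg->pel) * groups;        /* 3. PELs, scaled by #CGs */
        effort += raxSize(cg->consumers) * groups;  /* 4. consumers, scaled by #CGs */
    }
    raxStop(&ri);
    return effort;
}
```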
The previous fix using _Atomic was insufficient, since we check and set it in
different places.
The implications of this bug are just that a portion of the bug report will be shown
twice, in the race case of two concurrent crashes.
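A generic sketch of the usual remedy for such check-then-set races (the general pattern, not necessarily the exact fix applied here): perform the check and the set as a single atomic operation.
```
#include <stdatomic.h>

static atomic_int crash_report_started;

static void maybeStartCrashReport(void) {
    int expected = 0;
    /* Only the first caller to flip 0 -> 1 proceeds; a concurrent second
     * crash cannot pass the check after someone else already set the flag. */
    if (!atomic_compare_exchange_strong(&crash_report_started, &expected, 1))
        return;
    /* ... print the crash report header, registers, stack trace, ... */
}
```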
When calling the LPOS command with a RANK higher than the number of matches,
the return value is an invalid response. For example:
```
LPUSH l a
:1
LPOS l b RANK 5 COUNT 10
*-4
```
This may break client-side parsers.
Now, we count how many replies were actually emitted in the array.
```
LPUSH l a
:1
LPOS l b RANK 5 COUNT 10
*0
```
After a fork, the child process (redis-aof-rewrite) inherits the fd opened
by the parent process (redis). When redis is killed by kill -9, it does not
exit gracefully (prepareForShutdown() is not called), so the redis-aof-rewrite
child may still be alive, the fd (lock) will still be held by it, and a
redis restart will fail to acquire the lock, i.e. fail to start.
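A small self-contained demonstration of why the restart fails, assuming the lock behaves like an flock() lock on an inherited fd (the path and sleep are made up for the demo):
```
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/file.h>

int main(void) {
    int fd = open("/tmp/demo.lock", O_CREAT | O_RDWR, 0644);
    /* Fails while any process, parent or forked child, still holds the fd,
     * because flock() locks live on the shared open file description. */
    if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
        perror("flock");
        return 1;
    }
    if (fork() == 0) {   /* child inherits the fd and, with it, the lock */
        sleep(60);       /* lock stays held even if the parent is killed -9 */
        _exit(0);
    }
    return 0;            /* parent exits; a new instance still cannot lock */
}
```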
This issue was causing failures in the cluster tests in GitHub Actions.
Co-authored-by: Oran Agra <oran@redislabs.com>
Since users often post just the crash log in GitHub issues, the log
line printed above it is missing.
There is no reason not to include the size in the panic message itself.