Two jobs were missing from the job list for failure notification:
* test-ubuntu-tls-io-threads
* test-sanitizer-force-defrag
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
Addresses the failure here:
https://github.com/valkey-io/valkey/actions/runs/13000845302/job/36259016156#step:5:7272.
This change does three things:
1. Tcl 8.5 (which is used on macOS) consumes all hex digits after a
`\x` escape but keeps only the last two, so it handles `\x03ba` as
`\xba` (see https://www.tcl-lang.org/man/tcl8.5/TclCmd/Tcl.htm#M27).
We therefore encode "bar" using hex escapes too.
2. Fix a spacing issue.
3. Make it so that if the restore fails, it immediately errors.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Do command existence and arity checks when loading AOF to avoid crash
Currently, loading commands such as `cluster` or `cluster slots xxx`
from AOF will cause the server to crash.
1. `cluster` is a container command, and executing its proc directly
causes a crash because we do not check the subcommand and arity.
2. For `cluster slots xxx`, the arity check fails, an error is replied
to the AOF client, and that triggers a panic.
Of course, there are many other ways for a problematic AOF to cause a
panic, but it is still necessary to do some basic checks before
executing. This way, in these basic cases, we can print useful error
messages instead of crashing directly, as in the sketch below.
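A minimal sketch of such checks, modeled on the server's usual lookup
and arity rules (names and messages are illustrative, not the exact
patch):
```c
/* Hypothetical basic validation while replaying an AOF command.
 * lookupCommand resolves container commands like CLUSTER to their
 * subcommand; positive arity means an exact argc, negative a minimum. */
struct serverCommand *cmd = lookupCommand(argv, argc);
if (!cmd) {
    serverLog(LL_WARNING, "Unknown command '%s' when loading the AOF",
              (char *)argv[0]->ptr);
    exit(1);
}
if ((cmd->arity > 0 && cmd->arity != argc) || (argc < -cmd->arity)) {
    serverLog(LL_WARNING,
              "Wrong number of arguments for '%s' when loading the AOF",
              (char *)argv[0]->ptr);
    exit(1);
}
```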
Signed-off-by: Binbin <binloveplay1314@qq.com>
Use the Linux syscalls mmap/munmap to manage an RDMA memory region, so
we get a guard-page protected VMA like this (cat /proc/PID/maps):
785018afe000-785018aff000 ---p 00000000 00:00 0 -> top guard page
785018aff000-785018bff000 rw-p 00000000 00:00 0 -> RDMA memory region
785018bff000-785018c00000 ---p 00000000 00:00 0 -> bottom guard page
Once any code accesses this memory unexpectedly, a segmentation fault
occurs.
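A minimal sketch of how such a layout can be created (illustrative, not
the exact RDMA code):
```c
#include <sys/mman.h>
#include <unistd.h>

/* Reserve guard page + region + guard page as one PROT_NONE mapping,
 * then open up read/write access only for the middle part. */
static void *alloc_guarded_region(size_t len) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    char *base = mmap(NULL, len + 2 * page, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) return NULL;
    if (mprotect(base + page, len, PROT_READ | PROT_WRITE) != 0) {
        munmap(base, len + 2 * page);
        return NULL;
    }
    return base + page; /* touching either guard page now faults */
}
```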
Signed-off-by: zhenwei pi <zhenwei.pi@linux.dev>
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
In C, it is best to initialize every field in a struct; this PR fixes
one missed field initialization.
---------
Signed-off-by: hwware <wen.hui.ware@gmail.com>
This addresses 2 issues:
1. It is possible (somehow) that the inner server client (r) was not
using RESP3 when entering this test. This change makes sure it is.
2. In case the test failed, it might leave the redirection client
closed. There is a cross-test assumption that it should be open, so
most of the assert checks were moved to the end of the test.
Example failure:
https://github.com/valkey-io/valkey/actions/runs/12979601179/job/36195523412
---------
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
New config `rdb-version-check` with values:
* `strict`: Reject future RDB versions.
* `relaxed`: Try parsing future RDB versions and fail only when an
unknown RDB opcode or type is encountered.
This can make it possible for Valkey 8.1 to try to read a dump from,
for example, Valkey 9.0 or later on a best-effort basis. The conditions for
when this is expected to work can be defined when the future Valkey
versions are released. Loading is expected to fail in the following
cases:
* If the data set contains any new key types or other data elements not
supported by the current version.
* If the RDB contains new representations or encodings of existing key
types or other data elements.
This change also prepares for the next RDB version bump. A range of RDB
versions (12-79) is reserved: it's expected to be used by foreign
software's RDB versions, so Valkey will not accept versions in this
range even with the `relaxed` version check. The DUMP/RESTORE format
has no magic string, only the RDB version number.
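A sketch of the resulting version gate, assuming a config enum such as
`RDB_VERSION_CHECK_RELAXED` (names are illustrative; `RDB_VERSION` is
the highest version this build writes):
```c
/* Illustrative check of a dump's RDB version number. */
static int rdbVersionAllowed(int rdbver, int check_mode) {
    if (rdbver >= 12 && rdbver <= 79) return 0;     /* reserved for foreign software */
    if (rdbver <= RDB_VERSION) return 1;            /* current or older: always OK */
    return check_mode == RDB_VERSION_CHECK_RELAXED; /* future: best effort only */
}
```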
This change also prepares for the magic string to change from REDIS to
VALKEY next time we bump the RDB version.
Related to #1108.
---------
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
A use-after-free was detected by the address sanitizer, such as in this
test run:
https://github.com/valkey-io/valkey/actions/runs/12981530413/job/36200075972?pr=1620#step:5:1339
`hashtableShrinkIfNeeded` may free one of the hash tables and invalidate
the variables used by the `fillBucketHole(ht, b, pos_in_bucket,
table_index)` call just after it, causing the use-after-free. Filling
the bucket hole first and shrinking afterwards is assumed to solve the
issue. (Not reproduced locally.)
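In other words, the fix is a reordering along these lines (names taken
from the description above, not the literal diff):
```c
/* Fill the hole while ht, b and pos_in_bucket are still valid;
 * shrinking may reallocate the tables and invalidate them. */
fillBucketHole(ht, b, pos_in_bucket, table_index);
hashtableShrinkIfNeeded(ht);
```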
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Might close https://github.com/valkey-io/valkey/issues/1484.
I noticed that we don't disable pause-after-fork on the last test that
was executed, so it might get stuck in pause loops after the test ends
if it tries another psync for any reason.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Fixes intermittent failures in the hashtable random fairness unit test
when running with the `--accurate` flag.
https://github.com/valkey-io/valkey/actions/runs/12969591890/job/36173815884#step:10:105
The test case picks a random element out of 400, repeated 1M times, and
then checks that 60% of the elements are picked within 3 standard
deviations from the number of times they're expected to be picked. In
this test run (with `--accurate`), the expected number is 2500 and the
standard deviation is 50, which is only 2% of the expected value. This
makes the check too strict and makes the test flaky.
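For reference, these numbers follow directly from the binomial
distribution:
```
N = 1,000,000 draws, p = 1/400
expected = N·p = 2500
σ = sqrt(N·p·(1−p)) = sqrt(2500 · 399/400) ≈ 50   (2% of 2500, so 3σ is only 6%)
```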
As an alternative, we allow 80% of the elements to be picked within 10%
of the expected number. With this alternative condition, we can also
raise the check for the non-edge case from 60% to 80% of the elements to
be within 3 standard deviations. (With fewer repetitions, 3 standard
deviations is greater than 10% of the expected value, so this new
condition only affects the `--accurate` test run.)
Additional change: Set a random seed to the hash function in the test
suite. Until now, we only seeded the random number generator.
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
As discussed in PR #336.
We have different types of resources: CPU, memory, network, etc. The
`slowlog` can only record commands that eat a lot of CPU during the
processing phase (not including read/write network time); it cannot
record commands that consume a lot of memory or network bandwidth. For
example:
1. Running a "SET key value(10 megabytes)" command is not recorded in
the slowlog, since when processing it the SET command only inserts the
value's pointer into the db dict. But that command eats a huge amount
of memory in the query buffer and bandwidth from the network. In this
case, just 1000 TPS can cause 10 GB/s of network flow.
2. Running a "GET key" command where the key's value length is 10
megabytes can eat a huge amount of memory in the output buffer and
bandwidth to the network.
This PR introduces a new command `COMMANDLOG`, to log commands that
consume significant network bandwidth, including both input and output.
Users can retrieve the results using `COMMANDLOG get <count>
large-request` and `COMMANDLOG get <count> large-reply`, all subcommands
for `COMMANDLOG` are:
* `COMMANDLOG HELP`
* `COMMANDLOG GET <count> <slow|large-request|large-reply>`
* `COMMANDLOG LEN <slow|large-request|large-reply>`
* `COMMANDLOG RESET <slow|large-request|large-reply>`
And the slowlog is also incorporated into the commandlog.
For each of these three types, additional configs have been added for
control:
* `commandlog-request-larger-than` and
`commandlog-large-request-max-len` represent the threshold for large
requests (in bytes) and the maximum number of commands that can be
recorded.
* `commandlog-reply-larger-than` and `commandlog-large-reply-max-len`
represent the threshold for large replies (in bytes) and the maximum
number of commands that can be recorded.
* `commandlog-execution-slower-than` and
`commandlog-slow-execution-max-len` represent the threshold for slow
executions (in microseconds) and the maximum number of commands that
can be recorded.
* Additionally, `slowlog-log-slower-than` and `slowlog-max-len` are now
set as aliases for these two new configs.
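For example, given the subcommands and configs above, usage could look
like this (the threshold value is illustrative):
```
CONFIG SET commandlog-request-larger-than 1048576
COMMANDLOG GET 10 large-request
COMMANDLOG LEN large-reply
COMMANDLOG RESET slow
```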
---------
Signed-off-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Ping Xie <pingxie@outlook.com>
This PR builds upon the [previous entry prefetching
optimization](https://github.com/valkey-io/valkey/pull/1501) to further
enhance performance by implementing value prefetching for hashtable
iterators.
## Implementation
Modified `hashtableInitIterator` to accept a new flags parameter,
allowing control over iterator behavior.
Implemented conditional value prefetching within `hashtableNext` based
on the new `HASHTABLE_ITER_PREFETCH_VALUES` flag.
When the flag is set, `hashtableNext` now calls `prefetchBucketValues`
at the start of each new bucket, preemptively loading the values of
filled entries into the CPU cache.
The actual prefetching of values is performed using type-specific
callback functions implemented in `server.c`:
- For `robj`, the `hashtableObjectPrefetchValue` callback is used to
prefetch the value if it is not embedded.
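A sketch of how an iteration site opts in (signatures are approximate;
the reset call is an assumed cleanup counterpart):
```c
/* Iterate a hashtable with value prefetching enabled. */
hashtableIterator iter;
hashtableInitIterator(&iter, ht, HASHTABLE_ITER_PREFETCH_VALUES);
void *entry;
while (hashtableNext(&iter, &entry)) {
    /* Values in each new bucket were prefetched into the CPU cache
     * by prefetchBucketValues before we get here. */
}
hashtableResetIterator(&iter);
```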
This implementation is specifically focused on main database iterations
at this stage. Applying it to hashtables that hold other object types
should not be problematic, but its performance benefits for those cases
will need to be proven through testing and benchmarking.
## Performance
### Setup:
- 64-core Graviton 3 Amazon EC2 instance.
- 50 million keys with different value sizes.
- Running the valkey server over a RAM file system.
- CRC checksum and compression off.
### Action
- save command.
### Results
The results for the duration of the “save” command were taken from the
“info all” command.
```
+--------------------+------------------+------------------+
| Prefetching | Value size (byte)| Time (seconds) |
+--------------------+------------------+------------------+
| No | 100 | 20.112279 |
| Yes | 100 | 12.758519 |
| No | 40 | 16.945366 |
| Yes | 40 | 10.902022 |
| No | 20 | 9.817000 |
| Yes | 20 | 9.626821 |
| No | 10 | 9.71510 |
| Yes | 10 | 9.510565 |
+--------------------+------------------+------------------+
```
The results largely align with our expectations, showing significant
improvements for larger values (100 bytes and 40 bytes) that are stored
outside the robj. For smaller values (20 bytes and 10 bytes) that are
embedded within the robj, we see almost no improvement, which is as
expected.
However, the small improvement observed even for these embedded values
is somewhat surprising. Given that we are not actively prefetching these
embedded values, this minor performance gain was not anticipated.
perf record on save command **without** value prefetching:
```
--99.98%--rdbSaveDb
|
|--91.38%--rdbSaveKeyValuePair
| |
| |--42.72%--rdbSaveRawString
| | |
| | |--26.69%--rdbWriteRaw
| | | |
| | | --25.75%--rioFileWrite.lto_priv.0
| | |
| | --15.41%--rdbSaveLen
| | |
| | |--7.58%--rdbWriteRaw
| | | |
| | | --7.08%--rioFileWrite.lto_priv.0
| | | |
| | | --6.54%--_IO_fwrite
| | |
| | |
| | --7.42%--rdbWriteRaw.constprop.1
| | |
| | --7.18%--rioFileWrite.lto_priv.0
| | |
| | --6.73%--_IO_fwrite
| |
| |
| |--40.44%--rdbSaveStringObject
| |
| --7.62%--rdbSaveObjectType
| |
| --7.39%--rdbWriteRaw.constprop.1
| |
| --7.04%--rioFileWrite.lto_priv.0
| |
| --6.59%--_IO_fwrite
|
|
--7.33%--hashtableNext.constprop.1
|
--6.28%--prefetchNextBucketEntries.lto_priv.0
```
perf record on save command **with** value prefetching:
```
rdbSaveRio
|
--99.93%--rdbSaveDb
|
|--79.81%--rdbSaveKeyValuePair
| |
| |--66.79%--rdbSaveRawString
| | |
| | |--42.31%--rdbWriteRaw
| | | |
| | | --40.74%--rioFileWrite.lto_priv.0
| | |
| | --23.37%--rdbSaveLen
| | |
| | |--11.78%--rdbWriteRaw
| | | |
| | | --11.03%--rioFileWrite.lto_priv.0
| | | |
| | | --10.30%--_IO_fwrite
| | | |
| | |
| | --10.98%--rdbWriteRaw.constprop.1
| | |
| | --10.44%--rioFileWrite.lto_priv.0
| | |
| | --9.74%--_IO_fwrite
| | |
| |
| |--11.33%--rdbSaveObjectType
| | |
| | --10.96%--rdbWriteRaw.constprop.1
| | |
| | --10.51%--rioFileWrite.lto_priv.0
| | |
| | --9.75%--_IO_fwrite
| | |
| |
| --0.77%--rdbSaveStringObject
|
--18.39%--hashtableNext
|
|--10.04%--hashtableObjectPrefetchValue
|
--6.06%--prefetchNextBucketEntries
```
Conclusions:
The prefetching strategy appears to be working as intended, shifting the
performance bottleneck from data access to I/O operations.
The significant reduction in rdbSaveStringObject time suggests that
string objects (which are the values) are being accessed more
efficiently.
Signed-off-by: NadavGigi <nadavgigi102@gmail.com>
This includes a way to run two versions of the server from the TCL test
framework. It's a preparation to add more cross-version tests. The
runtest script accepts a new parameter
./runtest --other-server-path path/to/valkey-server
and a new tag "needs:other-server" for test cases and start_server.
Tests with this tag are automatically skipped if `--other-server-path`
is not provided.
This PR adds it in a CI job with Valkey 7.2.7 by downloading a binary
release.
Fixes #76
---------
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
The desync regression test was created as a regression test for the
following bug:
if a NUL termination is embedded inside an inline/multi-bulk message,
we will not be able to perform strchr in order to identify the
newline (\n) / carriage-return (\r) in the client query buffer. This
can affect (for example) a replica reading the primary stream, which
keeps filling its query buffer endlessly, consuming more and more
memory.
In order to handle the above risk, a check was added to verify that the
inline bulk and multi-bulk sizes do not exceed 64K bytes in the query
buffer. A test was added to verify this.
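A minimal illustration of why an embedded NUL defeats a strchr-based
scan:
```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* The array contains an embedded '\0' before the "\r\n" terminator. */
    char buf[] = "GET k\0AAAA\r\n";
    /* strchr stops at the first '\0', so the newline is never found and
     * the parser would keep waiting for more data while the query
     * buffer grows. */
    char *nl = strchr(buf, '\n');
    printf("%s\n", nl == NULL ? "newline not found" : "newline found");
    return 0;
}
```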
This PR introduces the following fixes to the desync regression test:
1. Fix the sent payload to flush 1024-byte blocks of 'A's instead of
the literal string 'payload', which was sent by mistake.
2. Make sure that the server correctly terminates the connection on
protocol error right after exceeding 64K, not far beyond it.
3. Add another test case which also verifies nested bulks with embedded
NUL termination (this was not verified before).
fixes https://github.com/valkey-io/valkey/issues/1583
NOTE: Although it is possible to change the use of strchr to a "safer"
utility (e.g. memchr) which will not stop the scan at the first
occurrence of '\0', we still want to protect against excessive usage of
the query buffer and also preserve the current behavior. We will look
into improving this in a follow-up issue.
---------
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
Signed-off-by: ranshid <88133677+ranshid@users.noreply.github.com>
Change the format of the dual channel replication logs so that it does
not conflict with existing log formats, such as those of modules.
Fixes: https://github.com/valkey-io/valkey/issues/1509
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
The test case checks for expire-cycle in LATENCY LATEST, but with the
new hash table, the expire-cycle is too fast to be logged by the
latency monitor. Lower the latency monitor threshold to make it more
likely to be logged.
Fixes #1580
---------
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
When processing a cluster bus PING extension, there is a memory leak
when adding a new key to the `nodes_black_list` dict. We now make sure
to free the key `sds` if the dict did not take ownership of it.
Signed-off-by: Pierre Turin <pieturin@amazon.com>
This issue affected only two message types (CLUSTERMSG_TYPE_PUBLISH and
CLUSTERMSG_TYPE_PUBLISHSHARD) because they used a light message header,
which caused the CLUSTER INFO stats to miss sent/received message
information for those types.
---------
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
Signed-off-by: Harkrishn Patro <bunty.hari@gmail.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
This commit creates a new compilation unit for the scripting engine code
by extracting the existing code from the functions unit.
We're doing this refactor to prepare the code for running the `EVAL`
command using different scripting engines.
This PR has a module API change: we changed the type of error messages
returned by the callback
`ValkeyModuleScriptingEngineCreateFunctionsLibraryFunc` to be a
`ValkeyModuleString` (aka `robj`).
This PR also fixes #1470.
---------
Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
Some commands that use unix-time, such as `EXPIREAT` and `SET EXAT`,
should include the deleted keys in the `expired_keys` statistics if the
specified time has already passed, and notifications should be sent in
the same manner as for expired keys.
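For example (hypothetical session; timestamp 1 is far in the past):
```
SET foo bar
EXPIREAT foo 1    -> the time has already passed, so foo is deleted
                     right away, expired_keys is incremented, and an
                     "expired" notification is sent
```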
---------
Signed-off-by: Ray Cao <zisong.cw@alibaba-inc.com>
Just like the spell-check workflow, we should allow triggering it on
push events, so that fork repos can notice formatting issues well
before submitting a PR.
Signed-off-by: Binbin <binloveplay1314@qq.com>
Adds filter options to CLIENT LIST:
* USER <username>
Return clients authenticated by <username>.
* ADDR <ip:port>
Return clients connected from the specified address.
* LADDR <ip:port>
Return clients connected to the specified local address.
* SKIPME (YES|NO)
Exclude the current client from the list (default: no).
* MAXAGE <maxage>
Only list connections older than the specified age.
Modifies the ID filter of CLIENT KILL to allow multiple IDs:
* ID <client-id> [<client-id>...]
Kill connections by client ids.
This makes CLIENT LIST and CLIENT KILL accept the same options.
For backward compatibility, the default value for SKIPME is NO for
CLIENT LIST and YES for CLIENT KILL.
The MAXAGE filter comes from CLIENT KILL, where it *keeps* clients up
to the given max age and kills the older ones. This logic becomes weird
for CLIENT LIST, but it is kept for similarity with CLIENT KILL, for
the use case of first testing manually using CLIENT LIST and then
running CLIENT KILL with the same filters.
The `ID client-id [client-id ...]` filter no longer needs to be the
last one. The parsing logic determines whether an argument is an ID
based on whether it can be parsed as an integer.
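For example, combining the filters above (usernames, addresses and IDs
are illustrative):
```
CLIENT LIST USER alice MAXAGE 3600
CLIENT LIST LADDR 127.0.0.1:6379 SKIPME yes
CLIENT KILL ID 77 88 99
```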
Partly addresses: #668
---------
Signed-off-by: Sarthak Aggarwal <sarthagg@amazon.com>
Add `paused_actions` and `paused_timeout_milliseconds` to INFO Clients
to inform users whether clients are paused.
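The new fields show up in the Clients section roughly like this (values
are illustrative, assuming no pause is active):
```
# Clients
paused_actions:none
paused_timeout_milliseconds:0
```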
---------
Signed-off-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
`sds` is a typedef of `char *`.
`const sds` means `char * const`, i.e. a const-pointer to non-const
content.
More often, you would want `const char *`, i.e. a pointer to
const-content. Until now, it's not possible to express that. This PR
adds `const_sds` which is a pointer to const-content sds.
To get a const-pointer to const-content sds, you can use `const
const_sds`.
In this PR, some uses of `const sds` are replaced by `const_sds`. We can
use it more later.
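Given that `sds` is `typedef char *sds;`, the new type is essentially
(a sketch):
```c
#include <stddef.h>

typedef char *sds;              /* existing: pointer to mutable content */
typedef const char *const_sds;  /* new: pointer to const content */

/* The callee may inspect but not modify the string contents. */
size_t my_len(const_sds s);

/* const const_sds p;  -> const pointer to const content */
```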
Fixes #1542
---------
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
In some cases unix groups could have whitespace and/or `\` in them.
One example is my workstation. It's a MacOS in an Active Directory
domain. So my user has group `LD\Domain Users`.
Running `make test` on `unstable` and `8.0` branches fails with:
I'm not sure if we need to fix this in 8.0, but it seems that it should
be fixed in unstable.
Signed-off-by: secwall <secwall@yandex-team.ru>
This PR replaces dict with the new hashtable data structure in the HASH
datatype. There is a new struct for hashtable items which contains a
pointer to value sds string and the embedded key sds string. These
values were previously stored in dictEntry. This structure is kept
opaque so we can easily add small value embedding or other optimizations
in the future.
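An illustrative layout of such an item (the real struct is kept opaque
and may differ):
```c
typedef char *sds;

/* Hypothetical hash item: value pointer plus the key sds embedded at
 * the tail of the same allocation. */
typedef struct hashTypeEntry {
    sds value;               /* pointer to the value sds string */
    unsigned char key_buf[]; /* embedded key sds bytes live here */
} hashTypeEntry;
```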
closes #1095
---------
Signed-off-by: Rain Valentine <rsg000@gmail.com>
After #1545 disabled some tests for reply schema validation, we now
have another issue: ECHO is not covered.
```
WARNING! The following commands were not hit at all:
echo
ERROR! at least one command was not hit by the tests
```
This patch adds a test case for ECHO in the unit/other test suite. I
haven't checked if there are more commands that aren't covered.
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
The commands used in the valkey-cli tests are not important for reply
schema validation. Skip them to avoid the problem of tests hanging.
This has failed lately in the daily job:
```
[TIMEOUT]: clients state report follows.
sock55fedcc19be0 => (IN PROGRESS) valkey-cli pubsub mode with single standard channel subscription
Killing still running Valkey server 33357
```
These test cases use a special valkey-cli command, `:get pubsub`, which
is an internal valkey-cli command rather than a Valkey server command.
This command hangs when compiled with logreqres enabled. The easy
solution is to skip these tests in that setup.
The test cases were introduced in #1432.
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>