Update references of copyright being assigned to Salvatore when it was
transferred to Redis Ltd. as per
https://github.com/valkey-io/valkey/issues/544.
---------
Signed-off-by: Pieter Cailliau <pieter@redis.com>
I have validated that these settings closely match the existing coding
style with one major exception on `BreakBeforeBraces`, which will be
`Attach` going forward. The mixed `BreakBeforeBraces` styles in the
current codebase are hard to imitate and also very odd IMHO - see below
```
if (a == 1) { /* Attach */
}
```
```
if (a == 1 ||
b == 2)
{ /* Why? */
}
```
Please do NOT merge just yet. Will add the github action next once the
style is reviewed/approved.
---------
Signed-off-by: Ping Xie <pingxie@google.com>
This includes comments used for module API documentation.
* Strategy for replacement: Regex search: `(//|/\*| \*|#).* ("|\()?(r|R)edis( |\.|'|\n|,|-|\)|")(?!nor the names of its contributors)(?!Ltd.)(?!Labs)(?!Contributors.)`
* Don't edit copyright comments
* Replace "Redis version X.X" -> "Redis OSS version X.X" to distinguish
from newly licensed repository
* Replace "Redis Object" -> "Object"
* Exclude markdown for now
* Don't edit Lua scripting comments referring to redis.X API
* Replace "Redis Protocol" -> "RESP"
* Replace redis-benchmark, -cli, -server, -check-aof/rdb with "valkey-"
prefix
* In most other places, I used best judgement to either remove "Redis" or
replace it with "the server" or "server"
Fixes #148
---------
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
When we insert entries into a dict, it may autonomously expand if needed.
However, when we delete entries from a dict, it doesn't shrink back to a
proper size. If a very large dict holds only a few entries, this can waste
a lot of memory and make iteration inefficient.
The main keyspace dicts (keys and expires) are shrunk by cron
(`tryResizeHashTables` calls `htNeedsResize` and `dictResize`),
and some data structures such as zset and hash also do that (call
`htNeedsResize`) right after a loop of calls to `dictDelete`.
But many other dicts are completely missing that call (they can only
expand).
In this PR, we provide the ability to automatically shrink the dict when
deleting. The conditions that trigger shrinking are the same as
`htNeedsResize` used to have, i.e. we expand when we're over 100%
utilization, and shrink when we're below 10% utilization.
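As a rough standalone sketch of those thresholds (not the actual dict.c code; `DICT_MIN_SIZE` here is illustrative and stands in for the dict's initial table size):
```
#include <stdbool.h>

#define DICT_MIN_SIZE 4

/* Grow when the table is at or above 100% utilization. */
static bool needs_expand(unsigned long used, unsigned long size) {
    return used >= size;
}

/* Shrink when utilization drops below 10% and the table isn't minimal. */
static bool needs_shrink(unsigned long used, unsigned long size) {
    return size > DICT_MIN_SIZE && used * 100 < size * 10;
}
```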
Additionally:
* Add `dictPauseAutoResize` so that flows that do mass deletions will
only trigger shrinkage at the end.
* Rename `dictResize` to `dictShrinkToFit` (same logic as it used to
have, but a name that better describes it)
* Rename `_dictExpand` to `_dictResize` (same logic as it used to have,
but a name that better describes it)
related to discussion
https://github.com/redis/redis/pull/12819#discussion_r1409293878
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.
## Important changes
* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also to select a random dictionary first. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree (see the sketch after this list). With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index tree that is used for random key selection; this index allows us to find the slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id into the LSBs of the cursor so it can be passed around between the client and the server. This has an interesting side effect: you can now start scanning a specific slot by simply providing the slot id as the cursor value. The plan is to not document this as defined behavior, however. It's also worth noting that the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we rely on the slot id cached in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading an RDB, it's not enough to know the total number of keys (we could approximate the number of keys per slot, but it wouldn't be precise). To address this, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way the DBSIZE operation stays O(1). The same is kept for O(1) expires computation as well.
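As a rough standalone illustration of the random-slot selection mentioned above (a binary index tree, a.k.a. Fenwick tree, over per-slot key counts; the server's actual structure and names differ):
```
#define NUM_SLOTS 16384

static long long tree[NUM_SLOTS + 1]; /* 1-based Fenwick tree of key counts */

/* Adjust the key count of `slot` by `delta` (on key add/delete). */
static void slot_count_add(int slot, long long delta) {
    for (int i = slot + 1; i <= NUM_SLOTS; i += i & -i) tree[i] += delta;
}

/* Return the slot holding the key with global 0-based index `target`, i.e.
 * the smallest slot whose cumulative key count exceeds `target`. This uses
 * a single logarithmic descent; the same lookup can also be done as a plain
 * binary search over prefix sums. */
static int slot_for_key_index(long long target) {
    int pos = 0;
    for (int step = NUM_SLOTS; step > 0; step >>= 1) {
        if (pos + step <= NUM_SLOTS && tree[pos + step] <= target) {
            pos += step;
            target -= tree[pos];
        }
    }
    return pos;
}
```
A fair random key can then start from `slot_for_key_index(random_index)` with `random_index` drawn uniformly from [0, total key count), falling back to the chosen dictionary's own random bucket selection.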
## Performance
This change improves SET performance in cluster mode by ~5%; most of the gains come from no longer having to maintain the linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict.
RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.
## Interface changes
* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`
* The Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored in it (see the sketch below).
* New RDB version to support the new op code for SLOT information.
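A hypothetical standalone sketch of the cursor layout (the exact bit widths and helper names in the server may differ): the 14-bit slot id sits in the low bits and the per-dict scan cursor in the remaining high bits, which is why 32 bits are no longer enough.
```
#include <stdint.h>

#define SLOT_BITS 14                          /* 2^14 = 16384 slots */
#define SLOT_MASK ((1ULL << SLOT_BITS) - 1)

static uint64_t cursor_pack(uint64_t dict_cursor, unsigned slot) {
    return (dict_cursor << SLOT_BITS) | (slot & SLOT_MASK);
}

static unsigned cursor_slot(uint64_t cursor) {
    return (unsigned)(cursor & SLOT_MASK);
}

static uint64_t cursor_in_dict(uint64_t cursor) {
    return cursor >> SLOT_BITS;
}
```
Under this layout, passing a bare slot id as the cursor means "start scanning that slot from position 0", which is the side effect mentioned above.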
---------
Co-authored-by: Vitaly Arbuzov <arvit@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Roshan Khatri <rvkhatri@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Optimized the HRANDFIELD and ZRANDMEMBER commands as in #8444, for
CASE 3 under listpack encoding. The boost is implemented as a new CASE 2.5,
which is listpack only: sampling unique elements, in non-random order.
Listpack encoded hashes / zsets are meant to be relatively small, so
HRANDFIELD_SUB_STRATEGY_MUL / ZRANDMEMBER_SUB_STRATEGY_MUL
isn't necessary and we'd rather not make copies of the entries. Instead, we
emit them directly to the output buffer.
Simple benchmarks show it provides some 400% improvement in both HRANDFIELD
and ZRANDMEMBER in CASE 3.
Unrelated changes: remove useless setTypeRandomElements and fix a typo.
For sets and hashes that will eventually be stored as the hashtable encoding, it's much faster to immediately convert them to that encoding and then perform the insertions, since this avoids the O(N) search and frequent reallocations. This change checks the number of arguments in the incoming command, and converts the data structure up front if the number of new entries exceeds the listpack-max-entries configuration. This can cause us to over-allocate memory if there are duplicate entries in the input, which is unexpected.
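A hedged sketch of the idea for the hash case (the helper name is illustrative; it assumes the existing `hashTypeConvert` helper and the `hash_max_listpack_entries` config):
```
/* Sketch: if a single command is about to insert more new field/value pairs
 * than a listpack is allowed to hold anyway, convert the object to the
 * hashtable encoding up front and skip the intermediate listpack inserts. */
static void hashTypeTryConversionOnBatchInsert(robj *o, unsigned long new_pairs) {
    if (o->encoding == OBJ_ENCODING_LISTPACK &&
        new_pairs > server.hash_max_listpack_entries)
    {
        hashTypeConvert(o, OBJ_ENCODING_HT);
    }
}
```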
unstable:
    throughput summary: 805.54 requests per second
    latency summary (msec):
            avg       min       p50       p95       p99       max
         61.908    25.680    68.351    73.279    75.967    79.295
hset-improvement:
    throughput summary: 4701.46 requests per second
    latency summary (msec):
            avg       min       p50       p95       p99       max
         10.546     0.832    11.959    12.471    13.119    14.967
Related to the hang reported in #11671
Currently, redis can disconnect a client for reaching the output buffer limit;
it'll also avoid feeding that output buffer with more data, but it will keep
running the loop in the command (despite the client already being marked for
disconnection).
This PR is an attempt to mitigate the problem, specifically for commands that
are easy to abuse: KEYS, HRANDFIELD, SRANDMEMBER, ZRANDMEMBER.
The RAND family of commands can take a negative COUNT argument (which is not
bound to the number of elements in the key), so it's enough to create a key
with one field, and then these commands can be used to hang redis.
For KEYS the caller can use the existing keyspace in redis (if big enough).
Check the validity of the value before performing the create operation; this
prevents new data from being generated when the request fails to execute.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: chendianqiang <chendianqiang@meituan.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
Before this change, in the hashTypeSet() function we first use dictFind()
to look for the field, and if it does not exist we use dictAdd() to add it.
In dictAdd() the dictionary looks for the field again, which is meaningless
since we already know that the field does not exist.
An optimization is to use dictAddRaw() instead of dictFind() and dictAdd().
With dictAddRaw(), a new entry is added when the field does not exist, and
all we then have to do is set the value of that entry, and set its key to
'sdsdup(field)' in the case that the 'HASH_SET_TAKE_FIELD' flag wasn't set.
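A hedged fragment showing the resulting pattern inside hashTypeSet() (simplified, not the literal patch; `ht` stands for the hash's dict and `v` for the value sds being stored):
```
dictEntry *de, *existing = NULL;

/* One lookup: dictAddRaw() inserts a new entry if `field` is absent and
 * returns it; otherwise it returns NULL and points `existing` at the entry. */
de = dictAddRaw(ht, field, &existing);
if (de) {
    /* New field. If we may not take ownership of `field`, store a copy. */
    if (!(flags & HASH_SET_TAKE_FIELD))
        dictSetKey(ht, de, sdsdup(field));
    dictSetVal(ht, de, v);
} else {
    /* Field already present: only the value needs to be replaced. */
    sdsfree(dictGetVal(existing));
    dictSetVal(ht, existing, v);
}
```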
* Extending the use of hashTypeGetValue.
The functions hashTypeExists, hashTypeGetValueLength and addHashFieldToReply
have a similar pattern of calling hashTypeGetFromHashTable or
hashTypeGetFromZipList depending on the underlying data structure. What
these functions are doing is exactly what hashTypeGetValue does, so they
were changed to use the existing hashTypeGetValue, making the code more
consistent.
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Fixed a bug where using the `hincrbyfloat` or `hincrby` commands to make the field or value exceed
`hash_max_listpack_value` did not change the object encoding of the hash structure.
Add a length check for the field and value: check the length of the value first, and if it does not
exceed `hash_max_listpack_value`, then check the length of the field.
If the field or value is too long, it reduces the efficiency of the listpack, and the object encoding
would become hashtable after an AOF restart anyway, so this also keeps the encoding the same before and after an AOF restart.
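A small hedged sketch of the added check (the call site and variable names are illustrative):
```
/* Sketch: after building the new value, convert the hash to the hashtable
 * encoding if either the value or the field no longer fits the listpack
 * limit; the value is checked first, matching the order described above. */
if (sdslen(newstr) > server.hash_max_listpack_value ||
    sdslen(field) > server.hash_max_listpack_value)
{
    hashTypeConvert(o, OBJ_ENCODING_HT);
}
```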
Summary of changes:
1. Rename `redisCommand->name` to `redisCommand->declared_name`. It is a
const char * for native commands and an SDS for module commands.
2. Store the [sub]command fullname in `redisCommand->fullname` (sds).
3. List subcommands in `ACL CAT`
4. List subcommands in `COMMAND LIST`
5. `moduleUnregisterCommands` now also frees the module subcommands.
6. RM_GetCurrentCommandName returns the full command name.
Other changes:
1. Add `addReplyErrorArity` and `addReplyErrorExpireTime`.
2. Remove the `getFullCommandName` function, which is now useless.
3. Some cleanups around `fullname` now that it is an SDS.
4. Delete the useless `populateSingleCommand` declaration from server.h.
5. Added tests to cover this change.
6. Add some module unload tests and fix the leaks.
7. Make error messages uniform: make sure they always contain the full command
name and that it's quoted.
8. Fix some typos.
See the history in #9504. Fixes #10124.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: guybe7 <guy.benoish@redislabs.com>
A leak found by the corrupt-dump-fuzzer when using GCC ASAN, which seems
to falsely report leaks on pointers kept only on the stack when calling exit.
Instead, we now use _exit on panic / assert to skip these leak checks.
Additionally, check for sanitizer warnings in the corrupt-dump-fuzzer between iterations,
so that when something is found we know which test to relate it to (and it prints the reproduction command list).
- Fix possible heap corruption in ziplist and listpack resulting from trying to
allocate more than the maximum size of 4GB.
- Prevent ziplist (hash and zset) from reaching a size above 1GB; it will be
converted to HT encoding, since that's not a useful size.
- Prevent listpack (stream) from reaching a size above 1GB.
- XADD will start a new listpack if the new record may cause the previous
listpack to grow over 1GB.
- XADD will respond with an error if a single stream record is over 1GB.
- The list type (ziplist in quicklist) was truncating strings that were over 4GB;
now it'll respond with an error.
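A hypothetical standalone sketch of the kind of guard this introduces (the real helpers live in ziplist.c/listpack.c and operate on the encoded blob itself):
```
#include <stddef.h>
#include <stdint.h>

#define SAFETY_LIMIT (1024ULL * 1024 * 1024)  /* the 1GB cap described above */

/* Return 1 if the encoded blob may grow by `add` bytes without crossing the
 * cap, 0 if the caller should convert, start a new node, or return an error. */
static int safe_to_add(size_t current_bytes, size_t add) {
    return (uint64_t)current_bytes + add <= SAFETY_LIMIT;
}
```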
Co-authored-by: sundb <sundbcn@gmail.com>
In the `HRANDFIELD`, `SRANDMEMBER` and `ZRANDMEMBER` commands,
there are some strategies that could, in some rare cases, return an unfair random result.
These cases are where a small dict happens to be hashed unevenly.
Specifically, when `count*ZRANDMEMBER_SUB_STRATEGY_MUL > size`,
using `dictGetRandomKey` to randomize from a dict will result in an unfair random result.
Part two of implementing #8702 (zset), after #8887.
## Description of the feature
Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.
## Rdb format changes
New `RDB_TYPE_ZSET_LISTPACK` rdb type.
## Rdb loading improvements:
1) Pre-expansion of dict for validation of duplicate data for listpack and ziplist.
2) Simplifying the release of empty key objects during RDB loading.
3) Unify ziplist and listpack data verify methods for zset and hash, and move code to rdb.c.
## Interface changes
1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
2) OBJECT ENCODING will return listpack instead of ziplist.
## Listpack improvements:
1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from listpack.
2) Improve the performance of `lpCompare`, converting from string to integer is faster than converting from integer to string.
3) Replace `snprintf` with `ll2string` to improve performance in converting numbers to strings in `lpGet()`.
## Zset improvements:
1) Improve the performance of `zzlFind` method, use `lpFind` instead of `lpCompare` in a loop.
2) Use `lpDeleteRangeWithEntry` instead of calling `lpDelete` twice to delete an element of a zset.
## Tests
1) Add some unittests for `lpDeleteRange` and `lpDeleteRangeWithEntry` function.
2) Add zset RDB loading test.
3) Add benchmark test for `lpCompare` and `ziplistCompare`.
4) Add empty listpack zset corrupt dump test.
Part one of implementing #8702 (taking hashes first before other types)
## Description of the feature
1. Change ziplist encoded hash objects to listpack encoding.
2. Convert existing ziplists at RDB loading time (an O(n) operation).
## Rdb format changes
1. Add RDB_TYPE_HASH_LISTPACK rdb type.
2. Bump RDB_VERSION to 10
## Interface changes
1. New `hash-max-listpack-entries` config is an alias for `hash-max-ziplist-entries` (same with `hash-max-listpack-value`)
2. OBJECT ENCODING will return `listpack` instead of `ziplist`
## Listpack improvements:
1. Support direct insertion and replacement of integer elements (rather than converting back and forth from strings).
2. Add more listpack capabilities to match the ziplist ones (like `lpFind`, `lpRandomPairs` and such).
3. Optimize element length fetching, avoiding multiple calculations.
4. Use inline to avoid function call overhead.
## Tests
1. Add a new test for the RDB load time conversion.
2. Add the listpack unit tests (based on the ones in ziplist.c).
3. Add a few "corrupt payload: fuzzer findings" tests, and slightly modify existing ones.
Co-authored-by: Oran Agra <oran@redislabs.com>
Reduce dict struct memory overhead.
On 64 bit systems, the dict size goes down from jemalloc's 96 byte bin to its 56 byte bin.
Summary of changes:
- Remove `privdata` from callbacks and dict creation. (this affects many files, see "Interface change" below).
- Meld `dictht` struct into the `dict` struct to eliminate struct padding. (this affects just dict.c and defrag.c)
- Eliminate the `sizemask` field; it can be calculated from the size when needed.
- Convert the `size` field into `size_exp` (exponent), which uses one byte instead of 8.
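Roughly, the resulting layout looks like this (a sketch following dict.h; size and mask are derived from the exponent on demand):
```
struct dict {
    dictType *type;
    dictEntry **ht_table[2];
    unsigned long ht_used[2];
    long rehashidx;             /* rehashing not in progress if == -1 */
    int16_t pauserehash;        /* >0 means rehashing is paused */
    signed char ht_size_exp[2]; /* exponent of the table size (size = 1<<exp) */
};

#define DICTHT_SIZE(exp) ((exp) == -1 ? 0 : (unsigned long)1 << (exp))
#define DICTHT_SIZE_MASK(exp) ((exp) == -1 ? 0 : DICTHT_SIZE(exp) - 1)
```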
Interface change: pass the dict pointer to dict type callback functions,
instead of passing the removed privdata field. In the future, if we'd like
to have private data in the callbacks, we can extract it from the dict type.
We can extend dictType to include a custom dict struct allocator and use it
to allocate more data at the end of the dict struct. This data can then be
used to store private data later accessed by the callbacks.
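For example, the value destructor in `dictType` goes from receiving the old privdata pointer to receiving the dict itself:
```
/* Before: */
void (*valDestructor)(void *privdata, void *obj);
/* After: */
void (*valDestructor)(dict *d, void *obj);
```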
Due to a copy-paste bug, it used to reply with a null response rather than an empty array.
This commit includes new tests that look at the RESP response directly, in
order to be able to tell the difference between the two.
Co-authored-by: Oran Agra <oran@redislabs.com>
1. Add one key-value pair to myhash, where the lengths of the key and the value are both less than hash-max-ziplist-value, for example:
>hset myhash key value
2. Then execute the following command
>hsetnx myhash key value1 (where the length of value1 is greater than hash-max-ziplist-value)
3. This adds nothing, but the encoding of "myhash" changes from ziplist to dict, even though there is only one key-value pair in "myhash" and both of them are shorter than hash-max-ziplist-value.
SRANDMEMBER with a negative count (non unique) can return the same member
multiple times, and the order of elements in the returned collection matters.
For these reasons, returning a RESP3 Set type is not valid for the negative
count, but it's not really valid for the positive (unique) variant either (the
command returns an array of random picks, not a set).
This PR also contains a minor optimization for SRANDMEMBER, HRANDFIELD,
and ZRANDMEMBER, to avoid the temporary dict from being rehashed while it grows.
Co-authored-by: Oran Agra <oran@redislabs.com>
Avoids memmove and reallocs when replacing a ziplist element of the
same encoded size as the new value.
Affects HSET, HINCRBY, HINCRBYFLOAT (via hashTypeSet) and LSET (via
quicklistReplaceAtIndex).
It is inefficient to repeatedly pick a single random element from a
ziplist.
For CASE 4, which is when the user requested a low number of unique
random picks from the collection, we used that pattern.
Now we use a different algorithm that picks unique elements from a
ziplist, and guarantees no duplicates but doesn't provide random order
(which is only needed in the non-unique random picks case).
Unrelated changes:
* change ziplist count and index variables to unsigned
* solve compilation warnings about uninitialized vars in gcc 10.2
Co-authored-by: xinluton <xinluton@qq.com>
Changes to HRANDFIELD and ZRANDMEMBER:
* Fix risk of OOM panic when a client queries with a very big negative count (avoid allocating a huge temporary buffer).
* Fix uneven random distribution in HRANDFIELD with negative count (wasn't using dictGetFairRandomKey).
* Add tests to check an even random distribution (HRANDFIELD, SRANDMEMBER, ZRANDMEMBER).
Co-authored-by: Oran Agra <oran@redislabs.com>
New commands:
`HRANDFIELD [<count> [WITHVALUES]]`
`ZRANDMEMBER [<count> [WITHSCORES]]`
Algorithms are similar to the one in SRANDMEMBER.
Both return a simple bulk response when no arguments are given, and an array otherwise.
In case values/scores are requested, RESP2 returns a long array, and RESP3 a nested array.
note: in all 3 commands, the only option that also provides random order is the one with negative count.
Changes to SRANDMEMBER
* Optimization when count is 1: we can use the more efficient non-unique random algorithm
* optimization: work with sds strings rather than robj
Other changes:
* zzlGetScore: when zset needs to convert string to double, we use safer memcpy (in
case the buffer is too small)
* Solve a "bug" in SRANDMEMBER test: it intended to test a positive count (case 3 or
case 4) and by accident used a negative count
Co-authored-by: xinluton <xinluton@qq.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Fix wrong server.dirty increments in
* spopWithCountCommand
* hsetCommand
* ltrimCommand
* pfaddCommand
Some didn't increment by the number of fields changed (just one increment per
command); others had excessive increments.
If RESTORE passes successfully with full sanitization, we can't afford
to crash later on an assertion due to duplicate records in a hash when
converting it from ziplist to dict.
This means that when doing full sanitization, we must make sure there
are no duplicate records in any of the collections.
When loading an encoded payload we will at least do a shallow validation to
check that the size that's encoded in the payload matches the size of the
allocation.
This lets us later use this encoded size to make sure the various offsets
inside the encoded payload don't reach outside the allocation; if they do,
we'll assert/panic, but at least we won't segfault or smear memory.
We can also do 'deep' validation which runs on all the records of the encoded
payload and validates that they don't contain invalid offsets. This lets us
detect corruptions early and reject a RESTORE command rather than accepting
it and asserting (crashing) later when accessing that payload via some command.
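As a hypothetical illustration of the shallow check (using `lpBytes` from listpack.h; the real validators take more parameters and also drive the deep per-record walk):
```
/* Sketch: the total-bytes field encoded in the listpack header must match
 * the size of the allocation RESTORE handed us; later offset checks can
 * then be performed against that same bound. */
static int payload_size_matches(unsigned char *lp, size_t allocated) {
    return lpBytes(lp) == allocated;
}
```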
configuration:
- adding ACL flag skip-sanitize-payload
- adding config sanitize-dump-payload [yes/no/clients]
For now, we don't have a good way to ensure MIGRATE in cluster resharding isn't
being slowed down by this sanitization, so I'm setting the default value to `no`,
but later on it should be set to `clients` by default.
changes:
- changing rdbReportError not to `exit` in RESTORE command
- adding a new stat to be able to later check if cluster MIGRATE isn't being
slowed down by sanitization.
Syntax:
COPY <key> <new-key> [DB <dest-db>] [REPLACE]
No support for module keys yet.
Co-authored-by: tmgauss
Co-authored-by: Itamar Haber <itamar@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
It looks like each platform implements long double differently (different bit count),
so we can't save them as binary, and we also want to avoid creating a new RDB
format version, so we save them as hex strings using "%La".
This commit includes a change in the arguments of ld2string to support this,
as well as tests for coverage and short reads.
Coded by @guybe7.
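A standalone illustration of the "%La" round trip (plain C, independent of the server code):
```
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long double v = 3.14159265358979323846264338327950288L;
    char buf[64];

    /* "%La" prints an exact hexadecimal representation of the long double,
     * regardless of how many bits the platform uses for the type. */
    snprintf(buf, sizeof(buf), "%La", v);

    long double back = strtold(buf, NULL);
    printf("%s -> roundtrip %s\n", buf, back == v ? "exact" : "lossy");
    return 0;
}
```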
The string representation of `long double` may take
up to ~5000 chars (see PR #3745).
Before this fix HINCRBYFLOAT would never overflow (since
the string could not exceed 256 chars). Now it can.