/* String -> String Map data structure optimized for size.
 *
 * This file implements a data structure mapping strings to other strings,
 * implementing an O(n) lookup data structure designed to be very memory
 * efficient.
 *
 * The Hash type uses this data structure for hashes composed of a small
 * number of elements, to switch to a hash table once a given number of
 * elements is reached.
 *
 * Given that many times Hashes are used to represent objects composed
 * of few fields, this is a very big win in terms of used memory.
 *
 * --------------------------------------------------------------------------
 *
 * Copyright (c) 2009-2010, Redis Ltd.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 *   * Redistributions of source code must retain the above copyright notice,
 *     this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of Redis nor the names of its contributors may be used
 *     to endorse or promote products derived from this software without
 *     specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

/* Memory layout of a zipmap, for the map "foo" => "bar", "hello" => "world":
 *
 * <zmlen><len>"foo"<len><free>"bar"<len>"hello"<len><free>"world"
 *
 * <zmlen> is a 1 byte length that holds the current size of the zipmap.
 * When the zipmap length is greater than or equal to 254, this value
 * is not used and the zipmap needs to be traversed to find out the length.
 *
 * <len> is the length of the following string (key or value).
 * <len> lengths are encoded in a single byte or in five bytes.
 * If the first byte value (as an unsigned 8 bit value) is between 0 and
 * 253, it's a single-byte length. If it is 254 then a four byte unsigned
 * integer follows (in the host byte ordering). A value of 255 is used to
 * signal the end of the hash.
 *
 * <free> is the number of free unused bytes after the string, resulting
 * from modification of values associated to a key. For instance if "foo"
 * is set to "bar", and later "foo" is set to "hi", the entry will have one
 * free byte to use if the value enlarges again later, or even in
 * order to add a key/value pair if it fits.
 *
 * <free> is always an unsigned 8 bit number, because if after an
 * update operation there are more than a few free bytes, the zipmap will be
 * reallocated to make sure it is as small as possible.
 *
 * The most compact representation of the above two element hash is actually:
 *
 * "\x02\x03foo\x03\x00bar\x05hello\x05\x00world\xff"
 *
 * Note that because keys and values are prefixed length "objects",
 * the lookup will take O(N) where N is the number of elements
 * in the zipmap and *not* the number of bytes needed to represent the zipmap.
 * This lowers the constant times considerably.
 */

#include <stdio.h>
#include <string.h>
#include "zmalloc.h"
#include "endianconv.h"

#define ZIPMAP_BIGLEN 254
#define ZIPMAP_END 255

/* The following macro returns the number of bytes needed to encode the length
 * for the integer value _l, that is, 1 byte for lengths < ZIPMAP_BIGLEN and
 * 5 bytes for all the other lengths. */
#define ZIPMAP_LEN_BYTES(_l) (((_l) < ZIPMAP_BIGLEN) ? 1 : sizeof(unsigned int) + 1)

/* Decode the encoded length pointed by 'p'. */
static unsigned int zipmapDecodeLength(unsigned char *p) {
    unsigned int len = *p;

    if (len < ZIPMAP_BIGLEN) return len;
    memcpy(&len, p + 1, sizeof(unsigned int));
    memrev32ifbe(&len);
    return len;
}

/* Encode the length 'len' writing it in 'p'. If p is NULL it just returns
 * the amount of bytes required to encode such a length. */
static unsigned int zipmapEncodeLength(unsigned char *p, unsigned int len) {
    if (p == NULL) {
        return ZIPMAP_LEN_BYTES(len);
    } else {
        if (len < ZIPMAP_BIGLEN) {
            p[0] = len;
            return 1;
        } else {
            p[0] = ZIPMAP_BIGLEN;
            memcpy(p + 1, &len, sizeof(len));
            memrev32ifbe(p + 1);
            return 1 + sizeof(len);
        }
    }
}

/* Return the number of bytes used by the encoded length pointed by 'p':
 * 1 byte for lengths < ZIPMAP_BIGLEN, 5 bytes otherwise. */
static unsigned int zipmapGetEncodedLengthSize(unsigned char *p) {
    return (*p < ZIPMAP_BIGLEN) ? 1 : 5;
}

/* Return the total amount used by a key (encoded length + payload). */
static unsigned int zipmapRawKeyLength(unsigned char *p) {
    unsigned int l = zipmapDecodeLength(p);
    return zipmapEncodeLength(NULL, l) + l;
}

/* Return the total amount used by a value
 * (encoded length + single byte free count + payload). */
static unsigned int zipmapRawValueLength(unsigned char *p) {
    unsigned int l = zipmapDecodeLength(p);
    unsigned int used;

    used = zipmapEncodeLength(NULL, l);
    used += p[used] + 1 + l;
    return used;
}

/* Call before iterating through elements via zipmapNext(). */
unsigned char *zipmapRewind(unsigned char *zm) {
    return zm + 1;
}

/* This function is used to iterate through all the zipmap elements.
 * In the first call the first argument is the pointer to the zipmap + 1.
 * In the next calls what zipmapNext returns is used as first argument.
 * Example:
 *
 * unsigned char *i = zipmapRewind(my_zipmap);
 * while((i = zipmapNext(i,&key,&klen,&value,&vlen)) != NULL) {
 *     printf("%d bytes key at %p\n", klen, key);
 *     printf("%d bytes value at %p\n", vlen, value);
 * }
 */
unsigned char *
zipmapNext(unsigned char *zm, unsigned char **key, unsigned int *klen, unsigned char **value, unsigned int *vlen) {
    if (zm[0] == ZIPMAP_END) return NULL;
    if (key) {
        *key = zm;
        *klen = zipmapDecodeLength(zm);
        *key += ZIPMAP_LEN_BYTES(*klen);
    }
    zm += zipmapRawKeyLength(zm);
    if (value) {
        *value = zm + 1;
        *vlen = zipmapDecodeLength(zm);
        *value += ZIPMAP_LEN_BYTES(*vlen);
    }
    zm += zipmapRawValueLength(zm);
    return zm;
}

/* Validate the integrity of the data structure.
 * when `deep` is 0, only the integrity of the header is validated.
 * when `deep` is 1, we scan all the entries one by one. */
int zipmapValidateIntegrity(unsigned char *zm, size_t size, int deep) {
#define OUT_OF_RANGE(p) ((p) < zm + 2 || (p) > zm + size - 1)
    unsigned int l, s, e;

    /* check that we can actually read the header (or ZIPMAP_END). */
    if (size < 2) return 0;

    /* the last byte must be the terminator. */
    if (zm[size - 1] != ZIPMAP_END) return 0;

    if (!deep) return 1;

    unsigned int count = 0;
    unsigned char *p = zm + 1; /* skip the count */
    while (*p != ZIPMAP_END) {
        /* read the field name length encoding type */
        s = zipmapGetEncodedLengthSize(p);
        /* make sure the encoded length doesn't reach outside the edge of the zipmap */
        if (OUT_OF_RANGE(p + s)) return 0;

        /* read the field name length */
        l = zipmapDecodeLength(p);
        p += s; /* skip the encoded field size */
        p += l; /* skip the field */

        /* make sure the entry doesn't reach outside the edge of the zipmap */
        if (OUT_OF_RANGE(p)) return 0;

        /* read the value length encoding type */
        s = zipmapGetEncodedLengthSize(p);
        /* make sure the encoded length doesn't reach outside the edge of the zipmap */
        if (OUT_OF_RANGE(p + s)) return 0;

        /* read the value length */
        l = zipmapDecodeLength(p);
        p += s;     /* skip the encoded value size */
        e = *p++;   /* skip the encoded free space (always encoded in one byte) */
        p += l + e; /* skip the value and free space */
        count++;

        /* make sure the entry doesn't reach outside the edge of the zipmap */
        if (OUT_OF_RANGE(p)) return 0;
    }

    /* check that the zipmap is not empty. */
    if (count == 0) return 0;

    /* check that the count in the header is correct */
    if (zm[0] != ZIPMAP_BIGLEN && zm[0] != count) return 0;

    return 1;
#undef OUT_OF_RANGE
}