27350 Commits

Author SHA1 Message Date
John Sully
216e32910e Merge branch 'keydbpro' of https://github.com/JohnSully/KeyDB-Pro into keydbpro
Former-commit-id: 3d63d38aae762c60a22154109827ebfd46aae2a5
2020-01-05 20:19:55 -05:00
John Sully
5907dccdae Add new flags to example configuration file
Former-commit-id: bb38628f0ef4f417db968154cc37d103f11b4c63
2020-01-05 20:19:28 -05:00
John Sully
78924a295e Enforce separate license keys for connected replicas
Former-commit-id: bc005cb50b1010a2bc9170e261cd93dba849c35f
2020-01-04 17:15:06 -05:00
Itamar Haber
74a4290063 Adjusts 'io_threads_num' max to 128
Instead of 512, use the defined max from networking.c
2020-01-04 18:33:24 +02:00
John Sully
2109d8972e Additional flash tests
Former-commit-id: 3f9b1a35821cb3a3bf82aabb180c13a9eddf4e93
2020-01-03 17:11:23 -05:00
John Sully
efdf09be36 Fix crash with subkey expire
Former-commit-id: 8e1d416714484ff6ff4242c5d9a24b1458bbfb7b
2020-01-03 17:06:07 -05:00
John Sully
4301cfb8e9 Merge branch 'unstable' into keydbpro
Former-commit-id: 76ddbed0708277443660ffab2a2289e120fe87cd
2020-01-03 16:53:40 -05:00
John Sully
e49ec97f98 subkey expire tests
Former-commit-id: 0cf3af6857c192bd03656c28b5a0a2bb11416b8c
2020-01-03 16:50:13 -05:00
John Sully
ce0fde973a Add support for storing expirations in FLASH
Former-commit-id: 1dca07bd564042fce1b01d275641f35b918ae557
2020-01-03 15:53:36 -05:00
John Sully
6ab3e82e45 Drop severity of master disconnect log when multimaster is enabled
Former-commit-id: edb993d52b25c30392c6eb1e60896498f991a223
2020-01-02 15:36:02 -05:00
John Sully
85c8fc72b7 Fix issue where expire is lost when performing a defrag
Former-commit-id: aea333bb78fafabbddb340dfd4c232c2e207cfba
2020-01-01 20:41:17 -05:00
John Sully
2e50d383a6 C++ wrapper classes for SDS
Former-commit-id: 45817db8c3a86815945359113dcbccfde4257ce5
2020-01-01 19:13:48 -05:00
antirez
d7691127f7 Fix active expire division by zero.
Likely fix #6723.

This is what happens AFAIK: we enter the main loop where we expire stuff
until a given percentage of keys is still found to be logically expired.
There are however other potential exit conditions.

However the "sampled" variable is not always incremented inside the
loop, because we may find no valid slot as we scan the hash table, but
just NULLs as dict entries. So when the do/while loop condition is
triggered at the end, we do (expired*100/sampled), dividing by zero if
we sampled 0 keys.
2020-01-01 18:13:13 +01:00
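A minimal, self-contained sketch of the guard this implies: only compute
expired*100/sampled once at least one key was actually sampled. Names
follow the commit message; the stopping policy when nothing was sampled
is an illustrative assumption, not necessarily the actual patch.

    #include <stdio.h>

    /* Guarded version of the do/while exit check described above. */
    static int over_stale_threshold(unsigned long expired,
                                    unsigned long sampled,
                                    unsigned long threshold) {
        if (sampled == 0) return 0; /* nothing sampled: no division at all */
        return (expired * 100 / sampled) > threshold;
    }

    int main(void) {
        printf("%d\n", over_stale_threshold(10, 0, 25));  /* 0, not a crash */
        printf("%d\n", over_stale_threshold(10, 20, 25)); /* 50 > 25 -> 1 */
        return 0;
    }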
John Sully
ad0cb8da40 Fix issue #130 due to fastlock timeout reduction
Former-commit-id: dbef17c2e16f115733242721e9b5a43f01e7a554
2020-01-01 11:52:00 -05:00
John Sully
e5863f8ff5 Add support for incremental build with header files 2020-01-01 10:33:02 -05:00
ShooterIT
fd67d71928 Rename rdb asynchronously 2019-12-31 21:45:32 +08:00
wangyuan21
834f18ab03 Free time event when deleting the event loop 2019-12-31 19:57:02 +08:00
WuYunlong
8aa821fd0a Fix potential cluster link error.
Function adjustOpenFilesLimit() has an implicit parameter, which is server.maxclients.
This function aims to adjust the maximum file descriptor number according to server.maxclients
on a best-effort basis, meaning "bestlimit" could be lower than "maxfiles" but greater than "oldlimit".
When we try to increase "maxclients" using the CONFIG SET command, we could increase the maximum
file descriptor number to a bigger value without calling aeResizeSetSize at the same time.
As more and more clients later connect to the server, the allocated fds get bigger and bigger,
eventually exceeding the events size of aeEventLoop.events. When a new node joins the cluster,
a new link is created, together with a new fd, but when calling aeCreateFileEvent we did not
check the return value. In this case, we have a non-NULL "link" but the associated fd is not
registered.

So when we dynamically set "maxclients" we could reach an inconsistency between the maximum file
descriptor number of the process and server.maxclients, which could later cause cluster link and link
fd inconsistency.

While setting "maxclients" dynamically, we consider it failed when the resulting "maxclients" is not
the same as expected. We try to restore the maximum file descriptor number when we fail to set
"maxclients" to the specified value, so that server.maxclients can act as a guard as before.
2019-12-31 18:16:30 +08:00
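A hedged sketch of the registration check the message calls for, assuming
the ae.h/cluster.h APIs of this tree (aeCreateFileEvent returns AE_ERR
when the fd exceeds the event loop's setsize); the helper name is
hypothetical:

    #include "ae.h"
    #include "cluster.h"

    /* Hypothetical helper: create a cluster link only if its fd can be
     * registered with the event loop, so callers never see a non-NULL
     * link whose fd is unregistered. */
    clusterLink *createClusterLinkChecked(aeEventLoop *el, clusterNode *node,
                                          int fd) {
        clusterLink *link = createClusterLink(node);
        link->fd = fd;
        if (aeCreateFileEvent(el, fd, AE_READABLE,
                              clusterReadHandler, link) == AE_ERR) {
            freeClusterLink(link); /* registration failed: clean up fully */
            return NULL;
        }
        return link;
    }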
hayashier
027d609cb0 fix typo from fss to rss 2019-12-31 17:46:48 +09:00
Guy Benoish
fe65141135 Modules: Fix blocked-client-related memory leak
If a blocked module client times out (or disconnects, is unblocked
by the CLIENT command, etc.) we need to call moduleUnblockClient
in order to free memory allocated by the module subsystem
and the blocked client's private data.

Other changes:
Made the blockedonkeys.tcl tests a bit more aggressive in order
to smoke out potential memory leaks.
2019-12-30 10:10:59 +05:30
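A sketch of the funnel described above, using the names the message gives
(moduleUnblockClient) plus blocked.c's BLOCKED_MODULE/unblockClient; the
surrounding control flow and helper name are illustrative:

    /* However a blocked module client goes away (timeout, CLIENT
     * UNBLOCK/KILL, disconnect), route it through moduleUnblockClient
     * first so the module subsystem can free its bookkeeping and the
     * blocked client's private data, then unblock as usual. */
    void handleBlockedModuleClientGone(client *c) {
        if (c->btype == BLOCKED_MODULE)
            moduleUnblockClient(c);
        unblockClient(c);
    }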
Guy Benoish
13df904609 Blocking XREAD[GROUP] should always reply with valid data (or timeout)
This commit solves the following bug:
127.0.0.1:6379> XGROUP CREATE x grp $ MKSTREAM
OK
127.0.0.1:6379> XADD x 666 f v
"666-0"
127.0.0.1:6379> XREADGROUP GROUP grp Alice BLOCK 0 STREAMS x >
1) 1) "x"
   2) 1) 1) "666-0"
         2) 1) "f"
            2) "v"
127.0.0.1:6379> XADD x 667 f v
"667-0"
127.0.0.1:6379> XDEL x 667
(integer) 1
127.0.0.1:6379> XREADGROUP GROUP grp Alice BLOCK 0 STREAMS x >
1) 1) "x"
   2) (empty array)

The root cause is that we use s->last_id in streamCompareID
while we should use the last *valid* ID
2019-12-30 10:06:01 +05:30
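A sketch of finding that last *valid* ID with the stream iterator API
(streamIteratorStart with rev=1 walks from the largest existing entry);
the helper name and the empty-stream fallback are assumptions for
illustration:

    /* Find the ID of the last entry that still exists in the stream,
     * rather than s->last_id, which also advances on XADD + XDEL. */
    void lastValidStreamID(stream *s, streamID *maxid) {
        streamIterator si;
        int64_t numfields;
        streamIteratorStart(&si, s, NULL, NULL, 1); /* reverse iteration */
        if (!streamIteratorGetID(&si, maxid, &numfields))
            *maxid = s->last_id; /* empty stream: fall back to last_id */
        streamIteratorStop(&si);
    }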
antirez
c2461ba531 Fix duplicated CLIENT SETNAME reply.
Happened when we set the name to "" to cancel the name.
Was introduced during the RESP3 refactoring.

See #6036.
2019-12-29 15:46:31 +01:00
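For reference, the affected case is simply the following (the server must
emit a single +OK; under the bug the reply was sent twice):

    127.0.0.1:6379> CLIENT SETNAME ""
    OK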
Guy Benoish
444dd1075c Stream: Handle streamID-related edge cases
This commit solves several edge cases that are related to
exhausting the streamID limits: We should correctly calculate
the succeeding streamID instead of blindly incrementing 'seq'
This affects both XREAD and XADD.

Other (unrelated) changes:
Reply with a better error message when trying to add an entry
to a stream that has exhausted last_id
2019-12-29 15:46:31 +01:00
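A self-contained sketch of the succeeding-ID computation: carry seq into
ms instead of blindly incrementing, and report an error once both are
exhausted. Names here are illustrative, not the actual diff:

    #include <stdint.h>

    typedef struct streamID { uint64_t ms, seq; } streamID;

    /* Returns 0 on success, -1 when the ID space is exhausted
     * (ms and seq are both UINT64_MAX). */
    int incrStreamID(streamID *id) {
        if (id->seq == UINT64_MAX) {
            if (id->ms == UINT64_MAX) return -1; /* last_id exhausted */
            id->ms++;
            id->seq = 0;
        } else {
            id->seq++;
        }
        return 0;
    }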
Oran Agra
f866b31a74 config.c adjust config limits and mutable
- make lua-replicate-commands mutable (it never was, but I don't see why it shouldn't be)
- make tcp-backlog immutable (fix a recent refactoring mistake)
- increase the max limit of a few configs to match what they were before
the recent refactoring
2019-12-29 15:46:31 +01:00
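For illustration, entries in the refactored config table look roughly
like this (the exact parameters below are an assumption, not the actual
diff):

    /* Illustrative config.c table entries: the mutability flag and the
     * lower/upper bounds live next to each setting. */
    createBoolConfig("lua-replicate-commands", NULL, MODIFIABLE_CONFIG,
                     server.lua_always_replicate_commands, 1, NULL, NULL),
    createIntConfig("tcp-backlog", NULL, IMMUTABLE_CONFIG, 0, INT_MAX,
                    server.tcp_backlog, 511, INTEGER_CONFIG, NULL, NULL),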
antirez
1b56a873d5 Inline protocol: handle empty strings well.
This bug is from the first version of Redis. Probably the problem here
is that before we used an SDS split function that created empty strings
for additional spaces, like in "SET    foo          bar".
AFAIK later we replaced it with the current sdssplitargs() API that has
no such problem. As a result, we introduced a bug where it is no
longer possible to do something like:

    SET foo ""

using the inline protocol. Now it is fixed.
2019-12-29 15:46:31 +01:00
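As an illustration, an inline-protocol session (raw text over e.g.
telnet, with no RESP framing on the request) can now pass an empty
argument; the transcript below is illustrative:

    $ telnet 127.0.0.1 6379
    SET foo ""
    +OK
    GET foo
    $0

($0 is the zero-length bulk reply for the empty value.)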