20927 Commits

Author SHA1 Message Date
Rajat Pawar
59d437c727
Fix comment about ACLGetCommandPerm() 2020-08-10 23:11:26 -07:00
Jim Brunner
88e97fe2bd Prevent dictRehashMilliseconds from rehashing while a safe iterator is present (#5948) 2020-08-10 22:36:34 -07:00
Jim Brunner
f39c9404a3
Prevent dictRehashMilliseconds from rehashing while a safe iterator is present (#5948) 2020-08-10 22:36:34 -07:00
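
A minimal sketch of the guard, assuming the dict keeps a counter of active safe iterators (the 'iterators' field in dict.c at the time); this mirrors the shape of dictRehashMilliseconds and is illustrative, not the exact patch:

int dictRehashMilliseconds(dict *d, int ms) {
    /* Rehashing moves entries between the two hash tables, which a safe
     * iterator is not prepared for, so skip the cron-driven rehash entirely. */
    if (d->iterators > 0) return 0;

    long long start = timeInMilliseconds();
    int rehashes = 0;
    while (dictRehash(d, 100)) {
        rehashes += 100;
        if (timeInMilliseconds() - start > ms) break;
    }
    return rehashes;
}
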
杨博东
51aa08d26d Avoid redundant calls to signalKeyAsReady (#7625)
signalKeyAsReady has some overhead (namely dictFind) so we should
only call it when there are clients blocked on the relevant type (BLOCKED_*)
2020-08-11 08:18:09 +03:00
杨博东
229327ad8b
Avoid redundant calls to signalKeyAsReady (#7625)
signalKeyAsReady has some overhead (namely dictFind) so we should
only call it when there are clients blocked on the relevant type (BLOCKED_*)
2020-08-11 08:18:09 +03:00
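
A rough sketch of the idea: consult the per-type counters of blocked clients (server.blocked_clients_by_type, indexed by the BLOCKED_* constants) before paying for the dictFind inside signalKeyAsReady. The wrapper name below is hypothetical:

static void signalListKeyAsReadyIfNeeded(redisDb *db, robj *key) {
    /* Nobody is blocked on BLPOP/BRPOP or a blocking module command,
     * so signalKeyAsReady() would only do a useless dictFind(). */
    if (server.blocked_clients_by_type[BLOCKED_LIST] == 0 &&
        server.blocked_clients_by_type[BLOCKED_MODULE] == 0) return;
    signalKeyAsReady(db, key);
}
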
Itamar Haber
594691fb37 Removes dead code (#7642)
Appears to be handled by server.stream_node_max_bytes in reality.
2020-08-11 08:11:47 +03:00
Itamar Haber
efe92ee546
Removes dead code (#7642)
Appears to be handled by server.stream_node_max_bytes in reality.
2020-08-11 08:11:47 +03:00
John Sully
636ea4db9f 6.0.15 Merge branch 'keydbpro' into PRO_RELEASE_6
Former-commit-id: 05140de5a46d05e39815472f96be526868800f30
2020-08-11 00:27:31 -04:00
John Sully
1ba3718deb 6.0.15 Merge branch 'keydbpro' into PRO_RELEASE_6
Former-commit-id: 05140de5a46d05e39815472f96be526868800f30
2020-08-11 00:27:31 -04:00
WuYunlong
dd08aec539 see #7250, fix signature of RedisModule_DeauthenticateAndCloseClient (#7645)
In redismodule.h, RedisModule_DeauthenticateAndCloseClient returns void
`void REDISMODULE_API_FUNC(RedisModule_DeauthenticateAndCloseClient)(RedisModuleCtx *ctx, uint64_t client_id);`
But in module.c, RM_DeauthenticateAndCloseClient returns int
`int RM_DeauthenticateAndCloseClient(RedisModuleCtx *ctx, uint64_t client_id)`

It is safe to change the return value from `void` to `int` from the user's perspective.
2020-08-10 19:18:21 -07:00
WuYunlong
d6220f12a9
see #7250, fix signature of RedisModule_DeauthenticateAndCloseClient (#7645)
In redismodule.h, RedisModule_DeauthenticateAndCloseClient returns void
`void REDISMODULE_API_FUNC(RedisModule_DeauthenticateAndCloseClient)(RedisModuleCtx *ctx, uint64_t client_id);`
But in module.c, RM_DeauthenticateAndCloseClient returns int
`int RM_DeauthenticateAndCloseClient(RedisModuleCtx *ctx, uint64_t client_id)`

It is safe to change the return value from `void` to `int` from the user's perspective.
2020-08-10 19:18:21 -07:00
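
With the fix, the header declaration agrees with module.c; the corrected prototype looks like this (returning REDISMODULE_OK on success, presumably REDISMODULE_ERR when no client with that id exists):

int REDISMODULE_API_FUNC(RedisModule_DeauthenticateAndCloseClient)(RedisModuleCtx *ctx, uint64_t client_id);
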
John Sully
9090e26aca Add build flag to disable MVCC tstamps
Former-commit-id: f17d178d03f44abcdaddd851a313dd3f7ec87ed5
2020-08-10 06:10:24 +00:00
John Sully
e87dee8dc7 Add build flag to disable MVCC tstamps
Former-commit-id: f17d178d03f44abcdaddd851a313dd3f7ec87ed5
2020-08-10 06:10:24 +00:00
John Sully
9928562dad Fix lock after free in module API
Former-commit-id: d88fd1588d292bffc0aa53c299cb52e7a4e91015
2020-08-10 06:10:24 +00:00
John Sully
e06152eb13 Fix lock after free in module API
Former-commit-id: d88fd1588d292bffc0aa53c299cb52e7a4e91015
2020-08-10 06:10:24 +00:00
John Sully
0cb8d4ca63 MVCC Perf fixes
Former-commit-id: 5a4afe5fb4231bec34d434f9e3214a7320842091
2020-08-10 05:45:56 +00:00
John Sully
85d7a4c1e2 MVCC Perf fixes
Former-commit-id: 5a4afe5fb4231bec34d434f9e3214a7320842091
2020-08-10 05:45:56 +00:00
John Sully
4037ee98a4 Fix assert caused by freeTombstoneObjects and null check in consolidate_children
Former-commit-id: 8565a02b331cd2bba2a1c7c6693dfb3f6e61c845
2020-08-10 05:01:36 +00:00
John Sully
3c1e63fa74 Fix assert caused by freeTombstoneObjects and null check in consolidate_children
Former-commit-id: 8565a02b331cd2bba2a1c7c6693dfb3f6e61c845
2020-08-10 05:01:36 +00:00
John Sully
848057ff19 RocksDB Read Performance Improvements
Former-commit-id: 80cca4869888e048e10e11f1f20796c482c3e5b3
2020-08-09 23:36:20 +00:00
John Sully
649745924b RocksDB Read Performance Improvements
Former-commit-id: 80cca4869888e048e10e11f1f20796c482c3e5b3
2020-08-09 23:36:20 +00:00
Yossi Gottlieb
c4c8e09f21 Merge pull request #7609 from michael-grunder/redis-cli-resp3-push
Add redis-cli RESP3 Push support
2020-08-09 11:19:04 +03:00
Yossi Gottlieb
3f073b1d9c
Merge pull request #7609 from michael-grunder/redis-cli-resp3-push
Add redis-cli RESP3 Push support
2020-08-09 11:19:04 +03:00
Meir Shpilraien (Spielrein)
4f99b22118 see #7544, added RedisModule_HoldString api. (#7577)
Added RedisModule_HoldString, which either returns a
shallow copy of the given String (by increasing
the String's refcount) or a new deep copy of the String
in case it's not possible to get a shallow copy.

Co-authored-by: Itamar Haber <itamar@redislabs.com>
2020-08-09 06:11:47 +03:00
Meir Shpilraien (Spielrein)
3f494cc49d
see #7544, added RedisModule_HoldString api. (#7577)
Added RedisModule_HoldString, which either returns a
shallow copy of the given String (by increasing
the String's refcount) or a new deep copy of the String
in case it's not possible to get a shallow copy.

Co-authored-by: Itamar Haber <itamar@redislabs.com>
2020-08-09 06:11:47 +03:00
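
A minimal usage sketch (command and variable names are illustrative): hold on to an argument beyond the lifetime of the command callback, then release it later with RedisModule_FreeString (or let automatic memory management take care of it):

int MyCmd_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);
    /* Either bumps argv[1]'s refcount (shallow copy) or returns a deep copy;
     * either way the returned string stays valid after this callback returns. */
    RedisModuleString *held = RedisModule_HoldString(ctx, argv[1]);
    /* ... stash 'held' in some structure that outlives this call ... */
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}
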
John Sully
80dc68e497 Start off storage cache with a larger size
Former-commit-id: 5f6fb970a81cc73586ba595b35564e7865e7262d
2020-08-09 00:57:56 +00:00
John Sully
1610f4211f Start off storage cache with a larger size
Former-commit-id: 5f6fb970a81cc73586ba595b35564e7865e7262d
2020-08-09 00:57:56 +00:00
Wen Hui
31c079cc71 add sentinel command help (#7579) 2020-08-08 12:28:44 -07:00
Wen Hui
ca95b71f67
add sentinel command help (#7579) 2020-08-08 12:28:44 -07:00
WuYunlong
96655bd231 Optimize calls to mstime in trackInstantaneousMetric() (#6472) 2020-08-08 22:11:14 +03:00
WuYunlong
6dcc6898be
Optimize calls to mstime in trackInstantaneousMetric() (#6472) 2020-08-08 22:11:14 +03:00
Wang Yuan
514a1e223d Print error info if failed opening config file (#6943) 2020-08-08 22:03:56 +03:00
Wang Yuan
ea7eeb2fd2
Print error info if failed opening config file (#6943) 2020-08-08 22:03:56 +03:00
michael-grunder
1f0482942a Create PUSH handlers in redis-cli
Add logic to redis-cli to display RESP3 PUSH messages when we detect
STDOUT is a tty, with an optional command-line argument to override
the default behavior.

The new argument:  --show-pushes <yn>

Examples:

$ redis-cli -3 --show-pushes no
$ echo "client tracking on\nget k1\nset k1 v1"| redis-cli -3 --show-pushes y
2020-08-08 11:59:17 -07:00
michael-grunder
81879bc171 Create PUSH handlers in redis-cli
Add logic to redis-cli to display RESP3 PUSH messages when we detect
STDOUT is a tty, with an optional command-line argument to override
the default behavior.

The new argument:  --show-pushes <yn>

Examples:

$ redis-cli -3 --show-pushes no
$ echo "client tracking on\nget k1\nset k1 v1"| redis-cli -3 --show-pushes y
2020-08-08 11:59:17 -07:00
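
For context, a rough self-contained sketch of how a RESP3 push handler is registered with hiredis (this is not the actual redis-cli code; the connection details are illustrative):

#include <stdio.h>
#include <hiredis/hiredis.h>

/* Print out-of-band RESP3 push messages (e.g. client-tracking invalidations). */
static void pushHandler(void *privdata, void *reply) {
    (void)privdata;
    redisReply *r = reply;
    if (r && r->type == REDIS_REPLY_PUSH)
        printf("Got RESP3 push with %zu elements\n", r->elements);
    freeReplyObject(r);
}

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (!c || c->err) return 1;
    redisSetPushCallback(c, pushHandler);          /* route PUSH frames here */
    freeReplyObject(redisCommand(c, "HELLO 3"));   /* switch to RESP3 */
    freeReplyObject(redisCommand(c, "CLIENT TRACKING ON"));
    freeReplyObject(redisCommand(c, "GET k1"));    /* invalidations arrive as pushes */
    redisFree(c);
    return 0;
}
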
ShooterIT
8925fac395 [Redis-benchmark] Remove zrem test, add zpopmin test 2020-08-08 23:08:27 +08:00
ShooterIT
6a06a5a597 [Redis-benchmark] Remove zrem test, add zpopmin test 2020-08-08 23:08:27 +08:00
Wen Hui
30ead1edae fix memory leak in ACLLoadFromFile error handling (#7623) 2020-08-08 14:42:32 +03:00
Wen Hui
3f67b03378
fix memory leak in ACLLoadFromFile error handling (#7623) 2020-08-08 14:42:32 +03:00
Wang Yuan
921c633df9 Fix applying zero offset to null pointer when creating moduleFreeContextReusedClient (#7323)
Before this fix we were attempting to select a DB before creating the DB, see #7323.

This issue doesn't seem to have any implications, since the selected DB index is 0,
the db pointer remains NULL, and will later be correctly set before using this dummy
client for the first time.

As we know, we call 'moduleInitModulesSystem()' before 'initServer()'. We allocate
memory for server.db in 'initServer', but in 'moduleInitModulesSystem()' we call
'createClient()', which calls 'selectDb()', before the databases were created. Instead,
we should call 'createClient()' for moduleFreeContextReusedClient after 'initServer()'.
2020-08-08 14:36:41 +03:00
Wang Yuan
1ef014ee6b
Fix applying zero offset to null pointer when creating moduleFreeContextReusedClient (#7323)
Before this fix we were attempting to select a DB before creating the DB, see #7323.

This issue doesn't seem to have any implications, since the selected DB index is 0,
the db pointer remains NULL, and will later be correctly set before using this dummy
client for the first time.

As we know, we call 'moduleInitModulesSystem()' before 'initServer()'. We allocate
memory for server.db in 'initServer', but in 'moduleInitModulesSystem()' we call
'createClient()', which calls 'selectDb()', before the databases were created. Instead,
we should call 'createClient()' for moduleFreeContextReusedClient after 'initServer()'.
2020-08-08 14:36:41 +03:00
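
A rough sketch of the ordering described above (the name of the moved step is hypothetical; this is not the exact patch):

/* Before: moduleInitModulesSystem() ran ahead of initServer() and created the
 * reused module client with createClient(), whose selectDb(c, 0) computed
 * &server.db[0] while server.db was still NULL (zero offset on a null pointer). */
void startServerSketch(void) {
    moduleInitModulesSystem();       /* no longer calls createClient() here */
    initServer();                    /* server.db is allocated here */
    createModuleReusedClient();      /* hypothetical step: create the dummy client
                                      * only now, so selectDb() sees a valid db array */
}
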
xuannianz
d478dc1c0c remove superfluous else block (#7620)
The else block would be executed when newlen == 0, in which case memmove won't be called, so there's no need to set start.
2020-08-08 00:19:18 +03:00
xuannianz
b118502a05
remove superfluous else block (#7620)
The else block would be executed when newlen == 0, in which case memmove won't be called, so there's no need to set start.
2020-08-08 00:19:18 +03:00
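
Illustrative shape of the code in question (a range-trimming routine in the style of sdsrange, simplified): when newlen ends up 0 the final memmove is skipped anyway, so the removed else branch that reset start had no effect:

newlen = (start > end) ? 0 : (end - start) + 1;
if (newlen != 0) {
    if (start >= (ssize_t)len) {
        newlen = 0;
    } else if (end >= (ssize_t)len) {
        end = len - 1;
        newlen = (start > end) ? 0 : (end - start) + 1;
    }
}
/* removed: else { start = 0; }  -- pointless, the memmove below never runs then */
if (start && newlen) memmove(s, s + start, newlen);
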
fayadexinqing
0054ba666f fix migration's broadcast PONG message, after the slot modification (#7590) 2020-08-07 13:01:14 -07:00
fayadexinqing
e966264188
fix migration's broadcast PONG message, after the slot modification (#7590) 2020-08-07 13:01:14 -07:00
John Sully
11a8ab1865 Disable compression; it destroys read perf
Former-commit-id: d3fffc12ae5339886f54c064127497f277393b00
2020-08-06 22:30:56 +00:00
Oran Agra
cad93ed273 Accelerate diskless master connections, and general re-connections (#6271)
Diskless master has some inherent latencies.
1) fork starts with delay from cron rather than immediately
2) replica is put online only after an ACK, but the ACK
   was sent only once a second.
3) but even if it would arrive immediately, it will not
   register in case cron didn't yet detect that the fork is done.

Besides that, when a replica disconnects, it doesn't immediately
attempt to re-connect; it waits for the replication cron (once per second).
In case it was already online, it may be important to try to re-connect
as soon as possible, so that the backlog at the master doesn't vanish.

In case it disconnected during rdb transfer, one can argue that it's
not very important to re-connect immediately, but this is needed for the
"diskless loading short read" test to be able to run 100 iterations in 5
seconds, rather than 3 (waiting for replication cron re-connection).

changes in this commit:
1) sync command starts a fork immediately if no sync_delay is configured
2) replica sends REPLCONF ACK when done reading the rdb (rather than on 1s cron)
3) when a replica unexpectedly disconnects, it immediately tries to
   re-connect rather than waiting 1s
4) when a child exits, if there is another replica waiting, we spawn a new
   one right away, instead of waiting for the 1s replicationCron.
5) added a call to connectWithMaster from replicationSetMaster, which is called
   from the REPLICAOF command but also in 3 places in cluster.c; in all of
   these the connection attempt will now be immediate instead of delayed by 1
   second.

side note:
we can add a call to rdbPipeReadHandler in replconfCommand when getting
a REPLCONF ACK from the replica to solve a race where the replica got
the entire rdb and EOF marker before we detected that the pipe was
closed.
In the test I did see this race happen in about one of some 300 runs,
but I concluded that this race is unlikely in real life (where the
replica is on another host and we're more likely to first detect that the
pipe was closed).
The test runs 100 iterations in 3 seconds, so in some cases it'll take 4
seconds instead (waiting for another REPLCONF ACK).

Removing unneeded startBgsaveForReplication from updateSlavesWaitingForBgsave
Now that CheckChildrenDone is calling the new replicationStartPendingFork
(extracted from serverCron) there's actually no need to call
startBgsaveForReplication from updateSlavesWaitingForBgsave anymore,
since as soon as updateSlavesWaitingForBgsave returns, CheckChildrenDone is
calling replicationStartPendingFork that handles that anyway.
The code in updateSlavesWaitingForBgsave had a bug in which it ignored
repl-diskless-sync-delay, but removing that code shows that this bug was
hiding another bug: the max_idle check should have used >= and
not >. This one-second delay has a big impact on my new test.
2020-08-06 16:53:06 +03:00
Oran Agra
c17e597d05
Accelerate diskless master connections, and general re-connections (#6271)
Diskless master has some inherent latencies.
1) fork starts with delay from cron rather than immediately
2) replica is put online only after an ACK, but the ACK
   was sent only once a second.
3) but even if it would arrive immediately, it will not
   register in case cron didn't yet detect that the fork is done.

Besides that, when a replica disconnects, it doesn't immediately
attempt to re-connect; it waits for the replication cron (once per second).
In case it was already online, it may be important to try to re-connect
as soon as possible, so that the backlog at the master doesn't vanish.

In case it disconnected during rdb transfer, one can argue that it's
not very important to re-connect immediately, but this is needed for the
"diskless loading short read" test to be able to run 100 iterations in 5
seconds, rather than 3 (waiting for replication cron re-connection).

changes in this commit:
1) sync command starts a fork immediately if no sync_delay is configured
2) replica sends REPLCONF ACK when done reading the rdb (rather than on 1s cron)
3) when a replica unexpectedly disconnects, it immediately tries to
   re-connect rather than waiting 1s
4) when a child exits, if there is another replica waiting, we spawn a new
   one right away, instead of waiting for the 1s replicationCron.
5) added a call to connectWithMaster from replicationSetMaster, which is called
   from the REPLICAOF command but also in 3 places in cluster.c; in all of
   these the connection attempt will now be immediate instead of delayed by 1
   second.

side note:
we can add a call to rdbPipeReadHandler in replconfCommand when getting
a REPLCONF ACK from the replica to solve a race where the replica got
the entire rdb and EOF marker before we detected that the pipe was
closed.
In the test I did see this race happen in about one of some 300 runs,
but I concluded that this race is unlikely in real life (where the
replica is on another host and we're more likely to first detect that the
pipe was closed).
The test runs 100 iterations in 3 seconds, so in some cases it'll take 4
seconds instead (waiting for another REPLCONF ACK).

Removing unneeded startBgsaveForReplication from updateSlavesWaitingForBgsave
Now that CheckChildrenDone is calling the new replicationStartPendingFork
(extracted from serverCron) there's actually no need to call
startBgsaveForReplication from updateSlavesWaitingForBgsave anymore,
since as soon as updateSlavesWaitingForBgsave returns, CheckChildrenDone is
calling replicationStartPendingFork that handles that anyway.
The code in updateSlavesWaitingForBgsave had a bug in which it ignored
repl-diskless-sync-delay, but removing that code shows that this bug was
hiding another bug: the max_idle check should have used >= and
not >. This one-second delay has a big impact on my new test.
2020-08-06 16:53:06 +03:00
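
A minimal sketch of change 5 above (connectWithMaster and replicationSetMaster are the function names from the message; the body is abridged and illustrative):

void replicationSetMaster(char *ip, int port) {
    /* ... record server.masterhost / server.masterport, drop the old master link ... */
    server.repl_state = REPL_STATE_CONNECT;
    /* Try to connect right away instead of leaving it to the once-per-second
     * replicationCron(), so a changed master is picked up immediately. */
    connectWithMaster();
}
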
Oran Agra
10bfd2fc0d Fix potential race in bugReportStart
This race would only happen when two threads panicked at the same time,
and even then the only consequence is some extra log lines.

race reported in #7391
2020-08-06 16:47:27 +03:00
Oran Agra
81f8524a12 Fix potential race in bugReportStart
This race would only happen when two threads panicked at the same time,
and even then the only consequence is some extra log lines.

race reported in #7391
2020-08-06 16:47:27 +03:00
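
One way to make such a "run the crash-report preamble only once" guard race-free is an atomic compare-and-swap on the flag; a generic, self-contained sketch (not necessarily how the actual patch does it):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int bug_report_started;

/* A plain "if (!started) { started = 1; ... }" lets two threads that panic at
 * the same time both pass the check and duplicate the preamble; the CAS
 * guarantees exactly one of them wins. */
void bugReportStart(void) {
    int expected = 0;
    if (atomic_compare_exchange_strong(&bug_report_started, &expected, 1))
        printf("=== BUG REPORT START: Cut & paste starting from here ===\n");
}
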