20927 Commits

Author SHA1 Message Date
John Sully
3eb99b4811 Merge branch 'redis_6_merge' into keydbpro
Former-commit-id: 44f1b065ed6d3b0ad2a62f093432743b98fad6be
2020-03-25 15:47:24 -04:00
John Sully
b1c9dcaa05 Merge branch 'unstable' into redis_6_merge
Former-commit-id: 718aee242dd75abd16a5a6a89353d2a35f37b010
2020-03-25 15:47:12 -04:00
John Sully
af459476ea Merge branch 'unstable' into redis_6_merge
Former-commit-id: 718aee242dd75abd16a5a6a89353d2a35f37b010
2020-03-25 15:47:12 -04:00
antirez
ce73158a9c PSYNC2: meaningful offset implemented.
A very commonly reported operational problem with Redis master-replica
sets is that, once the master becomes unavailable for some reason,
especially because of network problems, it often won't be able to
perform a partial resynchronization with the new master once it rejoins
the partition, for the following reason:

1. The master becomes isolated, but it keeps sending PINGs to the
replicas. Such PINGs will never be received since the link connection is
actually already severed.
2. On the other side, one of the replicas will turn into the new master,
setting its secondary replication ID offset to the one of the last
command received from the old master: this offset will not include the
PINGs sent by the master once the link was already disconnected.
3. When the old master rejoins the partition and is turned into a replica,
its offset will be too advanced because of the PINGs, so a PSYNC will fail,
and a full synchronization will be required.

Related to issue #7002 and other discussions we had in the past around
this problem.
2020-03-25 15:55:24 +01:00
antirez
5f72f69688 PSYNC2: meaningful offset implemented.
A very commonly reported operational problem with Redis master-replica
sets is that, once the master becomes unavailable for some reason,
especially because of network problems, it often won't be able to
perform a partial resynchronization with the new master once it rejoins
the partition, for the following reason:

1. The master becomes isolated, but it keeps sending PINGs to the
replicas. Such PINGs will never be received since the link connection is
actually already severed.
2. On the other side, one of the replicas will turn into the new master,
setting its secondary replication ID offset to the one of the last
command received from the old master: this offset will not include the
PINGs sent by the master once the link was already disconnected.
3. When the old master rejoins the partition and is turned into a replica,
its offset will be too advanced because of the PINGs, so a PSYNC will fail,
and a full synchronization will be required.

Related to issue #7002 and other discussions we had in the past around
this problem.
2020-03-25 15:55:24 +01:00
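To make the scenario above concrete, here is a minimal, self-contained sketch of the idea, not the actual replication.c code: the field names below (master_repl_offset, meaningful_offset) and the whole program are illustrative stand-ins for the commit's concept of remembering the offset of the last non-PING write, so a demoted master can cache itself at an offset the promoted replica actually received.

```c
/* Toy model only (not Redis source): a "meaningful" offset that excludes
 * trailing PINGs, so a demoted master can still PSYNC with the new master. */
#include <stdio.h>

typedef struct {
    long long master_repl_offset; /* advances for every byte sent, PINGs included */
    long long meaningful_offset;  /* offset of the last non-PING write */
} repl_state;

static void feed_write(repl_state *m, long long bytes) {
    m->master_repl_offset += bytes;
    m->meaningful_offset = m->master_repl_offset; /* real data: both advance */
}

static void feed_ping(repl_state *m, long long bytes) {
    m->master_repl_offset += bytes; /* PING advances only the raw offset */
}

int main(void) {
    repl_state master = {1000, 1000};
    feed_write(&master, 50); /* last command the replica actually received */
    feed_ping(&master, 14);  /* PINGs sent after the link was already severed */
    feed_ping(&master, 14);

    /* Caching the demoted master at meaningful_offset (1050) rather than
     * master_repl_offset (1078) lets PSYNC against the promoted replica
     * succeed instead of forcing a full synchronization. */
    printf("raw offset: %lld, meaningful offset: %lld\n",
           master.master_repl_offset, master.meaningful_offset);
    return 0;
}
```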
antirez
45ba72ad01 Explain why we allow transactions in -BUSY state.
Related to #7022.
2020-03-25 15:55:24 +01:00
antirez
8caa271476 Explain why we allow transactions in -BUSY state.
Related to #7022.
2020-03-25 15:55:24 +01:00
Oran Agra
e1e2f91589 MULTI/EXEC during LUA script timeout are messed up
Redis refusing to run MULTI or EXEC during script timeout may cause partial
transactions to run.

1) If the client sends MULTI+commands+EXEC in a pipeline without waiting for
responses, but these arrive at the shards partially while there's a busy script
and partially after it eventually finishes, we'll end up running only part of
the transaction (since MULTI was ignored and EXEC would fail).

2) Similarly to the above, if EXEC arrives during a busy script, it'll be
ignored and the client state remains in a transaction.

The 3rd test, which I added for a case where MULTI and EXEC are OK and
only the body arrives during a busy script, was already handled correctly,
since processCommand calls flagTransaction.
2020-03-25 15:55:24 +01:00
Oran Agra
e43cd8316f MULTI/EXEC during LUA script timeout are messed up
Redis refusing to run MULTI or EXEC during script timeout may cause partial
transactions to run.

1) If the client sends MULTI+commands+EXEC in a pipeline without waiting for
responses, but these arrive at the shards partially while there's a busy script
and partially after it eventually finishes, we'll end up running only part of
the transaction (since MULTI was ignored and EXEC would fail).

2) Similarly to the above, if EXEC arrives during a busy script, it'll be
ignored and the client state remains in a transaction.

The 3rd test, which I added for a case where MULTI and EXEC are OK and
only the body arrives during a busy script, was already handled correctly,
since processCommand calls flagTransaction.
2020-03-25 15:55:24 +01:00
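A hedged toy model of the failure mode described in this pair of commits (plain C, not the Redis dispatch code): the whitelist below is illustrative only. The point is that letting transaction-control commands through while a script timeout is active keeps a pipelined MULTI...EXEC coherent, while body commands rejected with -BUSY flag the transaction so EXEC aborts it as a whole rather than running it partially.

```c
/* Hypothetical sketch: which commands a server might accept while a Lua
 * script is busy. If MULTI were rejected with -BUSY but the queued commands
 * and EXEC arrived after the script ends, only part of the pipeline would
 * run as a transaction. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Returns true if the command may run while a script timeout is active. */
static bool allowed_during_busy_script(const char *cmd) {
    return strcmp(cmd, "AUTH") == 0 ||
           strcmp(cmd, "SHUTDOWN") == 0 ||
           strcmp(cmd, "SCRIPT") == 0 ||   /* SCRIPT KILL */
           strcmp(cmd, "MULTI") == 0 ||    /* keep transaction state coherent */
           strcmp(cmd, "EXEC") == 0 ||
           strcmp(cmd, "DISCARD") == 0;
}

int main(void) {
    /* Body commands still get -BUSY; in the real server that marks the
     * transaction dirty, so EXEC cleanly aborts the whole thing. */
    const char *pipeline[] = {"MULTI", "SET", "INCR", "EXEC"};
    for (int i = 0; i < 4; i++)
        printf("%s -> %s\n", pipeline[i],
               allowed_during_busy_script(pipeline[i]) ? "accepted" : "-BUSY");
    return 0;
}
```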
antirez
484a14ebde Improve comments of replicationCacheMasterUsingMyself(). 2020-03-25 15:55:24 +01:00
antirez
34b8983220 Improve comments of replicationCacheMasterUsingMyself(). 2020-03-25 15:55:24 +01:00
antirez
4cfceac287 Fix BITFIELD_RO test. 2020-03-25 15:55:24 +01:00
antirez
70a98a43ea Fix BITFIELD_RO test. 2020-03-25 15:55:24 +01:00
antirez
6a1a5cb2a1 Abort transactions after -READONLY error. Fix #7014. 2020-03-25 15:55:24 +01:00
antirez
8783304a2d Abort transactions after -READONLY error. Fix #7014. 2020-03-25 15:55:24 +01:00
antirez
243b26d97d Minor changes to BITFIELD_RO PR #6951. 2020-03-25 15:55:24 +01:00
antirez
ec9cf002d5 Minor changes to BITFIELD_RO PR #6951. 2020-03-25 15:55:24 +01:00
bodong.ybd
015d1cb2ff Added BITFIELD_RO variants for read-only operations. 2020-03-25 15:55:24 +01:00
bodong.ybd
b3e4abf06e Added BITFIELD_RO variants for read-only operations. 2020-03-25 15:55:24 +01:00
antirez
5a13e0feb1 Modules: updated function doc after #7003. 2020-03-25 15:55:24 +01:00
antirez
50f8f9504b Modules: updated function doc after #7003. 2020-03-25 15:55:24 +01:00
Guy Benoish
6680c06705 Allow RM_GetContextFlags to work with ctx==NULL 2020-03-25 15:55:24 +01:00
Guy Benoish
f2f3dc5e73 Allow RM_GetContextFlags to work with ctx==NULL 2020-03-25 15:55:24 +01:00
hwware
2dcae61087 fix potential memory leak in redis-cli 2020-03-25 15:55:24 +01:00
hwware
eb80887936 fix potential memory leak in redis-cli 2020-03-25 15:55:24 +01:00
Yossi Gottlieb
700126e9cf Fix crashes related to failed/rejected accepts. 2020-03-25 15:55:24 +01:00
Yossi Gottlieb
cdcab0e820 Fix crashes related to failed/rejected accepts. 2020-03-25 15:55:24 +01:00
Yossi Gottlieb
1f1d642e01 Cluster: fix misleading accept errors. 2020-03-25 15:55:24 +01:00
Yossi Gottlieb
50dcd9f96d Cluster: fix misleading accept errors. 2020-03-25 15:55:24 +01:00
Yossi Gottlieb
1a948d0c5b Conns: Fix connClose() / connAccept() behavior.
We assume accept handlers may choose to reject a connection and close
it, but connAccept() callers can't distinguish between this state and
other error states requiring connClose().

This makes it safe (and mandatory!) to always call connClose() if
connAccept() fails, and safe for accept handlers to close connections
(which will defer).
2020-03-25 15:55:24 +01:00
Yossi Gottlieb
87dbd8f54c Conns: Fix connClose() / connAccept() behavior.
We assume accept handlers may choose to reject a connection and close
it, but connAccept() callers can't distinguish between this state and
other error states requiring connClose().

This makes it safe (and mandatory!) to always call connClose() if
connAccept() fails, and safe for accept handlers to close connections
(which will defer).
2020-03-25 15:55:24 +01:00
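A toy model of the contract described in these two commits, assuming nothing about the real connection.h API beyond what the message states (the struct and helpers below are invented for illustration): a close requested from inside the accept handler is deferred, so the caller can, and must, unconditionally call connClose() whenever connAccept() fails.

```c
/* Toy model (not the real connection.c) of the connAccept()/connClose()
 * contract: the handler may reject and "close" the connection; closing from
 * inside the handler defers, so the caller's unconditional close is safe. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct connection {
    bool in_handler;      /* true while the accept handler runs */
    bool close_scheduled; /* handler asked for close; defer until it returns */
} connection;

static void connClose(connection *c) {
    if (c->in_handler) {      /* closing from inside the handler: defer */
        c->close_scheduled = true;
        return;
    }
    printf("connection freed\n");
    free(c);
}

/* Returns -1 on failure; the caller must then call connClose(). */
static int connAccept(connection *c, bool handler_rejects) {
    c->in_handler = true;
    if (handler_rejects) connClose(c); /* handler may safely close */
    c->in_handler = false;
    return handler_rejects ? -1 : 0;
}

int main(void) {
    connection *c = calloc(1, sizeof(*c));
    if (connAccept(c, true) == -1) {
        /* Safe (and mandatory) even though the handler already closed it. */
        connClose(c);
    }
    return 0;
}
```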
hwware
4a0249c0c8 remove redundant semicolon 2020-03-25 15:55:24 +01:00
hwware
81e8686cc7 remove redundant semicolon 2020-03-25 15:55:24 +01:00
hwware
9c9ef6fb9b clean CLIENT_TRACKING_CACHING flag when caching is disabled 2020-03-25 15:55:24 +01:00
hwware
c7524a7e44 clean CLIENT_TRACKING_CACHING flag when caching is disabled 2020-03-25 15:55:24 +01:00
hwware
04d838274f add missing commands in cluster help 2020-03-25 15:55:24 +01:00
hwware
2dd1ca6af0 add missing commands in cluster help 2020-03-25 15:55:24 +01:00
artix
5ccdb7a5be Support Redis Cluster Proxy PROXY INFO command 2020-03-25 15:55:24 +01:00
artix
95324b8190 Support Redis Cluster Proxy PROXY INFO command 2020-03-25 15:55:24 +01:00
antirez
07c75f60f3 Restore newline at the end of redis-cli.c 2020-03-25 15:54:34 +01:00
antirez
e628f94436 Restore newline at the end of redis-cli.c 2020-03-25 15:54:34 +01:00
chendianqiang
1994eda07e use correct list for moduleUnregisterUsedAPI 2020-03-25 15:54:34 +01:00
chendianqiang
5d4c4df3ef use correct list for moduleUnregisterUsedAPI 2020-03-25 15:54:34 +01:00
fengpf
e8c11fe29d fix comments in latency.c 2020-03-25 15:54:34 +01:00
fengpf
0e5820d893 fix comments in latency.c 2020-03-25 15:54:34 +01:00
WuYunlong
37c6571a6c Fix master replica inconsistency for upgrading scenario.
Before this commit, when upgrading a replica, expired keys would not
be loaded, leaving the replica with fewer keys in its db. Up to this point
the master's and replica's keys are logically consistent. However, before
the keys on master and replica become physically consistent as well, that
is, before they have the same dbsize: if the master runs into a problem and
the replica gets promoted, becoming the new master of that partition, and
the new master updates a key which does not exist on it but still physically
exists on the old master (now the replica), the old master would refuse to
update the key, leaving master and replica data inconsistent.

How could this happen?
It's all because of a wrong judgement of roles while starting up
the server. We cannot use server.masterhost to judge whether the server
is a master or a replica, since that fails in cluster mode.

When we start the server and load the RDB, if the instance is a replica we
do want to load expired keys, and we do not want it to have the ability to
actively expire keys.
2020-03-25 15:54:34 +01:00
WuYunlong
0578157d56 Fix master replica inconsistency for upgrading scenario.
Before this commit, when upgrading a replica, expired keys would not
be loaded, leaving the replica with fewer keys in its db. Up to this point
the master's and replica's keys are logically consistent. However, before
the keys on master and replica become physically consistent as well, that
is, before they have the same dbsize: if the master runs into a problem and
the replica gets promoted, becoming the new master of that partition, and
the new master updates a key which does not exist on it but still physically
exists on the old master (now the replica), the old master would refuse to
update the key, leaving master and replica data inconsistent.

How could this happen?
It's all because of a wrong judgement of roles while starting up
the server. We cannot use server.masterhost to judge whether the server
is a master or a replica, since that fails in cluster mode.

When we start the server and load the RDB, if the instance is a replica we
do want to load expired keys, and we do not want it to have the ability to
actively expire keys.
2020-03-25 15:54:34 +01:00
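An illustrative sketch of the role check these two commits describe: deciding whether to drop expired keys while loading the RDB cannot rely on server.masterhost alone, because that judgement fails in cluster mode. The helper and struct below are simplified stand-ins, not the actual Redis functions.

```c
/* Toy model: role-aware handling of expired keys at RDB load time. */
#include <stdbool.h>
#include <stdio.h>

struct server_state {
    bool cluster_enabled;
    const char *masterhost;       /* non-NULL when REPLICAOF is configured */
    bool cluster_node_is_master;  /* role according to the cluster state */
};

static bool i_am_master(const struct server_state *s) {
    if (s->cluster_enabled) return s->cluster_node_is_master;
    return s->masterhost == NULL; /* masterhost alone is only valid outside cluster mode */
}

/* While loading the RDB at startup: a master drops keys that are already
 * expired; a replica keeps them (and does not actively expire), so its
 * dbsize stays physically consistent with the master. */
static bool drop_expired_key_on_load(const struct server_state *s) {
    return i_am_master(s);
}

int main(void) {
    struct server_state cluster_replica = {true, NULL, false};
    printf("cluster replica drops expired keys on load: %s\n",
           drop_expired_key_on_load(&cluster_replica) ? "yes" : "no");
    return 0;
}
```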
guodongxiaren
0fd48a7c53 string literal should be const char* 2020-03-25 15:54:34 +01:00
guodongxiaren
da14982d1e string literal should be const char* 2020-03-25 15:54:34 +01:00
Itamar Haber
2eec521f8c Adds keyspace notifications to migrate and restore 2020-03-25 15:54:34 +01:00