20963 Commits

Author SHA1 Message Date
youjiali1995
a8322f44b3 sentinel: fix randomized sentinelTimer. 2018-09-05 10:32:18 +08:00
antirez
5fda4a85dd Clarify why remaining may be zero in readQueryFromClient().
See #5304.
2018-09-04 13:29:27 +02:00
antirez
d93b667464 Merge branch 'unstable' of github.com:/antirez/redis into unstable 2018-09-04 13:26:06 +02:00
Salvatore Sanfilippo
3bde684596 Merge pull request #5304 from soloestoy/fix-unexpected-readlen
networking: fix unexpected negative or zero readlen
2018-09-04 13:25:28 +02:00
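For context, a hedged sketch of the guard this PR adds in readQueryFromClient() (simplified; the surrounding declarations are assumed):

    /* When reading a big bulk argument, compute how much of the bulk is
     * still missing from the query buffer. In some edge cases (e.g. a
     * client resumed after CLIENT PAUSE) this remainder can be zero or
     * even negative, so shrink readlen only when it is strictly
     * positive. */
    ssize_t remaining = (size_t)(c->bulklen+2)-sdslen(c->querybuf);
    if (remaining > 0 && remaining < readlen) readlen = remaining;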
Sascha Roland
cbbf49b7d3 #5299 Fix blocking XREAD for streams that ran dry
The conclusion that an XREAD request can be answered synchronously,
in case the stream's last_id is larger than the passed
last-received-id parameter, assumes that there must be entries
present which could be returned immediately.
This assumption fails for streams that are now empty but actually
contained some entries which got removed by XDEL, etc.

As a result, the client is answered synchronously with an empty
result, instead of blocking for new entries to arrive.
An additional check for a non-empty stream is required.
2018-09-04 13:13:36 +02:00
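A minimal sketch of the corrected condition (field names as in the Redis stream code; the local variable gt holding the passed ID is an assumption):

    /* Answer the XREAD synchronously only if the stream advanced past
     * the client's last-received ID *and* still holds entries; a stream
     * drained by XDEL must block the client instead of getting an
     * empty reply. */
    stream *s = o->ptr;
    if (s->length && (s->last_id.ms > gt->ms ||
        (s->last_id.ms == gt->ms && s->last_id.seq > gt->seq)))
    {
        /* ... serve the available entries right away ... */
    } else {
        /* ... block waiting for new entries ... */
    }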
Salvatore Sanfilippo
65b7c53e0d Merge pull request #5315 from soloestoy/optimize-parsing-large-bulk
networking: optimize parsing large bulk greater than 32k
2018-09-04 12:49:50 +02:00
antirez
02306736c9 Unblocked clients API refactoring. See #4418. 2018-09-03 18:39:18 +02:00
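A hedged sketch of the kind of helper such a refactoring centralizes (the name and guard are inferred from the commit title, not guaranteed verbatim):

    /* Put a client on the unblocked-clients queue so its pending input
     * is processed on the next event loop iteration; the flag check
     * prevents queueing the same client twice. */
    void queueClientForReprocessing(client *c) {
        if (!(c->flags & CLIENT_UNBLOCKED)) {
            c->flags |= CLIENT_UNBLOCKED;
            listAddNodeTail(server.unblocked_clients,c);
        }
    }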
Salvatore Sanfilippo
590b3157d4 Merge pull request #4418 from soloestoy/fix-multiple-unblock
fix multiple unblock for clientsArePaused()
2018-09-03 18:31:02 +02:00
antirez
f0348a6543 Make pending buffer processing safe for CLIENT_MASTER client.
Related to #5305.
2018-09-03 18:17:31 +02:00
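A hedged sketch of the idea (the wrapper's shape and name are assumptions, simplified from what the message implies):

    /* Process a client's pending buffer; when the client is our master,
     * also propagate exactly the applied portion of the stream to our
     * own sub-slaves so replication offsets stay consistent. */
    void processInputBufferAndReplicate(client *c) {
        if (!(c->flags & CLIENT_MASTER)) {
            processInputBuffer(c);
            return;
        }
        size_t prev_offset = c->reploff;
        processInputBuffer(c);
        size_t applied = c->reploff - prev_offset;
        if (applied) {
            replicationFeedSlavesFromMasterStream(server.slaves,
                    c->pending_querybuf, applied);
            sdsrange(c->pending_querybuf,applied,-1);
        }
    }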
zhaozhao.zz
a26756ab59 networking: optimize parsing large bulk greater than 32k
If we are going to read a large object from the network,
try to make it likely that it will start at the c->querybuf
boundary so that we can optimize object creation,
avoiding a large copy of data.

But do this only when the data we have not parsed yet is less
than or equal to ll+2. If the unparsed data length is greater
than ll+2, trimming querybuf is just a waste of time, because
at that point the querybuf contains not only our bulk.

It's easy to reproduce the issue:

Time1: call `client pause 10000` on a slave.

Time2: redis-benchmark -t set -r 10000 -d 33000 -n 10000.

Then the slave hangs after 10 seconds.
2018-09-04 00:02:25 +08:00
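A hedged sketch of the change in processMultibulkBuffer() (simplified; names follow the message):

    if (ll >= PROTO_MBULK_BIG_ARG) {
        /* Trim what was already parsed so the large bulk can start at
         * the buffer boundary, but only when the unparsed bytes are at
         * most the bulk plus CRLF (ll+2); otherwise the buffer holds
         * more than our bulk and trimming is wasted work. */
        if (sdslen(c->querybuf)-c->qb_pos <= (size_t)ll+2) {
            sdsrange(c->querybuf,c->qb_pos,-1);
            c->qb_pos = 0;
            /* Hint sds about how large the bulk is going to be. */
            c->querybuf = sdsMakeRoomFor(c->querybuf,ll+2);
        }
    }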
zhaozhao.zz
2a40cc26c5 if master is already unblocked, do not unblock it twice 2018-09-03 14:36:48 +08:00
zhaozhao.zz
145a5e01e3 fix multiple unblock for clientsArePaused() 2018-09-03 14:26:14 +08:00
Amin Mesbah
410e2a9cbb Use geohash limit defines in constraint check
Slight robustness improvement, especially if the limit values are
changed, as was suggested in antirez/redis#4291 [1].

[1] https://github.com/antirez/redis/pull/4291
2018-09-02 00:06:20 -07:00
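A hedged sketch of such a check (define names per geohash.h; the exact function context is assumed):

    /* Reject coordinates outside the encodable range using the shared
     * limit defines instead of repeating the literal values. */
    if (longitude > GEO_LONG_MAX || longitude < GEO_LONG_MIN ||
        latitude > GEO_LAT_MAX || latitude < GEO_LAT_MIN) return 0;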
antirez
99316560f1 After slave Lua script leaves busy state, re-process the master buffer.
Technically speaking we don't really need to put the master client in
the list of clients to be processed, since in practice the PING
commands from the master will take care of it; however it is
conceptually more sane to do so.
2018-08-31 16:45:02 +02:00
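A hedged sketch of the resume step (reusing the reprocessing helper from the #4418 refactoring; exact placement assumed):

    /* Script no longer busy: queue the master client so the replication
     * stream accumulated meanwhile is processed now, rather than only
     * when the master's next PING triggers a read. */
    if (server.masterhost && server.master)
        queueClientForReprocessing(server.master);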
antirez
e789b562f3 While the slave is busy, just accumulate master input.
Processing commands from the master while the slave is in a busy
state is not correct; however we also cannot just reply -BUSY to the
replication stream commands from the master. The correct solution is
to stop processing data from the master, just accumulating the stream
into the buffers, and resume the processing later.

Related to #5297.
2018-08-31 16:45:02 +02:00
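A hedged sketch of this gating as it would appear in processInputBuffer() (flag names per the Redis source; simplified):

    /* While a script keeps the server busy, don't parse input coming
     * from the master: leave it accumulating in the query buffer and
     * resume processing after the script terminates, instead of
     * replying -BUSY to replication stream commands. */
    if (server.lua_timedout && c->flags & CLIENT_MASTER) break;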
antirez
c3d287793d Allow scripts to timeout even if from the master instance.
However, scripts sent by the master will be impossible to kill.

Related to #5297.
2018-08-31 16:45:02 +02:00
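A hedged sketch of the unkillable case in the SCRIPT KILL path (error text paraphrased; exact reply format assumed):

    /* Refuse to kill a script sent by the master in the context of
     * replication: killing it would leave the slave desynchronized. */
    if (server.lua_caller->flags & CLIENT_MASTER) {
        addReplyError(c,
            "The busy script was sent by a master instance in the "
            "context of replication and cannot be killed.");
        return;
    }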
antirez
15334f5355 Allow scripts to timeout on slaves as well.
See reasoning in #5297.
2018-08-31 16:45:01 +02:00