27398 Commits

Author SHA1 Message Date
antirez
1e7153831d Refactoring: unlinkClient() added to lower freeClient() complexity. 2015-09-30 17:10:03 +02:00
antirez
8268ed723a Refactoring: new function to test if client has pending output. 2015-09-30 16:41:48 +02:00
antirez
fdb3be939e Refactoring: new function to test if client has pending output. 2015-09-30 16:41:48 +02:00
antirez
5fad6cea15 Reverse list of clients with pending writes.
May potentially improve locality... it is not exactly clear whether this
makes a difference or not, but it is certainly harmless.
2015-09-30 16:29:42 +02:00
antirez
825f65d2bd Reverse list of clients with pending writes.
May potentially improve locality... it is not exactly clear whether this
makes a difference or not, but it is certainly harmless.
2015-09-30 16:29:42 +02:00
antirez
eaa5b8d9bc writeToClient(): don't remove write handler if not needed. 2015-09-30 16:29:42 +02:00
antirez
063ecbd5e5 writeToClient(): don't remove write handler if not needed. 2015-09-30 16:29:42 +02:00
antirez
2fe94ee816 handleClientsWithPendingWrites(): detect dead clients. 2015-09-30 16:29:42 +02:00
antirez
b741a90ce9 handleClientsWithPendingWrites(): detect dead clients. 2015-09-30 16:29:42 +02:00
antirez
7a7b8bc244 Move handleClientsWithPendingWrites() in networking.c. 2015-09-30 16:29:42 +02:00
antirez
481a0db315 Move handleClientsWithPendingWrites() in networking.c. 2015-09-30 16:29:42 +02:00
antirez
e5121eba31 Avoid installing the client write handler when possible. 2015-09-30 16:29:41 +02:00
antirez
1c7d87df0c Avoid installing the client write handler when possible. 2015-09-30 16:29:41 +02:00
antirez
d638d3de00 redis-cli pipe mode: don't stay in the write loop forever.
The code was broken: most of the time, redis-cli --pipe wrote everything
received on the standard input to the Redis connection socket without ever
reading back the replies, until all the content to write was written.

This means that Redis had to accumulate all the output in the output
buffers of the client, consuming a lot of memory.

Fixed thanks to the original report of anomalies in the behavior
provided by Twitter user @fsaintjacques.
2015-09-30 16:24:21 +02:00
antirez
d1b6a17d1e redis-cli pipe mode: don't stay in the write loop forever.
The code was broken: most of the time, redis-cli --pipe wrote everything
received on the standard input to the Redis connection socket without ever
reading back the replies, until all the content to write was written.

This means that Redis had to accumulate all the output in the output
buffers of the client, consuming a lot of memory.

Fixed thanks to the original report of anomalies in the behavior
provided by Twitter user @fsaintjacques.
2015-09-30 16:24:21 +02:00
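
A minimal sketch in C of the interleaving described in the message above
(hypothetical helper, not the actual redis-cli code): while feeding data to
the server, the loop must also keep draining replies, otherwise the server
accumulates them in the client's output buffers.

    /* Sketch: pump 'len' bytes from 'buf' to the already connected socket
     * 'fd', reading back any reply that becomes available in the meantime. */
    #include <stddef.h>
    #include <sys/select.h>
    #include <sys/types.h>
    #include <unistd.h>

    int pump_pipe(int fd, const char *buf, size_t len) {
        size_t written = 0;
        char reply[4096];

        while (written < len) {
            fd_set rfds, wfds;
            FD_ZERO(&rfds); FD_SET(fd, &rfds);
            FD_ZERO(&wfds); FD_SET(fd, &wfds);
            if (select(fd+1, &rfds, &wfds, NULL, NULL) == -1) return -1;

            /* Drain replies as soon as they are readable: this is what keeps
             * the server-side output buffers small. */
            if (FD_ISSET(fd, &rfds)) {
                ssize_t nread = read(fd, reply, sizeof(reply));
                if (nread <= 0) return -1;
            }

            /* Write only when the socket is actually writable. */
            if (FD_ISSET(fd, &wfds)) {
                ssize_t nwritten = write(fd, buf+written, len-written);
                if (nwritten > 0) written += (size_t)nwritten;
            }
        }
        return 0;
    }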
antirez
c0eb960fc1 Mark version of unstable branch in a unique way. 2015-09-29 17:30:24 +02:00
antirez
622366aa74 Mark version of unstable branch in a unique way. 2015-09-29 17:30:24 +02:00
antirez
2a014229f0 Test: fix false positive in HSTRLEN test.
HINCRBY* tests later used the value "tmp" that was sometimes generated
by the random key generation function. The result was overwriting what
Tcl expected to be inside Redis with another value, causing the next
HSTRLEN test to fail.
2015-09-15 09:37:30 +02:00
antirez
846da5b22e Test: fix false positive in HSTRLEN test.
HINCRBY* tests later used the value "tmp" that was sometimes generated
by the random key generation function. The result was overwriting what
Tcl expected to be inside Redis with another value, causing the next
HSTRLEN test to fail.
2015-09-15 09:37:30 +02:00
antirez
3ba00aae8e GEORADIUS: Don't report duplicates when radius is huge.
Georadius works by computing the center square plus the neighbor squares
covering the whole area of the specified position and radius. Then a
distance filter is used to remove the elements which are actually outside
the range.

When a huge radius is used, like 5000 km or more, adjacent neighbors may
collide and be the same, leading to the same element being reported
multiple times. This only happens in the edge case of a huge radius, but
it is not ideal.

A robust but slow solution would involve qsorting the range to remove all
the duplicates. However, since the collisions only happen in adjacent
boxes, given the way they are ordered in the code, it is much faster to
just check whether the current box is the same as the previous one
processed.

This commit adds a regression test for the bug.

Fixes #2767.
2015-09-14 23:10:50 +02:00
antirez
3c23b5ffd0 GEORADIUS: Don't report duplicates when radius is huge.
Georadius works by computing the center square plus the neighbor squares
covering the whole area of the specified position and radius. Then a
distance filter is used to remove the elements which are actually outside
the range.

When a huge radius is used, like 5000 km or more, adjacent neighbors may
collide and be the same, leading to the same element being reported
multiple times. This only happens in the edge case of a huge radius, but
it is not ideal.

A robust but slow solution would involve qsorting the range to remove all
the duplicates. However, since the collisions only happen in adjacent
boxes, given the way they are ordered in the code, it is much faster to
just check whether the current box is the same as the previous one
processed.

This commit adds a regression test for the bug.

Fixes #2767.
2015-09-14 23:10:50 +02:00
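
A sketch of the adjacent-box check described above (simplified struct and
function names, not the actual geo.c implementation): since colliding boxes
are adjacent in processing order, it is enough to skip a box identical to
the previously processed one, with no need to sort the whole result.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {            /* simplified geohash box */
        uint64_t bits;
        uint8_t step;
    } GeoBox;

    /* Process the center + neighbor boxes, skipping duplicates. Returns the
     * number of distinct boxes actually processed. */
    size_t processNeighbors(const GeoBox *boxes, size_t count) {
        size_t processed = 0;
        GeoBox prev = {0, 0};

        for (size_t i = 0; i < count; i++) {
            /* With a huge radius, an adjacent neighbor can be the very same
             * box: processing it again would report duplicated members. */
            if (processed > 0 &&
                boxes[i].bits == prev.bits && boxes[i].step == prev.step)
                continue;
            /* ... fetch and report the members of boxes[i] here ... */
            prev = boxes[i];
            processed++;
        }
        return processed;
    }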
antirez
f2a7d445c6 Test: MOVE expire test improved.
Related to #2765.
2015-09-14 12:35:55 +02:00
antirez
0a91fc459f Test: MOVE expire test improved.
Related to #2765.
2015-09-14 12:35:55 +02:00
antirez
cbb80c6107 MOVE re-add TTL check fixed.
getExpire() returns -1 when no expire exists.

Related to #2765.
2015-09-14 12:34:17 +02:00
antirez
4fec5ee165 MOVE re-add TTL check fixed.
getExpire() returns -1 when no expire exists.

Related to #2765.
2015-09-14 12:34:17 +02:00
antirez
13dff381b3 MOVE now can move TTL metadata as well.
MOVE was not able to move the TTL: when a key was moved into a different
database number, it became persistent as if PERSIST was used.

In some incredible way (I guess almost nobody uses Redis MOVE) this bug
remained unnoticed inside Redis internals for many years.
Finally Andy Grunwald discovered it and opened an issue.

This commit fixes the bug and adds a regression test.

Close #2765.
2015-09-14 12:30:00 +02:00
antirez
f529a01c1b MOVE now can move TTL metadata as well.
MOVE was not able to move the TTL: when a key was moved into a different
database number, it became persistent as if PERSIST was used.

In some incredible way (I guess almost nobody uses Redis MOVE) this bug
remained unnoticed inside Redis internals for many years.
Finally Andy Grunwald discovered it and opened an issue.

This commit fixes the bug and adds a regression test.

Close #2765.
2015-09-14 12:30:00 +02:00
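
A self-contained sketch of the idea behind the two MOVE commits above (toy
key space and hypothetical names, not the actual db.c code): the TTL is read
from the source DB and re-added in the destination DB only when an expire
actually exists, since getExpire() returns -1 otherwise.

    #include <string.h>

    #define DB_SLOTS 8

    typedef struct {
        const char *key[DB_SLOTS];
        long long expire[DB_SLOTS];      /* -1 means "no expire" */
        int used;
    } toydb;

    long long getExpire(toydb *db, const char *key) {
        for (int i = 0; i < db->used; i++)
            if (!strcmp(db->key[i], key)) return db->expire[i];
        return -1;                       /* no expire exists */
    }

    void setExpire(toydb *db, const char *key, long long when) {
        for (int i = 0; i < db->used; i++)
            if (!strcmp(db->key[i], key)) db->expire[i] = when;
    }

    void dbAdd(toydb *db, const char *key) {
        db->key[db->used] = key;
        db->expire[db->used] = -1;       /* new keys start with no expire */
        db->used++;
    }

    /* MOVE: before the fix the key was added to 'dst' with its TTL lost, as
     * if PERSIST was used; the fix carries the TTL over when one exists. */
    void moveKey(toydb *src, toydb *dst, const char *key) {
        long long expire = getExpire(src, key);

        dbAdd(dst, key);
        if (expire != -1) setExpire(dst, key, expire);
        /* ... the key would then be removed from 'src' ... */
    }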
antirez
124b4114c5 Sentinel: command arity check added where missing. 2015-09-08 09:27:43 +02:00
antirez
33769f840c Sentinel: command arity check added where missing. 2015-09-08 09:27:43 +02:00
Salvatore Sanfilippo
6c77dd1e64 Merge pull request #2695 from rogerlz/unstable
redis-sentinel crash if ckquorum command is executed without args
2015-09-08 09:24:45 +02:00
Salvatore Sanfilippo
0c62d95538 Merge pull request #2695 from rogerlz/unstable
redis-sentinel crash if ckquorum command is executed without args
2015-09-08 09:24:45 +02:00
antirez
331835ca06 Undo slaves state change on failed rdbSaveToSlavesSockets().
As Oran Agra suggested, in startBgsaveForReplication() when the BGSAVE
attempt returns an error, we scan the list of slaves in order to remove
them since there is no way to serve them currently.

However we check for the replication state BGSAVE_START, which was
modified by rdbSaveToSlavesSockets() before forking. So when fork() fails,
the state of the slaves remains BGSAVE_END and no cleanup is performed.

This commit fixes the problem by making rdbSaveToSlavesSockets() able to
undo the state change on fork failure.
2015-09-07 16:09:23 +02:00
antirez
8e55537459 Undo slaves state change on failed rdbSaveToSlavesSockets().
As Oran Agra suggested, in startBgsaveForReplication() when the BGSAVE
attempt returns an error, we scan the list of slaves in order to remove
them since there is no way to serve them currently.

However we check for the replication state BGSAVE_START, which was
modified by rdbSaveToSlavesSockets() before forking. So when fork() fails,
the state of the slaves remains BGSAVE_END and no cleanup is performed.

This commit fixes the problem by making rdbSaveToSlavesSockets() able to
undo the state change on fork failure.
2015-09-07 16:09:23 +02:00
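
A self-contained sketch of the rollback described above (simplified states
and hypothetical names, not the actual rdb.c/replication.c code): the slaves
switched to the "BGSAVE in progress" state before fork() are switched back
if fork() fails, so the caller's cleanup, which looks for slaves still
waiting for the BGSAVE to start, keeps working.

    #include <sys/types.h>
    #include <unistd.h>

    enum repl_state { WAIT_BGSAVE_START, WAIT_BGSAVE_END };

    struct slave { enum repl_state state; };

    /* Returns 0 on success, -1 on fork() failure (with states restored). */
    int save_to_slaves_sockets(struct slave *slaves, int numslaves) {
        int switched[numslaves];         /* which slaves this call switched */

        /* Set up every waiting slave for the full resync before forking. */
        for (int i = 0; i < numslaves; i++) {
            switched[i] = (slaves[i].state == WAIT_BGSAVE_START);
            if (switched[i]) slaves[i].state = WAIT_BGSAVE_END;
        }

        pid_t childpid = fork();
        if (childpid == -1) {
            /* Undo exactly the state changes made above, so that the caller's
             * error path (which scans for WAIT_BGSAVE_START slaves) can still
             * find and clean them up. */
            for (int i = 0; i < numslaves; i++)
                if (switched[i]) slaves[i].state = WAIT_BGSAVE_START;
            return -1;
        }
        if (childpid == 0) _exit(0);     /* child: the RDB transfer would go here */
        return 0;
    }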
Salvatore Sanfilippo
efa50c00ef Merge pull request #2753 from ofirluzon/unstable
SCAN iter parsing changed from atoi to chartoull
2015-09-07 13:24:43 +02:00
Salvatore Sanfilippo
5f81303503 Merge pull request #2753 from ofirluzon/unstable
SCAN iter parsing changed from atoi to chartoull
2015-09-07 13:24:43 +02:00
ubuntu
651741b5bd SCAN iter parsing changed from atoi to chartoull 2015-09-07 11:20:52 +00:00
ubuntu
11381b09d9 SCAN iter parsing changed from atoi to chartoull 2015-09-07 11:20:52 +00:00
antirez
e8bee5c796 Test: print info on HSTRLEN test failure.
This additional info may provide more clues about the test randomly
failing from time to time. Probably the failure is due to some previous
test that overwrites the logical content in the Tcl variable, but this
will make the problem more obvious.
2015-09-07 11:14:52 +02:00
antirez
467de61c84 Test: print info on HSTRLEN test failure.
This additional info may provide more clues about the test randomly
failing from time to time. Probably the failure is due to some previous
test that overwrites the logical content in the Tcl variable, but this
will make the problem more obvious.
2015-09-07 11:14:52 +02:00
kmiku7
b42dceab12 fix boundary case for _dictNextPower 2015-08-23 16:47:42 +08:00
kmiku7
413d8239df fix boundary case for _dictNextPower 2015-08-23 16:47:42 +08:00
antirez
338f302f05 Log client details on SLAVEOF command having an effect. 2015-08-21 15:29:07 +02:00
antirez
d036abe27d Log client details on SLAVEOF command having an effect. 2015-08-21 15:29:07 +02:00
antirez
7bfeb7f46a startBgsaveForReplication(): handle waiting slaves state change.
Before this commit, after triggering a BGSAVE it was up to the caller of
startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in
order to update them accordingly. However when the replication target is
the socket, this is not possible, since the process of updating the
slaves and sending the FULLRESYNC reply must be coupled with the process
of starting an RDB save (the reason is that we need to send the FULLRESYNC
reply and spawn a child that will start to send RDB data to the slaves
ASAP).

This commit moves the responsibility of handling slaves in
WAIT_BGSAVE_START to startBgsaveForReplication() so that for both
diskless and disk-based replication we have the same chain of
responsibility. In order to accommodate this change, syncCommand() also
needs to put the client in the slave list ASAP (just after the initial
checks) and not at the end, so that startBgsaveForReplication() can find
the new slave already in the list.

Another related change is what happens if the BGSAVE fails because of
fork() or other errors: we now remove the slave from the list of slaves
and send an error, scheduling the slave connection to be terminated.

As a side effect of this change the following errors found by
Oran Agra are fixed (thanks!):

1. rdbSaveToSlavesSockets() on failed fork will get the slaves cleaned
up, otherwise they remain in a wrong state forever, since we set them up
for full resync before actually trying to fork.

2. updateSlavesWaitingBgsave() with replication target set as "socket"
was broken since the function changed the slaves' state from
WAIT_BGSAVE_START to WAIT_BGSAVE_END via
replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets()
will not find any slave in the right state (WAIT_BGSAVE_START) to feed.
2015-08-20 17:39:48 +02:00
antirez
f18e5b634d startBgsaveForReplication(): handle waiting slaves state change.
Before this commit, after triggering a BGSAVE it was up to the caller of
startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in
order to update them accordingly. However when the replication target is
the socket, this is not possible, since the process of updating the
slaves and sending the FULLRESYNC reply must be coupled with the process
of starting an RDB save (the reason is that we need to send the FULLRESYNC
reply and spawn a child that will start to send RDB data to the slaves
ASAP).

This commit moves the responsibility of handling slaves in
WAIT_BGSAVE_START to startBgsaveForReplication() so that for both
diskless and disk-based replication we have the same chain of
responsibility. In order to accommodate this change, syncCommand() also
needs to put the client in the slave list ASAP (just after the initial
checks) and not at the end, so that startBgsaveForReplication() can find
the new slave already in the list.

Another related change is what happens if the BGSAVE fails because of
fork() or other errors: we now remove the slave from the list of slaves
and send an error, scheduling the slave connection to be terminated.

As a side effect of this change the following errors found by
Oran Agra are fixed (thanks!):

1. rdbSaveToSlavesSockets() on failed fork will get the slaves cleaned
up, otherwise they remain in a wrong state forever, since we set them up
for full resync before actually trying to fork.

2. updateSlavesWaitingBgsave() with replication target set as "socket"
was broken since the function changed the slaves' state from
WAIT_BGSAVE_START to WAIT_BGSAVE_END via
replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets()
will not find any slave in the right state (WAIT_BGSAVE_START) to feed.
2015-08-20 17:39:48 +02:00
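
A self-contained sketch of the chain of responsibility described above
(simplified states and hypothetical names, not the actual replication.c
code): startBgsaveForReplication() itself either sets up the slaves waiting
for the BGSAVE to start, or drops them when the BGSAVE cannot be started at
all.

    enum repl_state { WAIT_BGSAVE_START, WAIT_BGSAVE_END, CLOSE_ASAP };

    struct slave { enum repl_state state; };

    /* Stand-ins for the real BGSAVE starters: return 0 on success, -1 on error. */
    int start_disk_bgsave(void) { return 0; }
    int start_socket_bgsave(struct slave *slaves, int n) { (void)slaves; (void)n; return 0; }

    int startBgsaveForReplication(struct slave *slaves, int n, int to_socket) {
        int retval = to_socket ? start_socket_bgsave(slaves, n) : start_disk_bgsave();

        if (retval == -1) {
            /* The BGSAVE could not start (e.g. fork() error): there is no way
             * to serve the waiting slaves, so schedule their connections to be
             * closed instead of leaving them waiting forever. */
            for (int i = 0; i < n; i++)
                if (slaves[i].state == WAIT_BGSAVE_START)
                    slaves[i].state = CLOSE_ASAP;
            return retval;
        }

        if (!to_socket) {
            /* Disk-based target: send the FULLRESYNC reply and mark the waiting
             * slaves here; the socket target does this itself before forking. */
            for (int i = 0; i < n; i++)
                if (slaves[i].state == WAIT_BGSAVE_START)
                    slaves[i].state = WAIT_BGSAVE_END;
        }
        return retval;
    }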
Sebastian Waisbrot
7be8687677 Fix race condition in unit/introspection
Make sure monitor is attached in one connection before issuing commands to be monitored in another one
2015-08-11 22:56:17 -07:00
Sebastian Waisbrot
97a2248309 Fix race condition in unit/introspection
Make sure monitor is attached in one connection before issuing commands to be monitored in another one
2015-08-11 22:56:17 -07:00
antirez
565b7ce798 slaveTryPartialResynchronization and syncWithMaster: better synergy.
It is simpler if removing the read event handler from the FD is up to
slaveTryPartialResynchronization(); after all, it is only called in the
context of syncWithMaster().

This commit also makes sure that on error all the event handlers are
removed from the socket before closing it.
2015-08-07 12:04:37 +02:00
antirez
bea1259190 slaveTryPartialResynchronization and syncWithMaster: better synergy.
It is simpler if removing the read event handler from the FD is up to
slaveTryPartialResynchronization(); after all, it is only called in the
context of syncWithMaster().

This commit also makes sure that on error all the event handlers are
removed from the socket before closing it.
2015-08-07 12:04:37 +02:00
antirez
85b50ca125 syncWithMaster(): non blocking state machine. 2015-08-06 18:12:20 +02:00