27610 Commits

Author SHA1 Message Date
Brian J. McManus
bb58e517a9 Issue 804 Add Default-Start and Default-Stop LSB tags for RedHat startup and update-rc.d compatibility. 2012-12-02 21:46:37 -07:00
antirez
b775d5c3e9 Blocking POP: use a dictionary to store keys client side.
To store the keys we block for during a blocking pop operation, while the
client waits for more data to arrive, we used a simple linear array of
redis objects in the blockingState structure:

    robj **keys;
    int count;

However, in order to fix issue #801 we also used a dictionary, to avoid
ending up in the blocked clients queue multiple times for the same key
with the same client.

The dictionary was only temporary, just to avoid duplicates, but since we
create and destroy it anyway there is no point in doing this duplicated
work, so this commit simply uses a dictionary as the main structure to
store the keys we are blocked for. Instead of the previous fields we now
just have:

    dict *keys;

This simplifies the code and reduces the work done by the server during
a blocking POP operation.
2012-12-02 20:43:15 +01:00
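
For illustration, a minimal sketch of the resulting structure and of the
dedup-by-dictionary logic. dictAdd() returning DICT_ERR on an already
existing key is the real Redis dict behavior; the exact blockingState
layout and the omitted bookkeeping are assumptions, not the literal diff:

    /* Sketch: keys a client is blocked on are now dict entries, so a key
     * can appear at most once per client. */
    typedef struct blockingState {
        dict *keys;     /* The keys we are blocked on (key -> NULL). */
        time_t timeout; /* Blocking operation timeout. */
    } blockingState;

    void blockForKeys(redisClient *c, robj **keys, int numkeys, time_t timeout) {
        int j;

        c->bpop.timeout = timeout;
        for (j = 0; j < numkeys; j++) {
            /* dictAdd() fails if the key is already present: the client
             * is already blocked on it, so we skip it. */
            if (dictAdd(c->bpop.keys, keys[j], NULL) != DICT_OK) continue;
            incrRefCount(keys[j]);
            /* ... also register the client in the per-key list of blocked
             * clients (omitted) ... */
        }
    }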
antirez
e9f1288faf Test: regression for issue #801. 2012-12-02 20:43:11 +01:00
antirez
5f622803ec Client should not block multiple times on the same key.
Sending a command like:

BLPOP foo foo foo foo 0

Resulted in a crash before this commit, since the client ended up being
inserted into the waiting list for the same key multiple times.
This caused the function handleClientsBlockedOnLists() to fail, because
we have code like this:

    if (de) {
        list *clients = dictGetVal(de);
        int numclients = listLength(clients);

        while(numclients--) {
            listNode *clientnode = listFirst(clients);

            /* serve clients here... */
        }
    }

The code that serves clients used to remove the served client from the
waiting list, so if a client is blocked multiple times, eventually the
call to listFirst() will return NULL, or worse, will access random memory,
since the list may no longer exist: it is removed by the function
unblockClientWaitingData() when there are no more clients waiting for this
list.

To avoid making the rest of the implementation more complex, this commit
modifies blockForKeys() so that a client will be put just a single time
into the waiting list for a given key.

Since it is Saturday, I hope this fixes issue #801.
2012-12-02 20:43:07 +01:00
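
A compact sketch of the guard this commit adds: a dictionary used only to
detect duplicates while blockForKeys() fills the waiting lists. The dict
API calls are Redis's real ones; the dictType and the surrounding code are
assumptions:

    void blockForKeys(redisClient *c, robj **keys, int numkeys, time_t timeout) {
        dict *added = dictCreate(&setDictType, NULL); /* temporary dedup set */
        int j;

        for (j = 0; j < numkeys; j++) {
            /* A client must appear at most once in the waiting list of a
             * given key: skip keys already seen in this command. */
            if (dictAdd(added, keys[j], NULL) != DICT_OK) continue;
            /* ... insert the client into db->blocking_keys for keys[j] ... */
        }
        dictRelease(added);
    }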
antirez
68e2d2ce07 SDIFF is now able to select between two algorithms for speed.
SDIFF used an algorithm that was O(N) where N is the total number
of elements of all the sets involved in the operation.

The algorithm worked like this:

ALGORITHM 1:

1) For the first set, add all the members to an auxiliary set.
2) For all the other sets, remove all the members of the set from the
auxiliary set.

So it is an O(N) algorithm where N is the total number of elements in
all the sets involved in the diff operation.

Cristobal Viedma suggested modifying the algorithm as follows:

ALGORITHM 2:

1) Iterate all the elements of the first set.
2) For every element, check if the element also exists in all the other
remaining sets.
3) Add the element to the auxiliary set only if it does not exist in any
of the other sets.

The worst-case complexity of this algorithm is O(N*M), where N is the
size of the first set and M the total number of sets involved in the
operation.

However, when there are elements in common, this algorithm stops the
computation for a given element as soon as it finds a duplicate in
another set.

I (antirez) added an additional step to algorithm 2 to make it faster,
that is, to sort the sets to subtract from the largest to the smallest,
so that it is more likely to find a duplicate in the larger sets, which
are checked before the smaller ones.

WHAT IS BETTER?

Neither, of course: for instance, if the first set is much larger than
the other sets, the second algorithm does a lot more work compared to the
first algorithm.

Similarly, if the first set is much smaller than the other sets, the
original algorithm does more work.

So this commit makes Redis able to guess the number of operations
required by each algorithm, and select the best at runtime according
to the input received.

However, since the second algorithm has better constant times and can do
less work if there are duplicated elements, an advantage is given to the
second algorithm.
2012-11-30 16:36:42 +01:00
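
A hedged sketch of the runtime selection, with an illustrative cost model:
Algorithm 1 touches every element of every set once, while Algorithm 2
performs one lookup per element of the first set in each set. setTypeSize()
is the real cardinality helper; the bias constant and the function name are
assumptions:

    static int selectDiffAlgorithm(robj **sets, long setnum) {
        long long algo_one_work = 0, algo_two_work = 0;
        long j;

        for (j = 0; j < setnum; j++) {
            if (sets[j] == NULL) continue;
            algo_one_work += setTypeSize(sets[j]); /* every element, once */
            algo_two_work += setTypeSize(sets[0]); /* first set, per set  */
        }
        /* Advantage to Algorithm 2: better constant times, and it can
         * stop early for an element as soon as a duplicate is found. */
        algo_two_work /= 2;
        return (algo_one_work <= algo_two_work) ? 1 : 2;
    }

If Algorithm 2 wins, the sets to subtract are then sorted by decreasing
cardinality before the per-element checks, as described above.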
antirez
95cf003ba5 redis-benchmark: seed the PRNG with time() at startup. 2012-11-30 15:41:09 +01:00
antirez
f76b95e713 SDIFF fuzz test added. 2012-11-30 01:35:34 +01:00
antirez
3284fca0ae Make an EXEC test more latency proof. 2012-11-29 16:12:14 +01:00
antirez
a5af538d98 Introduced the Build ID in INFO and --version output.
The idea is to be able to identify a build in a unique way, so for
instance after a bug report we can recognize that the build is the one
of a popular Linux distribution and perform the debugging in the same
environment.
2012-11-29 14:20:08 +01:00
antirez
8f5635d339 On crash memory test rewritten so that it actually works.
1) We no longer test location by location, otherwise the CPU write cache
completely defeats the purpose of the test.
2) We still need a memory test that operates in steps from the first to
the last location in order to never hit the cache, but that is still
able to retain the memory content.

This was tested using a Linux box containing a bad memory module with a
single bit error (always zero).

So the final solution has an error propagation step that works as follows:

1) Invert bits at every location.
2) Swap adjacent locations.
3) Swap adjacent locations again.
4) Invert bits at every location.
5) Swap adjacent locations.
6) Swap adjacent locations again.

Before and after these steps, and after step 4, a CRC64 checksum is computed.
If the three CRC64 checksums don't match, a memory error was detected.
2012-11-29 10:24:35 +01:00
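
A hedged sketch of the six propagation steps around the three checksums;
crc64() is Redis's checksum helper, while the memtest_* helpers and the
word-sized access are assumptions. Steps 1-4 and 5-6 each restore the
content of a healthy region, so all three CRCs must match:

    #include <stdint.h>
    #include <stddef.h>

    uint64_t crc64(uint64_t crc, const unsigned char *s, uint64_t l);

    static void memtest_invert(unsigned long *p, size_t words) {
        size_t i;
        for (i = 0; i < words; i++) p[i] = ~p[i];
    }

    static void memtest_swap_adjacent(unsigned long *p, size_t words) {
        size_t i;
        for (i = 0; i + 1 < words; i += 2) {
            unsigned long t = p[i];
            p[i] = p[i+1];
            p[i+1] = t;
        }
    }

    /* Returns 1 if a memory error was detected. Every pass walks the whole
     * region sequentially, so the CPU cache cannot mask a bad cell, while a
     * stuck bit corrupts the data and is caught by the CRC comparison. */
    static int memtest_preserving(unsigned long *p, size_t words) {
        size_t bytes = words * sizeof(unsigned long);
        uint64_t crc1, crc2, crc3;

        crc1 = crc64(0, (unsigned char*)p, bytes);
        memtest_invert(p, words);        /* 1) invert every location       */
        memtest_swap_adjacent(p, words); /* 2) swap adjacent locations     */
        memtest_swap_adjacent(p, words); /* 3) swap them back              */
        memtest_invert(p, words);        /* 4) invert again: data restored */
        crc2 = crc64(0, (unsigned char*)p, bytes);
        memtest_swap_adjacent(p, words); /* 5) swap adjacent locations     */
        memtest_swap_adjacent(p, words); /* 6) swap them back              */
        crc3 = crc64(0, (unsigned char*)p, bytes);

        return !(crc1 == crc2 && crc2 == crc3);
    }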
antirez
24c94d3dc5 Jemalloc updated to version 3.2.0. 2012-11-28 18:39:35 +01:00
charsyam
08402aee73 Remove unnecessary condition in _dictExpandIfNeeded (dict.c) 2012-11-28 11:44:39 +01:00
antirez
d6e0bcf436 Merge remote-tracking branch 'origin/unstable' into unstable 2012-11-28 11:41:27 +01:00
Matt Arsenault
e1f5d3ca6f It's a watchdog, not a watchdong. 2012-11-28 11:35:19 +01:00
Salvatore Sanfilippo
70654c8e0c Merge pull request #787 from charsyam/remove-warning-bio
remove compile warning bioKillThreads
2012-11-23 03:44:18 -08:00
charsyam
edfe8811aa remove compile warning bioKillThreads 2012-11-23 05:52:39 +08:00
antirez
ed54464034 EVALSHA is now case insensitive.
EVALSHA used to crash if the SHA1 was not lowercase (Issue #783).
Fixed by using a case-insensitive dictionary type for the sha -> script
map used for the replication of scripts.
2012-11-22 15:50:00 +01:00
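
A hedged sketch of what a case-insensitive dictionary type needs: a hash
function and a key comparator that both ignore case, so an uppercase and a
lowercase SHA1 land in the same bucket and compare equal. The helper names
below are illustrative, not the actual dict.c additions:

    #include <ctype.h>
    #include <string.h>

    /* Case-folded djb2 hash: upper and lower case digests collide. */
    static unsigned int dictCaseHash(const void *key) {
        const unsigned char *buf = key;
        unsigned int hash = 5381;

        while (*buf) hash = ((hash << 5) + hash) + tolower(*buf++);
        return hash;
    }

    /* dict key comparators return non-zero when the keys are equal. */
    static int dictCaseKeyCompare(void *privdata, const void *key1,
                                  const void *key2)
    {
        (void)privdata;
        return strcasecmp(key1, key2) == 0;
    }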
antirez
35c7312f59 Fix integer overflow in zunionInterGenericCommand().
This fixes issue #761.
2012-11-22 15:28:28 +01:00
antirez
7d7b9bada1 Test: MULTI state is cleared after EXECABORT error. 2012-11-22 10:32:20 +01:00
antirez
1495568cd8 Test: make sure EXEC fails after previous transaction errors. 2012-11-22 10:32:16 +01:00
antirez
5ba2c439b0 Test: MULTI/EXEC tests moved into multi.tcl. 2012-11-22 10:32:12 +01:00
antirez
0d4ca9a874 Safer handling of MULTI/EXEC on errors.
After the transaction starts with MULTI, the previous behavior was to
return an error on problems such as the maxmemory limit being reached,
but still to execute the transaction on EXEC with the subset of commands
that were queued successfully.

While it is true that the client was able to check for errors by
distinguishing QUEUED from an error reply, MULTI/EXEC in most client
implementations is used with pipelining for speed, so all the commands and
EXEC are sent without checking the replies.

With this change:

1) EXEC fails if at least one command was not queued because of an
error. The EXECABORT error is used.
2) A generic error is always reported on EXEC.
3) The client's MULTI state is discarded after a failed EXEC, otherwise
pipelining multiple transactions would be basically impossible:
after a failed EXEC the next transaction would simply be queued as
the tail of the previous one.
2012-11-22 10:32:07 +01:00
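
A hedged sketch of the EXEC-time check. REDIS_MULTI and addReplyError()
match the era's server code; treat the REDIS_DIRTY_EXEC flag name and the
helper plumbing as assumptions:

    /* Called whenever queueing a command inside MULTI fails. */
    void flagTransaction(redisClient *c) {
        if (c->flags & REDIS_MULTI) c->flags |= REDIS_DIRTY_EXEC;
    }

    void execCommand(redisClient *c) {
        if (!(c->flags & REDIS_MULTI)) {
            addReplyError(c,"EXEC without MULTI");
            return;
        }
        /* Point 1: abort if a queued command failed. Point 2: report the
         * EXECABORT error class. Point 3: drop the MULTI state so the
         * next pipelined transaction starts clean. */
        if (c->flags & REDIS_DIRTY_EXEC) {
            addReplyError(c,
                "EXECABORT Transaction discarded because of previous errors.");
            discardTransaction(c);
            return;
        }
        /* ... execute the queued commands ... */
    }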
antirez
a68a4463f7 Make bio.c threads killable ASAP if needed.
We use this new bio.c feature in order to stop our I/O threads if there
is a memory test to do on crash. In this case we don't want anything
other than the main thread to run, otherwise the other threads may mess
with the heap and the memory test will report a false positive.
2012-11-22 10:12:11 +01:00
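
A hedged sketch of the two halves of the feature: workers opt in to
immediate cancellation at startup, and the crash handler cancels and joins
them. The pthread calls are standard; the bookkeeping names are
assumptions:

    #include <pthread.h>

    #define BIO_NUM_OPS 2
    static pthread_t bio_threads[BIO_NUM_OPS];

    /* Each worker calls this when it starts, so it can be cancelled at
     * any point, not just at the next cancellation point. */
    static void bioMakeKillable(void) {
        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
        pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, NULL);
    }

    /* Called from the crash handler: after this returns, only the main
     * thread is left touching the heap. */
    void bioKillThreads(void) {
        int j;

        for (j = 0; j < BIO_NUM_OPS; j++) {
            if (pthread_cancel(bio_threads[j]) == 0)
                pthread_join(bio_threads[j], NULL);
        }
    }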
antirez
022bd293a6 Fast memory test on Redis crash. 2012-11-21 13:24:44 +01:00
antirez
49edc56291 Use more fine grained HAVE macros instead of HAVE_PROCFS. 2012-11-21 13:17:38 +01:00
charsyam
fad954fd74 fix randstring bug 2012-11-20 02:50:31 +08:00