The commit deals with syncWithMaster and the ugly state machine in it.
It attempts to clean it up a bit, but more importantly it uses pipelining for
part of the work (rather than 7 round trips, we now have 4),
i.e. the connect and PING are separate, then AUTH + 3 REPLCONF in one pipeline,
and finally the PSYNC (which must be separate, since the master has to have an
empty output buffer).
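For illustration only, here is a minimal C sketch of the pipelining idea, not
the actual syncWithMaster code; the host, port, password and the crude reply
parsing are placeholder assumptions. The point is that AUTH and the three
REPLCONF commands go out in a single write, and only then are their four
replies read back, while PSYNC is still sent on its own.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int read_reply_line(int fd, char *buf, size_t len) {
    /* Read a single "+OK\r\n"-style reply line (good enough for a sketch). */
    size_t pos = 0;
    while (pos + 1 < len) {
        if (read(fd, buf + pos, 1) != 1) return -1;
        if (buf[pos++] == '\n') break;
    }
    buf[pos] = '\0';
    return 0;
}

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(6379) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) return 1;

    char reply[256];

    /* Step 1: PING on its own (one round trip). */
    write(fd, "PING\r\n", 6);
    read_reply_line(fd, reply, sizeof(reply));

    /* Step 2: AUTH + 3 REPLCONF pipelined into a single write (one round trip). */
    const char *batch =
        "AUTH secret\r\n"
        "REPLCONF listening-port 6380\r\n"
        "REPLCONF ip-address 127.0.0.1\r\n"
        "REPLCONF capa eof capa psync2\r\n";
    write(fd, batch, strlen(batch));
    for (int i = 0; i < 4; i++) read_reply_line(fd, reply, sizeof(reply));

    /* Step 3: PSYNC goes alone so the master's output buffer is empty
     * before it starts streaming the RDB payload. */
    write(fd, "PSYNC ? -1\r\n", 12);
    read_reply_line(fd, reply, sizeof(reply));
    printf("PSYNC reply: %s", reply);

    close(fd);
    return 0;
}
```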
This is just a refactoring commit.
This function was never actually used synchronously (doing both send and
receive); it was always used in only one of the two modes, which meant it
had to take extra arguments that are not relevant for the other.
Besides that, a tool that sends a synchronous command is not something
we want in our toolbox (synchronous IO in a single threaded app is evil).
sendSynchronousCommand has now been refactored into separate sending and
receiving APIs, and the sending part has two variants, one taking varargs,
and the other taking argc+argv (plus an optional length array, which means
you can use binary sds strings).
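A rough sketch of what such a split sending API can look like follows; the
names and the RESP formatting below are illustrative assumptions, not the
exact functions in replication.c. One variant builds the command from
varargs, the other from argc+argv with an optional length array so that
arguments can carry binary data.

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build a RESP array from argc+argv; argv_lens may be NULL, in which case
 * strlen() is used, otherwise arguments may contain binary data. */
static char *formatCommandArgv(int argc, char **argv, size_t *argv_lens) {
    size_t cap = 32, len = 0;
    char *buf = malloc(cap);
    len += snprintf(buf, cap, "*%d\r\n", argc);
    for (int i = 0; i < argc; i++) {
        size_t alen = argv_lens ? argv_lens[i] : strlen(argv[i]);
        size_t need = len + alen + 32;
        if (need > cap) { cap = need * 2; buf = realloc(buf, cap); }
        len += sprintf(buf + len, "$%zu\r\n", alen);
        memcpy(buf + len, argv[i], alen); len += alen;
        memcpy(buf + len, "\r\n", 2); len += 2;
    }
    buf[len] = '\0';
    return buf;
}

/* Varargs variant: collect the NULL-terminated argument list, then reuse
 * the argc+argv variant. */
static char *formatCommand(const char *arg, ...) {
    char *argv[16];
    int argc = 0;
    va_list ap;
    va_start(ap, arg);
    while (arg && argc < 16) {
        argv[argc++] = (char *)arg;
        arg = va_arg(ap, const char *);
    }
    va_end(ap);
    return formatCommandArgv(argc, argv, NULL);
}

int main(void) {
    char *cmd = formatCommand("REPLCONF", "listening-port", "6380", NULL);
    printf("%s", cmd);   /* *3\r\n$8\r\nREPLCONF\r\n... */
    free(cmd);
    return 0;
}
```

In the refactored shape, the send side only writes such a buffer to the
connection and returns; reading the reply is a separate call that the state
machine invokes when the response actually arrives.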
As discussed in https://github.com/antirez/redis/issues/7364, it is good
to have a HELLO command variant that does not switch the current protocol
version of the Redis server.
While a plain `HELLO` with no version argument would work, it introduces a
certain difficulty in parsing the command's options: the index of the
authentication and setname options would have to be offset by -1.
So 0 is treated as a special version meaning "non-switching", and we do not
need to change the code much.
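Below is a sketch of the parsing idea described above, not the actual
helloCommand code: the version slot stays where it is and 0 is accepted as a
value meaning "keep the current protocol", so the AUTH and SETNAME options
keep their usual positions and no index has to be offset by -1.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>

static int current_proto = 2;

static void hello(int argc, char **argv) {
    int ver = 0;                            /* 0 = non-switching */
    if (argc >= 2) ver = atoi(argv[1]);
    if (ver != 0 && ver != 2 && ver != 3) {
        printf("-NOPROTO unsupported protocol version\n");
        return;
    }
    /* Options always start at index 2, whether ver is 0, 2 or 3. */
    for (int i = 2; i < argc; i++) {
        if (!strcasecmp(argv[i], "AUTH") && i + 2 < argc) {
            printf("authenticating as %s\n", argv[i+1]);
            i += 2;
        } else if (!strcasecmp(argv[i], "SETNAME") && i + 1 < argc) {
            printf("client name set to %s\n", argv[i+1]);
            i += 1;
        }
    }
    if (ver != 0) current_proto = ver;      /* only switch for a real version */
    printf("replying with protocol %d\n", current_proto);
}

int main(void) {
    char *v1[] = {"HELLO"};                                  /* no switch */
    char *v2[] = {"HELLO", "0", "AUTH", "default", "pw"};    /* auth, no switch */
    char *v3[] = {"HELLO", "3"};                             /* switch to RESP3 */
    hello(1, v1);
    hello(5, v2);
    hello(2, v3);
    return 0;
}
```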
Apparently the "leaks" tool reports a different error string about a process
that is not found on each version of macOS.
This caused the test suite to fail on some OS versions, since some tests
terminate the process before looking for leaks.
Instead of looking at the error string, we now look at the (documented) exit code.
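The real check lives in the Tcl test suite; the C sketch below only
illustrates the technique of branching on the tool's exit status instead of
pattern-matching a version-dependent error string. Which exit code means
"process gone" versus "leaks found" comes from the leaks(1) man page; the
handling here is deliberately generic.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static int check_for_leaks(pid_t pid) {
    char cmd[64];
    snprintf(cmd, sizeof(cmd), "leaks %d 2>&1", (int)pid);

    FILE *fp = popen(cmd, "r");
    if (!fp) return -1;

    /* Drain the output; we no longer inspect the error string itself. */
    char line[256];
    while (fgets(line, sizeof(line), fp) != NULL) { /* discard */ }

    int status = pclose(fp);
    if (!WIFEXITED(status)) return -1;

    int code = WEXITSTATUS(status);
    if (code == 0) return 0;    /* tool ran and reported no leaks */
    /* Non-zero: leaks were found, or the target process no longer exists;
     * the exact meaning of each code is taken from the tool's docs. */
    return code;
}

int main(void) {
    printf("leaks exit code: %d\n", check_for_leaks(getpid()));
    return 0;
}
```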
When a database on a 64 bit build grows past 2^31 keys, the underlying hash
table expands to 2^32 buckets. After this point, the algorithms for selecting
random elements only return elements from half of the available buckets,
because they use random(), which has a range of 0 to 2^31 - 1.

This causes problems for eviction policies which use dictGetSomeKeys or
dictGetRandomKey. Over time they cause the hash table to become unbalanced:
while new keys are spread out evenly across all buckets, evictions come from
only half of the available buckets. Eventually this half of the table starts
to run out of keys, and it takes longer and longer to find candidates for
eviction, until no more evictions can happen.

This is addressed by using a 64 bit PRNG instead of libc random().
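To see the effect: with 2^32 buckets and a power-of-two mask, a 31 bit random
value can never set the top bit of the bucket index, so half the table is
unreachable. The sketch below uses splitmix64 as the 64 bit generator purely
for illustration; it is not necessarily the PRNG the fix adopts.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* splitmix64: a simple, well-known 64 bit PRNG, used here only to show
 * that a 64 bit generator reaches every bucket of a 2^32-slot table. */
static uint64_t splitmix64(uint64_t *state) {
    uint64_t z = (*state += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

int main(void) {
    uint64_t size = 1ULL << 32;          /* buckets after the table expands */
    uint64_t mask = size - 1;
    uint64_t state = 42;

    /* random() yields at most 2^31 - 1, so (random() & mask) < 2^31: the
     * upper half of the bucket indexes is never selected. */
    uint64_t hi = 0;
    for (int i = 0; i < 1000000; i++)
        if (((uint64_t)random() & mask) >= (1ULL << 31)) hi++;
    printf("random():    %llu of 1000000 picks in the upper half\n",
           (unsigned long long)hi);

    /* A 64 bit generator covers the full bucket range. */
    hi = 0;
    for (int i = 0; i < 1000000; i++)
        if ((splitmix64(&state) & mask) >= (1ULL << 31)) hi++;
    printf("64 bit PRNG: %llu of 1000000 picks in the upper half\n",
           (unsigned long long)hi);
    return 0;
}
```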
Co-authored-by: Greg Femec <gfemec@google.com>