The old test applied an invertible transformation to the bits, in
order to avoid touching the original memory content, but it was not as
effective as redis-server --test-memory: the former often reported OK
while the latter was able to spot the error.
So the test was replaced with one that should perform better. However
the new one must back up the memory it tests, so it works on small
pieces of memory at a time. This limits its effectiveness because of
the CPU caches. Some attempt is made to trash the CPU cache between
the fill and the check stages, but unfortunately not for the
addressing test.
We'll see if this test is able to find errors where the old one failed.
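In rough C, the new approach works along these lines (a minimal sketch
of the backup / fill / trash-cache / check cycle only; names and sizes
are illustrative and not the actual memtest code):

    #include <string.h>
    #include <stdlib.h>

    #define PIECE_SIZE 4096                 /* size of each tested piece */
    #define CACHE_TRASH_SIZE (8*1024*1024)  /* larger than typical caches */

    /* Touch a large scratch buffer so the piece just filled is evicted
     * from the CPU cache before we read it back for checking. */
    static void trash_cpu_cache(void) {
        static volatile unsigned char scratch[CACHE_TRASH_SIZE];
        for (size_t j = 0; j < CACHE_TRASH_SIZE; j++) scratch[j]++;
    }

    /* Test one piece of memory, preserving its content.
     * Returns 1 if the piece held the pattern correctly, 0 on error. */
    static int test_piece(unsigned char *p, size_t len, unsigned char pattern) {
        unsigned char *backup = malloc(len);
        if (!backup) return 1;           /* can't test: don't report errors */
        memcpy(backup, p, len);          /* save the original content */

        memset(p, pattern, len);         /* fill stage */
        trash_cpu_cache();               /* force the check to hit RAM */
        int ok = 1;
        for (size_t j = 0; j < len; j++) /* check stage */
            if (p[j] != pattern) { ok = 0; break; }

        memcpy(p, backup, len);          /* restore the original content */
        free(backup);
        return ok;
    }

The region under test would then be walked in PIECE_SIZE chunks,
calling test_piece() with a few different patterns on each chunk.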
We use the new variadic/pipelined MIGRATE for faster migration.
Testing is not easy because measuring the time it takes to migrate a
slot requires a very large data set, but even with all the overhead of
migrating multiple slots and setting them up properly, what used to
take 4 seconds (1 million keys, 200 slots migrated) now takes 1.6
seconds, which is a good improvement. However the improvement can be a
lot larger if:
1. We use large datasets where a single slot has many keys.
2. We move more than 10 keys per iteration (making this configurable
   is planned).
Close #2710
Close #2711
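For reference, this is roughly the shape of the calls before and after
the change (host, port, timeout and key names are just illustrative):

    Old approach, one command and one round trip per key:

        MIGRATE 127.0.0.1 6380 key:1 0 5000
        MIGRATE 127.0.0.1 6380 key:2 0 5000
        ...

    New variadic form, a batch of keys (currently 10) per command, with
    the single-key argument left empty:

        MIGRATE 127.0.0.1 6380 "" 0 5000 KEYS key:1 key:2 ... key:10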
We need to process replies even after errors in order to delete the
keys that were successfully transferred. Also argument rewriting was
fixed since it was broken in several ways: now a fresh argument vector
is created and set only if at least one key transfer is acknowledged.
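A sketch of the idea, using Redis-like internal names but not copied
from the actual patch (reply_ok[], keys[] and num_keys are assumptions
standing for the per-key reply results and the original key objects):

    int acked = 0;
    robj **newargv = zmalloc(sizeof(robj*) * (num_keys+1));
    newargv[0] = createStringObject("DEL", 3);

    for (int j = 0; j < num_keys; j++) {
        /* Keep processing replies even after an error, so that keys
         * actually transferred are still deleted locally. */
        if (!reply_ok[j]) continue;
        dbDelete(c->db, keys[j]);
        incrRefCount(keys[j]);
        newargv[++acked] = keys[j];
    }

    if (acked) {
        /* Fresh argument vector: what gets replicated / written to the
         * AOF is a DEL of only the transferred keys, not the original
         * command as typed by the client. */
        replaceClientCommandVector(c, acked+1, newargv);
    } else {
        /* Nothing was acknowledged: discard the new vector. */
        decrRefCount(newargv[0]);
        zfree(newargv);
    }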
We wait a fixed amount of time (currently 5 seconds), much greater
than the usual Cluster node-to-node communication latency, before
migrating. This way, when a failover occurs, before detecting the new
master as a target for migration we give its natural slaves (the
slaves of the failed over master) the time to announce they switched
to the new master, preventing a useless migration operation.
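In sketch form, assuming an orphaned_time field tracked for the target
master (names are illustrative, not necessarily the exact ones used in
cluster.c):

    #define SLAVE_MIGRATION_DELAY 5000   /* ms, >> node-to-node latency */

    /* orphaned_time is set the first time the master is seen without
     * working slaves, and cleared whenever it gets slaves again. */
    mstime_t now = mstime();
    if (target->orphaned_time == 0) target->orphaned_time = now;

    /* Migrate only if the master stayed orphaned for the whole delay,
     * giving the natural slaves of the failed over master the time to
     * announce they switched to the new master. */
    if (now - target->orphaned_time > SLAVE_MIGRATION_DELAY) {
        /* ...perform the actual migration to 'target' here... */
    }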
The old version was modeled with two failovers, however after the
first one it is possible that another slave migrates to the new
master, since for some time the new master is not backed by any slave.
Probably there should be some pause after a failover, before the
migration. In any case the test is simpler this way, and depends less
on timing.