We use the new variadic/pipelined MIGRATE for faster migration.
Testing is not easy, since observing the time it takes for a slot to
be migrated requires a very large data set. Still, even with all the
overhead of migrating multiple slots and setting them up properly, what
used to take 4 seconds (1 million keys, 200 slots migrated) now takes
1.6 seconds, which is a good improvement. However, the improvement can
be a lot larger if:
1. We use large data sets where a single slot has many keys.
2. We move more than 10 keys per iteration, making this configurable,
which is planned.
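
For reference, the variadic form batches many keys into a single
command and a single round trip: an empty key argument plus the KEYS
option followed by the key names. Below is a minimal sketch of the
batching idea in C; build_migrate() and the inline-command formatting
are illustrative assumptions (the real implementation speaks the Redis
protocol), not the actual migrate code:

    #include <stdio.h>

    #define KEYS_PER_ITERATION 10 /* fixed for now; configurable is planned */

    /* Format "MIGRATE host port "" db timeout KEYS k1 k2 ..." into buf,
     * so one command carries a whole batch of keys from the slot. */
    int build_migrate(char *buf, size_t len, const char *host, int port,
                      int db, long timeout, const char **keys, int numkeys) {
        int n = snprintf(buf, len, "MIGRATE %s %d \"\" %d %ld KEYS",
                         host, port, db, timeout);
        for (int j = 0; j < numkeys && n > 0 && (size_t)n < len; j++)
            n += snprintf(buf + n, len - n, " %s", keys[j]);
        return n;
    }

Paying one round trip per batch instead of one per key is where the
speedup reported above comes from.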
Close #2710, Close #2711
We need to process replies even after errors, in order to delete the
keys that were successfully transferred. Argument rewriting was also
fixed, since it was broken in several ways: now a fresh argument
vector is created and set only if at least one key was acknowledged.
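
A self-contained toy model of that reply loop follows; the key names,
the simulated replies, and the DEL-style rewrite are illustrative
assumptions rather than the actual migrate code:

    #include <stdio.h>

    /* Toy model: one reply per pipelined key. Error replies are
     * consumed rather than aborting, otherwise we would read the next
     * key's reply as if it belonged to the failed one. */
    int main(void) {
        const char *keys[]    = {"k1", "k2", "k3", "k4"};
        const char *replies[] = {"+OK", "-IOERR", "+OK", "-BUSYKEY"};
        const char *newargv[4];
        int migrated = 0;

        for (int j = 0; j < 4; j++) {
            if (replies[j][0] == '+') {
                /* Acknowledged: safe to delete locally, and the key
                 * belongs in the fresh argument vector. */
                newargv[migrated++] = keys[j];
            }
            /* On an error reply the key simply stays on this node. */
        }

        /* Rewrite only if at least one key was acknowledged; here the
         * fresh vector is modeled as a DEL of the migrated keys. */
        if (migrated > 0) {
            printf("rewritten argv: DEL");
            for (int j = 0; j < migrated; j++) printf(" %s", newargv[j]);
            printf("\n");
        }
        return 0;
    }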
We wait a fixed amount of time (currently 5 seconds), much greater
than the usual Cluster node-to-node communication latency, before
migrating. This way, when a failover occurs, before detecting the new
master as a target for migration we give its natural slaves (the
slaves of the failed-over master) time to announce they switched to
the new master, preventing a useless migration operation.
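
A compilable sketch of the delay check; the identifiers below
(orphaned_time_ms, SLAVE_MIGRATION_DELAY_MS, is_migration_target) are
illustrative, not the exact Redis Cluster internals:

    #include <stdbool.h>
    #include <time.h>

    #define SLAVE_MIGRATION_DELAY_MS 5000 /* >> node-to-node gossip latency */

    typedef struct {
        long long orphaned_time_ms; /* when we first saw it with no slaves */
    } master_state;

    static long long now_ms(void) {
        return (long long)time(NULL) * 1000;
    }

    /* A master becomes a migration target only after it has been
     * orphaned for longer than the fixed delay: before that, one of
     * its natural slaves may still be about to announce that it now
     * replicates the new master, which would make our migration
     * useless. */
    bool is_migration_target(const master_state *m) {
        return now_ms() - m->orphaned_time_ms > SLAVE_MIGRATION_DELAY_MS;
    }

A fixed delay is enough here because it only has to exceed the typical
gossip propagation time by a wide margin.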
The old version of the test was modeled with two failovers; however,
after the first one it is possible that another slave migrates to the
new master, since for some time the new master is not backed by any
slave. There should probably be some pause after a failover, before
the migration. In any case, the test is simpler this way and depends
less on timing.