This fixes the issue reported in #5570.
The fix takes the hard way: it propagates more information down to
the lower-level API, stating explicitly that this is a request to read
just the history, so that the code stays simpler and is less likely
to regress.
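To illustrate the pattern (everything below is a hypothetical sketch,
not the actual Redis internals): instead of letting the low-level
reader infer the caller's intent from its arguments, the caller passes
an explicit flag downward:

    #include <stdio.h>

    /* Hypothetical flags: the point is that the caller states its
     * intent explicitly instead of the low level guessing it. */
    #define READ_FLAG_NONE          0
    #define READ_FLAG_HISTORY_ONLY  (1<<0)

    /* Lower-level reader: behavior is selected by the flag. */
    static void readEntries(int flags) {
        if (flags & READ_FLAG_HISTORY_ONLY) {
            printf("Serve only already-delivered entries; never block.\n");
        } else {
            printf("Normal read: may block waiting for new entries.\n");
        }
    }

    int main(void) {
        /* The caller knows the client asked for the history (an
         * explicit ID instead of '>') and propagates that fact down. */
        readEntries(READ_FLAG_HISTORY_ONLY);
        readEntries(READ_FLAG_NONE);
        return 0;
    }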
- clusterManagerFixOpenSlot: ensure that the slot is unassigned
  before ADDSLOTS.
- clusterManagerFixSlotsCoverage: after cold migration, the slot
  configuration is now updated on all the nodes (see the sketch after
  this list).
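A minimal sketch of the second point, using hiredis (the node
addresses, the slot number, and the node ID below are hypothetical
placeholders; redis-cli discovers them via CLUSTER NODES): after a
cold migration the new owner is announced to every node, not just to
the owner itself, so no node is left with a stale slot configuration:

    #include <hiredis/hiredis.h>

    static const char *hosts[] = { "127.0.0.1", "127.0.0.1", "127.0.0.1" };
    static const int   ports[] = { 7000, 7001, 7002 };

    int main(void) {
        int slot = 42;                              /* Hypothetical slot. */
        const char *owner_id = "new-owner-node-id"; /* Hypothetical node ID. */

        /* Broadcast the new slot owner to ALL the nodes. */
        for (int i = 0; i < 3; i++) {
            redisContext *c = redisConnect(hosts[i], ports[i]);
            if (c == NULL || c->err) continue;
            redisReply *reply = redisCommand(c,
                "CLUSTER SETSLOT %d NODE %s", slot, owner_id);
            if (reply) freeReplyObject(reply);
            redisFree(c);
        }
        return 0;
    }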
This bug had a double effect:
1. Sometimes entries were not emitted, producing a broken protocol
   where the array length was greater than the number of emitted
   entries, leaving the client blocked waiting for more data.
2. Other times the right entry was claimed, but a wrong entry was
   returned to the client.
This fix should correct both instances.
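To see why effect 1 blocks the client, consider the RESP framing (a
hand-rolled sketch, not Redis code): if the server announces an array
of N entries but emits fewer, the client keeps waiting for the
missing ones:

    #include <stdio.h>

    int main(void) {
        int claimed = 2;   /* Length written in the array header. */
        int emitted = 1;   /* Entries actually serialized afterwards. */

        printf("*%d\r\n", claimed);    /* Header: "2 entries follow". */
        for (int i = 0; i < emitted; i++)
            printf("$5\r\nentry\r\n"); /* Only one bulk string follows. */

        /* A client parsing this stream blocks forever on the second
         * entry. A safe pattern is to count while emitting and write
         * the header (or patch a deferred one) only once the real
         * count is known. */
        return 0;
    }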
server.hz was uninitialized between initServerConfig and initServer.
This could lead to someone (e.g. queued modules) calling createObject
and accessing the uninitialized variable, which could potentially be
0 and lead to a crash.
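A minimal sketch of the failure mode (the check below is a
simplification of Redis' LRU clock logic, not the real code): any
path that divides by server.hz crashes with a division by zero while
the field is still 0, so the fix is to give it a sane value already
in initServerConfig:

    #include <stdio.h>

    #define CONFIG_DEFAULT_HZ 10 /* Redis' default; see server.h. */

    struct { int hz; } server;   /* Simplified stand-in for the real struct. */

    /* Simplified clock check: it divides by server.hz, so hz == 0
     * means an integer division by zero, i.e. a crash. */
    static int lruClockOk(void) {
        return (1000 / server.hz) <= 1000;
    }

    int main(void) {
        /* The fix: initialize hz in initServerConfig, before anything
         * (e.g. a queued module calling createObject) can read it. */
        server.hz = CONFIG_DEFAULT_HZ;
        printf("lru clock ok: %d\n", lruClockOk());
        return 0;
    }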