This new command triggers a config flush to save the in-memory config to
disk. This is useful when a configuration management system or a package
manager wipes out your Sentinel config while the process is still
running and has not yet been restarted. It can also be useful for
scripting a backup-and-migrate or a clone of a running Sentinel.
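As a rough sketch, the subcommand could hook into the existing SENTINEL
command handler and simply reuse the helper that already rewrites and
fsyncs the config file. The subcommand name ("flushconfig") and the
exact shape of the handler branch below are assumptions for
illustration, not a verbatim copy of the change:

    /* Sketch: a branch inside the SENTINEL command handler in sentinel.c
     * (assumed names following the file's conventions). */
    if (!strcasecmp(c->argv[1]->ptr,"flushconfig")) {
        /* SENTINEL FLUSHCONFIG takes no extra arguments. */
        if (c->argc != 2) {
            addReplyError(c,"wrong number of arguments");
            return;
        }
        sentinelFlushConfig();  /* Rewrite the on-disk config from in-memory state. */
        addReply(c,shared.ok);
        return;
    }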
Since a previous commit made Sentinels persist their unique ID, we no
longer need to detect duplicated Sentinels and re-add them. We remove
and re-add a Sentinel, using different events, only when the same
Sentinel switches address, and without generating a new +sentinel event.
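For illustration, the hello-message processing could handle the
address-switch case roughly as below. The event name
+sentinel-address-switch, the helper names, and the log-level constant
are assumptions used for the sketch:

    /* Sketch, inside the code that processes hello messages from other
     * Sentinels; 'runid', 'ip' and 'port' come from the hello payload. */
    si = getSentinelRedisInstanceByAddrAndRunID(master->sentinels, ip, port, runid);
    if (si == NULL) {
        /* Known Sentinel ID seen at a different address: drop the stale
         * entry and re-add it at the new address, emitting an
         * address-switch event instead of a fresh +sentinel event. */
        if (removeMatchingSentinelFromMaster(master, runid))
            sentinelEvent(LL_NOTICE, "+sentinel-address-switch", master,
                          "%@ ip %s port %d for %s", ip, port, runid);
        si = createSentinelRedisInstance(runid, SRI_SENTINEL, ip, port,
                                         master->quorum, master);
    }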
Previously Sentinels always changed their unique ID across restarts,
since it was taken from the server.runid field. This was not a good
idea: it forced Sentinel to detect duplicated Sentinels and to perform a
potentially dangerous clean-up and re-add of the Sentinel instance that
was rebooted.
Now the ID is generated at the first start and persisted in the
configuration file, so that a given Sentinel keeps its unique ID forever
(unless the configuration is manually deleted or the filesystem is
corrupted).
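A minimal sketch of the first-start handling follows. The helper name
sentinelEnsureMyId is hypothetical, and sentinel.myid, the ID-length
constant, and the "sentinel myid <id>" directive are assumptions
following sentinel.c conventions:

    /* Sketch: called once at startup, after the config file has been
     * loaded. */
    void sentinelEnsureMyId(void) {
        int j;

        /* A previously persisted ID was loaded from the config: keep it. */
        for (j = 0; j < CONFIG_RUN_ID_SIZE; j++)
            if (sentinel.myid[j] != 0) return;

        /* First start: generate a random hex ID and persist it right away
         * (written back as a "sentinel myid <id>" directive), so the same
         * ID is reused across all future restarts. */
        getRandomHexChars(sentinel.myid, CONFIG_RUN_ID_SIZE);
        sentinelFlushConfig();
    }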
Originally, only the +slave event that occurs when a slave is
reconfigured during sentinelResetMasterAndChangeAddress triggered a
flush of the config to disk. However, newly discovered slaves apparently
did not trigger this flush, even though they did trigger the +slave
event. So if you start up a Sentinel, add a master, then add a slave to
the master (as a way to reproduce it), you'll see the +slave event
issued, but the Sentinel config won't be updated with the known-slave
entry. This change makes Sentinel flush the config when a new slave is
detected in sentinelRefreshInstanceInfo.
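A sketch of where the flush could be added inside
sentinelRefreshInstanceInfo, when a previously unknown slave is parsed
out of the master's INFO output; the surrounding variable names and the
log-level constant are assumptions:

    /* Sketch: a "slaveN:ip=...,port=..." line in the master's INFO output
     * names a slave we don't know about yet. */
    if (sentinelRedisInstanceLookupSlave(ri, ip, atoi(port)) == NULL) {
        sentinelRedisInstance *slave;

        slave = createSentinelRedisInstance(NULL, SRI_SLAVE, ip, atoi(port),
                                            ri->quorum, ri);
        if (slave) {
            sentinelEvent(LL_NOTICE, "+slave", slave, "%@");
            /* New in this change: persist the known-slave entry right away. */
            sentinelFlushConfig();
        }
    }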
Rewriting the config inside the loop that adds slaves back after a
master reset (in order to handle switching to another master) is
useless: it just adds latency, since there is an fsync call in the inner
loop, without providing any additional guarantee. Quite the contrary: if
the server crashes after the first loop iteration, we end up with just a
single slave entry and lose all the other information.
It is wiser to rewrite the config at the end, once the full new state is
configured.
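A sketch of the fix in sentinelResetMasterAndChangeAddress: re-add the
saved slaves in the loop without touching the disk, then rewrite and
fsync the config once after the loop. Variable names and the log-level
constant are assumptions:

    /* Sketch: after resetting the master and switching its address,
     * re-create the slave instances from the previously saved address
     * list. No config rewrite (and thus no fsync) happens inside the
     * loop. */
    for (j = 0; j < numslaves; j++) {
        sentinelRedisInstance *slave;

        slave = createSentinelRedisInstance(NULL, SRI_SLAVE, slaves[j]->ip,
                                            slaves[j]->port, master->quorum,
                                            master);
        if (slave) sentinelEvent(LL_NOTICE, "+slave", slave, "%@");
    }
    /* Persist the complete new state in a single rewrite at the end. */
    sentinelFlushConfig();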