Lua scripting uses a fake client in order to run commands in the context
of a client, accumulate the reply, and convert it into a Lua object
to return to the caller. This client is reused again and again, and is
referenced by the globally accessible server.lua_client pointer.
However after every call to redis.call() or redis.pcall(), which is
handled by the luaRedisGenericCommand() function, the reply_bytes field
of the client was not set back to zero. This field is used to estimate
the amount of memory currently used by the reply. Because of the missing
reset, the value kept growing script after script, and on 32 bit systems
it eventually triggered the following assert:
redisAssert(c->reply_bytes < ULONG_MAX-(1024*64));
On 64 bit systems this does not happen because reaching values near 2^64
takes far too long for users to ever see the practical effect of the bug.
Now in the cleanup stage of luaRedisGenericCommand() we reset the
reply_bytes counter to zero, avoiding the issue. It is not practical to
add a test for this bug, but the fix was manually tested using a
debugger.
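A minimal sketch of the fix in the cleanup path of
luaRedisGenericCommand() (illustrative, not the verbatim patch; the
surrounding cleanup code that releases the fake client reply is elided):

    /* Cleanup stage: after releasing the fake client reply objects,
     * also reset the reply bytes estimate so it does not accumulate
     * across scripts and eventually trip the 32 bit assert. */
    c->reply_bytes = 0;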
This commit fixes issue #656.
Redis used to crash with a call like the following:
EVAL "redis.call()" 0
Now the explicit check for at least one argument prevents the problem.
This commit fixes issue #655.
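A sketch of the kind of check that prevents the crash, assuming it sits
at the top of luaRedisGenericCommand() and uses the scripting error
helper; the exact error text is illustrative:

    int argc = lua_gettop(lua);

    /* redis.call()/redis.pcall() with no arguments cannot be mapped to
     * a Redis command: fail gracefully instead of crashing. */
    if (argc == 0) {
        luaPushError(lua,
            "Please specify at least one argument for redis.call()");
        return 1;
    }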
The slave priority that Redis now publishes in INFO output is used by
Sentinel in order to select the slave with the minimum priority for
promotion, and in order to consider slaves with priority set to 0 as
not able to play the role of master (they will never be promoted by
Sentinel).
The "slave-priority" field is now one of the fields that Sentinel
publishes when describing an instance via the SENTINEL commands such as
"SENTINEL slaves mastername".
A Redis slave can now be configured with a priority, an integer value
that is shown in INFO output and can be read and set using the
redis.conf file or the CONFIG GET/SET commands.
This field is used by Sentinel during slave election. A slave with lower
priority is preferred. A slave with priority zero is never elected (and
is considered to be impossible to elect even if it is the only slave
available).
A following commit will add support on the Sentinel side as well.
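For example (the value shown is illustrative; 0 means the slave is never
eligible for promotion):

    # redis.conf
    slave-priority 100

    # At runtime:
    redis-cli CONFIG GET slave-priority
    redis-cli CONFIG SET slave-priority 0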
This fixes issue #539.
Basically if there is enough free memory the OS may buffer the RDB file
that the slave transfers to disk from the master. The file may then be
flushed to disk all at once by the operating system when Redis closes
it, causing the close system call to block for a long time.
This patch is a modified version of one provided by yoav-steinberg of
@garantiadata (the original version was posted in the issue #539
comments), and tries to flush the OS buffers incrementally (every 8 MB
of loaded data).
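A sketch of the approach; the variable names are hypothetical, the real
patch may use a platform-specific range sync where available, and the
8 MB threshold is the one mentioned above:

    #define REPL_FSYNC_EVERY_BYTES (8*1024*1024)   /* 8 MB */

    /* Called after each chunk of the RDB payload is written to disk. */
    written_since_last_sync += nwritten;
    if (written_since_last_sync >= REPL_FSYNC_EVERY_BYTES) {
        fdatasync(rdb_fd);             /* push OS buffers to disk now...  */
        written_since_last_sync = 0;   /* ...so close() won't block later */
    }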
The previous implementation of zmalloc.c was not able to handle out of
memory in an application-specific way. It just logged an error on
standard error, and aborted.
The result was that in the case of an actual out of memory in Redis
where malloc returned NULL (on Linux this actually happens under
specific overcommit policy settings and/or with no or little swap
configured) the error was not properly logged in the Redis log.
This commit fixes this problem, fixing issue #509.
Now an out of memory condition is properly reported in the Redis log and
a stack trace is generated.
The approach used is to provide a configurable out of memory handler
to zmalloc (otherwise the default one, which logs the event on standard
error, is used).
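A sketch of how the hook is meant to be used; the zmalloc setter follows
the handler-registration approach described above, while the server-side
handler name and log text are illustrative:

    /* zmalloc.h: register an application-specific OOM handler. */
    void zmalloc_set_oom_handler(void (*oom_handler)(size_t));

    /* Server side (name and messages illustrative): */
    static void redisOutOfMemoryHandler(size_t allocation_size) {
        redisLog(REDIS_WARNING,
            "Out Of Memory allocating %zu bytes!", allocation_size);
        redisPanic("Redis aborting for OUT OF MEMORY");
    }

    /* During server initialization: */
    zmalloc_set_oom_handler(redisOutOfMemoryHandler);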
From the point of view of Redis an instance replying -BUSY is down,
since it is effectively not able to reply to user requests. However
a looping script is a recoverable condition in Redis if the script has
not yet performed any write to the dataset. In that case triggering a
failover is not optimal, so Sentinel now tries to restore the normal
server condition by killing the script with a SCRIPT KILL command.
If the script already performed some write before entering an infinite
(or long enough to time out) loop, SCRIPT KILL will not work and the
failover will be triggered anyway.
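For reference, this is roughly what Sentinel observes in the recoverable
case (error messages paraphrased):

    redis> PING
    (error) BUSY Redis is busy running a script. You can only call
            SCRIPT KILL or SHUTDOWN NOSAVE.
    redis> SCRIPT KILL
    OK

If the script already performed writes, SCRIPT KILL instead returns an
error and the failover proceeds as usual.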
This new hiredis feature allows us to reuse a previous context reader
buffer even if it is already very big, in order to maximize performance
with big payloads (usually hiredis re-creates buffers when they are too
big and unused in order to save memory).
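A minimal sketch of how the feature is used from client code, assuming
the bundled hiredis exposes the reader's maxbuf field and that setting
it to zero disables the buffer shrinking described above (treat both as
assumptions):

    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) { /* handle connection error */ }

    /* Keep the reader buffer however large it grows, trading memory
     * for speed with big payloads. */
    c->reader->maxbuf = 0;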
This command can be used in order to force a Sentinel instance to start
a failover for the specified master, as leader, forcing the failover
even if the master is up.
The commit also adds some minor refactoring and other improvements to
already implemented functions to make them able to work when the
master is not in SDOWN condition. For instance slave selection
assumed that we ask INFO every second to every slave, but this is true
only when the master is in SDOWN condition, so slave selection did not
work when the master was not in SDOWN condition.
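Usage sketch (port and master name are illustrative; the command forces
a failover even when the master is up):

    redis-cli -p 26379 SENTINEL failover mymaster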
This commit adds support to optionally execute a script when one of the
following events happens:
* The failover starts (with a slave already promoted).
* The failover ends.
* The failover is aborted.
The script is called with enough parameters (documented in the example
sentinel.conf file) to provide information about the old and new ip:port
pair of the master, the role of the sentinel (leader or observer) and
the name of the master.
The goal of the script is to inform clients of the configuration change
in a way that is specific to the environment Sentinel is running in, and
that can't be implemented in a general way inside Sentinel itself.
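An illustrative configuration and invocation sketch; the directive name,
script path, and exact argument order are assumptions here, so refer to
the example sentinel.conf shipped with this commit for the authoritative
form:

    # sentinel.conf
    sentinel client-reconfig-script mymaster /var/redis/notify-clients.sh

    # The script is invoked along the lines of:
    #   <script> <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>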
When we are in wait-start state, if another leader (or any other
external entity) turns a slave into a master, abort the failover and
detect the change as an observer would.
Note that the wait-start state is mainly there for this reason, but the
abort was not yet implemented.
This adds a new sentinel event, -failover-abort-race.
Note by @antirez: this code was never compiled because utils.c lacked the
float.h include, so we never noticed this variable was misspelled in the
past.
This should provide a noticeable speed boost when saving certain types
of databases with many sorted sets inside.
When we are a Leader Sentinel in wait-start state, starting with this
commit the failover is aborted if the master returns online.
This improves the way we handle a notable case of net split, that is the
split between Sentinels and Redis servers, which will be a very common
case of split because Sentinels will often be installed in the client's
network while servers can be in a different arm of the network.
When Sentinels and Redis servers are isolated the master is in ODOWN
condition since the Sentinels can agree about this state, however the
failover does not start since there are no good slaves to promote (in
this specific case all the slaves are unreachable).
However when the split is resolved, Sentinels may sense the slaves back
a moment before they sense that the master is back, so the failover may
start without a good reason (since the master is actually working too).
Now this condition is reversible, so the failover will be aborted
immediately if the master is detected to be working again, that is,
neither in SDOWN nor in ODOWN condition.
We no longer use a vanilla fork+execve but maintain a queue of script
execution jobs, with retry on error, timeouts, and so forth.
Currently this is used only for notifications, but the ability to also
call client reconfiguration scripts will be added soon.
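A sketch of the per-job state such a queue needs in order to implement
retries and timeouts (struct and field names are illustrative, not
necessarily the ones used in sentinel.c):

    #include <sys/types.h>      /* pid_t */

    typedef struct sentinelScriptJob {
        int flags;              /* scheduled / running flags */
        int retry_num;          /* how many times the job was attempted */
        char **argv;            /* script path plus its arguments */
        long long start_time;   /* ms time the current attempt started,
                                   used to enforce the timeout; 0 if idle */
        pid_t pid;              /* pid of the running child, or 0 */
    } sentinelScriptJob;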