Fix issues with writeConn() which resulted in corruption of the stream by leaving an extra byte in the buffer. The trigger for this is partial writes or write errors, which were not experienced on Linux but were reported on macOS.
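For context, a minimal hypothetical sketch (not the actual writeConn() code) of how a write loop has to keep its buffer offset consistent across partial writes; losing track of that offset is exactly the kind of bug that leaves a stray byte behind:

    /* Hypothetical sketch, not the real writeConn(): write `len` bytes from
     * `buf`, resuming correctly after partial writes. */
    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>

    static ssize_t write_all(int fd, const char *buf, size_t len) {
        size_t sent = 0;                                 /* bytes already written */
        while (sent < len) {
            ssize_t n = write(fd, buf + sent, len - sent);
            if (n > 0) { sent += (size_t)n; continue; }  /* advance past what the kernel took */
            if (n == -1 && errno == EINTR) continue;     /* interrupted, just retry */
            break;  /* EAGAIN or a hard error: stop, but report how far we got so the
                     * caller can keep its own buffer offset consistent */
        }
        return (ssize_t)sent;
    }

    int main(void) {
        const char msg[] = "hello\n";
        return write_all(STDOUT_FILENO, msg, sizeof(msg) - 1) == (ssize_t)(sizeof(msg) - 1) ? 0 : 1;
    }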
1. The default value of always-show-logo was not consistent with the default in the code
2. The comment about cluster-replica-no-failover was wrong, since failover can only be triggered manually on replicas
3. Improve the description of always-show-logo
During long running scripts or loading RDB/AOF, we may need to do some
defragging. Since processEventsWhileBlocked is called periodically at
unknown intervals, and many cron jobs either depend on run_with_period
(including active defrag) or rely on being called at server.hz rate
(i.e. active defrag knows how much time to run by looking at server.hz),
the whileBlockedCron may have to run a loop triggering the cron jobs in it
(currently only active defrag) several times.
Other changes:
- Adding a test for defrag during AOF loading.
- Changing the key-load-delay config to take negative values for fractions
of a microsecond sleep.
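To illustrate the loop described above, a simplified sketch (not the actual whileBlockedCron()/serverCron code; names are illustrative) of running as many cron ticks as would have run at server.hz since the last invocation:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        int hz;                 /* configured cron frequency, ticks per second */
        uint64_t last_cron_ms;  /* last time the blocked cron ran */
    } BlockedCronState;

    /* Hypothetical stand-in for one cycle of active defrag work. */
    static void blockedCronTick(void) { printf("defrag tick\n"); }

    /* Run as many ticks as would have run at server.hz since the last call,
     * so that long, irregular gaps still produce the same total cron work. */
    static void whileBlockedCronSketch(BlockedCronState *s, uint64_t now_ms) {
        uint64_t period_ms = 1000 / (uint64_t)s->hz;
        while (now_ms - s->last_cron_ms >= period_ms) {
            blockedCronTick();
            s->last_cron_ms += period_ms;
        }
    }

    int main(void) {
        BlockedCronState s = { .hz = 10, .last_cron_ms = 0 };
        whileBlockedCronSketch(&s, 350);   /* a 350ms gap at hz=10 yields 3 ticks */
        return 0;
    }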
DEBUG ZIPLIST <key> currently returns the following error string if the
key is not a ziplist: "ERR Not an sds encoded string.". This looks like
an accidental copy/paste error from the error returned in the else if
branch above, where this string is returned if the key is not an sds
string. The command was added in
f898429fe149f476d61270ed4299dd1f8f75ae50, and nothing in that commit
indicates that this is anything other than an accidental typo.
The error string is now corrected to "Not a ziplist encoded
object", which accurately describes the error.
DEBUG ZIPLIST <key> currently returns the following error string if the
key is not a ziplist: "ERR Not an sds encoded string.". This looks like
an accidental copy/paste error from the error returned in the else if
branch above, where this string is returned if the key is not an sds
string. The command was added in
ac61f9062583d67dd43f7d698824464d1e30d84b, and nothing in that commit
indicates that this is anything other than an accidental typo.
The error string is now corrected to "Not a ziplist encoded
object", which accurately describes the error.
In runtest-cluster we first need to create a cluster using spawn_instance.
A port that is not in use is chosen, however sometimes the server still fails
to start on that port, possibly due to a race with another process taking it
first (see for example redis/redis/runs/896537490), or some other machine issue.
In order to reduce the probability of failure when starting Redis in
runtest-cluster, we attempt to use another port when we find that the server
did not start up.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: yanhui13 <yanhui13@meituan.com>
(cherry picked from commit 1deaad884c38e92e5b691f36b253ef4ee2201ca4)
In runtest-cluster we first need to create a cluster using spawn_instance.
A port that is not in use is chosen, however sometimes the server still fails
to start on that port, possibly due to a race with another process taking it
first (see for example redis/redis/runs/896537490), or some other machine issue.
In order to reduce the probability of failure when starting Redis in
runtest-cluster, we attempt to use another port when we find that the server
did not start up.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: yanhui13 <yanhui13@meituan.com>
(cherry picked from commit e2d64485b8262971776fb1be803c7296c98d1572)
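As an illustration of the race being worked around, a small hedged C sketch (the actual fix lives in the Tcl test framework, not in code like this) of retrying on the next port when the first choice turns out to be taken:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Try ports starting at `base`; return the bound fd or -1, storing the port. */
    static int bind_with_retry(int base, int attempts, int *bound_port) {
        for (int i = 0; i < attempts; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd == -1) return -1;
            struct sockaddr_in sa;
            memset(&sa, 0, sizeof(sa));
            sa.sin_family = AF_INET;
            sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
            sa.sin_port = htons((uint16_t)(base + i));
            if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                *bound_port = base + i;
                return fd;               /* success: this port really was free */
            }
            close(fd);                   /* port taken (the race); try the next one */
        }
        return -1;
    }

    int main(void) {
        int port;
        int fd = bind_with_retry(21000, 10, &port);
        if (fd != -1) { printf("bound to %d\n", port); close(fd); }
        return 0;
    }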
This fixes the issue described in CVE-2014-5461. At this time we cannot
confirm that the original issue has a real impact on Redis, but it is
included as an extra safety measure.
(cherry picked from commit 374270d3a04e8b224a12655518c815497aeb497d)
This fixes the issue described in CVE-2014-5461. At this time we cannot
confirm that the original issue has a real impact on Redis, but it is
included as an extra safety measure.
(cherry picked from commit d75ad774a92bd7de0b9448be3d622d7a13b7af27)
Don't assume `ps` handles `-h` to display output without headers; instead,
manually trim the header line from the output.
(cherry picked from commit ae8420298cacc2737e8e3ffa3c5acc038cd27849)
Don't assume `ps` handles `-h` to display output without headers; instead,
manually trim the header line from the output.
(cherry picked from commit b61b663895f16d9f559a14c408c225062254a57b)
In order to keep the redismodule.h self-contained but still usable with
gcc v10 and later, annotate each API function tentative definition with
the __common__ attribute. This avoids the 'multiple definition' errors
modules will otherwise see for all API functions at link time.
Further details at gcc.gnu.org/gcc-10/porting_to.html
Turn the existing __attribute__ ((unused)), ((__common__)) and ((print))
annotations into conditional macros for any compilers not accepting this
syntax. These macros only expand to API annotations under gcc.
Provide a pre- and post- macro for every API function, so that they can
be defined differently by the file that includes redismodule.h.
Removing REDISMODULE_API_FUNC in the interest of keeping the function
declarations readable.
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 9d4736b04441b609c17c414e0780882cf92c5e33)
In order to keep the redismodule.h self-contained but still usable with
gcc v10 and later, annotate each API function tentative definition with
the __common__ attribute. This avoids the 'multiple definition' errors
modules will otherwise see for all API functions at link time.
Further details at gcc.gnu.org/gcc-10/porting_to.html
Turn the existing __attribute__ ((unused)), ((__common__)) and ((print))
annotations into conditional macros for any compilers not accepting this
syntax. These macros only expand to API annotations under gcc.
Provide a pre- and post- macro for every API function, so that they can
be defined differently by the file that includes redismodule.h.
Removing REDISMODULE_API_FUNC in the interest of keeping the function
declarations readable.
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 11cd983d58199b6ac7fa54049734457bd767a0b5)
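A hedged sketch of the header pattern being described (macro names here are illustrative, not the literal redismodule.h content). Every translation unit that includes the header gets a tentative definition of each API function pointer; under gcc the __common__ attribute restores common linkage, so gcc 10 (which defaults to -fno-common) no longer reports multiple definitions at link time:

    #if defined(__GNUC__)
    #define MYMODULE_ATTR_COMMON __attribute__((__common__))
    #else
    #define MYMODULE_ATTR_COMMON          /* non-gcc compilers: no annotation */
    #endif

    /* Pre/post macros that an including file may redefine before inclusion. */
    #ifndef MYMODULE_API
    #define MYMODULE_API
    #endif
    #ifndef MYMODULE_ATTR
    #define MYMODULE_ATTR MYMODULE_ATTR_COMMON
    #endif

    /* One API function pointer, declared the way the header would declare all
     * of them, instead of wrapping the name in an API_FUNC-style macro. */
    MYMODULE_API int (*MyModule_ReplyWithLongLong)(void *ctx, long long ll) MYMODULE_ATTR;

    int main(void) { return MyModule_ReplyWithLongLong == 0 ? 0 : 1; }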
Add Linux kernel OOM killer control option.
This adds the ability to control the Linux OOM killer oom_score_adj
parameter for all Redis processes, depending on the process role (i.e.
master, replica, background child).
An oom-score-adj global boolean flag controls this feature. In addition,
specific values can be configured using oom-score-adj-values if
additional tuning is required.
(cherry picked from commit 70c823a64e800f22ac68f0172acdd1da82d7be32)
Add Linux kernel OOM killer control option.
This adds the ability to control the Linux OOM killer oom_score_adj
parameter for all Redis processes, depending on the process role (i.e.
master, replica, background child).
An oom-score-adj global boolean flag controls this feature. In addition,
specific values can be configured using oom-score-adj-values if
additional tuning is required.
(cherry picked from commit 2530dc0ebd8be8d792f4673073401377cd5bdc42)
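For background, the Linux interface involved is the per-process oom_score_adj procfs file; a minimal sketch (not the actual Redis implementation) of adjusting it for the current process:

    /* Minimal sketch: adjust this process's Linux OOM-killer score by writing
     * to /proc/self/oom_score_adj. Valid values range from -1000 (never kill)
     * to 1000 (kill first). */
    #include <stdio.h>

    static int set_oom_score_adj(int value) {
        FILE *fp = fopen("/proc/self/oom_score_adj", "w");
        if (!fp) return -1;                 /* not Linux, or no procfs access */
        int ok = fprintf(fp, "%d\n", value) > 0;
        fclose(fp);
        return ok ? 0 : -1;
    }

    int main(void) {
        /* e.g. make a background child a more attractive OOM-kill target */
        return set_oom_score_adj(300) == 0 ? 0 : 1;
    }

The oom-score-adj flag and the oom-score-adj-values directive described above decide whether, and with what per-role values, Redis applies this kind of adjustment.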
Added RedisModule_HoldString, which either returns a
shallow copy of the given String (by increasing
the String's ref count) or a new deep copy of the String
in case it's not possible to get a shallow copy.
Co-authored-by: Itamar Haber <itamar@redislabs.com>
(cherry picked from commit 4f99b22118ca91e3a7fe9c1c68c19dd717dfdbb5)
Added RedisModule_HoldString, which either returns a
shallow copy of the given String (by increasing
the String's ref count) or a new deep copy of the String
in case it's not possible to get a shallow copy.
Co-authored-by: Itamar Haber <itamar@redislabs.com>
(cherry picked from commit 3f494cc49d25929f27fa75a78d9921a9dee771f2)
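A usage sketch (the module name, command name and ownership comments are illustrative assumptions; see the module API docs for the exact rules) that holds a reference to an argument string beyond the command callback:

    #include "redismodule.h"

    /* Hypothetical module-level state kept across command invocations. */
    static RedisModuleString *saved = NULL;

    int SaveCmd_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc != 2) return RedisModule_WrongArity(ctx);
        if (saved) RedisModule_FreeString(NULL, saved);
        /* HoldString gives back a reference that stays valid after this callback
         * returns: a refcount bump when possible, otherwise a deep copy. Passing
         * a NULL context here assumes we free the string ourselves later. */
        saved = RedisModule_HoldString(NULL, argv[1]);
        return RedisModule_ReplyWithSimpleString(ctx, "OK");
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx, "holdexample", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        if (RedisModule_CreateCommand(ctx, "holdexample.save", SaveCmd_RedisCommand,
                                      "", 0, 0, 0) == REDISMODULE_ERR)
            return REDISMODULE_ERR;
        return REDISMODULE_OK;
    }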
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Itamar Haber <itamar@redislabs.com>
(cherry picked from commit 73198c50194cbf0254afd4cc5245f9274a538d13)
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Itamar Haber <itamar@redislabs.com>
(cherry picked from commit 8d826393191399e132bd9e56fb51ed83223cc5ca)
fe8d6fe74 (released in 6.0.6) has a side effect: when processCommand
rejects a command with a pre-made shared object error string, it trims the
newlines from the end of the string. If that string is later used with
addReply, the newline will be missing, breaking the protocol and
leaving the client hung.
It seems that the only scenario in which this happens is when replying with
-LOADING to some command, and later using that reply from the CONFIG
SET command (still during loading). This results in a hung client.
Refactor the code in order to avoid trimming these newlines from
shared string objects, and do the newline trimming only in the other cases
where it's needed.
Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
(cherry picked from commit 2640897e3a01fbacb620c12e021c934e48eeccb9)
65a3307bc (released in 6.0.6) has a side effect: when processCommand
rejects a command with a pre-made shared object error string, it trims the
newlines from the end of the string. If that string is later used with
addReply, the newline will be missing, breaking the protocol and
leaving the client hung.
It seems that the only scenario in which this happens is when replying with
-LOADING to some command, and later using that reply from the CONFIG
SET command (still during loading). This results in a hung client.
Refactor the code in order to avoid trimming these newlines from
shared string objects, and do the newline trimming only in the other cases
where it's needed.
Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
(cherry picked from commit 9fcd9e191e6f54276688fb7c74e1d5c3c4be9a75)
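To see why the trailing newline matters, a small standalone sketch (not the actual addReply code; the error text is just an example). RESP error replies are newline-framed, so a reply object that loses its trailing "\r\n" leaves the client waiting for a line that never completes:

    #include <stdio.h>
    #include <string.h>

    /* Trim trailing CR/LF in place; safe for private buffers only. */
    static void trim_newlines(char *s) {
        size_t len = strlen(s);
        while (len && (s[len-1] == '\r' || s[len-1] == '\n')) s[--len] = '\0';
    }

    int main(void) {
        /* A shared, pre-made error reply: must keep its terminator. */
        char shared_loading[] = "-LOADING Redis is loading the dataset in memory\r\n";
        char private_copy[sizeof(shared_loading)];
        memcpy(private_copy, shared_loading, sizeof(shared_loading));

        /* Trimming a private copy (e.g. to embed the text elsewhere) is fine;
         * trimming the shared object itself is what broke the protocol. */
        trim_newlines(private_copy);
        printf("wire form : %s", shared_loading);   /* still ends with CRLF */
        printf("plain text: %s\n", private_copy);
        return 0;
    }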
If the server gets a MULTI command followed by only read
commands, and it reaches the OOM state right before it gets the EXEC,
the client will get an OOM response.
So, from now on, it will get an OOM response only if there was
at least one queued command tagged with the `use-memory` flag.
(cherry picked from commit 0292720ccb0a189d3ed49d7bf912602360a4ecdd)
If the server gets a MULTI command followed by only read
commands, and it reaches the OOM state right before it gets the EXEC,
the client will get an OOM response.
So, from now on, it will get an OOM response only if there was
at least one queued command tagged with the `use-memory` flag.
(cherry picked from commit b7289e912cbe1a011a5569cd67929e83731b9660)
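A simplified sketch of the decision described above (the flag and struct names are hypothetical, not the actual server code): under memory pressure the queued transaction is rejected only if at least one queued command may grow memory.

    #include <stdbool.h>
    #include <stddef.h>

    #define CMD_FLAG_DENYOOM (1 << 0)   /* hypothetical "use-memory" flag bit */

    typedef struct { const char *name; int flags; } QueuedCommand;

    static bool exec_should_reject_on_oom(const QueuedCommand *q, size_t n, bool over_limit) {
        if (!over_limit) return false;
        for (size_t i = 0; i < n; i++)
            if (q[i].flags & CMD_FLAG_DENYOOM) return true;  /* e.g. SET, LPUSH */
        return false;                                        /* read-only MULTI: allow */
    }

    int main(void) {
        QueuedCommand readonly_tx[] = { { "GET", 0 }, { "LRANGE", 0 } };
        QueuedCommand writing_tx[]  = { { "GET", 0 }, { "SET", CMD_FLAG_DENYOOM } };
        return exec_should_reject_on_oom(readonly_tx, 2, true) == false &&
               exec_should_reject_on_oom(writing_tx, 2, true) == true ? 0 : 1;
    }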
Otherwise, it is treated as a single allocation and freed synchronously. The following logic is used for estimating the effort in constant-ish time complexity (see the sketch below):
1. Check the number of nodes.
2. Add an allocation for each consumer group registered inside the stream.
3. Check the number of PELs in the first CG, and then add this count times the number of CGs.
4. Check the number of consumers in the first CG, and then add this count times the number of CGs.
(cherry picked from commit cb504d7fddd09149655e91496588c610e89ca131)
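A hedged sketch of that estimate (field names are illustrative placeholders, not the actual stream structures), following the four steps listed above:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct {
        size_t rax_nodes;            /* number of nodes in the stream's radix tree */
        size_t consumer_groups;      /* number of CGs registered on the stream */
        size_t first_cg_pel_entries; /* PEL size of the first CG, used as a sample */
        size_t first_cg_consumers;   /* consumer count of the first CG, as a sample */
    } StreamStats;

    /* Estimate the number of allocations that freeing the stream would release,
     * extrapolating from the first consumer group instead of walking all of them. */
    static size_t stream_free_effort(const StreamStats *s) {
        size_t effort = s->rax_nodes;                            /* step 1 */
        effort += s->consumer_groups;                            /* step 2 */
        effort += s->first_cg_pel_entries * s->consumer_groups;  /* step 3 */
        effort += s->first_cg_consumers * s->consumer_groups;    /* step 4 */
        return effort;
    }

    int main(void) {
        StreamStats s = { 1000, 3, 500, 10 };
        printf("estimated effort: %zu\n", stream_free_effort(&s));
        return 0;
    }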