Have a clear separation between in and out flags
Other changes:
Delete dead code in RM_ZsetIncrby: if zsetAdd returns an error (which happens only if
the result of the operation is NaN or the score itself is NaN) we return immediately, so
there is no way that zsetAdd succeeded and still set the NAN out-flag.
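A minimal C sketch of the in/out flag separation pattern (the constant and function names below are illustrative, not necessarily the exact ones used in t_zset.c):

```c
#include <math.h>

/* Input flags set by the caller and output flags reported by the callee
 * live in separate namespaces and separate parameters. */
#define ZADD_IN_INCR   (1<<0)   /* increment the score instead of setting it */
#define ZADD_IN_NX     (1<<1)   /* only add new elements */

#define ZADD_OUT_NOP   (1<<0)   /* operation was a no-op */
#define ZADD_OUT_NAN   (1<<1)   /* resulting score would be NaN */
#define ZADD_OUT_ADDED (1<<2)   /* a new element was added */

/* in_flags is read-only input; results are reported only via *out_flags,
 * so the two sets of bits can no longer be confused with each other. */
int zaddSketch(double old_score, double delta, int in_flags, int *out_flags,
               double *newscore) {
    *out_flags = 0;
    double score = (in_flags & ZADD_IN_INCR) ? old_score + delta : delta;
    if (isnan(score)) {
        *out_flags |= ZADD_OUT_NAN;
        return 0;                /* error: caller returns immediately */
    }
    *out_flags |= ZADD_OUT_ADDED;
    *newscore = score;
    return 1;
}
```

Note how, in this shape, a successful return can never carry the NAN out-flag, which is exactly why the corresponding check in RM_ZsetIncrby is dead code.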
`master_sync_perc` and `loading_loaded_perc` don't have that sign,
and I think the INFO field should be a raw floating point number (the name already suggests its units).
We already have `used_memory_peak_perc` and `used_memory_dataset_perc`, which do add the `%` sign, but:
1) I think that was a mistake; it may be too late to fix those, but it is not too late to fix `current_fork_perc`.
2) It is more important to be consistent with the two other "progress" "perc" metrics than with the "utilization" metrics.
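For illustration, a tiny standalone sketch of the two formats being compared (plain printf stands in for the sds formatting used by INFO; the variable is a placeholder):

```c
#include <stdio.h>

int main(void) {
    double fork_perc = 42.5; /* placeholder value */

    /* Raw floating point, consistent with master_sync_perc / loading_loaded_perc: */
    printf("current_fork_perc:%.2f\r\n", fork_perc);

    /* With a trailing '%' sign, as used_memory_peak_perc does: */
    printf("current_fork_perc:%.2f%%\r\n", fork_perc);
    return 0;
}
```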
1. Add `redis-server test all` support to run all tests.
2. Add the redis test to the daily CI.
3. Add an `--accurate` option to run the slow tests for more iterations, so that
by default we run fewer cycles (shorter time and less output); see the sketch after this list.
4. Move the dict benchmark to REDIS_TEST.
5. Fix some leaks in tests.
6. Make quicklist tests run on a specific set of fill options rather than huge ranges.
7. Move some prints in the quicklist test outside their loops to reduce output.
8. Remove sds.h from dict.c, since dict.c is now used in both redis-server and
redis-cli (which uses the hiredis sds).
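A minimal sketch of the `--accurate` idea; the flag bit, default counts, and helper below are illustrative assumptions, not the actual test-harness code:

```c
#include <stdio.h>

#define TEST_FLAG_ACCURATE (1<<0)

static long test_iterations(int flags) {
    /* Short runs by default; much longer runs only when --accurate is given. */
    return (flags & TEST_FLAG_ACCURATE) ? 1000000 : 1000;
}

int quicklistTestSketch(int flags) {
    long iters = test_iterations(flags);
    for (long i = 0; i < iters; i++) {
        /* ... exercise the data structure ... */
    }
    /* Print a summary once, outside the loop, to keep output short. */
    printf("quicklist sketch: %ld iterations done\n", iters);
    return 0;
}
```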
The obtained process_rss was incorrect (the OS reports pages, not
bytes), resulting in many other fields getting corrupted.
This has been tested on FreeBSD but not on other platforms.
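A hedged sketch of the conversion involved (the sysctl/kvm details of the real zmalloc_get_rss are omitted; only the page-to-byte scaling is shown):

```c
#include <stddef.h>
#include <unistd.h>

/* If the OS reports the resident set size in pages, it must be scaled by
 * the page size before being reported as process_rss (which is in bytes). */
size_t rss_pages_to_bytes(size_t rss_pages) {
    long page_size = sysconf(_SC_PAGESIZE);  /* typically 4096 */
    if (page_size <= 0) page_size = 4096;    /* defensive fallback */
    return rss_pages * (size_t)page_size;
}
```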
When a quicklist has quicklist->compress * 2 nodes and __quicklistCompress
is called, all nodes are decompressed and the middle two nodes are then
recompressed again. This violates the invariant that all of those
quicklist->compress * 2 nodes stay uncompressed. It's harmless
because whenever we visit a node, we always try to decompress it first.
This only happened when a quicklist had quicklist->compress * 2 + 1
nodes and a node was then deleted. For other scenarios, like inserting a
node or iterating, this does not happen.
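A minimal sketch of the invariant being discussed (this is an illustrative guard, not the literal check in __quicklistCompress):

```c
/* With compression depth 'compress', the head 'compress' nodes and the tail
 * 'compress' nodes must stay plain, so a list of compress*2 nodes (or fewer)
 * should never have any node compressed at all. */
static int quicklistMayCompressSketch(unsigned long len, unsigned int compress) {
    return compress != 0 && len > (unsigned long)compress * 2;
}
```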
It seems like non-Linux sockets may be less greedy, resulting in more
transient client output buffers.
I haven't proven this, but empirically, stressing this test on
non-Linux platforms tends to exhibit increased mem_clients_normal values.
Scenario:
1. A module client is blocked on keys with a timeout
2. Shortly before the timeout expires, the key is being populated and signaled
as ready
3. Redis calls moduleTryServeClientBlockedOnKey (which replies to the client) and
then moduleUnblockClient.
4. moduleUnblockClient doesn't really unblock the client; it writes to
server.module_blocked_pipe and only marks the BC as unblocked.
5. beforeSleep kicks in. By this time the client still exists and has technically
timed out, so beforeSleep replies to the timed-out client (a double reply), and
only then is moduleHandleBlockedClients called, reading from module_blocked_pipe
and calling unblockClient.
The solution is similar to what was done in moduleTryServeClientBlockedOnKey: we
should avoid re-processing an already-unblocked client.
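A hedged sketch of that guard, assuming a flag such as `unblocked` marks a blocked client that moduleUnblockClient has already queued (the schematic types and field names below stand in for the real client / RedisModuleBlockedClient structures):

```c
#include <stddef.h>

/* Schematic stand-ins; only the fields relevant to the guard are shown. */
typedef struct RedisModuleBlockedClient {
    int unblocked;   /* set once moduleUnblockClient has queued this BC */
} RedisModuleBlockedClient;

typedef struct client {
    RedisModuleBlockedClient *module_blocked_handle;
} client;

/* Timeout-path sketch: if the BC was already marked unblocked, skip it so
 * the client does not receive a second (timeout) reply. */
void moduleBlockedClientTimedOutSketch(client *c) {
    RedisModuleBlockedClient *bc = c->module_blocked_handle;
    if (bc == NULL || bc->unblocked) return;  /* already handled; avoid double reply */
    /* ... reply with the module's timeout callback as usual ... */
}
```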