10340 Commits

Author SHA1 Message Date
antirez
813960dbdd Fix ziplist prevlen encoding description. See #4705. 2018-02-23 12:19:35 +01:00
gechunlin
d4e6d1086f Update object.c 2018-02-22 20:57:54 -06:00
artix
8f4f001dc3 Cluster Manager:
- Almost all Cluster Manager-related code moved to
  the same section.
- Many macros converted to functions
- Added various comments
- Little code restyling
2018-02-22 18:35:40 +01:00
artix
87f5a7c0b4 - Fixed bug in clusterManagerGetAntiAffinityScore
- Code improvements
2018-02-22 18:35:40 +01:00
artix
605d7262e6 Cluster Manager: colorized output 2018-02-22 18:35:40 +01:00
artix
4ca8dbdc2b Cluster Manager: improved cleanup/error handling in various functions 2018-02-22 18:35:40 +01:00
artix
8128f1bf03 Cluster Manager: 'call' command. 2018-02-22 18:35:40 +01:00
artix
7b9f945b37 Cluster Manager: CLUSTER_MANAGER_NODE_CONNECT macro 2018-02-22 18:35:40 +01:00
artix
dad69ac320 ClusterManager: added replicas count to clusterManagerNode 2018-02-22 18:35:40 +01:00
artix
956bec4ca8 Cluster Manager: cluster is considered consistent if only one node has been found 2018-02-22 18:35:40 +01:00
artix
1b1f80e60f Cluster Manager: reply error catch for MEET command 2018-02-22 18:35:40 +01:00
artix
be7e2b84bd Cluster Manager: slots coverage check. 2018-02-22 18:35:40 +01:00
artix
d38045805d - Cluster Manager: fixed various memory leaks
- Cluster Manager: fixed flags assignment in
  clusterManagerNodeLoadInfo
2018-02-22 18:35:40 +01:00
artix
74dcd14d13 Added check for open slots (clusterManagerCheckCluster) 2018-02-22 18:35:40 +01:00
artix
bafdc1a56c Cluster Manager: 'create', 'info' and 'check' commands 2018-02-22 18:35:40 +01:00
artix
1dd67ebceb Cluster Manager mode 2018-02-22 18:35:39 +01:00
Oran Agra
5def65008f Fix zrealloc to behave similarly to je_realloc when size is 0
According to C11, the behavior of realloc with size 0 is now deprecated:
it can either behave as free(ptr) and return NULL, or return a valid
pointer. In zmalloc, however, it can lead to zmalloc_oom_handler and a
panic, and that can affect modules that use it.

It looks like both the glibc allocator and jemalloc behave like so:
  realloc(malloc(32),0) returns NULL
  realloc(NULL,0) returns a valid pointer

This commit changes zmalloc to behave the same way; a sketch of these
semantics follows this entry.
2018-02-21 11:04:13 +02:00
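
As a minimal sketch of the semantics described above, a wrapper around
realloc can mirror the observed glibc/jemalloc behavior as follows (the
function name and code are illustrative, not the actual zmalloc.c
implementation):

    #include <stdlib.h>

    /* Illustrative sketch only: mirror the glibc/jemalloc semantics
     * for realloc with size == 0 described in the commit above. */
    void *xrealloc(void *ptr, size_t size) {
        if (size == 0 && ptr != NULL) {
            /* realloc(ptr, 0) on a live allocation: behave as
             * free(ptr) and return NULL. */
            free(ptr);
            return NULL;
        }
        /* realloc(NULL, 0) falls through here and returns a valid
         * pointer, exactly like malloc(0). */
        return realloc(ptr, size);
    }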
antirez
ffde73c57d Track number of logically expired keys still in memory.
This commit adds two new fields in the INFO output, stats section:

expired_stale_perc:0.34
expired_time_cap_reached_count:58

The first field is an estimate of the percentage of keys that are
still in memory but are already logically expired. The reason why
those keys are not yet reclaimed is that the active expire cycle can't
spend more time on the process of reclaiming them, and at the same
time nobody is accessing such keys. However, as the active expire
cycle runs, even though it eventually has to return to the caller
because of the time limit, or because fewer than 25% of the keys in
each given database are logically expired, it collects the stats
needed to populate this INFO field.

Note that expired_stale_perc is a running average, where the current
sample accounts for 5% and the history for 95%, so you'll see it
changing smoothly over time (see the sketch after this entry).

The other field, expired_time_cap_reached_count, counts the number
of times the expire cycle had to stop because of the time limit, even
though it was still finding a sizeable number of keys yet to expire.
This allows people handling operations to understand whether the Redis
server, during mass-expiration events, is able to collect keys fast
enough. It is normal for this field to increment during mass expires,
but otherwise it should increment very rarely. When instead it
constantly increments, it means that the current workload is spending
a significant percentage of CPU time expiring keys.

This feature was created thanks to the hints of Rashmi Ramesh and
Bart Robinson from Twitter. In private email exchanges, they noted how
important it was to improve the observability of this parameter in the
Redis server. Indeed, in big deployments, the keys that are logically
expired but not yet reclaimed in each server may account for a very
large amount of wasted memory.
2018-02-19 11:12:49 +01:00
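
As a minimal sketch, the 5%/95% running average described above can be
computed as an exponential moving average like this (the names
update_expired_stale_perc and current_sample are illustrative, not the
actual server code):

    /* Illustrative sketch only: each new sample contributes 5%,
     * the accumulated history 95%, so the stat changes smoothly
     * over time rather than jumping with each expire cycle. */
    static double expired_stale_perc = 0;

    void update_expired_stale_perc(double current_sample) {
        expired_stale_perc = current_sample * 0.05 +
                             expired_stale_perc * 0.95;
    }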
antirez
aa57481d8c Remove non semantical spaces from module.c. 2018-02-15 21:41:03 +01:00
Salvatore Sanfilippo
7830f8492f Merge pull request #4479 from dvirsky/notify
Keyspace notifications API for modules
2018-02-15 21:36:32 +01:00
antirez
f4dc736cca Fix typo in notifyKeyspaceEvent() comment. 2018-02-15 21:33:06 +01:00
Dvir Volk
0a36196ce4 Add doc comment about notification flags 2018-02-14 21:54:00 +02:00
Dvir Volk
10efdf307b Add REDISMODULE_NOTIFY_STREAM flag to support stream notifications 2018-02-14 21:50:42 +02:00
Dvir Volk
613831f820 Fix indentation and comment style in testmodule 2018-02-14 21:43:06 +02:00
Dvir Volk
f27a64232e Use one static client for all keyspace notification callbacks 2018-02-14 21:40:10 +02:00
Dvir Volk
3aab12414f Remove the NOTIFY_MODULE flag and simplify the module notification flow if there aren't subscribers 2018-02-14 21:40:10 +02:00