/*
 * Copyright (c) 2009-2012, Salvatore Sanfilippo <antirez at gmail dot com>
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 *   * Redistributions of source code must retain the above copyright notice,
 *     this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of Redis nor the names of its contributors may be used
 *     to endorse or promote products derived from this software without
 *     specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

#include "server.h"
#include "cluster.h"

/* Structure to hold the pubsub related metadata. Currently used
 * for pubsub and pubsubshard feature. */
typedef struct pubsubtype {
    int shard;
    dict *(*clientPubSubChannels)(client*);
    int (*subscriptionCount)(client*);
    kvstore **serverPubSubChannels;
    robj **subscribeMsg;
    robj **unsubscribeMsg;
    robj **messageBulk;
} pubsubtype;
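
/* Illustrative note (not part of the original file): the two pubsubtype
 * instances defined below, pubSubType and pubSubShardType, parameterize the
 * generic subscribe/unsubscribe/notify helpers in this file, so the same
 * code path serves both the global and the shard-level commands, e.g.:
 *
 *   pubsubSubscribeChannel(c, channel, pubSubType);       // SUBSCRIBE
 *   pubsubSubscribeChannel(c, channel, pubSubShardType);  // SSUBSCRIBE
 */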

/*
 * Get client's global Pub/Sub channels subscription count.
 */
int clientSubscriptionsCount(client *c);

/*
 * Get client's shard level Pub/Sub channels subscription count.
 */
int clientShardSubscriptionsCount(client *c);

/*
 * Get client's global Pub/Sub channels dict.
 */
dict* getClientPubSubChannels(client *c);

/*
 * Get client's shard level Pub/Sub channels dict.
 */
dict* getClientPubSubShardChannels(client *c);

/*
 * Get the list of channels the client is subscribed to.
 * If a pattern is provided, only the subset of channels matching
 * the pattern is returned.
 */
void channelList(client *c, sds pat, kvstore *pubsub_channels);

/*
 * Pub/Sub type for global channels.
 */
pubsubtype pubSubType = {
    .shard = 0,
    .clientPubSubChannels = getClientPubSubChannels,
    .subscriptionCount = clientSubscriptionsCount,
    .serverPubSubChannels = &server.pubsub_channels,
    .subscribeMsg = &shared.subscribebulk,
    .unsubscribeMsg = &shared.unsubscribebulk,
    .messageBulk = &shared.messagebulk,
};

/*
 * Pub/Sub type for shard level channels bound to a slot.
 */
pubsubtype pubSubShardType = {
    .shard = 1,
    .clientPubSubChannels = getClientPubSubShardChannels,
    .subscriptionCount = clientShardSubscriptionsCount,
    .serverPubSubChannels = &server.pubsubshard_channels,
    .subscribeMsg = &shared.ssubscribebulk,
    .unsubscribeMsg = &shared.sunsubscribebulk,
    .messageBulk = &shared.smessagebulk,
};

/*-----------------------------------------------------------------------------
 * Pubsub client replies API
 *----------------------------------------------------------------------------*/

/* Send a pubsub message of type "message" to the client.
 * Normally 'msg' is a Redis object containing the string to send as
 * message. However, if the caller sets 'msg' to NULL, it can build a
 * special message (for instance an Array type) by using the
 * addReply*() API family. */
void addReplyPubsubMessage(client *c, robj *channel, robj *msg, robj *message_bulk) {
    /* Set CLIENT_PUSHING so this push reply is not suppressed by the
     * CLIENT REPLY OFF|SKIP silencing flags (see #11875). */
    uint64_t old_flags = c->flags;
    c->flags |= CLIENT_PUSHING;
    if (c->resp == 2)
        addReply(c,shared.mbulkhdr[3]);
    else
        addReplyPushLen(c,3);
    addReply(c,message_bulk);
    addReplyBulk(c,channel);
    if (msg) addReplyBulk(c,msg);
    /* Restore the flag only if the caller had not already set it. */
    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;
}
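
/* Illustrative example (not part of the original file): for a RESP2 client
 * subscribed to "news", addReplyPubsubMessage() emits a 3-element multi-bulk
 * reply when another client runs PUBLISH news hello:
 *
 *   *3
 *   $7
 *   message
 *   $4
 *   news
 *   $5
 *   hello
 *
 * RESP3 clients receive the same three elements as a push frame (">3"). */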

/* Send a pubsub message of type "pmessage" to the client. The difference
 * with the "message" type delivered by addReplyPubsubMessage() is that
 * this message format also includes the pattern that matched the message. */
void addReplyPubsubPatMessage(client *c, robj *pat, robj *channel, robj *msg) {
    uint64_t old_flags = c->flags;
    c->flags |= CLIENT_PUSHING;
    if (c->resp == 2)
        addReply(c,shared.mbulkhdr[4]);
    else
        addReplyPushLen(c,4);
    addReply(c,shared.pmessagebulk);
    addReplyBulk(c,pat);
    addReplyBulk(c,channel);
    addReplyBulk(c,msg);
    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;
}

/* Send the pubsub subscription notification to the client. */
void addReplyPubsubSubscribed(client *c, robj *channel, pubsubtype type) {
    uint64_t old_flags = c->flags;
    c->flags |= CLIENT_PUSHING;
    if (c->resp == 2)
        addReply(c,shared.mbulkhdr[3]);
    else
        addReplyPushLen(c,3);
    addReply(c,*type.subscribeMsg);
    addReplyBulk(c,channel);
    addReplyLongLong(c,type.subscriptionCount(c));
    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;
}
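
/* Illustrative example (not part of the original file): SUBSCRIBE news on a
 * fresh connection is acknowledged as ["subscribe", "news", 1], where the
 * trailing integer is type.subscriptionCount(c), i.e. the client's total
 * subscription count after the operation. */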

/* Send the pubsub unsubscription notification to the client.
 * Channel can be NULL: this is useful when the client sends a mass
 * unsubscribe command but there are no channels to unsubscribe from: we
 * still send a notification. */
void addReplyPubsubUnsubscribed(client *c, robj *channel, pubsubtype type) {
    uint64_t old_flags = c->flags;
    c->flags |= CLIENT_PUSHING;
    if (c->resp == 2)
        addReply(c,shared.mbulkhdr[3]);
    else
        addReplyPushLen(c,3);
    addReply(c,*type.unsubscribeMsg);
    if (channel)
        addReplyBulk(c,channel);
    else
        addReplyNull(c);
    addReplyLongLong(c,type.subscriptionCount(c));
    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;
}

/* Send the pubsub pattern subscription notification to the client. */
void addReplyPubsubPatSubscribed(client *c, robj *pattern) {
    uint64_t old_flags = c->flags;
    c->flags |= CLIENT_PUSHING;
    if (c->resp == 2)
        addReply(c,shared.mbulkhdr[3]);
    else
        addReplyPushLen(c,3);
    addReply(c,shared.psubscribebulk);
    addReplyBulk(c,pattern);
    addReplyLongLong(c,clientSubscriptionsCount(c));
    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;
}

/* Send the pubsub pattern unsubscription notification to the client.
 * Pattern can be NULL: this is useful when the client sends a mass
 * punsubscribe command but there are no patterns to unsubscribe from: we
 * still send a notification. */
void addReplyPubsubPatUnsubscribed(client *c, robj *pattern) {
    uint64_t old_flags = c->flags;
    c->flags |= CLIENT_PUSHING;
    if (c->resp == 2)
        addReply(c,shared.mbulkhdr[3]);
    else
        addReplyPushLen(c,3);
    addReply(c,shared.punsubscribebulk);
    if (pattern)
        addReplyBulk(c,pattern);
    else
        addReplyNull(c);
    addReplyLongLong(c,clientSubscriptionsCount(c));
    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;
}

/*-----------------------------------------------------------------------------
 * Pubsub low level API
 *----------------------------------------------------------------------------*/

/* Return the total number of pubsub channels + patterns the server
 * is handling. */
int serverPubsubSubscriptionCount(void) {
    return kvstoreSize(server.pubsub_channels) + dictSize(server.pubsub_patterns);
}

/* Return the number of shard level pubsub channels the server
 * is handling. */
int serverPubsubShardSubscriptionCount(void) {
    return kvstoreSize(server.pubsubshard_channels);
}

/* Return the number of channels + patterns a client is subscribed to. */
int clientSubscriptionsCount(client *c) {
    return dictSize(c->pubsub_channels) + dictSize(c->pubsub_patterns);
}

/* Return the number of shard level channels a client is subscribed to. */
int clientShardSubscriptionsCount(client *c) {
    return dictSize(c->pubsubshard_channels);
}

/* Return the client's dict of global Pub/Sub channels. */
dict* getClientPubSubChannels(client *c) {
    return c->pubsub_channels;
}

/* Return the client's dict of shard level Pub/Sub channels. */
dict* getClientPubSubShardChannels(client *c) {
    return c->pubsubshard_channels;
}

/* Return the number of pubsub + pubsub shard level channels
 * a client is subscribed to. */
int clientTotalPubSubSubscriptionCount(client *c) {
    return clientSubscriptionsCount(c) + clientShardSubscriptionsCount(c);
}
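
/* Illustrative note (not part of the original file): the unsubscribe paths
 * typically check clientTotalPubSubSubscriptionCount(c) == 0 to decide when
 * a client has left subscriber mode entirely, at which point
 * unmarkClientAsPubSub() below can be called. */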

/* Mark the client as a pubsub client and bump the counter of clients in
 * pubsub mode; a no-op if the flag is already set. */
void markClientAsPubSub(client *c) {
    if (!(c->flags & CLIENT_PUBSUB)) {
        c->flags |= CLIENT_PUBSUB;
        server.pubsub_clients++;
    }
}

/* Clear the pubsub flag from the client and decrement the counter of
 * clients in pubsub mode; a no-op if the flag is not set. */
void unmarkClientAsPubSub(client *c) {
    if (c->flags & CLIENT_PUBSUB) {
        c->flags &= ~CLIENT_PUBSUB;
        server.pubsub_clients--;
    }
}

/* Subscribe a client to a channel. Returns 1 if the operation succeeded, or
 * 0 if the client was already subscribed to that channel. */
int pubsubSubscribeChannel(client *c, robj *channel, pubsubtype type) {
    dictEntry *de, *existing;
    dict *clients = NULL;
    int retval = 0;
    unsigned int slot = 0;

    /* Add the channel to the client -> channels hash table */
    void *position = dictFindPositionForInsert(type.clientPubSubChannels(c),channel,NULL);
    if (position) { /* Not yet subscribed to this channel */
        retval = 1;
        /* Add the client to the channel -> list of clients hash table */
        if (server.cluster_enabled && type.shard) {
            slot = getKeySlot(channel->ptr);
        }

        de = kvstoreDictAddRaw(*type.serverPubSubChannels, slot, channel, &existing);

        if (existing) {
            /* The channel already has subscribers: reuse the stored key
             * object and its clients dict. */
            clients = dictGetVal(existing);
            channel = dictGetKey(existing);
        } else {
            /* First subscriber: create the clients dict for this channel. */
            clients = dictCreate(&clientDictType);
            kvstoreDictSetVal(*type.serverPubSubChannels, slot, de, clients);
            incrRefCount(channel);
        }

        serverAssert(dictAdd(clients, c, NULL) != DICT_ERR);
        serverAssert(dictInsertAtPosition(type.clientPubSubChannels(c), channel, position));
        incrRefCount(channel);
    }
    /* Notify the client */
    addReplyPubsubSubscribed(c,channel,type);
    return retval;
}
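
/* Illustrative sketch of the layout built above (not part of the original
 * file):
 *
 *   client side:  type.clientPubSubChannels(c)  dict: channel -> NULL
 *   server side:  *type.serverPubSubChannels    kvstore of per-slot dicts:
 *                                               channel -> dict(client -> NULL)
 *
 * For global channels the slot is always 0; for shard channels in cluster
 * mode the slot is getKeySlot(channel->ptr), grouping channels by slot. */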
|
|
|
|
|
|
|
|
/* Unsubscribe a client from a channel. Returns 1 if the operation succeeded, or
|
|
|
|
* 0 if the client was not subscribed to the specified channel. */
|
2022-01-03 01:54:47 +01:00
|
|
|
int pubsubUnsubscribeChannel(client *c, robj *channel, int notify, pubsubtype type) {
|
2014-03-20 16:20:37 +01:00
|
|
|
dictEntry *de;
|
2024-01-08 16:32:31 +08:00
|
|
|
dict *clients;
|
2010-06-22 00:07:48 +02:00
|
|
|
int retval = 0;
|
Replace slots_to_channels radix tree with slot specific dictionaries for shard channels. (#12804)
We have achieved replacing `slots_to_keys` radix tree with key->slot
linked list (#9356), and then replacing the list with slot specific
dictionaries for keys (#11695).
Shard channels behave just like keys in many ways, and we also need a
slots->channels mapping. Currently this is still done by using a radix
tree. So we should split `server.pubsubshard_channels` into 16384 dicts
and drop the radix tree, just like what we did to DBs.
Some benefits (basically the benefits of what we've done to DBs):
1. Optimize counting channels in a slot. This is currently used only in
removing channels in a slot. But this is potentially more useful:
sometimes we need to know how many channels there are in a specific slot
when doing slot migration. Counting is now implemented by traversing the
radix tree, and with this PR it will be as simple as calling `dictSize`,
from O(n) to O(1).
2. The radix tree in the cluster has been removed. The shard channel
names no longer require additional storage, which can save memory.
3. Potentially useful in slot migration, as shard channels are logically
split by slots, thus making it easier to migrate, remove or add as a
whole.
4. Avoid rehashing a big dict when there is a large number of channels.
Drawbacks:
1. Takes more memory than using radix tree when there are relatively few
shard channels.
What this PR does:
1. in cluster mode, split `server.pubsubshard_channels` into 16384
dicts, in standalone mode, still use only one dict.
2. drop the `slots_to_channels` radix tree.
3. to save memory (to solve the drawback above), all 16384 dicts are
created lazily, which means only when a channel is about to be inserted
to the dict will the dict be initialized, and when all channels are
deleted, the dict would delete itself.
5. use `server.shard_channel_count` to keep track of the number of all
shard channels.
---------
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2023-12-27 17:40:45 +08:00
|
|
|
int slot = 0;
|
2010-06-22 00:07:48 +02:00
|
|
|
|
|
|
|
/* Remove the channel from the client -> channels hash table */
|
|
|
|
incrRefCount(channel); /* channel may be just a pointer to the same object
|
|
|
|
we have in the hash tables. Protect it... */
|
2022-01-03 01:54:47 +01:00
|
|
|
if (dictDelete(type.clientPubSubChannels(c),channel) == DICT_OK) {
|
2010-06-22 00:07:48 +02:00
|
|
|
retval = 1;
|
|
|
|
/* Remove the client from the channel -> clients list hash table */
|
Replace slots_to_channels radix tree with slot specific dictionaries for shard channels. (#12804)
We have achieved replacing `slots_to_keys` radix tree with key->slot
linked list (#9356), and then replacing the list with slot specific
dictionaries for keys (#11695).
Shard channels behave just like keys in many ways, and we also need a
slots->channels mapping. Currently this is still done by using a radix
tree. So we should split `server.pubsubshard_channels` into 16384 dicts
and drop the radix tree, just like what we did to DBs.
Some benefits (basically the benefits of what we've done to DBs):
1. Optimize counting channels in a slot. This is currently used only in
removing channels in a slot. But this is potentially more useful:
sometimes we need to know how many channels there are in a specific slot
when doing slot migration. Counting is now implemented by traversing the
radix tree, and with this PR it will be as simple as calling `dictSize`,
from O(n) to O(1).
2. The radix tree in the cluster has been removed. The shard channel
names no longer require additional storage, which can save memory.
3. Potentially useful in slot migration, as shard channels are logically
split by slots, thus making it easier to migrate, remove or add as a
whole.
4. Avoid rehashing a big dict when there is a large number of channels.
Drawbacks:
1. Takes more memory than using radix tree when there are relatively few
shard channels.
What this PR does:
1. in cluster mode, split `server.pubsubshard_channels` into 16384
dicts, in standalone mode, still use only one dict.
2. drop the `slots_to_channels` radix tree.
3. to save memory (to solve the drawback above), all 16384 dicts are
created lazily, which means only when a channel is about to be inserted
to the dict will the dict be initialized, and when all channels are
deleted, the dict would delete itself.
5. use `server.shard_channel_count` to keep track of the number of all
shard channels.
---------
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2023-12-27 17:40:45 +08:00
|
|
|
if (server.cluster_enabled && type.shard) {
|
fix scripts access wrong slot if they disagree with pre-declared keys (#12906)
Regarding how to obtain the hash slot of a key, there is an optimization
in `getKeySlot()`, it is used to avoid redundant hash calculations for
keys: when the current client is in the process of executing a command,
it can directly use the slot of the current client because the slot to
access has already been calculated in advance in `processCommand()`.
However, scripts are a special case where, in default mode or with
`allow-cross-slot-keys` enabled, they are allowed to access keys beyond
the pre-declared range. This means that the keys they operate on may not
belong to the slot of the pre-declared keys. Currently, when the
commands in a script are executed, the slot of the original client
(i.e., the current client) is not correctly updated, leading to
subsequent access to the wrong slot.
This PR fixes the above issue. When checking the cluster constraints in
a script, the slot to be accessed by the current command is set for the
original client (i.e., the current client). This ensures that
`getKeySlot()` gets the correct slot cache.
Additionally, the following modifications are made:
1. The 'sort' and 'sort_ro' commands use `getKeySlot()` instead of
`c->slot` because the client could be an engine client in a script, which
could lead to a potential bug.
2. `getKeySlot()` is also used in pubsub to obtain the slot for the
channel, standardizing the way slots are retrieved.
2024-01-15 09:57:12 +08:00
|
|
|
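/* getKeySlot() returns the slot already cached on the current client
 * when a command is being executed for it, and falls back to hashing
 * the key otherwise; see the commit message above. */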
slot = getKeySlot(channel->ptr);
|
2023-12-27 17:40:45 +08:00
|
|
|
}
|
Refactor the per-slot dict-array db.c into a new kvstore data structure (#12822)
# Description
Gather most of the scattered `redisDb`-related code from the per-slot
dict PR (#11695) and turn it into a new data structure, `kvstore`, i.e.
a class that represents an array of dictionaries.
# Motivation
The main motivation is code cleanliness, the idea of using an array of
dictionaries is very well-suited to becoming a self-contained data
structure.
This allowed cleaning some ugly code, among others: loops that run twice
on the main dict and expires dict, and duplicate code for allocating and
releasing this data structure.
# Notes
1. This PR reverts the part of https://github.com/redis/redis/pull/12848
where the `rehashing` list is global (handling rehashing `dict`s is
under the responsibility of `kvstore`, and should not be managed by the
server)
2. This PR also replaces the type of `server.pubsubshard_channels` from
`dict**` to `kvstore` (original PR:
https://github.com/redis/redis/pull/12804). After that was done,
server.pubsub_channels was also chosen to be a `kvstore` (with only one
`dict`, which seems odd) just to make the code cleaner by making it the
same type as `server.pubsubshard_channels`, see
`pubsubtype.serverPubSubChannels`
3. the keys and expires kvstores are currently configured to allocate
the individual dicts only when the first key is added (unlike before, in
which they allocated them in advance), but they won't release them when
the last key is deleted.
Worth mentioning that due to the recent change, the reply of DEBUG
HTSTATS changed in the case where no keys were ever added to the db.
before:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
[Expires HT]
Hash table 0 stats (main hash table):
No stats available for empty dictionaries
```
after:
```
127.0.0.1:6379> DEBUG htstats 9
[Dictionary HT]
[Expires HT]
```
2024-02-05 22:21:35 +07:00
|
|
|
de = kvstoreDictFind(*type.serverPubSubChannels, slot, channel);
|
2015-07-26 15:29:53 +02:00
|
|
|
serverAssertWithInfo(c,NULL,de != NULL);
|
2011-11-08 17:07:55 +01:00
|
|
|
clients = dictGetVal(de);
|
2024-01-08 16:32:31 +08:00
|
|
|
serverAssertWithInfo(c, NULL, dictDelete(clients, c) == DICT_OK);
|
|
|
|
if (dictSize(clients) == 0) {
|
|
|
|
/* Free the dict and associated hash entry if this was
|
2010-06-22 00:07:48 +02:00
|
|
|
* the last client, so that creating and abandoning
|
|
|
|
* millions of PUBSUB channels cannot leak memory. */
|
2024-02-05 22:21:35 +07:00
|
|
|
kvstoreDictDelete(*type.serverPubSubChannels, slot, channel);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
/* Notify the client */
|
2022-01-03 01:54:47 +01:00
|
|
|
if (notify) {
|
|
|
|
addReplyPubsubUnsubscribed(c,channel,type);
|
|
|
|
}
|
2010-06-22 00:07:48 +02:00
|
|
|
decrRefCount(channel); /* it is finally safe to release it */
|
|
|
|
return retval;
|
|
|
|
}
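/* A minimal self-contained sketch (toy names, not Redis internals) of
 * the refcount-protection idiom used above: the channel object handed
 * to pubsubUnsubscribeChannel() may be the very pointer stored in the
 * hash tables, so a reference is taken before deletion and released
 * only once the object is no longer needed. Assumes <stdlib.h>. */
typedef struct toyobj { int refcount; } toyobj;

static void toyIncrRefCount(toyobj *o) { o->refcount++; }
static void toyDecrRefCount(toyobj *o) {
    if (--o->refcount == 0) free(o);
}

static void toyRemoveAndUse(toyobj **table_slot) {
    toyobj *o = *table_slot;
    toyIncrRefCount(o);   /* protect: the table may hold the only reference */
    *table_slot = NULL;   /* stands in for dictDelete() */
    toyDecrRefCount(o);   /* release the table's reference */
    /* ... o is still valid here: reply to the client, etc. ... */
    toyDecrRefCount(o);   /* it is finally safe to release it */
}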
|
|
|
|
|
2023-12-27 17:40:45 +08:00
|
|
|
/* Unsubscribe all shard channels in a slot. */
|
|
|
|
void pubsubShardUnsubscribeAllChannelsInSlot(unsigned int slot) {
|
2024-02-05 22:21:35 +07:00
|
|
|
if (!kvstoreDictSize(server.pubsubshard_channels, slot))
|
2023-12-27 17:40:45 +08:00
|
|
|
return;
|
2024-02-05 22:21:35 +07:00
|
|
|
|
2024-02-07 20:53:50 +08:00
|
|
|
kvstoreDictIterator *kvs_di = kvstoreGetDictSafeIterator(server.pubsubshard_channels, slot);
|
2023-12-27 17:40:45 +08:00
|
|
|
dictEntry *de;
|
2024-02-07 20:53:50 +08:00
|
|
|
while ((de = kvstoreDictIteratorNext(kvs_di)) != NULL) {
|
2023-12-27 17:40:45 +08:00
|
|
|
robj *channel = dictGetKey(de);
|
2024-01-08 16:32:31 +08:00
|
|
|
dict *clients = dictGetVal(de);
|
2022-01-03 01:54:47 +01:00
|
|
|
/* For each client subscribed to the channel, unsubscribe it. */
|
2024-01-19 23:03:20 +08:00
|
|
|
dictIterator *iter = dictGetIterator(clients);
|
2024-01-08 16:32:31 +08:00
|
|
|
dictEntry *entry;
|
|
|
|
while ((entry = dictNext(iter)) != NULL) {
|
|
|
|
client *c = dictGetKey(entry);
|
2023-12-27 17:40:45 +08:00
|
|
|
int retval = dictDelete(c->pubsubshard_channels, channel);
|
2022-01-03 01:54:47 +01:00
|
|
|
serverAssertWithInfo(c,channel,retval == DICT_OK);
|
|
|
|
addReplyPubsubUnsubscribed(c, channel, pubSubShardType);
|
|
|
|
/* If the client has no other pubsub subscription,
|
|
|
|
* move out of pubsub mode. */
|
|
|
|
if (clientTotalPubSubSubscriptionCount(c) == 0) {
|
2023-12-13 13:44:13 +08:00
|
|
|
unmarkClientAsPubSub(c);
|
2022-01-03 01:54:47 +01:00
|
|
|
}
|
|
|
|
}
|
2024-01-08 16:32:31 +08:00
|
|
|
dictReleaseIterator(iter);
|
2024-02-05 22:21:35 +07:00
|
|
|
kvstoreDictDelete(server.pubsubshard_channels, slot, channel);
|
2022-01-03 01:54:47 +01:00
|
|
|
}
|
2024-02-07 20:53:50 +08:00
|
|
|
kvstoreReleaseDictIterator(kvs_di);
|
2022-01-03 01:54:47 +01:00
|
|
|
}
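/* A minimal sketch, with toy names, of the delete-while-iterating
 * contract that the safe kvstore/dict iterator above provides: the
 * next position is captured before the current entry is destroyed.
 * Shown here on a plain linked list; assumes <stdlib.h>. */
typedef struct toynode { struct toynode *next; } toynode;

static void toyFreeAll(toynode *head) {
    toynode *next;
    while (head) {
        next = head->next;  /* the "safe iterator" step: save next first */
        free(head);         /* destroying the current entry is now harmless */
        head = next;
    }
}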
|
|
|
|
|
2013-01-17 01:00:20 +08:00
|
|
|
/* Subscribe a client to a pattern. Returns 1 if the operation succeeded, or 0 if the client was already subscribed to that pattern. */
|
2015-07-26 15:20:46 +02:00
|
|
|
int pubsubSubscribePattern(client *c, robj *pattern) {
|
2018-03-01 11:46:56 +08:00
|
|
|
dictEntry *de;
|
2024-01-08 16:32:31 +08:00
|
|
|
dict *clients;
|
2010-06-22 00:07:48 +02:00
|
|
|
int retval = 0;
|
|
|
|
|
2023-06-19 21:31:18 +08:00
|
|
|
if (dictAdd(c->pubsub_patterns, pattern, NULL) == DICT_OK) {
|
2010-06-22 00:07:48 +02:00
|
|
|
retval = 1;
|
|
|
|
incrRefCount(pattern);
|
2018-03-01 11:46:56 +08:00
|
|
|
/* Add the client to the pattern -> list of clients hash table */
|
2021-02-17 23:13:50 +01:00
|
|
|
de = dictFind(server.pubsub_patterns,pattern);
|
2018-03-01 11:46:56 +08:00
|
|
|
if (de == NULL) {
|
2024-01-08 16:32:31 +08:00
|
|
|
clients = dictCreate(&clientDictType);
|
2021-02-17 23:13:50 +01:00
|
|
|
dictAdd(server.pubsub_patterns,pattern,clients);
|
2018-03-01 11:46:56 +08:00
|
|
|
incrRefCount(pattern);
|
|
|
|
} else {
|
|
|
|
clients = dictGetVal(de);
|
|
|
|
}
|
2024-01-08 16:32:31 +08:00
|
|
|
serverAssert(dictAdd(clients, c, NULL) != DICT_ERR);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
/* Notify the client */
|
2018-12-18 12:33:51 +01:00
|
|
|
addReplyPubsubPatSubscribed(c,pattern);
|
2010-06-22 00:07:48 +02:00
|
|
|
return retval;
|
|
|
|
}
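/* Note the dual bookkeeping in pubsubSubscribePattern() above (the
 * channel code earlier in this file has the same shape): the
 * subscription is recorded both in c->pubsub_patterns (client ->
 * patterns) and in server.pubsub_patterns (pattern -> dict of clients),
 * so lookups in either direction stay O(1) at the cost of keeping the
 * two structures in sync. */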
|
|
|
|
|
|
|
|
/* Unsubscribe a client from a pattern. Returns 1 if the operation succeeded, or
|
|
|
|
* 0 if the client was not subscribed to the specified pattern. */
|
2015-07-26 15:20:46 +02:00
|
|
|
int pubsubUnsubscribePattern(client *c, robj *pattern, int notify) {
|
2018-03-01 11:46:56 +08:00
|
|
|
dictEntry *de;
|
2024-01-08 16:32:31 +08:00
|
|
|
dict *clients;
|
2010-06-22 00:07:48 +02:00
|
|
|
int retval = 0;
|
|
|
|
|
|
|
|
incrRefCount(pattern); /* Protect the object. May be the same we remove */
|
2023-06-19 21:31:18 +08:00
|
|
|
if (dictDelete(c->pubsub_patterns, pattern) == DICT_OK) {
|
2010-06-22 00:07:48 +02:00
|
|
|
retval = 1;
|
2018-03-01 11:46:56 +08:00
|
|
|
/* Remove the client from the pattern -> clients list hash table */
|
2021-02-17 23:13:50 +01:00
|
|
|
de = dictFind(server.pubsub_patterns,pattern);
|
2018-03-01 11:46:56 +08:00
|
|
|
serverAssertWithInfo(c,NULL,de != NULL);
|
|
|
|
clients = dictGetVal(de);
|
2024-01-08 16:32:31 +08:00
|
|
|
serverAssertWithInfo(c, NULL, dictDelete(clients, c) == DICT_OK);
|
|
|
|
if (dictSize(clients) == 0) {
|
|
|
|
/* Free the dict and associated hash entry if this was
|
2018-03-01 11:46:56 +08:00
|
|
|
* the last client. */
|
2021-02-17 23:13:50 +01:00
|
|
|
dictDelete(server.pubsub_patterns,pattern);
|
2018-03-01 11:46:56 +08:00
|
|
|
}
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
/* Notify the client */
|
2018-12-18 12:33:51 +01:00
|
|
|
if (notify) addReplyPubsubPatUnsubscribed(c,pattern);
|
2010-06-22 00:07:48 +02:00
|
|
|
decrRefCount(pattern);
|
|
|
|
return retval;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Unsubscribe from all the channels. Return the number of channels the
|
2014-07-16 17:34:07 +02:00
|
|
|
* client was subscribed to. */
|
2022-01-03 01:54:47 +01:00
|
|
|
int pubsubUnsubscribeAllChannelsInternal(client *c, int notify, pubsubtype type) {
|
2010-06-22 00:07:48 +02:00
|
|
|
int count = 0;
|
2022-01-03 01:54:47 +01:00
|
|
|
if (dictSize(type.clientPubSubChannels(c)) > 0) {
|
|
|
|
dictIterator *di = dictGetSafeIterator(type.clientPubSubChannels(c));
|
2021-02-28 13:11:18 +01:00
|
|
|
dictEntry *de;
|
2010-06-22 00:07:48 +02:00
|
|
|
|
2021-02-28 13:11:18 +01:00
|
|
|
while((de = dictNext(di)) != NULL) {
|
|
|
|
robj *channel = dictGetKey(de);
|
2010-06-22 00:07:48 +02:00
|
|
|
|
2022-01-03 01:54:47 +01:00
|
|
|
count += pubsubUnsubscribeChannel(c,channel,notify,type);
|
2021-02-28 13:11:18 +01:00
|
|
|
}
|
|
|
|
dictReleaseIterator(di);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
2013-01-21 18:50:16 +01:00
|
|
|
/* We were subscribed to nothing? Still reply to the client. */
|
2022-01-03 01:54:47 +01:00
|
|
|
if (notify && count == 0) {
|
|
|
|
addReplyPubsubUnsubscribed(c,NULL,type);
|
|
|
|
}
|
|
|
|
return count;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Unsubscribe a client from all global channels.
|
|
|
|
*/
|
|
|
|
int pubsubUnsubscribeAllChannels(client *c, int notify) {
|
|
|
|
int count = pubsubUnsubscribeAllChannelsInternal(c,notify,pubSubType);
|
|
|
|
return count;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Unsubscribe a client from all subscribed shard channels.
|
|
|
|
*/
|
|
|
|
int pubsubUnsubscribeShardAllChannels(client *c, int notify) {
|
|
|
|
int count = pubsubUnsubscribeAllChannelsInternal(c, notify, pubSubShardType);
|
2010-06-22 00:07:48 +02:00
|
|
|
return count;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Unsubscribe from all the patterns. Return the number of patterns the
|
|
|
|
* client was subscribed to. */
|
2015-07-26 15:20:46 +02:00
|
|
|
int pubsubUnsubscribeAllPatterns(client *c, int notify) {
|
2010-06-22 00:07:48 +02:00
|
|
|
int count = 0;
|
|
|
|
|
2023-06-19 21:31:18 +08:00
|
|
|
if (dictSize(c->pubsub_patterns) > 0) {
|
|
|
|
dictIterator *di = dictGetSafeIterator(c->pubsub_patterns);
|
|
|
|
dictEntry *de;
|
2010-06-22 00:07:48 +02:00
|
|
|
|
2023-06-19 21:31:18 +08:00
|
|
|
while ((de = dictNext(di)) != NULL) {
|
|
|
|
robj *pattern = dictGetKey(de);
|
|
|
|
count += pubsubUnsubscribePattern(c, pattern, notify);
|
|
|
|
}
|
|
|
|
dictReleaseIterator(di);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
2023-06-19 21:31:18 +08:00
|
|
|
|
|
|
|
/* We were subscribed to nothing? Still reply to the client. */
|
2018-12-18 12:33:51 +01:00
|
|
|
if (notify && count == 0) addReplyPubsubPatUnsubscribed(c,NULL);
|
2010-06-22 00:07:48 +02:00
|
|
|
return count;
|
|
|
|
}
|
|
|
|
|
2022-01-03 01:54:47 +01:00
|
|
|
/*
|
|
|
|
* Publish a message to all the subscribers.
|
|
|
|
*/
|
|
|
|
int pubsubPublishMessageInternal(robj *channel, robj *message, pubsubtype type) {
|
2010-06-22 00:07:48 +02:00
|
|
|
int receivers = 0;
|
2014-03-20 16:20:37 +01:00
|
|
|
dictEntry *de;
|
2018-03-01 11:46:56 +08:00
|
|
|
dictIterator *di;
|
2023-12-27 17:40:45 +08:00
|
|
|
unsigned int slot = 0;
|
2010-06-22 00:07:48 +02:00
|
|
|
|
|
|
|
/* Send to clients listening for that channel */
|
2023-12-27 17:40:45 +08:00
|
|
|
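/* Unlike the unsubscribe path above, which may reuse the slot cached
 * on the client via getKeySlot(), publishing hashes the channel name
 * directly: keyHashSlot() maps it (honoring {hash tags}) onto one of
 * the 16384 cluster slots. */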
if (server.cluster_enabled && type.shard) {
|
|
|
|
slot = keyHashSlot(channel->ptr, sdslen(channel->ptr));
|
|
|
|
}
|
2024-02-05 22:21:35 +07:00
|
|
|
de = kvstoreDictFind(*type.serverPubSubChannels, slot, channel);
|
2010-06-22 00:07:48 +02:00
|
|
|
if (de) {
|
2024-01-08 16:32:31 +08:00
|
|
|
dict *clients = dictGetVal(de);
|
|
|
|
dictEntry *entry;
|
2024-01-19 23:03:20 +08:00
|
|
|
dictIterator *iter = dictGetIterator(clients);
|
2024-01-08 16:32:31 +08:00
|
|
|
while ((entry = dictNext(iter)) != NULL) {
|
|
|
|
client *c = dictGetKey(entry);
|
2022-05-30 22:03:59 -07:00
|
|
|
addReplyPubsubMessage(c,channel,message,*type.messageBulk);
|
2022-12-06 22:26:56 -08:00
|
|
|
updateClientMemUsageAndBucket(c);
|
2010-06-22 00:07:48 +02:00
|
|
|
receivers++;
|
|
|
|
}
|
2024-01-08 16:32:31 +08:00
|
|
|
dictReleaseIterator(iter);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
2022-01-03 01:54:47 +01:00
|
|
|
|
|
|
|
if (type.shard) {
|
|
|
|
/* Shard pubsub ignores patterns. */
|
|
|
|
return receivers;
|
|
|
|
}
|
|
|
|
|
2010-06-22 00:07:48 +02:00
|
|
|
/* Send to clients listening to matching channels */
|
2021-02-17 23:13:50 +01:00
|
|
|
di = dictGetIterator(server.pubsub_patterns);
|
2018-03-01 11:46:56 +08:00
|
|
|
if (di) {
|
2010-06-22 00:07:48 +02:00
|
|
|
channel = getDecodedObject(channel);
|
2018-03-01 11:46:56 +08:00
|
|
|
while((de = dictNext(di)) != NULL) {
|
|
|
|
robj *pattern = dictGetKey(de);
|
2024-01-08 16:32:31 +08:00
|
|
|
dict *clients = dictGetVal(de);
|
2018-03-01 11:46:56 +08:00
|
|
|
if (!stringmatchlen((char*)pattern->ptr,
|
|
|
|
sdslen(pattern->ptr),
|
2010-06-22 00:07:48 +02:00
|
|
|
(char*)channel->ptr,
|
2020-03-31 12:40:08 +02:00
|
|
|
sdslen(channel->ptr),0)) continue;
|
|
|
|
|
2024-01-08 16:32:31 +08:00
|
|
|
dictEntry *entry;
|
2024-01-19 23:03:20 +08:00
|
|
|
dictIterator *iter = dictGetIterator(clients);
|
2024-01-08 16:32:31 +08:00
|
|
|
while ((entry = dictNext(iter)) != NULL) {
|
|
|
|
client *c = dictGetKey(entry);
|
2020-03-31 12:40:08 +02:00
|
|
|
addReplyPubsubPatMessage(c,pattern,channel,message);
|
2022-12-06 22:26:56 -08:00
|
|
|
updateClientMemUsageAndBucket(c);
|
2010-06-22 00:07:48 +02:00
|
|
|
receivers++;
|
|
|
|
}
|
2024-01-08 16:32:31 +08:00
|
|
|
dictReleaseIterator(iter);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
decrRefCount(channel);
|
2018-03-01 11:46:56 +08:00
|
|
|
dictReleaseIterator(di);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
return receivers;
|
|
|
|
}
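/* A minimal, self-contained sketch of glob-style matching in the spirit
 * of the stringmatchlen() call above. Toy code for illustration only:
 * it handles just '*' and '?', while the real matcher also supports
 * [...] classes and escaping. */
static int toyGlobMatch(const char *p, const char *s) {
    while (*p) {
        if (*p == '*') {
            while (p[1] == '*') p++;          /* collapse runs of '*' */
            if (p[1] == '\0') return 1;       /* trailing '*' matches any tail */
            for (; *s; s++)
                if (toyGlobMatch(p+1, s)) return 1;
            return toyGlobMatch(p+1, s);      /* '*' matching the empty tail */
        } else if (*p == '?' || *p == *s) {
            if (*s == '\0') return 0;         /* '?' needs one character */
            p++; s++;
        } else {
            return 0;                         /* literal mismatch */
        }
    }
    return *s == '\0';                        /* both exhausted: match */
}
/* For example, toyGlobMatch("news.*", "news.tech") is 1 while
 * toyGlobMatch("news.?", "news.tech") is 0. */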
|
|
|
|
|
2022-01-03 01:54:47 +01:00
|
|
|
/* Publish a message to all the subscribers. */
|
2022-04-17 14:43:22 +02:00
|
|
|
int pubsubPublishMessage(robj *channel, robj *message, int sharded) {
|
|
|
|
return pubsubPublishMessageInternal(channel, message, sharded? pubSubShardType : pubSubType);
|
2022-01-03 01:54:47 +01:00
|
|
|
}
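/* Illustrative usage, assuming the usual call sites: the PUBLISH code
 * path would call pubsubPublishMessage(ch, msg, 0) and the SPUBLISH
 * path pubsubPublishMessage(ch, msg, 1), selecting the global or the
 * shard channel space respectively. */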
|
|
|
|
|
2010-07-01 15:14:25 +02:00
|
|
|
/*-----------------------------------------------------------------------------
|
|
|
|
* Pubsub commands implementation
|
|
|
|
*----------------------------------------------------------------------------*/
|
|
|
|
|
Adds pub/sub channel patterns to ACL (#7993)
Fixes #7923.
This PR appropriates the special `&` symbol (because `@` and `*` are taken),
followed by a literal value or pattern for describing the Pub/Sub patterns that
an ACL user can interact with. It is similar to the existing key patterns
mechanism in function (additive) and implementation (copy-pasta). It also adds
the allchannels and resetchannels ACL keywords, naturally.
The default user is given allchannels permissions, whereas new users get
whatever is defined by the acl-pubsub-default configuration directive. For
backward compatibility in 6.2, the default of this directive is allchannels but
this is likely to be changed to resetchannels in the next major version for
stronger default security settings.
Unless allchannels is set for the user, channel access permissions are checked
as follows :
* Calls to both PUBLISH and SUBSCRIBE will fail unless a pattern matching the
channel name(s) given as arguments exists for the user.
* Calls to PSUBSCRIBE will fail unless the pattern(s) provided as an argument
literally exist(s) in the user's list.
Such failures are logged to the ACL log.
Runtime changes to channel permissions for a user with existing subscribing
clients cause said clients to disconnect unless the new permissions permit the
connections to continue. Note, however, that PSUBSCRIBErs' patterns are matched
literally, so given the change bar:* -> b*, pattern subscribers to bar:* will be
disconnected.
Notes/questions:
* UNSUBSCRIBE, PUNSUBSCRIBE and PUBSUB remain unprotected due to lack of reasons
for touching them.
2020-12-01 14:21:39 +02:00
|
|
|
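/* An illustrative ACL configuration (not taken from this file) using
 * the channel patterns described above:
 *
 *   ACL SETUSER reporter on >secret resetchannels &news.* +subscribe +publish
 *
 * lets user 'reporter' publish and subscribe only to channels matching
 * news.*, while allchannels would lift the restriction entirely. */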
/* SUBSCRIBE channel [channel ...] */
|
2015-07-26 15:20:46 +02:00
|
|
|
void subscribeCommand(client *c) {
|
2010-06-22 00:07:48 +02:00
|
|
|
int j;
|
Unified MULTI, LUA, and RM_Call with respect to blocking commands (#8025)
Blocking commands should not be used with MULTI, LUA, and RM_Call. This is because
the caller, who executes the command in this context, expects a reply.
Today, LUA and MULTI each give blocking commands a special (and different) treatment:
LUA - Most commands are marked with the no-script flag, which is checked when
executing a command from LUA; commands that are not marked (like XREAD) verify that
their blocking mode is not used inside LUA (by checking the CLIENT_LUA client flag).
MULTI - A command that is going to block first verifies that the client is not inside
MULTI (by checking the CLIENT_MULTI client flag). If the client is inside MULTI, the
command returns a result matching an empty key with no timeout (for example BLPOP
inside MULTI acts as LPOP).
For modules that perform RM_Call with a blocking command, the returned result type is
REDISMODULE_REPLY_UNKNOWN and the caller cannot really know what happened.
Disadvantages of the current state are:
No unified approach: LUA, MULTI, and RM_Call each have a different treatment.
A module cannot safely execute a blocking command (and get a reply or an error).
Though it is true that modules are not like LUA or MULTI and should be smart enough
not to execute blocking commands via RM_Call, sometimes you want to execute a command
based on client input (for example if you create a module that provides a new
scripting language like JavaScript or Python).
While modules (on modules command) can check for REDISMODULE_CTX_FLAGS_LUA or
REDISMODULE_CTX_FLAGS_MULTI to know not to block the client, there is no way to
check if the command came from another module using RM_Call. So there is no way
for a module to know not to block another module's RM_Call execution.
This commit adds a way to unify the treatment of blocking clients by introducing
a new CLIENT_DENY_BLOCKING client flag. On LUA, MULTI, and RM_Call the new flag is
turned on to signify that the client should not be blocked. A blocking command
verifies that the flag is turned off before blocking. If a blocking command sees
that the CLIENT_DENY_BLOCKING flag is on, it does not block and returns results
that match an empty key with no timeout (as MULTI does today).
The new flag is checked on the following commands:
List blocking commands: BLPOP, BRPOP, BRPOPLPUSH, BLMOVE
Zset blocking commands: BZPOPMIN, BZPOPMAX
Stream blocking commands: XREAD, XREADGROUP
SUBSCRIBE, PSUBSCRIBE, MONITOR
In addition, the new flag is turned on inside the AOF client: we do not want to
block the AOF client, to prevent deadlocks and command-ordering issues (and there
is also an existing assert in the code that verifies it).
To keep backward compatibility on LUA, all the no-script flags on existing commands
were kept untouched. In addition, a LUA special treatment of XREAD and XREADGROUP was
kept.
To keep backward compatibility on MULTI (which today allows SUBSCRIBE and PSUBSCRIBE),
we added a special treatment to those commands to allow executing them inside MULTI.
The only backward compatibility issue that this PR introduces is that MONITOR
is now not allowed inside MULTI.
Tests were added to verify that blocking commands do not block the client on LUA,
MULTI, or RM_Call. Tests were added to verify the module can check for the
CLIENT_DENY_BLOCKING flag.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Itamar Haber <itamar@redislabs.com>
2020-11-17 18:58:55 +02:00
|
|
|
if ((c->flags & CLIENT_DENY_BLOCKING) && !(c->flags & CLIENT_MULTI)) {
|
|
|
|
/**
|
|
|
|
* A client that has the CLIENT_DENY_BLOCKING flag on
|
|
|
|
* expects a reply per command and so cannot execute SUBSCRIBE.
|
|
|
|
*
|
|
|
|
* Notice that we have a special treatment for multi because of
|
2021-06-10 20:39:33 +08:00
|
|
|
* backward compatibility.
|
2020-11-17 18:58:55 +02:00
|
|
|
*/
|
2020-12-01 14:21:39 +02:00
|
|
|
addReplyError(c, "SUBSCRIBE isn't allowed for a DENY BLOCKING client");
|
2020-11-17 18:58:55 +02:00
|
|
|
return;
|
|
|
|
}
|
2010-06-22 00:07:48 +02:00
|
|
|
for (j = 1; j < c->argc; j++)
|
2022-01-03 01:54:47 +01:00
|
|
|
pubsubSubscribeChannel(c,c->argv[j],pubSubType);
|
2023-12-13 13:44:13 +08:00
|
|
|
markClientAsPubSub(c);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2022-01-03 01:54:47 +01:00
|
|
|
/* UNSUBSCRIBE [channel ...] */
|
2015-07-26 15:20:46 +02:00
|
|
|
void unsubscribeCommand(client *c) {
|
2010-06-22 00:07:48 +02:00
|
|
|
if (c->argc == 1) {
|
|
|
|
pubsubUnsubscribeAllChannels(c,1);
|
|
|
|
} else {
|
|
|
|
int j;
|
|
|
|
|
|
|
|
for (j = 1; j < c->argc; j++)
|
2022-01-03 01:54:47 +01:00
|
|
|
pubsubUnsubscribeChannel(c,c->argv[j],1,pubSubType);
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
2023-12-13 13:44:13 +08:00
|
|
|
if (clientTotalPubSubSubscriptionCount(c) == 0) {
|
|
|
|
unmarkClientAsPubSub(c);
|
|
|
|
}
|
2010-06-22 00:07:48 +02:00
|
|
|
}
|
|
|
|
|
2020-12-01 14:21:39 +02:00
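A minimal sketch of the two matching rules the commit describes (hypothetical
helper; the real checks live in acl.c): channel access is a glob match against
the user's patterns, while PSUBSCRIBE requires a literal entry in the list:
```c
#include <string.h>

/* Sketch only: glob-match channels, literal-match PSUBSCRIBE patterns. */
int aclChannelMatchSketch(sds allowed, sds requested, int literal) {
    if (literal)
        return sdslen(allowed) == sdslen(requested) &&
               !memcmp(allowed, requested, sdslen(allowed));
    return stringmatchlen(allowed, sdslen(allowed),
                          requested, sdslen(requested), 0);
}
```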
/* PSUBSCRIBE pattern [pattern ...] */
void psubscribeCommand(client *c) {
    int j;

    if ((c->flags & CLIENT_DENY_BLOCKING) && !(c->flags & CLIENT_MULTI)) {
        /**
         * A client that has the CLIENT_DENY_BLOCKING flag on expects a reply
         * per command and so cannot execute PSUBSCRIBE.
         *
         * Notice that MULTI gets special treatment here because of
         * backward compatibility.
         */
        addReplyError(c, "PSUBSCRIBE isn't allowed for a DENY BLOCKING client");
        return;
    }

    for (j = 1; j < c->argc; j++)
        pubsubSubscribePattern(c,c->argv[j]);
    markClientAsPubSub(c);
}

/* PUNSUBSCRIBE [pattern [pattern ...]] */
void punsubscribeCommand(client *c) {
    if (c->argc == 1) {
        pubsubUnsubscribeAllPatterns(c,1);
    } else {
        int j;

        for (j = 1; j < c->argc; j++)
            pubsubUnsubscribePattern(c,c->argv[j],1);
    }
    if (clientTotalPubSubSubscriptionCount(c) == 0) {
        unmarkClientAsPubSub(c);
    }
}

/* This function wraps pubsubPublishMessage and also propagates the message
 * to the cluster. Used by the commands PUBLISH/SPUBLISH and their respective
 * module APIs. */
int pubsubPublishMessageAndPropagateToCluster(robj *channel, robj *message, int sharded) {
    int receivers = pubsubPublishMessage(channel, message, sharded);
    if (server.cluster_enabled)
        clusterPropagatePublish(channel, message, sharded);
    return receivers;
}
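For illustration, a minimal sketch (hypothetical helper name; the real module
wrappers live elsewhere) of how any publish path can funnel through this
wrapper, so receiver counting and cluster propagation stay in one place:
```c
/* Sketch only: publish to the global (sharded=0) or shard (sharded=1)
 * Pub/Sub dictionaries, letting the wrapper handle cluster propagation. */
static int publishViaWrapperSketch(robj *channel, robj *message, int sharded) {
    return pubsubPublishMessageAndPropagateToCluster(channel, message, sharded);
}
```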
Treat subcommands as commands (#9504)
## Intro
The purpose is to allow having different flags/ACL categories for
subcommands (example: CONFIG GET is ok-loading but CONFIG SET isn't).
We create a small command table for every command that has subcommands,
and each subcommand has its own flags, etc. (the same as a "regular" command).
This commit also unites the Redis and the Sentinel command tables.
## Affected commands
CONFIG
Used to have "admin ok-loading ok-stale no-script"
Changes:
1. Dropped "ok-loading" in all except GET (this doesn't change behavior since
there were checks in the code doing that)
XINFO
Used to have "read-only random"
Changes:
1. Dropped "random" in all except CONSUMERS
XGROUP
Used to have "write use-memory"
Changes:
1. Dropped "use-memory" in all except CREATE and CREATECONSUMER
COMMAND
No changes.
MEMORY
Used to have "random read-only"
Changes:
1. Dropped "random" in PURGE and USAGE
ACL
Used to have "admin no-script ok-loading ok-stale"
Changes:
1. Dropped "admin" in WHOAMI, GENPASS, and CAT
LATENCY
No changes.
MODULE
No changes.
SLOWLOG
Used to have "admin random ok-loading ok-stale"
Changes:
1. Dropped "random" in RESET
OBJECT
Used to have "read-only random"
Changes:
1. Dropped "random" in ENCODING and REFCOUNT
SCRIPT
Used to have "may-replicate no-script"
Changes:
1. Dropped "may-replicate" in all except FLUSH and LOAD
CLIENT
Used to have "admin no-script random ok-loading ok-stale"
Changes:
1. Dropped "random" in all except INFO and LIST
2. Dropped "admin" in ID, TRACKING, CACHING, GETREDIR, INFO, SETNAME, GETNAME, and REPLY
STRALGO
No changes.
PUBSUB
No changes.
CLUSTER
Changes:
1. Dropped "admin" in COUNTKEYSINSLOT, GETKEYSINSLOT, INFO, NODES, KEYSLOT, MYID, and SLOTS
SENTINEL
No changes.
(Note that DEBUG also fits, but we decided not to convert it since it's for
debugging and is anyway undocumented.)
## New sub-command
This commit adds another element to the per-command output of COMMAND,
describing the list of subcommands, if any (in the same structure as "regular" commands).
Also, it adds a new subcommand:
```
COMMAND LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>)]
```
which returns a set of all commands (unless filtered), but excluding subcommands.
A sketch of the resulting table layout follows publishCommand below.
## Module API
A new module API, RM_CreateSubcommand, was added, in order to allow
module writers to define subcommands.
## ACL changes
1. Now that each subcommand is actually a command, each has its own ACL id.
2. The old mechanism of allowed_subcommands is redundant
(blocking/allowing a subcommand is the same as blocking/allowing a regular command),
but we had to keep it to support the widespread usage of allowed_subcommands
to block commands with certain args that aren't subcommands (e.g. "-select +select|0").
3. I have renamed allowed_subcommands to allowed_firstargs to emphasize the difference.
4. Because subcommands are commands in ACL too, you can now use "-" to block subcommands
(e.g. "+client -client|kill"), which wasn't possible in the past.
5. It is also possible to use the allowed_firstargs mechanism with subcommands.
For example: `+config -config|set +config|set|loglevel` will block all CONFIG SET except
for setting the log level.
6. All of the ACL changes above required some amount of refactoring.
## Misc
1. There are two approaches: either each subcommand has its own function, or all
subcommands use the same function, determining what to do according to argv[0].
For now, I took the former approach only with CONFIG and COMMAND,
while other commands use the latter approach (for a smaller blamelog diff).
2. Deleted memoryGetKeys: it is no longer needed because MEMORY USAGE now uses the "range" key spec.
3. Bugfix: GETNAME was missing from CLIENT's help message.
4. Sentinel and Redis now use the same table, with the same function pointer.
Some commands have a different implementation in Sentinel, so we redirect
them (these are ROLE, PUBLISH, and INFO).
5. Command stats now show the stats per subcommand (e.g. instead of stats just
for "config" you will have stats for "config|set", "config|get", etc.)
6. It is now possible to use COMMAND directly on subcommands:
COMMAND INFO CONFIG|GET (the pipe syntax was inspired by ACL, and
can be used in the functions lookupCommandBySds and lookupCommandByCString)
7. STRALGO is now a container command (has "help")
## Breaking changes
1. Command stats now show the stats per subcommand (see Misc (5) above)

/* PUBLISH <channel> <message> */
void publishCommand(client *c) {
    if (server.sentinel_mode) {
        sentinelPublishCommand(c);
        return;
    }

    int receivers = pubsubPublishMessageAndPropagateToCluster(c->argv[1],c->argv[2],0);
    if (!server.cluster_enabled)
        forceCommandPropagation(c,PROPAGATE_REPL);
    addReplyLongLong(c,receivers);
}

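As a rough illustration of the per-command subcommand tables described in the
#9504 commit message above (simplified, hypothetical declarations; the real
tables are generated from the JSON files under src/commands), a container
command points at a small table whose entries carry their own flags:
```c
/* Sketch only: simplified shape of a container command with subcommands. */
struct cmdSketch {
    const char *name;
    const char *flags;               /* each subcommand has its own flags */
    struct cmdSketch *subcommands;   /* NULL for leaf commands */
};
static struct cmdSketch configSubSketch[] = {
    {"get", "admin ok-loading ok-stale no-script", NULL}, /* GET keeps ok-loading */
    {"set", "admin ok-stale no-script", NULL},            /* SET dropped it */
    {NULL, NULL, NULL}
};
static struct cmdSketch configSketch = {"config", "", configSubSketch};
```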
/* PUBSUB command for Pub/Sub introspection. */
void pubsubCommand(client *c) {
    if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,"help")) {
        const char *help[] = {
"CHANNELS [<pattern>]",
"    Return the currently active channels matching a <pattern> (default: '*').",
"NUMPAT",
"    Return the number of subscriptions to patterns.",
"NUMSUB [<channel> ...]",
"    Return the number of subscribers for the specified channels, excluding",
"    pattern subscriptions (default: no channels).",
"SHARDCHANNELS [<pattern>]",
"    Return the currently active shard level channels matching a <pattern> (default: '*').",
"SHARDNUMSUB [<shardchannel> ...]",
"    Return the number of subscribers for the specified shard level channel(s).",
NULL
        };
        addReplyHelp(c, help);
    } else if (!strcasecmp(c->argv[1]->ptr,"channels") &&
               (c->argc == 2 || c->argc == 3))
    {
        /* PUBSUB CHANNELS [<pattern>] */
        sds pat = (c->argc == 2) ? NULL : c->argv[2]->ptr;
        channelList(c, pat, server.pubsub_channels);
    } else if (!strcasecmp(c->argv[1]->ptr,"numsub") && c->argc >= 2) {
        /* PUBSUB NUMSUB [Channel_1 ... Channel_N] */
        int j;

        addReplyArrayLen(c,(c->argc-2)*2);
        for (j = 2; j < c->argc; j++) {
            dict *d = kvstoreDictFetchValue(server.pubsub_channels, 0, c->argv[j]);

            addReplyBulk(c,c->argv[j]);
            addReplyLongLong(c, d ? dictSize(d) : 0);
        }
    } else if (!strcasecmp(c->argv[1]->ptr,"numpat") && c->argc == 2) {
        /* PUBSUB NUMPAT */
        addReplyLongLong(c,dictSize(server.pubsub_patterns));
    } else if (!strcasecmp(c->argv[1]->ptr,"shardchannels") &&
               (c->argc == 2 || c->argc == 3))
    {
        /* PUBSUB SHARDCHANNELS */
        sds pat = (c->argc == 2) ? NULL : c->argv[2]->ptr;
        channelList(c,pat,server.pubsubshard_channels);
    } else if (!strcasecmp(c->argv[1]->ptr,"shardnumsub") && c->argc >= 2) {
        /* PUBSUB SHARDNUMSUB [ShardChannel_1 ... ShardChannel_N] */
        int j;

        addReplyArrayLen(c, (c->argc-2)*2);
        for (j = 2; j < c->argc; j++) {
            unsigned int slot = calculateKeySlot(c->argv[j]->ptr);
            dict *clients = kvstoreDictFetchValue(server.pubsubshard_channels, slot, c->argv[j]);

            addReplyBulk(c,c->argv[j]);
            addReplyLongLong(c, clients ? dictSize(clients) : 0);
        }
    } else {
        addReplySubcommandSyntaxError(c);
    }
}

Replace slots_to_channels radix tree with slot specific dictionaries for shard channels. (#12804)
We have achieved replacing the `slots_to_keys` radix tree with a key->slot
linked list (#9356), and then replacing the list with slot specific
dictionaries for keys (#11695).
Shard channels behave just like keys in many ways, and we also need a
slots->channels mapping. Previously this was still done using a radix
tree, so we split `server.pubsubshard_channels` into 16384 dicts and
drop the radix tree, just like what we did to DBs.
Some benefits (basically the benefits of what we've done to DBs):
1. Optimized counting of channels in a slot. This is currently used only when
removing channels in a slot, but it is potentially more useful: sometimes we
need to know how many channels there are in a specific slot when doing slot
migration. Counting was implemented by traversing the radix tree; with this PR
it is as simple as calling `dictSize`, going from O(n) to O(1).
2. The radix tree in the cluster has been removed. The shard channel
names no longer require additional storage, which saves memory.
3. Potentially useful in slot migration, as shard channels are logically
split by slots, making it easier to migrate, remove or add them as a whole.
4. Avoids rehashing a big dict when there is a large number of channels.
Drawbacks:
1. Takes more memory than the radix tree when there are relatively few
shard channels.
What this PR does:
1. In cluster mode, split `server.pubsubshard_channels` into 16384
dicts; in standalone mode, still use only one dict.
2. Drop the `slots_to_channels` radix tree.
3. To save memory (to solve the drawback above), all 16384 dicts are
created lazily: only when a channel is about to be inserted will the dict
be initialized, and when all channels are deleted, the dict deletes itself.
4. Use `server.shard_channel_count` to keep track of the number of all
shard channels.
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>

void channelList(client *c, sds pat, kvstore *pubsub_channels) {
    long mblen = 0;
    void *replylen;
    unsigned int slot_cnt = kvstoreNumDicts(pubsub_channels);

    replylen = addReplyDeferredLen(c);
    for (unsigned int i = 0; i < slot_cnt; i++) {
        if (!kvstoreDictSize(pubsub_channels, i))
            continue;
        kvstoreDictIterator *kvs_di = kvstoreGetDictIterator(pubsub_channels, i);
        dictEntry *de;
        while((de = kvstoreDictIteratorNext(kvs_di)) != NULL) {
            robj *cobj = dictGetKey(de);
            sds channel = cobj->ptr;

            if (!pat || stringmatchlen(pat, sdslen(pat),
                                       channel, sdslen(channel),0))
            {
                addReplyBulk(c,cobj);
                mblen++;
            }
        }
        kvstoreReleaseDictIterator(kvs_di);
    }
    setDeferredArrayLen(c,replylen,mblen);
}

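channelList above uses the deferred-length reply pattern because the number of
matching channels is unknown until the scan over all slot dicts finishes. A
minimal sketch of the same pattern for a hypothetical filtering command:
```c
/* Sketch: reserve the array header, stream the items, then patch the real
 * count in with setDeferredArrayLen(). */
void filterReplySketchCommand(client *c) {
    long emitted = 0;
    void *replylen = addReplyDeferredLen(c); /* placeholder header */
    for (int j = 1; j < c->argc; j++) {
        if (sdslen((sds)c->argv[j]->ptr) > 0) { /* arbitrary filter */
            addReplyBulk(c, c->argv[j]);
            emitted++;
        }
    }
    setDeferredArrayLen(c, replylen, emitted); /* fix up the length */
}
```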
/* SPUBLISH <shardchannel> <message> */
void spublishCommand(client *c) {
    int receivers = pubsubPublishMessageAndPropagateToCluster(c->argv[1],c->argv[2],1);
    if (!server.cluster_enabled)
        forceCommandPropagation(c,PROPAGATE_REPL);
    addReplyLongLong(c,receivers);
}

/* SSUBSCRIBE shardchannel [shardchannel ...] */
void ssubscribeCommand(client *c) {
    if (c->flags & CLIENT_DENY_BLOCKING) {
        /* A client that has the CLIENT_DENY_BLOCKING flag on expects a reply
         * per command and so cannot execute SSUBSCRIBE. */
        addReplyError(c, "SSUBSCRIBE isn't allowed for a DENY BLOCKING client");
        return;
    }

    for (int j = 1; j < c->argc; j++) {
        pubsubSubscribeChannel(c, c->argv[j], pubSubShardType);
    }
    markClientAsPubSub(c);
}

/* SUNSUBSCRIBE [shardchannel [shardchannel ...]] */
void sunsubscribeCommand(client *c) {
    if (c->argc == 1) {
        pubsubUnsubscribeShardAllChannels(c, 1);
    } else {
        for (int j = 1; j < c->argc; j++) {
            pubsubUnsubscribeChannel(c, c->argv[j], 1, pubSubShardType);
        }
    }
    if (clientTotalPubSubSubscriptionCount(c) == 0) {
        unmarkClientAsPubSub(c);
    }
}

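Shard channels are mapped to cluster slots exactly like keys, which is what
makes the per-slot shard-channel dictionaries used above possible. A one-line
sketch (hypothetical wrapper) of that mapping:
```c
/* Sketch: shard channels reuse the key hashing, so shard channel "foo"
 * lives in the same slot as key "foo". */
unsigned int shardChannelSlotSketch(robj *channel) {
    return calculateKeySlot(channel->ptr);
}
```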
size_t pubsubMemOverhead(client *c) {
    /* PubSub patterns */
    size_t mem = dictMemUsage(c->pubsub_patterns);
    /* Global PubSub channels */
    mem += dictMemUsage(c->pubsub_channels);
    /* Sharded PubSub channels */
    mem += dictMemUsage(c->pubsubshard_channels);
    return mem;
}

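pubsubMemOverhead() accounts only for the client's three subscription
dictionaries. A hedged sketch (hypothetical helper; the real accounting lives
in the CLIENT memory-usage path) of how a caller might fold it into a client's
wider footprint:
```c
/* Sketch only: pubsub overhead is one term of a client's memory usage. */
size_t clientFootprintSketch(client *c) {
    size_t mem = pubsubMemOverhead(c);          /* subscriptions */
    mem += sdsAllocSize(c->querybuf);           /* input buffer */
    mem += getClientOutputBufferMemoryUsage(c); /* pending replies */
    return mem;
}
```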
int pubsubTotalSubscriptions(void) {
    return dictSize(server.pubsub_patterns) +
           kvstoreSize(server.pubsub_channels) +
           kvstoreSize(server.pubsubshard_channels);
}