Prevent repetitive backlog trimming (#12155)

When `replicationFeedSlaves()` serializes a command, it repeatedly calls
`feedReplicationBuffer()` to feed it to the replication backlog piece by piece.
It is unnecessary to call `incrementalTrimReplicationBacklog()` for every small
amount of data added with `feedReplicationBuffer()`: the conditions for trimming
are rarely met on any given call, and the frequent calls add up to a notable
performance cost. Instead, we now only attempt trimming when a new block is added
to the replication backlog.
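
To illustrate the pattern outside of the Redis source, here is a minimal,
self-contained sketch; the names (`feed_buffer`, `trim_backlog`) and the fixed
block size are illustrative assumptions, not the actual implementation:

```c
/* Sketch: data arrives in many small chunks, but trimming is only attempted
 * when appending a chunk forces a new block to be allocated. */
#include <stdio.h>
#include <stddef.h>

#define BLOCK_SIZE 16

static size_t used_in_last_block = BLOCK_SIZE; /* full, so first append allocates */
static size_t block_count = 0;

static void trim_backlog(void) {
    /* Placeholder for incremental trimming; in Redis this would release the
     * oldest blocks once the backlog exceeds its configured size. */
    printf("trim attempt (blocks=%zu)\n", block_count);
}

static void feed_buffer(size_t len) {
    int added_new_block = 0;
    while (len > 0) {
        if (used_in_last_block == BLOCK_SIZE) {
            block_count++;                 /* allocate a fresh block */
            used_in_last_block = 0;
            added_new_block = 1;
        }
        size_t avail = BLOCK_SIZE - used_in_last_block;
        size_t n = len < avail ? len : avail;
        used_in_last_block += n;
        len -= n;
    }
    /* Trim only when this append created a new block, not on every small write. */
    if (added_new_block) trim_backlog();
}

int main(void) {
    /* Many small appends: most calls skip the trim attempt entirely. */
    for (int i = 0; i < 20; i++) feed_buffer(5);
    return 0;
}
```

Most of the small appends fall entirely within the current block, so the trim
attempt is skipped on the vast majority of calls.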

Using redis-benchmark to saturate a local Redis server showed a throughput
improvement of around 3-3.5% for 100-byte SET commands with this change.
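A run of that shape could be reproduced with something along the lines of
`redis-benchmark -t set -d 100 -n 1000000 -P 16 -q` against a local instance;
the exact flags used for the measurement are not recorded here, so treat this
invocation as an assumption.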
Brennan 2023-05-17 23:25:56 -07:00 committed by GitHub
parent 49845c24b1
commit 40e6131ba5


@@ -418,13 +418,13 @@ void feedReplicationBuffer(char *s, size_t len) {
         }
         if (add_new_block) {
             createReplicationBacklogIndex(listLast(server.repl_buffer_blocks));
+            /* It is important to trim after adding replication data to keep the backlog size close to
+             * repl_backlog_size in the common case. We wait until we add a new block to avoid repeated
+             * unnecessary trimming attempts when small amounts of data are added. See comments in
+             * freeMemoryGetNotCountedMemory() for details on replication backlog memory tracking. */
+            incrementalTrimReplicationBacklog(REPL_BACKLOG_TRIM_BLOCKS_PER_CALL);
         }
-        /* Try to trim replication backlog since replication backlog may exceed
-         * our setting when we add replication stream. Note that it is important to
-         * try to trim at least one node since in the common case this is where one
-         * new backlog node is added and one should be removed. See also comments
-         * in freeMemoryGetNotCountedMemory for details. */
-        incrementalTrimReplicationBacklog(REPL_BACKLOG_TRIM_BLOCKS_PER_CALL);
     }
 }