292 Commits

Author SHA1 Message Date
ShooterIT
fc18f9a798 Implements sendfile for redis. 2020-05-25 12:08:01 +02:00
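
For context, a minimal sketch of streaming an on-disk RDB to a replica
socket with the Linux sendfile(2) syscall; rdbSendFileChunk() is a
hypothetical helper name, not the actual Redis function:

    #include <sys/types.h>
    #include <sys/sendfile.h>
    #include <errno.h>

    /* Copy up to `count` bytes of the RDB file straight from the page
     * cache to the replica socket, with no userspace buffer. */
    ssize_t rdbSendFileChunk(int replica_fd, int rdb_fd,
                             off_t *offset, size_t count) {
        ssize_t sent;
        do {
            sent = sendfile(replica_fd, rdb_fd, offset, count);
        } while (sent == -1 && errno == EINTR); /* retry on interrupt */
        return sent; /* on -1 the caller falls back to read()/write() */
    }
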
antirez
ed3fd6c524 Fix #7306 less aggressively.
Citing from the issue:

btw I suggest we change this fix to something else:
* We revert the fix.
* We add a call that disconnects chained replicas in the place where we trim the replica (that is a master in this case) offset.
This way we can avoid disconnections when there is no trimming of the backlog.

Note that we now want to disconnect replicas asynchronously in
disconnectSlaves(), because it's in general safer now that we can call
it from freeClient(). Otherwise for instance the command:

    CLIENT KILL TYPE master

May crash: clientCommand() starts scanning the linked list of clients,
looking for clients to kill. However it finds the master, kills it
by calling freeClient(), but this in turn calls replicationCacheMaster()
that may also call disconnectSlaves() now. So the linked list iterator
of clientCommand() will no longer be valid.
2020-05-25 12:08:01 +02:00
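
A simplified sketch of the iterator hazard described above, using the
Redis list API; matchesKillFilter() is a hypothetical stand-in for the
CLIENT KILL matching logic:

    /* UNSAFE: freeClient() may cascade into disconnectSlaves() and
     * unlink list nodes out from under the iterator. */
    listIter li;
    listNode *ln;
    listRewind(server.clients,&li);
    while ((ln = listNext(&li)) != NULL) {
        client *c = listNodeValue(ln);
        if (matchesKillFilter(c)) freeClient(c); /* may invalidate li */
    }

    /* SAFE: only flag the client with CLIENT_CLOSE_ASAP here; it is
     * actually freed later, outside any iteration, by the async free
     * cycle (freeClientsInAsyncFreeQueue()). */
    if (matchesKillFilter(c)) freeClientAsync(c);
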
Madelyn Olson
8802cbbcde EAGAIN for TLS during diskless load 2020-05-22 12:37:59 +02:00
Qu Chen
5d59bbb6d9 Always disconnect chained replicas when the replica performs PSYNC with the master, to avoid a replication offset mismatch between the master and chained replicas. 2020-05-22 12:37:59 +02:00
antirez
ae73a26f14 Remove the client from CLOSE_ASAP list before caching the master.
This was broken in 146201c: we identified a crash in the CI; what
was happening before the fix was the following:

1. The client gets in the async free list.
2. However freeClient() gets called again against the same client
   which is a master.
3. The client arrived in freeClient() with the CLOSE_ASAP flag set.
4. The master gets cached, but NOT removed from the CLOSE_ASAP linked
   list.
5. The cached master client was then freed anyway, since it was
   still in the CLOSE_ASAP list.
6. Redis accessed a freed cached master.

This is what the crash looked like:

=== REDIS BUG REPORT START: Cut & paste starting from here ===
1092:S 16 May 2020 11:44:09.731 # Redis 999.999.999 crashed by signal: 11
1092:S 16 May 2020 11:44:09.731 # Crashed running the instruction at: 0x447e18
1092:S 16 May 2020 11:44:09.731 # Accessing address: 0xffffffffffffffff
1092:S 16 May 2020 11:44:09.731 # Failed assertion:  (:0)

------ STACK TRACE ------
EIP:
src/redis-server 127.0.0.1:21300(readQueryFromClient+0x48)[0x447e18]

And the 0xffff address access likely comes from accessing an SDS that is
set to NULL (we read at a -1 offset to access the header).
2020-05-16 18:04:17 +02:00
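
The shape of the fix, sketched in simplified form (condensed from the
intent of replicationCacheMaster(), not the literal diff):

    /* Before caching the master for a later PSYNC, take it out of the
     * CLOSE_ASAP queue, otherwise the async free cycle would free the
     * cached master behind our back (step 5 above). */
    if (c->flags & CLIENT_CLOSE_ASAP) {
        listNode *ln = listSearchKey(server.clients_to_close,c);
        serverAssert(ln != NULL);
        listDelNode(server.clients_to_close,ln);
        c->flags &= ~CLIENT_CLOSE_ASAP;
    }
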
antirez
22a6d152f3 Cache master without checking deferred close flags.
The context is issue #7205: since the introduction of threaded I/O we close
clients asynchronously by default from readQueryFromClient(). So we
should no longer prevent the caching of the master client, for a later
incremental PSYNC, just because such flags are set. However we also don't
want the master client to be cached with such flags (it would be closed
immediately after being restored). And yet we want a way to understand
whether a master was closed because of a protocol error, and in that case
prevent the caching.
2020-05-15 22:23:24 +02:00
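
A sketch of the resulting policy, assuming a dedicated flag (here
called CLIENT_PROTOCOL_ERROR) marks masters closed for protocol errors:

    void replicationCacheMaster(client *c) {
        /* A master closed due to a protocol error must not be cached:
         * resuming that replication stream would be unsafe. */
        if (c->flags & CLIENT_PROTOCOL_ERROR) {
            freeClientAsync(c);
            return;
        }
        /* Otherwise clear the deferred-close flags before caching, so
         * the restored master is not closed right after reinstall. */
        c->flags &= ~(CLIENT_CLOSE_ASAP|CLIENT_CLOSE_AFTER_REPLY);
        /* ... unlink the client and store it in server.cached_master ... */
    }
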
Oran Agra
58619c1286 Keep track of meaningful replication offset in replicas too
Now both the master and the replicas keep track of the last replication
offset that contains meaningful data (ignoring the trailing PINGs), and
both trim that tail from the replication backlog, and from the offset
they try to use for PSYNC.

The implication is that if a replica missed some PINGs, or even has
extra PINGs that the promoted replica doesn't have, it'll still be able
to PSYNC (avoiding a full sync).

The downside (which was already committed) is that replicas running old
code may fail to PSYNC, since the promoted replica trims PINGs from its
backlog.

This commit adds a test that reproduces several cases of promotions and
demotions with stale and non-stale PINGs.

Background:
The meaningful offset on the master was added recently to solve a problem where
the master is left all alone, injecting PINGs into its backlog when no one is
listening, and then gets demoted and tries to replicate from a replica that didn't
have any of the PINGs (or at least not the last ones).

However, consider this case:
master A has two replicas (B and C) replicating directly from it.
There's no traffic at all, and also no network issues, just many PINGs in the
tail of the backlog. Now B gets promoted, A becomes a replica of B, and C
remains a replica of A. When A gets demoted, it trims the PINGs from its
backlog, and successfully replicates from B. However, C is still aware of
these PINGs: when it disconnects and re-connects to A, it'll ask for something
that's not in the backlog anymore (since A trimmed the tail of its backlog),
and be forced to do a full sync (something it didn't have to do before the
meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and there; it
turns out the reason was PINGs. Investigating it shows the following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and #3 is replica of #1
now we see that when #1 is demoted it prints:
17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
and when #3 connects to the demoted #1, #1 says:
17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

So the issue here is that the meaningful offset feature saved the day for the
demoted master (since it needed to sync from a replica that didn't get the last
PING), but it didn't help one of the other replicas, which did get the last PING.
2020-04-27 15:52:49 +02:00
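
In condensed form, the trimming both sides now perform before a PSYNC
attempt looks like this sketch (real server struct field names, but the
surrounding bookkeeping is omitted):

    /* Rewind the replication offset past the trailing PINGs, and drop
     * the same bytes from the tail of the circular backlog. */
    long long delta = server.master_repl_offset -
                      server.master_repl_meaningful_offset;
    if (delta > 0) {
        server.master_repl_offset = server.master_repl_meaningful_offset;
        if (server.repl_backlog_histlen >= delta) {
            server.repl_backlog_histlen -= delta;
            server.repl_backlog_idx =
                (server.repl_backlog_idx + server.repl_backlog_size - delta)
                % server.repl_backlog_size;
        }
    }
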
zhaozhao.zz
a85d4dd136 PSYNC2: reset backlog_idx and master_repl_offset correctly 2020-03-31 16:57:20 +02:00
antirez
577656841f PSYNC2: fix backlog_idx when adjusting for meaningful offset
See #7002.
2020-03-31 16:56:46 +02:00
antirez
ce73158a9c PSYNC2: meaningful offset implemented.
A very commonly reported operational problem with Redis master-replica
sets is that, once the master becomes unavailable for some reason,
especially because of network problems, many times it won't be able to
perform a partial resynchronization with the new master once it rejoins
the partition, for the following reason:

1. The master becomes isolated, however it keeps sending PINGs to the
replicas. Such PINGs will never be received since the link connection is
actually already severed.
2. On the other side, one of the replicas will turn into the new master,
setting its secondary replication ID offset to the one of the last
command received from the old master: this offset will not include the
PINGs sent by the master once the link was already disconnected.
3. When the master rejoins the partition and is turned into a replica, its
offset will be too advanced because of the PINGs, so a PSYNC will fail,
and a full synchronization will be required.

Related to issue #7002 and other discussion we had in the past around
this problem.
2020-03-25 15:55:24 +01:00
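
The core of the tracking can be sketched as follows; the is_ping
argument is a simplification of how the real code distinguishes PING
propagation, not the actual signature:

    /* Every write to the replication stream advances master_repl_offset,
     * but only non-PING traffic advances the meaningful offset. */
    void feedReplicationBacklog(void *ptr, size_t len, int is_ping) {
        server.master_repl_offset += len;
        if (!is_ping)
            server.master_repl_meaningful_offset = server.master_repl_offset;
        /* ... append `len` bytes from ptr at repl_backlog_idx, wrapping ... */
    }
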
antirez
484a14ebde Improve comments of replicationCacheMasterUsingMyself(). 2020-03-25 15:55:24 +01:00
Johannes Truschnigg
9f1b635067 Signal systemd readiness after Partial Resync
"Partial Resynchronization" is a special variant of replication success
that we have to tell systemd about if it is managing redis-server via a
Type=Notify service unit.
2020-03-12 15:53:47 +01:00
antirez
749a4e962a Make sync RDB deletion configurable. Default to no. 2020-03-05 12:51:15 +01:00
antirez
a0fec660f6 Check that the file exists in removeRDBUsedToSyncReplicas(). 2020-03-05 12:51:15 +01:00
antirez
93328171ca Log RDB deletion in persistence-less instances. 2020-03-05 12:51:14 +01:00
antirez
00e53fb140 Introduce bg_unlink(). 2020-03-05 12:51:14 +01:00
antirez
e91ca9fee9 Remove RDB files used for replication in persistence-less instances. 2020-03-05 12:51:14 +01:00
Hengjian Tang
6718d5d375 Modify the read buffer size to match the write buffer size PROTO_IOBUF_LEN defined earlier 2020-02-27 18:02:30 +01:00
Guy Benoish
05b2e8c5c8 Diskless-load emptyDb-related fixes
1. Call emptyDb even in case of diskless-load: We want modules
   to get the same FLUSHDB event as disk-based replication.
2. Do not fire any module events when flushing the backups array.
3. Delete redundant call to signalFlushedDb (Called from emptyDb).
2020-02-12 14:17:54 +01:00
Oran Agra
2af4039b3e reduce repeated calls to use_diskless_load
this function possibly iterates over the module list
2020-02-12 14:15:56 +01:00
Oran Agra
ad1c21283e move restartAOFAfterSYNC from replicaofCommand to replicationUnsetMaster
replicationUnsetMaster can be called from other places, not just
replicaofCommand, and all of these need to restart AOF
2020-02-12 14:15:56 +01:00
ShooterIT
4e0c2ff1d8 Rename rdb asynchronously 2020-01-10 13:16:25 +01:00
Johannes Truschnigg
fb25e68f91 Use libsystemd's sd_notify for communicating redis status to systemd
Instead of replicating a subset of libsystemd's sd_notify(3) internally,
use the dynamic library provided by systemd to communicate with the
service manager.

When systemd supervision was auto-detected or configured, communicate
the actual server status (i.e. "Loading dataset", "Waiting for
master<->replica sync") to systemd, instead of declaring readiness right
after initializing the server process.
2019-11-19 18:55:44 +02:00
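
A self-contained sketch of the libsystemd calls involved (build with
-lsystemd; the status strings here are illustrative):

    #include <systemd/sd-daemon.h>

    int main(void) {
        /* Report transient states while the server cannot serve yet. */
        sd_notify(0, "STATUS=Loading dataset");
        /* ... load RDB/AOF, wait for master<->replica sync ... */

        /* Only now declare readiness to the service manager. */
        sd_notify(0, "READY=1\nSTATUS=Ready to accept connections");
        return 0;
    }
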
Oran Agra
68c6aacf3b Modules hooks: complete missing hooks for the initial set of hooks
* replication hooks: role change, master link status, replica online/offline
* persistence hooks: saving, loading, loading progress
* misc hooks: cron loop, shutdown, module loaded/unloaded
* change the way hooks test work, and add tests for all of the above

startLoading() now gets a flag indicating what is being loaded.
stopLoading() now gets an indication of success or failure.
Adding startSaving() and stopSaving() with similar args and role.
2019-10-29 17:59:09 +02:00
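
A sketch of consuming one of these hooks from a module through the
server events API (error handling trimmed):

    #include "redismodule.h"

    static void roleChanged(RedisModuleCtx *ctx, RedisModuleEvent e,
                            uint64_t sub, void *data) {
        REDISMODULE_NOT_USED(e);
        REDISMODULE_NOT_USED(data);
        RedisModule_Log(ctx, "notice", "Role is now %s",
            sub == REDISMODULE_EVENT_REPLROLECHANGED_NOW_MASTER ?
                "master" : "replica");
    }

    int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv,
                           int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        if (RedisModule_Init(ctx,"rolewatch",1,REDISMODULE_APIVER_1) ==
            REDISMODULE_ERR) return REDISMODULE_ERR;
        return RedisModule_SubscribeToServerEvent(ctx,
            RedisModuleEvent_ReplicationRoleChanged, roleChanged);
    }
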
Wander Hillen
3ca35b8024 Merge branch 'unstable' into minor-typos 2019-10-25 10:18:26 +02:00
Yossi Gottlieb
85d7f38136 Merge remote-tracking branch 'upstream/unstable' into tls 2019-10-16 17:08:07 +03:00
Salvatore Sanfilippo
15e422686b Merge pull request #6429 from charsyam/feature/typo-slave
[trivial] fix typos salves to slaves in replication.c
2019-10-10 14:56:43 +02:00
antirez
9aba7564d6 Cluster: fix memory leak of cached master.
This is what happened:

1. Instance starts, is a slave in the cluster configuration, but
actually server.masterhost is not set, so technically the instance
is acting like a master.

2. loadDataFromDisk() calls replicationCacheMasterUsingMyself() even if
the instance is a master, in the case it is logically a slave and the
cluster is enabled. So now we have a cached master even if the instance
is practically configured as a master (from the POV of
server.masterhost value and so forth).

3. clusterCron() sees that the instance needs to replicate from its
master, because logically it is a slave, so it calls
replicationSetMaster() which will in turn call
replicationCacheMasterUsingMyself(): before this commit, this call would
overwrite the old cached master, creating a memory leak.
2019-10-10 10:23:34 +02:00
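
The defensive shape of the fix, sketched (the real function also builds
the cached master from the instance's own replication state):

    void replicationCacheMasterUsingMyself(void) {
        /* If a cached master already exists (e.g. created at startup by
         * loadDataFromDisk() as described above), discard it instead of
         * overwriting the pointer and leaking the client. */
        if (server.cached_master) replicationDiscardCachedMaster();
        /* ... cache a master client synthesized from ourselves ... */
    }
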
Yossi Gottlieb
df08b624bd TLS: Configuration options.
Add configuration options for TLS protocol versions, ciphers/cipher
suites selection, etc.
2019-10-07 21:07:27 +03:00
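
For illustration, the kind of redis.conf directives this introduces
(paths and values are examples only):

    tls-port 6379
    tls-cert-file /etc/redis/tls/redis.crt
    tls-key-file /etc/redis/tls/redis.key
    tls-ca-cert-file /etc/redis/tls/ca.crt
    tls-protocols "TLSv1.2 TLSv1.3"
    tls-ciphers DEFAULT:!MEDIUM
    tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256
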
Oran Agra
6fd5ff8c98 diskless replication rdb transfer uses pipe, and writes to sockets from the parent process.
misc:
- handle SSL_has_pending by iterating through these in beforeSleep, and setting a timeout of 0 to aeProcessEvents
- fix issue with epoll signaling EPOLLHUP and EPOLLERR only to the write handlers. (needed to detect the rdb pipe was closed)
- add key-load-delay config for testing
- trim connShutdown which is no longer needed
- rioFdsetWrite -> rioFdWrite - simplified since there's no longer need to write to multiple FDs
- don't detect rdb child exited (don't call wait3) until we detect the pipe is closed
- Cleanup bad optimization from rio.c, add another one
2019-10-07 21:06:30 +03:00
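
A condensed sketch of the new data flow; rdbSaveToPipe() and
writeToReplicas() are hypothetical names, and the real code is
event-driven with partial-write handling:

    int pipefds[2];
    if (pipe(pipefds) == -1) return C_ERR;

    if (fork() == 0) {
        /* Child: serialize the dataset into the pipe, never touching
         * the replica connections (so TLS state stays in the parent). */
        rdbSaveToPipe(pipefds[1]);      /* hypothetical */
        exitFromChild(0);
    } else {
        /* Parent: read chunks from the pipe and fan them out to every
         * replica socket from the event loop. */
        char buf[16384];
        ssize_t n;
        while ((n = read(pipefds[0],buf,sizeof(buf))) > 0)
            writeToReplicas(buf,n);     /* hypothetical fan-out */
    }
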
Yossi Gottlieb
10ffeb03e4 TLS: Connections refactoring and TLS support.
* Introduce a connection abstraction layer for all socket operations and
integrate it across the code base.
* Provide an optional TLS connections implementation based on OpenSSL.
* Pull a newer version of hiredis with TLS support.
* Tests, redis-cli updates for TLS support.
2019-10-07 21:06:13 +03:00
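
The abstraction is essentially a per-transport vtable; a reduced sketch
of the idea (the real ConnectionType in connection.h has more methods):

    typedef struct connection connection;
    typedef void (*ConnectionCallbackFunc)(connection *conn);

    typedef struct ConnectionType {
        int  (*write)(connection *conn, const void *data, size_t len);
        int  (*read)(connection *conn, void *buf, size_t len);
        void (*close)(connection *conn);
        int  (*set_read_handler)(connection *conn,
                                 ConnectionCallbackFunc handler);
    } ConnectionType;

    /* Callers go through connRead()/connWrite() wrappers and never know
     * whether the transport is a plain socket or TLS. */
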
charsyam
d453899c0f fix typo salves to slaves 2019-10-07 23:48:11 +09:00
antirez
1ef0c2f630 Function renamed hasForkChild() -> hasActiveChildProcess(). 2019-09-27 12:03:09 +02:00
Salvatore Sanfilippo
5c5761bcd6 Merge branch 'unstable' into modules_fork 2019-09-27 11:24:06 +02:00
Salvatore Sanfilippo
f11b71b9b8 Merge pull request #6235 from oranagra/module_rdb_load_errors
Allow modules to handle RDB loading errors.
2019-09-26 11:52:42 +02:00
antirez
e87768e8ca Replication: clarify why repl_put_online_on_ack exists at all. 2019-08-05 17:38:15 +02:00
Oran Agra
4b9c5d8536 Log message when modules prevent diskless-load 2019-07-30 16:32:58 +03:00
Oran Agra
9fa285523c Avoid diskless-load if modules did not declare they handle read errors 2019-07-30 15:11:57 +03:00
Oran Agra
e70fbad802 Module API for Forking
* create module API for forking child processes.
* refactor duplicate code around creating and tracking forks by AOF and RDB.
* child processes listen to SIGUSR1 and die via exitFromChild in order to
  eliminate a valgrind warning of an unhandled signal.
* note that BGSAVE error reply has changed.

valgrind error is:
  Process terminating with default action of signal 10 (SIGUSR1)
2019-07-17 16:40:24 +03:00
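
A sketch of the module-facing API this adds, in condensed usage form
(spawnSnapshotChild() is a hypothetical caller):

    /* Invoked in the parent when the child terminates. */
    void forkDone(int exitcode, int bysignal, void *user_data) {
        (void)exitcode; (void)bysignal; (void)user_data;
        /* e.g. record whether the snapshot child succeeded. */
    }

    int spawnSnapshotChild(RedisModuleCtx *ctx) {
        (void)ctx;
        int pid = RedisModule_Fork(forkDone, NULL);
        if (pid == 0) {
            /* Child: work on the copy-on-write snapshot, then exit
             * through the API rather than calling exit() directly. */
            RedisModule_ExitFromChild(0);
        }
        return pid; /* -1 on error, child pid in the parent */
    }
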
antirez
ecd65cf9cf Diskless replica: fix disklessLoadRestoreBackups() bug. 2019-07-10 12:36:26 +02:00
antirez
26c4e8a965 Diskless replica: refactoring of DBs backups. 2019-07-10 11:42:26 +02:00
antirez
6e9ffa93eb Diskless replica: a few aesthetic changes to replication.c. 2019-07-08 18:32:47 +02:00
Oran Agra
29754ebe22 diskless replication on the slave side (don't store the rdb to a file), plus some other related fixes
The implementation of diskless replication was previously diskless only on the master side.
The slave side was still storing the received rdb file to disk before loading it back in and parsing it.

This commit adds two modes to load rdb directly from socket:
1) when-empty
2) using "swapdb"
the third mode of using diskless slave by flushdb is risky and currently not included.

other changes:
--------------
distinguish between AOF configuration and state, so that we can re-enable AOF only when sync eventually
succeeds (and not when exiting from readSyncBulkPayload after a failed attempt);
otherwise a CONFIG GET and INFO during rdb loading would have lied

When loading rdb from the network, don't kill the server on short read (that can be a network error)

Fix rdb check when performed on preamble AOF

tests:
run replication tests for diskless slave too
make replication test a bit more aggressive
Add test for diskless load swapdb
2019-07-08 15:37:48 +03:00
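
The two loading modes correspond to the repl-diskless-load directive;
shown with the default for comparison:

    # Default: store the received RDB to disk first, then load it.
    repl-diskless-load disabled

    # Parse the RDB straight from the socket, but only when the current
    # dataset is empty (nothing is lost if the transfer fails).
    repl-diskless-load on-empty-db

    # Keep the current dataset aside in RAM while loading, and restore
    # it if the transfer fails.
    repl-diskless-load swapdb
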
antirez
9eea57cc31 Narrow the effects of PR #6029 to the exact state.
CLIENT PAUSE may be used, in other contexts, for a long time, making all
the slaves time out. Better for now to be more specific about what
should disable sending PINGs.

An alternative to that would be to virtually refresh the slave
interactions when clients are paused, however for now I went for this
more conservative solution.
2019-05-15 12:16:43 +02:00
Salvatore Sanfilippo
3f4f7aff1a Merge pull request #6029 from chendq8/clientpause
fix cluster failover timeout
2019-05-15 12:03:19 +02:00
chendianqiang
1e29403134 stop pings when clients are paused 2019-04-17 21:20:10 +08:00
Salvatore Sanfilippo
641359787c Merge pull request #3830 from oranagra/diskless_capa_pr
several bugfixes to diskless replication
2019-03-22 17:41:40 +01:00
antirez
7e191d3ea3 More sensible name for function: restartAOFAfterSYNC().
Related to #3829.
2019-03-21 17:21:29 +01:00
antirez
4c49e7ad6f Mostly aesthetic changes to restartAOF().
See #3829.
2019-03-21 17:18:24 +01:00
Oran Agra
544b9b0826 diskless replication - notify slave when rdb transfer failed
in diskless replication, the master was not notifying the slave that the rdb transfer
terminated on error, leaving the slave to wait for the replication timeout
2019-03-20 17:46:19 +02:00