205 Commits

Author SHA1 Message Date
Salvatore Sanfilippo
acfece8854 Merge pull request #4412 from soloestoy/bugfix-psync2
PSYNC2: safely free the backlog when the time limit is reached, and other fixes
2017-11-24 10:56:18 +01:00
zhaozhao.zz
1e4dbdd0a1 PSYNC2: persist cached_master's dbid inside the RDB 2017-11-22 12:11:26 +08:00
zhaozhao.zz
07842aaaab PSYNC2: make repl_stream_db never be -1
This means that after this change all the replication
info in the RDB is valid, which also makes it possible to
distinguish files generated by this version from older ones.
2017-11-22 12:05:34 +08:00
antirez
fe6dfb3136 Fix saving of zero-length lists.
Normally in modern Redis you can't create zero-length lists, however it's
possible to load them from old RDB files generated, for instance, with
Redis 2.8 (see issue #4409). The "Right Thing" would be to not load such
lists at all, but this would require hooking into random places in rdb.c
in an inelegant way, for a problem that is, at this point, minor at best.

Instead, this commit just fixes the fact that zero-length lists,
materialized as quicklists with the first node set to NULL, were
iterated in the wrong way while being saved, leading to a crash
(see the sketch after this entry).

The other parts of the list implementation are apparently able to deal
with empty lists correctly, even if they are no longer a thing.
2017-11-06 12:37:03 +01:00
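A minimal sketch of the kind of guard involved, with hypothetical, simplified types and names (the actual fix lives in rdb.c's quicklist saving path):

```c
#include <stdio.h>

/* Hypothetical, simplified quicklist types (not the real Redis ones). */
typedef struct quicklistNode {
    struct quicklistNode *next;
    const char *payload;            /* stands in for the ziplist */
} quicklistNode;

typedef struct quicklist {
    quicklistNode *head;            /* NULL for a zero-length list */
    unsigned long count;
} quicklist;

/* The crash came from walking from 'head' unconditionally; this guard
 * handles the empty case before touching any node. */
int save_list(const quicklist *ql) {
    if (ql->head == NULL || ql->count == 0) {
        printf("saving empty list\n");   /* emit a zero-length entry */
        return 0;
    }
    for (const quicklistNode *n = ql->head; n != NULL; n = n->next)
        printf("saving node: %s\n", n->payload);
    return 0;
}

int main(void) {
    quicklist empty = { NULL, 0 };
    return save_list(&empty);            /* no crash on the empty list */
}
```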
zhaozhao.zz
98f1bcf1cb PSYNC2: clarify the scenario when repl_stream_db can be -1 2017-11-02 10:45:33 +08:00
zhaozhao.zz
b99ca4211a PSYNC2 & RDB: fix the missing rdbSaveInfo for BGSAVE 2017-11-01 17:52:43 +08:00
antirez
0d68dd2fad PSYNC2: More refinements related to #4316. 2017-09-20 11:28:13 +02:00
zhaozhao.zz
019c7fa546 PSYNC2: make persisting replication info more solid
This commit is a reinforcement of commit af78ec8.

1. Replication information can be stored when the RDB file is
generated by a master using server.slaveseldb when server.repl_backlog
is not NULL, otherwise repl_stream_db is set to -1. That's safe, because
a NULL server.repl_backlog will trigger a full synchronization, and the
master will then send a SELECT command to the replication stream (see
the sketch after this entry).
2. Only do rdbSave* when rsiptr is not NULL; if we did rdbSave*
without rdbSaveInfo, the slave would miss repl-stream-db.
3. Save the replication information also in the case of the SAVE
command, the FLUSHALL command, and DEBUG reload.
2017-09-20 11:18:10 +02:00
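A rough sketch of the selection logic in point 1 above, with hypothetical structure and field names rather than the actual rdb.c code:

```c
/* Hypothetical, simplified stand-ins for the server state and the
 * rdbSaveInfo structure discussed above. */
struct server_state {
    void *repl_backlog;   /* non-NULL while a replication backlog exists */
    int slaveseldb;       /* DB last SELECTed on the replication stream */
};

typedef struct rdbSaveInfo {
    int repl_stream_db;   /* -1 means "no valid stream DB" */
} rdbSaveInfo;

/* A master can persist the stream DB only while it has a backlog;
 * otherwise -1 is safe, because a NULL backlog forces a full sync and
 * the master re-emits SELECT on the new replication stream. */
void populate_save_info(const struct server_state *srv, rdbSaveInfo *rsi) {
    rsi->repl_stream_db = srv->repl_backlog ? srv->slaveseldb : -1;
}
```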
antirez
af78ec8ccf PSYNC2: Fix the way replication info is saved/loaded from RDB.
This commit attempts to fix a number of bugs reported in #4316.
They are related to the way replication info like replication ID,
offsets, and currently selected DB in the master client, are stored
and loaded by Redis. In order to avoid inconsistencies the changes in
this commit try to enforce that:

1. Replication information is only stored when the RDB file is
generated by a slave that has a valid 'master' client, so that we can
always extract the currently selected DB.
2. When replication information is persisted in the RDB file, either
all the info needed for a successful PSYNC is persisted, or nothing is.
3. The RDB replication information is only loaded if the instance is
configured as a slave, otherwise a master could start with IDs that relate
to a different history of the data set, and still retain such IDs in the
future while receiving unrelated writes.
2017-09-19 23:03:39 +02:00
antirez
1aab881fe1 AOF check utility: ability to check files with RDB preamble. 2017-07-10 13:38:23 +02:00
antirez
4e0dab2c26 Free IO context if any in RDB loading code.
Thanks to @oranagra for spotting this bug.
2017-07-06 11:20:49 +02:00
antirez
2a26f2a9f6 RDB modules values serialization format version 2.
The original RDB serialization format was not parsable without the
module loaded, because the structure was managed only by the module
itself. Moreover RDB is a streaming protocol, in the sense that it is
both produced in an append-only fashion and sometimes sent directly
to the socket (in the case of diskless replication).

The fact that module values cannot be parsed without the relevant
module loaded is a problem in many ways: RDB checking tools must have
the modules loaded even for doing things not involving the value at all,
like splitting an RDB into N RDBs by key or the like, or just checking
the RDB for sanity.

In theory module values could be just a blob of data with a prefixed
length, in order for us to be able to skip them. However prefixing the
values with a length would mean one of the following:

1. Being able to write some data at a previous offset. This breaks
streaming.
2. Buffering values before outputting them. This hurts performance.
3. Having some chunked RDB output format. This breaks simplicity.

Moreover, the above solution still makes module values a totally opaque
matter, with the following problems:

1. The RDB check tool can just skip the value without being able to at
least check the general structure. For datasets composed mostly of
module values this means checking only the outer level of the RDB,
without actually doing any check on most of the data itself.
2. It is not possible to recover or process data whose module no longer
exists in the future, or is unknown.

So this commit implements a different solution. The modules RDB
serialization API is composed of well-defined calls to store integers,
floats, doubles or strings. After this commit, the parts generated by
the module API have a one-byte prefix for each of the above emitted
parts, and there is a final EOF byte as well. So even if we don't know
exactly how to interpret a module value, we can always parse it at a
high level, check the overall structure, understand the types used to
store the information, and easily skip the whole value (see the sketch
after this entry).

The change is backward compatible: older RDB files can still be loaded
since the new encoding uses a new RDB type: MODULE_2 (of value 7).
The commit also implements the ability to check RDB files for sanity
taking advantage of the new feature.
2017-06-27 13:19:16 +02:00
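A sketch of how a checking tool can walk such a value at a high level. The opcodes, names, and fixed-width encodings below are illustrative only; the real format uses Redis' RDB constants and variable-length encodings:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative one-byte opcodes for the parts a module may emit. */
enum {
    OP_UINT = 1, OP_SINT = 2, OP_DOUBLE = 3, OP_FLOAT = 4,
    OP_STRING = 5, OP_EOF = 0
};

/* Walk a module value at a high level: we cannot interpret the data,
 * but we can verify the structure and skip the whole value. */
int skip_module_value(FILE *rdb) {
    for (;;) {
        int op = fgetc(rdb);
        switch (op) {
        case OP_EOF:
            return 0;                   /* well-formed value, fully skipped */
        case OP_UINT: case OP_SINT: {
            uint64_t v;
            if (fread(&v, sizeof v, 1, rdb) != 1) return -1;
            break;
        }
        case OP_DOUBLE: { double d; if (fread(&d, sizeof d, 1, rdb) != 1) return -1; break; }
        case OP_FLOAT:  { float f;  if (fread(&f, sizeof f, 1, rdb) != 1) return -1; break; }
        case OP_STRING: {
            uint32_t len;               /* illustrative length prefix */
            if (fread(&len, sizeof len, 1, rdb) != 1) return -1;
            if (fseek(rdb, len, SEEK_CUR) != 0) return -1;
            break;
        }
        default:
            return -1;                  /* unknown opcode: corrupt value */
        }
    }
}
```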
antirez
e5ec0a9f4b Collect fork() timing info only if fork succeeded. 2017-05-19 11:10:36 +02:00
antirez
ff81c5802a Clarify why we save ziplist elements in reverse order.
Also get rid of variables that are now kinda redundant, since the
dictionary iterator was removed.

This is related to PR #3949.
2017-04-18 11:01:47 +02:00
spinlock
31d8ae93eb rdb: saving skiplist in reverse order to accelerate the deserialisation process 2017-04-17 13:22:34 +08:00
antirez
86981fce60 Replication: fix the infamous key leakage of writable slaves + EXPIRE.
BACKGROUND AND USE CASE

Redis slaves are normally read only, however they support a "writable"
mode which is very handy when scaling reads on slaves that actually
need write operations in order to access data. For instance imagine
having slaves replicating certain Set keys from the master. When
accessing the data on the slave, we want to perform intersections between
such Set values. However we don't want to intersect each time: caching
the intersection for some time is often a good idea.

To do so, it is possible to setup a slave as a writable slave, and
perform the intersection on the slave side, perhaps setting a TTL on the
resulting key so that it will expire after some time.

THE BUG

Problem: in order to have consistent replication, expiring keys in
Redis replication is up to the master, which synthesizes DEL operations
to send in the replication stream. However slaves logically expire keys
by hiding them from clients' read attempts, so that even if the master
did not promptly send a DEL, clients still see logically expired keys
as non-existing.

Because slaves don't actively expire keys by actually evicting them, but
just mask them from the point of view of read operations, if a key is
created in a writable slave and an expire is set, the key will be leaked
forever:

1. No DEL will be received from the master, which does not know about
such a key at all.

2. No eviction will be performed by the slave, since eviction must stay
disabled on slaves (it's up to the master), otherwise consistency of the
data is lost.

THE FIX

In order to fix the problem, the slave should be able to tag keys that
were created in the slave side and have an expire set in some way.

My solution involves an additional dictionary, created by the writable
slave only if needed. The dictionary is obviously keyed by the key name
that we need to track: all the keys that are set with an expire directly
by a client writing to the slave are tracked.

The value in the dictionary is a bitmap of all the DBs where such a key
name needs to be tracked, so that we can use a single dictionary to
track keys in all the DBs used by the slave (this actually limits the
solution to the first 64 DBs, but the default with Redis is to use 16
DBs); see the sketch after this entry.

This solution has both a small complexity and a small CPU penalty,
which is actually zero when the feature is not used. The slave-side
eviction is encapsulated in code which is not coupled with the rest of
the Redis core, except for the hook to track the keys.

TODO

I'm doing the first smoke tests to see if the feature works as expected:
so far so good. Unit tests should be added before merging into the
4.0 branch.
2016-12-13 10:59:54 +01:00
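A sketch of the tracking structure described above, using hypothetical names and a flat array in place of the real dict, just to show the key-name to DB-bitmap mapping:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the slave-side expire tracking dictionary:
 * one entry per key name, whose value is a bitmap of the DBs (up to 64,
 * as noted above) in which that key name must be tracked. */
#define MAX_TRACKED 128

struct tracked { const char *key; uint64_t db_bitmap; };
static struct tracked table[MAX_TRACKED];
static int tracked_count = 0;

/* Called when a client writing to the slave sets an expire on a key. */
void track_slave_key_expire(const char *key, int dbid) {
    for (int i = 0; i < tracked_count; i++) {
        if (strcmp(table[i].key, key) == 0) {
            table[i].db_bitmap |= (uint64_t)1 << dbid;
            return;
        }
    }
    if (tracked_count < MAX_TRACKED)
        table[tracked_count++] = (struct tracked){ key, (uint64_t)1 << dbid };
}

/* Slave-side eviction cycle: check only the tracked keys. */
int key_is_tracked(const char *key, int dbid) {
    for (int i = 0; i < tracked_count; i++)
        if (strcmp(table[i].key, key) == 0)
            return (int)((table[i].db_bitmap >> dbid) & 1);
    return 0;
}
```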
Chris Lamb
fd6546e08d src/rdb.c: Correct "whenver" -> "whenever" typo. 2016-12-01 13:16:30 +01:00
antirez
996bc7b3e1 PSYNC2: Save replication ID/offset on RDB file.
This means that stopping a slave and restarting it will still make it
able to PSYNC with the master. Moreover the master itself will retain
its ID/offset, in case it gets turned into a slave, or in case a slave
tries to PSYNC with it with an exactly updated offset (otherwise there
is no backlog).

This change was possible thanks to PSYNC v2 that makes saving the current
replication state much simpler.
2016-11-10 12:35:29 +01:00
antirez
f53ed7a969 PSYNC2: different improvements to Redis replication.
The gist of the changes is that now partial resynchronizations between
slaves and masters (without the need of a full resync with RDB transfer
and so forth) work in a number of cases where they were impossible
in the past. For instance:

1. When a slave is promoted to master, the slaves of the old master can
partially resynchronize with the new master.

2. Chained slaves (slaves of slaves) can be moved to replicate to other
slaves or the master itself, without requiring a full resync.

3. The master itself, after being turned into a slave, is able to
partially resynchronize with the new master, when it joins replication
again.

In order to obtain this, the following main changes were operated:

* Slaves also take a replication backlog, not just masters.

* Same stream replication for all the slaves and sub-slaves. The
replication stream is identical from the top level master to its slaves,
and is also the same from the slaves to their sub-slaves and so forth.
This means that if a slave is later promoted to master, it has the
same replication backlog, and can partially resynchronize with its
slaves (that were previously slaves of the old master).

* A given replication history is no longer identified by the `runid` of
a Redis node. There is instead a `replication ID` which changes every
time the instance has a new history no longer coherent with the past
one. So, for example, slaves publish the same replication history as
their master, however when they are turned into masters, they publish
a new replication ID, but still remember the old ID, so that they are
able to partially resynchronize with slaves of the old master (up to a
given offset).

* The replication protocol was slightly modified so that a new extended
+CONTINUE reply from the master is able to inform the slave of a
replication ID change.

* REPLCONF CAPA is used in order to notify masters that a slave is able
to understand the new +CONTINUE reply.

* The RDB file was extended with an auxiliary field that is able to
select a given DB after loading in the slave, so that the slave can
continue receiving the replication stream from the point it was
disconnected without requiring the master to insert "SELECT" statements.
This is useful in order to guarantee the "same stream" property, because
the slave must be able to accumulate an identical backlog.

* Slave pings to sub-slaves are now sent in a special form when the
top-level master is disconnected, in order not to interfere with the
replication stream. We just use out of band "\n" bytes as in other parts
of the Redis protocol.

An old design document is available here:

https://gist.github.com/antirez/ae068f95c0d084891305

However the implementation is not identical to the description because
during the work to implement it, different changes were needed in order
to make things work well. A sketch of the replication ID bookkeeping
follows this entry.
2016-11-09 15:37:15 +01:00
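A condensed sketch of the two-history bookkeeping described above, with a hypothetical struct standing in for the real server fields:

```c
#include <string.h>

/* Hypothetical condensed view of the PSYNC2 state described above: a
 * node remembers its current replication ID and, after a role change,
 * the previous ID together with the offset up to which it is valid. */
struct repl_state {
    char replid[41];         /* current replication ID (40 hex chars + NUL) */
    char replid2[41];        /* previous ID, kept after promotion */
    long long second_offset; /* last offset valid for replid2 */
    long long master_offset; /* current replication offset */
};

/* On promotion to master: publish a new ID, but remember the old one so
 * that slaves of the old master can still partially resynchronize.
 * 'new_id' must be a 40-char hex string (41 bytes with the NUL). */
void shift_replication_id(struct repl_state *st, const char *new_id) {
    memcpy(st->replid2, st->replid, sizeof st->replid2);
    st->second_offset = st->master_offset;
    memcpy(st->replid, new_id, sizeof st->replid);
}

/* A PSYNC request can continue if it refers to either history, and for
 * the old one only up to the offset where that history ended. */
int can_partial_resync(const struct repl_state *st,
                       const char *req_id, long long req_offset) {
    if (strcmp(req_id, st->replid) == 0) return 1;
    if (strcmp(req_id, st->replid2) == 0 && req_offset <= st->second_offset)
        return 1;
    return 0;
}
```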
antirez
56f2406e09 Module: Ability to get context from IO context.
It was noted by @dvirsky that it is not possible to use string functions
when writing the AOF file. This is sometimes critical since command
rewriting may need to be built in the context of the AOF callback, and
without access to a context, given the limited set of types that the AOF
production functions accept, this can be an issue.

Moreover there are other needs that we can't anticipate regarding the
ability to use Redis Modules APIs using the context in order to build
representations to emit AOF / RDB.

Because of this a new API was added that allows the user to get a
temporary context from the IO context. The context, if obtained, is
automatically released when the RDB / AOF callback returns.

Calling the function to get the context multiple times always returns
the same one, since it is invalid to have more than a single context
(see the sketch below).
2016-10-06 17:09:26 +02:00
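A sketch of how a module's RDB save callback might use the new call, assuming the standard redismodule.h API (the value layout is made up):

```c
#include "redismodule.h"

/* RDB save callback for a hypothetical module type holding an int. */
void MyTypeRdbSave(RedisModuleIO *io, void *value) {
    /* The context obtained from the IO object is auto-released when the
     * callback returns; repeated calls return the same context. */
    RedisModuleCtx *ctx = RedisModule_GetContextFromIO(io);

    /* With a context available, string objects can be built and saved. */
    RedisModuleString *s =
        RedisModule_CreateStringPrintf(ctx, "%d", *(int *)value);
    RedisModule_SaveString(io, s);
}
```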
antirez
adcafbd283 Modules: API to save/load single precision floating point numbers.
When double precision is not needed, taking 2x the space in the
serialization is wasteful (see the sketch below).
2016-10-03 00:08:35 +02:00
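A sketch of the single-precision calls in a module type's callbacks, assuming the standard redismodule.h API (the type itself is made up):

```c
#include "redismodule.h"

/* Callbacks for a hypothetical module type that stores one float. */
void FloatTypeRdbSave(RedisModuleIO *io, void *value) {
    RedisModule_SaveFloat(io, *(float *)value);    /* 4 bytes, not 8 */
}

void *FloatTypeRdbLoad(RedisModuleIO *io, int encver) {
    (void)encver;                                  /* single encoding version */
    float *f = RedisModule_Alloc(sizeof(*f));
    *f = RedisModule_LoadFloat(io);
    return f;
}
```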
antirez
62e7179de9 Child -> Parent pipe for COW info transferring. 2016-09-19 13:45:20 +02:00
antirez
7e86bce6b0 zmalloc: zmalloc_get_smap_bytes_by_field() modified to work for any PID.
The goal is to get the copy-on-write amount of the child from the parent (see the sketch below).
2016-09-19 10:28:42 +02:00
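A simplified sketch of what such a helper does; the real function is zmalloc_get_smap_bytes_by_field(), and details here may differ:

```c
#include <stdio.h>
#include <string.h>

/* Simplified sketch: sum the kB values of one field (for example
 * "Private_Dirty:") across all mappings in /proc/<pid>/smaps. */
long long smap_bytes_by_field(const char *field, long pid) {
    char path[64], line[256];
    long long bytes = 0;
    snprintf(path, sizeof(path), "/proc/%ld/smaps", pid);
    FILE *fp = fopen(path, "r");
    if (!fp) return 0;
    size_t flen = strlen(field);
    while (fgets(line, sizeof(line), fp)) {
        if (strncmp(line, field, flen) == 0) {
            long long kb;
            if (sscanf(line + flen, "%lld", &kb) == 1) bytes += kb * 1024;
        }
    }
    fclose(fp);
    return bytes;
}
```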
antirez
19ba3dbb70 Merge branch 'aofrdb' into unstable 2016-09-09 15:03:21 +02:00
antirez
a7e65c9a19 Fix rdb.c var types when calling rdbLoadLen().
Technically as soon as Redis 64 bit gets proper support for loading
collections and/or DBs with more than 2^32 elements, the 32 bit version
should be modified in order to check if what we read from rdbLoadLen()
overflows. This would only apply to huge RDB files created with a 64 bit
instance and later loaded into a 32 bit instance.
2016-09-01 11:08:44 +02:00
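A sketch of the check described above, as a hypothetical helper:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helper: lengths come out of rdbLoadLen() as 64 bit
 * values; on a 32 bit build, a length that does not fit in size_t must
 * be rejected rather than silently truncated. */
int length_fits_host(uint64_t len, size_t *out) {
    if (len > (uint64_t)SIZE_MAX) return 0; /* only possible on 32 bit */
    *out = (size_t)len;
    return 1;
}
```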
antirez
16214c0658 RDB AOF preamble: WIP 3 (RDB loading refactoring). 2016-08-11 15:27:29 +02:00
antirez
3d6b2655b9 RDB AOF preamble: WIP 2. 2016-08-09 16:41:40 +02:00
antirez
805f8c2c90 RDB AOF preamble: WIP 1. 2016-08-09 11:07:32 +02:00
antirez
276dfeb3d0 Avoid simultaneous RDB and AOF child process.
This patch, written in collaboration with Oran Agra (@oranagra), is a
companion to 03a4432. Together the two patches make sure that the AOF
and RDB saving processes can never be spawned at the same time.
Previously, conditions that could lead to two saving processes running
at the same time were:

1. When AOF is enabled via CONFIG SET and an RDB saving process is
   already active.

2. When the SYNC command decides to start an RDB saving process ASAP in
   order to serve a new slave that cannot partially resynchronize (but
   only if we have a disk target for replication; for diskless
   replication there is no such problem).

Condition "1" is not very severe but "2" can happen often and is
definitely good at degrading Redis performances in an unexpected way.

The two commits have the effect of always spawning RDB saves for
replication in replicationCron() instead of attempting to start an RDB
save synchronously. Moreover, when a BGSAVE or AOF rewrite must be
performed, it is instead just postponed using flags that will cause the
operation to be attempted again ASAP.

Finally the BGSAVE command was modified to accept a SCHEDULE option, so
that if an AOF rewrite is in progress when this option is given, the
command no longer returns an error, but instead schedules an RDB save
for when it will be possible to start it (see the sketch after this
entry).
2016-07-21 18:35:01 +02:00
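A sketch of the scheduling idea, with hypothetical flags and names standing in for the real server fields:

```c
/* Instead of forking a second child, the operation is recorded and
 * retried from the cron when no child is active. */
struct sched_state {
    int rdb_child_pid;          /* -1 when no RDB child is active */
    int aof_child_pid;          /* -1 when no AOF child is active */
    int rdb_bgsave_scheduled;   /* set by BGSAVE SCHEDULE and friends */
    int aof_rewrite_scheduled;  /* set when a rewrite had to wait */
};

static int has_active_child(const struct sched_state *s) {
    return s->rdb_child_pid != -1 || s->aof_child_pid != -1;
}

/* Called periodically (as replicationCron()/serverCron() do): start the
 * postponed operation only once both child slots are free. */
void cron_try_scheduled_saves(struct sched_state *s) {
    if (has_active_child(s)) return;
    if (s->aof_rewrite_scheduled) {
        s->aof_rewrite_scheduled = 0;
        /* start the AOF rewrite child here */
    } else if (s->rdb_bgsave_scheduled) {
        s->rdb_bgsave_scheduled = 0;
        /* start the BGSAVE child here */
    }
}
```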
antirez
bad6463f55 In Redis RDB check: more details in error reporting. 2016-07-01 15:26:55 +02:00
antirez
8b44311d67 In Redis RDB check: log decompression errors. 2016-07-01 11:59:25 +02:00
antirez
a89f5d2bdf In Redis RDB check: better error reporting. 2016-07-01 09:36:52 +02:00
Pierre Chapuis
f0d0231473 fix some compiler warnings 2016-06-05 16:48:45 +02:00
antirez
9deb98167b Modules: support for modules native data types. 2016-06-03 18:14:04 +02:00
antirez
4e37d7d2c8 RDB v8: fix rdbLoadLen() return value. 2016-06-01 20:18:28 +02:00
antirez
fb9173a888 RDB v8: new ZSET storage format with binary doubles. 2016-06-01 12:12:26 +02:00
antirez
8bfdd07667 RDB v8: ability to save uint64_t lengths. 2016-06-01 11:35:47 +02:00
Oran Agra
7496636280 various cleanups and minor fixes 2016-04-25 16:49:57 +03:00
Oran Agra
1c396991dc fix small issues in redis 3.2 2016-04-25 14:19:28 +03:00
antirez
26e5f3ff33 Include full paths on RDB/AOF files errors.
Close #3086.
2016-02-15 16:15:01 +01:00
antirez
cd94476e0b Lazyfree: Hash converted to use plain SDS WIP 5. 2015-10-01 13:02:25 +02:00
antirez
c4175281f6 Lazyfree: Sorted sets converted to plain SDS. (several commits squashed) 2015-10-01 13:02:24 +02:00
antirez
062bf5ce19 Lazyfree: Convert Sets to use plain SDS (several commits squashed). 2015-10-01 13:02:24 +02:00
antirez
331835ca06 Undo slaves state change on failed rdbSaveToSlavesSockets().
As Oran Agra suggested, in startBgsaveForReplication() when the BGSAVE
attempt returns an error, we scan the list of slaves in order to remove
them since there is no way to serve them currently.

However we check for the replication state BGSAVE_START, which was
modified by rdbSaveToSlavesSockets() before forking. So when the fork
fails, the state of the slaves remains BGSAVE_END and no cleanup is
performed.

This commit fixes the problem by making rdbSaveToSlavesSockets() able to
undo the state change on fork failure.
2015-09-07 16:09:23 +02:00
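A sketch of the fix, with hypothetical, simplified names:

```c
#include <unistd.h>

/* Slaves are moved forward optimistically before forking, so on fork
 * failure the change must be undone for the caller's cleanup check
 * (which looks for WAIT_BGSAVE_START) to match. */
enum slave_state { WAIT_BGSAVE_START, WAIT_BGSAVE_END };
struct slave { enum slave_state state; };

int save_to_slaves_sockets(struct slave *slaves, int n) {
    for (int i = 0; i < n; i++) slaves[i].state = WAIT_BGSAVE_END;
    pid_t pid = fork();
    if (pid == -1) {
        /* Roll back: the caller only removes and cleans up slaves that
         * are still in WAIT_BGSAVE_START. */
        for (int i = 0; i < n; i++) slaves[i].state = WAIT_BGSAVE_START;
        return -1;
    }
    if (pid == 0) _exit(0);   /* child: would write the RDB to the sockets */
    return 0;
}
```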
antirez
b83005ae8a Remove slave state change handled by replicationSetupSlaveForFullResync(). 2015-08-05 13:58:56 +02:00
antirez
c3f2bcafc8 Make sure we re-emit SELECT after each new slave full sync setup.
In previous commits we moved the FULLRESYNC to the moment we start the
BGSAVE, so that the offset we provide is the right one. However this
also means that we need to re-emit the SELECT statement every time a new
slave starts to accumulate the changes.

To obtain this effect in a cleaner way, the function that sends the
FULLRESYNC reply was overloaded with the more important role of also
doing this and changing the slave state. So it was renamed to
replicationSetupSlaveForFullResync() to better reflect what it does now.
2015-08-05 13:34:46 +02:00
antirez
142611882a PSYNC initial offset fix.
This commit attempts to fix a bug involving PSYNC and diskless
replication (currently experimental) found by Yuval Inbar from Redis
Labs, and that was later found to have even more far reaching effects
(the bug also exists when diskless replication is off).

The gist of the bug is that a Redis master replies with +FULLRESYNC to
a PSYNC attempt that fails and requires a full resynchronization.
However, the baseline offset sent along with +FULLRESYNC was always the
current master replication offset. This is not ok, because there are
many reasons that may delay the RDB file creation. And... guess what,
the master offset we communicate must be the one from the time the RDB
was created. So for example:

1) When the BGSAVE for replication is delayed since there is one
   already but is not good for replication.
2) When the BGSAVE is not needed as we attach one currently ongoing.
3) When because of diskless replication the BGSAVE is delayed.

In all the above cases the PSYNC reply is wrong and the slave may
reconnect later claiming to need a wrong offset: this may cause
data corruption later. A sketch of the fix's core idea follows this
entry.
2015-08-04 17:06:10 +02:00
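A sketch of the core idea of the fix, with hypothetical names: sample the master offset when the BGSAVE for replication actually starts, and reply with that frozen value:

```c
/* The +FULLRESYNC offset must be the master offset sampled when the RDB
 * child starts, not the offset at the time the reply is finally sent. */
struct master { long long repl_offset; };
struct slave_client { long long psync_initial_offset; };

/* At the moment the BGSAVE for replication actually begins: */
void on_bgsave_start(struct master *m, struct slave_client *sl) {
    sl->psync_initial_offset = m->repl_offset;  /* freeze it here */
}

/* Later, when replying, use the frozen value even if repl_offset moved: */
long long fullresync_offset(const struct slave_client *sl) {
    return sl->psync_initial_offset;
}
```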
antirez
c15cac0d77 RDMF: More consistent define names. 2015-07-27 14:37:58 +02:00
antirez
8a893fa4cf RDMF: REDIS_OK REDIS_ERR -> C_OK C_ERR. 2015-07-26 23:17:55 +02:00
antirez
58844a7bfe RDMF: redisAssert -> serverAssert. 2015-07-26 15:29:53 +02:00