7611 Commits

Author SHA1 Message Date
Salvatore Sanfilippo
7a4c9474f8 Use ARM unaligned accesses ifdefs for SPARC as well. 2017-02-23 22:39:44 +08:00
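A minimal sketch of the guard this commit extends; the macro name USE_ALIGNED_ACCESS and the exact condition are assumptions, not necessarily Redis's config.h:

```c
/* Architectures that trap on unaligned word accesses (older ARM, SPARC)
 * define a flag that data-path code checks before taking any
 * word-at-a-time fast path. Macro name is illustrative. */
#if defined(__sparc__) || defined(__arm__)
#define USE_ALIGNED_ACCESS
#endif
```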
Salvatore Sanfilippo
08007ed442 Fix BITPOS unaligned memory access. 2017-02-23 22:38:44 +08:00
oranagra
161a3a174b when a slave experiences an error on commands that come from the master, print it to the log
Since the slave isn't replying to its master, these errors go unnoticed.
Since we don't expect the master to send garbage to the slave, this should be safe
(as long as we don't log OOM errors there).
2017-02-23 03:44:42 -08:00
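A hedged sketch of the idea, with illustrative names (the real client flag and logging calls differ): when the erroring client is the master link, the reply is never read, so the error also goes to the server log.

```c
#include <stdio.h>

#define CLIENT_MASTER (1 << 0)     /* illustrative flag value */

struct client { int flags; };

/* Errors on commands received from the master link are otherwise
 * invisible (the master ignores replies), so duplicate them to the
 * log; OOM errors would be excluded to avoid flooding it. */
static void reply_error(struct client *c, const char *err) {
    if (c->flags & CLIENT_MASTER)
        fprintf(stderr, "Error on command from master: '%s'\n", err);
    /* ... normal error-reply path continues here ... */
}
```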
oranagra
f86df924b0 add SDS_NOINIT option to sdsnewlen to avoid unnecessary memsets.
This commit also contains a small bugfix in rdbLoadLzfStringObject,
a bug that currently has no implications.
2017-02-23 03:04:08 -08:00
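A minimal sketch of the SDS_NOINIT convention, using plain malloc() instead of the real sds header machinery: a sentinel pointer passed as the init argument tells the allocator to skip the zero-fill.

```c
#include <stdlib.h>
#include <string.h>

/* Sentinel: any unique address works; a pointer comparison is enough. */
const char *SDS_NOINIT = "SDS_NOINIT";

/* Sketch of sdsnewlen()'s init handling (real sds allocates a header
 * before the buffer; this keeps only the part relevant here). */
char *newlen_sketch(const void *init, size_t initlen) {
    char *buf = malloc(initlen + 1);
    if (buf == NULL) return NULL;
    if (init == SDS_NOINIT) {
        /* caller promises to overwrite the bytes; skip the memset */
    } else if (init == NULL) {
        memset(buf, 0, initlen);      /* old default: zero-fill */
    } else {
        memcpy(buf, init, initlen);   /* copy the provided bytes */
    }
    buf[initlen] = '\0';
    return buf;
}
```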
antirez
95883313b5 Solaris fixes about tail usage and atomic vars.
Testing with Solaris C compiler (SunOS 5.11 11.2 sun4v sparc sun4v)
there were issues compiling due to atomicvar.h, and running the
tests also failed because of "tail" usage not conforming with the
Solaris tail implementation. This commit fixes both issues.
2017-02-22 13:08:21 +01:00
antirez
06263485d4 Merge branch 'siphash' into unstable 2017-02-21 17:10:10 +01:00
antirez
e084b5a39f Merge branch 'arm' into unstable 2017-02-21 17:10:06 +01:00
antirez
0285c2714b SipHash 2-4 -> SipHash 1-2.
For performance reasons we use a reduced-rounds variant of
SipHash. This should still provide enough protection, and the
effects on the hash table distribution are nonexistent.
If some real-world attack on SipHash 1-2 is found, we can
trivially switch to something more secure. Anyway, it is a
big step forward from MurmurHash, for which it is trivial to
generate *seed independent* colliding keys... The speed
penalty introduced by SipHash 2-4, around 4%, was too big a
price to pay compared to the effectiveness of a HashDoS
attack against SipHash 1-2, considering that so far in
Redis history no such incident ever happened, even while
using hash functions that are trivial to collide.
2017-02-21 17:07:28 +01:00
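For reference, the "1-2" means one compression round per 8-byte block and two finalization rounds (versus two and four in SipHash 2-4). A self-contained sketch, assuming host-endian memcpy() loads (the version described in the "Use SipHash" commit below reads single bytes for endian neutrality):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define ROTL64(x, b) (((x) << (b)) | ((x) >> (64 - (b))))

#define SIPROUND do {                                                \
    v0 += v1; v1 = ROTL64(v1, 13); v1 ^= v0; v0 = ROTL64(v0, 32);    \
    v2 += v3; v3 = ROTL64(v3, 16); v3 ^= v2;                         \
    v0 += v3; v3 = ROTL64(v3, 21); v3 ^= v0;                         \
    v2 += v1; v1 = ROTL64(v1, 17); v1 ^= v2; v2 = ROTL64(v2, 32);    \
} while (0)

/* SipHash-1-2: 1 compression round per block, 2 finalization rounds
 * (SipHash-2-4 uses 2 and 4). */
uint64_t siphash_1_2(const uint8_t *in, size_t inlen, const uint8_t k[16]) {
    uint64_t k0, k1, m;
    memcpy(&k0, k, 8); memcpy(&k1, k + 8, 8);
    uint64_t v0 = 0x736f6d6570736575ULL ^ k0;
    uint64_t v1 = 0x646f72616e646f6dULL ^ k1;
    uint64_t v2 = 0x6c7967656e657261ULL ^ k0;
    uint64_t v3 = 0x7465646279746573ULL ^ k1;
    uint64_t b = (uint64_t)inlen << 56;
    const uint8_t *end = in + (inlen & ~(size_t)7);

    for (; in != end; in += 8) {
        memcpy(&m, in, 8);            /* alignment-safe 8-byte load */
        v3 ^= m;
        SIPROUND;                     /* c = 1 compression round */
        v0 ^= m;
    }
    for (size_t i = 0; i < (inlen & 7); i++)
        b |= (uint64_t)in[i] << (8 * i);
    v3 ^= b; SIPROUND; v0 ^= b;
    v2 ^= 0xff;
    SIPROUND; SIPROUND;               /* d = 2 finalization rounds */
    return v0 ^ v1 ^ v2 ^ v3;
}
```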
antirez
cd90389b30 freeMemoryIfNeeded(): improve code and lazyfree handling.
1. Refactor memory overhead computation into a function.
2. Every 10 keys evicted, check directly whether memory usage has
   already reached the target value, since we otherwise don't count
   all the memory reclaimed by the background thread.
2017-02-21 12:55:59 +01:00
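A hedged sketch of the loop shape described in point 2, with illustrative stub names (not Redis's real internals):

```c
#include <stddef.h>

/* Illustrative stubs, not Redis's API. */
extern size_t evict_one_key(void);   /* bytes freed synchronously */
extern size_t used_memory(void);     /* re-samples actual usage   */

static void evict_until_under(size_t mem_limit, size_t mem_tofree) {
    size_t mem_freed = 0, keys_freed = 0;
    while (mem_freed < mem_tofree) {
        mem_freed += evict_one_key();
        /* Memory reclaimed by the lazyfree background thread is not
         * counted in mem_freed, so every 10 evictions check the real
         * usage directly and stop early if the target is reached. */
        if ((++keys_freed % 10) == 0 && used_memory() <= mem_limit)
            break;
    }
}
```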
antirez
84fa8230e5 Use locale agnostic tolower() in dict.c hash function. 2017-02-20 17:39:44 +01:00
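The issue with libc tolower() is that it consults the current locale (the classic surprise being the Turkish dotless/dotted I), which would make case-insensitive hash values locale-dependent. A minimal locale-agnostic sketch:

```c
/* Locale-agnostic ASCII lowercase: same output on every machine,
 * regardless of LC_CTYPE, so case-insensitive hashes stay stable. */
static unsigned char ascii_tolower(unsigned char c) {
    return (c >= 'A' && c <= 'Z') ? (unsigned char)(c + 32) : c;
}
```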
antirez
05ea8c6122 SipHash x86 optimizations. 2017-02-20 17:32:46 +01:00
antirez
adeed29a99 Use SipHash hash function to mitigate HashDoS attempts.
This change attempts to switch to a hash function which mitigates
the effects of the HashDoS attack (a denial of service attack trying
to force data structures into worst-case behavior) while at the same
time providing Redis with a hash function that does not expect the
input data to be word aligned, a condition no longer true now that
sds.c strings have a variable length header.

Note that even with a hash function for which collisions cannot be
generated without knowing the seed, implementation details or indirect
exposure of the seed (for example the ability to add elements to a Set
and observe the order in which Redis returns them with SMEMBERS) may
make the attacker's life simpler when trying to guess the correct
seed; the next step would then be to switch to a log(N) data structure
when too many items end up in a single bucket, but this seems like
overkill in the case of Redis.

SPEED REGRESSION TESTS:

In order to verify that switching from MurmurHash to SipHash had
no impact on speed, a set of benchmarks involving fast insertion
of 5 million keys was performed.

The result shows Redis with SipHash in high pipelining conditions
to be about 4% slower compared to using the previous hash function.
However this could partially be related to the fact that the current
implementation does not attempt to hash whole words at a time but
reads single bytes, in order to have an output which is endian-neutral
and at the same time works on systems where unaligned memory accesses
are a problem.

Further x86-specific optimizations should be tested; the function
may easily reach the same level as MurmurHash2 if a few optimizations
are performed.
2017-02-20 17:29:17 +01:00
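A sketch of the byte-at-a-time load described in the last paragraph; not Redis's exact code, but the standard technique:

```c
#include <stdint.h>

/* Read 8 bytes as a little-endian 64-bit value one byte at a time:
 * identical output on any CPU and no unaligned word access. */
static uint64_t u8to64_le(const uint8_t *p) {
    return  (uint64_t)p[0]        | ((uint64_t)p[1] <<  8) |
           ((uint64_t)p[2] << 16) | ((uint64_t)p[3] << 24) |
           ((uint64_t)p[4] << 32) | ((uint64_t)p[5] << 40) |
           ((uint64_t)p[6] << 48) | ((uint64_t)p[7] << 56);
}
```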
John.Koepi
9b05aafb50 fix #2883, #2857 pipe fds leak when fork() failed on bg aof rw 2017-02-20 10:22:57 +01:00
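A minimal sketch of the leak pattern being fixed, with illustrative names: the pipe descriptors created before fork() must be closed on the fork() error path too.

```c
#include <unistd.h>

/* Sketch: create the AOF-rewrite communication pipe, then fork. If
 * fork() fails, the already-created descriptors must be closed or
 * each failed BGREWRITEAOF leaks them. */
static int spawn_rewrite_child(void) {
    int fds[2];
    pid_t pid;

    if (pipe(fds) == -1) return -1;
    if ((pid = fork()) == -1) {
        close(fds[0]);   /* the fix: release both ends on error */
        close(fds[1]);
        return -1;
    }
    /* ... parent/child logic ... */
    return 0;
}
```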
antirez
76d87f47c7 Don't leak file descriptor on syncWithMaster().
Close #3804.
2017-02-20 10:18:41 +01:00
Salvatore Sanfilippo
7329cc3981 ARM: Avoid fast path for BITOP.
GCC will produce certain unaligned multi load-store instructions
that are trapped by the Linux kernel, since ARMv6 cannot handle
them with unaligned addresses. Better to use the slower but safer
implementation than to generate the exception, which would anyway
be very slow.
2017-02-19 15:07:08 +00:00
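A hedged sketch of the guard, reusing the USE_ALIGNED_ACCESS macro assumed in the SPARC commit above: the word-at-a-time fast path is compiled out where unaligned access traps.

```c
#include <stddef.h>

/* Sketch of a BITOP AND inner loop: the casts in the fast path invite
 * the compiler to emit multi-word loads, which is only safe on targets
 * with unaligned access; elsewhere only the byte loop runs. */
static void bitop_and(unsigned char *dst, const unsigned char *a,
                      const unsigned char *b, size_t len) {
    size_t j = 0;
#ifndef USE_ALIGNED_ACCESS
    while (len - j >= sizeof(unsigned long)) {   /* fast path */
        *(unsigned long *)(dst + j) =
            *(const unsigned long *)(a + j) &
            *(const unsigned long *)(b + j);
        j += sizeof(unsigned long);
    }
#endif
    for (; j < len; j++) dst[j] = a[j] & b[j];   /* safe byte loop */
}
```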
Salvatore Sanfilippo
4e9cf4cc7e ARM: Use libc malloc by default.
I'm not sure how much testing Jemalloc gets on ARM; moreover,
compiling Redis with Jemalloc support on not very powerful
devices, like most of the ARM boards people will build Redis on,
is extremely slow. It is still possible to enable the Jemalloc
build if needed by using "make MALLOC=jemalloc".
2017-02-19 15:02:37 +00:00
Salvatore Sanfilippo
72d6d64771 ARM: Avoid memcpy() in MurmurHash64A() if we are using 64 bit ARM.
However note that on architectures supporting 64-bit unaligned
accesses, memcpy(...,...,8) is likely translated to a simple
word move anyway.
2017-02-19 15:00:46 +00:00
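A sketch of the two load strategies; the preprocessor test is illustrative:

```c
#include <stdint.h>
#include <string.h>

/* Load 8 possibly-unaligned bytes for the MurmurHash64A inner loop. */
static uint64_t load64(const unsigned char *p) {
#if defined(__aarch64__)
    return *(const uint64_t *)p;   /* 64-bit ARM allows unaligned loads */
#else
    uint64_t k;
    memcpy(&k, p, 8);              /* safe everywhere; usually free */
    return k;
#endif
}
```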
Salvatore Sanfilippo
1e272a6b52 ARM: Fix 64 bit unaligned access in MurmurHash64A(). 2017-02-19 14:01:58 +00:00
minghang.zmh
de07deb4d2 fix server.stat_net_output_bytes calc bug 2017-02-10 20:13:01 +08:00
antirez
f917e0da4c Fix MIGRATE closing of cached socket on error.
After investigating issue #3796, it was discovered that MIGRATE
could call migrateCloseSocket() after the original MIGRATE c->argv
was already rewritten as a DEL operation. As a result the host/port
passed to migrateCloseSocket() could be anything, often a NULL pointer
that gets dereferenced, crashing the server.

Now, when there is a socket error in a later stage where no retry
will be performed, the socket is closed earlier, before we rewrite
the argument vector. Moreover, a check was added so that later, in
the socket_err label, there is no further attempt at closing the
socket if the argument vector was already rewritten.

This fix should resolve the bug reported in #3796.
2017-02-09 09:58:38 +01:00
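A hedged sketch of the added check; the flag and the function signature mirror the description above, not the actual code:

```c
extern void migrateCloseSocket(const char *host, const char *port);

/* Once MIGRATE's argv has been rewritten into a DEL, the host/port
 * pointers taken from the original argv are dangling; the socket_err
 * path therefore closes the cached socket only before the rewrite. */
static void on_socket_err(const char *host, const char *port,
                          int argv_rewritten) {
    if (!argv_rewritten)
        migrateCloseSocket(host, port);
}
```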
antirez
0dbfb1d154 Fix ziplist fix... 2017-02-01 17:01:31 +01:00
antirez
c495d095ae Ziplist: insertion bug under particular conditions fixed.
Ziplists had a bug that was discovered while investigating a different
issue, resulting in a corrupted ziplist representation, and a likely
segmentation fault and/or data corruption of the last element of the
ziplist, once the ziplist is accessed again.

The bug happens when a specific set of insertions / deletions is
performed so that an entry is encoded to have a "prevlen" field (the
length of the previous entry) of 5 bytes but with a count that could be
encoded in a "prevlen" field of a single byte. This could happen when
the "cascading update" process called by ziplistInsert()/ziplistDelete()
under certain conditions forces the prevlen to be bigger than necessary
in order to avoid too much data moving around.

Once such an entry is generated, inserting a very small entry
immediately before it will result in a resizing of the ziplist for a
count smaller than the current ziplist length (which is a violation;
the inserting code actually expects the ziplist to get bigger). So an
FF byte is inserted in a misplaced position. Moreover a realloc() is
performed with a count smaller than the ziplist's current length, so
the final bytes could be trashed as well.

SECURITY IMPLICATIONS:

Currently it looks like an attacker can only crash a Redis server by
providing specifically chosen commands. However an FF byte is written
and there are other memory operations that depend on a wrong count, so
even if it is not immediately apparent how to mount an attack in order
to execute code remotely, it is not impossible at all that this could be
done. Attacks always get better... and we did not spend enough time
thinking about how to exploit this issue, but security researchers
or malicious attackers could.
2017-02-01 15:01:59 +01:00
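For context, a sketch of the "prevlen" encoding involved; constants follow the description above (0xFF is the ziplist end byte, which is why a stray FF is so damaging):

```c
#include <stddef.h>

/* Bytes needed to encode the previous entry's length. force_large
 * models the cascading-update decision to keep a 5-byte field even for
 * a small length, to avoid moving data around: exactly the state the
 * buggy insertion path did not expect. */
static size_t prevlen_size(size_t prevlen, int force_large) {
    if (prevlen < 254 && !force_large)
        return 1;      /* single-byte encoding */
    return 5;          /* 0xFE marker + 4-byte little-endian length */
}
```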
antirez
3a7410a8a6 ziplist: better comments, some refactoring. 2017-01-30 10:12:47 +01:00
Jan-Erik Rediger
3c9b817217 Don't divide by zero
Previously Redis crashed on `MEMORY DOCTOR` when it had no slaves attached.

Fixes #3783
2017-01-27 16:24:14 +01:00
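A minimal sketch of the guard, with illustrative names: compute the per-slave average only when at least one slave is attached.

```c
/* MEMORY DOCTOR-style stat: average output-buffer size per slave.
 * Guard the division so zero slaves no longer crashes the server. */
static double avg_slave_output(double total_bytes, long num_slaves) {
    return num_slaves > 0 ? total_bytes / (double)num_slaves : 0.0;
}
```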
miter
3ec1a001fb Change switch statement to if statement 2017-01-26 21:36:26 +09:00
Salvatore Sanfilippo
41d16f7a4a Merge pull request #3657 from itamarhaber/patch-9
Verify pairs are provided after ZADD's subcommands
2017-01-25 09:31:47 +01:00
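A hedged sketch of the validation named in the PR title, with illustrative names: after ZADD's NX/XX/CH/INCR flags are consumed, what remains must be a positive, even count of score/member arguments.

```c
/* argc: total argument count; scoreidx: index of the first argument
 * after ZADD's subcommand flags. Valid input is one or more complete
 * score/member pairs. */
static int zadd_pairs_ok(int argc, int scoreidx) {
    int elements = argc - scoreidx;
    return elements > 0 && elements % 2 == 0;
}
```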