Redis fails to compile on MacOS 10.8.5 with Clang 4, version 421.0.57
(based on LLVM 3.1svn).
When compiling zmalloc.c, we get these warnings:
CC zmalloc.o
zmalloc.c:109:5: warning: implicit declaration of function '__atomic_add_fetch' is invalid in C99 [-Wimplicit-function-declaration]
update_zmalloc_stat_alloc(zmalloc_size(ptr));
^
zmalloc.c:75:9: note: expanded from macro 'update_zmalloc_stat_alloc'
atomicIncr(used_memory,__n,used_memory_mutex); \
^
./atomicvar.h:57:37: note: expanded from macro 'atomicIncr'
#define atomicIncr(var,count,mutex) __atomic_add_fetch(&var,(count),__ATOMIC_RELAXED)
^
zmalloc.c:145:5: warning: implicit declaration of function '__atomic_sub_fetch' is invalid in C99 [-Wimplicit-function-declaration]
update_zmalloc_stat_free(oldsize);
^
zmalloc.c:85:9: note: expanded from macro 'update_zmalloc_stat_free'
atomicDecr(used_memory,__n,used_memory_mutex); \
^
./atomicvar.h:58:37: note: expanded from macro 'atomicDecr'
#define atomicDecr(var,count,mutex) __atomic_sub_fetch(&var,(count),__ATOMIC_RELAXED)
^
zmalloc.c:205:9: warning: implicit declaration of function '__atomic_load_n' is invalid in C99 [-Wimplicit-function-declaration]
atomicGet(used_memory,um,used_memory_mutex);
^
./atomicvar.h:60:14: note: expanded from macro 'atomicGet'
dstvar = __atomic_load_n(&var,__ATOMIC_RELAXED); \
^
3 warnings generated.
Also on lazyfree.c:
CC lazyfree.o
lazyfree.c:68:13: warning: implicit declaration of function '__atomic_add_fetch' is invalid in C99 [-Wimplicit-function-declaration]
atomicIncr(lazyfree_objects,1,lazyfree_objects_mutex);
^
./atomicvar.h:57:37: note: expanded from macro 'atomicIncr'
#define atomicIncr(var,count,mutex) __atomic_add_fetch(&var,(count),__ATOMIC_RELAXED)
^
lazyfree.c:111:5: warning: implicit declaration of function '__atomic_sub_fetch' is invalid in C99 [-Wimplicit-function-declaration]
atomicDecr(lazyfree_objects,1,lazyfree_objects_mutex);
^
./atomicvar.h:58:37: note: expanded from macro 'atomicDecr'
#define atomicDecr(var,count,mutex) __atomic_sub_fetch(&var,(count),__ATOMIC_RELAXED)
^
2 warnings generated.
Then in the linking stage:
LINK redis-server
Undefined symbols for architecture x86_64:
"___atomic_add_fetch", referenced from:
_zmalloc in zmalloc.o
_zcalloc in zmalloc.o
_zrealloc in zmalloc.o
_dbAsyncDelete in lazyfree.o
_emptyDbAsync in lazyfree.o
_slotToKeyFlushAsync in lazyfree.o
"___atomic_load_n", referenced from:
_zmalloc_used_memory in zmalloc.o
_zmalloc_get_fragmentation_ratio in zmalloc.o
"___atomic_sub_fetch", referenced from:
_zrealloc in zmalloc.o
_zfree in zmalloc.o
_lazyfreeFreeObjectFromBioThread in lazyfree.o
_lazyfreeFreeDatabaseFromBioThread in lazyfree.o
_lazyfreeFreeSlotsMapFromBioThread in lazyfree.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [redis-server] Error 1
make: *** [all] Error 2
With this patch, the compilation is successful, with no warnings.
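For reference, compilers that lack the GCC __atomic builtins can be handled by falling back to mutex-protected updates. A minimal sketch of such a fallback for atomicvar.h follows (HAVE_ATOMIC_BUILTINS is a hypothetical guard; the compiler detection and the actual patch may differ):

    #include <pthread.h>

    #ifdef HAVE_ATOMIC_BUILTINS   /* hypothetical guard set by compiler detection */
    #define atomicIncr(var,count,mutex) __atomic_add_fetch(&var,(count),__ATOMIC_RELAXED)
    #define atomicDecr(var,count,mutex) __atomic_sub_fetch(&var,(count),__ATOMIC_RELAXED)
    #define atomicGet(var,dstvar,mutex) do { \
        dstvar = __atomic_load_n(&var,__ATOMIC_RELAXED); \
    } while(0)
    #else
    /* Fallback: serialize every update with the mutex the caller passes in. */
    #define atomicIncr(var,count,mutex) do { \
        pthread_mutex_lock(&mutex); \
        var += (count); \
        pthread_mutex_unlock(&mutex); \
    } while(0)
    #define atomicDecr(var,count,mutex) do { \
        pthread_mutex_lock(&mutex); \
        var -= (count); \
        pthread_mutex_unlock(&mutex); \
    } while(0)
    #define atomicGet(var,dstvar,mutex) do { \
        pthread_mutex_lock(&mutex); \
        dstvar = var; \
        pthread_mutex_unlock(&mutex); \
    } while(0)
    #endif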
Running `make test` gives an almost clean bill of health. Tests pass with one class of exceptions, the memory leak checks:
[err]: Check for memory leaks (pid 52793) in tests/unit/dump.tcl
[err]: Check for memory leaks (pid 53103) in tests/unit/auth.tcl
[err]: Check for memory leaks (pid 53117) in tests/unit/auth.tcl
[err]: Check for memory leaks (pid 53131) in tests/unit/protocol.tcl
[err]: Check for memory leaks (pid 53145) in tests/unit/protocol.tcl
[ok]: Check for memory leaks (pid 53160)
[err]: Check for memory leaks (pid 53175) in tests/unit/scan.tcl
[ok]: Check for memory leaks (pid 53189)
[err]: Check for memory leaks (pid 53221) in tests/unit/type/incr.tcl
.
.
.
Full debug log (289MB, uncompressed) available at
https://dl.dropboxusercontent.com/u/75548/logs/redis-debug-log-macos-10.8.5.log.xz
Most, if not all, of the memory leak checks fail, and they are the only tests that fail. I believe they are not related to this patch; rather, the memory leak detector simply does not work properly on 10.8.5.
Signed-off-by: Pedro Melo <melo@simplicidade.org>
This new command swaps two Redis databases, so that all the clients connected to a given DB immediately see the data of the other DB, and the other way around. Example:
SWAPDB 0 1
This will swap DB 0 with DB 1. All the clients connected to DB 0 will immediately see the new data, exactly as all the clients connected to DB 1 will see the data that was formerly in DB 0.
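Internally, a swap of this kind amounts to exchanging the per-database tables held by the server. A rough sketch of the idea (the helper name is hypothetical; field names follow the redisDb structure in the Redis source, but this is an illustration rather than the exact implementation):

    #include "server.h"

    /* Sketch only: exchange the key space and expires tables of two DB
     * slots, so that clients selecting either index immediately see the
     * other dataset. Per-DB client state such as blocking_keys and
     * watched_keys is intentionally left attached to the DB index, so
     * blocked clients keep waiting on the same DB id. */
    void swapDatabasesSketch(int id1, int id2) {
        redisDb *db1 = server.db + id1, *db2 = server.db + id2;
        dict *aux_dict = db1->dict;
        dict *aux_expires = db1->expires;

        db1->dict = db2->dict;
        db1->expires = db2->expires;
        db2->dict = aux_dict;
        db2->expires = aux_expires;
    }

Leaving the blocked/watched client bookkeeping attached to the DB index is what makes the blocking behavior described later in this message work out naturally.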
MOTIVATION AND HISTORY
---
The command was recently requested by Pedro Melo, but had been suggested multiple times in the past, and I always refused it.
The reason it was asked for: imagine you have clients operating on DB 0. At the same time, you build a new version of the dataset in DB 1. When the new version is ready, you want to swap the two views at once, so that the clients transparently start using the new data. At that point you will likely destroy the DB 1 dataset (which, after the swap, holds the old data) and start building a new version, repeating the process.
This is an interesting pattern, but the reason I always opposed implementing it was that FLUSHDB was a blocking command before the Redis 4.0 improvements. Now we have FLUSHDB ASYNC, which releases the old data in O(1) from the point of view of the client and reclaims the memory incrementally in a different thread.
At this point the pattern can really be supported without latency spikes, so I'm providing this implementation for users to comment on. If a very compelling argument is made against this new command, it may be removed.
BEHAVIOR WITH BLOCKING OPERATIONS
---
If a client is blocked on a list in a given DB, after the swap it will still be blocked on the same DB ID, since this is the most logical thing to do: if I was blocked waiting for a push to list "foo", even after the swap I still want an LPUSH to reach key "foo" in the same DB in order to unblock me.
However, an interesting thing happens when a client is, for instance, blocked waiting for new elements in list "foo" of DB 0, DBs 0 and 1 are swapped with SWAPDB, and DB 1 happened to contain a list called "foo" with elements in it. In that case, this implementation correctly unblocks the client.
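To make this unblocking case concrete, one way to handle it is to scan each swapped database against the keys registered by blocked clients and signal the ones that now exist. A rough sketch (the helper name is hypothetical; signalListAsReady mirrors the helper Redis uses for blocking list operations):

    #include "server.h"

    /* Sketch only: after the swap, walk the keys that clients are
     * blocked on in this DB (tracked in db->blocking_keys) and signal
     * the ones that now exist as lists, so the corresponding blocked
     * clients can be unblocked and served. */
    static void scanDatabaseForReadyListsSketch(redisDb *db) {
        dictIterator *di = dictGetSafeIterator(db->blocking_keys);
        dictEntry *de;

        while ((de = dictNext(di)) != NULL) {
            robj *key = dictGetKey(de);
            robj *value = lookupKey(db, key, LOOKUP_NOTOUCH);

            if (value && value->type == OBJ_LIST)
                signalListAsReady(db, key);
        }
        dictReleaseIterator(di);
    }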
It is possible that there are subtle corner cases not covered by the implementation, but since the command is self-contained from the point of view of the implementation and of the Redis core, it cannot cause anything bad if it is not used.
Tests and documentation are yet to be provided.
It was noted by @dvirsky that it is not possible to use string functions when writing the AOF file. This is sometimes critical, since the command rewrite may need to be built inside the AOF callback, and without access to a context, and given the limited types that the AOF production functions accept, this can be an issue.
Moreover there are other needs we can't anticipate regarding the ability to use Redis Modules APIs that require a context in order to build the representations to emit for AOF / RDB.
Because of this, a new API was added that allows the user to obtain a temporary context from the IO context. If obtained, the context is automatically released when the RDB / AOF callback returns.
Calling the function multiple times always returns the same context, since it is invalid to have more than a single context at a time.
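A minimal sketch of how a module's AOF rewrite callback could use this, assuming the new call is exposed as RedisModule_GetContextFromIO and using a hypothetical module type that holds a single byte buffer (the exact API name and signature may differ):

    #include "redismodule.h"

    /* Hypothetical module type holding a single byte buffer. */
    struct MyType { char *buf; size_t len; };

    /* AOF rewrite callback: rebuild the value as a single MYTYPE.SET
     * command. The temporary context obtained from the IO object makes
     * the normal string APIs usable here; it is released automatically
     * when the callback returns. */
    void MyTypeAofRewrite(RedisModuleIO *aof, RedisModuleString *key, void *value) {
        struct MyType *mt = value;

        /* Assumed name for the new API described above. */
        RedisModuleCtx *ctx = RedisModule_GetContextFromIO(aof);

        RedisModuleString *payload =
            RedisModule_CreateString(ctx, mt->buf, mt->len);

        RedisModule_EmitAOF(aof, "MYTYPE.SET", "ss", key, payload);
        RedisModule_FreeString(ctx, payload);
    }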