This adds support for explicit configuration of a CA certs directory (in
addition to the previously supported bundle file). For redis-cli, if no
explicit CA configuration is supplied, the system-wide default
configuration is used.
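In OpenSSL terms this maps to the verify-locations API; a minimal sketch
with placeholder names (load_ca_config, ssl_ctx), not the actual Redis
code:

    /* Sketch: ca_file is a PEM bundle, ca_dir a hashed certificate
     * directory; either (or both) may be NULL. */
    #include <openssl/ssl.h>

    static int load_ca_config(SSL_CTX *ssl_ctx, const char *ca_file,
                              const char *ca_dir) {
        if (ca_file || ca_dir) {
            /* SSL_CTX_load_verify_locations() accepts a bundle file, a
             * certs directory, or both. Returns 1 on success. */
            if (SSL_CTX_load_verify_locations(ssl_ctx, ca_file, ca_dir) != 1)
                return -1;
            return 0;
        }
        /* No explicit CA configuration: fall back to the system-wide
         * default verify paths. */
        return SSL_CTX_set_default_verify_paths(ssl_ctx) == 1 ? 0 : -1;
    }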
misc:
- handle SSL_has_pending by iterating through connections with pending TLS data in beforeSleep, and calling aeProcessEvents with a timeout of 0 (see the sketch after this list)
- fix an issue where epoll signals EPOLLHUP and EPOLLERR only to the write handlers (needed to detect that the rdb pipe was closed)
- add key-load-delay config for testing
- remove connShutdown, which is no longer needed
- rioFdsetWrite -> rioFdWrite - simplified, since there's no longer a need to write to multiple FDs
- don't detect that the rdb child exited (don't call wait3) until we detect the pipe is closed
- clean up a bad optimization in rio.c and add another one
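A sketch of the SSL_has_pending handling mentioned above (illustrative
names, not the actual Redis code):

    /* OpenSSL may keep decrypted bytes buffered internally after a read,
     * so the kernel socket looks idle and epoll never fires for that
     * data. Before the event loop sleeps, scan TLS connections for
     * buffered data and run their read handlers; if any was found, the
     * caller invokes aeProcessEvents with a timeout of 0 so the loop
     * does not block while data is still pending. TlsConn and the array
     * handling here are illustrative. */
    #include <openssl/ssl.h>

    typedef struct TlsConn {
        SSL *ssl;
        void (*read_handler)(struct TlsConn *conn);
    } TlsConn;

    static int processPendingTlsData(TlsConn **conns, int nconns) {
        int pending = 0;
        for (int i = 0; i < nconns; i++) {
            /* SSL_has_pending() (OpenSSL >= 1.1.0) reports whether any
             * bytes are buffered inside OpenSSL, processed or not. */
            if (SSL_has_pending(conns[i]->ssl)) {
                conns[i]->read_handler(conns[i]);
                pending++;
            }
        }
        return pending; /* nonzero: poll with timeout 0, don't block */
    }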
* Introduce a connection abstraction layer for all socket operations and
  integrate it across the code base (see the sketch after this list).
* Provide an optional TLS connections implementation based on OpenSSL.
* Pull a newer version of hiredis with TLS support.
* Tests and redis-cli updates for TLS support.
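As a rough sketch of the shape of such an abstraction layer (illustrative
names, not the actual connection.h interface):

    /* Callers hold an opaque Connection and go through a vtable of
     * function pointers, so plain TCP and TLS backends can be swapped
     * without touching call sites. */
    #include <stddef.h>
    #include <sys/types.h>

    typedef struct Connection Connection;

    typedef struct ConnectionType {
        ssize_t (*read)(Connection *conn, void *buf, size_t buf_len);
        ssize_t (*write)(Connection *conn, const void *data, size_t len);
        void (*close)(Connection *conn);
    } ConnectionType;

    struct Connection {
        const ConnectionType *type; /* TCP or TLS implementation */
        int fd;
        void *priv;                 /* e.g. the SSL* for a TLS backend */
    };

    /* Call sites use wrappers like this instead of read(2)/write(2). */
    static inline ssize_t connRead(Connection *c, void *buf, size_t len) {
        return c->type->read(c, buf, len);
    }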
When implementing the code that saves and loads these aux fields, we used
the RDB format that was added for that purpose in Redis 5.0, but then we
added the 'when' field, which meant that the old redis-check-rdb wouldn't
be able to skip these fields. This fix adds an opcode so that the 'when'
value is encoded as if it were part of the module data.
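A sketch of the idea, using the module serialization opcodes from rdb.h;
the exact code in the fix may differ, and the function name here is
illustrative:

    /* Prefixing the 'when' value with RDB_MODULE_OPCODE_UINT makes it
     * look like ordinary module payload data, so an older redis-check-rdb
     * that knows how to skip opaque module fields skips it too. */
    #include "rdb.h" /* rio, rdbSaveLen(), RDB_MODULE_OPCODE_UINT */

    static int rdbSaveModuleAuxWhen(rio *rdb, uint64_t when) {
        if (rdbSaveLen(rdb, RDB_MODULE_OPCODE_UINT) == -1) return -1;
        if (rdbSaveLen(rdb, when) == -1) return -1;
        return 0;
    }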
Before this commit we may not have consumed buffers when a read error is
encountered. Such buffers may contain errors that are important clues
for the user: for instance a protocol error in the payload we send in
pipe mode will cause the server to abort the connection. If the user
does not get the protocol error, debugging what is happening can be a
nightmare.
This commit fixes issue #3756.
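A minimal sketch of the intended behavior (not the actual redis-cli code;
buf/buflen stand in for the reader's internal buffer):

    /* On a read error in pipe mode, first flush whatever reply bytes are
     * already buffered: if the server aborted the connection because of
     * a protocol error in our payload, the error text is in that buffer. */
    #include <stdio.h>

    static void reportReadError(const char *buf, size_t buflen) {
        if (buflen > 0) fwrite(buf, 1, buflen, stdout); /* surface clues */
        fprintf(stderr, "error reading from the server\n");
    }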
This is extremely useful for simulating a high load of requests for
different keys, forcing Redis to track a lot of information about many
clients, in order to simulate real-world workloads.
Now that the call also invalidates client side caching slots, it is
important that after an internal flush operation we both send the
notifications to the clients and, at the same time, are able to reclaim
the memory of the tracking table. This may even fix a few edge cases
related to MULTI/EXEC + WATCH during resync. Not sure, but in general it
looks more correct.