# futriix/tests/integration/valkey-cli.tcl

source tests/support/cli.tcl
if {$::singledb} {
set ::dbnum 0
} else {
set ::dbnum 9
}
start_server {tags {"cli logreqres:skip"}} {
proc open_cli {{opts ""} {infile ""}} {
if { $opts == "" } {
set opts "-n $::dbnum"
}
set ::env(TERM) dumb
set cmdline [valkeycli [srv host] [srv port] $opts]
if {$infile ne ""} {
set cmdline "$cmdline < $infile"
set mode "r"
} else {
set mode "r+"
}
set fd [open "|$cmdline" $mode]
fconfigure $fd -buffering none
fconfigure $fd -blocking false
fconfigure $fd -translation binary
set _ $fd
}
proc close_cli {fd} {
close $fd
}
proc read_cli {fd} {
set ret [read $fd]
while {[string length $ret] == 0} {
after 10
set ret [read $fd]
}
# We may have a short read; keep polling and stop only after 5 consecutive empty reads (~50ms with no new data).
set empty_reads 0
while {$empty_reads < 5} {
set buf [read $fd]
if {[string length $buf] == 0} {
after 10
incr empty_reads
} else {
append ret $buf
set empty_reads 0
}
}
return $ret
}
proc write_cli {fd buf} {
puts $fd $buf
flush $fd
}
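# A minimal usage sketch of the raw CLI helpers above (illustrative only and
# not executed as part of the suite; the command and expected reply are
# assumptions):
#
#   set fd [open_cli]
#   write_cli $fd "ping"
#   after 50                      ;# give the CLI a moment to reply
#   puts [read_cli $fd]           ;# expected to contain "PONG"
#   close_cli $fd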
# Helpers to run tests in interactive mode
proc format_output {output} {
set _ [string trimright $output "\n"]
}
proc run_command {fd cmd} {
write_cli $fd $cmd
after 50
set _ [format_output [read_cli $fd]]
}
proc test_interactive_cli {name code} {
set ::env(FAKETTY) 1
set fd [open_cli]
test "Interactive CLI: $name" $code
close_cli $fd
unset ::env(FAKETTY)
}
proc test_interactive_nontty_cli {name code} {
set fd [open_cli]
test "Interactive non-TTY CLI: $name" $code
close_cli $fd
}
# Helpers to run tests where stdout is not a tty
proc write_tmpfile {contents} {
set tmp [tmpfile "cli"]
set tmpfd [open $tmp "w"]
puts -nonewline $tmpfd $contents
close $tmpfd
set _ $tmp
}
proc _run_cli {host port db opts args} {
set cmd [valkeycli $host $port [list -n $db {*}$args]]
foreach {key value} $opts {
if {$key eq "pipe"} {
set cmd "sh -c \"$value | $cmd\""
}
if {$key eq "path"} {
set cmd "$cmd < $value"
}
}
set fd [open "|$cmd" "r"]
fconfigure $fd -buffering none
fconfigure $fd -translation binary
set resp [read $fd 1048576]
close $fd
set _ [format_output $resp]
}
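# For reference, the command lines _run_cli ends up spawning look roughly like
# the following (host, port and flags are illustrative; the exact valkey-cli
# invocation comes from [valkeycli] in tests/support/cli.tcl):
#
#   pipe mode:  sh -c "echo foo | valkey-cli -h 127.0.0.1 -p 6379 -n 9 -x set key"
#   path mode:  valkey-cli -h 127.0.0.1 -p 6379 -n 9 -x set key < /path/to/file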
proc run_cli {args} {
_run_cli [srv host] [srv port] $::dbnum {} {*}$args
}
proc run_cli_with_input_pipe {mode cmd args} {
if {$mode == "x" } {
_run_cli [srv host] [srv port] $::dbnum [list pipe $cmd] -x {*}$args
} elseif {$mode == "X"} {
_run_cli [srv host] [srv port] $::dbnum [list pipe $cmd] -X tag {*}$args
}
}
proc run_cli_with_input_file {mode path args} {
if {$mode == "x" } {
_run_cli [srv host] [srv port] $::dbnum [list path $path] -x {*}$args
} elseif {$mode == "X"} {
_run_cli [srv host] [srv port] $::dbnum [list path $path] -X tag {*}$args
}
}
proc run_cli_host_port_db {host port db args} {
_run_cli $host $port $db {} {*}$args
}
proc test_nontty_cli {name code} {
test "Non-interactive non-TTY CLI: $name" $code
}
# Helpers to run tests where stdout is a tty (fake it)
proc test_tty_cli {name code} {
set ::env(FAKETTY) 1
test "Non-interactive TTY CLI: $name" $code
unset ::env(FAKETTY)
}
test_interactive_cli "INFO response should be printed raw" {
set lines [split [run_command $fd info] "\n"]
foreach line $lines {
# Info lines end in \r\n, so they now end in \r.
if {![regexp {^\r$|^#|^[^#:]+:} $line]} {
fail "Malformed info line: $line"
}
}
}
test_interactive_cli "Status reply" {
assert_equal "OK" [run_command $fd "set key foo"]
}
test_interactive_cli "Integer reply" {
assert_equal "(integer) 1" [run_command $fd "incr counter"]
}
test_interactive_cli "Bulk reply" {
r set key foo
assert_equal "\"foo\"" [run_command $fd "get key"]
}
test_interactive_cli "Multi-bulk reply" {
r rpush list foo
r rpush list bar
assert_equal "1) \"foo\"\n2) \"bar\"" [run_command $fd "lrange list 0 -1"]
}
test_interactive_cli "Parsing quotes" {
assert_equal "OK" [run_command $fd "set key \"bar\""]
assert_equal "bar" [r get key]
assert_equal "OK" [run_command $fd "set key \" bar \""]
assert_equal " bar " [r get key]
assert_equal "OK" [run_command $fd "set key \"\\\"bar\\\"\""]
assert_equal "\"bar\"" [r get key]
assert_equal "OK" [run_command $fd "set key \"\tbar\t\""]
assert_equal "\tbar\t" [r get key]
# invalid quotation
assert_equal "Invalid argument(s)" [run_command $fd "get \"\"key"]
assert_equal "Invalid argument(s)" [run_command $fd "get \"key\"x"]
# quotes after the argument are weird, but should be allowed
assert_equal "OK" [run_command $fd "set key\"\" bar"]
assert_equal "bar" [r get key]
}
test_interactive_cli "Subscribed mode" {
if {$::force_resp3} {
run_command $fd "hello 3"
}
set reading "Reading messages... (press Ctrl-C to quit or any key to type command)\r"
set erase "\033\[K"; # Erases the "Reading messages..." line.
# Subscribe to some channels.
set sub1 "1) \"subscribe\"\n2) \"ch1\"\n3) (integer) 1\n"
set sub2 "1) \"subscribe\"\n2) \"ch2\"\n3) (integer) 2\n"
set sub3 "1) \"subscribe\"\n2) \"ch3\"\n3) (integer) 3\n"
assert_equal $sub1$sub2$sub3$reading \
[run_command $fd "subscribe ch1 ch2 ch3"]
# Receive pubsub message.
r publish ch2 hello
set message "1) \"message\"\n2) \"ch2\"\n3) \"hello\"\n"
assert_equal $erase$message$reading [read_cli $fd]
# Unsubscribe some.
set unsub1 "1) \"unsubscribe\"\n2) \"ch1\"\n3) (integer) 2\n"
set unsub2 "1) \"unsubscribe\"\n2) \"ch2\"\n3) (integer) 1\n"
assert_equal $erase$unsub1$unsub2$reading \
[run_command $fd "unsubscribe ch1 ch2"]
run_command $fd "hello 2"
# Command forbidden in subscribed mode (RESP2).
set err "(error) ERR Can't execute 'get': only (P|S)SUBSCRIBE / (P|S)UNSUBSCRIBE / PING / QUIT / RESET are allowed in this context\n"
assert_equal $erase$err$reading [run_command $fd "get k"]
# Command allowed in subscribed mode.
set pong "1) \"pong\"\n2) \"\"\n"
assert_equal $erase$pong$reading [run_command $fd "ping"]
# Reset exits subscribed mode.
assert_equal ${erase}RESET [run_command $fd "reset"]
assert_equal PONG [run_command $fd "ping"]
# Check TTY output of push messages in RESP3 has ")" prefix (to be changed to ">" in the future).
assert_match "1#*" [run_command $fd "hello 3"]
set sub1 "1) \"subscribe\"\n2) \"ch1\"\n3) (integer) 1\n"
assert_equal $sub1$reading \
[run_command $fd "subscribe ch1"]
}
test_interactive_nontty_cli "Subscribed mode" {
# Raw output and no "Reading messages..." info message.
# Use RESP3 in this test case.
assert_match {*proto 3*} [run_command $fd "hello 3"]
# Subscribe to some channels.
set sub1 "subscribe\nch1\n1"
set sub2 "subscribe\nch2\n2"
assert_equal $sub1\n$sub2 \
[run_command $fd "subscribe ch1 ch2"]
assert_equal OK [run_command $fd "client tracking on"]
assert_equal OK [run_command $fd "set k 42"]
assert_equal 42 [run_command $fd "get k"]
# Interleaving invalidate and pubsub messages.
r publish ch1 hello
r del k
r publish ch2 world
set message1 "message\nch1\nhello"
set invalidate "invalidate\nk"
set message2 "message\nch2\nworld"
assert_equal $message1\n$invalidate\n$message2\n [read_cli $fd]
# Unsubscribe all.
set unsub1 "unsubscribe\nch1\n1"
set unsub2 "unsubscribe\nch2\n0"
assert_equal $unsub1\n$unsub2 [run_command $fd "unsubscribe ch1 ch2"]
}
test_tty_cli "Status reply" {
assert_equal "OK" [run_cli set key bar]
assert_equal "bar" [r get key]
}
test_tty_cli "Integer reply" {
r del counter
assert_equal "(integer) 1" [run_cli incr counter]
}
test_tty_cli "Bulk reply" {
r set key "tab\tnewline\n"
assert_equal "\"tab\\tnewline\\n\"" [run_cli get key]
}
test_tty_cli "Multi-bulk reply" {
r del list
r rpush list foo
r rpush list bar
assert_equal "1) \"foo\"\n2) \"bar\"" [run_cli lrange list 0 -1]
}
test_tty_cli "Read last argument from pipe" {
assert_equal "OK" [run_cli_with_input_pipe x "echo foo" set key]
assert_equal "foo\n" [r get key]
assert_equal "OK" [run_cli_with_input_pipe X "echo foo" set key2 tag]
assert_equal "foo\n" [r get key2]
}
test_tty_cli "Read last argument from file" {
set tmpfile [write_tmpfile "from file"]
assert_equal "OK" [run_cli_with_input_file x $tmpfile set key]
assert_equal "from file" [r get key]
assert_equal "OK" [run_cli_with_input_file X $tmpfile set key2 tag]
assert_equal "from file" [r get key2]
file delete $tmpfile
}
test_tty_cli "Escape character in JSON mode" {
# reverse solidus
r hset solidus \/ \/
assert_equal \/ \/ [run_cli hgetall solidus]
set escaped_reverse_solidus \"\\"
assert_equal $escaped_reverse_solidus $escaped_reverse_solidus [run_cli --json hgetall \/]
# non printable (0xF0 in ISO-8859-1, not UTF-8(0xC3 0xB0))
set eth "\u00f0\u0065"
r hset eth test $eth
assert_equal \"\\xf0e\" [run_cli hget eth test]
assert_equal \"\u00f0e\" [run_cli --json hget eth test]
assert_equal \"\\\\xf0e\" [run_cli --quoted-json hget eth test]
# control characters
r hset control test "Hello\x00\x01\x02\x03World"
assert_equal \"Hello\\u0000\\u0001\\u0002\\u0003World" [run_cli --json hget control test]
# non-string keys
r hset numkey 1 One
assert_equal \{\"1\":\"One\"\} [run_cli --json hgetall numkey]
# non-string, non-printable keys
r hset npkey "K\u0000\u0001ey" "V\u0000\u0001alue"
assert_equal \{\"K\\u0000\\u0001ey\":\"V\\u0000\\u0001alue\"\} [run_cli --json hgetall npkey]
assert_equal \{\"K\\\\x00\\\\x01ey\":\"V\\\\x00\\\\x01alue\"\} [run_cli --quoted-json hgetall npkey]
}
test_nontty_cli "Status reply" {
assert_equal "OK" [run_cli set key bar]
assert_equal "bar" [r get key]
}
test_nontty_cli "Integer reply" {
r del counter
assert_equal "1" [run_cli incr counter]
}
test_nontty_cli "Bulk reply" {
r set key "tab\tnewline\n"
assert_equal "tab\tnewline" [run_cli get key]
}
test_nontty_cli "Multi-bulk reply" {
r del list
r rpush list foo
r rpush list bar
assert_equal "foo\nbar" [run_cli lrange list 0 -1]
}
if {!$::tls} { ;# fake_redis_node doesn't support TLS
test_nontty_cli "ASK redirect test" {
# Set up two fake nodes.
set tclsh [info nameofexecutable]
set script "tests/helpers/fake_redis_node.tcl"
set port1 [find_available_port $::baseport $::portcount]
set port2 [find_available_port $::baseport $::portcount]
set p1 [exec $tclsh $script $port1 \
"SET foo bar" "-ASK 12182 127.0.0.1:$port2" &]
set p2 [exec $tclsh $script $port2 \
"ASKING" "+OK" \
"SET foo bar" "+OK" &]
# Make sure both fake nodes have started listening
wait_for_condition 50 50 {
[catch {close [socket "127.0.0.1" $port1]}] == 0 && \
[catch {close [socket "127.0.0.1" $port2]}] == 0
} else {
fail "Failed to start fake Valkey nodes"
}
# Run the cli in cluster mode (-c); it should follow the ASK redirect to the second node and succeed.
assert_equal "OK" [run_cli_host_port_db "127.0.0.1" $port1 0 -c SET foo bar]
}
}
test_nontty_cli "Quoted input arguments" {
r set "\x00\x00" "value"
assert_equal "value" [run_cli --quoted-input get {"\x00\x00"}]
}
test_nontty_cli "No accidental unquoting of input arguments" {
run_cli --quoted-input set {"\x41\x41"} quoted-val
run_cli set {"\x41\x41"} unquoted-val
assert_equal "quoted-val" [r get AA]
assert_equal "unquoted-val" [r get {"\x41\x41"}]
}
test_nontty_cli "Invalid quoted input arguments" {
catch {run_cli --quoted-input set {"Unterminated}} err
assert_match {*exited abnormally*} $err
# A single arg that unquotes to two arguments is also not expected
catch {run_cli --quoted-input set {"arg1" "arg2"}} err
assert_match {*exited abnormally*} $err
}
test_nontty_cli "Read last argument from pipe" {
assert_equal "OK" [run_cli_with_input_pipe x "echo foo" set key]
assert_equal "foo\n" [r get key]
assert_equal "OK" [run_cli_with_input_pipe X "echo foo" set key2 tag]
assert_equal "foo\n" [r get key2]
}
test_nontty_cli "Read last argument from file" {
set tmpfile [write_tmpfile "from file"]
assert_equal "OK" [run_cli_with_input_file x $tmpfile set key]
assert_equal "from file" [r get key]
assert_equal "OK" [run_cli_with_input_file X $tmpfile set key2 tag]
assert_equal "from file" [r get key2]
file delete $tmpfile
}
test_nontty_cli "Test command-line hinting - latest server" {
# cli will connect to the running server and will use COMMAND DOCS
catch {run_cli --test_hint_file tests/assets/test_cli_hint_suite.txt} output
assert_match "*SUCCESS*" $output
}
test_nontty_cli "Test command-line hinting - no server" {
# cli will fail to connect to the server and will use the cached commands.c
catch {run_cli -p 123 --test_hint_file tests/assets/test_cli_hint_suite.txt} output
assert_match "*SUCCESS*" $output
}
test_nontty_cli "Test command-line hinting - old server" {
# cli will connect to the server but will not use COMMAND DOCS,
# and complete the missing info from the cached commands.c
r ACL setuser clitest on nopass +@all -command|docs
catch {run_cli --user clitest -a nopass --no-auth-warning --test_hint_file tests/assets/test_cli_hint_suite.txt} output
assert_match "*SUCCESS*" $output
r acl deluser clitest
}
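# Dump the dataset (or only the functions, when functions_only is set) to an
# RDB file via valkey-cli, swap it in as dump.rdb, reload the server from it,
# and verify that only the dumped content survives the reload.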
proc test_valkey_cli_rdb_dump {functions_only} {
r flushdb
r function flush
set dir [lindex [r config get dir] 1]
assert_equal "OK" [r debug populate 100000 key 1000]
assert_equal "lib1" [r function load "#!lua name=lib1\nserver.register_function('func1', function() return 123 end)"]
if {$functions_only} {
set args "--functions-rdb $dir/cli.rdb"
} else {
set args "--rdb $dir/cli.rdb"
}
catch {run_cli {*}$args} output
assert_match {*Transfer finished with success*} $output
file delete "$dir/dump.rdb"
file rename "$dir/cli.rdb" "$dir/dump.rdb"
assert_equal "OK" [r set should-not-exist 1]
assert_equal "should_not_exist_func" [r function load "#!lua name=should_not_exist_func\nserver.register_function('should_not_exist_func', function() return 456 end)"]
assert_equal "OK" [r debug reload nosave]
assert_equal {} [r get should-not-exist]
assert_equal {{library_name lib1 engine LUA functions {{name func1 description {} flags {}}}}} [r function list]
if {$functions_only} {
assert_equal 0 [r dbsize]
} else {
assert_equal 100000 [r dbsize]
}
}
foreach {functions_only} {no yes} {
test "Dumping an RDB - functions only: $functions_only" {
# Disk-based master
assert_match "OK" [r config set repl-diskless-sync no]
test_valkey_cli_rdb_dump $functions_only
# Disk-less master
assert_match "OK" [r config set repl-diskless-sync yes]
assert_match "OK" [r config set repl-diskless-sync-delay 0]
test_valkey_cli_rdb_dump $functions_only
} {} {needs:repl needs:debug}
} ;# foreach functions_only
test "Scan mode" {
r flushdb
populate 1000 key: 1
# basic use
assert_equal 1000 [llength [split [run_cli --scan]]]
# pattern
assert_equal {key:2} [split [run_cli --scan --pattern "*:2"]]
# pattern matching with a quoted string
assert_equal {key:2} [split [run_cli --scan --quoted-pattern {"*:\x32"}]]
}
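# Start valkey-cli in --replica mode, wait until it shows up as an online
# replica, verify it streams the master's writes, then kill the replica link
# and expect the CLI to report that the server closed the connection.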
proc test_valkey_cli_repl {} {
set fd [open_cli "--replica"]
wait_for_condition 500 100 {
[string match {*slave0:*state=online*} [r info]]
} else {
fail "valkey-cli --replica did not connect"
}
for {set i 0} {$i < 100} {incr i} {
r set test-key test-value-$i
}
wait_for_condition 500 100 {
[string match {*test-value-99*} [read_cli $fd]]
} else {
fail "valkey-cli --replica didn't read commands"
}
fconfigure $fd -blocking true
r client kill type slave
catch { close_cli $fd } err
assert_match {*Server closed the connection*} $err
}
test "Connecting as a replica" {
# Disk-based master
assert_match "OK" [r config set repl-diskless-sync no]
test_valkey_cli_repl
# Disk-less master
assert_match "OK" [r config set repl-diskless-sync yes]
assert_match "OK" [r config set repl-diskless-sync-delay 0]
test_valkey_cli_repl
} {} {needs:repl}
test "Piping raw protocol" {
set cmds [tmpfile "cli_cmds"]
set cmds_fd [open $cmds "w"]
set cmds_count 2101
if {!$::singledb} {
puts $cmds_fd [formatCommand select 9]
incr cmds_count
}
puts $cmds_fd [formatCommand del test-counter]
for {set i 0} {$i < 1000} {incr i} {
puts $cmds_fd [formatCommand incr test-counter]
puts $cmds_fd [formatCommand set large-key [string repeat "x" 20000]]
}
for {set i 0} {$i < 100} {incr i} {
puts $cmds_fd [formatCommand set very-large-key [string repeat "x" 512000]]
}
close $cmds_fd
set cli_fd [open_cli "--pipe" $cmds]
fconfigure $cli_fd -blocking true
set output [read_cli $cli_fd]
assert_equal {1000} [r get test-counter]
assert_match "*All data transferred*errors: 0*replies: ${cmds_count}*" $output
file delete $cmds
}
test "Options -X with illegal argument" {
assert_error "*-x and -X are mutually exclusive*" {run_cli -x -X tag}
assert_error "*Unrecognized option or bad number*" {run_cli -X}
assert_error "*tag not match*" {run_cli_with_input_pipe X "echo foo" set key wrong_tag}
}
test "DUMP RESTORE with -x option" {
set cmdline [valkeycli [srv host] [srv port]]
exec {*}$cmdline DEL set new_set
exec {*}$cmdline SADD set 1 2 3 4 5 6
assert_equal 6 [exec {*}$cmdline SCARD set]
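# -D "" drops the trailing delimiter after the raw DUMP payload so the piped
# bytes can be fed unchanged into RESTORE via -x.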
assert_equal "OK" [exec {*}$cmdline -D "" --raw DUMP set | \
{*}$cmdline -x RESTORE new_set 0]
assert_equal 6 [exec {*}$cmdline SCARD new_set]
assert_equal "1\n2\n3\n4\n5\n6" [exec {*}$cmdline SMEMBERS new_set]
}
test "DUMP RESTORE with -X option" {
set cmdline [valkeycli [srv host] [srv port]]
exec {*}$cmdline DEL zset new_zset
exec {*}$cmdline ZADD zset 1 a 2 b 3 c
assert_equal 3 [exec {*}$cmdline ZCARD zset]
assert_equal "OK" [exec {*}$cmdline -D "" --raw DUMP zset | \
{*}$cmdline -X dump_tag RESTORE new_zset 0 dump_tag REPLACE]
assert_equal 3 [exec {*}$cmdline ZCARD new_zset]
assert_equal "a\n1\nb\n2\nc\n3" [exec {*}$cmdline ZRANGE new_zset 0 -1 WITHSCORES]
}
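# The following tests drive valkey-cli interactively and use its ":get pubsub"
# directive, which the assertions below rely on to report whether the CLI is
# currently in subscribed (pub/sub) mode ("1") or not ("0").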
test "valkey-cli pubsub mode with single standard channel subscription" {
set fd [open_cli]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
write_cli $fd "SUBSCRIBE ch1"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "1" $pubsub_status
write_cli $fd "UNSUBSCRIBE ch1"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
close_cli $fd
}
test "valkey-cli pubsub mode with multiple standard channel subscriptions" {
set fd [open_cli]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
write_cli $fd "SUBSCRIBE ch1 ch2 ch3"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "1" $pubsub_status
write_cli $fd "UNSUBSCRIBE"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
close_cli $fd
}
test "valkey-cli pubsub mode with single shard channel subscription" {
set fd [open_cli]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
write_cli $fd "SSUBSCRIBE schannel1"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "1" $pubsub_status
write_cli $fd "SUNSUBSCRIBE schannel1"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
close_cli $fd
}
test "valkey-cli pubsub mode with multiple shard channel subscriptions" {
set fd [open_cli]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
write_cli $fd "SSUBSCRIBE {schannel}1 {schannel}2 {schannel}3"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "1" $pubsub_status
write_cli $fd "SUNSUBSCRIBE"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
close_cli $fd
}
test "valkey-cli pubsub mode with single pattern channel subscription" {
set fd [open_cli]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
write_cli $fd "PSUBSCRIBE pattern1*"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "1" $pubsub_status
write_cli $fd "PUNSUBSCRIBE pattern1*"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
close_cli $fd
}
test "valkey-cli pubsub mode with multiple pattern channel subscriptions" {
set fd [open_cli]
write_cli $fd "PSUBSCRIBE pattern1* pattern2* pattern3*"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "1" $pubsub_status
write_cli $fd "PUNSUBSCRIBE"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
close_cli $fd
}
test "valkey-cli pubsub mode when subscribing to the same channel" {
set fd [open_cli]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
write_cli $fd "SUBSCRIBE ch1 ch1"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "1" $pubsub_status
write_cli $fd "UNSUBSCRIBE"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
close_cli $fd
}
test "valkey-cli pubsub mode with multiple subscription types" {
set fd [open_cli]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
write_cli $fd "SUBSCRIBE ch1 ch2 ch3"
set response [read_cli $fd]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "1" $pubsub_status
write_cli $fd "PSUBSCRIBE pattern*"
set response [read_cli $fd]
set lines [split $response "\n"]
assert_equal "psubscribe" [lindex $lines 0]
assert_equal "pattern*" [lindex $lines 1]
assert_equal "4" [lindex $lines 2]
write_cli $fd "SSUBSCRIBE schannel"
set response [read_cli $fd]
set lines [split $response "\n"]
assert_equal "ssubscribe" [lindex $lines 0]
assert_equal "schannel" [lindex $lines 1]
assert_equal "1" [lindex $lines 2]
write_cli $fd "PUNSUBSCRIBE pattern*"
set response [read_cli $fd]
set lines [split $response "\n"]
assert_equal "punsubscribe" [lindex $lines 0]
assert_equal "pattern*" [lindex $lines 1]
assert_equal "3" [lindex $lines 2]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "1" $pubsub_status
write_cli $fd "SUNSUBSCRIBE schannel"
set response [read_cli $fd]
set lines [split $response "\n"]
assert_equal "sunsubscribe" [lindex $lines 0]
assert_equal "schannel" [lindex $lines 1]
assert_equal "0" [lindex $lines 2]
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "1" $pubsub_status
write_cli $fd "UNSUBSCRIBE"
set response [read_cli $fd]
# Verify pubsub mode is no longer active
write_cli $fd ":get pubsub"
set pubsub_status [string trim [read_cli $fd]]
assert_equal "0" $pubsub_status
close_cli $fd
}
test "Valid Connection Scheme: redis://" {
set cmdline [valkeycliuri "redis://" [srv host] [srv port]]
assert_equal {PONG} [exec {*}$cmdline PING]
}
test "Valid Connection Scheme: valkey://" {
set cmdline [valkeycliuri "valkey://" [srv host] [srv port]]
assert_equal {PONG} [exec {*}$cmdline PING]
}
if {$::tls} {
test "Valid Connection Scheme: rediss://" {
set cmdline [valkeycliuri "rediss://" [srv host] [srv port]]
assert_equal {PONG} [exec {*}$cmdline PING]
}
test "Valid Connection Scheme: valkeys://" {
set cmdline [valkeycliuri "valkeys://" [srv host] [srv port]]
assert_equal {PONG} [exec {*}$cmdline PING]
}
}
}