Fix redis-cli cluster add-node race in cli.tcl (#11349)

There is a race condition in the test:
```
*** [err]: redis-cli --cluster add-node with cluster-port in tests/unit/cluster/cli.tcl
Expected '5' to be equal to '4' {assert_equal 5 [CI 0 cluster_known_nodes]} proc ::test)
```

When using the cli to add a node, there is a potential race condition
in which all nodes report cluster state ok even though the added
node has not yet met all the cluster nodes.

The comment and the fix were taken from #11221; the same fix is also
applied in several other similar places.
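In short, waiting for `cluster_state` to be ok is not sufficient on its own; the test must also wait until the nodes report the expected `cluster_known_nodes` count. A minimal sketch of the waiting idiom, reusing the suite's `wait_for_condition` and `CI` helpers (the node count and failure message here are illustrative, not taken from the patch):
```
# cluster_state can already be "ok" on node 0 while gossip about the
# added node is still propagating, so wait for the node count as well.
wait_for_condition 1000 50 {
    [CI 0 cluster_known_nodes] == 5
} else {
    fail "node 0 never learned about the added node"
}
```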
Binbin 2022-10-03 14:21:41 +08:00 committed by GitHub
parent ff80809053
commit a549b78c48

tests/unit/cluster/cli.tcl

```
@@ -172,6 +172,8 @@ start_multiple_servers 5 [list overrides $base_conf] {
                         127.0.0.1:[srv -3 port] \
                         127.0.0.1:[srv 0 port]
 
+        wait_for_cluster_size 4
+
         wait_for_condition 1000 50 {
             [CI 0 cluster_state] eq {ok} &&
             [CI 1 cluster_state] eq {ok} &&
@@ -230,7 +232,7 @@ test {Migrate the last slot away from a node using redis-cli}
                         127.0.0.1:[srv -3 port] \
                         127.0.0.1:[srv 0 port]
 
-        # First we wait for new node to be recognized by entire cluster
+        # First we wait for new node to be recognized by entire cluster
         wait_for_cluster_size 4
 
         wait_for_condition 1000 50 {
@@ -350,6 +352,8 @@ start_server [list overrides [list cluster-enabled yes cluster-node-timeout 1 cl
                         127.0.0.1:[srv -3 port] \
                         127.0.0.1:[srv 0 port]
 
+        wait_for_cluster_size 4
+
         wait_for_condition 1000 50 {
             [CI 0 cluster_state] eq {ok} &&
             [CI 1 cluster_state] eq {ok} &&
@@ -364,6 +368,8 @@ start_server [list overrides [list cluster-enabled yes cluster-node-timeout 1 cl
                         127.0.0.1:[srv -4 port] \
                         127.0.0.1:[srv 0 port]
 
+        wait_for_cluster_size 5
+
         wait_for_condition 1000 50 {
             [CI 0 cluster_state] eq {ok} &&
             [CI 1 cluster_state] eq {ok} &&
```
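
The waiting itself is done by `wait_for_cluster_size`, which blocks until every node agrees on the number of known nodes. As a rough sketch only (the real helper lives in the suite's cluster utilities; the proc name and the four hard-coded nodes here are illustrative):
```
# Hypothetical sketch: poll each of the original nodes until all of
# them report the expected cluster_known_nodes, i.e. gossip converged.
proc wait_for_cluster_size_sketch {expected} {
    wait_for_condition 1000 50 {
        [CI 0 cluster_known_nodes] == $expected &&
        [CI 1 cluster_known_nodes] == $expected &&
        [CI 2 cluster_known_nodes] == $expected &&
        [CI 3 cluster_known_nodes] == $expected
    } else {
        fail "cluster size did not converge to $expected"
    }
}
```
Only after this converges do the `cluster_state` checks above become meaningful, which is exactly the ordering the patch enforces.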