Console Output

Skipping 1,788 KB..
++ cat
+ credential_json='
    {
        "client_id":"1234",
        "client_secret":"e0KVlA2EiBfjoN13olyZd2kv1KL",
        "audience":"cdc-api-uri",
        "issuer_url":"http://localhost:9096",
        "type": "client_credentials"
    }'
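# Note: the JSON above matches the credentials-file shape Pulsar's OAuth2
# client_credentials flow consumes. This run takes the normal branch, so the
# credentials are never used here; the following is only a hedged sketch of
# how such a file could be wired into pulsar-admin (file path, admin URL, and
# the exact auth-params keys are assumptions, not taken from this log):
echo "$credential_json" > /tmp/credential.json
/usr/local/pulsar/bin/pulsar-admin \
    --admin-url http://localhost:8080 \
    --auth-plugin org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2 \
    --auth-params '{"type":"client_credentials","privateKey":"file:///tmp/credential.json","issuerUrl":"http://localhost:9096","audience":"cdc-api-uri"}' \
    namespaces list public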
++ cat
+ cert_server_conf='[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn

[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=critical, digitalSignature, keyEncipherment
extendedKeyUsage=serverAuth
subjectAltName=@alt_names

[ dn ]
CN = server

[ alt_names ]
DNS.1 = localhost
IP.1 = 127.0.0.1'
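# Note: the block above is a standard OpenSSL request config; it is only
# consumed in mtls mode, which this run skips. A sketch of how such a config
# is typically used to mint a server certificate, assuming a local CA pair
# (ca.crt/ca.key) that is not part of this log:
echo "$cert_server_conf" > server.cnf
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -config server.cnf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out server.crt -days 365 -extensions v3_ext -extfile server.cnf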
+ echo '
webServiceUrl=http://localhost:8080/
brokerServiceUrl=pulsar://localhost:6650/'
+ cp /usr/local/pulsar/conf/standalone.conf /tmp/tidb_cdc_test/changefeed_auto_stop/pulsar_standalone.conf
+ pulsar_port=6650
+ '[' normal == mtls ']'
+ '[' normal == oauth ']'
+ echo 'no cluster type specified, using default configuration.'
no cluster type specified, using default configuration.
++ date
+ echo '[Mon May  6 17:25:01 CST 2024] <<<<<< START pulsar cluster in normal mode in changefeed_auto_stop case >>>>>>'
[Mon May  6 17:25:01 CST 2024] <<<<<< START pulsar cluster in normal mode in changefeed_auto_stop case >>>>>>
+ echo 'Waiting for pulsar port to be ready...'
Waiting for pulsar port to be ready...
+ i=0
+ /usr/local/pulsar/bin/pulsar standalone --config /tmp/tidb_cdc_test/changefeed_auto_stop/pulsar_standalone.conf -nfw --metadata-dir /tmp/tidb_cdc_test/changefeed_auto_stop/pulsar-metadata --bookkeeper-dir /tmp/tidb_cdc_test/changefeed_auto_stop/pulsar-bookie
+ nc -z localhost 6650
+ i=1
+ '[' 1 -gt 20 ']'
+ sleep 2
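# Note: the nc -z / sleep 2 trace above comes from a wait-for-port loop. A
# minimal reconstruction (the real helper may differ in naming and in how it
# reports failure):
i=0
while ! nc -z localhost 6650; do
    i=$((i + 1))
    if [ $i -gt 20 ]; then
        echo "pulsar port 6650 not ready after 20 tries, giving up"
        exit 1
    fi
    sleep 2
done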
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34e2173c0013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-mxvb-1q05l, pid:15142, start at 2024-05-06 17:25:01.037792688 +0800 CST m=+7.188940048	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:27:01.045 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:25:01.007 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:15:01.007 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
+ nc -z localhost 6650
+ i=2
+ '[' 2 -gt 20 ']'
+ sleep 2
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34e2173c0013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-mxvb-1q05l, pid:15142, start at 2024-05-06 17:25:01.037792688 +0800 CST m=+7.188940048	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:27:01.045 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:25:01.007 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:15:01.007 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34e1fefc0003	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-mxvb-1q05l, pid:15227, start at 2024-05-06 17:24:59.456908109 +0800 CST m=+5.553099412	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:26:59.464 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:24:59.455 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:14:59.455 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v8.2.0-alpha-17-g8e50de84e
Edition:         Community
Git Commit Hash: 8e50de84e6d6ecdcc108990217b70b6bb3f50271
Git Branch:      HEAD
UTC Build Time:  2024-05-06 04:04:42
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO
Compiler:        clang++ 13.0.0

Raft Proxy
Git Commit Hash:   7dc50b4eb06124e31f03adb06c20ff7ab61c5f79
Git Commit Branch: HEAD
UTC Build Time:    2024-05-06 04:09:34
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:   external-jemalloc portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["8e50de84e6d6ecdcc108990217b70b6bb3f50271"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status_with_redo/tiflash/db/proxy"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/proxy.log"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status_with_redo/tiflash-proxy.toml"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v8.2.0-alpha-17-g8e50de84e"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ nc -z localhost 6650
+ echo 'Waiting for pulsar namespace to be ready...'
Waiting for pulsar namespace to be ready...
+ i=0
+ /usr/local/pulsar/bin/pulsar-admin namespaces list public
+ cd /tmp/tidb_cdc_test/synced_status_with_redo
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
+ is_tls=false
+ '[' false == true ']'
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status_with_redo.cli.16588.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='449573684944830465
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449573684944830465 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ start_ts=449573684944830465
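# Note: run_cdc_cli_tso_query evidently pipes the CLI output through awk to
# keep only the TSO, since the cli appends PASS/coverage noise (see the trace
# above). A sketch under that assumption:
tso_output=$(cdc.test cli tso query --pd=http://127.0.0.1:2379)
start_ts=$(echo "$tso_output" | awk -F ' ' '{print $1}')
echo "using start_ts=$start_ts"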
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status_with_redo --binary cdc.test
[Mon May  6 17:25:08 CST 2024] <<<<<< START cdc server in synced_status_with_redo case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status_with_redo.1662316625.out server --log-file /tmp/tidb_cdc_test/synced_status_with_redo/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status_with_redo/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
public/default
++ date
+ echo '[Mon May  6 17:25:08 CST 2024] <<<<<< pulsar is ready >>>>>>'
[Mon May  6 17:25:08 CST 2024] <<<<<< pulsar is ready >>>>>>
[Mon May  6 17:25:09 CST 2024] <<<<<< START Pulsar consumer in changefeed_auto_stop case >>>>>>
check_changefeed_state http://127.0.0.1:2379 c3c5677f-d54a-4b7b-9d14-e2acebda0cea normal null
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=c3c5677f-d54a-4b7b-9d14-e2acebda0cea
+ expected_state=normal
+ error_msg=null
+ tls_dir=null
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c c3c5677f-d54a-4b7b-9d14-e2acebda0cea -s
+ info='{
  "upstream_id": 7365815130060939486,
  "namespace": "default",
  "id": "c3c5677f-d54a-4b7b-9d14-e2acebda0cea",
  "state": "normal",
  "checkpoint_tso": 449573680816586753,
  "checkpoint_time": "2024-05-06 17:24:50.908",
  "error": null
}'
+ echo '{
  "upstream_id": 7365815130060939486,
  "namespace": "default",
  "id": "c3c5677f-d54a-4b7b-9d14-e2acebda0cea",
  "state": "normal",
  "checkpoint_tso": 449573680816586753,
  "checkpoint_time": "2024-05-06 17:24:50.908",
  "error": null
}'
{
  "upstream_id": 7365815130060939486,
  "namespace": "default",
  "id": "c3c5677f-d54a-4b7b-9d14-e2acebda0cea",
  "state": "normal",
  "checkpoint_tso": 449573680816586753,
  "checkpoint_time": "2024-05-06 17:24:50.908",
  "error": null
}
++ echo '{' '"upstream_id":' 7365815130060939486, '"namespace":' '"default",' '"id":' '"c3c5677f-d54a-4b7b-9d14-e2acebda0cea",' '"state":' '"normal",' '"checkpoint_tso":' 449573680816586753, '"checkpoint_time":' '"2024-05-06' '17:24:50.908",' '"error":' null '}'
++ jq -r .state
+ state=normal
+ [[ ! normal == \n\o\r\m\a\l ]]
++ echo '{' '"upstream_id":' 7365815130060939486, '"namespace":' '"default",' '"id":' '"c3c5677f-d54a-4b7b-9d14-e2acebda0cea",' '"state":' '"normal",' '"checkpoint_tso":' 449573680816586753, '"checkpoint_time":' '"2024-05-06' '17:24:50.908",' '"error":' null '}'
++ jq -r .error.message
+ message=null
+ [[ ! null =~ null ]]
run task successfully
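# Note: check_changefeed_state, as traced above, parses `cdc cli changefeed
# query` output with jq and asserts on .state and .error.message. A hedged
# reconstruction (variable names follow the trace; exact helper code may
# differ):
info=$(cdc cli changefeed query --pd=http://127.0.0.1:2379 -c "$changefeed_id" -s)
state=$(echo "$info" | jq -r .state)
if [[ ! "$state" == "$expected_state" ]]; then
    echo "unexpected changefeed state: $state"
    exit 1
fi
message=$(echo "$info" | jq -r .error.message)
if [[ ! "$message" =~ $error_msg ]]; then
    echo "unexpected changefeed error: $message"
    exit 1
fi
echo "run task successfully"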
table changefeed_auto_stop_1.usertable not exists for 1-th check, retry later
table changefeed_auto_stop_1.usertable exists
table changefeed_auto_stop_2.usertable not exists for 1-th check, retry later
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
* Server auth using Basic with user 'ticdc'
> GET /debug/info HTTP/1.1
> Authorization: Basic dGljZGM6dGljZGNfc2VjcmV0
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Mon, 06 May 2024 09:25:12 GMT
< Content-Length: 816
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/7a6046ad-5917-4bc7-b406-dd9fca450387
	{"id":"7a6046ad-5917-4bc7-b406-dd9fca450387","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987508}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d385206d9
	7a6046ad-5917-4bc7-b406-dd9fca450387

/tidb/cdc/default/default/upstream/7365815183365363670
	{"id":7365815183365363670,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/7a6046ad-5917-4bc7-b406-dd9fca450387
	{"id":"7a6046ad-5917-4bc7-b406-dd9fca450387","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987508}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d385206d9
	7a6046ad-5917-4bc7-b406-dd9fca450387

/tidb/cdc/default/default/upstream/7365815183365363670
	{"id":7365815183365363670,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/7a6046ad-5917-4bc7-b406-dd9fca450387
	{"id":"7a6046ad-5917-4bc7-b406-dd9fca450387","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987508}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d385206d9
	7a6046ad-5917-4bc7-b406-dd9fca450387

/tidb/cdc/default/default/upstream/7365815183365363670
	{"id":7365815183365363670,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
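# Note: the curl loop traced above polls the CDC debug endpoint until the
# response contains 'etcd info'. A sketch of that readiness probe (attempt
# count and sleep match the trace; the failure handling is an assumption):
for ((i = 0; i <= 50; i++)); do
    res=$(curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret)
    if echo "$res" | grep -q 'failed to get info:'; then
        echo "cdc server reported: failed to get info"
        exit 1
    fi
    if echo "$res" | grep -q 'etcd info'; then
        break
    fi
    if [ $i -eq 50 ]; then
        echo "cdc server not ready after 50 tries"
        exit 1
    fi
    sleep 3
done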
+ config_path=conf/changefeed-redo.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=449573684944830465 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/tests/integration_tests/synced_status_with_redo/conf/changefeed-redo.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status_with_redo.cli.16680.out cli changefeed create --start-ts=449573684944830465 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/tests/integration_tests/synced_status_with_redo/conf/changefeed-redo.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7365815183365363670,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2024-05-06T17:25:12.51147552+08:00","start_ts":449573684944830465,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"enable_table_monitor":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64","output_old_value":false,"output_handle_key":false},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"debezium_disable_schema":false,"debezium":{"output_old_value":true},"open":{"output_old_value":true}},"consistent":{"level":"eventual","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"storage":"file:///tmp/tidb_cdc_test/synced_status/redo","use_file_backend":false,"memory_usage":{"memory_quota_percentage":50}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v8.2.0-alpha-23-g3bdd6915f","resolved_ts":449573684944830465,"checkpoint_ts":449573684944830465,"checkpoint_time":"2024-05-06 17:25:06.656"}
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
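# Note: the --config file drives the "consistent" block visible in the Info
# JSON above. A hedged guess at the relevant part of conf/changefeed-redo.toml,
# written here as a heredoc (key names follow TiCDC's TOML conventions and the
# values are read off the JSON; the actual checked-in file may differ):
cat > conf/changefeed-redo.toml <<'EOF'
[consistent]
level = "eventual"
max-log-size = 64
flush-interval = 2000
storage = "file:///tmp/tidb_cdc_test/synced_status/redo"
EOF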
table changefeed_auto_stop_2.usertable not exists for 2-th check, retry later
+ set +x
+ run_sql 'USE TEST;Create table t1(a int primary key, b int);insert into t1 values(1,2);insert into t1 values(2,3);'
+ check_table_exists test.t1 127.0.0.1 3306
table test.t1 not exists for 1-th check, retry later
table changefeed_auto_stop_2.usertable exists
table changefeed_auto_stop_3.usertable exists
table changefeed_auto_stop_4.usertable not exists for 1-th check, retry later
table test.t1 exists
+ sleep 5
table changefeed_auto_stop_4.usertable exists
check diff failed 1-th time, retry later
check diff successfully
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
+ kill_tikv
++ ps aux
++ grep tikv-server
++ grep /tmp/tidb_cdc_test/synced_status_with_redo
+ info='jenkins    14381 21.1  0.5 4690108 2207356 ?     Sl   17:24   0:06 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20160 --status-addr 127.0.0.1:20181 --log-file /tmp/tidb_cdc_test/synced_status_with_redo/tikv1.log --log-level debug -C /tmp/tidb_cdc_test/synced_status_with_redo/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status_with_redo/tikv1
jenkins    14382 21.2  0.5 4696764 2219604 ?     Sl   17:24   0:06 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20161 --status-addr 127.0.0.1:20182 --log-file /tmp/tidb_cdc_test/synced_status_with_redo/tikv2.log --log-level debug -C /tmp/tidb_cdc_test/synced_status_with_redo/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status_with_redo/tikv2
jenkins    14383 29.4  0.5 4731064 2274568 ?     Sl   17:24   0:08 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20162 --status-addr 127.0.0.1:20183 --log-file /tmp/tidb_cdc_test/synced_status_with_redo/tikv3.log --log-level debug -C /tmp/tidb_cdc_test/synced_status_with_redo/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status_with_redo/tikv3
jenkins    14385 28.1  0.5 4723384 2264984 ?     Sl   17:24   0:08 tikv-server --pd 127.0.0.1:2479 -A 127.0.0.1:21160 --status-addr 127.0.0.1:21180 --log-file /tmp/tidb_cdc_test/synced_status_with_redo/tikv_down.log --log-level debug -C /tmp/tidb_cdc_test/synced_status_with_redo/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status_with_redo/tikv_down'
++ ps aux
++ grep tikv-server
++ grep /tmp/tidb_cdc_test/synced_status_with_redo
++ awk '{print $2}'
++ xargs kill -9
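# Note: per the trace, kill_tikv first captures the matching process list for
# logging, then selects the tikv-server processes belonging to this test's
# workdir and kills them hard:
ps aux | grep tikv-server | grep /tmp/tidb_cdc_test/synced_status_with_redo \
    | awk '{print $2}' | xargs kill -9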
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   243  100   243    0     0   1769      0 --:--:-- --:--:-- --:--:--  1773
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-06 17:25:20.106","puller_resolved_ts":"2024-05-06 17:25:14.006","last_synced_ts":"2024-05-06 17:25:14.056","now_ts":"2024-05-06 17:25:21.000","info":"The data syncing is not finished, please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:25:20.106","puller_resolved_ts":"2024-05-06' '17:25:14.006","last_synced_ts":"2024-05-06' '17:25:14.056","now_ts":"2024-05-06' '17:25:21.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:25:20.106","puller_resolved_ts":"2024-05-06' '17:25:14.006","last_synced_ts":"2024-05-06' '17:25:14.056","now_ts":"2024-05-06' '17:25:21.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq -r .info
+ info='The data syncing is not finished, please wait'
+ target_message='The data syncing is not finished, please wait'
+ '[' 'The data syncing is not finished, please wait' '!=' 'The data syncing is not finished, please wait' ']'
+ sleep 130
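# Note: the synced-status assertion above queries the v2 API and checks both
# .synced and .info with jq. A hedged reconstruction of that check (expected
# values are the ones this run asserts right after killing TiKV):
synced_status=$(curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced)
status=$(echo "$synced_status" | jq .synced)
if [ "$status" != "false" ]; then
    echo "expected synced=false right after killing TiKV, got $status"
    exit 1
fi
info=$(echo "$synced_status" | jq -r .info)
target_message="The data syncing is not finished, please wait"
if [ "$info" != "$target_message" ]; then
    echo "unexpected info message: $info"
    exit 1
fi
sleep 130   # wait past the synced-check interval before re-querying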
wait process cdc.test exit for 3-th time...
cdc.test: no process found
wait process cdc.test exit for 4-th time...
process cdc.test already exit
[Mon May  6 17:25:21 CST 2024] <<<<<< run test case changefeed_auto_stop success! >>>>>>
<<< Run all test success >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-pull_cdc_integration_pulsar_test-1556/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   221  100   221    0     0   2805      0 --:--:-- --:--:-- --:--:--  2833
+ synced_status='{"synced":true,"sink_checkpoint_ts":"2024-05-06 17:26:24.461","puller_resolved_ts":"2024-05-06 17:26:16.460","last_synced_ts":"2024-05-06 17:24:08.311","now_ts":"2024-05-06 17:26:25.000","info":"Data syncing is finished"}'
++ echo '{"synced":true,"sink_checkpoint_ts":"2024-05-06' '17:26:24.461","puller_resolved_ts":"2024-05-06' '17:26:16.460","last_synced_ts":"2024-05-06' '17:24:08.311","now_ts":"2024-05-06' '17:26:25.000","info":"Data' syncing is 'finished"}'
++ jq .synced
+ status=true
+ '[' true '!=' true ']'
+ kill_pd
++ ps aux
++ grep pd-server
++ grep /tmp/tidb_cdc_test/synced_status
+ info='jenkins    18611  9.0  0.0 13562124 143848 ?     Sl   17:23   0:14 pd-server --advertise-client-urls http://127.0.0.1:2379 --client-urls http://0.0.0.0:2379 --advertise-peer-urls http://127.0.0.1:2380 --peer-urls http://0.0.0.0:2380 --config /tmp/tidb_cdc_test/synced_status/pd-config.toml --log-file /tmp/tidb_cdc_test/synced_status/pd1.log --data-dir /tmp/tidb_cdc_test/synced_status/pd1 --name=pd1 --initial-cluster=pd1=http://127.0.0.1:2380
jenkins    18672  6.0  0.0 13636048 140500 ?     Sl   17:23   0:09 pd-server --advertise-client-urls http://127.0.0.1:2479 --client-urls http://0.0.0.0:2479 --advertise-peer-urls http://127.0.0.1:2480 --peer-urls http://0.0.0.0:2480 --config /tmp/tidb_cdc_test/synced_status/pd-config.toml --log-file /tmp/tidb_cdc_test/synced_status/down_pd.log --data-dir /tmp/tidb_cdc_test/synced_status/down_pd'
++ ps aux
++ grep pd-server
++ grep /tmp/tidb_cdc_test/synced_status
++ awk '{print $2}'
++ xargs kill -9
+ sleep 20
{"level":"warn","ts":1714987590.853403,"caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0037a5500/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1714987590.8534713,"caller":"v3@v3.5.12/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1714987590.9490829,"caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0026d08c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":1714987590.9491649,"caller":"v3@v3.5.12/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1714987591.5059147,"caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002309500/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1714987591.5060012,"caller":"v3@v3.5.12/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2024-05-06T17:26:35.362957+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001528000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:26:35.365201+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00109ec40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:26:35.421679+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00105ac40/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:26:41.364377+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001528000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:26:41.367031+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00109ec40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:26:41.422781+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00105ac40/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0{"level":"warn","ts":"2024-05-06T17:26:47.365929+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001528000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:26:47.368171+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00109ec40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:26:47.424794+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00105ac40/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0{"level":"warn","ts":"2024-05-06T17:26:53.367116+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001528000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:26:53.369582+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00109ec40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:26:53.426332+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00105ac40/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0{"level":"warn","ts":"2024-05-06T17:26:55.35437+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001528000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":"2024-05-06T17:26:55.354433+0800","logger":"etcd-client","caller":"v3@v3.5.12/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2024-05-06T17:26:55.355163+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00109ec40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":"2024-05-06T17:26:55.355221+0800","logger":"etcd-client","caller":"v3@v3.5.12/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2024-05-06T17:26:55.409601+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00105ac40/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":"2024-05-06T17:26:55.409661+0800","logger":"etcd-client","caller":"v3@v3.5.12/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}

  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:11 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:12 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:13 --:--:--     0{"level":"warn","ts":"2024-05-06T17:26:59.368978+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001528000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:26:59.371322+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00109ec40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:26:59.427419+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00105ac40/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:14 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:15 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:16 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:17 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:18 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0{"level":"warn","ts":"2024-05-06T17:27:05.370512+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001528000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:27:05.372133+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00109ec40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:27:05.4286+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00105ac40/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:20 --:--:--     0{"level":"warn","ts":1714987625.8549378,"caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0037a5500/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1714987625.8550234,"caller":"v3@v3.5.12/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1714987625.9501364,"caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0026d08c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":1714987625.9501965,"caller":"v3@v3.5.12/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1714987626.5067463,"caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002309500/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1714987626.5068061,"caller":"v3@v3.5.12/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}

  0     0    0     0    0     0      0      0 --:--:--  0:00:21 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:22 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:23 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:24 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:25 --:--:--     0{"level":"warn","ts":"2024-05-06T17:27:11.371532+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001528000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:27:11.373214+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00109ec40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-06T17:27:11.43063+0800","logger":"etcd-client","caller":"v3@v3.5.12/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00105ac40/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:26 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:27 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:28 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:29 --:--:--     0
100   135  100   135    0     0      4      0  0:00:33  0:00:30  0:00:03    27
100   135  100   135    0     0      4      0  0:00:33  0:00:30  0:00:03    33
+ synced_status='{
    "error_msg": "[CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded",
    "error_code": "CDC:ErrPDEtcdAPIError"
}'
++ jq -r .error_code
++ echo '{' '"error_msg":' '"[CDC:ErrPDEtcdAPIError]etcd' api call error: context deadline 'exceeded",' '"error_code":' '"CDC:ErrPDEtcdAPIError"' '}'
+ error_code=CDC:ErrPDEtcdAPIError
+ cleanup_process cdc.test
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
wait process cdc.test exit for 3-th time...
process cdc.test already exit
+ stop_tidb_cluster
+ run_case_with_unavailable_tikv conf/changefeed.toml
+ rm -rf /tmp/tidb_cdc_test/synced_status
+ mkdir -p /tmp/tidb_cdc_test/synced_status
+ start_tidb_cluster --workdir /tmp/tidb_cdc_test/synced_status
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
The 1 times to try to start tidb cluster...
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
start tidb cluster in /tmp/tidb_cdc_test/synced_status
Starting Upstream PD...
Release Version: v8.2.0-alpha-15-gf83febabe
Edition: Community
Git Commit Hash: f83febabecb98b95b098fd31a3664178f8a6b437
Git Branch: master
UTC Build Time:  2024-05-06 08:48:58
Starting Downstream PD...
Release Version: v8.2.0-alpha-15-gf83febabe
Edition: Community
Git Commit Hash: f83febabecb98b95b098fd31a3664178f8a6b437
Git Branch: master
UTC Build Time:  2024-05-06 08:48:58
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   8.2.0-alpha
Edition:           Community
Git Commit Hash:   88099c95a3c0c13a827c0795c3d45070264665e4
Git Commit Branch: master
UTC Build Time:    2024-05-06 06:37:19
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   8.2.0-alpha
Edition:           Community
Git Commit Hash:   88099c95a3c0c13a827c0795c3d45070264665e4
Git Commit Branch: master
UTC Build Time:    2024-05-06 06:37:19
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v8.2.0-alpha-82-g659f32a813
Edition: Community
Git Commit Hash: 659f32a81300f9dbcea9032b3c8e4825555ccfd1
Git Branch: master
UTC Build Time: 2024-05-06 07:58:59
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v8.2.0-alpha-82-g659f32a813
Edition: Community
Git Commit Hash: 659f32a81300f9dbcea9032b3c8e4825555ccfd1
Git Branch: master
UTC Build Time: 2024-05-06 07:58:59
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34ebc0d80022	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-k448-n8mw7, pid:22294, start at 2024-05-06 17:27:39.367065437 +0800 CST m=+5.159856813	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:29:39.373 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:27:39.368 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:17:39.368 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34ebc0d80022	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-k448-n8mw7, pid:22294, start at 2024-05-06 17:27:39.367065437 +0800 CST m=+5.159856813	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:29:39.373 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:27:39.368 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:17:39.368 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34ebc5700009	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-k448-n8mw7, pid:22367, start at 2024-05-06 17:27:39.623195287 +0800 CST m=+5.369457155	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:29:39.629 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:27:39.612 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:17:39.612 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v8.2.0-alpha-17-g8e50de84e
Edition:         Community
Git Commit Hash: 8e50de84e6d6ecdcc108990217b70b6bb3f50271
Git Branch:      HEAD
UTC Build Time:  2024-05-06 04:04:42
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO
Compiler:        clang++ 13.0.0

Raft Proxy
Git Commit Hash:   7dc50b4eb06124e31f03adb06c20ff7ab61c5f79
Git Commit Branch: HEAD
UTC Build Time:    2024-05-06 04:09:34
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:   external-jemalloc portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash-proxy.toml"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/log/proxy.log"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["8e50de84e6d6ecdcc108990217b70b6bb3f50271"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v8.2.0-alpha-17-g8e50de84e"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/db/proxy"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ cd /tmp/tidb_cdc_test/synced_status
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
+ is_tls=false
+ '[' false == true ']'
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.23748.out cli tso query --pd=http://127.0.0.1:2379
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   723  100   723    0     0   9095      0 --:--:-- --:--:-- --:--:--  9151
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-06 17:25:20.106","puller_resolved_ts":"2024-05-06 17:25:20.106","last_synced_ts":"2024-05-06 17:25:14.056","now_ts":"2024-05-06 17:27:31.000","info":"Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' \u003e '\''Resolved-Ts'\'' \u003e '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:25:20.106","puller_resolved_ts":"2024-05-06' '17:25:20.106","last_synced_ts":"2024-05-06' '17:25:14.056","now_ts":"2024-05-06' '17:27:31.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:25:20.106","puller_resolved_ts":"2024-05-06' '17:25:20.106","last_synced_ts":"2024-05-06' '17:25:14.056","now_ts":"2024-05-06' '17:27:31.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
++ jq -r .info
+ info='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ target_message='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ '[' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' '!=' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' ']'
+ cleanup_process cdc.test
wait for process cdc.test to exit, attempt 1...
wait for process cdc.test to exit, attempt 2...
wait for process cdc.test to exit, attempt 3...
cdc.test: no process found
wait for process cdc.test to exit, attempt 4...
process cdc.test has already exited
+ stop_tidb_cluster
+ run_case_with_unavailable_tidb conf/changefeed-redo.toml
+ rm -rf /tmp/tidb_cdc_test/synced_status_with_redo
+ mkdir -p /tmp/tidb_cdc_test/synced_status_with_redo
+ start_tidb_cluster --workdir /tmp/tidb_cdc_test/synced_status_with_redo
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
Attempt 1 to start the tidb cluster...
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
+ set +x
+ tso='449573725894868993
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449573725894868993 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ start_ts=449573725894868993
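# The start_ts extraction above, condensed: 'cdc cli tso query' prints the
# TSO followed by test-coverage noise, and awk keeps the first
# whitespace-separated field (shown here with the plain cdc binary rather
# than the instrumented cdc.test wrapper used by this run):
tso=$(cdc cli tso query --pd=http://127.0.0.1:2379)
start_ts=$(echo $tso | awk -F ' ' '{print $1}')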
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status --binary cdc.test
[Mon May  6 17:27:44 CST 2024] <<<<<< START cdc server in synced_status case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL'
+ [[ no != \n\o ]]
+ GO_FAILPOINTS=
+ (( i = 0 ))
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.2378223784.out server --log-file /tmp/tidb_cdc_test/synced_status/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status/cdc_data --cluster-id default
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
* Server auth using Basic with user 'ticdc'
> GET /debug/info HTTP/1.1
> Authorization: Basic dGljZGM6dGljZGNfc2VjcmV0
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Mon, 06 May 2024 09:27:47 GMT
< Content-Length: 816
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/baf915f7-5135-442b-9049-8ced16addd18
	{"id":"baf915f7-5135-442b-9049-8ced16addd18","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987664}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d3ac470d1
	baf915f7-5135-442b-9049-8ced16addd18

/tidb/cdc/default/default/upstream/7365815877014738898
	{"id":7365815877014738898,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/baf915f7-5135-442b-9049-8ced16addd18
	{"id":"baf915f7-5135-442b-9049-8ced16addd18","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987664}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d3ac470d1
	baf915f7-5135-442b-9049-8ced16addd18

/tidb/cdc/default/default/upstream/7365815877014738898
	{"id":7365815877014738898,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/baf915f7-5135-442b-9049-8ced16addd18
	{"id":"baf915f7-5135-442b-9049-8ced16addd18","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987664}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d3ac470d1
	baf915f7-5135-442b-9049-8ced16addd18

/tidb/cdc/default/default/upstream/7365815877014738898
	{"id":7365815877014738898,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
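# Shape of the readiness loop traced above: poll /debug/info (basic auth
# ticdc:ticdc_secret, as in this run) until the body contains 'etcd info'
# and not 'failed to get info:', retrying up to 50 times at 3s intervals:
for i in $(seq 0 50); do
    res=$(curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret)
    if ! echo "$res" | grep -q 'failed to get info:' && echo "$res" | grep -q 'etcd info'; then
        break
    fi
    [ "$i" -eq 50 ] && { echo 'cdc server did not come up in time'; exit 1; }
    sleep 3
done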
+ config_path=conf/changefeed.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=449573725894868993 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/tests/integration_tests/synced_status/conf/changefeed.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.23841.out cli changefeed create --start-ts=449573725894868993 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/tests/integration_tests/synced_status/conf/changefeed.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7365815877014738898,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2024-05-06T17:27:47.873502973+08:00","start_ts":449573725894868993,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"enable_table_monitor":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64","output_old_value":false,"output_handle_key":false},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"debezium_disable_schema":false,"debezium":{"output_old_value":true},"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v8.2.0-alpha-23-g3bdd6915f","resolved_ts":449573725894868993,"checkpoint_ts":449573725894868993,"checkpoint_time":"2024-05-06 17:27:42.868"}
PASS
coverage: 2.4% of statements in github.com/pingcap/tiflow/...
+ set +x
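# Minimal form of the changefeed creation above, with the long absolute
# config path shortened (flags exactly as used in this run):
cdc cli changefeed create \
    --start-ts=449573725894868993 \
    --sink-uri='mysql://root@127.0.0.1:3306/?max-txn-row=1' \
    --changefeed-id=test-1 \
    --config=conf/changefeed.toml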
+ run_sql 'USE TEST;Create table t1(a int primary key, b int);insert into t1 values(1,2);insert into t1 values(2,3);'
+ check_table_exists test.t1 127.0.0.1 3306
table test.t1 does not exist at check 1, retrying later
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
start tidb cluster in /tmp/tidb_cdc_test/synced_status_with_redo
Starting Upstream PD...
Release Version: v8.2.0-alpha-15-gf83febabe
Edition: Community
Git Commit Hash: f83febabecb98b95b098fd31a3664178f8a6b437
Git Branch: master
UTC Build Time:  2024-05-06 08:48:58
Starting Downstream PD...
Release Version: v8.2.0-alpha-15-gf83febabe
Edition: Community
Git Commit Hash: f83febabecb98b95b098fd31a3664178f8a6b437
Git Branch: master
UTC Build Time:  2024-05-06 08:48:58
Verifying upstream PD is started...
Verifying downstream PD is started...
table test.t1 exists
+ sleep 5
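# What this step verifies: run_sql writes t1 upstream, and check_table_exists
# polls the downstream TiDB on 127.0.0.1:3306 until the changefeed has
# replicated the DDL and rows; the trailing 'sleep 5' is presumably margin
# for the checkpoint to advance past the inserts before TiKV is killed below.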
Starting Upstream TiKV...
TiKV 
Release Version:   8.2.0-alpha
Edition:           Community
Git Commit Hash:   88099c95a3c0c13a827c0795c3d45070264665e4
Git Commit Branch: master
UTC Build Time:    2024-05-06 06:37:19
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   8.2.0-alpha
Edition:           Community
Git Commit Hash:   88099c95a3c0c13a827c0795c3d45070264665e4
Git Commit Branch: master
UTC Build Time:    2024-05-06 06:37:19
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v8.2.0-alpha-82-g659f32a813
Edition: Community
Git Commit Hash: 659f32a81300f9dbcea9032b3c8e4825555ccfd1
Git Branch: master
UTC Build Time: 2024-05-06 07:58:59
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v8.2.0-alpha-82-g659f32a813
Edition: Community
Git Commit Hash: 659f32a81300f9dbcea9032b3c8e4825555ccfd1
Git Branch: master
UTC Build Time: 2024-05-06 07:58:59
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
+ kill_tikv
++ ps aux
++ grep tikv-server
++ grep /tmp/tidb_cdc_test/synced_status
+ info='jenkins    21595 24.3  0.5 4691120 2204452 ?     Sl   17:27   0:05 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20160 --status-addr 127.0.0.1:20181 --log-file /tmp/tidb_cdc_test/synced_status/tikv1.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv1
jenkins    21596 24.7  0.5 4703412 2226904 ?     Sl   17:27   0:05 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20161 --status-addr 127.0.0.1:20182 --log-file /tmp/tidb_cdc_test/synced_status/tikv2.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv2
jenkins    21597 32.9  0.5 4727992 2274860 ?     Sl   17:27   0:07 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20162 --status-addr 127.0.0.1:20183 --log-file /tmp/tidb_cdc_test/synced_status/tikv3.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv3
jenkins    21601 32.4  0.5 4726456 2264388 ?     Sl   17:27   0:07 tikv-server --pd 127.0.0.1:2479 -A 127.0.0.1:21160 --status-addr 127.0.0.1:21180 --log-file /tmp/tidb_cdc_test/synced_status/tikv_down.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv_down'
++ ps aux
++ grep tikv-server
++ grep /tmp/tidb_cdc_test/synced_status
++ awk '{print $2}'
++ xargs kill -9
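# The kill_tikv step traced above, as one pipeline (pattern and workdir from
# the trace): find every tikv-server started under this case's workdir and
# SIGKILL it.
ps aux | grep tikv-server | grep /tmp/tidb_cdc_test/synced_status | awk '{print $2}' | xargs kill -9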
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   243  100   243    0     0   3093      0 --:--:-- --:--:-- --:--:--  3075
100   243  100   243    0     0   3089      0 --:--:-- --:--:-- --:--:--  3075
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-06 17:27:56.268","puller_resolved_ts":"2024-05-06 17:27:49.368","last_synced_ts":"2024-05-06 17:27:49.868","now_ts":"2024-05-06 17:27:56.000","info":"The data syncing is not finished, please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:27:56.268","puller_resolved_ts":"2024-05-06' '17:27:49.368","last_synced_ts":"2024-05-06' '17:27:49.868","now_ts":"2024-05-06' '17:27:56.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:27:56.268","puller_resolved_ts":"2024-05-06' '17:27:49.368","last_synced_ts":"2024-05-06' '17:27:49.868","now_ts":"2024-05-06' '17:27:56.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq -r .info
+ info='The data syncing is not finished, please wait'
+ target_message='The data syncing is not finished, please wait'
+ '[' 'The data syncing is not finished, please wait' '!=' 'The data syncing is not finished, please wait' ']'
+ sleep 130
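# The 130s sleep appears to cover the changefeed's synced_check_interval of
# 120s (see the "synced_status" block in the Info JSON above) plus margin,
# so the next /synced query reports the post-kill assessment rather than
# "please wait".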
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34ecffac000c	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-mxvb-1q05l, pid:17915, start at 2024-05-06 17:27:59.739316744 +0800 CST m=+5.474637830	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:29:59.748 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:27:59.723 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:17:59.723 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34ecffac000c	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-mxvb-1q05l, pid:17915, start at 2024-05-06 17:27:59.739316744 +0800 CST m=+5.474637830	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:29:59.748 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:27:59.723 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:17:59.723 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34ed01240014	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-mxvb-1q05l, pid:17987, start at 2024-05-06 17:27:59.849888134 +0800 CST m=+5.532404814	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:29:59.859 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:27:59.817 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:17:59.817 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v8.2.0-alpha-17-g8e50de84e
Edition:         Community
Git Commit Hash: 8e50de84e6d6ecdcc108990217b70b6bb3f50271
Git Branch:      HEAD
UTC Build Time:  2024-05-06 04:04:42
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO
Compiler:        clang++ 13.0.0

Raft Proxy
Git Commit Hash:   7dc50b4eb06124e31f03adb06c20ff7ab61c5f79
Git Commit Branch: HEAD
UTC Build Time:    2024-05-06 04:09:34
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:   external-jemalloc portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["8e50de84e6d6ecdcc108990217b70b6bb3f50271"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v8.2.0-alpha-17-g8e50de84e"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/proxy.log"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status_with_redo/tiflash/db/proxy"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status_with_redo/tiflash-proxy.toml"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ cd /tmp/tidb_cdc_test/synced_status_with_redo
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
+ is_tls=false
+ '[' false == true ']'
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status_with_redo.cli.19381.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='449573731873062914
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449573731873062914 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ start_ts=449573731873062914
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status_with_redo --binary cdc.test
[Mon May  6 17:28:07 CST 2024] <<<<<< START cdc server in synced_status_with_redo case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL'
+ [[ no != \n\o ]]
+ GO_FAILPOINTS=
+ (( i = 0 ))
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status_with_redo.1941619418.out server --log-file /tmp/tidb_cdc_test/synced_status_with_redo/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status_with_redo/cdc_data --cluster-id default
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
* Server auth using Basic with user 'ticdc'
> GET /debug/info HTTP/1.1
> Authorization: Basic dGljZGM6dGljZGNfc2VjcmV0
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Mon, 06 May 2024 09:28:10 GMT
< Content-Length: 816
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/bf949184-a44a-4dfc-954f-3977d201cbeb
	{"id":"bf949184-a44a-4dfc-954f-3977d201cbeb","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987687}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d3b1a96d4
	bf949184-a44a-4dfc-954f-3977d201cbeb

/tidb/cdc/default/default/upstream/7365815963312012358
	{"id":7365815963312012358,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/bf949184-a44a-4dfc-954f-3977d201cbeb
	{"id":"bf949184-a44a-4dfc-954f-3977d201cbeb","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987687}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d3b1a96d4
	bf949184-a44a-4dfc-954f-3977d201cbeb

/tidb/cdc/default/default/upstream/7365815963312012358
	{"id":7365815963312012358,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/bf949184-a44a-4dfc-954f-3977d201cbeb
	{"id":"bf949184-a44a-4dfc-954f-3977d201cbeb","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987687}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d3b1a96d4
	bf949184-a44a-4dfc-954f-3977d201cbeb

/tidb/cdc/default/default/upstream/7365815963312012358
	{"id":7365815963312012358,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
+ config_path=conf/changefeed-redo.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=449573731873062914 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/tests/integration_tests/synced_status_with_redo/conf/changefeed-redo.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status_with_redo.cli.19479.out cli changefeed create --start-ts=449573731873062914 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/tests/integration_tests/synced_status_with_redo/conf/changefeed-redo.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7365815963312012358,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2024-05-06T17:28:10.692843229+08:00","start_ts":449573731873062914,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"enable_table_monitor":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64","output_old_value":false,"output_handle_key":false},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"debezium_disable_schema":false,"debezium":{"output_old_value":true},"open":{"output_old_value":true}},"consistent":{"level":"eventual","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"storage":"file:///tmp/tidb_cdc_test/synced_status/redo","use_file_backend":false,"memory_usage":{"memory_quota_percentage":50}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v8.2.0-alpha-23-g3bdd6915f","resolved_ts":449573731873062914,"checkpoint_ts":449573731873062914,"checkpoint_time":"2024-05-06 17:28:05.673"}
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
+ set +x
+ run_sql 'USE TEST;Create table t1(a int primary key, b int);insert into t1 values(1,2);insert into t1 values(2,3);'
+ check_table_exists test.t1 127.0.0.1 3306
table test.t1 does not exist at check 1, retrying later
table test.t1 exists
+ sleep 5
+ kill_tidb
++ ps aux
++ grep tidb-server
++ grep /tmp/tidb_cdc_test/synced_status_with_redo
+ info='jenkins    17915 13.4  0.0 2574784 261508 ?      Sl   17:27   0:03 tidb-server -P 4000 -config /tmp/tidb_cdc_test/synced_status_with_redo/tidb-config-1714987674257907440.toml --store tikv --path 127.0.0.1:2379 --status=10080 --log-file /tmp/tidb_cdc_test/synced_status_with_redo/tidb.log
jenkins    17919  3.9  0.0 2564280 197764 ?      Sl   17:27   0:00 tidb-server -P 4001 -config /tmp/tidb_cdc_test/synced_status_with_redo/tidb-config-1714987674261362678.toml --store tikv --path 127.0.0.1:2379 --status=10081 --log-file /tmp/tidb_cdc_test/synced_status_with_redo/tidb_other.log
jenkins    17987 14.1  0.0 2650712 278592 ?      Sl   17:27   0:03 tidb-server -P 3306 -config /tmp/tidb_cdc_test/synced_status_with_redo/tidb-config-1714987674308739995.toml --store tikv --path 127.0.0.1:2479 --status=20080 --log-file /tmp/tidb_cdc_test/synced_status_with_redo/tidb_down.log'
++ ps aux
++ grep tidb-server
++ grep /tmp/tidb_cdc_test/synced_status_with_redo
++ awk '{print $2}'
++ xargs kill -9
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   243  100   243    0     0   1908      0 --:--:-- --:--:-- --:--:--  1913
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-06 17:28:18.323","puller_resolved_ts":"2024-05-06 17:28:12.172","last_synced_ts":"2024-05-06 17:28:12.273","now_ts":"2024-05-06 17:28:19.000","info":"The data syncing is not finished, please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:28:18.323","puller_resolved_ts":"2024-05-06' '17:28:12.172","last_synced_ts":"2024-05-06' '17:28:12.273","now_ts":"2024-05-06' '17:28:19.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:28:18.323","puller_resolved_ts":"2024-05-06' '17:28:12.172","last_synced_ts":"2024-05-06' '17:28:12.273","now_ts":"2024-05-06' '17:28:19.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq -r .info
+ info='The data syncing is not finished, please wait'
+ target_message='The data syncing is not finished, please wait'
+ '[' 'The data syncing is not finished, please wait' '!=' 'The data syncing is not finished, please wait' ']'
+ sleep 130
[2024/05/06 17:29:10.522 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc001bc81e0/127.0.0.1:2379] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:12.874 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc001bc81e0/127.0.0.1:2379] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:13.799 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc001bc8000/127.0.0.1:2379] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:14.998 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc000c041e0/127.0.0.1:2479] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:15.587 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc001bc81e0/127.0.0.1:2379] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:17.623 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc000c041e0/127.0.0.1:2479] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:37.098 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc000c041e0/127.0.0.1:2479] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:37.100 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc001bc81e0/127.0.0.1:2379] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:38.377 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc000152000/127.0.0.1:2479] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:39.537 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc001bc81e0/127.0.0.1:2379] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:39.776 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc000c041e0/127.0.0.1:2479] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:42.064 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc001bc81e0/127.0.0.1:2379] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
[2024/05/06 17:29:42.760 +08:00] [WARN] [retry_interceptor.go:62] ["retrying of unary invoker failed"] [target=etcd-endpoints://0xc000c041e0/127.0.0.1:2479] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
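# These DeadlineExceeded warnings appear to be the CDC server's etcd client
# retrying the upstream (2379) and downstream (2479) PD endpoints during the
# quiet period; the test tolerates them and proceeds to the /synced query
# below.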
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   723  100   723    0     0   8740      0 --:--:-- --:--:-- --:--:--  8817
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-06 17:27:56.268","puller_resolved_ts":"2024-05-06 17:27:56.268","last_synced_ts":"2024-05-06 17:27:49.868","now_ts":"2024-05-06 17:30:07.000","info":"Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' \u003e '\''Resolved-Ts'\'' \u003e '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:27:56.268","puller_resolved_ts":"2024-05-06' '17:27:56.268","last_synced_ts":"2024-05-06' '17:27:49.868","now_ts":"2024-05-06' '17:30:07.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:27:56.268","puller_resolved_ts":"2024-05-06' '17:27:56.268","last_synced_ts":"2024-05-06' '17:27:49.868","now_ts":"2024-05-06' '17:30:07.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
++ jq -r .info
+ info='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ target_message='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ '[' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' '!=' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' ']'
+ cleanup_process cdc.test
wait for process cdc.test to exit, attempt 1...
wait for process cdc.test to exit, attempt 2...
wait for process cdc.test to exit, attempt 3...
cdc.test: no process found
wait for process cdc.test to exit, attempt 4...
process cdc.test has already exited
+ stop_tidb_cluster
+ run_case_with_unavailable_tidb conf/changefeed.toml
+ rm -rf /tmp/tidb_cdc_test/synced_status
+ mkdir -p /tmp/tidb_cdc_test/synced_status
+ start_tidb_cluster --workdir /tmp/tidb_cdc_test/synced_status
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
Attempt 1 to start the tidb cluster...
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
start tidb cluster in /tmp/tidb_cdc_test/synced_status
Starting Upstream PD...
Release Version: v8.2.0-alpha-15-gf83febabe
Edition: Community
Git Commit Hash: f83febabecb98b95b098fd31a3664178f8a6b437
Git Branch: master
UTC Build Time:  2024-05-06 08:48:58
Starting Downstream PD...
Release Version: v8.2.0-alpha-15-gf83febabe
Edition: Community
Git Commit Hash: f83febabecb98b95b098fd31a3664178f8a6b437
Git Branch: master
UTC Build Time:  2024-05-06 08:48:58
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   8.2.0-alpha
Edition:           Community
Git Commit Hash:   88099c95a3c0c13a827c0795c3d45070264665e4
Git Commit Branch: master
UTC Build Time:    2024-05-06 06:37:19
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   8.2.0-alpha
Edition:           Community
Git Commit Hash:   88099c95a3c0c13a827c0795c3d45070264665e4
Git Commit Branch: master
UTC Build Time:    2024-05-06 06:37:19
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v8.2.0-alpha-82-g659f32a813
Edition: Community
Git Commit Hash: 659f32a81300f9dbcea9032b3c8e4825555ccfd1
Git Branch: master
UTC Build Time: 2024-05-06 07:58:59
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v8.2.0-alpha-82-g659f32a813
Edition: Community
Git Commit Hash: 659f32a81300f9dbcea9032b3c8e4825555ccfd1
Git Branch: master
UTC Build Time: 2024-05-06 07:58:59
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   243  100   243    0     0   3551      0 --:--:-- --:--:-- --:--:--  3573
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-06 17:29:04.372","puller_resolved_ts":"2024-05-06 17:30:19.422","last_synced_ts":"1970-01-01 08:00:00.000","now_ts":"2024-05-06 17:30:29.000","info":"The data syncing is not finished, please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:29:04.372","puller_resolved_ts":"2024-05-06' '17:30:19.422","last_synced_ts":"1970-01-01' '08:00:00.000","now_ts":"2024-05-06' '17:30:29.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' true ']'
+ echo 'synced status isn'\''t correct'
synced status isn't correct
+ exit 1
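# Why this run fails: after the quiet period the test expects "synced":true,
# but the API still reports false, and last_synced_ts is the zero timestamp
# (1970-01-01 08:00:00.000 in UTC+8), suggesting no DML had ever been marked
# synced for this changefeed, so the harness exits 1 and tears the cluster
# down.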
+ stop_tidb_cluster
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34f636e00014	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-k448-n8mw7, pid:25074, start at 2024-05-06 17:30:30.761542939 +0800 CST m=+5.358223738	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:32:30.770 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:30:30.762 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:20:30.762 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34f636e00014	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-k448-n8mw7, pid:25074, start at 2024-05-06 17:30:30.761542939 +0800 CST m=+5.358223738	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:32:30.770 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:30:30.762 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:20:30.762 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	196	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d34f639300014	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-pulsar-test-1556-k448-n8mw7, pid:25151, start at 2024-05-06 17:30:30.902674523 +0800 CST m=+5.446554372	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240506-17:32:30.910 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240506-17:30:30.909 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240506-17:20:30.909 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v8.2.0-alpha-17-g8e50de84e
Edition:         Community
Git Commit Hash: 8e50de84e6d6ecdcc108990217b70b6bb3f50271
Git Branch:      HEAD
UTC Build Time:  2024-05-06 04:04:42
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO
Compiler:        clang++ 13.0.0

Raft Proxy
Git Commit Hash:   7dc50b4eb06124e31f03adb06c20ff7ab61c5f79
Git Commit Branch: HEAD
UTC Build Time:    2024-05-06 04:09:34
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:   external-jemalloc portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["8e50de84e6d6ecdcc108990217b70b6bb3f50271"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash-proxy.toml"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v8.2.0-alpha-17-g8e50de84e"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/db/proxy"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/log/proxy.log"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ cd /tmp/tidb_cdc_test/synced_status
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
+ is_tls=false
+ '[' false == true ']'
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.26537.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='449573771296899073
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449573771296899073 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ start_ts=449573771296899073
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status --binary cdc.test
[Mon May  6 17:30:37 CST 2024] <<<<<< START cdc server in synced_status case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.2657526577.out server --log-file /tmp/tidb_cdc_test/synced_status/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
* Server auth using Basic with user 'ticdc'
> GET /debug/info HTTP/1.1
> Authorization: Basic dGljZGM6dGljZGNfc2VjcmV0
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Mon, 06 May 2024 09:30:40 GMT
< Content-Length: 816
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/d5b134dd-cc5a-4867-b5a1-bbdb948dec29
	{"id":"d5b134dd-cc5a-4867-b5a1-bbdb948dec29","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987837}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d3d6515d6
	d5b134dd-cc5a-4867-b5a1-bbdb948dec29

/tidb/cdc/default/default/upstream/7365816609633512699
	{"id":7365816609633512699,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/d5b134dd-cc5a-4867-b5a1-bbdb948dec29
	{"id":"d5b134dd-cc5a-4867-b5a1-bbdb948dec29","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987837}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d3d6515d6
	d5b134dd-cc5a-4867-b5a1-bbdb948dec29

/tidb/cdc/default/default/upstream/7365816609633512699
	{"id":7365816609633512699,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/d5b134dd-cc5a-4867-b5a1-bbdb948dec29
	{"id":"d5b134dd-cc5a-4867-b5a1-bbdb948dec29","address":"127.0.0.1:8300","version":"v8.2.0-alpha-23-g3bdd6915f","git-hash":"3bdd6915f4d64ba9eb399e3678bd2c0e2573706a","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/bin/cdc.test","start-timestamp":1714987837}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f4d3d6515d6
	d5b134dd-cc5a-4867-b5a1-bbdb948dec29

/tidb/cdc/default/default/upstream/7365816609633512699
	{"id":7365816609633512699,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
+ config_path=conf/changefeed.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=449573771296899073 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/tests/integration_tests/synced_status/conf/changefeed.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.26626.out cli changefeed create --start-ts=449573771296899073 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_pulsar_test/tiflow/tests/integration_tests/synced_status/conf/changefeed.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7365816609633512699,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2024-05-06T17:30:41.118042485+08:00","start_ts":449573771296899073,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"enable_table_monitor":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64","output_old_value":false,"output_handle_key":false},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"debezium_disable_schema":false,"debezium":{"output_old_value":true},"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v8.2.0-alpha-23-g3bdd6915f","resolved_ts":449573771296899073,"checkpoint_ts":449573771296899073,"checkpoint_time":"2024-05-06 17:30:36.063"}
PASS
coverage: 2.4% of statements in github.com/pingcap/tiflow/...
+ set +x
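The Info JSON printed by changefeed create can be inspected the same way the harness later parses the synced status. A hedged sketch, assuming jq is installed, that the changefeed query subcommand returns the same top-level fields shown above, and an illustrative output file name:

cdc cli changefeed query --changefeed-id=test-1 > info.json
jq -r '.state' info.json           # expected: normal
jq -r '.checkpoint_ts' info.json   # e.g. 449573771296899073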
+ run_sql 'USE TEST;Create table t1(a int primary key, b int);insert into t1 values(1,2);insert into t1 values(2,3);'
+ check_table_exists test.t1 127.0.0.1 3306
table test.t1 does not exist at check 1, retrying later
table test.t1 exists
+ sleep 5
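check_table_exists above retries until the table becomes visible downstream. A minimal sketch of that loop, assuming the mysql client is on the PATH and using an illustrative retry limit:

for i in $(seq 1 60); do
    if mysql -h 127.0.0.1 -P 3306 -u root \
            -e 'SHOW CREATE TABLE test.t1' >/dev/null 2>&1; then
        echo 'table test.t1 exists'
        break
    fi
    echo "table test.t1 not found at check $i, retrying"
    sleep 1
done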
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
Post stage
[Pipeline] sh
+ ls /tmp/tidb_cdc_test/
changefeed_reconstruct
cov.changefeed_reconstruct.67606763.out
cov.changefeed_reconstruct.71137115.out
cov.multi_capture.1016810170.out
cov.multi_capture.1021610218.out
cov.multi_capture.1027110273.out
cov.multi_capture.cli.10040.out
cov.multi_capture.cli.10818.out
cov.processor_err_chan.34553457.out
cov.synced_status_with_redo.1374113743.out
cov.synced_status_with_redo.1662316625.out
cov.synced_status_with_redo.cli.13705.out
cov.synced_status_with_redo.cli.13802.out
cov.synced_status_with_redo.cli.16588.out
cov.synced_status_with_redo.cli.16680.out
cov.synced_status_with_redo.cli.19381.out
cov.synced_status_with_redo.cli.19479.out
multi_capture
processor_err_chan
sql_res.changefeed_reconstruct.txt
sql_res.multi_capture.txt
sql_res.processor_err_chan.txt
sql_res.synced_status_with_redo.txt
synced_status
synced_status_with_redo
++ find /tmp/tidb_cdc_test/ -type f -name '*.log'
+ tar -cvzf log-G08.tar.gz /tmp/tidb_cdc_test/changefeed_reconstruct/stdoutserver1.log /tmp/tidb_cdc_test/changefeed_reconstruct/tikv1.log /tmp/tidb_cdc_test/changefeed_reconstruct/down_pd.log /tmp/tidb_cdc_test/changefeed_reconstruct/cdc_pulsar_consumer_stdout.log /tmp/tidb_cdc_test/changefeed_reconstruct/tidb_down.log /tmp/tidb_cdc_test/changefeed_reconstruct/pulsar_stdout.log /tmp/tidb_cdc_test/changefeed_reconstruct/cdcserver2.log /tmp/tidb_cdc_test/changefeed_reconstruct/stdoutserver2.log /tmp/tidb_cdc_test/changefeed_reconstruct/tidb.log /tmp/tidb_cdc_test/changefeed_reconstruct/tikv2.log /tmp/tidb_cdc_test/changefeed_reconstruct/pd1.log /tmp/tidb_cdc_test/changefeed_reconstruct/tidb_other.log /tmp/tidb_cdc_test/changefeed_reconstruct/tidb-slow.log /tmp/tidb_cdc_test/changefeed_reconstruct/tikv3.log /tmp/tidb_cdc_test/changefeed_reconstruct/cdc_pulsar_consumer.log /tmp/tidb_cdc_test/changefeed_reconstruct/cdcserver1.log /tmp/tidb_cdc_test/changefeed_reconstruct/sync_diff_inspector.log /tmp/tidb_cdc_test/changefeed_reconstruct/tikv_down.log /tmp/tidb_cdc_test/synced_status_with_redo/cdc.log /tmp/tidb_cdc_test/synced_status_with_redo/tikv1.log /tmp/tidb_cdc_test/synced_status_with_redo/down_pd.log /tmp/tidb_cdc_test/synced_status_with_redo/pd1/region-meta/000001.log /tmp/tidb_cdc_test/synced_status_with_redo/pd1/hot-region/000001.log /tmp/tidb_cdc_test/synced_status_with_redo/stdout.log /tmp/tidb_cdc_test/synced_status_with_redo/tikv_down/db/000005.log /tmp/tidb_cdc_test/synced_status_with_redo/tidb_down.log /tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0004/000002.log /tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0007/000002.log /tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0005/000002.log /tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0006/000002.log /tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0002/000002.log /tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0001/000002.log /tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0003/000002.log /tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0000/000002.log /tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/error.log /tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/server.log /tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/proxy.log /tmp/tidb_cdc_test/synced_status_with_redo/tiflash/db/proxy/db/000005.log /tmp/tidb_cdc_test/synced_status_with_redo/tikv3/db/000005.log /tmp/tidb_cdc_test/synced_status_with_redo/tidb.log /tmp/tidb_cdc_test/synced_status_with_redo/tikv2.log /tmp/tidb_cdc_test/synced_status_with_redo/down_pd/region-meta/000001.log /tmp/tidb_cdc_test/synced_status_with_redo/down_pd/hot-region/000001.log /tmp/tidb_cdc_test/synced_status_with_redo/tikv1/db/000005.log /tmp/tidb_cdc_test/synced_status_with_redo/pd1.log /tmp/tidb_cdc_test/synced_status_with_redo/tidb_other.log /tmp/tidb_cdc_test/synced_status_with_redo/tikv2/db/000005.log /tmp/tidb_cdc_test/synced_status_with_redo/tidb-slow.log /tmp/tidb_cdc_test/synced_status_with_redo/tikv3.log /tmp/tidb_cdc_test/synced_status_with_redo/tikv_down.log /tmp/tidb_cdc_test/multi_capture/stdout3.log /tmp/tidb_cdc_test/multi_capture/tikv1.log /tmp/tidb_cdc_test/multi_capture/down_pd.log /tmp/tidb_cdc_test/multi_capture/cdc_pulsar_consumer_stdout.log /tmp/tidb_cdc_test/multi_capture/tidb_down.log /tmp/tidb_cdc_test/multi_capture/pulsar_stdout.log /tmp/tidb_cdc_test/multi_capture/tidb.log /tmp/tidb_cdc_test/multi_capture/tikv2.log 
/tmp/tidb_cdc_test/multi_capture/pd1.log /tmp/tidb_cdc_test/multi_capture/tidb_other.log /tmp/tidb_cdc_test/multi_capture/cdc1.log /tmp/tidb_cdc_test/multi_capture/tidb-slow.log /tmp/tidb_cdc_test/multi_capture/tikv3.log /tmp/tidb_cdc_test/multi_capture/stdout1.log /tmp/tidb_cdc_test/multi_capture/cdc_pulsar_consumer.log /tmp/tidb_cdc_test/multi_capture/stdout2.log /tmp/tidb_cdc_test/multi_capture/cdc2.log /tmp/tidb_cdc_test/multi_capture/sync_diff_inspector.log /tmp/tidb_cdc_test/multi_capture/tikv_down.log /tmp/tidb_cdc_test/multi_capture/cdc3.log /tmp/tidb_cdc_test/processor_err_chan/cdc.log /tmp/tidb_cdc_test/processor_err_chan/tikv1.log /tmp/tidb_cdc_test/processor_err_chan/down_pd.log /tmp/tidb_cdc_test/processor_err_chan/stdout.log /tmp/tidb_cdc_test/processor_err_chan/cdc_pulsar_consumer_stdout.log /tmp/tidb_cdc_test/processor_err_chan/tidb_down.log /tmp/tidb_cdc_test/processor_err_chan/pulsar_stdout.log /tmp/tidb_cdc_test/processor_err_chan/tidb.log /tmp/tidb_cdc_test/processor_err_chan/tikv2.log /tmp/tidb_cdc_test/processor_err_chan/pd1.log /tmp/tidb_cdc_test/processor_err_chan/tidb_other.log /tmp/tidb_cdc_test/processor_err_chan/tidb-slow.log /tmp/tidb_cdc_test/processor_err_chan/tikv3.log /tmp/tidb_cdc_test/processor_err_chan/cdc_pulsar_consumer.log /tmp/tidb_cdc_test/processor_err_chan/sync_diff_inspector.log /tmp/tidb_cdc_test/processor_err_chan/tikv_down.log
tar: Removing leading `/' from member names
/tmp/tidb_cdc_test/changefeed_reconstruct/stdoutserver1.log
/tmp/tidb_cdc_test/changefeed_reconstruct/tikv1.log
/tmp/tidb_cdc_test/changefeed_reconstruct/down_pd.log
/tmp/tidb_cdc_test/changefeed_reconstruct/cdc_pulsar_consumer_stdout.log
/tmp/tidb_cdc_test/changefeed_reconstruct/tidb_down.log
/tmp/tidb_cdc_test/changefeed_reconstruct/pulsar_stdout.log
/tmp/tidb_cdc_test/changefeed_reconstruct/cdcserver2.log
/tmp/tidb_cdc_test/changefeed_reconstruct/stdoutserver2.log
/tmp/tidb_cdc_test/changefeed_reconstruct/tidb.log
/tmp/tidb_cdc_test/changefeed_reconstruct/tikv2.log
/tmp/tidb_cdc_test/changefeed_reconstruct/pd1.log
/tmp/tidb_cdc_test/changefeed_reconstruct/tidb_other.log
/tmp/tidb_cdc_test/changefeed_reconstruct/tidb-slow.log
/tmp/tidb_cdc_test/changefeed_reconstruct/tikv3.log
/tmp/tidb_cdc_test/changefeed_reconstruct/cdc_pulsar_consumer.log
/tmp/tidb_cdc_test/changefeed_reconstruct/cdcserver1.log
/tmp/tidb_cdc_test/changefeed_reconstruct/sync_diff_inspector.log
/tmp/tidb_cdc_test/changefeed_reconstruct/tikv_down.log
/tmp/tidb_cdc_test/synced_status_with_redo/cdc.log
/tmp/tidb_cdc_test/synced_status_with_redo/tikv1.log
/tmp/tidb_cdc_test/synced_status_with_redo/down_pd.log
/tmp/tidb_cdc_test/synced_status_with_redo/pd1/region-meta/000001.log
/tmp/tidb_cdc_test/synced_status_with_redo/pd1/hot-region/000001.log
/tmp/tidb_cdc_test/synced_status_with_redo/stdout.log
/tmp/tidb_cdc_test/synced_status_with_redo/tikv_down/db/000005.log
/tmp/tidb_cdc_test/synced_status_with_redo/tidb_down.log
/tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0004/000002.log
/tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0007/000002.log
/tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0005/000002.log
/tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0006/000002.log
/tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0002/000002.log
/tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0001/000002.log
/tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0003/000002.log
/tmp/tidb_cdc_test/synced_status_with_redo/cdc_data/tmp/sorter/0000/000002.log
/tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/error.log
/tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/server.log
/tmp/tidb_cdc_test/synced_status_with_redo/tiflash/log/proxy.log
/tmp/tidb_cdc_test/synced_status_with_redo/tiflash/db/proxy/db/000005.log
/tmp/tidb_cdc_test/synced_status_with_redo/tikv3/db/000005.log
/tmp/tidb_cdc_test/synced_status_with_redo/tidb.log
/tmp/tidb_cdc_test/synced_status_with_redo/tikv2.log
/tmp/tidb_cdc_test/synced_status_with_redo/down_pd/region-meta/000001.log
/tmp/tidb_cdc_test/synced_status_with_redo/down_pd/hot-region/000001.log
/tmp/tidb_cdc_test/synced_status_with_redo/tikv1/db/000005.log
/tmp/tidb_cdc_test/synced_status_with_redo/pd1.log
/tmp/tidb_cdc_test/synced_status_with_redo/tidb_other.log
/tmp/tidb_cdc_test/synced_status_with_redo/tikv2/db/000005.log
/tmp/tidb_cdc_test/synced_status_with_redo/tidb-slow.log
/tmp/tidb_cdc_test/synced_status_with_redo/tikv3.log
/tmp/tidb_cdc_test/synced_status_with_redo/tikv_down.log
/tmp/tidb_cdc_test/multi_capture/stdout3.log
/tmp/tidb_cdc_test/multi_capture/tikv1.log
/tmp/tidb_cdc_test/multi_capture/down_pd.log
/tmp/tidb_cdc_test/multi_capture/cdc_pulsar_consumer_stdout.log
/tmp/tidb_cdc_test/multi_capture/tidb_down.log
/tmp/tidb_cdc_test/multi_capture/pulsar_stdout.log
/tmp/tidb_cdc_test/multi_capture/tidb.log
/tmp/tidb_cdc_test/multi_capture/tikv2.log
/tmp/tidb_cdc_test/multi_capture/pd1.log
/tmp/tidb_cdc_test/multi_capture/tidb_other.log
/tmp/tidb_cdc_test/multi_capture/cdc1.log
/tmp/tidb_cdc_test/multi_capture/tidb-slow.log
/tmp/tidb_cdc_test/multi_capture/tikv3.log
/tmp/tidb_cdc_test/multi_capture/stdout1.log
/tmp/tidb_cdc_test/multi_capture/cdc_pulsar_consumer.log
/tmp/tidb_cdc_test/multi_capture/stdout2.log
/tmp/tidb_cdc_test/multi_capture/cdc2.log
/tmp/tidb_cdc_test/multi_capture/sync_diff_inspector.log
/tmp/tidb_cdc_test/multi_capture/tikv_down.log
+ kill_tidb
++ ps aux
++ grep tidb-server
++ grep /tmp/tidb_cdc_test/synced_status
+ info='jenkins    25074 14.0  0.0 2697912 258316 ?      Sl   17:30   0:03 tidb-server -P 4000 -config /tmp/tidb_cdc_test/synced_status/tidb-config-1714987825396439379.toml --store tikv --path 127.0.0.1:2379 --status=10080 --log-file /tmp/tidb_cdc_test/synced_status/tidb.log
jenkins    25078  4.1  0.0 2358960 196208 ?      Sl   17:30   0:01 tidb-server -P 4001 -config /tmp/tidb_cdc_test/synced_status/tidb-config-1714987825399778466.toml --store tikv --path 127.0.0.1:2379 --status=10081 --log-file /tmp/tidb_cdc_test/synced_status/tidb_other.log
jenkins    25151 15.0  0.0 2427316 255868 ?      Sl   17:30   0:03 tidb-server -P 3306 -config /tmp/tidb_cdc_test/synced_status/tidb-config-1714987825449600771.toml --store tikv --path 127.0.0.1:2479 --status=20080 --log-file /tmp/tidb_cdc_test/synced_status/tidb_down.log'
++ ps aux
++ grep tidb-server
++ grep /tmp/tidb_cdc_test/synced_status
++ awk '{print $2}'
++ xargs kill -9
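kill_tidb composes ps, grep, awk, and xargs so that only the tidb-server processes belonging to this test directory are terminated. An equivalent hedged sketch; the pgrep variant is a common alternative that avoids matching the grep process itself:

ps aux | grep tidb-server | grep /tmp/tidb_cdc_test/synced_status \
    | awk '{print $2}' | xargs kill -9
# or, more robustly:
pgrep -f 'tidb-server.*/tmp/tidb_cdc_test/synced_status' | xargs kill -9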
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-06 17:30:49.463","puller_resolved_ts":"2024-05-06 17:30:42.663","last_synced_ts":"2024-05-06 17:30:42.713","now_ts":"2024-05-06 17:30:49.000","info":"The data syncing is not finished, please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:30:49.463","puller_resolved_ts":"2024-05-06' '17:30:42.663","last_synced_ts":"2024-05-06' '17:30:42.713","now_ts":"2024-05-06' '17:30:49.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-06' '17:30:49.463","puller_resolved_ts":"2024-05-06' '17:30:42.663","last_synced_ts":"2024-05-06' '17:30:42.713","now_ts":"2024-05-06' '17:30:49.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq -r .info
+ info='The data syncing is not finished, please wait'
+ target_message='The data syncing is not finished, please wait'
+ '[' 'The data syncing is not finished, please wait' '!=' 'The data syncing is not finished, please wait' ']'
+ sleep 130
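The synced-status assertion above fetches the v2 API, extracts .synced and .info with jq, and compares them against the expected values; the 130 s sleep then outlasts the synced_check_interval of 120 s set in the changefeed config. A condensed sketch of the same check:

synced_status=$(curl -s http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced)
status=$(echo "$synced_status" | jq .synced)
info=$(echo "$synced_status" | jq -r .info)
if [ "$status" != false ]; then
    echo "expected synced=false, got: $synced_status" >&2
    exit 1
fi
if [ "$info" != 'The data syncing is not finished, please wait' ]; then
    echo "unexpected info message: $info" >&2
    exit 1
fi
sleep 130   # outlast synced_check_interval (120 s) before the next probe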
/tmp/tidb_cdc_test/multi_capture/cdc3.log
/tmp/tidb_cdc_test/processor_err_chan/cdc.log
/tmp/tidb_cdc_test/processor_err_chan/tikv1.log
/tmp/tidb_cdc_test/processor_err_chan/down_pd.log
/tmp/tidb_cdc_test/processor_err_chan/stdout.log
/tmp/tidb_cdc_test/processor_err_chan/cdc_pulsar_consumer_stdout.log
/tmp/tidb_cdc_test/processor_err_chan/tidb_down.log
/tmp/tidb_cdc_test/processor_err_chan/pulsar_stdout.log
/tmp/tidb_cdc_test/processor_err_chan/tidb.log
/tmp/tidb_cdc_test/processor_err_chan/tikv2.log
/tmp/tidb_cdc_test/processor_err_chan/pd1.log
/tmp/tidb_cdc_test/processor_err_chan/tidb_other.log
/tmp/tidb_cdc_test/processor_err_chan/tidb-slow.log
/tmp/tidb_cdc_test/processor_err_chan/tikv3.log
/tmp/tidb_cdc_test/processor_err_chan/cdc_pulsar_consumer.log
/tmp/tidb_cdc_test/processor_err_chan/sync_diff_inspector.log
/tmp/tidb_cdc_test/processor_err_chan/tikv_down.log
+ ls -alh log-G08.tar.gz
-rw-r--r-- 1 jenkins jenkins 14M May  6 17:30 log-G08.tar.gz
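The post stage collects every *.log file under the test root into a single archive for the archiveArtifacts step. A hedged sketch of that collection; leaving $log_files unquoted is deliberate here, since the harness's paths contain no spaces:

log_files=$(find /tmp/tidb_cdc_test/ -type f -name '*.log')
tar -cvzf log-G08.tar.gz $log_files   # unquoted on purpose: one path per word
ls -alh log-G08.tar.gz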
[Pipeline] archiveArtifacts
Archiving artifacts
Recording fingerprints
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G08'
Sending interrupt signal to process
Killing processes
kill finished with exit code 0
++ stop_tidb_cluster
script returned exit code 143
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G09'
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE