Console Output

Skipping 2,277 KB..
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
+ set +x
table test.finish_mark not exists for 1-th check, retry later
table test.finish_mark not exists for 2-th check, retry later
table test.finish_mark exists
check diff successfully
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Wed May 15 19:55:15 CST 2024] <<<<<< run test case canal_json_claim_check success! >>>>>>
/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/canal_json_claim_check/run.sh: line 1: 33538 Killed                  cdc_kafka_consumer --upstream-uri $SINK_URI --downstream-uri="mysql://root@127.0.0.1:3306/?safe-mode=true&batch-dml-enable=false" --upstream-tidb-dsn="root@tcp(${UP_TIDB_HOST}:${UP_TIDB_PORT})/?" --config="$CUR/conf/changefeed.toml" 2>&1
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/open_protocol_claim_check/run.sh using Sink-Type: kafka... <<=================
The 1 times to try to start tidb cluster...
start tidb cluster in /tmp/tidb_cdc_test/open_protocol_claim_check
Starting Upstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Starting Downstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v7.5.1-65-g1f29133f36
Edition: Community
Git Commit Hash: 1f29133f3629e407220c8f319c67381f437284bc
Git Branch: release-7.5
UTC Build Time: 2024-05-14 09:30:20
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-65-g1f29133f36
Edition: Community
Git Commit Hash: 1f29133f3629e407220c8f319c67381f437284bc
Git Branch: release-7.5
UTC Build Time: 2024-05-14 09:30:20
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df06ed1f40013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-vrzxw, pid:34491, start at 2024-05-15 19:55:41.607319485 +0800 CST m=+5.455856350	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-19:57:41.614 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-19:55:41.621 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:45:41.621 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df06ed1f40013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-vrzxw, pid:34491, start at 2024-05-15 19:55:41.607319485 +0800 CST m=+5.455856350	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-19:57:41.614 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-19:55:41.621 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:45:41.621 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df06ed2a40004	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-vrzxw, pid:34565, start at 2024-05-15 19:55:41.612688828 +0800 CST m=+5.367460455	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-19:57:41.628 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-19:55:41.609 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:45:41.609 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-19-gb9e45523c
Edition:         Community
Git Commit Hash: b9e45523c76c544235842fd3a78bb711c0d627c9
Git Branch:      HEAD
UTC Build Time:  2024-05-13 08:44:12
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-05-13 08:48:26
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/open_protocol_claim_check/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/open_protocol_claim_check/tiflash/log/error.log
arg matches is ArgMatches { args: {"log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/open_protocol_claim_check/tiflash/log/proxy.log"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/open_protocol_claim_check/tiflash/db/proxy"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-19-gb9e45523c"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/open_protocol_claim_check/tiflash-proxy.toml"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["b9e45523c76c544235842fd3a78bb711c0d627c9"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.open_protocol_claim_check.cli.35859.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='449779897955778561
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449779897955778561 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.open_protocol_claim_check.cli.35907.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='449779899738619905
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449779899738619905 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
[Wed May 15 19:55:55 CST 2024] <<<<<< START cdc server in open_protocol_claim_check case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ GO_FAILPOINTS=
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.open_protocol_claim_check.3593235934.out server --log-file /tmp/tidb_cdc_test/open_protocol_claim_check/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/open_protocol_claim_check/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Wed, 15 May 2024 11:55:58 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/0bd1f5b7-f3a1-4bfc-bbd8-cc321ea5cc56
	{"id":"0bd1f5b7-f3a1-4bfc-bbd8-cc321ea5cc56","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c1b875a3e
	0bd1f5b7-f3a1-4bfc-bbd8-cc321ea5cc56

/tidb/cdc/default/default/upstream/7369193777738647632
	{"id":7369193777738647632,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/0bd1f5b7-f3a1-4bfc-bbd8-cc321ea5cc56
	{"id":"0bd1f5b7-f3a1-4bfc-bbd8-cc321ea5cc56","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c1b875a3e
	0bd1f5b7-f3a1-4bfc-bbd8-cc321ea5cc56

/tidb/cdc/default/default/upstream/7369193777738647632
	{"id":7369193777738647632,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/0bd1f5b7-f3a1-4bfc-bbd8-cc321ea5cc56
	{"id":"0bd1f5b7-f3a1-4bfc-bbd8-cc321ea5cc56","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c1b875a3e
	0bd1f5b7-f3a1-4bfc-bbd8-cc321ea5cc56

/tidb/cdc/default/default/upstream/7369193777738647632
	{"id":7369193777738647632,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.open_protocol_claim_check.cli.35980.out cli changefeed create --start-ts=449779897955778561 --target-ts=449779899738619905 '--sink-uri=kafka://127.0.0.1:9092/open-protocol-claim-check?protocol=open-protocol&max-message-bytes=800&kafka-version=2.4.1' --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/open_protocol_claim_check/conf/changefeed.toml
Create changefeed successfully!
ID: 0a4f9fbc-968b-4be3-b30a-ff0b364607cb
Info: {"upstream_id":7369193777738647632,"namespace":"default","id":"0a4f9fbc-968b-4be3-b30a-ff0b364607cb","sink_uri":"kafka://127.0.0.1:9092/open-protocol-claim-check?protocol=open-protocol\u0026max-message-bytes=800\u0026kafka-version=2.4.1","create_time":"2024-05-15T19:55:58.586476338+08:00","start_ts":449779897955778561,"target_ts":449779899738619905,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"open-protocol","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"kafka_config":{"large_message_handle":{"large_message_handle_option":"claim-check","large_message_handle_compression":"lz4","claim_check_storage_uri":"file:///tmp/open-protocol-claim-check"}},"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.1-30-g92884c9e7","resolved_ts":449779897955778561,"checkpoint_ts":449779897955778561,"checkpoint_time":"2024-05-15 19:55:46.865"}
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
+ set +x
table test.finish_mark not exists for 1-th check, retry later
table test.finish_mark not exists for 2-th check, retry later
table test.finish_mark exists
check diff successfully
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Wed May 15 19:56:05 CST 2024] <<<<<< run test case open_protocol_claim_check success! >>>>>>
/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/open_protocol_claim_check/run.sh: line 1: 36017 Killed                  cdc_kafka_consumer --upstream-uri $SINK_URI --downstream-uri="mysql://root@127.0.0.1:3306/?safe-mode=true&batch-dml-enable=false" --upstream-tidb-dsn="root@tcp(${UP_TIDB_HOST}:${UP_TIDB_PORT})/?" --config="$CUR/conf/changefeed.toml" 2>&1
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/canal_json_storage_basic/run.sh using Sink-Type: kafka... <<=================
[Wed May 15 19:56:21 CST 2024] <<<<<< run test case canal_json_storage_basic success! >>>>>>
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/canal_json_storage_partition_table/run.sh using Sink-Type: kafka... <<=================
[Wed May 15 19:56:24 CST 2024] <<<<<< run test case canal_json_storage_partition_table success! >>>>>>
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/multi_tables_ddl/run.sh using Sink-Type: kafka... <<=================
* About to connect() to 127.0.0.1 port 24927 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:24927; Connection refused
* Closing connection 0

 You are running an older version of MinIO released 3 years ago 
 Update: Run `mc admin update` 


Attempting encryption of all config, IAM users and policies on MinIO backend
Endpoint:  http://127.0.0.1:24927

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide
* About to connect() to 127.0.0.1 port 24927 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 24927 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:24927
> Accept: */*
> 
< HTTP/1.1 403 Forbidden
< Accept-Ranges: bytes
< Content-Length: 226
< Content-Security-Policy: block-all-mixed-content
< Content-Type: application/xml
< Server: MinIO/RELEASE.2020-07-27T18-37-02Z
< Vary: Origin
< X-Amz-Request-Id: 17CFA788FE568F58
< X-Xss-Protection: 1; mode=block
< Date: Wed, 15 May 2024 11:56:29 GMT
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
Bucket 's3://logbucket/' created
The 1 times to try to start tidb cluster...
start tidb cluster in /tmp/tidb_cdc_test/multi_tables_ddl
Starting Upstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Starting Downstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v7.5.1-65-g1f29133f36
Edition: Community
Git Commit Hash: 1f29133f3629e407220c8f319c67381f437284bc
Git Branch: release-7.5
UTC Build Time: 2024-05-14 09:30:20
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-65-g1f29133f36
Edition: Community
Git Commit Hash: 1f29133f3629e407220c8f319c67381f437284bc
Git Branch: release-7.5
UTC Build Time: 2024-05-14 09:30:20
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df072e6600005	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-vrzxw, pid:37135, start at 2024-05-15 19:56:48.414422848 +0800 CST m=+5.636443299	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-19:58:48.422 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-19:56:48.408 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:46:48.408 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df072e6600005	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-vrzxw, pid:37135, start at 2024-05-15 19:56:48.414422848 +0800 CST m=+5.636443299	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-19:58:48.422 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-19:56:48.408 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:46:48.408 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df072e7000007	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-vrzxw, pid:37223, start at 2024-05-15 19:56:48.457786564 +0800 CST m=+5.523707837	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-19:58:48.464 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-19:56:48.448 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:46:48.448 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-19-gb9e45523c
Edition:         Community
Git Commit Hash: b9e45523c76c544235842fd3a78bb711c0d627c9
Git Branch:      HEAD
UTC Build Time:  2024-05-13 08:44:12
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-05-13 08:48:26
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/multi_tables_ddl/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/multi_tables_ddl/tiflash/log/error.log
arg matches is ArgMatches { args: {"advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["b9e45523c76c544235842fd3a78bb711c0d627c9"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-19-gb9e45523c"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/multi_tables_ddl/tiflash/db/proxy"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/multi_tables_ddl/tiflash/log/proxy.log"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/multi_tables_ddl/tiflash-proxy.toml"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
[Wed May 15 19:56:53 CST 2024] <<<<<< START cdc server in multi_tables_ddl case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ [[ no != \n\o ]]
+ (( i = 0 ))
+ (( i <= 50 ))
+ GO_FAILPOINTS=
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.multi_tables_ddl.3859138593.out server --log-file /tmp/tidb_cdc_test/multi_tables_ddl/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/multi_tables_ddl/cdc_data --cluster-id default
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Wed, 15 May 2024 11:56:56 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/fc90db6a-a221-4031-b1f2-5baf990c8b23
	{"id":"fc90db6a-a221-4031-b1f2-5baf990c8b23","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c1c8d9ff3
	fc90db6a-a221-4031-b1f2-5baf990c8b23

/tidb/cdc/default/default/upstream/7369194070519101266
	{"id":7369194070519101266,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/fc90db6a-a221-4031-b1f2-5baf990c8b23
	{"id":"fc90db6a-a221-4031-b1f2-5baf990c8b23","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c1c8d9ff3
	fc90db6a-a221-4031-b1f2-5baf990c8b23

/tidb/cdc/default/default/upstream/7369194070519101266
	{"id":7369194070519101266,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/fc90db6a-a221-4031-b1f2-5baf990c8b23
	{"id":"fc90db6a-a221-4031-b1f2-5baf990c8b23","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c1c8d9ff3
	fc90db6a-a221-4031-b1f2-5baf990c8b23

/tidb/cdc/default/default/upstream/7369194070519101266
	{"id":7369194070519101266,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
Create changefeed successfully!
ID: test-normal
Info: {"upstream_id":7369194070519101266,"namespace":"default","id":"test-normal","sink_uri":"kafka://127.0.0.1:9092/ticdc-multi-tables-ddl-test-normal-17686?protocol=open-protocol\u0026partition-num=4\u0026kafka-version=2.4.1\u0026max-message-bytes=10485760","create_time":"2024-05-15T19:56:56.623008232+08:00","start_ts":449779915386519553,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["multi_tables_ddl_test.t1","multi_tables_ddl_test.t2","multi_tables_ddl_test.t3","multi_tables_ddl_test.t4","multi_tables_ddl_test.t1_7","multi_tables_ddl_test.t2_7","multi_tables_ddl_test.finish_mark"]},"mounter":{"worker_num":16},"sink":{"protocol":"open-protocol","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":true,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.1-30-g92884c9e7","resolved_ts":449779915386519553,"checkpoint_ts":449779915386519553,"checkpoint_time":"2024-05-15 19:56:53.358"}
Create changefeed successfully!
ID: test-error-1
Info: {"upstream_id":7369194070519101266,"namespace":"default","id":"test-error-1","sink_uri":"kafka://127.0.0.1:9092/ticdc-multi-tables-ddl-test-error-1-20922?protocol=open-protocol\u0026partition-num=4\u0026kafka-version=2.4.1\u0026max-message-bytes=10485760","create_time":"2024-05-15T19:56:56.83311039+08:00","start_ts":449779915386519553,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["multi_tables_ddl_test.t5","multi_tables_ddl_test.t6","multi_tables_ddl_test.t7","multi_tables_ddl_test.t8"]},"mounter":{"worker_num":16},"sink":{"protocol":"open-protocol","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":true,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.1-30-g92884c9e7","resolved_ts":449779915386519553,"checkpoint_ts":449779915386519553,"checkpoint_time":"2024-05-15 19:56:53.358"}
Create changefeed successfully!
ID: test-error-2
Info: {"upstream_id":7369194070519101266,"namespace":"default","id":"test-error-2","sink_uri":"kafka://127.0.0.1:9092/ticdc-multi-tables-ddl-test-error-2-5375?protocol=open-protocol\u0026partition-num=4\u0026kafka-version=2.4.1\u0026max-message-bytes=10485760","create_time":"2024-05-15T19:56:57.017290581+08:00","start_ts":449779915386519553,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["multi_tables_ddl_test.t9","multi_tables_ddl_test.t10"]},"mounter":{"worker_num":16},"sink":{"protocol":"open-protocol","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":true,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.1-30-g92884c9e7","resolved_ts":449779915386519553,"checkpoint_ts":449779915386519553,"checkpoint_time":"2024-05-15 19:56:53.358"}
[Wed May 15 19:56:57 CST 2024] <<<<<< START kafka consumer in multi_tables_ddl case >>>>>>
[Wed May 15 19:56:57 CST 2024] <<<<<< START kafka consumer in multi_tables_ddl case >>>>>>
[Wed May 15 19:56:57 CST 2024] <<<<<< START kafka consumer in multi_tables_ddl case >>>>>>
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   221  100   221    0     0   2665      0 --:--:-- --:--:-- --:--:--  2695
+ synced_status='{"synced":true,"sink_checkpoint_ts":"2024-05-15 19:56:46.581","puller_resolved_ts":"2024-05-15 19:56:40.581","last_synced_ts":"2024-05-15 19:54:30.631","now_ts":"2024-05-15 19:56:47.000","info":"Data syncing is finished"}'
++ echo '{"synced":true,"sink_checkpoint_ts":"2024-05-15' '19:56:46.581","puller_resolved_ts":"2024-05-15' '19:56:40.581","last_synced_ts":"2024-05-15' '19:54:30.631","now_ts":"2024-05-15' '19:56:47.000","info":"Data' syncing is 'finished"}'
++ jq .synced
+ status=true
+ '[' true '!=' true ']'
+ kill_pd
++ ps aux
++ grep pd-server
++ grep /tmp/tidb_cdc_test/synced_status
+ info='jenkins    24585  7.9  0.0 13445900 142932 ?     Sl   19:54   0:12 pd-server --advertise-client-urls http://127.0.0.1:2379 --client-urls http://0.0.0.0:2379 --advertise-peer-urls http://127.0.0.1:2380 --peer-urls http://0.0.0.0:2380 --config /tmp/tidb_cdc_test/synced_status/pd-config.toml --log-file /tmp/tidb_cdc_test/synced_status/pd1.log --data-dir /tmp/tidb_cdc_test/synced_status/pd1 --name=pd1 --initial-cluster=pd1=http://127.0.0.1:2380
jenkins    24649  5.1  0.0 13379532 135676 ?     Sl   19:54   0:08 pd-server --advertise-client-urls http://127.0.0.1:2479 --client-urls http://0.0.0.0:2479 --advertise-peer-urls http://127.0.0.1:2480 --peer-urls http://0.0.0.0:2480 --config /tmp/tidb_cdc_test/synced_status/pd-config.toml --log-file /tmp/tidb_cdc_test/synced_status/down_pd.log --data-dir /tmp/tidb_cdc_test/synced_status/down_pd'
++ ps aux
++ grep pd-server
++ grep /tmp/tidb_cdc_test/synced_status
++ awk '{print $2}'
++ xargs kill -9
+ sleep 20
{"level":"warn","ts":1715774213.8893392,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc003142a80/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1715774213.8894072,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1715774213.930438,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002b71a40/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":1715774213.930533,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1715774214.5922365,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0020d9a40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1715774214.5923107,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2024-05-15T19:56:58.368421+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000dca700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:56:58.374837+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e42e00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:56:58.545039+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012401c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
table multi_tables_ddl_test.t55 not exists for 1-th check, retry later
table multi_tables_ddl_test.t55 not exists for 2-th check, retry later
{"level":"warn","ts":"2024-05-15T19:57:04.368838+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000dca700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:04.375635+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e42e00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:04.546234+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012401c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
table multi_tables_ddl_test.t55 exists
table multi_tables_ddl_test.t66 not exists for 1-th check, retry later
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
table multi_tables_ddl_test.t66 exists
table multi_tables_ddl_test.t7 exists
table multi_tables_ddl_test.t88 exists
table multi_tables_ddl_test.finish_mark not exists for 1-th check, retry later

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0{"level":"warn","ts":"2024-05-15T19:57:10.369698+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000dca700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:10.376364+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e42e00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:10.546603+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012401c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
table multi_tables_ddl_test.finish_mark not exists for 2-th check, retry later
table multi_tables_ddl_test.finish_mark exists
check table exists success
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=test-normal
+ expected_state=normal
+ error_msg=null
+ tls_dir=
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c test-normal -s
+ info='{
  "upstream_id": 7369194070519101266,
  "namespace": "default",
  "id": "test-normal",
  "state": "normal",
  "checkpoint_tso": 449779917745553442,
  "checkpoint_time": "2024-05-15 19:57:02.357",
  "error": null
}'
+ echo '{
  "upstream_id": 7369194070519101266,
  "namespace": "default",
  "id": "test-normal",
  "state": "normal",
  "checkpoint_tso": 449779917745553442,
  "checkpoint_time": "2024-05-15 19:57:02.357",
  "error": null
}'
{
  "upstream_id": 7369194070519101266,
  "namespace": "default",
  "id": "test-normal",
  "state": "normal",
  "checkpoint_tso": 449779917745553442,
  "checkpoint_time": "2024-05-15 19:57:02.357",
  "error": null
}
++ echo '{' '"upstream_id":' 7369194070519101266, '"namespace":' '"default",' '"id":' '"test-normal",' '"state":' '"normal",' '"checkpoint_tso":' 449779917745553442, '"checkpoint_time":' '"2024-05-15' '19:57:02.357",' '"error":' null '}'
++ jq -r .state
+ state=normal
+ [[ ! normal == \n\o\r\m\a\l ]]
++ echo '{' '"upstream_id":' 7369194070519101266, '"namespace":' '"default",' '"id":' '"test-normal",' '"state":' '"normal",' '"checkpoint_tso":' 449779917745553442, '"checkpoint_time":' '"2024-05-15' '19:57:02.357",' '"error":' null '}'
++ jq -r .error.message
+ message=null
+ [[ ! null =~ null ]]
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=test-error-1
+ expected_state=normal
+ error_msg=null
+ tls_dir=
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c test-error-1 -s
+ info='{
  "upstream_id": 7369194070519101266,
  "namespace": "default",
  "id": "test-error-1",
  "state": "normal",
  "checkpoint_tso": 449779920262397954,
  "checkpoint_time": "2024-05-15 19:57:11.958",
  "error": null
}'
+ echo '{
  "upstream_id": 7369194070519101266,
  "namespace": "default",
  "id": "test-error-1",
  "state": "normal",
  "checkpoint_tso": 449779920262397954,
  "checkpoint_time": "2024-05-15 19:57:11.958",
  "error": null
}'
{
  "upstream_id": 7369194070519101266,
  "namespace": "default",
  "id": "test-error-1",
  "state": "normal",
  "checkpoint_tso": 449779920262397954,
  "checkpoint_time": "2024-05-15 19:57:11.958",
  "error": null
}
++ echo '{' '"upstream_id":' 7369194070519101266, '"namespace":' '"default",' '"id":' '"test-error-1",' '"state":' '"normal",' '"checkpoint_tso":' 449779920262397954, '"checkpoint_time":' '"2024-05-15' '19:57:11.958",' '"error":' null '}'
++ jq -r .state
+ state=normal
+ [[ ! normal == \n\o\r\m\a\l ]]
++ echo '{' '"upstream_id":' 7369194070519101266, '"namespace":' '"default",' '"id":' '"test-error-1",' '"state":' '"normal",' '"checkpoint_tso":' 449779920262397954, '"checkpoint_time":' '"2024-05-15' '19:57:11.958",' '"error":' null '}'
++ jq -r .error.message
+ message=null
+ [[ ! null =~ null ]]
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=test-error-2
+ expected_state=failed
+ error_msg=ErrSyncRenameTableFailed
+ tls_dir=
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c test-error-2 -s
+ info='{
  "upstream_id": 7369194070519101266,
  "namespace": "default",
  "id": "test-error-2",
  "state": "failed",
  "checkpoint_tso": 449779917627850788,
  "checkpoint_time": "2024-05-15 19:57:01.908",
  "error": {
    "time": "2024-05-15T19:57:03.012599842+08:00",
    "addr": "127.0.0.1:8300",
    "code": "CDC:ErrSyncRenameTableFailed",
    "message": "[CDC:ErrSyncRenameTableFailed]table'\''s old name is not in filter rule, and its new name in filter rule table id '\''128'\'', ddl query: [rename table t11 to t9], it'\''s an unexpected behavior, if you want to replicate this table, please add its old name to filter rule."
  }
}'
+ echo '{
  "upstream_id": 7369194070519101266,
  "namespace": "default",
  "id": "test-error-2",
  "state": "failed",
  "checkpoint_tso": 449779917627850788,
  "checkpoint_time": "2024-05-15 19:57:01.908",
  "error": {
    "time": "2024-05-15T19:57:03.012599842+08:00",
    "addr": "127.0.0.1:8300",
    "code": "CDC:ErrSyncRenameTableFailed",
    "message": "[CDC:ErrSyncRenameTableFailed]table'\''s old name is not in filter rule, and its new name in filter rule table id '\''128'\'', ddl query: [rename table t11 to t9], it'\''s an unexpected behavior, if you want to replicate this table, please add its old name to filter rule."
  }
}'
{
  "upstream_id": 7369194070519101266,
  "namespace": "default",
  "id": "test-error-2",
  "state": "failed",
  "checkpoint_tso": 449779917627850788,
  "checkpoint_time": "2024-05-15 19:57:01.908",
  "error": {
    "time": "2024-05-15T19:57:03.012599842+08:00",
    "addr": "127.0.0.1:8300",
    "code": "CDC:ErrSyncRenameTableFailed",
    "message": "[CDC:ErrSyncRenameTableFailed]table's old name is not in filter rule, and its new name in filter rule table id '128', ddl query: [rename table t11 to t9], it's an unexpected behavior, if you want to replicate this table, please add its old name to filter rule."
  }
}
++ jq -r .state
++ echo '{' '"upstream_id":' 7369194070519101266, '"namespace":' '"default",' '"id":' '"test-error-2",' '"state":' '"failed",' '"checkpoint_tso":' 449779917627850788, '"checkpoint_time":' '"2024-05-15' '19:57:01.908",' '"error":' '{' '"time":' '"2024-05-15T19:57:03.012599842+08:00",' '"addr":' '"127.0.0.1:8300",' '"code":' '"CDC:ErrSyncRenameTableFailed",' '"message":' '"[CDC:ErrSyncRenameTableFailed]table'\''s' old name is not in filter rule, and its new name in filter rule table id ''\''128'\'',' ddl query: '[rename' table t11 to 't9],' 'it'\''s' an unexpected behavior, if you want to replicate this table, please add its old name to filter 'rule."' '}' '}'
+ state=failed
+ [[ ! failed == \f\a\i\l\e\d ]]
++ jq -r .error.message
++ echo '{' '"upstream_id":' 7369194070519101266, '"namespace":' '"default",' '"id":' '"test-error-2",' '"state":' '"failed",' '"checkpoint_tso":' 449779917627850788, '"checkpoint_time":' '"2024-05-15' '19:57:01.908",' '"error":' '{' '"time":' '"2024-05-15T19:57:03.012599842+08:00",' '"addr":' '"127.0.0.1:8300",' '"code":' '"CDC:ErrSyncRenameTableFailed",' '"message":' '"[CDC:ErrSyncRenameTableFailed]table'\''s' old name is not in filter rule, and its new name in filter rule table id ''\''128'\'',' ddl query: '[rename' table t11 to 't9],' 'it'\''s' an unexpected behavior, if you want to replicate this table, please add its old name to filter 'rule."' '}' '}'
+ message='[CDC:ErrSyncRenameTableFailed]table'\''s old name is not in filter rule, and its new name in filter rule table id '\''128'\'', ddl query: [rename table t11 to t9], it'\''s an unexpected behavior, if you want to replicate this table, please add its old name to filter rule.'
+ [[ ! [CDC:ErrSyncRenameTableFailed]table's old name is not in filter rule, and its new name in filter rule table id '128', ddl query: [rename table t11 to t9], it's an unexpected behavior, if you want to replicate this table, please add its old name to filter rule. =~ ErrSyncRenameTableFailed ]]
check diff successfully
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Wed May 15 19:57:15 CST 2024] <<<<<< run test case multi_tables_ddl success! >>>>>>
Exiting on signal: INTERRUPT

  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0{"level":"warn","ts":"2024-05-15T19:57:16.371347+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000dca700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:16.376677+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e42e00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:16.547709+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012401c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0{"level":"warn","ts":"2024-05-15T19:57:18.360188+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000dca700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":"2024-05-15T19:57:18.360239+0800","logger":"etcd-client","caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2024-05-15T19:57:18.365404+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e42e00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":"2024-05-15T19:57:18.36546+0800","logger":"etcd-client","caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2024-05-15T19:57:18.534594+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012401c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":"2024-05-15T19:57:18.534653+0800","logger":"etcd-client","caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}

  0     0    0     0    0     0      0      0 --:--:--  0:00:11 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:12 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:13 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:14 --:--:--     0{"level":"warn","ts":"2024-05-15T19:57:22.373052+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000dca700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:22.377555+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e42e00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:22.548732+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012401c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
<<< Run all test success >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-release-7.5-pull_cdc_integration_kafka_test-593/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }

  0     0    0     0    0     0      0      0 --:--:--  0:00:15 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:16 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:17 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:18 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:20 --:--:--     0{"level":"warn","ts":"2024-05-15T19:57:28.375072+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000dca700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:28.378917+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e42e00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:28.550598+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012401c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"warn","ts":1715774248.8913612,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc003142a80/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1715774248.8914087,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}

  0     0    0     0    0     0      0      0 --:--:--  0:00:21 --:--:--     0{"level":"warn","ts":1715774248.9312518,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002b71a40/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":1715774248.9313104,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1715774249.593296,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0020d9a40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1715774249.5933728,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}

  0     0    0     0    0     0      0      0 --:--:--  0:00:22 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:23 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:24 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:25 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:26 --:--:--     0{"level":"warn","ts":"2024-05-15T19:57:34.375704+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000dca700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:34.379488+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e42e00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-15T19:57:34.551339+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012401c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:27 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:28 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:29 --:--:--     0
100   135  100   135    0     0      4      0  0:00:33  0:00:30  0:00:03    27
+ synced_status='{
    "error_msg": "[CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded",
    "error_code": "CDC:ErrPDEtcdAPIError"
}'
++ jq -r .error_code
++ echo '{' '"error_msg":' '"[CDC:ErrPDEtcdAPIError]etcd' api call error: context deadline 'exceeded",' '"error_code":' '"CDC:ErrPDEtcdAPIError"' '}'
+ error_code=CDC:ErrPDEtcdAPIError
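The check above follows a pattern worth noting: while PD is unreachable, the synced API returns an error payload instead of a status body. A minimal sketch of that assertion, assuming the same 127.0.0.1:8300 owner address, changefeed id test-1, and jq field names shown in this trace:
# Sketch only: expects the synced endpoint to return
# {"error_msg": "...", "error_code": "CDC:ErrPDEtcdAPIError"} while PD is down.
synced_status=$(curl -s -X GET "http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced")
error_code=$(echo "$synced_status" | jq -r .error_code)
if [ "$error_code" != "CDC:ErrPDEtcdAPIError" ]; then
    echo "unexpected error_code: $error_code"
    exit 1
fi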
+ cleanup_process cdc.test
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
+ stop_tidb_cluster
+ run_case_with_unavailable_tikv conf/changefeed-redo.toml
+ rm -rf /tmp/tidb_cdc_test/synced_status
+ mkdir -p /tmp/tidb_cdc_test/synced_status
+ start_tidb_cluster --workdir /tmp/tidb_cdc_test/synced_status
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
The 1 times to try to start tidb cluster...
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
start tidb cluster in /tmp/tidb_cdc_test/synced_status
Starting Upstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Starting Downstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v7.5.1-65-g1f29133f36
Edition: Community
Git Commit Hash: 1f29133f3629e407220c8f319c67381f437284bc
Git Branch: release-7.5
UTC Build Time: 2024-05-14 09:30:20
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-65-g1f29133f36
Edition: Community
Git Commit Hash: 1f29133f3629e407220c8f319c67381f437284bc
Git Branch: release-7.5
UTC Build Time: 2024-05-14 09:30:20
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df07737940017	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-17scm, pid:27795, start at 2024-05-15 19:57:59.167433212 +0800 CST m=+5.519578204	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-19:59:59.173 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-19:57:59.141 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:47:59.141 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df07737940017	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-17scm, pid:27795, start at 2024-05-15 19:57:59.167433212 +0800 CST m=+5.519578204	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-19:59:59.173 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-19:57:59.141 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:47:59.141 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df077390c0005	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-17scm, pid:27872, start at 2024-05-15 19:57:59.241240452 +0800 CST m=+5.493569422	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-19:59:59.250 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-19:57:59.235 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:47:59.235 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-19-gb9e45523c
Edition:         Community
Git Commit Hash: b9e45523c76c544235842fd3a78bb711c0d627c9
Git Branch:      HEAD
UTC Build Time:  2024-05-13 08:44:12
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-05-13 08:48:26
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["b9e45523c76c544235842fd3a78bb711c0d627c9"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash-proxy.toml"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/db/proxy"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/log/proxy.log"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-19-gb9e45523c"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ cd /tmp/tidb_cdc_test/synced_status
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.29070.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='449779933981179908
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449779933981179908 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ start_ts=449779933981179908
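A condensed sketch of the start-ts capture traced above, assuming a cdc binary on PATH and PD on 127.0.0.1:2379; the CLI prints the TSO as the first whitespace-separated field, so awk keeps only that:
# Sketch: query a TSO from PD and keep the first field as the changefeed start-ts.
tso_output=$(cdc cli tso query --pd=http://127.0.0.1:2379)
start_ts=$(echo "$tso_output" | awk -F ' ' '{print $1}')
echo "using start_ts=$start_ts"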
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status --binary cdc.test
[Wed May 15 19:58:05 CST 2024] <<<<<< START cdc server in synced_status case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ [[ no != \n\o ]]
+ GO_FAILPOINTS=
+ (( i = 0 ))
+ (( i <= 50 ))
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.2909729099.out server --log-file /tmp/tidb_cdc_test/synced_status/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status/cdc_data --cluster-id default
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Wed, 15 May 2024 11:58:08 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/9cfeb158-15e0-4cc4-a417-793c5706d951
	{"id":"9cfeb158-15e0-4cc4-a417-793c5706d951","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c1da85ff6
	9cfeb158-15e0-4cc4-a417-793c5706d951

/tidb/cdc/default/default/upstream/7369194380481194580
	{"id":7369194380481194580,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/9cfeb158-15e0-4cc4-a417-793c5706d951
	{"id":"9cfeb158-15e0-4cc4-a417-793c5706d951","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c1da85ff6
	9cfeb158-15e0-4cc4-a417-793c5706d951

/tidb/cdc/default/default/upstream/7369194380481194580
	{"id":7369194380481194580,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/9cfeb158-15e0-4cc4-a417-793c5706d951
	{"id":"9cfeb158-15e0-4cc4-a417-793c5706d951","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c1da85ff6
	9cfeb158-15e0-4cc4-a417-793c5706d951

/tidb/cdc/default/default/upstream/7369194380481194580
	{"id":7369194380481194580,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
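The readiness loop above reduces to the following sketch, assuming the server binds to 127.0.0.1:8300 as in this run and that a healthy owner exposes 'etcd info' on /debug/info:
# Sketch: poll /debug/info until the owner reports etcd info, up to ~50 attempts.
for i in $(seq 0 50); do
    res=$(curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info 2>/dev/null)
    if echo "$res" | grep -q 'etcd info' && ! echo "$res" | grep -q 'failed to get info:'; then
        break
    fi
    if [ "$i" -eq 50 ]; then
        echo 'cdc server failed to start' >&2
        exit 1
    fi
    sleep 3
done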
+ config_path=conf/changefeed-redo.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=449779933981179908 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.29147.out cli changefeed create --start-ts=449779933981179908 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7369194380481194580,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2024-05-15T19:58:09.267614543+08:00","start_ts":449779933981179908,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"eventual","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"storage":"file:///tmp/tidb_cdc_test/synced_status/redo","use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v7.5.1-30-g92884c9e7","resolved_ts":449779933981179908,"checkpoint_ts":449779933981179908,"checkpoint_time":"2024-05-15 19:58:04.291"}
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
+ set +x
+ run_sql 'USE TEST;Create table t1(a int primary key, b int);insert into t1 values(1,2);insert into t1 values(2,3);'
+ check_table_exists test.t1 127.0.0.1 3306
table test.t1 exists
+ sleep 5
+ kill_tikv
++ ps aux
++ grep tikv-server
++ grep /tmp/tidb_cdc_test/synced_status
+ info='jenkins    27225 24.0  0.4 3708744 1683424 ?     Sl   19:57   0:06 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20160 --status-addr 127.0.0.1:20181 --log-file /tmp/tidb_cdc_test/synced_status/tikv1.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv1
jenkins    27226 17.4  0.4 3679560 1635656 ?     Sl   19:57   0:04 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20161 --status-addr 127.0.0.1:20182 --log-file /tmp/tidb_cdc_test/synced_status/tikv2.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv2
jenkins    27227 17.0  0.4 3678536 1609996 ?     Sl   19:57   0:04 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20162 --status-addr 127.0.0.1:20183 --log-file /tmp/tidb_cdc_test/synced_status/tikv3.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv3
jenkins    27229 23.4  0.4 3714376 1674068 ?     Sl   19:57   0:05 tikv-server --pd 127.0.0.1:2479 -A 127.0.0.1:21160 --status-addr 127.0.0.1:21180 --log-file /tmp/tidb_cdc_test/synced_status/tikv_down.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv_down'
++ ps aux
++ grep tikv-server
++ grep /tmp/tidb_cdc_test/synced_status
++ awk '{print $2}'
++ xargs kill -9
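The kill step above is a single pipeline; a sketch, assuming the same workdir filter so only this case's tikv-server processes match:
# Sketch: kill every tikv-server started under this test's workdir
# (mirrors the ps/grep/awk/xargs pipeline in the trace above).
ps aux | grep tikv-server | grep /tmp/tidb_cdc_test/synced_status | awk '{print $2}' | xargs kill -9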
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   243  100   243    0     0   3987      0 --:--:-- --:--:-- --:--:--  4050
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-15 19:58:14.742","puller_resolved_ts":"1970-01-01 08:00:00.000","last_synced_ts":"2024-05-15 19:58:11.192","now_ts":"2024-05-15 19:58:16.000","info":"The data syncing is not finished, please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-15' '19:58:14.742","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2024-05-15' '19:58:11.192","now_ts":"2024-05-15' '19:58:16.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-15' '19:58:14.742","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2024-05-15' '19:58:11.192","now_ts":"2024-05-15' '19:58:16.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq -r .info
+ info='The data syncing is not finished, please wait'
+ target_message='The data syncing is not finished, please wait'
+ '[' 'The data syncing is not finished, please wait' '!=' 'The data syncing is not finished, please wait' ']'
+ sleep 130
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   723  100   723    0     0  10220      0 --:--:-- --:--:-- --:--:-- 10328
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-15 19:58:15.741","puller_resolved_ts":"2024-05-15 19:58:15.741","last_synced_ts":"2024-05-15 19:58:11.192","now_ts":"2024-05-15 20:00:26.000","info":"Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' \u003e '\''Resolved-Ts'\'' \u003e '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait"}'
++ jq .synced
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-15' '19:58:15.741","puller_resolved_ts":"2024-05-15' '19:58:15.741","last_synced_ts":"2024-05-15' '19:58:11.192","now_ts":"2024-05-15' '20:00:26.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-15' '19:58:15.741","puller_resolved_ts":"2024-05-15' '19:58:15.741","last_synced_ts":"2024-05-15' '19:58:11.192","now_ts":"2024-05-15' '20:00:26.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
++ jq -r .info
+ info='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ target_message='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ '[' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' '!=' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' ']'
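The two assertions above share one pattern: fetch the synced status, compare .synced, then compare .info. A small helper that captures it, assuming the endpoint, owner address, and field names seen in this log (check_synced is a hypothetical name, not part of the test script):
# Sketch of the synced-status assertion used above.
check_synced() {
    local expected_synced=$1 expected_info=$2
    local body status info
    body=$(curl -s -X GET "http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced")
    status=$(echo "$body" | jq .synced)
    info=$(echo "$body" | jq -r .info)
    [ "$status" = "$expected_synced" ] || { echo "unexpected synced: $status"; exit 1; }
    [ "$info" = "$expected_info" ] || { echo "unexpected info: $info"; exit 1; }
}
# e.g. check_synced false 'The data syncing is not finished, please wait'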
+ cleanup_process cdc.test
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
+ stop_tidb_cluster
+ run_case_with_unavailable_tidb conf/changefeed-redo.toml
+ rm -rf /tmp/tidb_cdc_test/synced_status
+ mkdir -p /tmp/tidb_cdc_test/synced_status
+ start_tidb_cluster --workdir /tmp/tidb_cdc_test/synced_status
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
The 1 times to try to start tidb cluster...
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
start tidb cluster in /tmp/tidb_cdc_test/synced_status
Starting Upstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Starting Downstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v7.5.1-65-g1f29133f36
Edition: Community
Git Commit Hash: 1f29133f3629e407220c8f319c67381f437284bc
Git Branch: release-7.5
UTC Build Time: 2024-05-14 09:30:20
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-65-g1f29133f36
Edition: Community
Git Commit Hash: 1f29133f3629e407220c8f319c67381f437284bc
Git Branch: release-7.5
UTC Build Time: 2024-05-14 09:30:20
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df0817d500012	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-17scm, pid:30237, start at 2024-05-15 20:00:47.463228764 +0800 CST m=+5.419121732	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-20:02:47.470 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-20:00:47.444 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:50:47.444 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df0817d500012	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-17scm, pid:30237, start at 2024-05-15 20:00:47.463228764 +0800 CST m=+5.419121732	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-20:02:47.470 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-20:00:47.444 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:50:47.444 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df0817d200005	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-17scm, pid:30321, start at 2024-05-15 20:00:47.435904672 +0800 CST m=+5.291092377	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-20:02:47.441 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-20:00:47.432 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:50:47.432 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-19-gb9e45523c
Edition:         Community
Git Commit Hash: b9e45523c76c544235842fd3a78bb711c0d627c9
Git Branch:      HEAD
UTC Build Time:  2024-05-13 08:44:12
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-05-13 08:48:26
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash-proxy.toml"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-19-gb9e45523c"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/log/proxy.log"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/db/proxy"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["b9e45523c76c544235842fd3a78bb711c0d627c9"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ cd /tmp/tidb_cdc_test/synced_status
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.31548.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='449779978127015937
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449779978127015937 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ start_ts=449779978127015937
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status --binary cdc.test
[Wed May 15 20:00:54 CST 2024] <<<<<< START cdc server in synced_status case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ [[ no != \n\o ]]
+ GO_FAILPOINTS=
+ (( i = 0 ))
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.3158031582.out server --log-file /tmp/tidb_cdc_test/synced_status/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status/cdc_data --cluster-id default
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Wed, 15 May 2024 12:00:57 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/c7edb24f-28f3-452c-bbd2-f7703a6978c0
	{"id":"c7edb24f-28f3-452c-bbd2-f7703a6978c0","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c203a31f4
	c7edb24f-28f3-452c-bbd2-f7703a6978c0

/tidb/cdc/default/default/upstream/7369195101037867444
	{"id":7369195101037867444,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/c7edb24f-28f3-452c-bbd2-f7703a6978c0
	{"id":"c7edb24f-28f3-452c-bbd2-f7703a6978c0","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c203a31f4
	c7edb24f-28f3-452c-bbd2-f7703a6978c0

/tidb/cdc/default/default/upstream/7369195101037867444
	{"id":7369195101037867444,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/c7edb24f-28f3-452c-bbd2-f7703a6978c0
	{"id":"c7edb24f-28f3-452c-bbd2-f7703a6978c0","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c203a31f4
	c7edb24f-28f3-452c-bbd2-f7703a6978c0

/tidb/cdc/default/default/upstream/7369195101037867444
	{"id":7369195101037867444,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
+ config_path=conf/changefeed-redo.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=449779978127015937 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.31624.out cli changefeed create --start-ts=449779978127015937 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7369195101037867444,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2024-05-15T20:00:57.632192942+08:00","start_ts":449779978127015937,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"eventual","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"storage":"file:///tmp/tidb_cdc_test/synced_status/redo","use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v7.5.1-30-g92884c9e7","resolved_ts":449779978127015937,"checkpoint_ts":449779978127015937,"checkpoint_time":"2024-05-15 20:00:52.694"}
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
+ set +x
+ run_sql 'USE TEST;Create table t1(a int primary key, b int);insert into t1 values(1,2);insert into t1 values(2,3);'
+ check_table_exists test.t1 127.0.0.1 3306
table test.t1 not exists for 1-th check, retry later
table test.t1 exists
+ sleep 5
+ kill_tidb
++ ps aux
++ grep tidb-server
++ grep /tmp/tidb_cdc_test/synced_status
+ info='jenkins    30237 10.5  0.0 2722888 192208 ?      Sl   20:00   0:02 tidb-server -P 4000 -config /tmp/tidb_cdc_test/synced_status/tidb-config-1715774442038114026.toml --store tikv --path 127.0.0.1:2379 --status=10080 --log-file /tmp/tidb_cdc_test/synced_status/tidb.log
jenkins    30241  3.0  0.0 2655912 157872 ?      Sl   20:00   0:00 tidb-server -P 4001 -config /tmp/tidb_cdc_test/synced_status/tidb-config-1715774442041078607.toml --store tikv --path 127.0.0.1:2379 --status=10081 --log-file /tmp/tidb_cdc_test/synced_status/tidb_other.log
jenkins    30321 10.1  0.0 2558328 215528 ?      Sl   20:00   0:02 tidb-server -P 3306 -config /tmp/tidb_cdc_test/synced_status/tidb-config-1715774442138588439.toml --store tikv --path 127.0.0.1:2479 --status=20080 --log-file /tmp/tidb_cdc_test/synced_status/tidb_down.log'
++ ps aux
++ grep tidb-server
++ grep /tmp/tidb_cdc_test/synced_status
++ awk '{print $2}'
++ xargs kill -9
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   243  100   243    0     0   4385      0 --:--:-- --:--:-- --:--:--  4418
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-15 20:01:05.145","puller_resolved_ts":"1970-01-01 08:00:00.000","last_synced_ts":"2024-05-15 20:00:59.144","now_ts":"2024-05-15 20:01:06.000","info":"The data syncing is not finished, please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-15' '20:01:05.145","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2024-05-15' '20:00:59.144","now_ts":"2024-05-15' '20:01:06.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-15' '20:01:05.145","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2024-05-15' '20:00:59.144","now_ts":"2024-05-15' '20:01:06.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq -r .info
+ info='The data syncing is not finished, please wait'
+ target_message='The data syncing is not finished, please wait'
+ '[' 'The data syncing is not finished, please wait' '!=' 'The data syncing is not finished, please wait' ']'
+ sleep 130
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   221  100   221    0     0   2555      0 --:--:-- --:--:-- --:--:--  2569
+ synced_status='{"synced":true,"sink_checkpoint_ts":"2024-05-15 20:03:15.295","puller_resolved_ts":"2024-05-15 20:03:09.244","last_synced_ts":"2024-05-15 20:00:59.144","now_ts":"2024-05-15 20:03:16.000","info":"Data syncing is finished"}'
++ echo '{"synced":true,"sink_checkpoint_ts":"2024-05-15' '20:03:15.295","puller_resolved_ts":"2024-05-15' '20:03:09.244","last_synced_ts":"2024-05-15' '20:00:59.144","now_ts":"2024-05-15' '20:03:16.000","info":"Data' syncing is 'finished"}'
++ jq .synced
+ status=true
+ '[' true '!=' true ']'
++ echo '{"synced":true,"sink_checkpoint_ts":"2024-05-15' '20:03:15.295","puller_resolved_ts":"2024-05-15' '20:03:09.244","last_synced_ts":"2024-05-15' '20:00:59.144","now_ts":"2024-05-15' '20:03:16.000","info":"Data' syncing is 'finished"}'
++ jq -r .info
+ info='Data syncing is finished'
+ target_message='Data syncing is finished'
+ '[' 'Data syncing is finished' '!=' 'Data syncing is finished' ']'
+ cleanup_process cdc.test
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
+ stop_tidb_cluster
+ run_case_with_failpoint conf/changefeed-redo.toml
+ rm -rf /tmp/tidb_cdc_test/synced_status
+ mkdir -p /tmp/tidb_cdc_test/synced_status
+ start_tidb_cluster --workdir /tmp/tidb_cdc_test/synced_status
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
The 1 times to try to start tidb cluster...
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
start tidb cluster in /tmp/tidb_cdc_test/synced_status
Starting Upstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Starting Downstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v7.5.1-65-g1f29133f36
Edition: Community
Git Commit Hash: 1f29133f3629e407220c8f319c67381f437284bc
Git Branch: release-7.5
UTC Build Time: 2024-05-14 09:30:20
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-65-g1f29133f36
Edition: Community
Git Commit Hash: 1f29133f3629e407220c8f319c67381f437284bc
Git Branch: release-7.5
UTC Build Time: 2024-05-14 09:30:20
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df08bef580018	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-17scm, pid:32723, start at 2024-05-15 20:03:38.616331297 +0800 CST m=+5.280425176	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-20:05:38.623 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-20:03:38.582 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:53:38.582 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df08bef580018	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-17scm, pid:32723, start at 2024-05-15 20:03:38.616331297 +0800 CST m=+5.280425176	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-20:05:38.623 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-20:03:38.582 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:53:38.582 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63df08bf1940013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-kafka-test-593-17scm, pid:32817, start at 2024-05-15 20:03:38.756234372 +0800 CST m=+5.314915338	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240515-20:05:38.763 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240515-20:03:38.725 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240515-19:53:38.725 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-19-gb9e45523c
Edition:         Community
Git Commit Hash: b9e45523c76c544235842fd3a78bb711c0d627c9
Git Branch:      HEAD
UTC Build Time:  2024-05-13 08:44:12
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-05-13 08:48:26
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status/tiflash/log/error.log
arg matches is ArgMatches { args: {"advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/db/proxy"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/log/proxy.log"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["b9e45523c76c544235842fd3a78bb711c0d627c9"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-19-gb9e45523c"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash-proxy.toml"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ cd /tmp/tidb_cdc_test/synced_status
+ export 'GO_FAILPOINTS=github.com/pingcap/tiflow/cdc/owner/ChangefeedOwnerNotUpdateCheckpoint=return(true)'
+ GO_FAILPOINTS='github.com/pingcap/tiflow/cdc/owner/ChangefeedOwnerNotUpdateCheckpoint=return(true)'
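A note on the failpoint enabled here: GO_FAILPOINTS follows the pingcap/failpoint convention of <failpoint path>=<action>. A minimal sketch of what this run injects (the value is copied from the trace; the comments are explanatory assumptions):

# return(true) makes the ChangefeedOwnerNotUpdateCheckpoint failpoint fire on every
# evaluation, so the owner keeps running but never advances the checkpoint; the later
# synced-status check relies on exactly that behavior.
export GO_FAILPOINTS='github.com/pingcap/tiflow/cdc/owner/ChangefeedOwnerNotUpdateCheckpoint=return(true)'
# Multiple failpoints can be combined with ';' between terms (assumption based on the
# pingcap/failpoint syntax; only a single failpoint is used in this log).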
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.34023.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='449780023028875265
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449780023028875265 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ start_ts=449780023028875265
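The start_ts used below is derived from the cdc cli tso query output; a minimal sketch of that extraction, assuming the same PD endpoint as above:

# The instrumented cdc.test binary appends PASS/coverage lines to its stdout, so the
# TSO is taken as the first whitespace-separated field of the flattened output.
tso_output=$(cdc cli tso query --pd=http://127.0.0.1:2379)
start_ts=$(echo $tso_output | awk -F ' ' '{print $1}')
echo "using start_ts=$start_ts"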
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status --binary cdc.test
[Wed May 15 20:03:45 CST 2024] <<<<<< START cdc server in synced_status case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ [[ no != \n\o ]]
+ (( i = 0 ))
+ (( i <= 50 ))
+ GO_FAILPOINTS='github.com/pingcap/tiflow/cdc/owner/ChangefeedOwnerNotUpdateCheckpoint=return(true)'
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.3405934061.out server --log-file /tmp/tidb_cdc_test/synced_status/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status/cdc_data --cluster-id default
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Wed, 15 May 2024 12:03:48 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/2c942425-c628-44b8-b719-61e3d65e2105
	{"id":"2c942425-c628-44b8-b719-61e3d65e2105","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c22d16ef6
	2c942425-c628-44b8-b719-61e3d65e2105

/tidb/cdc/default/default/upstream/7369195838966397720
	{"id":7369195838966397720,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/2c942425-c628-44b8-b719-61e3d65e2105
	{"id":"2c942425-c628-44b8-b719-61e3d65e2105","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c22d16ef6
	2c942425-c628-44b8-b719-61e3d65e2105

/tidb/cdc/default/default/upstream/7369195838966397720
	{"id":7369195838966397720,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/2c942425-c628-44b8-b719-61e3d65e2105
	{"id":"2c942425-c628-44b8-b719-61e3d65e2105","address":"127.0.0.1:8300","version":"v7.5.1-30-g92884c9e7"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f7c22d16ef6
	2c942425-c628-44b8-b719-61e3d65e2105

/tidb/cdc/default/default/upstream/7369195838966397720
	{"id":7369195838966397720,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
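The readiness probe traced above boils down to polling /debug/info until the capture has registered itself in etcd; a minimal sketch, with the retry bound and marker strings taken from the trace (error handling simplified):

for ((i = 0; i <= 50; i++)); do
  res=$(curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info)
  # Treat an error page as "not ready yet"; succeed once the etcd metadata is visible.
  if ! echo "$res" | grep -q 'failed to get info:' && echo "$res" | grep -q 'etcd info'; then
    break                     # owner/capture metadata is visible, server is ready
  fi
  if [ "$i" -eq 50 ]; then
    echo "cdc server did not become ready" && exit 1
  fi
  sleep 3
done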
+ config_path=conf/changefeed-redo.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=449780023028875265 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.34107.out cli changefeed create --start-ts=449780023028875265 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7369195838966397720,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2024-05-15T20:03:48.961613788+08:00","start_ts":449780023028875265,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"eventual","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"storage":"file:///tmp/tidb_cdc_test/synced_status/redo","use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v7.5.1-30-g92884c9e7","resolved_ts":449780023028875265,"checkpoint_ts":449780023028875265,"checkpoint_time":"2024-05-15 20:03:43.981"}
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
+ set +x
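The changefeed just created uses a redo-enabled configuration. conf/changefeed-redo.toml itself is not printed in the log, so the sketch below reconstructs it as an assumption from the "consistent" and "synced_status" sections echoed back in the create response (key names follow TiCDC's kebab-case TOML convention and may differ from the actual file); the cdc cli invocation is the one from the trace:

# Assumed contents of conf/changefeed-redo.toml (reconstructed, not shown in the log):
# [consistent]
# level = "eventual"
# storage = "file:///tmp/tidb_cdc_test/synced_status/redo"
#
# [synced-status]
# synced-check-interval = 120
# checkpoint-interval = 20

cdc cli changefeed create \
  --start-ts=449780023028875265 \
  --sink-uri="mysql://root@127.0.0.1:3306/?max-txn-row=1" \
  --changefeed-id=test-1 \
  --config=conf/changefeed-redo.toml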
+ sleep 20
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   723  100   723    0     0  10092      0 --:--:-- --:--:-- --:--:-- 10183
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-15 20:03:43.981","puller_resolved_ts":"1970-01-01 08:00:00.000","last_synced_ts":"1970-01-01 08:00:00.000","now_ts":"2024-05-15 20:04:10.000","info":"Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' \u003e '\''Resolved-Ts'\'' \u003e '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-15' '20:03:43.981","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"1970-01-01' '08:00:00.000","now_ts":"2024-05-15' '20:04:10.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ jq -r .info
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-15' '20:03:43.981","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"1970-01-01' '08:00:00.000","now_ts":"2024-05-15' '20:04:10.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
+ info='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ target_message='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ '[' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' '!=' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' ']'
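With the ChangefeedOwnerNotUpdateCheckpoint failpoint active, the checkpoint stays pinned at start_ts, so the API must report synced=false together with the PD/TiKV diagnostic message. The comparison above can be summarized by this sketch (the helper name is hypothetical; the endpoint and message prefix come from the trace):

expect_not_synced() {
  local resp status info
  resp=$(curl -s -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced)
  status=$(echo "$resp" | jq .synced)
  info=$(echo "$resp" | jq -r .info)
  [ "$status" = "false" ] || { echo "expected synced=false, got: $status"; return 1; }
  case "$info" in
    'Please check whether PD is online and TiKV Regions are all available.'*) return 0 ;;
    *) echo "unexpected info: $info"; return 1 ;;
  esac
}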
+ export GO_FAILPOINTS=
+ GO_FAILPOINTS=
+ cleanup_process cdc.test
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
+ stop_tidb_cluster
+ check_logs /tmp/tidb_cdc_test/synced_status
++ date
+ echo '[Wed May 15 20:04:21 CST 2024] <<<<<< run test case synced_status success! >>>>>>'
[Wed May 15 20:04:21 CST 2024] <<<<<< run test case synced_status success! >>>>>>
+ stop_tidb_cluster
<<< Run all test success >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-release-7.5-pull_cdc_integration_kafka_test-593/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS