Success

Console Output

Skipping 2,334 KB..
waiting for process cdc.test to exit (1st attempt)...
waiting for process cdc.test to exit (2nd attempt)...
cdc.test: no process found
waiting for process cdc.test to exit (3rd attempt)...
process cdc.test has already exited
[Fri Jul 11 18:36:32 CST 2025] <<<<<< run test case canal_json_claim_check success! >>>>>>
/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/canal_json_claim_check/run.sh: line 1: 36723 Killed                  cdc_kafka_consumer --upstream-uri $SINK_URI --downstream-uri="mysql://root@127.0.0.1:3306/?safe-mode=true&batch-dml-enable=false" --upstream-tidb-dsn="root@tcp(${UP_TIDB_HOST}:${UP_TIDB_PORT})/?" --config="$CUR/conf/changefeed.toml" 2>&1
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/open_protocol_claim_check/run.sh using Sink-Type: kafka... <<=================
Attempt 1 to start the tidb cluster...
start tidb cluster in /tmp/tidb_cdc_test/open_protocol_claim_check
Starting Upstream PD...
Release Version: v7.5.6-4-gda85ca769
Edition: Community
Git Commit Hash: da85ca769989e82fd6b3e83c8e80d513b3c59046
Git Branch: release-7.5
UTC Build Time:  2025-07-09 04:20:09
Starting Downstream PD...
Release Version: v7.5.6-4-gda85ca769
Edition: Community
Git Commit Hash: da85ca769989e82fd6b3e83c8e80d513b3c59046
Git Branch: release-7.5
UTC Build Time:  2025-07-09 04:20:09
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.7
Edition:           Community
Git Commit Hash:   5ad8129a6cbbd9cc3b258cab939a316caa4bb5eb
Git Commit Branch: release-7.5
UTC Build Time:    2025-07-11 07:24:05
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.7
Edition:           Community
Git Commit Hash:   5ad8129a6cbbd9cc3b258cab939a316caa4bb5eb
Git Commit Branch: release-7.5
UTC Build Time:    2025-07-11 07:24:05
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v7.5.6-42-gf72854f735
Edition: Community
Git Commit Hash: f72854f73567c8a770b1840d85be7fccba37cc77
Git Branch: release-7.5
UTC Build Time: 2025-07-11 05:28:49
GoVersion: go1.21.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.6-42-gf72854f735
Edition: Community
Git Commit Hash: f72854f73567c8a770b1840d85be7fccba37cc77
Git Branch: release-7.5
UTC Build Time: 2025-07-11 05:28:49
GoVersion: go1.21.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe43d3f840013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-d54fn, pid:37736, start at 2025-07-11 18:36:59.788891609 +0800 CST m=+5.453577960	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:38:59.795 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:36:59.796 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:26:59.796 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe43d3f840013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-d54fn, pid:37736, start at 2025-07-11 18:36:59.788891609 +0800 CST m=+5.453577960	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:38:59.795 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:36:59.796 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:26:59.796 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe43d410c0010	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-d54fn, pid:37826, start at 2025-07-11 18:36:59.860353015 +0800 CST m=+5.421414267	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:38:59.866 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:36:59.843 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:26:59.843 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.6-7-gbf1951c09e
Edition:         Community
Git Commit Hash: bf1951c09e4f330799d93475f8440bc2dd0fc96d
Git Branch:      HEAD
UTC Build Time:  2025-07-09 06:34:30
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   47ebe57272ab76cc9206d2bed49712871cb479bb
Git Commit Branch: HEAD
UTC Build Time:    2025-07-09 06:39:30
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/open_protocol_claim_check/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/open_protocol_claim_check/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["bf1951c09e4f330799d93475f8440bc2dd0fc96d"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/open_protocol_claim_check/tiflash/log/proxy.log"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/open_protocol_claim_check/tiflash/db/proxy"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/open_protocol_claim_check/tiflash-proxy.toml"] }, "memory-limit-ratio": MatchedArg { occurs: 1, indices: [22], vals: ["0.800000"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.6-7-gbf1951c09e"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.open_protocol_claim_check.cli.39054.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='459336640101089281
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 459336640101089281 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.open_protocol_claim_check.cli.39099.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='459336641451393025
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 459336641451393025 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
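The two traces above show the pattern used to capture start-ts and target-ts: `cdc cli tso query` prints the TSO followed by the Go test harness's PASS/coverage lines, and awk keeps only the first field of the echoed result. A minimal standalone equivalent, assuming a plain cdc binary on PATH (the run_cdc_cli wrapper and coverage flags belong to the test library):

    # query a TSO from PD and strip the trailing PASS/coverage lines
    pd_addr="http://127.0.0.1:2379"
    start_ts=$(cdc cli tso query --pd="$pd_addr" | head -n1 | awk -F ' ' '{print $1}')
    echo "captured start-ts: $start_ts"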
[Fri Jul 11 18:37:11 CST 2025] <<<<<< START cdc server in open_protocol_claim_check case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.open_protocol_claim_check.3913539137.out server --log-file /tmp/tidb_cdc_test/open_protocol_claim_check/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/open_protocol_claim_check/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 11 Jul 2025 10:37:14 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/c7603ff5-4278-4f25-8e62-cb24776636d8
	{"id":"c7603ff5-4278-4f25-8e62-cb24776636d8","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f90f22c018
	c7603ff5-4278-4f25-8e62-cb24776636d8

/tidb/cdc/default/default/upstream/7525771449506369713
	{"id":7525771449506369713,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/c7603ff5-4278-4f25-8e62-cb24776636d8
	{"id":"c7603ff5-4278-4f25-8e62-cb24776636d8","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f90f22c018
	c7603ff5-4278-4f25-8e62-cb24776636d8

/tidb/cdc/default/default/upstream/7525771449506369713
	{"id":7525771449506369713,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/c7603ff5-4278-4f25-8e62-cb24776636d8
	{"id":"c7603ff5-4278-4f25-8e62-cb24776636d8","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f90f22c018
	c7603ff5-4278-4f25-8e62-cb24776636d8

/tidb/cdc/default/default/upstream/7525771449506369713
	{"id":7525771449506369713,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
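The block above is the cdc server readiness probe: it polls http://127.0.0.1:8300/debug/info up to 50 times, three seconds apart, treats "failed to get info:" in the response as an error marker, and declares success once the body contains "etcd info" (meaning the capture has registered itself in etcd). A condensed sketch of that loop, with illustrative error handling:

    for i in $(seq 0 50); do
        res=$(curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info)
        # success: the capture key shows up in the etcd dump
        echo "$res" | grep -q 'etcd info' && break
        if [ "$i" -eq 50 ]; then
            echo "cdc server did not come up in time" >&2
            exit 1
        fi
        sleep 3
    done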
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.open_protocol_claim_check.cli.39180.out cli changefeed create --start-ts=459336640101089281 --target-ts=459336641451393025 '--sink-uri=kafka://127.0.0.1:9092/open-protocol-claim-check?protocol=open-protocol&max-message-bytes=800&kafka-version=2.4.1' --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/open_protocol_claim_check/conf/changefeed.toml
Create changefeed successfully!
ID: 2fe868f1-0899-4468-b141-297369b899ca
Info: {"upstream_id":7525771449506369713,"namespace":"default","id":"2fe868f1-0899-4468-b141-297369b899ca","sink_uri":"kafka://127.0.0.1:9092/open-protocol-claim-check?protocol=open-protocol\u0026max-message-bytes=800\u0026kafka-version=2.4.1","create_time":"2025-07-11T18:37:15.077046604+08:00","start_ts":459336640101089281,"target_ts":459336641451393025,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"open-protocol","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"kafka_config":{"large_message_handle":{"large_message_handle_option":"claim-check","large_message_handle_compression":"lz4","claim_check_storage_uri":"file:///tmp/open-protocol-claim-check"}},"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"send-all-bootstrap-at-start":false,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.6-12-g60a79aa87","resolved_ts":459336640101089281,"checkpoint_ts":459336640101089281,"checkpoint_time":"2025-07-11 18:37:04.995"}
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
+ set +x
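The changefeed JSON echoed above shows the large-message handling that this case exercises: messages exceeding max-message-bytes=800 are offloaded via claim-check to file:///tmp/open-protocol-claim-check with lz4 compression. The conf file itself is not printed, but reconstructed from that JSON it plausibly contains (key names assume TiCDC's kebab-case TOML convention, so treat this as a sketch rather than the actual file):

    cat > conf/changefeed.toml <<'EOF'
    # offload oversized Kafka messages to external storage instead of failing
    [sink.kafka-config.large-message-handle]
    large-message-handle-option = "claim-check"
    large-message-handle-compression = "lz4"
    claim-check-storage-uri = "file:///tmp/open-protocol-claim-check"
    EOF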
table test.finish_mark does not exist at check 1, retrying later
table test.finish_mark does not exist at check 2, retrying later
table test.finish_mark exists
check diff successfully
waiting for process cdc.test to exit (1st attempt)...
waiting for process cdc.test to exit (2nd attempt)...
cdc.test: no process found
waiting for process cdc.test to exit (3rd attempt)...
process cdc.test has already exited
[Fri Jul 11 18:37:22 CST 2025] <<<<<< run test case open_protocol_claim_check success! >>>>>>
/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/open_protocol_claim_check/run.sh: line 1: 39225 Killed                  cdc_kafka_consumer --upstream-uri $SINK_URI --downstream-uri="mysql://root@127.0.0.1:3306/?safe-mode=true&batch-dml-enable=false" --upstream-tidb-dsn="root@tcp(${UP_TIDB_HOST}:${UP_TIDB_PORT})/?" --config="$CUR/conf/changefeed.toml" 2>&1
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/canal_json_storage_basic/run.sh using Sink-Type: kafka... <<=================
[Fri Jul 11 18:37:33 CST 2025] <<<<<< run test case canal_json_storage_basic success! >>>>>>
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/canal_json_storage_partition_table/run.sh using Sink-Type: kafka... <<=================
[Fri Jul 11 18:37:36 CST 2025] <<<<<< run test case canal_json_storage_partition_table success! >>>>>>
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/multi_tables_ddl/run.sh using Sink-Type: kafka... <<=================
* About to connect() to 127.0.0.1 port 24927 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:24927; Connection refused
* Closing connection 0

 You are running an older version of MinIO released 4 years ago 
 Update: Run `mc admin update` 


Attempting encryption of all config, IAM users and policies on MinIO backend
Endpoint:  http://127.0.0.1:24927

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide
* About to connect() to 127.0.0.1 port 24927 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 24927 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:24927
> Accept: */*
> 
< HTTP/1.1 403 Forbidden
< Accept-Ranges: bytes
< Content-Length: 226
< Content-Security-Policy: block-all-mixed-content
< Content-Type: application/xml
< Server: MinIO/RELEASE.2020-07-27T18-37-02Z
< Vary: Origin
< X-Amz-Request-Id: 18512C23C08EC38A
< X-Xss-Protection: 1; mode=block
< Date: Fri, 11 Jul 2025 10:37:42 GMT
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
Bucket 's3://logbucket/' created
Attempt 1 to start the tidb cluster...
start tidb cluster in /tmp/tidb_cdc_test/multi_tables_ddl
Starting Upstream PD...
Release Version: v7.5.6-4-gda85ca769
Edition: Community
Git Commit Hash: da85ca769989e82fd6b3e83c8e80d513b3c59046
Git Branch: release-7.5
UTC Build Time:  2025-07-09 04:20:09
Starting Downstream PD...
Release Version: v7.5.6-4-gda85ca769
Edition: Community
Git Commit Hash: da85ca769989e82fd6b3e83c8e80d513b3c59046
Git Branch: release-7.5
UTC Build Time:  2025-07-09 04:20:09
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.7
Edition:           Community
Git Commit Hash:   5ad8129a6cbbd9cc3b258cab939a316caa4bb5eb
Git Commit Branch: release-7.5
UTC Build Time:    2025-07-11 07:24:05
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.7
Edition:           Community
Git Commit Hash:   5ad8129a6cbbd9cc3b258cab939a316caa4bb5eb
Git Commit Branch: release-7.5
UTC Build Time:    2025-07-11 07:24:05
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v7.5.6-42-gf72854f735
Edition: Community
Git Commit Hash: f72854f73567c8a770b1840d85be7fccba37cc77
Git Branch: release-7.5
UTC Build Time: 2025-07-11 05:28:49
GoVersion: go1.21.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.6-42-gf72854f735
Edition: Community
Git Commit Hash: f72854f73567c8a770b1840d85be7fccba37cc77
Git Branch: release-7.5
UTC Build Time: 2025-07-11 05:28:49
GoVersion: go1.21.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe440b7000005	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-d54fn, pid:40370, start at 2025-07-11 18:37:56.54838571 +0800 CST m=+5.478582696	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:39:56.554 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:37:56.544 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:27:56.544 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe440b7000005	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-d54fn, pid:40370, start at 2025-07-11 18:37:56.54838571 +0800 CST m=+5.478582696	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:39:56.554 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:37:56.544 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:27:56.544 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe440b8800015	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-d54fn, pid:40447, start at 2025-07-11 18:37:56.676569272 +0800 CST m=+5.518470475	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:39:56.683 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:37:56.640 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:27:56.640 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.6-7-gbf1951c09e
Edition:         Community
Git Commit Hash: bf1951c09e4f330799d93475f8440bc2dd0fc96d
Git Branch:      HEAD
UTC Build Time:  2025-07-09 06:34:30
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   47ebe57272ab76cc9206d2bed49712871cb479bb
Git Commit Branch: HEAD
UTC Build Time:    2025-07-09 06:39:30
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/multi_tables_ddl/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/multi_tables_ddl/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "memory-limit-ratio": MatchedArg { occurs: 1, indices: [22], vals: ["0.800000"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["bf1951c09e4f330799d93475f8440bc2dd0fc96d"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.6-7-gbf1951c09e"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/multi_tables_ddl/tiflash-proxy.toml"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/multi_tables_ddl/tiflash/log/proxy.log"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/multi_tables_ddl/tiflash/db/proxy"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
[Fri Jul 11 18:38:01 CST 2025] <<<<<< START cdc server in multi_tables_ddl case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.multi_tables_ddl.4181041812.out server --log-file /tmp/tidb_cdc_test/multi_tables_ddl/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/multi_tables_ddl/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 11 Jul 2025 10:38:04 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/34a5bcbd-a936-4f20-a88b-fe9f8c729ce0
	{"id":"34a5bcbd-a936-4f20-a88b-fe9f8c729ce0","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f910025fb9
	34a5bcbd-a936-4f20-a88b-fe9f8c729ce0

/tidb/cdc/default/default/upstream/7525771690376314810
	{"id":7525771690376314810,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/34a5bcbd-a936-4f20-a88b-fe9f8c729ce0
	{"id":"34a5bcbd-a936-4f20-a88b-fe9f8c729ce0","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f910025fb9
	34a5bcbd-a936-4f20-a88b-fe9f8c729ce0

/tidb/cdc/default/default/upstream/7525771690376314810
	{"id":7525771690376314810,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/34a5bcbd-a936-4f20-a88b-fe9f8c729ce0
	{"id":"34a5bcbd-a936-4f20-a88b-fe9f8c729ce0","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f910025fb9
	34a5bcbd-a936-4f20-a88b-fe9f8c729ce0

/tidb/cdc/default/default/upstream/7525771690376314810
	{"id":7525771690376314810,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
Create changefeed successfully!
ID: test-normal
Info: {"upstream_id":7525771690376314810,"namespace":"default","id":"test-normal","sink_uri":"kafka://127.0.0.1:9092/ticdc-multi-tables-ddl-test-normal-13396?protocol=open-protocol\u0026partition-num=4\u0026kafka-version=2.4.1\u0026max-message-bytes=10485760","create_time":"2025-07-11T18:38:04.827831579+08:00","start_ts":459336654925070337,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["multi_tables_ddl_test.t1","multi_tables_ddl_test.t2","multi_tables_ddl_test.t3","multi_tables_ddl_test.t4","multi_tables_ddl_test.t1_7","multi_tables_ddl_test.t2_7","multi_tables_ddl_test.finish_mark"]},"mounter":{"worker_num":16},"sink":{"protocol":"open-protocol","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":true,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"send-all-bootstrap-at-start":false,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.6-12-g60a79aa87","resolved_ts":459336654925070337,"checkpoint_ts":459336654925070337,"checkpoint_time":"2025-07-11 18:38:01.544"}
Create changefeed successfully!
ID: test-error-1
Info: {"upstream_id":7525771690376314810,"namespace":"default","id":"test-error-1","sink_uri":"kafka://127.0.0.1:9092/ticdc-multi-tables-ddl-test-error-1-11923?protocol=open-protocol\u0026partition-num=4\u0026kafka-version=2.4.1\u0026max-message-bytes=10485760","create_time":"2025-07-11T18:38:05.027017548+08:00","start_ts":459336654925070337,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["multi_tables_ddl_test.t5","multi_tables_ddl_test.t6","multi_tables_ddl_test.t7","multi_tables_ddl_test.t8"]},"mounter":{"worker_num":16},"sink":{"protocol":"open-protocol","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":true,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"send-all-bootstrap-at-start":false,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.6-12-g60a79aa87","resolved_ts":459336654925070337,"checkpoint_ts":459336654925070337,"checkpoint_time":"2025-07-11 18:38:01.544"}
Create changefeed successfully!
ID: test-error-2
Info: {"upstream_id":7525771690376314810,"namespace":"default","id":"test-error-2","sink_uri":"kafka://127.0.0.1:9092/ticdc-multi-tables-ddl-test-error-2-21769?protocol=open-protocol\u0026partition-num=4\u0026kafka-version=2.4.1\u0026max-message-bytes=10485760","create_time":"2025-07-11T18:38:05.163070652+08:00","start_ts":459336654925070337,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["multi_tables_ddl_test.t9","multi_tables_ddl_test.t10"]},"mounter":{"worker_num":16},"sink":{"protocol":"open-protocol","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":true,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"send-all-bootstrap-at-start":false,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.6-12-g60a79aa87","resolved_ts":459336654925070337,"checkpoint_ts":459336654925070337,"checkpoint_time":"2025-07-11 18:38:01.544"}
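The three changefeeds created above differ only in their filter rules, which is what the rest of the case exercises: test-normal covers t1 through t4, the rename targets t1_7/t2_7, and finish_mark; test-error-1 covers t5 through t8; test-error-2 covers only t9 and t10, so a later `rename table t11 to t9` has its old name outside the filter and is expected to fail that feed. Reconstructed from the JSON above, the test-error-2 config plausibly reduces to (file name is illustrative):

    cat > conf/error-2.toml <<'EOF'
    [filter]
    rules = ["multi_tables_ddl_test.t9", "multi_tables_ddl_test.t10"]
    EOF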
[Fri Jul 11 18:38:05 CST 2025] <<<<<< START kafka consumer in multi_tables_ddl case >>>>>>
[Fri Jul 11 18:38:05 CST 2025] <<<<<< START kafka consumer in multi_tables_ddl case >>>>>>
[Fri Jul 11 18:38:05 CST 2025] <<<<<< START kafka consumer in multi_tables_ddl case >>>>>>
table multi_tables_ddl_test.t55 does not exist at check 1, retrying later
table multi_tables_ddl_test.t55 does not exist at check 2, retrying later
table multi_tables_ddl_test.t55 does not exist at check 3, retrying later
table multi_tables_ddl_test.t55 does not exist at check 4, retrying later
table multi_tables_ddl_test.t55 exists
table multi_tables_ddl_test.t66 exists
table multi_tables_ddl_test.t7 exists
table multi_tables_ddl_test.t88 exists
table multi_tables_ddl_test.finish_mark does not exist at check 1, retrying later
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   221  100   221    0     0   2921      0 --:--:-- --:--:-- --:--:--  2946
+ synced_status='{"synced":true,"sink_checkpoint_ts":"2025-07-11 18:38:04.610","puller_resolved_ts":"2025-07-11 18:37:58.610","last_synced_ts":"2025-07-11 18:35:48.660","now_ts":"2025-07-11 18:38:06.000","info":"Data syncing is finished"}'
++ echo '{"synced":true,"sink_checkpoint_ts":"2025-07-11' '18:38:04.610","puller_resolved_ts":"2025-07-11' '18:37:58.610","last_synced_ts":"2025-07-11' '18:35:48.660","now_ts":"2025-07-11' '18:38:06.000","info":"Data' syncing is 'finished"}'
++ jq .synced
+ status=true
+ '[' true '!=' true ']'
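The synced check above boils down to one GET plus a jq assertion. A standalone equivalent, with the endpoint and changefeed id taken from the trace:

    synced=$(curl -s http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced | jq -r .synced)
    if [ "$synced" != "true" ]; then
        echo "changefeed test-1 is not synced yet" >&2
        exit 1
    fi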
+ kill_pd
++ ps aux
++ grep pd-server
++ grep /tmp/tidb_cdc_test/synced_status
+ info='jenkins    25007  7.1  0.0 13521052 139432 ?     Sl   18:35   0:11 pd-server --advertise-client-urls http://127.0.0.1:2379 --client-urls http://0.0.0.0:2379 --advertise-peer-urls http://127.0.0.1:2380 --peer-urls http://0.0.0.0:2380 --config /tmp/tidb_cdc_test/synced_status/pd-config.toml --log-file /tmp/tidb_cdc_test/synced_status/pd1.log --data-dir /tmp/tidb_cdc_test/synced_status/pd1 --name=pd1 --initial-cluster=pd1=http://127.0.0.1:2380
jenkins    25071  4.7  0.0 13651868 138208 ?     Sl   18:35   0:07 pd-server --advertise-client-urls http://127.0.0.1:2479 --client-urls http://0.0.0.0:2479 --advertise-peer-urls http://127.0.0.1:2480 --peer-urls http://0.0.0.0:2480 --config /tmp/tidb_cdc_test/synced_status/pd-config.toml --log-file /tmp/tidb_cdc_test/synced_status/down_pd.log --data-dir /tmp/tidb_cdc_test/synced_status/down_pd'
++ ps aux
++ grep pd-server
++ grep /tmp/tidb_cdc_test/synced_status
++ awk '{print $2}'
++ xargs kill -9
+ sleep 20
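kill_pd, as reconstructed from the trace, locates every pd-server whose command line references this test's working directory and sends SIGKILL; the etcd-client warnings below are the expected fallout of both PD endpoints disappearing. A sketch under that assumption:

    kill_pd() {
        # SIGKILL every pd-server scoped to this test's data dir
        ps aux | grep pd-server | grep /tmp/tidb_cdc_test/synced_status \
            | awk '{print $2}' | xargs kill -9
    }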
{"level":"warn","ts":1752230291.952657,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0038381c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":1752230291.9527245,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1752230291.9735804,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001fcd180/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1752230291.9736276,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1752230292.7013264,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002388700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1752230292.701386,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2025-07-11T18:38:16.481575+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f921c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:16.487868+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d4fa40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:16.649764+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001248000/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
table multi_tables_ddl_test.finish_mark does not exist at check 2, retrying later
table multi_tables_ddl_test.finish_mark exists
table existence check succeeded
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=test-normal
+ expected_state=normal
+ error_msg=null
+ tls_dir=
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c test-normal -s
+ info='{
  "upstream_id": 7525771690376314810,
  "namespace": "default",
  "id": "test-normal",
  "state": "normal",
  "checkpoint_tso": 459336656655220774,
  "checkpoint_time": "2025-07-11 18:38:08.144",
  "error": null
}'
+ echo '{
  "upstream_id": 7525771690376314810,
  "namespace": "default",
  "id": "test-normal",
  "state": "normal",
  "checkpoint_tso": 459336656655220774,
  "checkpoint_time": "2025-07-11 18:38:08.144",
  "error": null
}'
{
  "upstream_id": 7525771690376314810,
  "namespace": "default",
  "id": "test-normal",
  "state": "normal",
  "checkpoint_tso": 459336656655220774,
  "checkpoint_time": "2025-07-11 18:38:08.144",
  "error": null
}
++ echo '{' '"upstream_id":' 7525771690376314810, '"namespace":' '"default",' '"id":' '"test-normal",' '"state":' '"normal",' '"checkpoint_tso":' 459336656655220774, '"checkpoint_time":' '"2025-07-11' '18:38:08.144",' '"error":' null '}'
++ jq -r .state
+ state=normal
+ [[ ! normal == \n\o\r\m\a\l ]]
++ echo '{' '"upstream_id":' 7525771690376314810, '"namespace":' '"default",' '"id":' '"test-normal",' '"state":' '"normal",' '"checkpoint_tso":' 459336656655220774, '"checkpoint_time":' '"2025-07-11' '18:38:08.144",' '"error":' null '}'
++ jq -r .error.message
+ message=null
+ [[ ! null =~ null ]]
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=test-error-1
+ expected_state=normal
+ error_msg=null
+ tls_dir=
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c test-error-1 -s
+ info='{
  "upstream_id": 7525771690376314810,
  "namespace": "default",
  "id": "test-error-1",
  "state": "normal",
  "checkpoint_tso": 459336659827163141,
  "checkpoint_time": "2025-07-11 18:38:20.244",
  "error": null
}'
+ echo '{
  "upstream_id": 7525771690376314810,
  "namespace": "default",
  "id": "test-error-1",
  "state": "normal",
  "checkpoint_tso": 459336659827163141,
  "checkpoint_time": "2025-07-11 18:38:20.244",
  "error": null
}'
{
  "upstream_id": 7525771690376314810,
  "namespace": "default",
  "id": "test-error-1",
  "state": "normal",
  "checkpoint_tso": 459336659827163141,
  "checkpoint_time": "2025-07-11 18:38:20.244",
  "error": null
}
++ echo '{' '"upstream_id":' 7525771690376314810, '"namespace":' '"default",' '"id":' '"test-error-1",' '"state":' '"normal",' '"checkpoint_tso":' 459336659827163141, '"checkpoint_time":' '"2025-07-11' '18:38:20.244",' '"error":' null '}'
++ jq -r .state
+ state=normal
+ [[ ! normal == \n\o\r\m\a\l ]]
++ echo '{' '"upstream_id":' 7525771690376314810, '"namespace":' '"default",' '"id":' '"test-error-1",' '"state":' '"normal",' '"checkpoint_tso":' 459336659827163141, '"checkpoint_time":' '"2025-07-11' '18:38:20.244",' '"error":' null '}'
++ jq -r .error.message
+ message=null
+ [[ ! null =~ null ]]
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=test-error-2
+ expected_state=failed
+ error_msg=ErrSyncRenameTableFailed
+ tls_dir=
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c test-error-2 -s
+ info='{
  "upstream_id": 7525771690376314810,
  "namespace": "default",
  "id": "test-error-2",
  "state": "failed",
  "checkpoint_tso": 459336656175497252,
  "checkpoint_time": "2025-07-11 18:38:06.314",
  "error": {
    "time": "2025-07-11T18:38:08.274856564+08:00",
    "addr": "127.0.0.1:8300",
    "code": "CDC:ErrSyncRenameTableFailed",
    "message": "[CDC:ErrSyncRenameTableFailed]table'\''s old name is not in filter rule, and its new name in filter rule table id '\''128'\'', ddl query: [rename table t11 to t9], it'\''s an unexpected behavior, if you want to replicate this table, please add its old name to filter rule."
  }
}'
+ echo '{
  "upstream_id": 7525771690376314810,
  "namespace": "default",
  "id": "test-error-2",
  "state": "failed",
  "checkpoint_tso": 459336656175497252,
  "checkpoint_time": "2025-07-11 18:38:06.314",
  "error": {
    "time": "2025-07-11T18:38:08.274856564+08:00",
    "addr": "127.0.0.1:8300",
    "code": "CDC:ErrSyncRenameTableFailed",
    "message": "[CDC:ErrSyncRenameTableFailed]table'\''s old name is not in filter rule, and its new name in filter rule table id '\''128'\'', ddl query: [rename table t11 to t9], it'\''s an unexpected behavior, if you want to replicate this table, please add its old name to filter rule."
  }
}'
{
  "upstream_id": 7525771690376314810,
  "namespace": "default",
  "id": "test-error-2",
  "state": "failed",
  "checkpoint_tso": 459336656175497252,
  "checkpoint_time": "2025-07-11 18:38:06.314",
  "error": {
    "time": "2025-07-11T18:38:08.274856564+08:00",
    "addr": "127.0.0.1:8300",
    "code": "CDC:ErrSyncRenameTableFailed",
    "message": "[CDC:ErrSyncRenameTableFailed]table's old name is not in filter rule, and its new name in filter rule table id '128', ddl query: [rename table t11 to t9], it's an unexpected behavior, if you want to replicate this table, please add its old name to filter rule."
  }
}
++ jq -r .state
++ echo '{' '"upstream_id":' 7525771690376314810, '"namespace":' '"default",' '"id":' '"test-error-2",' '"state":' '"failed",' '"checkpoint_tso":' 459336656175497252, '"checkpoint_time":' '"2025-07-11' '18:38:06.314",' '"error":' '{' '"time":' '"2025-07-11T18:38:08.274856564+08:00",' '"addr":' '"127.0.0.1:8300",' '"code":' '"CDC:ErrSyncRenameTableFailed",' '"message":' '"[CDC:ErrSyncRenameTableFailed]table'\''s' old name is not in filter rule, and its new name in filter rule table id ''\''128'\'',' ddl query: '[rename' table t11 to 't9],' 'it'\''s' an unexpected behavior, if you want to replicate this table, please add its old name to filter 'rule."' '}' '}'
+ state=failed
+ [[ ! failed == \f\a\i\l\e\d ]]
++ jq -r .error.message
++ echo '{' '"upstream_id":' 7525771690376314810, '"namespace":' '"default",' '"id":' '"test-error-2",' '"state":' '"failed",' '"checkpoint_tso":' 459336656175497252, '"checkpoint_time":' '"2025-07-11' '18:38:06.314",' '"error":' '{' '"time":' '"2025-07-11T18:38:08.274856564+08:00",' '"addr":' '"127.0.0.1:8300",' '"code":' '"CDC:ErrSyncRenameTableFailed",' '"message":' '"[CDC:ErrSyncRenameTableFailed]table'\''s' old name is not in filter rule, and its new name in filter rule table id ''\''128'\'',' ddl query: '[rename' table t11 to 't9],' 'it'\''s' an unexpected behavior, if you want to replicate this table, please add its old name to filter 'rule."' '}' '}'
+ message='[CDC:ErrSyncRenameTableFailed]table'\''s old name is not in filter rule, and its new name in filter rule table id '\''128'\'', ddl query: [rename table t11 to t9], it'\''s an unexpected behavior, if you want to replicate this table, please add its old name to filter rule.'
+ [[ ! [CDC:ErrSyncRenameTableFailed]table's old name is not in filter rule, and its new name in filter rule table id '128', ddl query: [rename table t11 to t9], it's an unexpected behavior, if you want to replicate this table, please add its old name to filter rule. =~ ErrSyncRenameTableFailed ]]
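Each verification block above repeats one pattern: query the changefeed, then assert the expected state and error message with jq. Condensed into a single helper (the name and argument order are illustrative; the real helper lives in the integration-test utils):

    check_changefeed_state() {
        local pd=$1 id=$2 want_state=$3 want_msg=$4
        local info state message
        info=$(cdc cli changefeed query --pd="$pd" -c "$id" -s)
        state=$(echo "$info" | jq -r .state)
        [ "$state" = "$want_state" ] || { echo "state $state != $want_state" >&2; exit 1; }
        message=$(echo "$info" | jq -r .error.message)
        [[ "$message" =~ $want_msg ]] || { echo "unexpected error: $message" >&2; exit 1; }
    }
    # e.g. check_changefeed_state http://127.0.0.1:2379 test-error-2 failed ErrSyncRenameTableFailed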
check diff successfully
waiting for process cdc.test to exit (1st attempt)...
waiting for process cdc.test to exit (2nd attempt)...
cdc.test: no process found
waiting for process cdc.test to exit (3rd attempt)...
process cdc.test has already exited
[Fri Jul 11 18:38:22 CST 2025] <<<<<< run test case multi_tables_ddl success! >>>>>>
Exiting on signal: INTERRUPT
{"level":"warn","ts":"2025-07-11T18:38:22.48253+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f921c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:22.489192+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d4fa40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:22.651603+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001248000/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0{"level":"warn","ts":"2025-07-11T18:38:28.483569+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f921c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:28.490936+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d4fa40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:28.6533+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001248000/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0{"level":"warn","ts":"2025-07-11T18:38:34.48459+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f921c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:34.492235+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d4fa40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:34.654762+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001248000/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
<<< All tests ran successfully >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-release-7.5-pull_cdc_integration_kafka_test-1134/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }

{"level":"warn","ts":"2025-07-11T18:38:36.470612+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f921c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":"2025-07-11T18:38:36.470661+0800","logger":"etcd-client","caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2025-07-11T18:38:36.477069+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d4fa40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":"2025-07-11T18:38:36.477116+0800","logger":"etcd-client","caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2025-07-11T18:38:36.590649+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001248000/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":"2025-07-11T18:38:36.590701+0800","logger":"etcd-client","caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}

{"level":"warn","ts":"2025-07-11T18:38:40.48554+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f921c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:40.493321+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d4fa40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:40.656151+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001248000/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

{"level":"warn","ts":"2025-07-11T18:38:46.487355+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f921c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:46.494749+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d4fa40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:46.657615+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001248000/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"warn","ts":1752230326.9543478,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0038381c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":1752230326.954415,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1752230326.9749687,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001fcd180/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1752230326.9750178,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}

{"level":"warn","ts":1752230327.7026436,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002388700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1752230327.7026908,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}

{"level":"warn","ts":"2025-07-11T18:38:52.48874+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f921c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:52.496127+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d4fa40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2025-07-11T18:38:52.65876+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001248000/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

100   135  100   135    0     0      4      0  0:00:33  0:00:30  0:00:03    33
+ synced_status='{
    "error_msg": "[CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded",
    "error_code": "CDC:ErrPDEtcdAPIError"
}'
++ jq -r .error_code
++ echo '{' '"error_msg":' '"[CDC:ErrPDEtcdAPIError]etcd' api call error: context deadline 'exceeded",' '"error_code":' '"CDC:ErrPDEtcdAPIError"' '}'
+ error_code=CDC:ErrPDEtcdAPIError
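The trace above amounts to a single check: query the synced-status API while PD is unreachable and assert that the call fails with ErrPDEtcdAPIError. A minimal sketch of that check, assuming the CDC server listens on 127.0.0.1:8300, the changefeed id is test-1, and jq is installed:
synced_status=$(curl -s -X GET "http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced")
error_code=$(echo "$synced_status" | jq -r .error_code)   # extract the CDC error code field
if [ "$error_code" != "CDC:ErrPDEtcdAPIError" ]; then
    echo "expected CDC:ErrPDEtcdAPIError, got: $error_code"
    exit 1
fi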
+ cleanup_process cdc.test
waiting for process cdc.test to exit (attempt 1)...
waiting for process cdc.test to exit (attempt 2)...
cdc.test: no process found
waiting for process cdc.test to exit (attempt 3)...
process cdc.test has already exited
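cleanup_process repeatedly probes for the named process and only returns once it is gone. A rough sketch of that loop, assuming pgrep/pkill are available (the helper's real implementation may differ):
cleanup_process() {
    local name=$1
    local i=1
    # send SIGKILL, then poll until no matching process remains
    pkill -9 -f "$name" 2>/dev/null
    while pgrep -f "$name" >/dev/null 2>&1; do
        echo "waiting for process $name to exit (attempt $i)..."
        i=$((i + 1))
        sleep 1
    done
    echo "process $name has already exited"
}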
+ stop_tidb_cluster
+ run_case_with_unavailable_tikv conf/changefeed-redo.toml
+ rm -rf /tmp/tidb_cdc_test/synced_status
+ mkdir -p /tmp/tidb_cdc_test/synced_status
+ start_tidb_cluster --workdir /tmp/tidb_cdc_test/synced_status
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
Attempt 1 to start the tidb cluster...
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
start tidb cluster in /tmp/tidb_cdc_test/synced_status
Starting Upstream PD...
Release Version: v7.5.6-4-gda85ca769
Edition: Community
Git Commit Hash: da85ca769989e82fd6b3e83c8e80d513b3c59046
Git Branch: release-7.5
UTC Build Time:  2025-07-09 04:20:09
Starting Downstream PD...
Release Version: v7.5.6-4-gda85ca769
Edition: Community
Git Commit Hash: da85ca769989e82fd6b3e83c8e80d513b3c59046
Git Branch: release-7.5
UTC Build Time:  2025-07-09 04:20:09
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.7
Edition:           Community
Git Commit Hash:   5ad8129a6cbbd9cc3b258cab939a316caa4bb5eb
Git Commit Branch: release-7.5
UTC Build Time:    2025-07-11 07:24:05
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.7
Edition:           Community
Git Commit Hash:   5ad8129a6cbbd9cc3b258cab939a316caa4bb5eb
Git Commit Branch: release-7.5
UTC Build Time:    2025-07-11 07:24:05
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v7.5.6-42-gf72854f735
Edition: Community
Git Commit Hash: f72854f73567c8a770b1840d85be7fccba37cc77
Git Branch: release-7.5
UTC Build Time: 2025-07-11 05:28:49
GoVersion: go1.21.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.6-42-gf72854f735
Edition: Community
Git Commit Hash: f72854f73567c8a770b1840d85be7fccba37cc77
Git Branch: release-7.5
UTC Build Time: 2025-07-11 05:28:49
GoVersion: go1.21.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe4459ef40013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-rphf4, pid:28246, start at 2025-07-11 18:39:16.959533903 +0800 CST m=+5.548992542	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:41:16.966 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:39:16.925 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:29:16.925 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe4459ef40013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-rphf4, pid:28246, start at 2025-07-11 18:39:16.959533903 +0800 CST m=+5.548992542	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:41:16.966 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:39:16.925 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:29:16.925 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe4459fa00014	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-rphf4, pid:28329, start at 2025-07-11 18:39:17.002909251 +0800 CST m=+5.455601293	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:41:17.009 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:39:16.968 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:29:16.968 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.6-7-gbf1951c09e
Edition:         Community
Git Commit Hash: bf1951c09e4f330799d93475f8440bc2dd0fc96d
Git Branch:      HEAD
UTC Build Time:  2025-07-09 06:34:30
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   47ebe57272ab76cc9206d2bed49712871cb479bb
Git Commit Branch: HEAD
UTC Build Time:    2025-07-09 06:39:30
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status/tiflash/log/error.log
arg matches is ArgMatches { args: {"advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/log/proxy.log"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["bf1951c09e4f330799d93475f8440bc2dd0fc96d"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.6-7-gbf1951c09e"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "memory-limit-ratio": MatchedArg { occurs: 1, indices: [22], vals: ["0.800000"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/db/proxy"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash-proxy.toml"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ cd /tmp/tidb_cdc_test/synced_status
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.29569.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='459336676049158145
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 459336676049158145 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ start_ts=459336676049158145
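The start-ts used below is simply the first field of the cdc cli tso query output; the PASS/coverage lines appended by the instrumented test binary are stripped with awk. A condensed sketch, assuming a cdc binary on PATH and PD at 127.0.0.1:2379:
start_ts=$(cdc cli tso query --pd=http://127.0.0.1:2379 | awk 'NR==1 {print $1}')
echo "using start_ts=$start_ts"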
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status --binary cdc.test
[Fri Jul 11 18:39:23 CST 2025] <<<<<< START cdc server in synced_status case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ GO_FAILPOINTS=
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.2960229604.out server --log-file /tmp/tidb_cdc_test/synced_status/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 11 Jul 2025 10:39:26 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/31885b63-8eef-46ca-a8bb-637d458fc6f6
	{"id":"31885b63-8eef-46ca-a8bb-637d458fc6f6","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f911442cbf
	31885b63-8eef-46ca-a8bb-637d458fc6f6

/tidb/cdc/default/default/upstream/7525772040962160021
	{"id":7525772040962160021,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/31885b63-8eef-46ca-a8bb-637d458fc6f6
	{"id":"31885b63-8eef-46ca-a8bb-637d458fc6f6","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f911442cbf
	31885b63-8eef-46ca-a8bb-637d458fc6f6

/tidb/cdc/default/default/upstream/7525772040962160021
	{"id":7525772040962160021,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/31885b63-8eef-46ca-a8bb-637d458fc6f6
	{"id":"31885b63-8eef-46ca-a8bb-637d458fc6f6","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f911442cbf
	31885b63-8eef-46ca-a8bb-637d458fc6f6

/tidb/cdc/default/default/upstream/7525772040962160021
	{"id":7525772040962160021,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
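The loop above polls /debug/info until the response contains 'etcd info', which signals that the capture has registered its metadata in etcd. A condensed sketch of that readiness check, assuming the same 50-attempt, 3-second schedule:
for ((i = 0; i <= 50; i++)); do
    res=$(curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info 2>&1)
    # the capture is ready once its etcd metadata shows up in the debug dump
    if echo "$res" | grep -q 'etcd info'; then
        break
    fi
    if [ "$i" -eq 50 ]; then
        echo "cdc server failed to become ready"
        exit 1
    fi
    sleep 3
done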
+ config_path=conf/changefeed-redo.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=459336676049158145 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.29662.out cli changefeed create --start-ts=459336676049158145 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7525772040962160021,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2025-07-11T18:39:27.029669521+08:00","start_ts":459336676049158145,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"send-all-bootstrap-at-start":false,"open":{"output_old_value":true}},"consistent":{"level":"eventual","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"storage":"file:///tmp/tidb_cdc_test/synced_status/redo","use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v7.5.6-12-g60a79aa87","resolved_ts":459336676049158145,"checkpoint_ts":459336676049158145,"checkpoint_time":"2025-07-11 18:39:22.126"}
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
+ set +x
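Reduced to its essentials, the changefeed creation above is the following command (a minimal sketch, assuming $CUR points at the synced_status test directory and $start_ts was captured earlier):
cdc cli changefeed create \
    --start-ts="$start_ts" \
    --sink-uri="mysql://root@127.0.0.1:3306/?max-txn-row=1" \
    --changefeed-id="test-1" \
    --config="$CUR/conf/changefeed-redo.toml"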
+ run_sql 'USE TEST;Create table t1(a int primary key, b int);insert into t1 values(1,2);insert into t1 values(2,3);'
+ check_table_exists test.t1 127.0.0.1 3306
table test.t1 does not exist at check 1, retrying later
table test.t1 exists
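check_table_exists simply polls the downstream until the table is queryable. A rough equivalent, assuming a passwordless root account on the downstream TiDB (the helper's real implementation may differ):
while ! mysql -h 127.0.0.1 -P 3306 -u root -e 'DESC test.t1' >/dev/null 2>&1; do
    echo 'table test.t1 does not exist yet, retrying later'
    sleep 2
done
echo 'table test.t1 exists'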
+ sleep 5
+ kill_tikv
++ ps aux
++ grep tikv-server
++ grep /tmp/tidb_cdc_test/synced_status
+ info='jenkins    27679 21.9  0.4 3688796 1743528 ?     Sl   18:39   0:05 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20160 --status-addr 127.0.0.1:20181 --log-file /tmp/tidb_cdc_test/synced_status/tikv1.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv1
jenkins    27680 16.3  0.4 3661656 1674248 ?     Sl   18:39   0:04 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20161 --status-addr 127.0.0.1:20182 --log-file /tmp/tidb_cdc_test/synced_status/tikv2.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv2
jenkins    27681 16.0  0.4 3651420 1676480 ?     Sl   18:39   0:04 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20162 --status-addr 127.0.0.1:20183 --log-file /tmp/tidb_cdc_test/synced_status/tikv3.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv3
jenkins    27683 21.1  0.4 3675992 1730976 ?     Sl   18:39   0:05 tikv-server --pd 127.0.0.1:2479 -A 127.0.0.1:21160 --status-addr 127.0.0.1:21180 --log-file /tmp/tidb_cdc_test/synced_status/tikv_down.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv_down'
++ ps aux
++ grep tikv-server
++ grep /tmp/tidb_cdc_test/synced_status
++ awk '{print $2}'
++ xargs kill -9
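kill_tikv first captures the matching processes for the log, then kills them; the second pipeline is the one that does the work. Its shape, as a standalone sketch scoped to this test's workdir:
# select only tikv-server processes belonging to this test's workdir, then SIGKILL them
ps aux | grep tikv-server | grep /tmp/tidb_cdc_test/synced_status | awk '{print $2}' | xargs kill -9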
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

100   243  100   243    0     0   5996      0 --:--:-- --:--:-- --:--:--  6075
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2025-07-11 18:39:34.525","puller_resolved_ts":"1970-01-01 08:00:00.000","last_synced_ts":"2025-07-11 18:39:28.475","now_ts":"2025-07-11 18:39:35.000","info":"The data syncing is not finished, please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2025-07-11' '18:39:34.525","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2025-07-11' '18:39:28.475","now_ts":"2025-07-11' '18:39:35.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2025-07-11' '18:39:34.525","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2025-07-11' '18:39:28.475","now_ts":"2025-07-11' '18:39:35.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq -r .info
+ info='The data syncing is not finished, please wait'
+ target_message='The data syncing is not finished, please wait'
+ '[' 'The data syncing is not finished, please wait' '!=' 'The data syncing is not finished, please wait' ']'
+ sleep 130
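The 130-second sleep is presumably chosen to outlast the changefeed's synced_check_interval of 120 seconds (visible in the config echoed at creation time), so the next query reflects a full check window. Expressed explicitly:
synced_check_interval=120   # from the changefeed config above
sleep $((synced_check_interval + 10))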
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

100   723  100   723    0     0   7919      0 --:--:-- --:--:-- --:--:--  7945
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2025-07-11 18:39:34.525","puller_resolved_ts":"2025-07-11 18:39:34.525","last_synced_ts":"2025-07-11 18:39:28.475","now_ts":"2025-07-11 18:41:45.000","info":"Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' \u003e '\''Resolved-Ts'\'' \u003e '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2025-07-11' '18:39:34.525","puller_resolved_ts":"2025-07-11' '18:39:34.525","last_synced_ts":"2025-07-11' '18:39:28.475","now_ts":"2025-07-11' '18:41:45.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ jq -r .info
++ echo '{"synced":false,"sink_checkpoint_ts":"2025-07-11' '18:39:34.525","puller_resolved_ts":"2025-07-11' '18:39:34.525","last_synced_ts":"2025-07-11' '18:39:28.475","now_ts":"2025-07-11' '18:41:45.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
+ info='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ target_message='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ '[' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' '!=' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' ']'
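The comparison above is easier to read as a small helper: extract .info with jq and compare it against the expected guidance string. A minimal sketch:
info=$(echo "$synced_status" | jq -r .info)
if [ "$info" != "$target_message" ]; then
    echo "unexpected info message: $info"
    exit 1
fi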
+ cleanup_process cdc.test
waiting for process cdc.test to exit (attempt 1)...
waiting for process cdc.test to exit (attempt 2)...
waiting for process cdc.test to exit (attempt 3)...
cdc.test: no process found
waiting for process cdc.test to exit (attempt 4)...
process cdc.test has already exited
+ stop_tidb_cluster
+ run_case_with_unavailable_tidb conf/changefeed-redo.toml
+ rm -rf /tmp/tidb_cdc_test/synced_status
+ mkdir -p /tmp/tidb_cdc_test/synced_status
+ start_tidb_cluster --workdir /tmp/tidb_cdc_test/synced_status
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
Attempt 1 to start the tidb cluster...
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
start tidb cluster in /tmp/tidb_cdc_test/synced_status
Starting Upstream PD...
Release Version: v7.5.6-4-gda85ca769
Edition: Community
Git Commit Hash: da85ca769989e82fd6b3e83c8e80d513b3c59046
Git Branch: release-7.5
UTC Build Time:  2025-07-09 04:20:09
Starting Downstream PD...
Release Version: v7.5.6-4-gda85ca769
Edition: Community
Git Commit Hash: da85ca769989e82fd6b3e83c8e80d513b3c59046
Git Branch: release-7.5
UTC Build Time:  2025-07-09 04:20:09
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.7
Edition:           Community
Git Commit Hash:   5ad8129a6cbbd9cc3b258cab939a316caa4bb5eb
Git Commit Branch: release-7.5
UTC Build Time:    2025-07-11 07:24:05
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.7
Edition:           Community
Git Commit Hash:   5ad8129a6cbbd9cc3b258cab939a316caa4bb5eb
Git Commit Branch: release-7.5
UTC Build Time:    2025-07-11 07:24:05
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v7.5.6-42-gf72854f735
Edition: Community
Git Commit Hash: f72854f73567c8a770b1840d85be7fccba37cc77
Git Branch: release-7.5
UTC Build Time: 2025-07-11 05:28:49
GoVersion: go1.21.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.6-42-gf72854f735
Edition: Community
Git Commit Hash: f72854f73567c8a770b1840d85be7fccba37cc77
Git Branch: release-7.5
UTC Build Time: 2025-07-11 05:28:49
GoVersion: go1.21.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe45027f00012	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-rphf4, pid:30684, start at 2025-07-11 18:42:09.561433914 +0800 CST m=+5.516237391	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:44:09.571 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:42:09.532 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:32:09.532 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe45027f00012	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-rphf4, pid:30684, start at 2025-07-11 18:42:09.561433914 +0800 CST m=+5.516237391	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:44:09.571 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:42:09.532 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:32:09.532 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe45029680015	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-rphf4, pid:30761, start at 2025-07-11 18:42:09.661242956 +0800 CST m=+5.513434076	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:44:09.667 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:42:09.626 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:32:09.626 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.6-7-gbf1951c09e
Edition:         Community
Git Commit Hash: bf1951c09e4f330799d93475f8440bc2dd0fc96d
Git Branch:      HEAD
UTC Build Time:  2025-07-09 06:34:30
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   47ebe57272ab76cc9206d2bed49712871cb479bb
Git Commit Branch: HEAD
UTC Build Time:    2025-07-09 06:39:30
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status/tiflash/log/error.log
arg matches is ArgMatches { args: {"log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/log/proxy.log"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "memory-limit-ratio": MatchedArg { occurs: 1, indices: [22], vals: ["0.800000"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.6-7-gbf1951c09e"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash-proxy.toml"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["bf1951c09e4f330799d93475f8440bc2dd0fc96d"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/db/proxy"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ cd /tmp/tidb_cdc_test/synced_status
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.32064.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='459336721296785409
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 459336721296785409 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ start_ts=459336721296785409
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status --binary cdc.test
[Fri Jul 11 18:42:16 CST 2025] <<<<<< START cdc server in synced_status case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.3209232094.out server --log-file /tmp/tidb_cdc_test/synced_status/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 11 Jul 2025 10:42:19 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/86beeae6-b663-4e1b-8a03-0b847cf18859
	{"id":"86beeae6-b663-4e1b-8a03-0b847cf18859","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f913de9dc0
	86beeae6-b663-4e1b-8a03-0b847cf18859

/tidb/cdc/default/default/upstream/7525772778534464557
	{"id":7525772778534464557,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/86beeae6-b663-4e1b-8a03-0b847cf18859
	{"id":"86beeae6-b663-4e1b-8a03-0b847cf18859","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f913de9dc0
	86beeae6-b663-4e1b-8a03-0b847cf18859

/tidb/cdc/default/default/upstream/7525772778534464557
	{"id":7525772778534464557,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/86beeae6-b663-4e1b-8a03-0b847cf18859
	{"id":"86beeae6-b663-4e1b-8a03-0b847cf18859","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f913de9dc0
	86beeae6-b663-4e1b-8a03-0b847cf18859

/tidb/cdc/default/default/upstream/7525772778534464557
	{"id":7525772778534464557,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
+ config_path=conf/changefeed-redo.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=459336721296785409 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.32141.out cli changefeed create --start-ts=459336721296785409 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7525772778534464557,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2025-07-11T18:42:19.633551904+08:00","start_ts":459336721296785409,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"send-all-bootstrap-at-start":false,"open":{"output_old_value":true}},"consistent":{"level":"eventual","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"storage":"file:///tmp/tidb_cdc_test/synced_status/redo","use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v7.5.6-12-g60a79aa87","resolved_ts":459336721296785409,"checkpoint_ts":459336721296785409,"checkpoint_time":"2025-07-11 18:42:14.732"}
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
+ set +x
+ run_sql 'USE TEST;Create table t1(a int primary key, b int);insert into t1 values(1,2);insert into t1 values(2,3);'
+ check_table_exists test.t1 127.0.0.1 3306
table test.t1 exists
+ sleep 5
+ kill_tidb
++ ps aux
++ grep tidb-server
++ grep /tmp/tidb_cdc_test/synced_status
+ info='jenkins    30680  3.4  0.0 2747668 160996 ?      Sl   18:42   0:00 tidb-server -P 4000 -config /tmp/tidb_cdc_test/synced_status/tidb-config-1752230524015797304.toml --store tikv --path 127.0.0.1:2379 --status=10080 --log-file /tmp/tidb_cdc_test/synced_status/tidb.log
jenkins    30684 10.9  0.0 2445216 202208 ?      Sl   18:42   0:02 tidb-server -P 4001 -config /tmp/tidb_cdc_test/synced_status/tidb-config-1752230524038458365.toml --store tikv --path 127.0.0.1:2379 --status=10081 --log-file /tmp/tidb_cdc_test/synced_status/tidb_other.log
jenkins    30761 11.2  0.0 3019704 212340 ?      Sl   18:42   0:02 tidb-server -P 3306 -config /tmp/tidb_cdc_test/synced_status/tidb-config-1752230524141301624.toml --store tikv --path 127.0.0.1:2479 --status=20080 --log-file /tmp/tidb_cdc_test/synced_status/tidb_down.log'
++ ps aux
++ grep tidb-server
++ grep /tmp/tidb_cdc_test/synced_status
++ awk '{print $2}'
++ xargs kill -9
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

100   243  100   243    0     0   4262      0 --:--:-- --:--:-- --:--:--  4339
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2025-07-11 18:42:25.132","puller_resolved_ts":"1970-01-01 08:00:00.000","last_synced_ts":"2025-07-11 18:42:21.533","now_ts":"2025-07-11 18:42:26.000","info":"The data syncing is not finished, please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2025-07-11' '18:42:25.132","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2025-07-11' '18:42:21.533","now_ts":"2025-07-11' '18:42:26.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2025-07-11' '18:42:25.132","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2025-07-11' '18:42:21.533","now_ts":"2025-07-11' '18:42:26.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq -r .info
+ info='The data syncing is not finished, please wait'
+ target_message='The data syncing is not finished, please wait'
+ '[' 'The data syncing is not finished, please wait' '!=' 'The data syncing is not finished, please wait' ']'
+ sleep 130
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

100   221  100   221    0     0   2700      0 --:--:-- --:--:-- --:--:--  2728
+ synced_status='{"synced":true,"sink_checkpoint_ts":"2025-07-11 18:44:35.232","puller_resolved_ts":"2025-07-11 18:44:31.232","last_synced_ts":"2025-07-11 18:42:21.533","now_ts":"2025-07-11 18:44:36.000","info":"Data syncing is finished"}'
++ echo '{"synced":true,"sink_checkpoint_ts":"2025-07-11' '18:44:35.232","puller_resolved_ts":"2025-07-11' '18:44:31.232","last_synced_ts":"2025-07-11' '18:42:21.533","now_ts":"2025-07-11' '18:44:36.000","info":"Data' syncing is 'finished"}'
++ jq .synced
+ status=true
+ '[' true '!=' true ']'
++ echo '{"synced":true,"sink_checkpoint_ts":"2025-07-11' '18:44:35.232","puller_resolved_ts":"2025-07-11' '18:44:31.232","last_synced_ts":"2025-07-11' '18:42:21.533","now_ts":"2025-07-11' '18:44:36.000","info":"Data' syncing is 'finished"}'
++ jq -r .info
+ info='Data syncing is finished'
+ target_message='Data syncing is finished'
+ '[' 'Data syncing is finished' '!=' 'Data syncing is finished' ']'
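For the happy path the assertion is symmetric: once TiDB is down and the redo-enabled changefeed drains its buffered events, .synced must flip to true. As a sketch:
status=$(echo "$synced_status" | jq .synced)
if [ "$status" != "true" ]; then
    echo "changefeed is not synced yet: $synced_status"
    exit 1
fi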
+ cleanup_process cdc.test
waiting for process cdc.test to exit (attempt 1)...
waiting for process cdc.test to exit (attempt 2)...
cdc.test: no process found
waiting for process cdc.test to exit (attempt 3)...
process cdc.test has already exited
+ stop_tidb_cluster
+ run_case_with_failpoint conf/changefeed-redo.toml
+ rm -rf /tmp/tidb_cdc_test/synced_status
+ mkdir -p /tmp/tidb_cdc_test/synced_status
+ start_tidb_cluster --workdir /tmp/tidb_cdc_test/synced_status
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
Attempt 1 to start the tidb cluster...
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
start tidb cluster in /tmp/tidb_cdc_test/synced_status
Starting Upstream PD...
Release Version: v7.5.6-4-gda85ca769
Edition: Community
Git Commit Hash: da85ca769989e82fd6b3e83c8e80d513b3c59046
Git Branch: release-7.5
UTC Build Time:  2025-07-09 04:20:09
Starting Downstream PD...
Release Version: v7.5.6-4-gda85ca769
Edition: Community
Git Commit Hash: da85ca769989e82fd6b3e83c8e80d513b3c59046
Git Branch: release-7.5
UTC Build Time:  2025-07-09 04:20:09
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.7
Edition:           Community
Git Commit Hash:   5ad8129a6cbbd9cc3b258cab939a316caa4bb5eb
Git Commit Branch: release-7.5
UTC Build Time:    2025-07-11 07:24:05
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.7
Edition:           Community
Git Commit Hash:   5ad8129a6cbbd9cc3b258cab939a316caa4bb5eb
Git Commit Branch: release-7.5
UTC Build Time:    2025-07-11 07:24:05
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v7.5.6-42-gf72854f735
Edition: Community
Git Commit Hash: f72854f73567c8a770b1840d85be7fccba37cc77
Git Branch: release-7.5
UTC Build Time: 2025-07-11 05:28:49
GoVersion: go1.21.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.6-42-gf72854f735
Edition: Community
Git Commit Hash: f72854f73567c8a770b1840d85be7fccba37cc77
Git Branch: release-7.5
UTC Build Time: 2025-07-11 05:28:49
GoVersion: go1.21.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe45a7aa00015	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-rphf4, pid:33225, start at 2025-07-11 18:44:58.681280298 +0800 CST m=+5.512205641	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:46:58.687 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:44:58.664 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:34:58.664 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	181	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	65fe45a7b480003	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:p-tiflow-release-7-5-pull-cdc-integration-kafka-test-1134-rphf4, pid:33314, start at 2025-07-11 18:44:58.709133523 +0800 CST m=+5.444492863	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250711-18:46:58.715 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250711-18:44:58.706 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250711-18:34:58.706 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
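The two verification steps above boil down to polling each TiDB until the bootstrap metadata in mysql.tidb is readable; the ERROR 2003 lines are the expected failures while a server is still coming up. A minimal sketch of such a probe (port 4000 is an assumption based on TiDB defaults; the harness's actual helper is not shown in this log):

# Retry until mysql.tidb answers, then print the GC/bootstrap variables
# exactly as they appear in the tables above.
for i in $(seq 1 60); do
  if mysql -h 127.0.0.1 -P 4000 -u root -e \
      'SELECT VARIABLE_NAME, VARIABLE_VALUE, COMMENT FROM mysql.tidb;' 2>/dev/null; then
    break
  fi
  sleep 1
done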
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.6-7-gbf1951c09e
Edition:         Community
Git Commit Hash: bf1951c09e4f330799d93475f8440bc2dd0fc96d
Git Branch:      HEAD
UTC Build Time:  2025-07-09 06:34:30
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   47ebe57272ab76cc9206d2bed49712871cb479bb
Git Commit Branch: HEAD
UTC Build Time:    2025-07-09 06:39:30
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.6-7-gbf1951c09e"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/log/proxy.log"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "memory-limit-ratio": MatchedArg { occurs: 1, indices: [22], vals: ["0.800000"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["bf1951c09e4f330799d93475f8440bc2dd0fc96d"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/db/proxy"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash-proxy.toml"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ cd /tmp/tidb_cdc_test/synced_status
+ export 'GO_FAILPOINTS=github.com/pingcap/tiflow/cdc/owner/ChangefeedOwnerNotUpdateCheckpoint=return(true)'
+ GO_FAILPOINTS='github.com/pingcap/tiflow/cdc/owner/ChangefeedOwnerNotUpdateCheckpoint=return(true)'
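GO_FAILPOINTS follows the pingcap/failpoint convention: '<package path>/<failpoint name>=<action>' pairs (joined with ';' when there are several), read by the instrumented binary at startup. Here return(true) forces the owner to skip checkpoint updates, which is what keeps sink_checkpoint_ts stuck in the check further down. The enable/disable pattern, sketched:

# Enable the failpoint for the server started next; clear it afterwards so
# later processes run unaffected (as this case does before cleanup).
export GO_FAILPOINTS='github.com/pingcap/tiflow/cdc/owner/ChangefeedOwnerNotUpdateCheckpoint=return(true)'
# ... start the cdc server here; it inherits the variable ...
export GO_FAILPOINTS=''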
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.34532.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='459336765620879361
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ awk -F ' ' '{print $1}'
+ echo 459336765620879361 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ set +x
+ start_ts=459336765620879361
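The awk step above exists because the instrumented cdc.test binary appends PASS and coverage lines to the cli output; only the first field of the first line is the TSO. A standalone equivalent (the coverprofile path is arbitrary, since only stdout is wanted):

# Query a TSO from PD and strip the coverage footer.
tso_output=$(cdc.test -test.coverprofile=/tmp/tso.cov.out cli tso query --pd=http://127.0.0.1:2379)
start_ts=$(echo "$tso_output" | head -n1 | awk '{print $1}')
echo "using start_ts=$start_ts"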
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status --binary cdc.test
[Fri Jul 11 18:45:05 CST 2025] <<<<<< START cdc server in synced_status case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ GO_FAILPOINTS='github.com/pingcap/tiflow/cdc/owner/ChangefeedOwnerNotUpdateCheckpoint=return(true)'
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.3456034562.out server --log-file /tmp/tidb_cdc_test/synced_status/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 11 Jul 2025 10:45:08 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/f3051995-27a6-4ec8-b9e7-d1db89b2d89a
	{"id":"f3051995-27a6-4ec8-b9e7-d1db89b2d89a","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f9167730be
	f3051995-27a6-4ec8-b9e7-d1db89b2d89a

/tidb/cdc/default/default/upstream/7525773511742067486
	{"id":7525773511742067486,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/f3051995-27a6-4ec8-b9e7-d1db89b2d89a
	{"id":"f3051995-27a6-4ec8-b9e7-d1db89b2d89a","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f9167730be
	f3051995-27a6-4ec8-b9e7-d1db89b2d89a

/tidb/cdc/default/default/upstream/7525773511742067486
	{"id":7525773511742067486,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/f3051995-27a6-4ec8-b9e7-d1db89b2d89a
	{"id":"f3051995-27a6-4ec8-b9e7-d1db89b2d89a","address":"127.0.0.1:8300","version":"v7.5.6-12-g60a79aa87"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/223197f9167730be
	f3051995-27a6-4ec8-b9e7-d1db89b2d89a

/tidb/cdc/default/default/upstream/7525773511742067486
	{"id":7525773511742067486,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
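The readiness loop traced above (attempt 0 refused, attempt 1 answered with the etcd metadata) condenses to:

# Poll the CDC debug endpoint until 'etcd info' appears, up to 50 tries
# 3 seconds apart; 'failed to get info:' would mean the server is unhealthy.
for (( i = 0; i <= 50; i++ )); do
  res=$(curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info 2>&1)
  echo "$res" | grep -q 'failed to get info:' && { echo 'cdc server unhealthy'; exit 1; }
  echo "$res" | grep -q 'etcd info' && break
  [ "$i" -eq 50 ] && { echo 'cdc server failed to start in time'; exit 1; }
  sleep 3
done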
+ config_path=conf/changefeed-redo.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=459336765620879361 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.34606.out cli changefeed create --start-ts=459336765620879361 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/synced_status/conf/changefeed-redo.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7525773511742067486,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2025-07-11T18:45:08.744170731+08:00","start_ts":459336765620879361,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"send-all-bootstrap-at-start":false,"open":{"output_old_value":true}},"consistent":{"level":"eventual","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"storage":"file:///tmp/tidb_cdc_test/synced_status/redo","use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v7.5.6-12-g60a79aa87","resolved_ts":459336765620879361,"checkpoint_ts":459336765620879361,"checkpoint_time":"2025-07-11 18:45:03.815"}
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
+ set +x
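conf/changefeed-redo.toml itself is not printed, but the created changefeed's JSON above reflects it: eventual-consistency redo logging to a local file URI plus the synced-status thresholds. A hypothetical reconstruction of the relevant fields (key names follow TiCDC changefeed-config conventions; treat the exact contents as an assumption, not the file's verbatim text):

# Hypothetical sketch of the redo-related settings implied by the JSON above.
cat > conf/changefeed-redo.toml <<'EOF'
[consistent]
level = "eventual"
storage = "file:///tmp/tidb_cdc_test/synced_status/redo"

[synced-status]
synced-check-interval = 120
checkpoint-interval = 20
EOF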
+ sleep 20
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2025-07-11 18:45:03.815","puller_resolved_ts":"1970-01-01 08:00:00.000","last_synced_ts":"1970-01-01 08:00:00.000","now_ts":"2025-07-11 18:45:30.000","info":"Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' \u003e '\''Resolved-Ts'\'' \u003e '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2025-07-11' '18:45:03.815","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"1970-01-01' '08:00:00.000","now_ts":"2025-07-11' '18:45:30.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2025-07-11' '18:45:03.815","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"1970-01-01' '08:00:00.000","now_ts":"2025-07-11' '18:45:30.000","info":"Please' check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view ''\''TiKV-Details'\''' '\u003e' ''\''Resolved-Ts'\''' '\u003e' ''\''Max' Leader Resolved TS 'gap'\''' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please 'wait"}'
++ jq -r .info
+ info='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ target_message='Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait'
+ '[' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' '!=' 'Please check whether PD is online and TiKV Regions are all available. If PD is offline or some TiKV regions are not available, it means that the data syncing process is complete. To check whether TiKV regions are all available, you can view '\''TiKV-Details'\'' > '\''Resolved-Ts'\'' > '\''Max Leader Resolved TS gap'\'' on Grafana. If the gap is large, such as a few minutes, it means that some regions in TiKV are unavailable. Otherwise, if the gap is small and PD is online, it means the data syncing is incomplete, so please wait' ']'
+ export GO_FAILPOINTS=
+ GO_FAILPOINTS=
+ cleanup_process cdc.test
waiting for process cdc.test to exit: attempt 1...
waiting for process cdc.test to exit: attempt 2...
cdc.test: no process found
waiting for process cdc.test to exit: attempt 3...
process cdc.test has already exited
+ stop_tidb_cluster
+ check_logs /tmp/tidb_cdc_test/synced_status
++ date
+ echo '[Fri Jul 11 18:45:40 CST 2025] <<<<<< run test case synced_status success! >>>>>>'
[Fri Jul 11 18:45:40 CST 2025] <<<<<< run test case synced_status success! >>>>>>
+ stop_tidb_cluster
<<< Run all test success >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-release-7.5-pull_cdc_integration_kafka_test-1134/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS