Console Output

Skipping 1,673 KB of earlier console output..
table processor_delay.t39 exists
table processor_delay.t40 not exists for 1-th check, retry later
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
table move_table.check2 not exists for 3-th check, retry later
table processor_delay.t40 not exists for 2-th check, retry later
table changefeed_auto_stop_2.usertable exists
table changefeed_auto_stop_3.usertable not exists for 1-th check, retry later
table move_table.check2 not exists for 4-th check, retry later
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
table changefeed_auto_stop_3.usertable not exists for 2-th check, retry later
table move_table.check2 exists
check diff successfully
table processor_delay.t40 not exists for 3-th check, retry later
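The recurring "not exists for N-th check, retry later" lines come from a polling helper that waits for a replicated table to become visible downstream. A minimal sketch of such a probe, assuming a mysql client on PATH; the function name and the retry limit here are illustrative, not the harness's actual helper:

    check_table() {
        local table=$1 host=$2 port=$3
        # poll until the table is visible downstream, up to 60 tries
        for i in $(seq 1 60); do
            if mysql -h "$host" -P "$port" -u root -e "DESC $table;" >/dev/null 2>&1; then
                echo "table $table exists"
                return 0
            fi
            echo "table $table not exists for $i-th check, retry later"
            sleep 2
        done
        return 1
    }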
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e1969be240006	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-v502j, pid:14416, start at 2024-05-17 19:40:24.850650995 +0800 CST m=+5.564565877	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:42:24.861 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:40:24.841 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:30:24.841 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
wait process cdc.test exit for 3-th time...
table changefeed_auto_stop_3.usertable not exists for 3-th check, retry later
cdc.test: no process found
wait process cdc.test exit for 4-th time...
process cdc.test already exit
[Fri May 17 19:40:27 CST 2024] <<<<<< run test case move_table success! >>>>>>
table processor_delay.t40 not exists for 4-th check, retry later
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e1969be240006	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-v502j, pid:14416, start at 2024-05-17 19:40:24.850650995 +0800 CST m=+5.564565877	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:42:24.861 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:40:24.841 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:30:24.841 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e1969bd4c0014	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-v502j, pid:14498, start at 2024-05-17 19:40:24.837479288 +0800 CST m=+5.499914635	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:42:24.848 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:40:24.837 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:30:24.837 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
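Each VARIABLE_NAME/VARIABLE_VALUE block above is a dump of the mysql.tidb system table, printed once the "Verifying ... TiDB is started" probe finally gets a connection; until then each refused attempt surfaces as an ERROR 2003 line. A minimal sketch of that readiness loop (assumed form, not the script's exact code):

    # retry until TiDB accepts connections and the bootstrap table is readable
    while ! mysql -h 127.0.0.1 -P 4000 -u root \
            -e 'SELECT * FROM mysql.tidb;'; do
        sleep 1   # each refused attempt prints ERROR 2003 (HY000), as seen above
    done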
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-22-gacdbe728f
Edition:         Community
Git Commit Hash: acdbe728f97e2f5e0625d44d24ddbd1cd90d7a59
Git Branch:      HEAD
UTC Build Time:  2024-05-16 14:18:59
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-05-16 14:22:45
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/savepoint/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/savepoint/tiflash/log/error.log
arg matches is ArgMatches { args: {"advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/savepoint/tiflash-proxy.toml"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/savepoint/tiflash/log/proxy.log"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["acdbe728f97e2f5e0625d44d24ddbd1cd90d7a59"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/savepoint/tiflash/db/proxy"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-22-gacdbe728f"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
table changefeed_auto_stop_3.usertable not exists for 4-th check, retry later
table processor_delay.t40 exists
table processor_delay.t41 not exists for 1-th check, retry later
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.savepoint.cli.15804.out cli tso query --pd=http://127.0.0.1:2379
table changefeed_auto_stop_3.usertable not exists for 5-th check, retry later
table processor_delay.t41 exists
table processor_delay.t42 not exists for 1-th check, retry later
+ set +x
+ tso='449824956073115649
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449824956073115649 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
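`cdc cli tso query` prints the TSO followed by a PASS line and a Go coverage summary, so the caller keeps only the first whitespace-separated field. Condensed, the extraction in the trace above is:

    tso=$(cdc cli tso query --pd=http://127.0.0.1:2379)   # multi-line output
    start_ts=$(echo $tso | awk -F ' ' '{print $1}')       # unquoted $tso flattens the
                                                          # newlines; field 1 is the TSO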
[Fri May 17 19:40:31 CST 2024] <<<<<< START cdc server in savepoint case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.savepoint.1583115833.out server --log-file /tmp/tidb_cdc_test/savepoint/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/savepoint/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
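The probe above is one pass of a bounded readiness loop: curl the server's /debug/info endpoint, require that the body contains 'etcd info' and not 'failed to get info:', and retry up to 50 times with a 3s sleep. Condensed, the loop in the trace is equivalent to:

    for ((i = 0; i <= 50; i++)); do
        res=$(curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info)
        if ! echo "$res" | grep -q 'failed to get info:' &&
             echo "$res" | grep -q 'etcd info'; then
            break                     # capture registered in etcd; server is ready
        fi
        [ "$i" -eq 50 ] && exit 1     # give up after 50 attempts
        sleep 3
    done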
table changefeed_auto_stop_3.usertable exists
table changefeed_auto_stop_4.usertable exists
table processor_delay.t42 not exists for 2-th check, retry later
check diff failed 1-th time, retry later
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 17 May 2024 11:40:34 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/193041cf-a906-4ad3-a056-c302307a6f08
	{"id":"193041cf-a906-4ad3-a056-c302307a6f08","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865a45e5f5
	193041cf-a906-4ad3-a056-c302307a6f08

/tidb/cdc/default/default/upstream/7369932023398548993
	{"id":7369932023398548993,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/193041cf-a906-4ad3-a056-c302307a6f08
	{"id":"193041cf-a906-4ad3-a056-c302307a6f08","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865a45e5f5
	193041cf-a906-4ad3-a056-c302307a6f08

/tidb/cdc/default/default/upstream/7369932023398548993
	{"id":7369932023398548993,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/193041cf-a906-4ad3-a056-c302307a6f08
	{"id":"193041cf-a906-4ad3-a056-c302307a6f08","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865a45e5f5
	193041cf-a906-4ad3-a056-c302307a6f08

/tidb/cdc/default/default/upstream/7369932023398548993
	{"id":7369932023398548993,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.savepoint.cli.15872.out cli changefeed create --start-ts=449824956073115649 '--sink-uri=file:///tmp/tidb_cdc_test/savepoint/storage_test/ticdc-savepoint-test-6950?protocol=canal-json&enable-tidb-extension=true'
Create changefeed successfully!
ID: b14568bb-ee92-445c-a9a0-a18011ac23a7
Info: {"upstream_id":7369932023398548993,"namespace":"default","id":"b14568bb-ee92-445c-a9a0-a18011ac23a7","sink_uri":"file:///tmp/tidb_cdc_test/savepoint/storage_test/ticdc-savepoint-test-6950?protocol=canal-json\u0026enable-tidb-extension=true","create_time":"2024-05-17T19:40:34.967565641+08:00","start_ts":449824956073115649,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"file_index_width":20,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.1-40-g7bcb4de0c","resolved_ts":449824956073115649,"checkpoint_ts":449824956073115649,"checkpoint_time":"2024-05-17 19:40:29.942"}
PASS
coverage: 2.5% of statements in github.com/pingcap/tiflow/...
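The sink URI selects both the sink type (file:// picks the storage sink) and, through its query parameters, the encoding: protocol=canal-json with enable-tidb-extension=true makes the changefeed emit canal-json files that carry TiDB-specific fields such as the commit ts. Stripped of the coverage wrapper, the create call above is:

    cdc cli changefeed create \
        --start-ts=449824956073115649 \
        --sink-uri='file:///tmp/tidb_cdc_test/savepoint/storage_test/ticdc-savepoint-test-6950?protocol=canal-json&enable-tidb-extension=true'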
table processor_delay.t42 not exists for 3-th check, retry later
check diff failed 2-th time, retry later
+ set +x
+ workdir=/tmp/tidb_cdc_test/savepoint
+ sink_uri='file:///tmp/tidb_cdc_test/savepoint/storage_test/ticdc-savepoint-test-6950?protocol=canal-json&enable-tidb-extension=true'
+ consumer_replica_config=
+ log_suffix=
++ pwd
+ pwd=/tmp/tidb_cdc_test/savepoint
++ date
+ echo '[Fri May 17 19:40:37 CST 2024] <<<<<< START storage consumer in savepoint case >>>>>>'
[Fri May 17 19:40:37 CST 2024] <<<<<< START storage consumer in savepoint case >>>>>>
+ cd /tmp/tidb_cdc_test/savepoint
+ '[' '' '!=' '' ']'
+ cd /tmp/tidb_cdc_test/savepoint
+ set +x
+ cdc_storage_consumer --log-file /tmp/tidb_cdc_test/savepoint/cdc_storage_consumer.log --log-level debug --upstream-uri 'file:///tmp/tidb_cdc_test/savepoint/storage_test/ticdc-savepoint-test-6950?protocol=canal-json&enable-tidb-extension=true' --downstream-uri 'mysql://root@127.0.0.1:3306/?safe-mode=true&batch-dml-enable=false'
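cdc_storage_consumer closes the loop for storage-sink tests: it reads the canal-json files the changefeed wrote under the upstream file:// URI and applies them to the downstream TiDB over the MySQL protocol (safe-mode on, batch DML off), so the later "check diff" can compare upstream and downstream. Using $workdir and $sink_uri as set earlier in the trace, and backgrounding the process (the & is an assumption; the trace does not show how it is detached):

    cdc_storage_consumer --log-file "$workdir/cdc_storage_consumer.log" --log-level debug \
        --upstream-uri "$sink_uri" \
        --downstream-uri 'mysql://root@127.0.0.1:3306/?safe-mode=true&batch-dml-enable=false' &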
table processor_delay.t42 not exists for 4-th check, retry later
check diff failed 3-th time, retry later
table processor_delay.t42 exists
table processor_delay.t43 not exists for 1-th check, retry later
check diff failed 4-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/autorandom/run.sh using Sink-Type: storage... <<=================
The 1 times to try to start tidb cluster...
table savepoint.finish_mark not exists for 1-th check, retry later
table processor_delay.t43 exists
table processor_delay.t44 not exists for 1-th check, retry later
check diff successfully
wait process cdc.test exit for 1-th time...
table savepoint.finish_mark not exists for 2-th check, retry later
wait process cdc.test exit for 2-th time...
table processor_delay.t44 not exists for 2-th check, retry later
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Fri May 17 19:40:43 CST 2024] <<<<<< run test case changefeed_auto_stop success! >>>>>>
table savepoint.finish_mark not exists for 3-th check, retry later
table processor_delay.t44 not exists for 3-th check, retry later
table savepoint.finish_mark not exists for 4-th check, retry later
table processor_delay.t44 not exists for 4-th check, retry later
table savepoint.finish_mark exists
check diff successfully
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
table processor_delay.t44 exists
table processor_delay.t45 not exists for 1-th check, retry later
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Fri May 17 19:40:49 CST 2024] <<<<<< run test case savepoint success! >>>>>>
table processor_delay.t45 exists
table processor_delay.t46 not exists for 1-th check, retry later
start tidb cluster in /tmp/tidb_cdc_test/autorandom
Starting Upstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Starting Downstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Verifying upstream PD is started...
table processor_delay.t46 not exists for 2-th check, retry later
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
<<< Run all test success >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-release-7.5-pull_cdc_integration_storage_test-364/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
table processor_delay.t46 not exists for 3-th check, retry later
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
Starting Upstream TiDB...
Release Version: v7.5.1-70-gbe578f5db8
Edition: Community
Git Commit Hash: be578f5db8a899a19030344cbac6b4d3629ec872
Git Branch: release-7.5
UTC Build Time: 2024-05-17 08:16:47
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-70-gbe578f5db8
Edition: Community
Git Commit Hash: be578f5db8a899a19030344cbac6b4d3629ec872
Git Branch: release-7.5
UTC Build Time: 2024-05-17 08:16:47
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
table processor_delay.t46 not exists for 4-th check, retry later
table processor_delay.t46 exists
table processor_delay.t47 not exists for 1-th check, retry later
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
table processor_delay.t47 exists
table processor_delay.t48 not exists for 1-th check, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/synced_status/run.sh using Sink-Type: storage... <<=================
+++ dirname /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/synced_status/run.sh
++ cd /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/synced_status
++ pwd
+ CUR=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/synced_status
+ source /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/synced_status/../_utils/test_prepare
++ UP_TIDB_HOST=127.0.0.1
++ UP_TIDB_PORT=4000
++ UP_TIDB_OTHER_PORT=4001
++ UP_TIDB_STATUS=10080
++ UP_TIDB_OTHER_STATUS=10081
++ DOWN_TIDB_HOST=127.0.0.1
++ DOWN_TIDB_PORT=3306
++ DOWN_TIDB_STATUS=20080
++ TLS_TIDB_HOST=127.0.0.1
++ TLS_TIDB_PORT=3307
++ TLS_TIDB_STATUS=30080
++ UP_PD_HOST_1=127.0.0.1
++ UP_PD_PORT_1=2379
++ UP_PD_PEER_PORT_1=2380
++ UP_PD_HOST_2=127.0.0.1
++ UP_PD_PORT_2=2679
++ UP_PD_PEER_PORT_2=2680
++ UP_PD_HOST_3=127.0.0.1
++ UP_PD_PORT_3=2779
++ UP_PD_PEER_PORT_3=2780
++ DOWN_PD_HOST=127.0.0.1
++ DOWN_PD_PORT=2479
++ DOWN_PD_PEER_PORT=2480
++ TLS_PD_HOST=127.0.0.1
++ TLS_PD_PORT=2579
++ TLS_PD_PEER_PORT=2580
++ UP_TIKV_HOST_1=127.0.0.1
++ UP_TIKV_PORT_1=20160
++ UP_TIKV_STATUS_PORT_1=20181
++ UP_TIKV_HOST_2=127.0.0.1
++ UP_TIKV_PORT_2=20161
++ UP_TIKV_STATUS_PORT_2=20182
++ UP_TIKV_HOST_3=127.0.0.1
++ UP_TIKV_PORT_3=20162
++ UP_TIKV_STATUS_PORT_3=20183
++ DOWN_TIKV_HOST=127.0.0.1
++ DOWN_TIKV_PORT=21160
++ DOWN_TIKV_STATUS_PORT=21180
++ TLS_TIKV_HOST=127.0.0.1
++ TLS_TIKV_PORT=22160
++ TLS_TIKV_STATUS_PORT=22180
+++ cat /tmp/tidb_cdc_test/KAFKA_VERSION
+++ echo 2.4.1
++ KAFKA_VERSION=2.4.1
+ WORK_DIR=/tmp/tidb_cdc_test/synced_status
+ CDC_BINARY=cdc.test
+ SINK_TYPE=storage
+ CDC_COUNT=3
+ DB_COUNT=4
+ trap stop_tidb_cluster EXIT
+ run_normal_case_and_unavailable_pd conf/changefeed.toml
+ rm -rf /tmp/tidb_cdc_test/synced_status
+ mkdir -p /tmp/tidb_cdc_test/synced_status
+ start_tidb_cluster --workdir /tmp/tidb_cdc_test/synced_status
The 1 times to try to start tidb cluster...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196bf7440012	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-j7tp5, pid:30204, start at 2024-05-17 19:41:01.289293789 +0800 CST m=+5.184245737	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:01.299 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:01.265 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:01.265 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196bf7440012	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-j7tp5, pid:30204, start at 2024-05-17 19:41:01.289293789 +0800 CST m=+5.184245737	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:01.299 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:01.265 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:01.265 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196bf8d00014	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-j7tp5, pid:30286, start at 2024-05-17 19:41:01.401353278 +0800 CST m=+5.241345318	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:01.408 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:01.414 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:01.414 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-22-gacdbe728f
Edition:         Community
Git Commit Hash: acdbe728f97e2f5e0625d44d24ddbd1cd90d7a59
Git Branch:      HEAD
UTC Build Time:  2024-05-16 14:18:59
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-05-16 14:22:45
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/autorandom/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/autorandom/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-22-gacdbe728f"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/autorandom/tiflash/db/proxy"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["acdbe728f97e2f5e0625d44d24ddbd1cd90d7a59"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/autorandom/tiflash-proxy.toml"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/autorandom/tiflash/log/proxy.log"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
table processor_delay.t48 not exists for 2-th check, retry later
start tidb cluster in /tmp/tidb_cdc_test/synced_status
Starting Upstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Starting Downstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Verifying upstream PD is started...
[Fri May 17 19:41:05 CST 2024] <<<<<< START cdc server in autorandom case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.autorandom.3164531647.out server --log-file /tmp/tidb_cdc_test/autorandom/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/autorandom/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
table processor_delay.t48 not exists for 3-th check, retry later
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
table processor_delay.t48 not exists for 4-th check, retry later
Starting Upstream TiDB...
Release Version: v7.5.1-70-gbe578f5db8
Edition: Community
Git Commit Hash: be578f5db8a899a19030344cbac6b4d3629ec872
Git Branch: release-7.5
UTC Build Time: 2024-05-17 08:16:47
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-70-gbe578f5db8
Edition: Community
Git Commit Hash: be578f5db8a899a19030344cbac6b4d3629ec872
Git Branch: release-7.5
UTC Build Time: 2024-05-17 08:16:47
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 17 May 2024 11:41:08 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/91ed3bf0-8eea-4104-9c0f-b20d609c4812
	{"id":"91ed3bf0-8eea-4104-9c0f-b20d609c4812","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865ad1d5f2
	91ed3bf0-8eea-4104-9c0f-b20d609c4812

/tidb/cdc/default/default/upstream/7369932183358405614
	{"id":7369932183358405614,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/91ed3bf0-8eea-4104-9c0f-b20d609c4812
	{"id":"91ed3bf0-8eea-4104-9c0f-b20d609c4812","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865ad1d5f2
	91ed3bf0-8eea-4104-9c0f-b20d609c4812

/tidb/cdc/default/default/upstream/7369932183358405614
	{"id":7369932183358405614,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/91ed3bf0-8eea-4104-9c0f-b20d609c4812
	{"id":"91ed3bf0-8eea-4104-9c0f-b20d609c4812","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865ad1d5f2
	91ed3bf0-8eea-4104-9c0f-b20d609c4812

/tidb/cdc/default/default/upstream/7369932183358405614
	{"id":7369932183358405614,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
Create changefeed successfully!
ID: 6a8350ab-d65a-45f7-8f4b-ae6e0e45373f
Info: {"upstream_id":7369932183358405614,"namespace":"default","id":"6a8350ab-d65a-45f7-8f4b-ae6e0e45373f","sink_uri":"file:///tmp/tidb_cdc_test/autorandom/storage_test/ticdc-autorandom-test-23698?protocol=canal-json\u0026enable-tidb-extension=true","create_time":"2024-05-17T19:41:08.416591773+08:00","start_ts":449824966132105221,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"file_index_width":20,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.1-40-g7bcb4de0c","resolved_ts":449824966132105221,"checkpoint_ts":449824966132105221,"checkpoint_time":"2024-05-17 19:41:08.314"}
+ workdir=/tmp/tidb_cdc_test/autorandom
+ sink_uri='file:///tmp/tidb_cdc_test/autorandom/storage_test/ticdc-autorandom-test-23698?protocol=canal-json&enable-tidb-extension=true'
+ consumer_replica_config=
+ log_suffix=
++ pwd
+ pwd=/tmp/tidb_cdc_test/autorandom
++ date
+ echo '[Fri May 17 19:41:08 CST 2024] <<<<<< START storage consumer in autorandom case >>>>>>'
[Fri May 17 19:41:08 CST 2024] <<<<<< START storage consumer in autorandom case >>>>>>
+ cd /tmp/tidb_cdc_test/autorandom
+ '[' '' '!=' '' ']'
+ cd /tmp/tidb_cdc_test/autorandom
+ set +x
+ cdc_storage_consumer --log-file /tmp/tidb_cdc_test/autorandom/cdc_storage_consumer.log --log-level debug --upstream-uri 'file:///tmp/tidb_cdc_test/autorandom/storage_test/ticdc-autorandom-test-23698?protocol=canal-json&enable-tidb-extension=true' --downstream-uri 'mysql://root@127.0.0.1:3306/?safe-mode=true&batch-dml-enable=false'
table autorandom_test.table_a not exists for 1-th check, retry later
table processor_delay.t48 exists
table processor_delay.t49 exists
table processor_delay.t50 not exists for 1-th check, retry later
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
table autorandom_test.table_a not exists for 2-th check, retry later
table processor_delay.t50 not exists for 2-th check, retry later
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
table autorandom_test.table_a not exists for 3-th check, retry later
table processor_delay.t50 not exists for 3-th check, retry later
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196cb6840016	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-v502j, pid:16842, start at 2024-05-17 19:41:13.524560201 +0800 CST m=+5.232472516	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:13.531 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:13.505 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:13.505 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196cb6840016	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-v502j, pid:16842, start at 2024-05-17 19:41:13.524560201 +0800 CST m=+5.232472516	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:13.531 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:13.505 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:13.505 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196cb6e00006	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-v502j, pid:16929, start at 2024-05-17 19:41:13.534992308 +0800 CST m=+5.164151396	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:13.542 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:13.528 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:13.528 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-22-gacdbe728f
Edition:         Community
Git Commit Hash: acdbe728f97e2f5e0625d44d24ddbd1cd90d7a59
Git Branch:      HEAD
UTC Build Time:  2024-05-16 14:18:59
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-05-16 14:22:45
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status/tiflash/log/error.log
arg matches is ArgMatches { args: {"log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/log/proxy.log"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash-proxy.toml"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["acdbe728f97e2f5e0625d44d24ddbd1cd90d7a59"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/db/proxy"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-22-gacdbe728f"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
table autorandom_test.table_a not exists for 4-th check, retry later
table processor_delay.t50 not exists for 4-th check, retry later
+ cd /tmp/tidb_cdc_test/synced_status
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.18300.out cli tso query --pd=http://127.0.0.1:2379
table autorandom_test.table_a not exists for 5-th check, retry later
table processor_delay.t50 not exists for 5-th check, retry later
+ set +x
+ tso='449824968397553665
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449824968397553665 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ start_ts=449824968397553665
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status --binary cdc.test
[Fri May 17 19:41:18 CST 2024] <<<<<< START cdc server in synced_status case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.1832518327.out server --log-file /tmp/tidb_cdc_test/synced_status/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
table autorandom_test.table_a exists
check diff successfully
wait process cdc.test exit for 1-th time...
table processor_delay.t50 exists
check diff successfully
wait process cdc.test exit for 2-th time...
wait process cdc.test exit for 1-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Fri May 17 19:41:20 CST 2024] <<<<<< run test case autorandom success! >>>>>>
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Fri May 17 19:41:21 CST 2024] <<<<<< run test case processor_etcd_worker_delay success! >>>>>>
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 17 May 2024 11:41:21 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/aa591c17-cc71-4676-bae2-8d3f53860ee4
	{"id":"aa591c17-cc71-4676-bae2-8d3f53860ee4","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865b0945f4
	aa591c17-cc71-4676-bae2-8d3f53860ee4

/tidb/cdc/default/default/upstream/7369932233026549851
	{"id":7369932233026549851,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/aa591c17-cc71-4676-bae2-8d3f53860ee4
	{"id":"aa591c17-cc71-4676-bae2-8d3f53860ee4","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865b0945f4
	aa591c17-cc71-4676-bae2-8d3f53860ee4

/tidb/cdc/default/default/upstream/7369932233026549851
	{"id":7369932233026549851,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/aa591c17-cc71-4676-bae2-8d3f53860ee4
	{"id":"aa591c17-cc71-4676-bae2-8d3f53860ee4","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865b0945f4
	aa591c17-cc71-4676-bae2-8d3f53860ee4

/tidb/cdc/default/default/upstream/7369932233026549851
	{"id":7369932233026549851,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
+ config_path=conf/changefeed.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=449824968397553665 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/synced_status/conf/changefeed.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.18371.out cli changefeed create --start-ts=449824968397553665 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/synced_status/conf/changefeed.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7369932233026549851,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2024-05-17T19:41:21.889549401+08:00","start_ts":449824968397553665,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v7.5.1-40-g7bcb4de0c","resolved_ts":449824968397553665,"checkpoint_ts":449824968397553665,"checkpoint_time":"2024-05-17 19:41:16.956"}
PASS
coverage: 2.4% of statements in github.com/pingcap/tiflow/...
+ set +x
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
+ synced_status='{"synced":true,"sink_checkpoint_ts":"2024-05-17 19:41:16.956","puller_resolved_ts":"1970-01-01 08:00:00.000","last_synced_ts":"1970-01-01 08:00:00.000","now_ts":"2024-05-17 19:41:23.000","info":"Data syncing is finished"}'
++ echo '{"synced":true,"sink_checkpoint_ts":"2024-05-17' '19:41:16.956","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"1970-01-01' '08:00:00.000","now_ts":"2024-05-17' '19:41:23.000","info":"Data' syncing is 'finished"}'
++ jq .synced
+ status=true
++ echo '{"synced":true,"sink_checkpoint_ts":"2024-05-17' '19:41:16.956","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"1970-01-01' '08:00:00.000","now_ts":"2024-05-17' '19:41:23.000","info":"Data' syncing is 'finished"}'
++ jq -r .sink_checkpoint_ts
+ sink_checkpoint_ts='2024-05-17 19:41:16.956'
++ echo '{"synced":true,"sink_checkpoint_ts":"2024-05-17' '19:41:16.956","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"1970-01-01' '08:00:00.000","now_ts":"2024-05-17' '19:41:23.000","info":"Data' syncing is 'finished"}'
++ jq -r .puller_resolved_ts
+ puller_resolved_ts='1970-01-01 08:00:00.000'
++ echo '{"synced":true,"sink_checkpoint_ts":"2024-05-17' '19:41:16.956","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"1970-01-01' '08:00:00.000","now_ts":"2024-05-17' '19:41:23.000","info":"Data' syncing is 'finished"}'
++ jq -r .last_synced_ts
+ last_synced_ts='1970-01-01 08:00:00.000'
+ '[' true '!=' true ']'
+ '[' '1970-01-01 08:00:00.000' '!=' '1970-01-01 08:00:00.000' ']'
+ '[' '1970-01-01 08:00:00.000' '!=' '1970-01-01 08:00:00.000' ']'
++ date '+%Y-%m-%d %H:%M:%S'
+ current='2024-05-17 19:41:23'
+ echo 'sink_checkpoint_ts is 2024-05-17' 19:41:16.956
sink_checkpoint_ts is 2024-05-17 19:41:16.956
++ date -d '2024-05-17 19:41:16.956' +%s
+ checkpoint_timestamp=1715946076
++ date -d '2024-05-17 19:41:23' +%s
+ current_timestamp=1715946083
+ '[' 7 -gt 300 ']'
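The synced-status check is: GET /api/v2/changefeeds/<id>/synced, pull fields out of the JSON with jq, and require the sink checkpoint to lag wall-clock time by at most 300s (here the lag is 7s, so the check passes). The trace above condenses to:

    synced_status=$(curl -s -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced)
    status=$(echo "$synced_status" | jq .synced)
    sink_checkpoint_ts=$(echo "$synced_status" | jq -r .sink_checkpoint_ts)
    checkpoint_timestamp=$(date -d "$sink_checkpoint_ts" +%s)
    current_timestamp=$(date +%s)
    # fail if the checkpoint is more than 300s behind now
    if [ $((current_timestamp - checkpoint_timestamp)) -gt 300 ]; then
        exit 1
    fi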
+ run_sql 'USE TEST;Create table t1(a int primary key, b int);insert into t1 values(1,2);insert into t1 values(2,3);'
+ check_table_exists test.t1 127.0.0.1 3306
table test.t1 not exists for 1-th check, retry later
table test.t1 exists
+ sleep 5
check_changefeed_state http://127.0.0.1:2379 31c6812b-6446-41d4-8e3b-f325c103cf1e finished null
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=31c6812b-6446-41d4-8e3b-f325c103cf1e
+ expected_state=finished
+ error_msg=null
+ tls_dir=null
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c 31c6812b-6446-41d4-8e3b-f325c103cf1e -s
+ info='{
  "upstream_id": 7369931817054471781,
  "namespace": "default",
  "id": "31c6812b-6446-41d4-8e3b-f325c103cf1e",
  "state": "finished",
  "checkpoint_tso": 449824967352647681,
  "checkpoint_time": "2024-05-17 19:41:12.970",
  "error": null
}'
+ echo '{
  "upstream_id": 7369931817054471781,
  "namespace": "default",
  "id": "31c6812b-6446-41d4-8e3b-f325c103cf1e",
  "state": "finished",
  "checkpoint_tso": 449824967352647681,
  "checkpoint_time": "2024-05-17 19:41:12.970",
  "error": null
}'
{
  "upstream_id": 7369931817054471781,
  "namespace": "default",
  "id": "31c6812b-6446-41d4-8e3b-f325c103cf1e",
  "state": "finished",
  "checkpoint_tso": 449824967352647681,
  "checkpoint_time": "2024-05-17 19:41:12.970",
  "error": null
}
++ echo '{' '"upstream_id":' 7369931817054471781, '"namespace":' '"default",' '"id":' '"31c6812b-6446-41d4-8e3b-f325c103cf1e",' '"state":' '"finished",' '"checkpoint_tso":' 449824967352647681, '"checkpoint_time":' '"2024-05-17' '19:41:12.970",' '"error":' null '}'
++ jq -r .state
+ state=finished
+ [[ ! finished == \f\i\n\i\s\h\e\d ]]
++ echo '{' '"upstream_id":' 7369931817054471781, '"namespace":' '"default",' '"id":' '"31c6812b-6446-41d4-8e3b-f325c103cf1e",' '"state":' '"finished",' '"checkpoint_tso":' 449824967352647681, '"checkpoint_time":' '"2024-05-17' '19:41:12.970",' '"error":' null '}'
++ jq -r .error.message
+ message=null
+ [[ ! null =~ null ]]
run task successfully
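check_changefeed_state wraps `cdc cli changefeed query` and asserts both the state and the error message with jq; "run task successfully" is printed only when both match. A minimal sketch with the arguments from the trace:

    info=$(cdc cli changefeed query --pd=http://127.0.0.1:2379 \
        -c 31c6812b-6446-41d4-8e3b-f325c103cf1e -s)
    state=$(echo "$info" | jq -r .state)
    [[ "$state" == "finished" ]] || exit 1
    message=$(echo "$info" | jq -r .error.message)
    [[ "$message" =~ null ]] || exit 1
    echo "run task successfully"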
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Fri May 17 19:41:25 CST 2024] <<<<<< run test case changefeed_finish success! >>>>>>
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-17 19:41:30.406","puller_resolved_ts":"1970-01-01 08:00:00.000","last_synced_ts":"2024-05-17 19:41:23.605","now_ts":"2024-05-17 19:41:30.000","info":"The data syncing is not finished, please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-17' '19:41:30.406","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2024-05-17' '19:41:23.605","now_ts":"2024-05-17' '19:41:30.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-17' '19:41:30.406","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2024-05-17' '19:41:23.605","now_ts":"2024-05-17' '19:41:30.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq -r .info
+ info='The data syncing is not finished, please wait'
+ '[' 'The data syncing is not finished, please wait' '!=' 'The data syncing is not finished, please wait' ']'
+ sleep 130
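The synced probe above asserts both the synced flag and the human-readable info string. A sketch of that check, assuming the same endpoint and jq filters as in the trace:

    # Synced-status assertion, reconstructed from the trace above.
    synced_status=$(curl -s -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced)
    status=$(echo "$synced_status" | jq .synced)
    info=$(echo "$synced_status" | jq -r .info)
    if [ "$status" != false ]; then
        echo "expected synced=false, got: $status" && exit 1
    fi
    if [ "$info" != 'The data syncing is not finished, please wait' ]; then
        echo "unexpected info: $info" && exit 1
    fi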
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/generate_column/run.sh using Sink-Type: storage... <<=================
[Fri May 17 19:41:32 CST 2024] <<<<<< run test case generate_column success! >>>>>>
<<< Run all test success >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-release-7.5-pull_cdc_integration_storage_test-364/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/sink_hang/run.sh using Sink-Type: storage... <<=================
The 1-st attempt to start tidb cluster...
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/force_replicate_table/run.sh using Sink-Type: storage... <<=================
The 1-st attempt to start tidb cluster...
start tidb cluster in /tmp/tidb_cdc_test/sink_hang
Starting Upstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Starting Downstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v7.5.1-70-gbe578f5db8
Edition: Community
Git Commit Hash: be578f5db8a899a19030344cbac6b4d3629ec872
Git Branch: release-7.5
UTC Build Time: 2024-05-17 08:16:47
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-70-gbe578f5db8
Edition: Community
Git Commit Hash: be578f5db8a899a19030344cbac6b4d3629ec872
Git Branch: release-7.5
UTC Build Time: 2024-05-17 08:16:47
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
start tidb cluster in /tmp/tidb_cdc_test/force_replicate_table
Starting Upstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Starting Downstream PD...
Release Version: v7.5.1-7-g7eb188c4f
Edition: Community
Git Commit Hash: 7eb188c4f8caba495a33eafedd4540afbc4ca6fc
Git Branch: release-7.5
UTC Build Time:  2024-05-13 04:29:07
Verifying upstream PD is started...
Verifying downstream PD is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   f2be3c0b9f0e60b619dade22410979ca91f4d85a
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-14 11:07:23
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
Starting Upstream TiDB...
Release Version: v7.5.1-70-gbe578f5db8
Edition: Community
Git Commit Hash: be578f5db8a899a19030344cbac6b4d3629ec872
Git Branch: release-7.5
UTC Build Time: 2024-05-17 08:16:47
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-70-gbe578f5db8
Edition: Community
Git Commit Hash: be578f5db8a899a19030344cbac6b4d3629ec872
Git Branch: release-7.5
UTC Build Time: 2024-05-17 08:16:47
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196ec6500003	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-cnpmc, pid:8212, start at 2024-05-17 19:41:47.285729533 +0800 CST m=+5.114038157	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:47.295 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:47.284 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:47.284 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196ec6500003	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-cnpmc, pid:8212, start at 2024-05-17 19:41:47.285729533 +0800 CST m=+5.114038157	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:47.295 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:47.284 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:47.284 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196ec63c0014	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-cnpmc, pid:8297, start at 2024-05-17 19:41:47.304485203 +0800 CST m=+5.076733021	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:47.311 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:47.279 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:47.279 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-22-gacdbe728f
Edition:         Community
Git Commit Hash: acdbe728f97e2f5e0625d44d24ddbd1cd90d7a59
Git Branch:      HEAD
UTC Build Time:  2024-05-16 14:18:59
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-05-16 14:22:45
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/sink_hang/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/sink_hang/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-22-gacdbe728f"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["acdbe728f97e2f5e0625d44d24ddbd1cd90d7a59"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/sink_hang/tiflash-proxy.toml"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/sink_hang/tiflash/log/proxy.log"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/sink_hang/tiflash/db/proxy"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
[Fri May 17 19:41:50 CST 2024] <<<<<< START cdc server in sink_hang case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ GO_FAILPOINTS='github.com/pingcap/tiflow/cdc/sink/dmlsink/txn/mysql/MySQLSinkExecDMLError=2*return(true)'
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.sink_hang.95599561.out server --log-file /tmp/tidb_cdc_test/sink_hang/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/sink_hang/cdc_data --cluster-id default --addr 127.0.0.1:8300 --pd http://127.0.0.1:2379
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 17 May 2024 11:41:53 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/f5d91747-b346-4fcf-ba8f-0c68f4b578ca
	{"id":"f5d91747-b346-4fcf-ba8f-0c68f4b578ca","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865b8ba9f2
	f5d91747-b346-4fcf-ba8f-0c68f4b578ca

/tidb/cdc/default/default/upstream/7369932380151918281
	{"id":7369932380151918281,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/f5d91747-b346-4fcf-ba8f-0c68f4b578ca
	{"id":"f5d91747-b346-4fcf-ba8f-0c68f4b578ca","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865b8ba9f2
	f5d91747-b346-4fcf-ba8f-0c68f4b578ca

/tidb/cdc/default/default/upstream/7369932380151918281
	{"id":7369932380151918281,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/f5d91747-b346-4fcf-ba8f-0c68f4b578ca
	{"id":"f5d91747-b346-4fcf-ba8f-0c68f4b578ca","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865b8ba9f2
	f5d91747-b346-4fcf-ba8f-0c68f4b578ca

/tidb/cdc/default/default/upstream/7369932380151918281
	{"id":7369932380151918281,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
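The loop traced above waits for the cdc server by polling /debug/info until the body contains 'etcd info' (and not the 'failed to get info:' marker), retrying up to 50 times with a 3-second sleep. A condensed sketch:

    # cdc-server readiness wait, condensed from the curl retry loop traced above.
    for ((i = 0; i <= 50; i++)); do
        res=$(curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info)
        if ! echo "$res" | grep -q 'failed to get info:' &&
             echo "$res" | grep -q 'etcd info'; then
            break    # server is up and its capture is registered in etcd
        fi
        if [ "$i" -eq 50 ]; then
            echo 'cdc server failed to start in time' && exit 1
        fi
        sleep 3
    done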
+ workdir=/tmp/tidb_cdc_test/sink_hang
+ sink_uri='file:///tmp/tidb_cdc_test/sink_hang/storage_test/ticdc-sink-hang-test-30768?protocol=canal-json&enable-tidb-extension=true'
+ consumer_replica_config=
+ log_suffix=
++ pwd
+ pwd=/tmp/tidb_cdc_test/sink_hang
++ date
+ echo '[Fri May 17 19:41:53 CST 2024] <<<<<< START storage consumer in sink_hang case >>>>>>'
[Fri May 17 19:41:53 CST 2024] <<<<<< START storage consumer in sink_hang case >>>>>>
+ cd /tmp/tidb_cdc_test/sink_hang
+ '[' '' '!=' '' ']'
+ cd /tmp/tidb_cdc_test/sink_hang
+ set +x
+ cdc_storage_consumer --log-file /tmp/tidb_cdc_test/sink_hang/cdc_storage_consumer.log --log-level debug --upstream-uri 'file:///tmp/tidb_cdc_test/sink_hang/storage_test/ticdc-sink-hang-test-30768?protocol=canal-json&enable-tidb-extension=true' --downstream-uri 'mysql://root@127.0.0.1:3306/?safe-mode=true&batch-dml-enable=false'
check_changefeed_status 127.0.0.1:8300 88f31a0e-ccb5-4165-a64c-bc70ae6193fd normal last_error null
+ endpoint=127.0.0.1:8300
+ changefeed_id=88f31a0e-ccb5-4165-a64c-bc70ae6193fd
+ expected_state=normal
+ field=last_error
+ error_pattern=null
++ curl 127.0.0.1:8300/api/v2/changefeeds/88f31a0e-ccb5-4165-a64c-bc70ae6193fd/status
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    86  100    86    0     0    646      0 --:--:-- --:--:-- --:--:--   651
+ info='{"state":"normal","resolved_ts":449824978051792900,"checkpoint_ts":449824978051792900}'
+ echo '{"state":"normal","resolved_ts":449824978051792900,"checkpoint_ts":449824978051792900}'
{"state":"normal","resolved_ts":449824978051792900,"checkpoint_ts":449824978051792900}
++ echo '{"state":"normal","resolved_ts":449824978051792900,"checkpoint_ts":449824978051792900}'
++ jq -r .state
+ state=normal
+ [[ ! normal == \n\o\r\m\a\l ]]
+ [[ -z last_error ]]
++ echo '{"state":"normal","resolved_ts":449824978051792900,"checkpoint_ts":449824978051792900}'
++ jq -r .last_error.message
+ error_msg=null
+ [[ ! null =~ null ]]
run task successfully
check_changefeed_status 127.0.0.1:8300 88f31a0e-ccb5-4165-a64c-bc70ae6193fd normal last_warning null
+ endpoint=127.0.0.1:8300
+ changefeed_id=88f31a0e-ccb5-4165-a64c-bc70ae6193fd
+ expected_state=normal
+ field=last_warning
+ error_pattern=null
++ curl 127.0.0.1:8300/api/v2/changefeeds/88f31a0e-ccb5-4165-a64c-bc70ae6193fd/status
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    86  100    86    0     0    667      0 --:--:-- --:--:-- --:--:--   671
+ info='{"state":"normal","resolved_ts":449824978051792900,"checkpoint_ts":449824978051792900}'
+ echo '{"state":"normal","resolved_ts":449824978051792900,"checkpoint_ts":449824978051792900}'
{"state":"normal","resolved_ts":449824978051792900,"checkpoint_ts":449824978051792900}
++ echo '{"state":"normal","resolved_ts":449824978051792900,"checkpoint_ts":449824978051792900}'
++ jq -r .state
+ state=normal
+ [[ ! normal == \n\o\r\m\a\l ]]
+ [[ -z last_warning ]]
++ echo '{"state":"normal","resolved_ts":449824978051792900,"checkpoint_ts":449824978051792900}'
++ jq -r .last_warning.message
+ error_msg=null
+ [[ ! null =~ null ]]
run task successfully
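check_changefeed_status, unlike check_changefeed_state, goes through the HTTP /api/v2 status endpoint and can assert on either last_error or last_warning. A sketch matching the trace above:

    # check_changefeed_status, reconstructed from the trace above.
    check_changefeed_status() {
        local endpoint=$1 changefeed_id=$2 expected_state=$3 field=$4 error_pattern=$5
        local info state error_msg
        info=$(curl -s "$endpoint/api/v2/changefeeds/$changefeed_id/status")
        state=$(echo "$info" | jq -r .state)
        if [[ "$state" != "$expected_state" ]]; then
            echo "unexpected state: $state" && return 1
        fi
        if [[ -n "$field" ]]; then
            error_msg=$(echo "$info" | jq -r ".$field.message")
            if [[ ! "$error_msg" =~ $error_pattern ]]; then
                echo "unexpected $field message: $error_msg" && return 1
            fi
        fi
        echo "run task successfully"
    }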
table sink_hang.t1 not exists for 1-th check, retry later
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196f23840015	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-813sv, pid:6902, start at 2024-05-17 19:41:53.278863277 +0800 CST m=+5.047633160	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:53.285 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:53.249 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:53.249 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196f23840015	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-813sv, pid:6902, start at 2024-05-17 19:41:53.278863277 +0800 CST m=+5.047633160	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:53.285 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:53.249 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:53.249 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e196f24fc0013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:xtiflow-release-7-5-pull-cdc-integration-storage-test-364-813sv, pid:6986, start at 2024-05-17 19:41:53.366974304 +0800 CST m=+5.083809971	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-19:43:53.373 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-19:41:53.343 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-19:31:53.343 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-22-gacdbe728f
Edition:         Community
Git Commit Hash: acdbe728f97e2f5e0625d44d24ddbd1cd90d7a59
Git Branch:      HEAD
UTC Build Time:  2024-05-16 14:18:59
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-05-16 14:22:45
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/force_replicate_table/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/force_replicate_table/tiflash/log/error.log
arg matches is ArgMatches { args: {"log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/force_replicate_table/tiflash/log/proxy.log"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-22-gacdbe728f"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/force_replicate_table/tiflash/db/proxy"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["acdbe728f97e2f5e0625d44d24ddbd1cd90d7a59"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/force_replicate_table/tiflash-proxy.toml"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
table sink_hang.t1 not exists for 2-th check, retry later
[Fri May 17 19:41:57 CST 2024] <<<<<< START cdc server in force_replicate_table case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ [[ no != \n\o ]]
+ (( i = 0 ))
+ (( i <= 50 ))
+ GO_FAILPOINTS=
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.force_replicate_table.83198321.out server --log-file /tmp/tidb_cdc_test/force_replicate_table/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/force_replicate_table/cdc_data --cluster-id default
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
table sink_hang.t1 not exists for 3-th check, retry later
table sink_hang.t1 not exists for 4-th check, retry later
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 17 May 2024 11:42:00 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/92879b7e-a2d8-4e7a-b7cb-78c1b7774556
	{"id":"92879b7e-a2d8-4e7a-b7cb-78c1b7774556","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865b9f43f0
	92879b7e-a2d8-4e7a-b7cb-78c1b7774556

/tidb/cdc/default/default/upstream/7369932395099466156
	{"id":7369932395099466156,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/92879b7e-a2d8-4e7a-b7cb-78c1b7774556
	{"id":"92879b7e-a2d8-4e7a-b7cb-78c1b7774556","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865b9f43f0
	92879b7e-a2d8-4e7a-b7cb-78c1b7774556

/tidb/cdc/default/default/upstream/7369932395099466156
	{"id":7369932395099466156,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/92879b7e-a2d8-4e7a-b7cb-78c1b7774556
	{"id":"92879b7e-a2d8-4e7a-b7cb-78c1b7774556","address":"127.0.0.1:8300","version":"v7.5.1-40-g7bcb4de0c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f865b9f43f0
	92879b7e-a2d8-4e7a-b7cb-78c1b7774556

/tidb/cdc/default/default/upstream/7369932395099466156
	{"id":7369932395099466156,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
Create changefeed successfully!
ID: cfaea9dd-7614-4d01-87c5-9487fe98d9e8
Info: {"upstream_id":7369932395099466156,"namespace":"default","id":"cfaea9dd-7614-4d01-87c5-9487fe98d9e8","sink_uri":"file:///tmp/tidb_cdc_test/force_replicate_table/storage_test/ticdc-force_replicate_table-test-23082?protocol=canal-json\u0026enable-tidb-extension=true","create_time":"2024-05-17T19:42:00.866595647+08:00","start_ts":449824979039027201,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":true,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"file_index_width":20,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.1-40-g7bcb4de0c","resolved_ts":449824979039027201,"checkpoint_ts":449824979039027201,"checkpoint_time":"2024-05-17 19:41:57.550"}
+ workdir=/tmp/tidb_cdc_test/force_replicate_table
+ sink_uri='file:///tmp/tidb_cdc_test/force_replicate_table/storage_test/ticdc-force_replicate_table-test-23082?protocol=canal-json&enable-tidb-extension=true'
+ consumer_replica_config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/force_replicate_table/conf/changefeed.toml
+ log_suffix=
++ pwd
+ pwd=/tmp/tidb_cdc_test/force_replicate_table
++ date
+ echo '[Fri May 17 19:42:00 CST 2024] <<<<<< START storage consumer in force_replicate_table case >>>>>>'
[Fri May 17 19:42:00 CST 2024] <<<<<< START storage consumer in force_replicate_table case >>>>>>
+ cd /tmp/tidb_cdc_test/force_replicate_table
+ '[' /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/force_replicate_table/conf/changefeed.toml '!=' '' ']'
+ cd /tmp/tidb_cdc_test/force_replicate_table
+ set +x
+ cdc_storage_consumer --log-file /tmp/tidb_cdc_test/force_replicate_table/cdc_storage_consumer.log --log-level debug --upstream-uri 'file:///tmp/tidb_cdc_test/force_replicate_table/storage_test/ticdc-force_replicate_table-test-23082?protocol=canal-json&enable-tidb-extension=true' --downstream-uri 'mysql://root@127.0.0.1:3306/?safe-mode=true&batch-dml-enable=false' --config /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/force_replicate_table/conf/changefeed.toml
table sink_hang.t1 not exists for 5-th check, retry later
table sink_hang.t1 exists
table sink_hang.t2 exists
check diff failed 1-th time, retry later
check diff failed 2-th time, retry later
table force_replicate_table.t0 not exists for 1-th check, retry later
check diff failed 3-th time, retry later
table force_replicate_table.t0 not exists for 2-th check, retry later
check diff failed 4-th time, retry later
table force_replicate_table.t0 exists
table force_replicate_table.t1 exists
table force_replicate_table.t2 exists
table force_replicate_table.t3 exists
table force_replicate_table.t4 exists
table force_replicate_table.t5 not exists for 1-th check, retry later
table force_replicate_table.t5 not exists for 2-th check, retry later
check diff failed 5-th time, retry later
check diff successfully
wait process cdc.test exit for 1-th time...
table force_replicate_table.t5 not exists for 3-th check, retry later
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Fri May 17 19:42:16 CST 2024] <<<<<< run test case sink_hang success! >>>>>>
table force_replicate_table.t5 not exists for 4-th check, retry later
table force_replicate_table.t5 not exists for 5-th check, retry later
table force_replicate_table.t5 not exists for 6-th check, retry later
table force_replicate_table.t5 exists
table force_replicate_table.t6 not exists for 1-th check, retry later
table force_replicate_table.t6 not exists for 2-th check, retry later
table force_replicate_table.t6 not exists for 3-th check, retry later
table force_replicate_table.t6 not exists for 4-th check, retry later
table force_replicate_table.t6 exists
check_data_subset force_replicate_table.t0 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t1 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t2 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t3 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t4 127.0.0.1 4000 127.0.0.1 3306
<<< Run all test success >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-release-7.5-pull_cdc_integration_storage_test-364/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
run task successfully
check_data_subset force_replicate_table.t5 127.0.0.1 4000 127.0.0.1 3306
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
run task successfully
check_data_subset force_replicate_table.t6 127.0.0.1 4000 127.0.0.1 3306
id=7,a=NULL doesn't exist in downstream table force_replicate_table.t6
run task failed 1-th time, retry later
check_data_subset force_replicate_table.t6 127.0.0.1 4000 127.0.0.1 3306
id=7,a=NULL doesn't exist in downstream table force_replicate_table.t6
run task failed 2-th time, retry later
check_data_subset force_replicate_table.t6 127.0.0.1 4000 127.0.0.1 3306
id=19,a=NULL doesn't exist in downstream table force_replicate_table.t6
run task failed 3-th time, retry later
check_data_subset force_replicate_table.t6 127.0.0.1 4000 127.0.0.1 3306
id=7,a=NULL doesn't exist in downstream table force_replicate_table.t6
run task failed 4-th time, retry later
check_data_subset force_replicate_table.t6 127.0.0.1 4000 127.0.0.1 3306
run task successfully
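check_data_subset is retried here because with force_replicate enabled on tables without a usable unique key, the downstream only eventually contains every upstream row, so a row such as id=7,a=NULL can lag behind. A hypothetical sketch of such a subset check (the column names id and a, the SQL, and the root/no-password login are assumptions; the real helper may differ):

    # Hypothetical row-subset check in the spirit of check_data_subset:
    # every (id, a) row upstream must also exist downstream.
    check_data_subset() {
        local table=$1 up_host=$2 up_port=$3 down_host=$4 down_port=$5
        local id a cnt
        while read -r id a; do
            # NULL-safe comparison: -N prints NULL literally, and a <=> NULL is valid SQL.
            cnt=$(mysql -h"$down_host" -P"$down_port" -uroot -N \
                  -e "SELECT COUNT(*) FROM $table WHERE id = $id AND a <=> $a")
            if [ "$cnt" -eq 0 ]; then
                echo "id=$id,a=$a doesn't exist in downstream table $table" && return 1
            fi
        done < <(mysql -h"$up_host" -P"$up_port" -uroot -N -e "SELECT id, a FROM $table")
    }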
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Fri May 17 19:42:58 CST 2024] <<<<<< run test case force_replicate_table success! >>>>>>
<<< Run all test success >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-release-7.5-pull_cdc_integration_storage_test-364/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   221  100   221    0     0   2719      0 --:--:-- --:--:-- --:--:--  2728
+ synced_status='{"synced":true,"sink_checkpoint_ts":"2024-05-17 19:43:40.505","puller_resolved_ts":"2024-05-17 19:43:32.505","last_synced_ts":"2024-05-17 19:41:23.605","now_ts":"2024-05-17 19:43:40.000","info":"Data syncing is finished"}'
++ echo '{"synced":true,"sink_checkpoint_ts":"2024-05-17' '19:43:40.505","puller_resolved_ts":"2024-05-17' '19:43:32.505","last_synced_ts":"2024-05-17' '19:41:23.605","now_ts":"2024-05-17' '19:43:40.000","info":"Data' syncing is 'finished"}'
++ jq .synced
+ status=true
+ '[' true '!=' true ']'
+ kill_pd
++ ps aux
++ grep pd-server
++ grep /tmp/tidb_cdc_test/synced_status
+ info='jenkins    16225  7.5  0.0 13978676 140304 ?     Sl   19:41   0:11 pd-server --advertise-client-urls http://127.0.0.1:2379 --client-urls http://0.0.0.0:2379 --advertise-peer-urls http://127.0.0.1:2380 --peer-urls http://0.0.0.0:2380 --config /tmp/tidb_cdc_test/synced_status/pd-config.toml --log-file /tmp/tidb_cdc_test/synced_status/pd1.log --data-dir /tmp/tidb_cdc_test/synced_status/pd1 --name=pd1 --initial-cluster=pd1=http://127.0.0.1:2380
jenkins    16283  5.3  0.0 14174964 136852 ?     Sl   19:41   0:08 pd-server --advertise-client-urls http://127.0.0.1:2479 --client-urls http://0.0.0.0:2479 --advertise-peer-urls http://127.0.0.1:2480 --peer-urls http://0.0.0.0:2480 --config /tmp/tidb_cdc_test/synced_status/pd-config.toml --log-file /tmp/tidb_cdc_test/synced_status/down_pd.log --data-dir /tmp/tidb_cdc_test/synced_status/down_pd'
++ ps aux
++ grep pd-server
++ grep /tmp/tidb_cdc_test/synced_status
++ awk '{print $2}'
++ xargs kill -9
+ sleep 20
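kill_pd, per the trace above, locates every pd-server whose command line references this test's working directory and sends SIGKILL; the 20-second sleep then lets the cluster notice the outage. Condensed from the trace:

    # kill_pd, condensed from the trace above: SIGKILL every pd-server under this workdir.
    kill_pd() {
        ps aux | grep pd-server | grep /tmp/tidb_cdc_test/synced_status |
            awk '{print $2}' | xargs kill -9
    }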
{"level":"warn","ts":1715946228.5544808,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0014f6e00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1715946228.5545523,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1715946228.5659175,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00173f500/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1715946228.5659833,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1715946228.5673175,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0013addc0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":1715946228.567373,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2024-05-17T19:43:53.355764+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f0e700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:43:53.356334+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e83c00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:43:53.42954+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e8c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:43:59.356745+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f0e700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:43:59.357343+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e83c00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:43:59.430815+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e8c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0{"level":"warn","ts":"2024-05-17T19:44:05.358452+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e83c00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:44:05.358615+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f0e700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:44:05.431554+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e8c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0{"level":"warn","ts":"2024-05-17T19:44:11.359353+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e83c00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:44:11.359859+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f0e700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:44:11.432524+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e8c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:11 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:12 --:--:--     0{"level":"warn","ts":"2024-05-17T19:44:13.347176+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e83c00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":"2024-05-17T19:44:13.347237+0800","logger":"etcd-client","caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2024-05-17T19:44:13.347389+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f0e700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":"2024-05-17T19:44:13.347425+0800","logger":"etcd-client","caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":"2024-05-17T19:44:13.423777+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e8c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":"2024-05-17T19:44:13.42382+0800","logger":"etcd-client","caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}

  0     0    0     0    0     0      0      0 --:--:--  0:00:13 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:14 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:15 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:16 --:--:--     0{"level":"warn","ts":"2024-05-17T19:44:17.360001+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e83c00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:44:17.361052+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f0e700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:44:17.433869+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e8c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:17 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:18 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:20 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:21 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:22 --:--:--     0{"level":"warn","ts":"2024-05-17T19:44:23.360334+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e83c00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:44:23.36187+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f0e700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:44:23.434519+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e8c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"warn","ts":1715946263.5560474,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0014f6e00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1715946263.5560808,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1715946263.56679,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00173f500/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1715946263.5668244,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1715946263.5676868,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0013addc0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":1715946263.5677164,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}

  0     0    0     0    0     0      0      0 --:--:--  0:00:23 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:24 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:25 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:26 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:27 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:28 --:--:--     0{"level":"warn","ts":"2024-05-17T19:44:29.361044+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e83c00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:44:29.362573+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f0e700/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-17T19:44:29.435461+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000e9e8c0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}

  0     0    0     0    0     0      0      0 --:--:--  0:00:29 --:--:--     0
100   135  100   135    0     0      4      0  0:00:33  0:00:30  0:00:03    27
100   135  100   135    0     0      4      0  0:00:33  0:00:30  0:00:03    33
+ synced_status='{
    "error_msg": "[CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded",
    "error_code": "CDC:ErrPDEtcdAPIError"
}'
++ jq -r .error_code
++ echo '{' '"error_msg":' '"[CDC:ErrPDEtcdAPIError]etcd' api call error: context deadline 'exceeded",' '"error_code":' '"CDC:ErrPDEtcdAPIError"' '}'
+ error_code=CDC:ErrPDEtcdAPIError
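With PD killed, the synced API can no longer reach etcd and the test expects the CDC:ErrPDEtcdAPIError code. The comparison that presumably follows the extraction above would look like this (the exact expected-code check is an assumption; only the curl and jq steps appear in the trace):

    # Assumed follow-up to the error_code extraction traced above.
    synced_status=$(curl -s -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced)
    error_code=$(echo "$synced_status" | jq -r .error_code)
    if [ "$error_code" != 'CDC:ErrPDEtcdAPIError' ]; then
        echo "expected CDC:ErrPDEtcdAPIError, got: $error_code" && exit 1
    fi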
+ cleanup_process cdc.test
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
+ stop_tidb_cluster
Aborted by Jenkins Admin
Sending interrupt signal to process
Killing processes
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
kill finished with exit code 0
++ stop_tidb_cluster
/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/synced_status/run.sh: line 1: 18645 Terminated              stop_tidb_cluster
/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_storage_test/tiflow/tests/integration_tests/synced_status/run.sh: line 1: 18705 Terminated              stop_tidb_cluster
script returned exit code 143
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G09'
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 323382b3-05f0-44c4-81f9-d2875852b5a5
Finished: ABORTED